We use data from a mental health survey to explore factors that may contribute to an individual experiencing depression. A model will be developed to predict whether a person is undergoing depression and, using ChatGPT, to suggest activities for improving their mental health.
├── LICENSE <- Open-source license if one is chosen
├── Makefile <- Makefile with convenience commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- A default mkdocs project; see www.mkdocs.org for details
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── pyproject.toml <- Project configuration file with package metadata for
│ health and configuration for tools like black
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, CSV, etc.
│ ├── figures <- Generated graphics and figures to be used in reporting
│ └── experiment <- Per-experiment analysis as HTML, PDF, CSV (e.g. report1.csv)
│   └── figures <- Generated graphics and figures from an experiment
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── setup.cfg <- Configuration file for flake8
│
└── health <- Source code for use in this project.
│
├── __init__.py <- Makes health a Python module
│
├── config.py <- Store useful variables and configuration
│
├── dataset.py <- Scripts to download or generate data
│
├── features.py <- Code to create features for modeling
│
├── modeling
│ ├── __init__.py
│ ├── predict.py <- Code to run model inference with trained models
│ └── train.py <- Code to train models
│
└── plots.py <- Code to create visualizations
- Clone the GitHub repo, or download the zip and unzip it (command sketches for these steps follow this list).
- Create the `data` folder and its subfolders at the top level, as per the hierarchy shown above.
- Set up a virtual environment for the project.
- Install the packages with `pip install -r requirements.txt`.
- Download the files from https://www.kaggle.com/competitions/playground-series-s4e11/ and place them in the `data/external` folder, keeping the same naming convention.
- Run the `1.01.gij.prepare` notebook to generate the data splits.
- Run the `1.02.gij.clean` notebook to clean the data and generate its output.
- Run the `2.01.gij.eda` notebook to generate reports and visualisations.
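A minimal sketch of the clone and environment setup, assuming a Unix-like shell with Python 3 and `git` available (the repository URL is a placeholder):

```bash
# Clone the repository (placeholder URL) and enter it
git clone https://github.com/<your-username>/health.git
cd health

# Create the data folders expected by the hierarchy above
mkdir -p data/external data/interim data/processed data/raw

# Create and activate a virtual environment, then install dependencies
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```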
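The competition data can also be fetched from the command line if the Kaggle CLI is installed and configured with an API token (an assumption; you must have accepted the competition rules on Kaggle first). Otherwise, download the files manually from the URL above:

```bash
# Download the competition files into data/external and unzip them there
kaggle competitions download -c playground-series-s4e11 -p data/external
unzip data/external/playground-series-s4e11.zip -d data/external
```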
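The notebooks can be run interactively in Jupyter, or executed headlessly in order with `nbconvert`, as sketched below (the exact `.ipynb` filenames are assumed from the names listed above):

```bash
# Execute the notebooks in pipeline order, saving outputs in place
jupyter nbconvert --to notebook --execute --inplace notebooks/1.01.gij.prepare.ipynb
jupyter nbconvert --to notebook --execute --inplace notebooks/1.02.gij.clean.ipynb
jupyter nbconvert --to notebook --execute --inplace notebooks/2.01.gij.eda.ipynb
```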