
# explore-diffusion

[CS726-2023] Programming assignment exploring diffusion models

## Getting started

Steps to get started with the code:

  1. Install Anaconda on your system (download from https://www.anaconda.com).
  2. Clone the GitHub repo into a convenient folder of your choice: `git clone https://github.com/ashutoshbsathe/explore-diffusion.git`.
  3. `cd explore-diffusion`.
  4. Run `conda env create --file environment.yaml`. This will set up all the required dependencies.
  5. Activate the environment with `source activate cs726-env` or `conda activate cs726-env`. You are done with the setup.

## Training your model

Once you have implemented your model in the `model.py` file, you can use the provided trainer in `train.py` to train it: `python train.py`.
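For orientation, the forward (noising) process that a DDPM is built on can be sketched in pure Python. This is only an illustration, not the repo's API: the linear schedule and the `lbeta`/`ubeta`/`n_steps` values mirror the hyperparameter names visible in the demo run directory, but everything else is an assumption.

```python
import math
import random

def linear_beta_schedule(lbeta, ubeta, n_steps):
    """Linearly spaced noise variances beta_1 .. beta_T."""
    return [lbeta + (ubeta - lbeta) * t / (n_steps - 1) for t in range(n_steps)]

def q_sample(x0, t, alpha_bars):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I),
    here for a single scalar coordinate."""
    abar = alpha_bars[t]
    noise = random.gauss(0.0, 1.0)
    return math.sqrt(abar) * x0 + math.sqrt(1.0 - abar) * noise

# Schedule matching the demo run's hyperparameters:
# lbeta=1e-5, ubeta=1.28e-2, n_steps=50.
betas = linear_beta_schedule(1e-5, 1.28e-2, 50)
alpha_bars, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

x_noisy = q_sample(2.0, 49, alpha_bars)  # fully noised sample of x0 = 2.0
```

The cumulative products `alpha_bars` shrink monotonically toward zero, which is what makes `x_noisy` drift from the data toward pure Gaussian noise as `t` grows.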

You can use various command-line arguments to tweak the number of epochs, batch size, etc.; check `train.py` for details. You can get the full list of available hyperparameters by running `python train.py -h`.

After training completes, you can find the checkpoint and hyperparameters under the `runs` directory. A demo directory structure is shown below:

*(screenshot: example `runs/` directory structure)*

Of interest are the `last.ckpt` and `hparams.yaml` files, which are used when evaluating the trained model.
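Each run directory's name encodes the hyperparameters it was trained with, e.g. `n_dim=3,n_steps=50,...` as in the demo run below. A small helper, purely illustrative and not part of the repo, can recover them from the name:

```python
def _cast(value):
    """Cast a 'key=value' string value to int/float where possible."""
    try:
        f = float(value)
    except ValueError:
        return value            # leave non-numeric values as strings
    if f.is_integer() and "." not in value and "e" not in value.lower():
        return int(f)           # e.g. "1024" -> 1024
    return f                    # e.g. "1.280e-02" -> 0.0128

def parse_run_name(name):
    """Split 'k1=v1,k2=v2,...' into a dict of hyperparameters."""
    return {k: _cast(v) for k, v in (pair.split("=") for pair in name.split(","))}

run = "n_dim=3,n_steps=50,lbeta=1.000e-05,ubeta=1.280e-02,batch_size=1024,n_epochs=500"
hp = parse_run_name(run)
```

This is convenient when scripting over many runs, since the same string appears in both the checkpoint and hparams paths.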

## Evaluating your trained model

Once the trained model is available, you can use `eval.py` to generate the metrics and visualizations. Refer to the command-line arguments to learn more. A demo run:

```sh
python eval.py --ckpt_path runs/n_dim=3,n_steps=50,lbeta=1.000e-05,ubeta=1.280e-02,batch_size=1024,n_epochs=500/last.ckpt \
               --hparams_path runs/n_dim=3,n_steps=50,lbeta=1.000e-05,ubeta=1.280e-02,batch_size=1024,n_epochs=500/lightning_logs/version_0/hparams.yaml \
               --eval_nll --vis_diffusion --vis_overlay
```

This evaluates the trained model on samples generated from 3 runs, using only the negative log likelihood (`--eval_nll`). It also generates neat visualizations of the diffusion process as GIF animations.
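To build intuition for what `--eval_nll` reports (without assuming anything about `eval.py`'s internals): the negative log likelihood of a scalar sample under a Gaussian is `0.5 * log(2 * pi * sigma^2) + (x - mu)^2 / (2 * sigma^2)`, and lower averages mean the model assigns higher probability to the samples. A minimal sketch:

```python
import math

def gaussian_nll(x, mu=0.0, sigma=1.0):
    """Negative log likelihood of a scalar x under N(mu, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (x - mu) ** 2 / (2 * sigma ** 2)

samples = [0.0, 1.0, -1.0]
avg_nll = sum(gaussian_nll(s) for s in samples) / len(samples)
# Lower average NLL means the model assigns higher probability to the samples.
```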

Example plot generated with `--vis_overlay`:

*(image: overlay of the original distribution and generated samples)*

Here, the yellow-magenta points represent the original distribution and the blue-purple points indicate samples generated from the trained DDPM.

Example animation produced with `--vis_diffusion`:

*(animation: GIF of the reverse diffusion process)*

Again, the yellow-magenta points represent the original distribution and the blue-purple points are samples from the trained DDPM. Notice how the blue-purple points move closer and closer to the original distribution as the reverse process progresses.
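The reverse process the animation depicts can be sketched in one dimension. The toy below makes several assumptions not drawn from the repo: a linear beta schedule, a point-mass target distribution, and an oracle standing in for the learned noise predictor (in the assignment, your network in `model.py` plays that role). Each ancestral sampling step moves the sample from pure noise toward the data:

```python
import math
import random

random.seed(0)

# Same linear schedule as the demo run: lbeta=1e-5, ubeta=1.28e-2, 50 steps.
betas = [1e-5 + (1.28e-2 - 1e-5) * t / 49 for t in range(50)]
alpha_bars, prod = [], 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

X0_TRUE = 2.0  # toy target: a point mass at 2.0

def predict_noise(x_t, t):
    """Oracle noise 'predictor' for the point-mass target; a trained
    network would fill this role in the real assignment."""
    return (x_t - math.sqrt(alpha_bars[t]) * X0_TRUE) / math.sqrt(1.0 - alpha_bars[t])

def reverse_step(x_t, t):
    """One DDPM ancestral sampling step: x_{t-1} given x_t."""
    beta, abar = betas[t], alpha_bars[t]
    eps = predict_noise(x_t, t)
    mean = (x_t - beta / math.sqrt(1.0 - abar) * eps) / math.sqrt(1.0 - beta)
    return mean + (math.sqrt(beta) * random.gauss(0.0, 1.0) if t > 0 else 0.0)

x = random.gauss(0.0, 1.0)      # start from pure noise
for t in reversed(range(50)):
    x = reverse_step(x, t)      # x drifts toward the target as t -> 0
```

With the oracle predictor the final sample lands on the target; with a learned predictor it lands approximately on the data distribution, which is exactly the drift the GIF visualizes.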

## Acknowledgements

Special thanks to Kanad Pardeshi for generating the `3d_sin_5_5` and `helix` distributions and for helping with the implementation of several evaluation metrics.
