
LCSC

[website (on the way)] [paper (arXiv)] [code]

This is the official code of the paper "Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better".

If you find this repository or our paper useful, please cite:

@misc{liu2024linear,
      title={Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better}, 
      author={Enshu Liu and Junyi Zhu and Zinan Lin and Xuefei Ning and Matthew B. Blaschko and Sergey Yekhanin and Shengen Yan and Guohao Dai and Huazhong Yang and Yu Wang},
      year={2024},
      eprint={2404.02241},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Enhanced Models

We have released some of the model weights enhanced by LCSC. The download links are listed below:

Dependencies

To install all packages in this codebase along with their dependencies, run

pip install -e .

To install with Docker, run the following commands:

cd docker && make build && make run

Model training

We recommend using the original codebases to train models and saving checkpoints for LCSC during the training process. In our paper, we use the official code of Consistency Models to conduct experiments with consistency distillation and consistency training. For diffusion models, we use the official code of DDIM and iDDPM to train models on CIFAR-10 and ImageNet-64, respectively.
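
Applying LCSC after training reduces to a weighted average of the saved checkpoints' weights. The snippet below is a minimal sketch of this merging step (not the released code), assuming each checkpoint is saved as a flat PyTorch state dict; the file names and coefficient values are hypothetical, with the real coefficients coming from the search described in the next section.

import torch

# Hypothetical checkpoint files saved during training, with illustrative
# combination coefficients (the real values come from the coefficient search).
ckpt_paths = ["ckpt_080000.pt", "ckpt_090000.pt", "ckpt_100000.pt"]
coeffs = [0.3, -0.2, 0.9]

merged = None
for path, alpha in zip(ckpt_paths, coeffs):
    # Assumes each file stores a flat parameter state dict; real training
    # code may nest it under a key such as "model" or "ema".
    state = torch.load(path, map_location="cpu")
    if merged is None:
        merged = {k: alpha * v.float() for k, v in state.items()}
    else:
        for k, v in state.items():
            merged[k] += alpha * v.float()

torch.save(merged, "lcsc_merged.pt")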

Search the combination coefficients

We are still reorganizing our evolutionary search code and will release it soon.
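
Since the official search code is not yet released, the following is only a schematic sketch of an evolutionary search over combination coefficients, not the authors' implementation. It assumes a callback `evaluate_fid(coeffs)` that merges the checkpoints with the given coefficients, generates a small batch of samples, and returns their FID; the population size, mutation scale, and the sum-to-one normalization are all illustrative choices, and the paper's exact search space may differ.

import numpy as np

def evolutionary_search(n_ckpts, evaluate_fid, pop_size=16, n_iters=100,
                        sigma=0.05, seed=0):
    """Schematic evolutionary search for LCSC combination coefficients."""
    rng = np.random.default_rng(seed)
    # Start from the uniform average of the checkpoints as a baseline.
    population = [np.full(n_ckpts, 1.0 / n_ckpts) for _ in range(pop_size)]
    scores = [evaluate_fid(c) for c in population]
    for _ in range(n_iters):
        # Mutate the current best candidate with Gaussian noise.
        best = population[int(np.argmin(scores))]
        child = best + sigma * rng.standard_normal(n_ckpts)
        child /= child.sum()  # one possible constraint: coefficients sum to 1
        # Replace the worst member if the child improves on it.
        worst = int(np.argmax(scores))
        child_score = evaluate_fid(child)
        if child_score < scores[worst]:
            population[worst], scores[worst] = child, child_score
    return population[int(np.argmin(scores))]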

Model sampling

We provide sampling examples for EDM, consistency distillation, and consistency training models, covering both single-step and multistep generation, in scripts/sample.sh.
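
For context, multistep generation with a consistency model alternates denoising with partial re-noising. The sketch below illustrates the standard multistep consistency sampling procedure from the Consistency Models paper, where `f` stands in for the (possibly LCSC-merged) consistency model; the intermediate time points are illustrative values on the usual EDM time scale, not settings from our scripts.

import torch

@torch.no_grad()
def multistep_consistency_sample(f, shape, t_max=80.0, eps=0.002,
                                 mid_ts=(20.0, 5.0), device="cpu"):
    """Standard multistep consistency sampling (Song et al., 2023).

    f(x, t) maps a noisy sample at time t directly to an estimate of the
    clean sample; mid_ts are intermediate time points (illustrative).
    """
    x = torch.randn(shape, device=device) * t_max  # x ~ N(0, t_max^2 I)
    x = f(x, t_max)                                # one-step estimate
    for t in mid_ts:
        z = torch.randn_like(x)
        x_t = x + (t**2 - eps**2) ** 0.5 * z       # partially re-noise
        x = f(x_t, t)                              # denoise again
    return x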

Evaluations

We use FID to compare the different generative models. Our FID calculation is implemented in evaluations/fid_score.py based on pytorch-fid. Alternatively, one can use the implementation from the original Consistency Models codebase in evaluations/evaluator.py, which may yield slightly different results. Our sampling scripts automatically evaluate the generated samples stored in .npz (NumPy) files once sampling finishes.
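
Both implementations compute the same underlying quantity: the Fréchet distance between Gaussian fits of Inception features of real and generated images. As a reference, here is a minimal NumPy/SciPy sketch of that formula, assuming the feature means and covariances have already been extracted:

import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2, eps=1e-6):
    """FID between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2)).
    """
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if not np.isfinite(covmean).all():
        # Regularize near-singular covariances, as pytorch-fid does.
        offset = np.eye(sigma1.shape[0]) * eps
        covmean, _ = linalg.sqrtm((sigma1 + offset) @ (sigma2 + offset), disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerics
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))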

Acknowledgement

This is joint work by the NICS-EFC Lab (Tsinghua University), Infinigence-AI (Beijing, China), the ESAT-PSI Lab (KU Leuven), Microsoft Research, and Shanghai Jiao Tong University.

All computational resources used in this work are supported by Infinigence-AI.

Our search and sampling implementations are developed based on Consistency Models, DDIM, DPM-Solver, and Improved DDPM. We thank the authors of these valuable works.

Contact Us
