
# Inpainting using diffusion models

## Brief

This project implements the inpainting task on the MiniPlaces dataset, building on the Palette: Image-to-Image Diffusion Models paper and its GitHub repository. In this project we perform diffusion using the following architectures:

  • Simple UNet (baseline)
  • ResBlocks with Group normalisation and attention blocks
  • ConvNeXt blocks
  • Palette UNet (state-of-the-art) from Palette repository
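To make the "ResBlocks with Group normalisation and attention blocks" variant concrete, here is a minimal PyTorch sketch of such a block pair. The class and parameter names are our own illustrative choices, not the project's actual code:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block with GroupNorm and SiLU, in the spirit of the
    optimised-ResBlocks variant (illustrative sketch)."""
    def __init__(self, channels: int, groups: int = 8):
        super().__init__()
        self.block = nn.Sequential(
            nn.GroupNorm(groups, channels),
            nn.SiLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.GroupNorm(groups, channels),
            nn.SiLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps gradients flowing through deep UNets.
        return x + self.block(x)

class AttentionBlock(nn.Module):
    """Self-attention over spatial positions (illustrative sketch)."""
    def __init__(self, channels: int, groups: int = 8, heads: int = 4):
        super().__init__()
        self.norm = nn.GroupNorm(groups, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads=heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Flatten spatial dims so each pixel attends to every other pixel.
        y = self.norm(x).flatten(2).transpose(1, 2)  # (b, h*w, c)
        y, _ = self.attn(y, y, y)
        return x + y.transpose(1, 2).reshape(b, c, h, w)
```

Both blocks preserve the input shape, so they can be stacked freely at any UNet resolution.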

## Results

After training on 100,000 samples of the MiniPlaces dataset, we obtained the results shown in the following table:

| Network | IS (+) | FID (−) | PSNR (+) | SSIM (+) |
| --- | --- | --- | --- | --- |
| Baseline (Simple U-Net) | 14.397 | 68.360 | 21.340 | 0.853 |
| ConvNeXt blocks | 14.599 | 99.085 | 21.000 | 0.847 |
| Optimised ResBlocks | 15.050 | 44.046 | 21.218 | 0.854 |
| Palette U-Net | 15.054 | 36.118 | 21.25 | 0.861 |
| Truth (upper bound) | 15.893 | 0 | | 1 |
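As a reminder of how one of these metrics behaves, PSNR is a standard pixel-level measure defined as 10·log10(MAX²/MSE); it is unbounded for identical images, which is why the ground-truth row has no finite PSNR. A minimal sketch, assuming images scaled to [0, 1]:

```python
import numpy as np

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher is better."""
    mse = float(np.mean((pred - target) ** 2))
    if mse == 0.0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)
```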

## Visuals

Results after 100 epochs (~10 million iterations):


## Usage

### Environment

```shell
pip install -r requirements.txt
```

For further details, please follow the guidelines in the Palette repository.
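For intuition on how Palette-style inpainting conditions the denoiser: the known (unmasked) pixels are fed to the UNet alongside the current noisy estimate, typically by channel-wise concatenation. A minimal sketch, assuming `mask` is 1 where pixels are known:

```python
import torch

def make_inpainting_input(image: torch.Tensor,
                          mask: torch.Tensor,
                          noisy: torch.Tensor) -> torch.Tensor:
    """Build the UNet input for Palette-style inpainting (sketch).
    image: (B, C, H, W) clean image; mask: (B, 1, H, W), 1 = known pixel;
    noisy: (B, C, H, W) current noisy sample from the diffusion process."""
    cond = image * mask                     # keep only the known regions
    return torch.cat([cond, noisy], dim=1)  # (B, 2C, H, W) conditioned input
```

The UNet's first convolution simply takes `2C` input channels, so conditioning adds almost no architectural complexity.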

## Acknowledgements

Our work builds on the following: