The implementation of the paper Interactive Separation Network For Image Inpainting.
To download this code, run:
git clone https://github.com/GuardSkill/Large-Scale-Feature-Inpainting.git
- Python 3.6
- PyTorch >= 1.0
- NVIDIA GPU + CUDA cuDNN
- Clone this repo:
git clone https://github.com/GuardSkill/Large-Scale-Feature-Inpainting.git
cd Large-Scale-Feature-Inpainting
- Install PyTorch.
- Install python requirements:
pip3 install -r requirements.txt
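Before moving on, it may help to confirm that the environment meets the requirements above. The snippet below is only an illustrative check, not part of the repository:

```python
# Quick environment sanity check (illustrative; not part of this repo).
import torch

print("PyTorch version:", torch.__version__)         # must be >= 1.0
print("CUDA available:", torch.cuda.is_available())  # an NVIDIA GPU with CUDA is required
```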
We use the Places2 and CelebA datasets. To train a model on the full dataset, download the datasets from their official websites.
After downloading, run scripts/flist.py to generate the train, test, and validation file lists for images or masks (a minimal sketch of such a script is shown after the commands below). For example, to generate the file lists for the Places2 dataset, run:
mkdir datasets
python3 ./scripts/flist.py --path [path_to_places2_train_set] --output ./datasets/places2_train.flist
python3 ./scripts/flist.py --path [path_to_places2_validation_set] --output ./datasets/places2_val.flist
python3 ./scripts/flist.py --path [path_to_places2_test_set] --output ./datasets/places2_test.flist
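For reference, a minimal file-list generator in the spirit of scripts/flist.py could look like the sketch below; the actual script in the repository may differ, but the --path/--output arguments match the commands above:

```python
# Minimal sketch of a file-list generator (the real scripts/flist.py may differ).
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument('--path', type=str, help='path to the image directory')
parser.add_argument('--output', type=str, help='path of the output .flist file')
args = parser.parse_args()

EXTENSIONS = {'.jpg', '.jpeg', '.png'}
images = []
for root, _, files in os.walk(args.path):
    for name in files:
        if os.path.splitext(name)[1].lower() in EXTENSIONS:
            images.append(os.path.join(root, name))

images.sort()
with open(args.output, 'w') as f:
    f.write('\n'.join(images))
```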
We also provide a function to generate the CelebA file lists using the official partition file. To generate the train, validation, and test file lists for the CelebA dataset, run the command below (a sketch of the partition-file logic follows it):
python3 ./scripts/flist.py --path [path_to_celeba_dataset] --celeba [path_to_celeba_partition_file]
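The official partition file (list_eval_partition.txt) assigns each image to a split: 0 for train, 1 for validation, 2 for test. A rough sketch of the splitting logic, assuming that format (the repository's implementation may differ):

```python
# Sketch: split CelebA using the official partition file (illustrative only).
# Each line of list_eval_partition.txt looks like "000001.jpg 0",
# where 0 = train, 1 = validation, 2 = test.
import os

def split_celeba(image_dir, partition_file, out_dir='./datasets'):
    splits = {'0': [], '1': [], '2': []}
    with open(partition_file) as f:
        for line in f:
            name, split = line.split()
            splits[split].append(os.path.join(image_dir, name))
    names = {'0': 'celeba_train.flist', '1': 'celeba_val.flist', '2': 'celeba_test.flist'}
    for split, flist in names.items():
        with open(os.path.join(out_dir, flist), 'w') as f:
            f.write('\n'.join(splits[split]))
```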
Our model is trained on a noised irregular mask dataset (.jpg format, where 0 represents the invalid region), which can be downloaded from the Chinese NetDisk 坚果云. These JPEG masks are derived from the Irregular Mask Dataset provided by Liu et al.
We additionally provide code for dividing the mask maps into 4 classes according to the proportion of their corrupted region.
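As an illustration of the idea, the sketch below buckets a mask by the proportion of its corrupted area; the class boundaries are hypothetical, and it assumes the convention above (dark pixels, i.e. 0, mark the invalid region):

```python
# Illustrative 4-way mask classification (class boundaries are hypothetical).
import numpy as np
from PIL import Image

def mask_class(mask_path, threshold=128):
    mask = np.array(Image.open(mask_path).convert('L'))
    ratio = (mask < threshold).mean()  # fraction of dark (invalid) pixels
    if ratio <= 0.2:
        return 0
    if ratio <= 0.3:
        return 1
    if ratio <= 0.4:
        return 2
    return 3
```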
Download the pre-trained models using the following links and copy them under the ./checkpoints directory.
Pretrained on Places2: mega | Google Drive | 坚果云
Pretrained on CelebA: mega | Google Drive
To train the model, create a config.yaml file similar to the example config file and copy it under your checkpoints directory (a minimal example is sketched below). Read the configuration guide for more information on model configuration.
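A minimal config.yaml could look like the sketch below; defaults are taken from the configuration tables at the end of this document, and all paths are placeholders:

```yaml
# Illustrative minimal config.yaml (defaults from the tables below; paths are placeholders).
MODE: 1                  # 1: train
MASK: 3                  # 3: external mask dataset
SEED: 10                 # arbitrary placeholder seed
GPU: [0]

TRAIN_FLIST: ./datasets/places2_train.flist
VAL_FLIST: ./datasets/places2_val.flist
TRAIN_MASK_FLIST: ./datasets/mask_train.flist   # hypothetical mask flist path
VAL_MASK_FLIST: ./datasets/mask_val.flist       # hypothetical mask flist path

LR: 0.0001
BATCH_SIZE: 8
INPUT_SIZE: 256
MAX_ITERS: 2e6
```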
To train the ISNet:
python3 train.py --checkpoints [path to checkpoints]
To test the model, create a config.yaml file similar to the example config file and copy it under your checkpoints directory. Read the configuration guide for more information on model configuration.
You can evaluate the test dataset whose file-list path is recorded in config.yaml with this command:
python3 test.py \
--path ./checkpoints/Celeba
You can also test the model on specific images and masks: provide the input images and corresponding binary masks, and make sure each mask has the same resolution as its image (a quick check is sketched after the command below). To test the model:
python3 test.py \
--checkpoints [path to checkpoints] \
--input [path to input directory or file] \
--mask [path to masks directory or mask file] \
--output [path to the output directory]
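A quick, illustrative way to check (and fix) a mismatched image/mask pair; the file names here are placeholders:

```python
# Illustrative resolution check for one image/mask pair (not part of this repo).
from PIL import Image

image = Image.open('input.png')             # placeholder paths
mask = Image.open('mask.png').convert('L')

if mask.size != image.size:
    # Nearest-neighbor keeps the mask binary after resizing.
    mask = mask.resize(image.size, Image.NEAREST)
    mask.save('mask.png')
```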
We provide some test examples under ./examples
directory. Please download the pre-trained models and run:
python3 test.py \
--checkpoints ./checkpoints/places2 \
--input ./examples/places2/images \
--mask ./examples/places2/masks \
--output ./examples/places2/results
This script inpaints all images in ./examples/places2/images using their corresponding masks in the ./examples/places2/masks directory and saves the results in the ./examples/places2/results directory.
The model configuration is stored in a config.yaml file under your checkpoints directory. The following tables document all the options available in the configuration file:
Licensed under a Creative Commons Attribution-NonCommercial 4.0 International license.
Except where otherwise noted, this content is published under a CC BY-NC license, which means that you can copy, remix, transform and build upon the content as long as you do not use the material for commercial purposes and give appropriate credit and provide a link to the license.
Much of the logic code and this README come from Edge-Connect; we sincerely thank the authors for their contribution.
If you use this code for your research, please cite our paper Interactive Separation Network For Image Inpainting:
@inproceedings{li2020interactive,
title={Interactive Separation Network For Image Inpainting},
author={Li, Siyuan and Lu, Lu and Zhang, Zhiqiang and Cheng, Xin and Xu, Kepeng and Yu, Wenxin and He, Gang and Zhou, Jinjia and Yang, Zhuo},
booktitle={2020 IEEE International Conference on Image Processing (ICIP)},
pages={1008--1012},
year={2020},
organization={IEEE}
}
Option | Description |
---|---|
MODE | 1: train, 2: test, 3: eval |
MASK | 1: random block, 2: half, 3: external, 4: external + random block, 5: external + random block + half |
SEED | random number generator seed |
GPU | list of gpu ids, comma separated list e.g. [0,1] |
DEBUG | 0: no debug, 1: debugging mode |
VERBOSE | 0: no verbose, 1: output detailed statistics in the output console |
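For intuition, the two built-in mask modes could be generated roughly as follows; this is a sketch under assumed conventions (1 marks the region to inpaint; block size and masked half are guesses), not the repository's actual code:

```python
# Rough sketch of the built-in mask types (conventions assumed; repo code may differ).
import numpy as np

def random_block_mask(h, w):                # MASK = 1
    mask = np.zeros((h, w), dtype=np.uint8)
    bh, bw = h // 2, w // 2                 # block size is an assumption
    top = np.random.randint(0, h - bh + 1)
    left = np.random.randint(0, w - bw + 1)
    mask[top:top + bh, left:left + bw] = 1  # 1 marks the region to inpaint
    return mask

def half_mask(h, w):                        # MASK = 2
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[:, :w // 2] = 1                    # which half is masked is an assumption
    return mask
```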
Option | Description |
---|---|
TRAIN_FLIST | text file containing training set files list |
VAL_FLIST | text file containing validation set files list |
TEST_FLIST | text file containing test set files list |
TRAIN_MASK_FLIST | text file containing training set masks files list (only with MASK=3, 4, 5) |
VAL_MASK_FLIST | text file containing validation set masks files list (only with MASK=3, 4, 5) |
TEST_MASK_FLIST | text file containing test set masks files list (only with MASK=3, 4, 5) |
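Each *_FLIST option points to a plain-text file with one image path per line, as produced by scripts/flist.py (the paths below are placeholders):

```
/data/places2/train/00000001.jpg
/data/places2/train/00000002.jpg
```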
Option | Default | Description |
---|---|---|
LR | 0.0001 | learning rate |
D2G_LR | 0.1 | discriminator/generator learning rate ratio |
BETA1 | 0.0 | adam optimizer beta1 |
BETA2 | 0.9 | adam optimizer beta2 |
BATCH_SIZE | 8 | input batch size |
INPUT_SIZE | 256 | input image size for training (0 for original size)
MAX_ITERS | 2e6 | maximum number of iterations to train the model |
MAX_STEPS | 5000 | maximum number of steps per epoch
MAX_EPOCHES | 100 | maximum number of epochs
L1_LOSS_WEIGHT | 1 | l1 loss weight |
FM_LOSS_WEIGHT | 10 | feature-matching loss weight |
STYLE_LOSS_WEIGHT | 1 | style loss weight |
CONTENT_LOSS_WEIGHT | 1 | perceptual loss weight |
INPAINT_ADV_LOSS_WEIGHT | 0.01 | adversarial loss weight |
GAN_LOSS | nsgan | nsgan: non-saturating gan, lsgan: least squares GAN, hinge: hinge loss GAN |
GAN_POOL_SIZE | 0 | fake images pool size |
SAVE_INTERVAL | 1000 | how many iterations to wait before saving model (0: never) |
EVAL_INTERVAL | 0 | how many iterations to wait before evaluating the model (0: never) |
SAMPLE_INTERVAL | 1000 | how many iterations to wait before saving sample (0: never) |
SAMPLE_SIZE | 12 | number of images to sample at each sampling interval
EVAL_INTERVAL | 3 | number of sample intervals between evaluations (0: never; 36000 for Places2)
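As a refresher on the GAN_LOSS choices, the generator-side objectives differ roughly as follows; this is a sketch of the standard formulations, not necessarily the repository's exact implementation:

```python
# Standard generator-side GAN losses matching the GAN_LOSS option (sketch).
import torch
import torch.nn.functional as F

def generator_gan_loss(d_fake, kind='nsgan'):
    """d_fake: raw discriminator logits for generated images."""
    if kind == 'nsgan':   # non-saturating GAN: -log D(G(z))
        return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    if kind == 'lsgan':   # least-squares GAN: (D(G(z)) - 1)^2
        return torch.mean((d_fake - 1) ** 2)
    if kind == 'hinge':   # hinge GAN (generator side): -D(G(z))
        return -d_fake.mean()
    raise ValueError('unknown GAN_LOSS: ' + kind)
```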