My implementation of super-resolution is based on dual-path networks but is quite different from the original networks. The differences are as follows:
- I modified the structure of the dual-path blocks for faster training speed.
- I introduced a bottleneck layer to reduce dimensionality and a deconvolution layer to restore detail.
- I introduced a perceptual loss and a Gram loss computed in the feature space of VGG19 (see the loss sketch after this list).
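A minimal sketch of how such a perceptual/Gram loss pair can be computed once the generated and ground-truth images have been passed through VGG19. The Gram-matrix normalization and the feature-space MSE formulation here are common choices and assumptions, not necessarily the exact ones used in this code:

```python
import tensorflow as tf

def gram_matrix(feat):
    # feat: [batch, height, width, channels] VGG19 feature map
    _, h, w, c = feat.get_shape().as_list()
    flat = tf.reshape(feat, [-1, h * w, c])
    # Channel-by-channel feature correlations; dividing by h*w*c is one common normalization
    return tf.matmul(flat, flat, transpose_a=True) / float(h * w * c)

def perceptual_and_gram_loss(sr_feat, hr_feat):
    # sr_feat / hr_feat: VGG19 features of the super-resolved and ground-truth images
    perceptual = tf.reduce_mean(tf.square(sr_feat - hr_feat))  # MSE in feature space
    gram = tf.reduce_mean(tf.square(gram_matrix(sr_feat) - gram_matrix(hr_feat)))
    return perceptual, gram
```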
- TensorFlow >= 1.3.0
- SciPy >= 0.18
- GPU memory > 7 GB
First, you need to download the VGG19 model here for the loss-function calculation. Then move the downloaded file `imagenet-vgg-verydeep-19.mat` to the SRDPNs folder in this project.
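The weights are presumably read with SciPy (hence the requirement above). A minimal sketch of loading the file, where the exact keys and layer layout inside the `.mat` are assumptions based on the standard MatConvNet `imagenet-vgg-verydeep-19.mat` release:

```python
import scipy.io

# Load the pre-trained VGG19 weights exported from MatConvNet
# (path relative to the project root is an assumption)
vgg = scipy.io.loadmat('SRDPNs/imagenet-vgg-verydeep-19.mat')

# 'layers' holds one entry per VGG19 layer (conv / relu / pool / fc);
# the loss network is rebuilt in TensorFlow from these weights
layers = vgg['layers'][0]
print('number of VGG19 layers:', len(layers))
```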
Open `main.py` and change the data path to your own data, for example:
flags.DEFINE_string("testimg", "2.bmp", "Name of test image")
Execute `python main.py` for testing; the result will be saved in the sample directory. To train on your own dataset, put it into the Train directory, change the code to

flags.DEFINE_boolean("is_train", True, "True for training, False for testing")

and execute `python main.py` for training.
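As noted in the training details below, the initial learning rate is 0.0001 and decays by 0.98 every 1000 steps. A minimal TensorFlow 1.x sketch of wiring up that schedule; whether the decay is staircased and which optimizer the repository actually uses (Adam here) are assumptions:

```python
import tensorflow as tf

global_step = tf.Variable(0, trainable=False, name='global_step')

# Initial rate 1e-4, multiplied by 0.98 every 1000 steps (staircase is an assumption)
learning_rate = tf.train.exponential_decay(
    learning_rate=0.0001,
    global_step=global_step,
    decay_steps=1000,
    decay_rate=0.98,
    staircase=True)

# Adam is a placeholder choice; pass global_step to minimize() so the step counter advances
optimizer = tf.train.AdamOptimizer(learning_rate)
# train_op = optimizer.minimize(loss, global_step=global_step)
```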
After training for 100 epochs on 120 images, the network is well trained and generates high-resolution images. Training the model takes about 7 hours, and both training and testing are performed on a single NVIDIA 1080 Ti GPU. Empirically, I set the initial learning rate to 0.0001 with a decay rate of 0.98 every 1000 steps. The results are shown below.