
QuanTexSR

This is the official PyTorch code for the paper
Blind Image Super Resolution with Semantic-Aware Quantized Texture Prior
Chaofeng Chen*, Xinyu Shi*, Yipeng Qin, Xiaoming Li, Xiaoguang Han, Tao Yang, Shihui Guo
(* indicates equal contribution)

(Framework overview figure)

Update

  • 2022.03.02: Added OneDrive download link for pretrained weights.

Here are some example results on test images from BSRGAN and RealESRGAN.


Left: real images | Right: super-resolved images with scale factor 4

Dependencies and Installation

  • Ubuntu >= 18.04
  • CUDA >= 11.0
  • Other required packages in requirements.txt
# git clone this repository
git clone https://github.com/chaofengc/QuanTexSR.git
cd QuanTexSR

# create new anaconda env
conda create -n quantexsr python=3.8
conda activate quantexsr

# install python dependencies
pip3 install -r requirements.txt
python setup.py develop
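
After installation, you can optionally confirm that PyTorch sees your GPU. This is a quick sanity check, not part of the official instructions:

import torch

# should report a build against CUDA >= 11.0 and print True on a working setup
print(torch.__version__, torch.version.cuda)
print(torch.cuda.is_available())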

Quick Inference

Download the pretrained model (only the x4 model is provided for now) from the OneDrive link mentioned in the update above.

Test the model with the following script

python inference_quantexsr.py -w ./path/to/model/weight -i ./path/to/test/image[or folder]

Train the model

Preparation

Dataset

Please prepare the training and testing data following the descriptions in the main paper and supplementary material. In brief, you need to crop 512 x 512 high-resolution patches and generate the corresponding low-resolution patches with the degradation_bsrgan function provided by BSRGAN. The synthetic testing LR images are generated with the degradation_bsrgan_plus function for a fair comparison.
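
For illustration, here is a minimal sketch of one crop-and-degrade step. It is not the official preparation script: it assumes BSRGAN's utils/utils_blindsr.py is importable from your working directory, that degradation_bsrgan accepts an RGB image in [0, 1] with sf and lq_patchsize arguments, and prepare_pair is a hypothetical helper name:

import cv2
import numpy as np
from utils import utils_blindsr as blindsr  # from the BSRGAN repository

def prepare_pair(hr_path, patch_size=512, scale=4):
    """Crop a 512 x 512 HR patch and synthesize its LR counterpart."""
    # load as RGB in [0, 1]; assumes the image is at least patch_size on each side
    img = cv2.cvtColor(cv2.imread(hr_path), cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    h, w = img.shape[:2]
    top = np.random.randint(0, h - patch_size + 1)
    left = np.random.randint(0, w - patch_size + 1)
    hr = img[top:top + patch_size, left:left + patch_size]
    # degradation_bsrgan returns the degraded LR patch and the matching HR patch;
    # swap in degradation_bsrgan_plus when generating the synthetic test set
    lr, hr = blindsr.degradation_bsrgan(hr, sf=scale, lq_patchsize=patch_size // scale)
    return lr, hr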

Model preparation

Before training, you need to put the following pretrained models in experiments/pretrained_models and specify their paths in the corresponding option file.

  • HQ pretrain stage: pretrained semantic cluster codebook
  • LQ stage (SR model training): pretrained semantic-aware VQGAN and pretrained PSNR-oriented RRDB model
  • LPIPS weights for validation

The above models can be downloaded from BaiduNetDisk.

Train SR model

python basicsr/train.py -opt options/train_QuanTexSR_LQ_stage.yml

Model pretrain

If you want to pretrain your own VQGAN prior, we also provide the training instructions below.

Pretrain semantic codebook

The semantic-aware codebook is obtained from VGG19 features using a mini-batch version of K-means, optimized with Adam. The following script produces three levels of codebooks, from the relu3_4, relu4_4, and relu5_4 features; we use relu4_4 in this project.

python basicsr/train.py -opt options/train_QuanTexSR_semantic_cluster_stage.yml
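
For intuition, here is a minimal sketch of this clustering step. It is not the project's actual implementation: the codebook size and the data loader are placeholders, and relu4_4 is taken as index 26 of torchvision's vgg19().features:

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# frozen VGG19 feature extractor up to relu4_4
vgg = vgg19(pretrained=True).features[:27].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

num_codes, dim = 1024, 512          # placeholder codebook size; relu4_4 has 512 channels
codebook = torch.nn.Parameter(torch.randn(num_codes, dim))
optimizer = torch.optim.Adam([codebook], lr=1e-3)

for images in loader:               # `loader` is a placeholder yielding HQ training patches
    with torch.no_grad():
        feats = vgg(images)                         # B x 512 x H x W
        feats = feats.permute(0, 2, 3, 1).reshape(-1, dim)
    # mini-batch K-means step: assign each feature to its nearest code ...
    dists = torch.cdist(feats, codebook)            # (B*H*W) x num_codes
    nearest = dists.argmin(dim=1)
    # ... then pull the selected codes toward their assigned features with Adam
    loss = F.mse_loss(codebook[nearest], feats)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()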

Pretrain of semantic-aware VQGAN

python basicsr/train.py -opt options/train_QuanTexSR_HQ_pretrain_stage.yml
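
For reference, the quantization step at the heart of a VQGAN replaces each encoder feature vector with its nearest codebook entry and passes gradients through with a straight-through estimator. A generic sketch of that step (not this repo's exact module):

import torch

def quantize(z, codebook):
    """z: B x C x H x W encoder features; codebook: K x C learned codes."""
    b, c, h, w = z.shape
    flat = z.permute(0, 2, 3, 1).reshape(-1, c)          # (B*H*W) x C
    idx = torch.cdist(flat, codebook).argmin(dim=1)      # nearest code per position
    z_q = codebook[idx].view(b, h, w, c).permute(0, 3, 1, 2)
    # straight-through estimator: forward pass uses z_q, gradients flow back to z
    z_q = z + (z_q - z).detach()
    return z_q, idx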

Citation

@misc{chen2022quantexsr,
      author={Chaofeng Chen and Xinyu Shi and Yipeng Qin and Xiaoming Li and Xiaoguang Han and Tao Yang and Shihui Guo},
      title={Blind Image Super Resolution with Semantic-Aware Quantized Texture Prior}, 
      year={2022},
      eprint={2202.13142},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Acknowledgement

This project is based on BasicSR.
