This is the official PyTorch code for the paper
Blind Image Super Resolution with Semantic-Aware Quantized Texture Prior
Chaofeng Chen*, Xinyu Shi*, Yipeng Qin, Xiaoming Li, Xiaoguang Han, Tao Yang, Shihui Guo
(* indicates equal contribution)
- 2022.03.02: Add OneDrive download link for pretrained weights.
Here are some example results on test images from BSRGAN and RealESRGAN.
Left: real images | Right: super-resolved images with scale factor 4
- Ubuntu >= 18.04
- CUDA >= 11.0
- Other required packages listed in `requirements.txt`
```
# git clone this repository
git clone https://github.com/chaofengc/QuanTexSR.git
cd QuanTexSR

# create new anaconda env
conda create -n quantexsr python=3.8
source activate quantexsr

# install python dependencies
pip3 install -r requirements.txt
python setup.py develop
```
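Optionally, you can sanity-check the environment before moving on. This quick snippet (not part of the repository) just confirms that the installed PyTorch build can see a CUDA device:

```python
# Quick environment check: PyTorch version and CUDA visibility.
import torch

print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # True if the CUDA >= 11.0 setup is working
```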
Download the pretrained model (only the x4 model is provided for now) from one of the following links:
- BaiduNetdisk (extraction code: `qtsr`)
- OneDrive
Test the model with the following command:

```
python inference_quantexsr.py -w ./path/to/model/weight -i ./path/to/test/image[or folder]
```
Please prepare the training and testing data following the descriptions in the main paper and supplementary material. In brief, you need to crop 512 × 512 high-resolution patches and generate the low-resolution patches with the `degradation_bsrgan` function provided by BSRGAN, while the synthetic testing LR images are generated with the `degradation_bsrgan_plus` function for a fair comparison.
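For illustration, the data-generation step might look like the sketch below. It assumes BSRGAN's `utils/utils_blindsr.py` is importable and that `degradation_bsrgan` takes a float HWC image in [0, 1]; verify against the BSRGAN repository, as the exact signature and defaults may differ:

```python
# Sketch: generate a paired LQ/HQ patch with BSRGAN's degradation pipeline.
# Assumes BSRGAN's utils/utils_blindsr.py is on the Python path.
import cv2
import numpy as np
from utils import utils_blindsr as blindsr

# load a 512 x 512 high-resolution patch as RGB float in [0, 1]
hq = cv2.imread('hq_patch.png')[..., ::-1].astype(np.float32) / 255.

# returns a degraded LQ crop and the matching HQ crop; sf is the scale
# factor, lq_patchsize the LQ crop size (both values are illustrative)
lq, hq = blindsr.degradation_bsrgan(hq, sf=4, lq_patchsize=128)

cv2.imwrite('lq_patch.png', (lq[..., ::-1] * 255.).round().astype(np.uint8))
```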
Before training, you need to put the following pretrained models in `experiments/pretrained_models` and specify their paths in the corresponding option file.
- HQ pretrain stage: pretrained semantic cluster codebook
- LQ stage (SR model training): pretrained semantic-aware VQGAN and pretrained PSNR-oriented RRDB model
- LPIPS weights for validation

The above models can be downloaded from BaiduNetdisk.
Then train the SR model with:

```
python basicsr/train.py -opt options/train_QuanTexSR_LQ_stage.yml
```
In case you want to pretrain your own VQGAN prior, we also provide the training instructions below.
The semantic-aware codebook is obtained with VGG19 features using a mini-batch version of K-means, optimized with Adam. The script below produces three levels of codebooks from the `relu3_4`, `relu4_4`, and `relu5_4` features; we use `relu4_4` for this project.
```
python basicsr/train.py -opt options/train_QuanTexSR_semantic_cluster_stage.yml
```
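For intuition, the sketch below shows what a gradient-based, mini-batch K-means update can look like in PyTorch. The codebook size, feature dimension, and learning rate are illustrative assumptions, not the values used by the actual script:

```python
# Sketch: mini-batch K-means over VGG19 features, optimized with Adam.
import torch

num_codes, dim = 1024, 512                  # hypothetical codebook size; 512 = relu4_4 channels
codebook = torch.nn.Parameter(torch.randn(num_codes, dim))
optimizer = torch.optim.Adam([codebook], lr=1e-3)

def kmeans_step(feats):
    """One K-means update from a mini-batch of features, shape (N, dim)."""
    dists = torch.cdist(feats, codebook)    # (N, num_codes) pairwise L2 distances
    nearest = dists.argmin(dim=1)           # hard-assign each feature to its closest code
    loss = (feats - codebook[nearest]).pow(2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()                         # gradients flow only into the selected codes
    optimizer.step()
    return loss.item()
```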
The semantic-aware VQGAN itself is then pretrained with:

```
python basicsr/train.py -opt options/train_QuanTexSR_HQ_pretrain_stage.yml
```
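As background, the central operation of a VQGAN-style prior is nearest-neighbor quantization of encoder features against the codebook, with a straight-through gradient so the encoder still trains. The minimal sketch below illustrates that step only; it is not the repository's implementation:

```python
# Sketch: nearest-neighbor vector quantization with a straight-through estimator.
import torch

def quantize(z: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """z: (B, C, H, W) encoder features; codebook: (K, C) code vectors."""
    B, C, H, W = z.shape
    flat = z.permute(0, 2, 3, 1).reshape(-1, C)       # (B*H*W, C)
    idx = torch.cdist(flat, codebook).argmin(dim=1)   # nearest code per feature vector
    z_q = codebook[idx].reshape(B, H, W, C).permute(0, 3, 1, 2)
    # straight-through estimator: forward uses z_q, backward copies grads to z
    return z + (z_q - z).detach()
```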
If you find this work useful for your research, please cite:

```
@misc{chen2022quantexsr,
  author={Chaofeng Chen and Xinyu Shi and Yipeng Qin and Xiaoming Li and Xiaoguang Han and Tao Yang and Shihui Guo},
  title={Blind Image Super Resolution with Semantic-Aware Quantized Texture Prior},
  year={2022},
  eprint={2202.13142},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
This project is based on BasicSR.