diff --git a/README.md b/README.md
index 2aa1e63..9678432 100644
--- a/README.md
+++ b/README.md
@@ -5,8 +5,15 @@ Note that this version is not compatible with previous versions. If you want to
# :rocket: [BasicSR](https://github.com/xinntao/BasicSR)
-[GitHub](https://github.com/xinntao/BasicSR) | [Gitee码云](https://gitee.com/xinntao/BasicSR)
-[English](README.md) | [简体中文](README_CN.md)
+[English](README.md) **|** [简体中文](README_CN.md) [GitHub](https://github.com/xinntao/BasicSR) **|** [Gitee码云](https://gitee.com/xinntao/BasicSR)
+
+:arrow_double_down: Google Drive: [Pretrained Models](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) **|** [Reproduced Experiments](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)
+:arrow_double_down: 百度网盘: [预训练模型](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g) **|** [复现实验](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ)
+:chart_with_upwards_trend: [Training curves in wandb](https://app.wandb.ai/xintao/basicsr)
+:computer: [Commands for training and testing](docs/TrainTest.md)
+:zap: [HOWTOs](#zap-howtos)
+
+---
BasicSR is an **open source** image and video super-resolution toolbox based on PyTorch (will extend to more restoration tasks in the future).
([ESRGAN](https://github.com/xinntao/ESRGAN), [EDVR](https://github.com/xinntao/EDVR), [DNI](https://github.com/xinntao/DNI), [SFTGAN](https://github.com/xinntao/SFTGAN))
@@ -34,11 +41,11 @@ BasicSR is an **open source** image and video super-resolution toolbox based on
We provide simple pipelines to train, test and run inference on models for a quick start.
These pipelines/commands cannot cover every case; more details are given in the following sections.
-- :zap: [How to train StyleGAN2](docs/HOWTOs.md#How-to-train-StyleGAN2)
-- :zap: [How to test StyleGAN2](docs/HOWTOs.md#How-to-test-StyleGAN2)
-- :zap: [How to test DFDNet](docs/HOWTOs.md#How-to-test-DFDNet)
+- [How to train StyleGAN2](docs/HOWTOs.md#How-to-train-StyleGAN2)
+- [How to test StyleGAN2](docs/HOWTOs.md#How-to-test-StyleGAN2)
+- [How to test DFDNet](docs/HOWTOs.md#How-to-test-DFDNet)
-## Dependencies and Installation
+## :wrench: Dependencies and Installation
- Python >= 3.7 (we recommend using [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.3](https://pytorch.org/)
@@ -54,25 +61,22 @@ python setup.py develop
Note that BasicSR is only tested on Ubuntu and may not work on Windows. You may try [Windows WSL with CUDA support](https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl) :-) (currently only available in Insider builds on the Fast ring).
-## TODO List
+## :hourglass_flowing_sand: TODO List
Please see [project boards](https://github.com/xinntao/BasicSR/projects).
-## Dataset Preparation
+## :turtle: Dataset Preparation
- Please refer to **[DatasetPreparation.md](docs/DatasetPreparation.md)** for more details.
- The descriptions of currently supported datasets (`torch.utils.data.Dataset` classes) are in [Datasets.md](docs/Datasets.md).
-## Train and Test
+## :computer: Train and Test
- **Training and testing commands**: Please see **[TrainTest.md](docs/TrainTest.md)** for the basic usage.
- **Options/Configs**: Please refer to [Config.md](docs/Config.md).
- **Logging**: Please refer to [Logging.md](docs/Logging.md).
-## Model Zoo and Baselines
-
-**[Download official pre-trained models](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing)**
-**[Download reproduced models and logs](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)**
+## :card_file_box: Model Zoo and Baselines
- The descriptions of currently supported models are in [Models.md](docs/Models.md).
- **Pre-trained models and log examples** are available in **[ModelZoo.md](docs/ModelZoo.md)**.
@@ -83,7 +87,7 @@ Please see [project boards](https://github.com/xinntao/BasicSR/projects).
-## Codebase Designs and Conventions
+## :memo: Codebase Designs and Conventions
Please see [DesignConvention.md](docs/DesignConvention.md) for the designs and conventions of the BasicSR codebase.
The figure below shows the overall framework. More descriptions for each component:
@@ -91,11 +95,11 @@ The figure below shows the overall framework. More descriptions for each compone
![overall_structure](./assets/overall_structure.png)
-## License and Acknowledgement
+## :scroll: License and Acknowledgement
This project is released under the Apache 2.0 license.
More details about license and acknowledgement are in [LICENSE](LICENSE/README.md).
-## Contact
+## :e-mail: Contact
If you have any question, please email `xintao.wang@outlook.com`.
diff --git a/README_CN.md b/README_CN.md
index 9dcd378..c963f68 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -6,8 +6,15 @@
# :rocket: [BasicSR](https://github.com/xinntao/BasicSR)
-[GitHub](https://github.com/xinntao/BasicSR) | [Gitee码云](https://gitee.com/xinntao/BasicSR)
-[English](README.md) | [简体中文](README_CN.md)
+[English](README.md) **|** [简体中文](README_CN.md) [GitHub](https://github.com/xinntao/BasicSR) **|** [Gitee码云](https://gitee.com/xinntao/BasicSR)
+
+:arrow_double_down: 百度网盘: [预训练模型](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g) **|** [复现实验](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ)
+:arrow_double_down: Google Drive: [Pretrained Models](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) **|** [Reproduced Experiments](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)
+:chart_with_upwards_trend: [wandb的训练曲线](https://app.wandb.ai/xintao/basicsr)
+:computer: [训练和测试的命令](docs/TrainTest_CN.md)
+:zap: [HOWTOs](#zap-howtos)
+
+---
BasicSR 是一个基于 PyTorch 的**开源**图像视频超分辨率 (Super-Resolution) 工具箱 (之后会支持更多的 Restoration 任务).
([ESRGAN](https://github.com/xinntao/ESRGAN), [EDVR](https://github.com/xinntao/EDVR), [DNI](https://github.com/xinntao/DNI), [SFTGAN](https://github.com/xinntao/SFTGAN))
@@ -30,15 +37,15 @@ BasicSR 是一个基于 PyTorch 的**开源**图像视频超分辨率 (Super-Res
-## :zap:HOWTOs
+## :zap: HOWTOs
我们提供了简单的流程来快速上手 训练/测试/推理 模型. 这些命令并不能涵盖所有用法, 更多的细节参见下面的部分.
-- :zap: [如何训练 StyleGAN2](docs/HOWTOs_CN.md#如何训练-StyleGAN2)
-- :zap: [如何测试 StyleGAN2](docs/HOWTOs_CN.md#如何测试-StyleGAN2)
-- :zap: [如何测试 DFDNet](docs/HOWTOs_CN.md#如何测试-DFDNet)
+- [如何训练 StyleGAN2](docs/HOWTOs_CN.md#如何训练-StyleGAN2)
+- [如何测试 StyleGAN2](docs/HOWTOs_CN.md#如何测试-StyleGAN2)
+- [如何测试 DFDNet](docs/HOWTOs_CN.md#如何测试-DFDNet)
-## 依赖和安装
+## :wrench: 依赖和安装
- Python >= 3.7 (推荐使用 [Anaconda](https://www.anaconda.com/download/#linux) 或 [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.3](https://pytorch.org/)
@@ -54,25 +61,22 @@ python setup.py develop
注意: BasicSR 仅在 Ubuntu 下进行测试,或许不支持Windows. 可以在Windows下尝试[支持CUDA的Windows WSL](https://docs.microsoft.com/en-us/windows/win32/direct3d12/gpu-cuda-in-wsl) :-) (目前只有Fast ring的预览版系统可以安装).
-## TODO 清单
+## :hourglass_flowing_sand: TODO 清单
参见 [project boards](https://github.com/xinntao/BasicSR/projects).
-## 数据准备
+## :turtle: 数据准备
- 数据准备步骤, 参见 **[DatasetPreparation_CN.md](docs/DatasetPreparation_CN.md)**.
- 目前支持的数据集 (`torch.utils.data.Dataset`类), 参见 [Datasets_CN.md](docs/Datasets_CN.md).
-## 训练和测试
+## :computer: 训练和测试
- **训练和测试的命令**, 参见 **[TrainTest_CN.md](docs/TrainTest_CN.md)**.
- **Options/Configs**配置文件的说明, 参见 [Config_CN.md](docs/Config_CN.md).
- **Logging**日志系统的说明, 参见 [Logging_CN.md](docs/Logging_CN.md).
-## 模型库和基准
-
-**[下载官方提供的预训练模型](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing)**
-**[下载复现的模型和log](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)**
+## :card_file_box: 模型库和基准
- 目前支持的模型描述, 参见 [Models_CN.md](docs/Models_CN.md).
- **预训练模型和log样例**, 参见 **[ModelZoo_CN.md](docs/ModelZoo_CN.md)**.
@@ -83,7 +87,7 @@ python setup.py develop
-## 代码库的设计和约定
+## :memo: 代码库的设计和约定
参见 [DesignConvention_CN.md](docs/DesignConvention_CN.md).
下图概括了整体的框架. 每个模块更多的描述参见:
@@ -91,11 +95,11 @@ python setup.py develop
![overall_structure](./assets/overall_structure.png)
-## 许可
+## :scroll: 许可
本项目使用 Apache 2.0 license.
更多细节参见 [LICENSE](LICENSE/README.md).
-#### 联系
+## :e-mail: 联系
若有任何问题, 请电邮 `xintao.wang@outlook.com`.
diff --git a/VERSION b/VERSION
index 9084fa2..524cb55 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-1.1.0
+1.1.1
diff --git a/basicsr/data/util.py b/basicsr/data/util.py
index 798d368..50245ac 100644
--- a/basicsr/data/util.py
+++ b/basicsr/data/util.py
@@ -246,6 +246,21 @@ def paired_paths_from_folder(folders, keys, filename_tmpl):
return paths
+def paths_from_folder(folder):
+ """Generate paths from folder.
+
+ Args:
+ folder (str): Folder path.
+
+ Returns:
+ list[str]: Returned path list.
+ """
+
+ paths = list(mmcv.scandir(folder))
+ paths = [osp.join(folder, path) for path in paths]
+ return paths
+
+
def generate_gaussian_kernel(kernel_size=13, sigma=1.6):
"""Generate Gaussian kernel used in `duf_downsample`.
diff --git a/basicsr/models/archs/__init__.py b/basicsr/models/archs/__init__.py
index 8f17af5..a00982a 100644
--- a/basicsr/models/archs/__init__.py
+++ b/basicsr/models/archs/__init__.py
@@ -15,3 +15,31 @@
importlib.import_module(f'basicsr.models.archs.{file_name}')
for file_name in arch_filenames
]
+
+
+def dynamic_instantiation(modules, cls_type, opt):
+ """Dynamically instantiate class.
+
+ Args:
+ modules (list[importlib modules]): List of modules from importlib
+ files.
+ cls_type (str): Class type.
+ opt (dict): Class initialization kwargs.
+
+ Returns:
+ class: Instantiated class.
+ """
+
+ for module in modules:
+ cls_ = getattr(module, cls_type, None)
+ if cls_ is not None:
+ break
+ if cls_ is None:
+ raise ValueError(f'{cls_type} is not found.')
+ return cls_(**opt)
+
+
+def define_network(opt):
+ network_type = opt.pop('type')
+ net = dynamic_instantiation(_arch_modules, network_type, opt)
+ return net
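
A hedged sketch of how the relocated `define_network` is called (mirroring the `sr_model.py` change below); the option keys and the `MSRResNet` architecture are illustrative assumptions about one registered arch, not part of this diff:

```python
from copy import deepcopy

from basicsr.models.archs import define_network

# Illustrative generator options; keys must match the chosen arch's __init__.
opt_network_g = {
    'type': 'MSRResNet',  # any class defined under basicsr/models/archs/
    'num_in_ch': 3,
    'num_out_ch': 3,
    'num_feat': 64,
    'num_block': 16,
    'upscale': 4,
}

# define_network pops 'type' from the dict, so pass a copy (as the models do).
net_g = define_network(deepcopy(opt_network_g))
print(net_g.__class__.__name__)  # -> MSRResNet
```
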
diff --git a/basicsr/models/networks.py b/basicsr/models/networks.py
deleted file mode 100644
index ecc97f1..0000000
--- a/basicsr/models/networks.py
+++ /dev/null
@@ -1,39 +0,0 @@
-from basicsr.models.archs import _arch_modules
-
-
-def dynamic_instantiation(modules, cls_type, opt):
- """Dynamically instantiate class.
-
- Args:
- modules (list[importlib modules]): List of modules from importlib
- files.
- cls_type (str): Class type.
- opt (dict): Class initialization kwargs.
-
- Returns:
- class: Instantiated class.
- """
-
- for module in modules:
- cls_ = getattr(module, cls_type, None)
- if cls_ is not None:
- break
- if cls_ is None:
- raise ValueError(f'{cls_type} is not found.')
- return cls_(**opt)
-
-
-# Generator
-def define_net_g(opt):
- network_type = opt.pop('type')
- net_g = dynamic_instantiation(_arch_modules, network_type, opt)
-
- return net_g
-
-
-# Discriminator
-def define_net_d(opt):
- network_type = opt.pop('type')
-
- net_d = dynamic_instantiation(_arch_modules, network_type, opt)
- return net_d
diff --git a/basicsr/models/sr_model.py b/basicsr/models/sr_model.py
index 56b6193..66a98b6 100644
--- a/basicsr/models/sr_model.py
+++ b/basicsr/models/sr_model.py
@@ -5,7 +5,7 @@
from copy import deepcopy
from os import path as osp
-from basicsr.models import networks as networks
+from basicsr.models.archs import define_network
from basicsr.models.base_model import BaseModel
from basicsr.utils import ProgressBar, get_root_logger, tensor2img
@@ -20,7 +20,7 @@ def __init__(self, opt):
super(SRModel, self).__init__(opt)
# define network
- self.net_g = networks.define_net_g(deepcopy(opt['network_g']))
+ self.net_g = define_network(deepcopy(opt['network_g']))
self.net_g = self.model_to_device(self.net_g)
self.print_network(self.net_g)
diff --git a/basicsr/models/srgan_model.py b/basicsr/models/srgan_model.py
index a9e7c27..d927773 100644
--- a/basicsr/models/srgan_model.py
+++ b/basicsr/models/srgan_model.py
@@ -3,7 +3,7 @@
from collections import OrderedDict
from copy import deepcopy
-from basicsr.models import networks as networks
+from basicsr.models.archs import define_network
from basicsr.models.sr_model import SRModel
loss_module = importlib.import_module('basicsr.models.losses')
@@ -16,7 +16,7 @@ def init_training_settings(self):
train_opt = self.opt['train']
# define network net_d
- self.net_d = networks.define_net_d(deepcopy(self.opt['network_d']))
+ self.net_d = define_network(deepcopy(self.opt['network_d']))
self.net_d = self.model_to_device(self.net_d)
self.print_network(self.net_d)
diff --git a/basicsr/models/stylegan2_model.py b/basicsr/models/stylegan2_model.py
index d3ab279..7cf7aec 100644
--- a/basicsr/models/stylegan2_model.py
+++ b/basicsr/models/stylegan2_model.py
@@ -1,13 +1,14 @@
import importlib
import math
import mmcv
+import numpy as np
import random
import torch
from collections import OrderedDict
from copy import deepcopy
from os import path as osp
-from basicsr.models import networks as networks
+from basicsr.models.archs import define_network
from basicsr.models.base_model import BaseModel
from basicsr.models.losses.losses import g_path_regularize, r1_penalty
from basicsr.utils import tensor2img
@@ -22,7 +23,7 @@ def __init__(self, opt):
super(StyleGAN2Model, self).__init__(opt)
# define network net_g
- self.net_g = networks.define_net_g(deepcopy(opt['network_g']))
+ self.net_g = define_network(deepcopy(opt['network_g']))
self.net_g = self.model_to_device(self.net_g)
self.print_network(self.net_g)
# load pretrained model
@@ -34,8 +35,9 @@ def __init__(self, opt):
# latent dimension: self.num_style_feat
self.num_style_feat = opt['network_g']['num_style_feat']
+ num_val_samples = self.opt['val'].get('num_val_samples', 16)
self.fixed_sample = torch.randn(
- 16, self.num_style_feat, device=self.device)
+ num_val_samples, self.num_style_feat, device=self.device)
if self.is_train:
self.init_training_settings()
@@ -44,7 +46,7 @@ def init_training_settings(self):
train_opt = self.opt['train']
# define network net_d
- self.net_d = networks.define_net_d(deepcopy(self.opt['network_d']))
+ self.net_d = define_network(deepcopy(self.opt['network_d']))
self.net_d = self.model_to_device(self.net_d)
self.print_network(self.net_d)
@@ -57,8 +59,8 @@ def init_training_settings(self):
# define network net_g with Exponential Moving Average (EMA)
# net_g_ema only used for testing on one GPU and saving, do not need to
# wrap with DistributedDataParallel
- self.net_g_ema = networks.define_net_g(
- deepcopy(self.opt['network_g'])).to(self.device)
+ self.net_g_ema = define_network(deepcopy(self.opt['network_g'])).to(
+ self.device)
# load pretrained model
load_path = self.opt['path'].get('pretrain_model_g', None)
if load_path is not None:
@@ -311,7 +313,8 @@ def nondist_validation(self, dataloader, current_iter, tb_logger,
f'test_{self.opt["name"]}.png')
mmcv.imwrite(result, save_img_path)
# add sample images to tb_logger
- result = mmcv.bgr2rgb(result / 255.)
+ result = (result / 255.).astype(np.float32)
+ result = mmcv.bgr2rgb(result)
if tb_logger is not None:
tb_logger.add_image(
'samples', result, global_step=current_iter, dataformats='HWC')
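
For reference, a sketch of the `val` options that the changed line reads; the concrete values are assumptions for illustration, and only the `num_val_samples` key (defaulting to 16) is new in this diff:

```python
# Illustrative slice of the parsed options (normally loaded from a YAML config).
opt = {
    'network_g': {'num_style_feat': 512},
    'val': {
        'val_freq': 5000,       # existing option
        'save_img': True,       # existing option
        'num_val_samples': 8,   # new: number of fixed latent samples to draw
    },
}
# With these options, StyleGAN2Model would build a fixed_sample tensor of
# shape (8, 512) instead of the previously hard-coded 16 samples.
```
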
diff --git a/basicsr/models/video_gan_model.py b/basicsr/models/video_gan_model.py
index 836954b..94ccf4b 100644
--- a/basicsr/models/video_gan_model.py
+++ b/basicsr/models/video_gan_model.py
@@ -3,7 +3,7 @@
from collections import OrderedDict
from copy import deepcopy
-from basicsr.models import networks as networks
+from basicsr.models.archs import define_network
from basicsr.models.video_base_model import VideoBaseModel
loss_module = importlib.import_module('basicsr.models.losses')
@@ -16,7 +16,7 @@ def init_training_settings(self):
train_opt = self.opt['train']
# define network net_d
- self.net_d = networks.define_net_d(deepcopy(self.opt['network_d']))
+ self.net_d = define_network(deepcopy(self.opt['network_d']))
self.net_d = self.model_to_device(self.net_d)
self.print_network(self.net_d)
diff --git a/basicsr/train.py b/basicsr/train.py
index 07c0736..0d769c8 100644
--- a/basicsr/train.py
+++ b/basicsr/train.py
@@ -205,8 +205,8 @@ def main():
model.save(epoch, current_iter)
# validation
- if opt['val']['val_freq'] is not None and current_iter % opt[
- 'val']['val_freq'] == 0:
+ if opt.get('val') is not None and (current_iter %
+ opt['val']['val_freq'] == 0):
model.validation(val_loader, current_iter, tb_logger,
opt['val']['save_img'])
@@ -222,7 +222,7 @@ def main():
logger.info(f'End of training. Time consumed: {consumed_time}')
logger.info('Save the latest model.')
model.save(epoch=-1, current_iter=-1) # -1 stands for the latest
- if opt['val']['val_freq'] is not None:
+ if opt.get('val') is not None:
model.validation(val_loader, current_iter, tb_logger,
opt['val']['save_img'])
if tb_logger:
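
A toy sketch of why the guard changed: the old check indexed `opt['val']['val_freq']` directly and raised `KeyError` for configs without a `val` section, while the new check simply skips validation (the minimal dicts below are assumptions for illustration):

```python
# Config without a 'val' section, e.g. a pure-training run.
opt = {'logger': {'print_freq': 100}}
current_iter = 8000

if opt.get('val') is not None and (current_iter % opt['val']['val_freq'] == 0):
    print('run validation')
else:
    print("no 'val' section -> validation skipped")
```
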
diff --git a/basicsr/utils/options.py b/basicsr/utils/options.py
index cddf25e..f7717f0 100644
--- a/basicsr/utils/options.py
+++ b/basicsr/utils/options.py
@@ -75,7 +75,8 @@ def parse(opt_path, is_train=True):
# change some options for debug mode
if 'debug' in opt['name']:
- opt['val']['val_freq'] = 8
+ if 'val' in opt:
+ opt['val']['val_freq'] = 8
opt['logger']['print_freq'] = 1
opt['logger']['save_checkpoint_freq'] = 8
else: # test
diff --git a/docs/Config.md b/docs/Config.md
index 3c0163b..f2a0775 100644
--- a/docs/Config.md
+++ b/docs/Config.md
@@ -1,6 +1,6 @@
# Configuration
-[English](Config.md) | [简体中文](Config_CN.md)
+[English](Config.md) **|** [简体中文](Config_CN.md)
#### Contents
diff --git a/docs/Config_CN.md b/docs/Config_CN.md
index db77fc6..6fa159d 100644
--- a/docs/Config_CN.md
+++ b/docs/Config_CN.md
@@ -1,6 +1,6 @@
# 配置文件
-[English](Config.md) | [简体中文](Config_CN.md)
+[English](Config.md) **|** [简体中文](Config_CN.md)
#### 目录
diff --git a/docs/DatasetPreparation.md b/docs/DatasetPreparation.md
index a18159d..b579f58 100644
--- a/docs/DatasetPreparation.md
+++ b/docs/DatasetPreparation.md
@@ -1,6 +1,6 @@
# Dataset Preparation
-[English](DatasetPreparation.md) | [简体中文](DatasetPreparation_CN.md)
+[English](DatasetPreparation.md) **|** [简体中文](DatasetPreparation_CN.md)
#### Contents
diff --git a/docs/DatasetPreparation_CN.md b/docs/DatasetPreparation_CN.md
index b8dc492..7600256 100644
--- a/docs/DatasetPreparation_CN.md
+++ b/docs/DatasetPreparation_CN.md
@@ -1,6 +1,6 @@
# 数据准备
-[English](DatasetPreparation.md) | [简体中文](DatasetPreparation_CN.md)
+[English](DatasetPreparation.md) **|** [简体中文](DatasetPreparation_CN.md)
#### 目录
diff --git a/docs/Datasets.md b/docs/Datasets.md
index 2264823..c7653af 100644
--- a/docs/Datasets.md
+++ b/docs/Datasets.md
@@ -1,6 +1,6 @@
# Datasets
-[English](Datasets.md) | [简体中文](Datasets_CN.md)
+[English](Datasets.md) **|** [简体中文](Datasets_CN.md)
## Supported Datasets
diff --git a/docs/Datasets_CN.md b/docs/Datasets_CN.md
index e785f2c..0a6324a 100644
--- a/docs/Datasets_CN.md
+++ b/docs/Datasets_CN.md
@@ -1,6 +1,6 @@
# 数据处理
-[English](Datasets.md) | [简体中文](Datasets_CN.md)
+[English](Datasets.md) **|** [简体中文](Datasets_CN.md)
## 支持的数据处理
diff --git a/docs/DesignConvention.md b/docs/DesignConvention.md
index 4828147..35ee55f 100644
--- a/docs/DesignConvention.md
+++ b/docs/DesignConvention.md
@@ -1,6 +1,6 @@
# Codebase Designs and Conventions
-[English](DesignConvention.md) | [简体中文](DesignConvention_CN.md)
+[English](DesignConvention.md) **|** [简体中文](DesignConvention_CN.md)
#### Contents
diff --git a/docs/DesignConvention_CN.md b/docs/DesignConvention_CN.md
index 8b20111..d3c16d3 100644
--- a/docs/DesignConvention_CN.md
+++ b/docs/DesignConvention_CN.md
@@ -1,6 +1,6 @@
# 代码库的设计和约定
-[English](DesignConvention.md) | [简体中文](DesignConvention_CN.md)
+[English](DesignConvention.md) **|** [简体中文](DesignConvention_CN.md)
#### 目录
diff --git a/docs/HOWTOs.md b/docs/HOWTOs.md
index 1b2632c..a2d6433 100644
--- a/docs/HOWTOs.md
+++ b/docs/HOWTOs.md
@@ -1,6 +1,6 @@
# HOWTOs
-[English](HOWTOs.md) | [简体中文](HOWTOs_CN.md)
+[English](HOWTOs.md) **|** [简体中文](HOWTOs_CN.md)
## How to train StyleGAN2
@@ -17,7 +17,7 @@
## How to test StyleGAN2
-1. Download pre-trained models from [ModelZoo](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) to the `experiments/pretrained_models` folder.
+1. Download pre-trained models from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models` folder.
1. Test.
> python tests/test_stylegan2.py
@@ -30,15 +30,15 @@
1. Clone dlib repo: `git clone git@github.com:davisking/dlib.git`
1. `cd dlib`
1. Install: `python setup.py install`
-2. Download the dlib pretrained models from [ModelZoo](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) to the `experiments/pretrained_models/dlib` folder.
+2. Download the dlib pretrained models from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models/dlib` folder.
You can download them by running the following command, or download the pretrained models manually.
- > python scripts/download_pretrained_models.py --method dlib
+ > python scripts/download_pretrained_models.py dlib
-3. Download pretrained DFDNet models, dictionary and face template from [ModelZoo](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) to the `experiments/pretrained_models/DFDNet` folder.
+3. Download pretrained DFDNet models, dictionary and face template from **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) to the `experiments/pretrained_models/DFDNet` folder.
You can download them by running the following command, or download the pretrained models manually.
- > python scripts/download_pretrained_models.py --method DFDNet
+ > python scripts/download_pretrained_models.py DFDNet
4. Prepare the testing dataset in the `datasets` folder; for example, put the images in the `datasets/TestWhole` folder.
5. Test.
diff --git a/docs/HOWTOs_CN.md b/docs/HOWTOs_CN.md
index e728e53..aad7f25 100644
--- a/docs/HOWTOs_CN.md
+++ b/docs/HOWTOs_CN.md
@@ -1,6 +1,6 @@
# HOWTOs
-[English](HOWTOs.md) | [简体中文](HOWTOs_CN.md)
+[English](HOWTOs.md) **|** [简体中文](HOWTOs_CN.md)
## 如何训练 StyleGAN2
@@ -17,7 +17,7 @@
## 如何测试 StyleGAN2
-1. 从 [ModelZoo](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) 下载预训练模型到 `experiments/pretrained_models` 文件夹.
+1. 从 **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) 下载预训练模型到 `experiments/pretrained_models` 文件夹.
1. 测试.
> python tests/test_stylegan2.py
@@ -30,15 +30,15 @@
1. 克隆 dlib repo: `git clone git@github.com:davisking/dlib.git`
1. `cd dlib`
1. 安装: `python setup.py install`
-2. 从 [ModelZoo](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) 下载预训练的 dlib 模型到 `experiments/pretrained_models/dlib` 文件夹.
+2. 从 **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) 下载预训练的 dlib 模型到 `experiments/pretrained_models/dlib` 文件夹.
你可以通过运行下面的命令下载 或 手动下载.
- > python scripts/download_pretrained_models.py --method dlib
+ > python scripts/download_pretrained_models.py dlib
-3. 从 [ModelZoo](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing) 下载 DFDNet 模型, 字典和人脸关键点模板到 `experiments/pretrained_models/DFDNet` 文件夹.
+3. 从 **ModelZoo** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g)) 下载 DFDNet 模型, 字典和人脸关键点模板到 `experiments/pretrained_models/DFDNet` 文件夹.
你可以通过运行下面的命令下载 或 手动下载.
- > python scripts/download_pretrained_models.py --method DFDNet
+ > python scripts/download_pretrained_models.py DFDNet
4. 准备测试图片到 `datasets`, 比如说我们把测试图片放在 `datasets/TestWhole` 文件夹.
5. 测试.
diff --git a/docs/Logging.md b/docs/Logging.md
index 68e486b..ac2e45c 100644
--- a/docs/Logging.md
+++ b/docs/Logging.md
@@ -1,6 +1,6 @@
# Logging
-[English](Logging.md) | [简体中文](Logging_CN.md)
+[English](Logging.md) **|** [简体中文](Logging_CN.md)
#### Contents
diff --git a/docs/Logging_CN.md b/docs/Logging_CN.md
index 01d489e..cb1889f 100644
--- a/docs/Logging_CN.md
+++ b/docs/Logging_CN.md
@@ -1,6 +1,6 @@
# Logging日志
-[English](Logging.md) | [简体中文](Logging_CN.md)
+[English](Logging.md) **|** [简体中文](Logging_CN.md)
#### 目录
diff --git a/docs/ModelZoo.md b/docs/ModelZoo.md
index 4431ec5..af6579c 100644
--- a/docs/ModelZoo.md
+++ b/docs/ModelZoo.md
@@ -1,6 +1,6 @@
# Model Zoo and Baselines
-[English](ModelZoo.md) | [简体中文](ModelZoo_CN.md)
+[English](ModelZoo.md) **|** [简体中文](ModelZoo_CN.md)
We provide:
@@ -9,16 +9,16 @@ We provide:
You can put the downloaded models in the `experiments/pretrained_models` folder.
-**[Download official pre-trained models](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing)**
+**[Download official pre-trained models]** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g))
You can use the script to download pre-trained models from Google Drive.
```python
-python scripts/download_pretrained_models.py --method ESRGAN
+python scripts/download_pretrained_models.py ESRGAN
# method can be ESRGAN, EDVR, StyleGAN, EDSR, DUF, DFDNet, dlib
```
-**[Download reproduced models and logs](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)**
+**[Download reproduced models and logs]** ([Google Drive](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing), [百度网盘](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ))
In addition, we upload the training process and curves to [wandb](https://www.wandb.com/).
diff --git a/docs/ModelZoo_CN.md b/docs/ModelZoo_CN.md
index 8299231..9290a91 100644
--- a/docs/ModelZoo_CN.md
+++ b/docs/ModelZoo_CN.md
@@ -1,6 +1,6 @@
# 模型库和基准
-[English](ModelZoo.md) | [简体中文](ModelZoo_CN.md)
+[English](ModelZoo.md) **|** [简体中文](ModelZoo_CN.md)
我们提供了:
@@ -9,16 +9,15 @@
下载的模型可以放在 `experiments/pretrained_models` 文件夹.
-**[下载官方提供的预训练模型](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing)**
-
+**[下载官方提供的预训练模型]** ([Google Drive](https://drive.google.com/drive/folders/15DgDtfaLASQ3iAPJEVHQF49g9msexECG?usp=sharing), [百度网盘](https://pan.baidu.com/s/1R6Nc4v3cl79XPAiK0Toe7g))
你可以使用以下脚本从Google Drive下载预训练模型.
```python
-python scripts/download_pretrained_models.py --method ESRGAN
+python scripts/download_pretrained_models.py ESRGAN
# method can be ESRGAN, EDVR, StyleGAN, EDSR, DUF, DFDNet, dlib
```
-**[下载复现的模型和log](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing)**
+**[下载复现的模型和log]** ([Google Drive](https://drive.google.com/drive/folders/1XN4WXKJ53KQ0Cu0Yv-uCt8DZWq6uufaP?usp=sharing), [百度网盘](https://pan.baidu.com/s/1UElD6q8sVAgn_cxeBDOlvQ))
此外, 我们在 [wandb](https://www.wandb.com/) 上更新了模型训练的过程和曲线. 大家可以方便的比较:
diff --git a/docs/Models.md b/docs/Models.md
index bf6e96d..35e156a 100644
--- a/docs/Models.md
+++ b/docs/Models.md
@@ -1,6 +1,6 @@
# Models
-[English](Models.md) | [简体中文](Models_CN.md)
+[English](Models.md) **|** [简体中文](Models_CN.md)
#### Contents
diff --git a/docs/Models_CN.md b/docs/Models_CN.md
index 7f95376..35eee42 100644
--- a/docs/Models_CN.md
+++ b/docs/Models_CN.md
@@ -1,6 +1,6 @@
# 模型
-[English](Models.md) | [简体中文](Models_CN.md)
+[English](Models.md) **|** [简体中文](Models_CN.md)
#### 目录
diff --git a/docs/TrainTest.md b/docs/TrainTest.md
index 4a6cab2..b3d7cd7 100644
--- a/docs/TrainTest.md
+++ b/docs/TrainTest.md
@@ -1,6 +1,6 @@
# Training and Testing
-[English](TrainTest.md) | [简体中文](TrainTest_CN.md)
+[English](TrainTest.md) **|** [简体中文](TrainTest_CN.md)
Please run the commands in the root path of `BasicSR`.
In general, both the training and testing include the following steps:
diff --git a/docs/TrainTest_CN.md b/docs/TrainTest_CN.md
index 2bb66e0..21c4cbd 100644
--- a/docs/TrainTest_CN.md
+++ b/docs/TrainTest_CN.md
@@ -1,6 +1,6 @@
# 训练和测试
-[English](TrainTest.md) | [简体中文](TrainTest_CN.md)
+[English](TrainTest.md) **|** [简体中文](TrainTest_CN.md)
所有的命令都在 `BasicSR` 的根目录下运行.
一般来说, 训练和测试都有以下的步骤:
diff --git a/scripts/download_pretrained_models.py b/scripts/download_pretrained_models.py
index a2fec5d..cc26218 100644
--- a/scripts/download_pretrained_models.py
+++ b/scripts/download_pretrained_models.py
@@ -29,7 +29,12 @@ def download_pretrained_models(method, file_ids):
if __name__ == '__main__':
parser = argparse.ArgumentParser()
- parser.add_argument('--method', type=str, default='ESRGAN')
+ parser.add_argument(
+ 'method',
+ type=str,
+ help=(
+ "Options: 'ESRGAN', 'EDVR', 'StyleGAN', 'EDSR', 'DUF', 'DFDNet', "
+ "'dlib'. Set to 'all' if you want to download all the models."))
args = parser.parse_args()
file_ids = {
@@ -121,4 +126,8 @@ def download_pretrained_models(method, file_ids):
}
}
- download_pretrained_models(args.method, file_ids[args.method])
+ if args.method == 'all':
+ for method in file_ids.keys():
+ download_pretrained_models(method, file_ids[method])
+ else:
+ download_pretrained_models(args.method, file_ids[args.method])