
AODA

By Xiaoyu Xiang, Ding Liu, Xiao Yang, Yiheng Zhu, Xiaohui Shen, Jan P. Allebach

This is the official PyTorch implementation of Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis.


Updates

  • Our paper will be presented at WACV 2022 on Jan 5, 19:30 (GMT-10). Welcome to come and ask questions!
  • 2021.12.26: Edited some code comments.
  • 2021.12.25: Uploaded all code. Merry Christmas!
  • 2021.12.21: Updated the LICENSE and repo contents.
  • 2021.4.15: Created the repo.

Contents

  1. Introduction
  2. Prerequisites
  3. Get Started
  4. Contact
  5. License
  6. Citations
  7. Acknowledgments

Introduction

This repository contains the entire project (including all the utility scripts) for our open-domain sketch-to-photo synthesis network, AODA.

AODA aims to synthesize a realistic photo from a freehand sketch and its class label, even if sketches of that class are missing from the training data. The work was accepted by WACV 2022 and the CVPR 2021 Workshop. The latest version of the paper, with supplementary materials, can be found on arXiv.

In AODA, we propose a simple yet effective open-domain sampling and optimization strategy to "fool" the generator into treating fake sketches as real ones. To achieve this, we adopt a framework that jointly learns sketch-to-photo and photo-to-sketch generation. Our approach shows impressive results in synthesizing realistic color and texture while maintaining the geometric composition for various categories of open-domain sketches.
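As a rough illustration (not the repository's actual code; all names below are hypothetical), the open-domain sampling step can be sketched in Python as follows:

# Minimal sketch of the open-domain sampling idea (hypothetical names,
# not the actual AODA implementation).
import random

def sample_training_pair(photos, sketches, open_classes, sketch_generator):
    """For open classes whose real sketches are missing, a fake sketch
    produced by the photo-to-sketch generator stands in for a real one,
    so the sketch-to-photo generator learns to treat fakes as real."""
    photo, label = random.choice(photos)          # photos: (image, class label) pairs
    if label in open_classes:
        sketch = sketch_generator(photo)          # fake sketch for a missing class
    else:
        sketch = random.choice(sketches[label])   # a real sketch is available
    return sketch, photo, label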

If our proposed architecture also helps your research, please consider citing our paper.

(Figure: overview of the AODA framework)

Prerequisites

  • Linux or macOS
  • Python 3 (Anaconda is recommended)
  • CPU or NVIDIA GPU + CUDA CuDNN
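Before training, you can optionally confirm that your PyTorch install sees the GPU (this quick check is ours, not part of the repo):

# Quick environment check: prints the PyTorch version and whether CUDA
# is available. CPU-only setups also work, just more slowly.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())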

Get Started

Installation

First, clone this repository:

git clone https://github.com/Mukosame/AODA.git

Then install the required packages:

pip install -r requirements.txt

Data Preparation

Three datasets are used in this paper: Scribble, SketchyCOCO, and QMUL-Sketch.

Scribble:

wget -N "http://www.robots.ox.ac.uk/~arnabg/scribble_dataset.zip"

SketchyCOCO:

Download from Google Drive.

QMUL-Sketch:

QMUL-Sketch combines three subsets: handbags (400 photos and sketches), ShoeV2 (2000 photos and 6648 sketches), and ChairV2 (400 photos and 1297 sketches). The complete dataset can be downloaded from Google Drive.
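The training scripts expect a CycleGAN-style folder layout under --dataroot (this layout is our assumption, based on the codebase AODA builds on and the testA path used below; verify against the repo's data loaders). A quick sanity check:

# Hedged sanity check for the expected dataset layout (trainA/trainB/
# testA/testB is an assumption borrowed from CycleGAN conventions).
from pathlib import Path

def check_dataset(dataroot):
    root = Path(dataroot)
    for split in ("trainA", "trainB", "testA", "testB"):
        folder = root / split
        if folder.is_dir():
            print(split, ":", sum(1 for _ in folder.iterdir()), "files")
        else:
            print(split, ": missing")

check_dataset("./dataset/scribble_10class_open/")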

Training

Train an AODA model:

python train.py --dataroot ./dataset/scribble_10class_open/ \
                --name scribble_aoda \
                --model aoda_gan \
                --gan_mode vanilla \
                --no_dropout \
                --n_classes 10 \
                --direction BtoA \
                --load_size 260

After training, the models models/latest_net_G_A.pth and models/latest_net_G_B.pth, their training state states/latest.state, and a corresponding log file train_scribble_aoda_xxx are saved under ./checkpoints/scribble_aoda/.
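The saved .pth files are standard PyTorch state dicts, so you can inspect a trained generator outside the training scripts; a minimal example (the network constructor itself lives in the repo's models/ package, so only the raw state dict is loaded here):

# Inspect a saved generator checkpoint. The path follows the training
# output described above; loading assumes a plain state dict.
import torch

state_dict = torch.load(
    "./checkpoints/scribble_aoda/models/latest_net_G_A.pth",
    map_location="cpu",
)
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))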

Testing

Please download the weights from Google Drive and put them into the weights/ folder.

You can switch --model_suffix to control the synthesis direction (sketch-to-photo or photo-to-sketch). For different datasets, change --name and the corresponding --n_classes:

python test.py --model_suffix _B --dataroot ./dataset/scribble/testA --name scribble_aoda --model test --phase test --no_dropout --n_classes 10

Your test results will be saved at ./results/test_latest/.
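To quickly list the synthesized images afterwards, a small helper (the *fake* naming pattern is an assumption carried over from the CycleGAN test script, not something this README specifies):

# Hedged helper: list synthesized images under the results folder.
from pathlib import Path

results = Path("./results/test_latest")
fakes = sorted(results.rglob("*fake*.png"))
print(len(fakes), "synthesized images")
for p in fakes[:5]:
    print(p)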

Contact

Xiaoyu Xiang.

You can also leave your questions as issues in the repository. I will be glad to answer them!

License

This project is released under the BSD 3-Clause License.

Citations

@inproceedings{xiang2022adversarial,
  title={Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis},
  author={Xiang, Xiaoyu and Liu, Ding and Yang, Xiao and Zhu, Yiheng and Shen, Xiaohui and Allebach, Jan P},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  year={2022}
}

Acknowledgments

This project is based on the PyTorch implementation of CycleGAN.
