
🚀 Unsupervised-Deraining-with-Event-Camera (ICCV 2023)

This is the official PyTorch implementation of the paper "Unsupervised Video Deraining with An Event Camera".

Jin Wang, Wenming Weng, Yueyi Zhang*, Zhiwei Xiong

University of Science and Technology of China (USTC), Hefei, China

*Corresponding Author

🚀 Greatly inspired by a passage from Contrastive Multiview Coding: "Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a “dog” can be seen, heard, and felt)."

Abstract

Current unsupervised video deraining methods are inefficient in modeling the intricate spatio-temporal properties of rain, which leads to unsatisfactory results. In this paper, we propose a novel approach by integrating a bio-inspired event camera into the unsupervised video deraining pipeline, which enables us to capture high temporal resolution information and model complex rain characteristics. Specifically, we first design an end-to-end learning-based network consisting of two modules, the asymmetric separation module and the cross-modal fusion module. The two modules are responsible for segregating the features of the rain-background layer, and for positive enhancement and negative suppression from a cross-modal perspective, respectively. Second, to regularize the network training, we elaborately design a cross-modal contrastive learning method that leverages the complementary information from event cameras, exploring the mutual exclusion and similarity of rain-background layers in different domains. This encourages the deraining network to focus on the distinctive characteristics of each layer and learn a more discriminative representation. Moreover, we construct the first real-world dataset comprising rainy videos and events using a hybrid imaging system. Extensive experiments demonstrate the superior performance of our method on both synthetic and real-world datasets.
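To make the contrastive idea concrete, the sketch below shows a generic InfoNCE-style loss in PyTorch, in which background features from the image and event branches of the same clip act as positives while rain-layer features act as negatives. The function name, feature shapes, and the exact choice of positive/negative pairs are illustrative assumptions, not the paper's formulation.

    import torch
    import torch.nn.functional as F

    def cross_modal_contrastive_loss(bg_img, bg_evt, rain_img, temperature=0.1):
        # bg_img, bg_evt, rain_img: (batch, dim) feature vectors from the
        # image-branch background, event-branch background, and rain layer.
        bg_img = F.normalize(bg_img, dim=1)
        bg_evt = F.normalize(bg_evt, dim=1)
        rain_img = F.normalize(rain_img, dim=1)
        pos = (bg_img * bg_evt).sum(dim=1, keepdim=True)  # (B, 1) cross-modal positive similarity
        neg = bg_img @ rain_img.t()                       # (B, B) background-vs-rain similarity
        logits = torch.cat([pos, neg], dim=1) / temperature
        labels = torch.zeros(bg_img.size(0), dtype=torch.long)  # positive sits at column 0
        return F.cross_entropy(logits, labels)

Minimizing such a loss pulls the two background views together while pushing the background away from the rain layer, which matches the "mutual exclusion and similarity" intuition in the abstract.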

Environment

The entire network is implemented with PyTorch 1.6, Python 3.8, and CUDA 11.3 on two NVIDIA GTX 1080 Ti GPUs. Install the dependencies with:

sh requirements.sh
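After installation, a quick sanity check (a generic PyTorch snippet, not part of this repository) confirms the interpreter and GPU stack match the versions above:

    import torch

    print(torch.__version__)           # expect 1.6.x
    print(torch.version.cuda)          # CUDA version PyTorch was built against
    print(torch.cuda.is_available())   # True if the GPUs are visible
    print(torch.cuda.device_count())   # 2 for the two-GPU setup above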

Dataset

Link: https://rec.ustc.edu.cn/share/e3f573e0-849b-11ef-9860-4ff745dd7f41
Password: 5eom

Training stage

CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 --master_port 29501 main.py
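torch.distributed.launch starts one worker process per --nproc_per_node and passes each worker a --local_rank argument; with CUDA_VISIBLE_DEVICES=0 and --nproc_per_node=1, the command above runs a single worker on GPU 0. A minimal sketch of the initialization such a worker typically performs is shown below; the argument name and the NCCL backend are common conventions, assumed rather than confirmed from main.py:

    import argparse

    import torch
    import torch.distributed as dist

    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=0)  # injected by the launcher
    args = parser.parse_args()

    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend="nccl")  # MASTER_ADDR/PORT, RANK, WORLD_SIZE come from env vars
    # ... build the model, wrap it in torch.nn.parallel.DistributedDataParallel, train ...

To use both GPUs of the setup above, raise --nproc_per_node to 2 and expose both devices via CUDA_VISIBLE_DEVICES=0,1.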

More Details

The following paths and files are specified in the ./options/v1.json file (an illustrative example follows the list):

  • model_dir: Directory where the model is stored.
  • record_dir: Directory where logs are saved.
  • log_dir: Directory for TensorBoard logs.
  • train_txt_file: Path to the training set file located in the data directory.
  • test_txt_file: Path to the test set file located in the data directory.
  • log_txt: Path to the log file.
  • h5_file: Path to the training dataset (HDF5 format).
  • test_h5_file: Path to the test dataset (HDF5 format).
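For illustration, a v1.json following this schema might look like the sketch below; every path is a placeholder, and only the key names come from the list above:

    {
      "model_dir": "./checkpoints",
      "record_dir": "./records",
      "log_dir": "./tb_logs",
      "train_txt_file": "./data/train.txt",
      "test_txt_file": "./data/test.txt",
      "log_txt": "./records/train_log.txt",
      "h5_file": "./data/train.h5",
      "test_h5_file": "./data/test.h5"
    }

The HDF5 datasets can be inspected with h5py to list their stored groups and datasets (the actual key layout is not documented here):

    import h5py

    with h5py.File("./data/train.h5", "r") as f:  # placeholder path
        f.visit(print)  # prints the name of every group/dataset in the file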

Contact

If you have any problems with the released code or dataset, please contact me by email ([email protected]).
