Project Page | Paper | Data
- Set up the Python environment:

      conda create -n invrender python=3.7
      conda activate invrender
      pip install -r requirement.txt
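  If the installation succeeds, the quick check below should run (this assumes requirement.txt installs PyTorch, which the training commands in the next section rely on):

  ```python
  # Minimal sanity check, assuming requirement.txt provides PyTorch.
  import torch

  print(torch.__version__)          # installed PyTorch version
  print(torch.cuda.is_available())  # should be True so that the --gpu flag below can be used
  ```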
- Download our example synthetic dataset from Google Drive and place it so that the scenes are available under `../Synthetic4Relight/` relative to the `code` directory (this is the path that `--data_split_dir` below expects).

Taking the scene `hotdog` as an example, the training process is as follows.
- Optimize geometry and outgoing radiance field from multi-view images (same as IDR).

      cd code
      python training/exp_runner.py --conf confs_sg/default.conf \
          --data_split_dir ../Synthetic4Relight/hotdog \
          --expname hotdog \
          --trainstage IDR \
          --gpu 1
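  For intuition only, here is a much-simplified sketch of what an IDR-style stage optimizes: a signed-distance MLP for geometry and a radiance MLP for view-dependent color, trained with a photometric loss plus an eikonal regularizer. The networks, the sphere tracer, and the hyperparameters below are illustrative placeholders, not the repo's actual modules.

  ```python
  import torch
  import torch.nn as nn

  # Toy stand-ins for the geometry (SDF) and outgoing-radiance MLPs (not the repo's architecture).
  sdf_net = nn.Sequential(nn.Linear(3, 64), nn.Softplus(beta=100), nn.Linear(64, 1))
  rad_net = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 3))
  opt = torch.optim.Adam(list(sdf_net.parameters()) + list(rad_net.parameters()), lr=5e-4)

  def sphere_trace(o, d, steps=32):
      """March each ray by the predicted signed distance to (approximately) reach the surface."""
      with torch.no_grad():
          t = torch.zeros(o.shape[0], 1)
          for _ in range(steps):
              t = t + sdf_net(o + t * d)
          return o + t * d

  # One toy iteration on random rays; real training uses camera rays and pixel colors from the images.
  rays_o = torch.zeros(1024, 3)
  rays_d = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
  gt_rgb = torch.rand(1024, 3)

  x = sphere_trace(rays_o, rays_d).requires_grad_(True)   # surface points (intersection treated as fixed here)
  normal = torch.autograd.grad(sdf_net(x).sum(), x, create_graph=True)[0]
  rgb = torch.sigmoid(rad_net(torch.cat([x, normal, rays_d], dim=-1)))

  loss = (rgb - gt_rgb).abs().mean() \
      + 0.1 * ((normal.norm(dim=-1) - 1.0) ** 2).mean()   # photometric + eikonal terms
  opt.zero_grad(); loss.backward(); opt.step()
  ```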
- Draw sample rays above surface points to train the indirect illumination and visibility MLP.

      python training/exp_runner.py --conf confs_sg/default.conf \
          --data_split_dir ../Synthetic4Relight/hotdog \
          --expname hotdog \
          --trainstage Illum \
          --gpu 1
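  Conceptually, this stage distills the stage-1 model into two cheap lookups: for a surface point and a direction, one network predicts whether the direction is occluded by the object itself, and another predicts the indirect radiance arriving from that direction, with labels obtained by tracing the sampled rays against the stage-1 geometry and querying its radiance field at the hit points. The sketch below only illustrates this supervision pattern; the network names, the toy sampler, and the random stand-in labels are all hypothetical.

  ```python
  import torch
  import torch.nn as nn

  # Illustrative networks: (surface point, direction) -> occlusion logit / indirect incoming radiance.
  vis_net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 1))
  ind_net = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))
  opt = torch.optim.Adam(list(vis_net.parameters()) + list(ind_net.parameters()), lr=5e-4)

  def sample_hemisphere(normals):
      """Toy sampler: one random direction in the upper hemisphere of each normal."""
      d = nn.functional.normalize(torch.randn_like(normals), dim=-1)
      return torch.where((d * normals).sum(-1, keepdim=True) < 0, -d, d)

  # Stand-ins for data produced by the frozen stage-1 model: surface points, normals, and
  # pseudo-labels from tracing each sampled ray (occluded or not; radiance at the hit point).
  x = torch.rand(1024, 3)
  n_hat = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
  d = sample_hemisphere(n_hat)
  occluded_gt = torch.randint(0, 2, (1024, 1)).float()   # 1 if the secondary ray hits the object again
  indirect_gt = torch.rand(1024, 3)                       # stage-1 outgoing radiance at those hit points

  inp = torch.cat([x, d], dim=-1)
  loss = nn.functional.binary_cross_entropy_with_logits(vis_net(inp), occluded_gt) \
      + ((ind_net(inp) - indirect_gt).abs() * occluded_gt).mean()   # indirect term only where occluded
  opt.zero_grad(); loss.backward(); opt.step()
  ```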
- Jointly optimize diffuse albedo, roughness, and direct illumination.

      python training/exp_runner.py --conf confs_sg/default.conf \
          --data_split_dir ../Synthetic4Relight/hotdog \
          --expname hotdog \
          --trainstage Material \
          --gpu 1
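  At a high level, this stage explains each observed pixel as the sum of direct light (an environment represented with spherical Gaussians, gated by the stage-2 visibility) and the stage-2 indirect illumination, reflected by a BRDF whose diffuse albedo and roughness are predicted per surface point. The actual code integrates the BRDF against the spherical Gaussians in closed form and includes a specular lobe; the sketch below is a diffuse-only Monte Carlo stand-in with made-up names and data, meant only to show which quantities are optimized.

  ```python
  import math
  import torch
  import torch.nn as nn

  # Illustrative material MLP (point -> albedo, roughness) and SG environment parameters.
  mat_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))
  env_sg = torch.randn(128, 7, requires_grad=True)   # per lobe: axis (3), sharpness (1), amplitude (3)
  opt = torch.optim.Adam(list(mat_net.parameters()) + [env_sg], lr=5e-4)

  def direct_light(d):
      """Evaluate the SG environment in directions d: sum_k amp_k * exp(lambda_k * (d . axis_k - 1))."""
      axis = nn.functional.normalize(env_sg[:, :3], dim=-1)            # (K, 3)
      lam, amp = env_sg[:, 3:4].abs(), env_sg[:, 4:].abs()             # (K, 1), (K, 3)
      g = torch.exp(lam.T * (d @ axis.T - 1.0))                        # (S, K) SG basis values
      return g.unsqueeze(-1).mul(amp.unsqueeze(0)).sum(dim=1)          # (S, 3) incoming radiance

  # One toy shading step at a single surface point (geometry from stage 1, visibility/indirect from stage 2).
  x, n_hat, gt_rgb = torch.rand(1, 3), torch.tensor([[0.0, 0.0, 1.0]]), torch.rand(1, 3)
  albedo, roughness = torch.sigmoid(mat_net(x)).split([3, 1], dim=-1)  # roughness feeds the (omitted) specular term

  d = nn.functional.normalize(torch.randn(256, 3), dim=-1)             # hemisphere samples around n_hat
  d = torch.where((d * n_hat).sum(-1, keepdim=True) < 0, -d, d)
  cos = (d * n_hat).sum(-1, keepdim=True).clamp(min=0.0)
  visibility = torch.ones(256, 1)     # would come from the stage-2 visibility MLP
  indirect = torch.zeros(256, 3)      # would come from the stage-2 indirect-illumination MLP

  L_i = visibility * direct_light(d) + indirect                        # incoming radiance per sample
  rgb = (albedo / math.pi) * (L_i * cos).mean(0, keepdim=True) * (2.0 * math.pi)  # diffuse rendering estimate

  loss = (rgb - gt_rgb).abs().mean()
  opt.zero_grad(); loss.backward(); opt.step()
  ```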
- Generate videos under novel illumination.

      python scripts/relight.py --conf confs_sg/default.conf \
          --data_split_dir ../Synthetic4Relight/hotdog \
          --expname hotdog \
          --timestamp latest \
          --gpu 1
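  The relighting step swaps the optimized illumination for a novel environment map and re-renders the scene with the recovered geometry and materials. The snippet below only sketches the surrounding loop of such a script, rotating an environment map across frames and assembling the renders into a video; `render_under_envmap` and the placeholder environment map are hypothetical, and writing `.mp4` with imageio requires the imageio-ffmpeg backend.

  ```python
  import numpy as np
  import imageio

  # Placeholder equirectangular environment map; in practice this would be a novel HDR map.
  env = np.random.rand(64, 128, 3).astype(np.float32)

  def render_under_envmap(env_map):
      """Hypothetical hook standing in for 'render the scene with the trained model under env_map'."""
      return np.clip(env_map.mean(axis=(0, 1)) * np.ones((256, 256, 3)), 0.0, 1.0)

  frames = []
  num_frames = 60
  for k in range(num_frames):
      shift = k * env.shape[1] // num_frames
      env_k = np.roll(env, shift, axis=1)              # rotate the environment around the vertical axis
      frame = render_under_envmap(env_k)
      frames.append((frame * 255).astype(np.uint8))

  imageio.mimwrite('relight_hotdog.mp4', frames, fps=30)  # needs imageio-ffmpeg for .mp4 output
  ```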
@inproceedings{zhang2022invrender,
  title={Modeling Indirect Illumination for Inverse Rendering},
  author={Zhang, Yuanqing and Sun, Jiaming and He, Xingyi and Fu, Huan and Jia, Rongfei and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2022}
}
Acknowledgements: part of our code is inherited from IDR and PhySG. We are grateful to the authors for releasing their code.