Yuliang Xiu · Jinlong Yang · Xu Cao · Dimitrios Tzionas · Michael J. Black
ECON is designed for human digitization from a color image. It combines the best properties of implicit and explicit representations to infer high-fidelity 3D clothed humans from in-the-wild images, even with loose clothing or challenging poses. ECON also supports multi-person reconstruction and SMPL-X based animation.
"3D guidance" for SHHQ Dataset | multi-person reconstruction w/ occlusion |
"All-in-One" Blender add-on | SMPL-X based Animation (Instruction) |
- [2023/08/19] We released TeCH, which extends ECON with full texture support.
- [2023/06/01] Lee Kwan Joong updates a Blender Addon (GitHub, Tutorial).
- [2023/04/16] is ready to use!
- [2023/02/27] ECON got accepted by CVPR 2023 as Highlight (top 10%)!
- [2023/01/12] Carlos Barreto creates a Blender Addon (Download, Tutorial).
- [2023/01/08] Teddy Huang creates install-with-docker for ECON.
- [2023/01/06] Justin John and Carlos Barreto create install-on-windows for ECON.
- [2022/12/22] The Google Colab demo is now available, created by Aron Arzoomand.
- [2022/12/15] Both demo and arXiv are available.
d-BiNI jointly optimizes front-back 2.5D surfaces such that: (1) high-frequency surface details agree with normal maps, (2) low-frequency surface variations, including discontinuities, align with SMPL-X surfaces, and (3) front-back 2.5D surface silhouettes are coherent with each other.
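In schematic form (this is only a rough sketch; the loss names, the depth notation, and the weights $\lambda_{d}, \lambda_{s}$ are placeholders rather than the paper's exact formulation), the joint optimization reads:

$$
\min_{z^{F},\, z^{B}}\;
\underbrace{\mathcal{L}_{n}\!\left(z^{F}; \hat{N}^{F}\right) + \mathcal{L}_{n}\!\left(z^{B}; \hat{N}^{B}\right)}_{\text{(1) match the predicted normal maps}}
\;+\; \lambda_{d}\,\underbrace{\mathcal{L}_{d}\!\left(z^{F}, z^{B}; Z^{\text{SMPL-X}}\right)}_{\text{(2) follow the SMPL-X depth prior}}
\;+\; \lambda_{s}\,\underbrace{\mathcal{L}_{s}\!\left(z^{F}, z^{B}\right)}_{\text{(3) coherent front-back silhouettes}}
$$

where $z^{F}$ and $z^{B}$ are the front and back 2.5D depth maps, $\hat{N}^{F}, \hat{N}^{B}$ the predicted clothed-body normal maps, and $Z^{\text{SMPL-X}}$ the depth rendered from the SMPL-X body mesh.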
Front-view, back-view, and side-view results of d-BiNI.
Please consider citing BiNI if it also helps your project:
@inproceedings{cao2022bilateral,
title={Bilateral normal integration},
author={Cao, Xu and Santo, Hiroaki and Shi, Boxin and Okura, Fumio and Matsushita, Yasuyuki},
booktitle={Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part I},
pages={552--567},
year={2022},
organization={Springer}
}
Table of Contents
- See installation doc for Docker to run a Docker container with the pre-built image for the ECON demo
- See installation doc for Windows to install all the required packages and set up the models on Windows
- See installation doc for Ubuntu to install all the required packages and set up the models on Ubuntu
- See magic tricks for a few technical tricks to further improve and accelerate ECON
- See testing to prepare the test data and evaluate ECON
# For single-person image-based reconstruction (w/ all visualization steps, 1.8min)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results
# For multi-person image-based reconstruction (see ./configs/econ.yaml)
python -m apps.infer -cfg ./configs/econ.yaml -in_dir ./examples -out_dir ./results -multi
# To generate the demo video of reconstruction results
python -m apps.multi_render -n <file_name>
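If you want to post-process the reconstructions programmatically rather than render a video, the snippet below is a minimal sketch: it just walks the `-out_dir` given above and loads every mesh it finds. The folder layout and the `trimesh` dependency are assumptions, not part of ECON's CLI.

```python
# Minimal sketch (assumptions: meshes are exported as .obj somewhere under -out_dir,
# and trimesh is installed). Walks ./results and prints basic stats per mesh.
from pathlib import Path

import trimesh

out_dir = Path("./results")  # the -out_dir used in the commands above
for obj_path in sorted(out_dir.rglob("*.obj")):
    mesh = trimesh.load(obj_path, force="mesh", process=False)
    print(f"{obj_path}: {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
```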
Animation with SMPL-X sequences (ECON + HybrIK-X)
# 1. Use HybrIK-X to estimate SMPL-X pose sequences from input video
# 2. Rig ECON's reconstructed mesh to be compatible with SMPL-X's parametrization (use -dress for dresses/skirts).
# 3. Animate with SMPL-X pose sequences obtained from HybrIK-X, getting <file_name>_motion.npz
# 4. Render the frames with Blender (rgb: partial texture, normal: normal colors) and combine them into the final video
python -m apps.avatarizer -n <file_name>
python -m apps.animation -n <file_name> -m <motion_name>
# Note: to install missing python packages into Blender
# blender -b --python-expr "__import__('pip._internal')._internal.main(['install', 'moviepy'])"
wget https://download.is.tue.mpg.de/icon/econ_empty.blend
blender -b --python apps/blender_dance.py -- normal <file_name> 10 > /tmp/NULL
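Step 3 above writes <file_name>_motion.npz. Before the (slow) Blender render it can be worth dumping its contents as a sanity check; the sketch below makes no assumption about the key names and simply prints every stored array.

```python
# Print every array stored in the motion file produced by apps.animation.
# Usage: python inspect_motion.py <file_name>_motion.npz  (script name is hypothetical)
import sys

import numpy as np

motion = np.load(sys.argv[1])
for key in motion.files:
    arr = motion[key]
    print(f"{key}: shape={arr.shape}, dtype={arr.dtype}")
```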
Please consider citing HybrIK-X if it also helps your project:
@article{li2023hybrik,
title={HybrIK-X: Hybrid Analytical-Neural Inverse Kinematics for Whole-body Mesh Recovery},
author={Li, Jiefeng and Bian, Siyuan and Xu, Chao and Chen, Zhicun and Yang, Lixin and Lu, Cewu},
journal={arXiv preprint arXiv:2304.05690},
year={2023}
}
We also provide a UI for testing our method, built with Gradio. This demo also supports pose- & prompt-guided human image generation! Run the following command in a terminal to launch the demo:
git checkout main
python app.py
This demo is also hosted on HuggingFace Space
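For reference, the Gradio part of such a demo boils down to very little code. The skeleton below is purely illustrative and is not the repo's app.py; `reconstruct` is a hypothetical wrapper that would need to call into `apps.infer`.

```python
# Illustrative Gradio skeleton (NOT the actual app.py): upload an image,
# run a hypothetical reconstruction wrapper, preview the returned mesh file.
import gradio as gr


def reconstruct(image_path: str) -> str:
    # Placeholder: run ECON on image_path and return the path of the exported mesh.
    raise NotImplementedError("wire this up to apps.infer")


demo = gr.Interface(
    fn=reconstruct,
    inputs=gr.Image(type="filepath", label="Input photo"),
    outputs=gr.Model3D(label="Reconstructed mesh"),
    title="ECON demo (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```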
Please first follow TEXTure's installation instructions to set up its environment.
# generate required UV atlas
python -m apps.avatarizer -n <file_name> -uv
# generate new texture using TEXTure
git clone https://github.com/YuliangXiu/TEXTure
cd TEXTure
ln -s ../ECON/results/econ/cache
python -m scripts.run_texture --config_path=configs/text_guided/avatar.yaml
Then check ./experiments/<file_name>/mesh for the results.
Please consider citing TEXTure if it also helps your project:
@article{richardson2023texture,
title={{TEXTure: Text-Guided Texturing of 3D Shapes}},
author={Richardson, Elad and Metzer, Gal and Alaluf, Yuval and Giryes, Raja and Cohen-Or, Daniel},
journal={ACM Transactions on Graphics (TOG)},
publisher={ACM New York, NY, USA},
year={2023}
}
Please check out our new paper, TeCH: Text-guided Reconstruction of Lifelike Clothed Humans (Page, Code)
Please consider citing TeCH if it also helps your project:
@inproceedings{huang2024tech,
title={{TeCH: Text-guided Reconstruction of Lifelike Clothed Humans}},
author={Huang, Yangyi and Yi, Hongwei and Xiu, Yuliang and Liao, Tingting and Tang, Jiaxiang and Cai, Deng and Thies, Justus},
booktitle={International Conference on 3D Vision (3DV)},
year={2024}
}
More qualitative results: challenging poses and loose clothes.
@inproceedings{xiu2023econ,
title = {{ECON: Explicit Clothed humans Optimized via Normal integration}},
author = {Xiu, Yuliang and Yang, Jinlong and Cao, Xu and Tzionas, Dimitrios and Black, Michael J.},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2023},
}
We thank Lea Hering and Radek Daněček for proofreading, Yao Feng, Haven Feng, and Weiyang Liu for their feedback and discussions, and Tsvetelina Alexiadis for her help with the AMT perceptual study.
Here are some great resources we benefit from:
- ICON for SMPL-X Body Fitting
- BiNI for Bilateral Normal Integration
- MonoPortDataset for Data Processing, MonoPort for fast implicit surface query
- rembg for Human Segmentation
- MediaPipe for full-body landmark estimation
- PyTorch-NICP for non-rigid registration
- smplx, PyMAF-X, PIXIE for Human Pose & Shape Estimation
- CAPE and THuman for Dataset
- PyTorch3D for Differentiable Rendering
Some images used in the qualitative examples come from pinterest.com.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No.860768 (CLIPE Project).
Kudos to all of our amazing contributors! ECON thrives through open-source. In that spirit, we welcome all kinds of contributions from the community.
Contributor avatars are randomly shuffled.
This code and model are available for non-commercial scientific research purposes as defined in the LICENSE file. By downloading and using the code and model you agree to the terms in the LICENSE.
MJB has received research gift funds from Adobe, Intel, Nvidia, Meta/Facebook, and Amazon. MJB has financial interests in Amazon, Datagen Technologies, and Meshcapade GmbH. While MJB is a part-time employee of Meshcapade, his research was performed solely at, and funded solely by, the Max Planck Society.
For technical questions, please contact [email protected]
For commercial licensing, please contact [email protected]