🔥 Thanks for the wonderful work! Visit the official code and results on the project homepage 🔥
**2024/07/10**: 💪 Added support for animal faces. More results in `./assets/docs/`.
This repo is forked from LivePortrait, the official PyTorch implementation of the paper *LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control* (https://github.com/KwaiVGI/LivePortrait). To adapt it to both human and animal faces, I unified the keypoint detection module using X-Pose (https://github.com/IDEA-Research/X-Pose).
```bash
git clone https://github.com/ShiJiaying/LivePortrait.git
cd LivePortrait

# create env using conda
conda create -n LivePortrait python=3.10
conda activate LivePortrait

# install dependencies with pip
pip install -r requirements.txt

# compile the CUDA operators for X-Pose
cd XPose/models/UniPose/ops
python setup.py build install
```
Download the pretrained weights from HuggingFace:

```bash
# you may need to run `git lfs install` first
git clone https://huggingface.co/KwaiVGI/liveportrait pretrained_weights
```
Or, download all pretrained weights from Google Drive or Baidu Yun; we have packed them into a single archive 😊. Unzip and place them in `./pretrained_weights`, ensuring the directory structure is as follows:
```text
pretrained_weights
├── insightface
│   └── models
│       └── buffalo_l
│           ├── 2d106det.onnx
│           └── det_10g.onnx
└── liveportrait
    ├── base_models
    │   ├── appearance_feature_extractor.pth
    │   ├── motion_extractor.pth
    │   ├── spade_generator.pth
    │   └── warping_module.pth
    ├── landmark.onnx
    └── retargeting_models
        └── stitching_retargeting_module.pth
```
Then download the X-Pose weight from Google Drive and place it in `./XPose`:

```text
XPose
└── unipose_swint.pth
```
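To catch missing files before running inference, a small sanity check like the following can verify both layouts above (this checker is my own addition, not part of the repo):

```python
from pathlib import Path

# relative paths mirroring the directory trees above
EXPECTED_FILES = [
    "pretrained_weights/insightface/models/buffalo_l/2d106det.onnx",
    "pretrained_weights/insightface/models/buffalo_l/det_10g.onnx",
    "pretrained_weights/liveportrait/base_models/appearance_feature_extractor.pth",
    "pretrained_weights/liveportrait/base_models/motion_extractor.pth",
    "pretrained_weights/liveportrait/base_models/spade_generator.pth",
    "pretrained_weights/liveportrait/base_models/warping_module.pth",
    "pretrained_weights/liveportrait/landmark.onnx",
    "pretrained_weights/liveportrait/retargeting_models/stitching_retargeting_module.pth",
    "XPose/unipose_swint.pth",
]

def missing_weights(root: str) -> list[str]:
    """Return the expected weight files that are absent under `root`."""
    base = Path(root)
    return [p for p in EXPECTED_FILES if not (base / p).is_file()]
```

Run it from the repo root with `missing_weights(".")`; an empty list means everything is in place.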
```bash
python inference.py
```
📕 Change `source_image_type` in `src/config/argument_config.py`:

```python
source_image_type: str = 'animal'  # 'animal' or 'human'
```
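For context, this option is a plain field in the config file; here is a minimal, illustrative sketch (only `source_image_type` and its default come from the repo, the surrounding dataclass is my simplification):

```python
from dataclasses import dataclass

@dataclass
class ArgumentConfig:
    # illustrative subset of src/config/argument_config.py
    source_image_type: str = 'animal'  # 'animal' or 'human'

# switch to human faces by editing the default, or by overriding at construction:
cfg = ArgumentConfig(source_image_type='human')
```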
If the script runs successfully, you will get an output MP4 file named `animations/s6--d0_concat.mp4`, which concatenates the driving video, the input image, and the generated result. You can also change the inputs by specifying the `-s` and `-d` arguments:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4

# disable pasting back to run faster
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback

# see more options
python inference.py -h
```
📕 To use your own driving video, we recommend:
- Crop it to a 1:1 aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping with `--flag_crop_driving_video`.
- Focus on the head area, similar to the example videos.
- Minimize shoulder movement.
- Make sure the first frame of the driving video is a frontal face with a neutral expression.
Below is an auto-cropping case using `--flag_crop_driving_video`:

```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
```
If the auto-cropping results are not satisfactory, you can adjust the scale and vertical offset with the `--scale_crop_video` and `--vy_ratio_crop_video` options, or crop the video manually.
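To illustrate what a scale and vertical-offset pair controls, here is a simplified sketch of 1:1 crop-box arithmetic (my own illustration; the repo's actual cropping logic may differ):

```python
def square_crop_box(width: int, height: int, scale: float = 1.0, vy_ratio: float = 0.0):
    """Compute a 1:1 crop box (left, top, right, bottom) centered on the frame.

    scale    -- size of the box relative to the frame's short side
    vy_ratio -- vertical shift as a fraction of the box's side length
                (positive moves the box down, matching image coordinates)
    """
    side = min(int(min(width, height) * scale), width, height)
    cx = width // 2
    cy = height // 2 + int(side * vy_ratio)
    # clamp the box to the image bounds
    left = max(0, min(cx - side // 2, width - side))
    top = max(0, min(cy - side // 2, height - side))
    return left, top, left + side, top + side
```

A smaller `scale` zooms in tighter on the center, and a positive `vy_ratio` slides the crop window downward.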
You can also use the auto-generated `.pkl` file to speed up inference and protect privacy, e.g.:

```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl
```
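The `.pkl` acts as a cached motion template: the expensive video decoding and motion extraction happen once, and later runs reuse the result (and you never have to share the raw driving video). The general caching pattern looks like this (the data layout is invented for illustration; the repo's actual `.pkl` contents differ):

```python
import pickle
from pathlib import Path

def load_or_compute_motion(video_path: str, compute_fn):
    """Load a cached motion template stored next to the video, or compute and cache it."""
    cache = Path(video_path).with_suffix('.pkl')
    if cache.is_file():
        with open(cache, 'rb') as f:
            return pickle.load(f)
    motion = compute_fn(video_path)  # expensive: decode video, extract motion
    with open(cache, 'wb') as f:
        pickle.dump(motion, f)
    return motion
```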
Discover more interesting results on our Homepage 😊
We also provide a Gradio interface for a better experience; just run:

```bash
python app.py
```
You can specify the `--server_port`, `--share`, and `--server_name` arguments to suit your needs!
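These behave like ordinary CLI flags; here is a minimal argparse sketch of how such options are typically wired (the defaults are common Gradio conventions, not necessarily the repo's):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Gradio app options (illustrative)")
    parser.add_argument("--server_port", type=int, default=7860,
                        help="port for the Gradio server")
    parser.add_argument("--server_name", type=str, default="127.0.0.1",
                        help="host/interface to bind")
    parser.add_argument("--share", action="store_true",
                        help="create a public Gradio share link")
    return parser
```

The parsed values are then usually forwarded to Gradio's `demo.launch(server_name=..., server_port=..., share=...)`.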
Or, try it out effortlessly on HuggingFace 🤗
We have also provided a script to evaluate the inference speed of each module:

```bash
python speed.py
```
Below are the results of inferring one frame on an RTX 4090 GPU using the native PyTorch framework with `torch.compile`:
| Model | Parameters (M) | Model Size (MB) | Inference (ms) |
|---|---|---|---|
| Appearance Feature Extractor | 0.84 | 3.3 | 0.82 |
| Motion Extractor | 28.12 | 108 | 0.84 |
| Spade Generator | 55.37 | 212 | 7.59 |
| Warping Module | 45.53 | 174 | 5.21 |
| Stitching and Retargeting Modules | 0.23 | 2.3 | 0.31 |
Note: The values for the Stitching and Retargeting Modules represent the combined parameter counts and total inference time of three sequential MLP networks.
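Summing the per-module times gives a rough per-frame budget. As a back-of-the-envelope check (assuming the modules run strictly sequentially with no other overhead, which a real pipeline will not quite match):

```python
# per-frame inference times from the table above, in milliseconds
times_ms = {
    "appearance_feature_extractor": 0.82,
    "motion_extractor": 0.84,
    "spade_generator": 7.59,
    "warping_module": 5.21,
    "stitching_and_retargeting": 0.31,
}

total_ms = sum(times_ms.values())  # about 14.77 ms per frame
fps = 1000.0 / total_ms            # roughly 67-68 frames per second
```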
Discover the invaluable resources contributed by our community to enhance your LivePortrait experience:
- ComfyUI-LivePortraitKJ by @kijai
- comfyui-liveportrait by @shadowcz007
- LivePortrait hands-on tutorial by @AI Search
- ComfyUI tutorial by @Sebastian Kamph
- LivePortrait In ComfyUI by @Benji
- Replicate Playground and cog-comfyui by @fofr
And many more amazing contributions from our community!
We would like to thank the contributors of the FOMM, Open Facevid2vid, SPADE, and InsightFace repositories for their open research and contributions.
If you find LivePortrait useful for your research, please 🌟 this repo and cite our work using the following BibTeX:
```bibtex
@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}
```