Chaojie Mao · Jingfeng Zhang · Yulin Pan · Zeyinzi Jiang · Zhen Han · Yu Liu · Jingren Zhou

Tongyi Lab, Alibaba Group
- [2025.01.06] Release the code and models of ACE++.
- [2025.01.07] Release the demo on HuggingFace.
- [2025.01.16] Release the training code for LoRA.
- [ToDo] Update models.
ACE++ provides a comprehensive toolkit for image editing and generation across a variety of applications. We encourage developers to choose the model that fits their scenario and to fine-tune it on data from that scenario for more stable results.
Portrait-consistent generation that preserves the identity of the reference portrait across new scenes.
Models' scepter_path:
- ModelScope: ms://iic/ACE_Plus@portrait/xxxx.safetensors
- HuggingFace: hf://ali-vilab/ACE_Plus@portrait/xxxx.safetensors
Subject-driven image generation that keeps a specific subject consistent across different scenes.
Models' scepter_path:
- ModelScope: ms://iic/ACE_Plus@subject/xxxx.safetensors
- HuggingFace: hf://ali-vilab/ACE_Plus@subject/xxxx.safetensors
Redraws the masked area of an image while preserving the structural information of the edited region.
Tuning Method | Instruction (example) |
---|---|
LoRA + ACE Data | "By referencing the mask, restore a partial image from the doodle {image} that aligns with the textual explanation: '1 white old owl'." |
Models' scepter_path:
- ModelScope: ms://iic/ACE_Plus@local_editing/xxxx.safetensors
- HuggingFace: hf://ali-vilab/ACE_Plus@local_editing/xxxx.safetensors
Fully fine-tuning a composite model with ACE's data to support various editing and reference-generation tasks through an instructive approach.
The ACE++ model supports a wide range of downstream tasks through simple adaptations. Here are some examples, and we look forward to seeing the community explore even more exciting applications utilizing the ACE++ model.
Download the code using the following command:
```bash
git clone https://github.com/ali-vilab/ACE_plus.git
```
Install the necessary packages with pip:
```bash
cd ACE_plus
pip install -r requirements.txt
```
ACE++ depends on FLUX.1-Fill-dev as its base model, which you can download from [HuggingFace](https://huggingface.co/black-forest-labs/FLUX.1-Fill-dev). To run the inference code or the Gradio demo, the relevant environment variables must point to the model locations. For model preparation, there are two ways to obtain the models:

- Clone to a local path: download the checkpoints yourself and set each environment variable to the corresponding local file path.
- Automatic downloading during runtime: set each environment variable to the model's scepter_path (an ms:// or hf:// URI, as listed for the ACE models above); the checkpoint is then fetched automatically on first use.
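For the clone-to-local option, the variables hold local file paths rather than URIs. A minimal sketch (the paths below are illustrative placeholders, not defaults shipped with the repo):

```bash
# Illustrative local paths -- substitute wherever you cloned the checkpoints.
export FLUX_FILL_PATH="/path/to/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="/path/to/ACE_Plus/portrait/comfyui_portrait_lora64.safetensors"
export SUBJECT_MODEL_PATH="/path/to/ACE_Plus/subject/comfyui_subject_lora16.safetensors"
export LOCAL_MODEL_PATH="/path/to/ACE_Plus/local_editing/comfyui_local_lora16.safetensors"
```

The automatic-download settings for the same variables are shown in the inference commands below.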
With the environment variables from the Installation section set, you can run the provided examples or test your own samples by executing infer.py. The relevant commands are as follows:
```bash
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="ms://iic/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
export SUBJECT_MODEL_PATH="ms://iic/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
export LOCAL_MODEL_PATH="ms://iic/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
# Use the model from huggingface
# export PORTRAIT_MODEL_PATH="hf://ali-vilab/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
# export SUBJECT_MODEL_PATH="hf://ali-vilab/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
# export LOCAL_MODEL_PATH="hf://ali-vilab/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
python infer.py
```
We provide training code so that users can train on their own data. Refer to 'data/train.csv' and 'data/eval.csv' when constructing the training and test data, respectively. Fields are separated by '#;#'. The following six fields are required, explained below (an illustrative row follows the list).
"edit_image": represents the input image for the editing task. If it is not an editing task but a reference generation, this field can be left empty.
"edit_mask": represents the input image mask for the editing task, used to specify the editing area. If it is not an editing task but rather for reference generation, this field can be left empty.
"ref_image": represents the input image for the reference image generation task; if it is a pure editing task, this field can be left empty.
"target_image": represents the generated target image and cannot be empty.
"prompt": represents the prompt for the generation task.
"data_type": represents the type of data, which can be 'portrait', 'subject', or 'local'. This field is not used in training phase.
All parameters related to training are stored in 'train_config/ace_plus_lora.yaml'. To run the training code, execute the following command.
```bash
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
python run_train.py --cfg train_config/ace_plus_lora.yaml
```
The models trained by ACE++ can be found in ./examples/exp_example/xxxx/checkpoints/xxxx/0_SwiftLoRA/comfyui_model.safetensors.
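To try a freshly trained LoRA with the inference script, one option is to point the environment variable for the matching task at the new checkpoint (a sketch assuming a portrait-task LoRA; substitute your actual run directories for 'xxxx'):

```bash
# Illustrative: reuse your own checkpoint in place of the released portrait LoRA.
export PORTRAIT_MODEL_PATH="./examples/exp_example/xxxx/checkpoints/xxxx/0_SwiftLoRA/comfyui_model.safetensors"
python infer.py
```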
We have built a GUI demo based on Gradio to help users better utilize the ACE++ model. Just execute the following commands.
```bash
export FLUX_FILL_PATH="hf://black-forest-labs/FLUX.1-Fill-dev"
export PORTRAIT_MODEL_PATH="ms://iic/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
export SUBJECT_MODEL_PATH="ms://iic/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
export LOCAL_MODEL_PATH="ms://iic/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
# Use the model from huggingface
# export PORTRAIT_MODEL_PATH="hf://ali-vilab/ACE_Plus@portrait/comfyui_portrait_lora64.safetensors"
# export SUBJECT_MODEL_PATH="hf://ali-vilab/ACE_Plus@subject/comfyui_subject_lora16.safetensors"
# export LOCAL_MODEL_PATH="hf://ali-vilab/ACE_Plus@local_editing/comfyui_local_lora16.safetensors"
python demo.py
```
- For certain tasks, such as deleting and adding objects, instruction following is still imperfect. For adding and replacing objects, we recommend trying the repainting approach of the local editing model.
- Generated results may contain artifacts, especially for hands, which still exhibit distortions.
- The current version of ACE++ is still in the development stage. We are working on improving the model's performance and adding more features.
ACE++ is a post-training model based on the FLUX.1-dev series from black-forest-labs; please adhere to its open-source license. The test materials used in ACE++ come from the internet and are intended solely for academic research and communication. If any original creator objects to their material being used, please contact us and we will remove it.
If you use this model in your research, please cite the works of FLUX.1-dev and the following papers:
```bibtex
@article{mao2025ace++,
  title={ACE++: Instruction-Based Image Creation and Editing via Context-Aware Content Filling},
  author={Mao, Chaojie and Zhang, Jingfeng and Pan, Yulin and Jiang, Zeyinzi and Han, Zhen and Liu, Yu and Zhou, Jingren},
  journal={arXiv preprint arXiv:2501.02487},
  year={2025}
}

@article{han2024ace,
  title={ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer},
  author={Han, Zhen and Jiang, Zeyinzi and Pan, Yulin and Zhang, Jingfeng and Mao, Chaojie and Xie, Chenwei and Liu, Yu and Zhou, Jingren},
  journal={arXiv preprint arXiv:2410.00086},
  year={2024}
}
```