Inference of Stable Diffusion and Flux in pure C/C++
- Plain C/C++ implementation based on ggml, working in the same way as llama.cpp
- Super lightweight and without external dependencies
- SD1.x, SD2.x, SDXL and SD3/SD3.5 support
  - Note: the VAE in SDXL encounters NaN issues under FP16, but unfortunately ggml_conv_2d only operates under FP16. Hence, a parameter is needed to specify a VAE that has the FP16 NaN issue fixed. You can find it here: SDXL VAE FP16 Fix.
- SD-Turbo and SDXL-Turbo support
- PhotoMaker support
- 16-bit, 32-bit float support
- 2-bit, 3-bit, 4-bit, 5-bit and 8-bit integer quantization support
- Accelerated memory-efficient CPU inference
  - Only requires ~2.3GB when using txt2img with fp16 precision to generate a 512x512 image; enabling Flash Attention reduces this to ~1.8GB.
- AVX, AVX2 and AVX512 support for x86 architectures
- Full CUDA, Metal, Vulkan and SYCL backends for GPU acceleration
- Can load ckpt, safetensors and diffusers models/checkpoints, as well as standalone VAE models
  - No need to convert to .ggml or .gguf anymore!
- Flash Attention for memory usage optimization
- Original txt2img and img2img modes
- Negative prompt
- stable-diffusion-webui style tokenizer (not all the features, only token weighting for now)
- LoRA support, same as stable-diffusion-webui
- Latent Consistency Models support (LCM/LCM-LoRA)
- Faster and memory-efficient latent decoding with TAESD
- Upscale images generated with ESRGAN
- VAE tiling processing to reduce memory usage
- Control Net support with SD 1.5
- Sampling methods
  - Euler A
  - Euler
  - Heun
  - DPM2
  - DPM++ 2M
  - DPM++ 2M v2
  - DPM++ 2S a
  - LCM
- Cross-platform reproducibility (--rng cuda, consistent with the stable-diffusion-webui GPU RNG)
- Embeds generation parameters into the PNG output as a webui-compatible text string
- Supported platforms
  - Linux
  - Mac OS
  - Windows
  - Android (via Termux)

TODO:

- More sampling methods
- Make inference faster
  - The current implementation of ggml_conv_2d is slow and has high memory usage
- Continuing to reduce memory usage (quantizing the weights of ggml_conv_2d)
- Implement Inpainting support
Most users can simply download the prebuilt executable from the latest release. If the prebuilt binaries do not meet your requirements, you can build the project manually.
git clone --recursive https://github.com/leejet/stable-diffusion.cpp
cd stable-diffusion.cpp
- If you have already cloned the repository, you can use the following commands to update it to the latest code.
cd stable-diffusion.cpp
git pull origin master
git submodule init
git submodule update
- download the original weights (.ckpt or .safetensors). For example:
  - Stable Diffusion v1.4 from https://huggingface.co/CompVis/stable-diffusion-v-1-4-original
  - Stable Diffusion v1.5 from https://huggingface.co/runwayml/stable-diffusion-v1-5
  - Stable Diffusion v2.1 from https://huggingface.co/stabilityai/stable-diffusion-2-1
  - Stable Diffusion 3 2B from https://huggingface.co/stabilityai/stable-diffusion-3-medium
curl -L -O https://huggingface.co/CompVis/stable-diffusion-v-1-4-original/resolve/main/sd-v1-4.ckpt
# curl -L -O https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors
# curl -L -O https://huggingface.co/stabilityai/stable-diffusion-2-1/resolve/main/v2-1_768-nonema-pruned.safetensors
# curl -L -O https://huggingface.co/stabilityai/stable-diffusion-3-medium/resolve/main/sd3_medium_incl_clips_t5xxlfp16.safetensors
mkdir build
cd build
cmake ..
cmake --build . --config Release
To build with OpenBLAS:
cmake .. -DGGML_OPENBLAS=ON
cmake --build . --config Release
This provides BLAS acceleration using the CUDA cores of your Nvidia GPU. Make sure to have the CUDA toolkit installed. You can download it from your Linux distro's package manager (e.g. apt install nvidia-cuda-toolkit) or from here: CUDA Toolkit. It is recommended to have at least 4 GB of VRAM.
cmake .. -DSD_CUBLAS=ON
cmake --build . --config Release
This provides BLAS acceleration using the ROCm cores of your AMD GPU. Make sure to have the ROCm toolkit installed.
Windows users: refer to docs/hipBLAS_on_Windows.md for a comprehensive guide.
cmake .. -G "Ninja" -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ -DSD_HIPBLAS=ON -DCMAKE_BUILD_TYPE=Release -DAMDGPU_TARGETS=gfx1100
cmake --build . --config Release
Using Metal makes the computation run on the GPU. There are currently some issues with Metal when performing operations on very large matrices, making it highly inefficient for now. Performance improvements are expected in the near future.
cmake .. -DSD_METAL=ON
cmake --build . --config Release
Install Vulkan SDK from https://www.lunarg.com/vulkan-sdk/.
cmake .. -DSD_VULKAN=ON
cmake --build . --config Release
Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and the Intel® oneAPI Base Toolkit before starting. For more details and steps, refer to the llama.cpp SYCL backend documentation.
# Export relevant ENV variables
source /opt/intel/oneapi/setvars.sh
# Option 1: Use FP32 (recommended for better performance in most cases)
cmake .. -DSD_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx
# Option 2: Use FP16
cmake .. -DSD_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx -DGGML_SYCL_F16=ON
cmake --build . --config Release
Example of txt2img using the SYCL backend:
- download the stable-diffusion model weights, refer to download-weight.
- run:
./bin/sd -m ../models/sd3_medium_incl_clips_t5xxlfp16.safetensors --cfg-scale 5 --steps 30 --sampling-method euler -H 1024 -W 1024 --seed 42 -p "fantasy medieval village world inside a glass sphere , high detail, fantasy, realistic, light effect, hyper detail, volumetric lighting, cinematic, macro, depth of field, blur, red light and clouds from the back, highly detailed epic cinematic concept art cg render made in maya, blender and photoshop, octane render, excellent composition, dynamic dramatic cinematic lighting, aesthetic, very inspirational, world inside a glass sphere by james gurney by artgerm with james jean, joe fenton and tristan eaton by ross tran, fine details, 4k resolution"
Enabling flash attention for the diffusion model reduces memory usage by a varying amount, e.g.:
- flux 768x768 ~600mb
- SD2 768x768 ~1400mb
For most backends, it slows things down, but for cuda it generally speeds it up too. At the moment, it is only supported for some models and some backends (like cpu, cuda/rocm, metal).
Run by adding --diffusion-fa to the arguments and watch for:
[INFO ] stable-diffusion.cpp:312 - Using flash attention in the diffusion model
and the compute buffer shrink in the debug log:
[DEBUG] ggml_extend.hpp:1004 - flux compute buffer size: 650.00 MB(VRAM)
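For example, taking the Flux txt2img command from the examples further below and adding the flag (same model paths as in that example):
./bin/sd --diffusion-model ../models/flux1-dev-q3_k.gguf --vae ../models/ae.sft --clip_l ../models/clip_l.safetensors --t5xxl ../models/t5xxl_fp16.safetensors -p "a lovely cat holding a sign says 'flux.cpp'" --cfg-scale 1.0 --sampling-method euler --diffusion-fa -v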
usage: ./bin/sd [arguments]
arguments:
-h, --help show this help message and exit
-M, --mode [MODEL] run mode (txt2img or img2img or convert, default: txt2img)
-t, --threads N number of threads to use during computation (default: -1)
If threads <= 0, then threads will be set to the number of CPU physical cores
-m, --model [MODEL] path to full model
--diffusion-model path to the standalone diffusion model
--clip_l path to the clip-l text encoder
--clip_g path to the clip-g text encoder
--t5xxl path to the t5xxl text encoder
--vae [VAE] path to vae
--taesd [TAESD_PATH] path to taesd. Using Tiny AutoEncoder for fast decoding (low quality)
--control-net [CONTROL_PATH] path to control net model
--embd-dir [EMBEDDING_PATH] path to embeddings
--stacked-id-embd-dir [DIR] path to PHOTOMAKER stacked id embeddings
--input-id-images-dir [DIR] path to PHOTOMAKER input id images dir
--normalize-input normalize PHOTOMAKER input id images
--upscale-model [ESRGAN_PATH] path to esrgan model. Upscales images after generation; only RealESRGAN_x4plus_anime_6B is supported for now
--upscale-repeats Run the ESRGAN upscaler this many times (default 1)
--type [TYPE] weight type (f32, f16, q4_0, q4_1, q5_0, q5_1, q8_0, q2_k, q3_k, q4_k)
If not specified, the default is the type of the weight file
--lora-model-dir [DIR] lora model directory
-i, --init-img [IMAGE] path to the input image, required by img2img
--control-image [IMAGE] path to image condition, control net
-o, --output OUTPUT path to write result image to (default: ./output.png)
-p, --prompt [PROMPT] the prompt to render
-n, --negative-prompt PROMPT the negative prompt (default: "")
--cfg-scale SCALE unconditional guidance scale (default: 7.0)
--strength STRENGTH strength for noising/unnoising (default: 0.75)
1.0 corresponds to full destruction of information in the init image
--style-ratio STYLE-RATIO strength for keeping input identity (default: 20%)
--control-strength STRENGTH strength to apply Control Net (default: 0.9)
-H, --height H image height, in pixel space (default: 512)
-W, --width W image width, in pixel space (default: 512)
--sampling-method {euler, euler_a, heun, dpm2, dpm++2s_a, dpm++2m, dpm++2mv2, ipndm, ipndm_v, lcm}
sampling method (default: "euler_a")
--steps STEPS number of sample steps (default: 20)
--rng {std_default, cuda} RNG (default: cuda)
-s SEED, --seed SEED RNG seed (default: 42, use random seed for < 0)
-b, --batch-count COUNT number of images to generate
--schedule {discrete, karras, exponential, ays, gits} Denoiser sigma schedule (default: discrete)
--clip-skip N ignore last layers of CLIP network; 1 ignores none, 2 ignores one layer (default: -1)
<= 0 represents unspecified, will be 1 for SD1.x, 2 for SD2.x
--vae-tiling process vae in tiles to reduce memory usage
--vae-on-cpu keep vae in cpu (for low vram)
--clip-on-cpu keep clip in cpu (for low vram)
--diffusion-fa use flash attention in the diffusion model (for low vram)
Might lower quality, since it implies converting k and v to f16.
This might crash if it is not supported by the backend.
--control-net-cpu keep controlnet in cpu (for low vram)
--canny apply canny preprocessor (edge detection)
--color Colors the logging tags according to level
-v, --verbose print extra info
./bin/sd -m ../models/sd-v1-4.ckpt -p "a lovely cat"
# ./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat"
# ./bin/sd -m ../models/sd_xl_base_1.0.safetensors --vae ../models/sdxl_vae-fp16-fix.safetensors -H 1024 -W 1024 -p "a lovely cat" -v
# ./bin/sd -m ../models/sd3_medium_incl_clips_t5xxlfp16.safetensors -H 1024 -W 1024 -p 'a lovely cat holding a sign says \"Stable Diffusion CPP\"' --cfg-scale 4.5 --sampling-method euler -v
# ./bin/sd --diffusion-model ../models/flux1-dev-q3_k.gguf --vae ../models/ae.sft --clip_l ../models/clip_l.safetensors --t5xxl ../models/t5xxl_fp16.safetensors -p "a lovely cat holding a sign says 'flux.cpp'" --cfg-scale 1.0 --sampling-method euler -v
# ./bin/sd -m ..\models\sd3.5_large.safetensors --clip_l ..\models\clip_l.safetensors --clip_g ..\models\clip_g.safetensors --t5xxl ..\models\t5xxl_fp16.safetensors -H 1024 -W 1024 -p 'a lovely cat holding a sign says \"Stable diffusion 3.5 Large\"' --cfg-scale 4.5 --sampling-method euler -v
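Results can be reproduced across platforms by fixing the seed and keeping the default CUDA-compatible RNG; for instance (a sketch using only flags from the usage above):
./bin/sd -m ../models/sd-v1-4.ckpt -p "a lovely cat" --rng cuda -s 42 -b 2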
Using formats of different precisions will yield results of varying quality.
[Comparison images generated at each precision: f32 | f16 | q8_0 | q5_0 | q5_1 | q4_0 | q4_1]
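A checkpoint can also be pre-quantized with the convert mode and the --type flag from the usage section; a minimal sketch (the output filename is just an example):
./bin/sd -M convert -m ../models/v1-5-pruned-emaonly.safetensors -o ../models/v1-5-pruned-emaonly.q8_0.gguf --type q8_0 -v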
./output.png is the image generated from the above txt2img pipeline.
./bin/sd --mode img2img -m ../models/sd-v1-4.ckpt -p "cat with blue eyes" -i ./output.png -o ./img2img_output.png --strength 0.4
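Control Net conditioning with SD 1.5 follows the same pattern (a sketch built from the flags in the usage section; the control net model filename here is hypothetical):
# --canny applies edge detection to the control image before conditioning
./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors --control-net ../models/control_sd15_canny.safetensors --control-image ./input.png --control-strength 0.9 --canny -p "a lovely cat"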
More guides (a few quick command sketches follow this list):

- LoRA
- LCM/LCM-LoRA
- Using PhotoMaker to personalize image generation
- Using ESRGAN to upscale results
- Using TAESD for faster decoding
- Docker
- Quantization and GGUF
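A few quick sketches of these features using flags from the usage section (model filenames are placeholders; the <lora:name:strength> prompt syntax follows the stable-diffusion-webui convention noted in the feature list):
# LoRA: put marblesh.safetensors (placeholder name) in ../models and reference it in the prompt
./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors --lora-model-dir ../models -p "a lovely cat<lora:marblesh:1>"
# TAESD: faster, lower-quality latent decoding
./bin/sd -m ../models/sd-v1-4.ckpt --taesd ../models/taesd.safetensors -p "a lovely cat"
# ESRGAN: upscale the generated image (only RealESRGAN_x4plus_anime_6B is supported for now)
./bin/sd -m ../models/sd-v1-4.ckpt --upscale-model ../models/RealESRGAN_x4plus_anime_6B.pth -p "a lovely cat"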
These projects wrap stable-diffusion.cpp for easier use in other languages/frameworks.
- Golang: seasonjs/stable-diffusion
- C#: DarthAffe/StableDiffusion.NET
- Python: william-murray1204/stable-diffusion-cpp-python
- Rust: newfla/diffusion-rs
These projects use stable-diffusion.cpp as a backend for their image generation.
Thank you to all the people who have already contributed to stable-diffusion.cpp!