torch.OutOfMemoryError: CUDA out of memory. #110
Comments
Thanks. I found a clue: when I run 【export MODEL_BASE=data/FastHunyuan
I have tried it. It does not work; I get the same error.
I think one way is to reduce num_frames, or you can try this for running FastHunyuan. Currently, FastMochi cannot support 163 frames on a single 48 GB GPU. We will support a quantized version of FastMochi soon.
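In case it is useful, here is a minimal sketch of the "reduce num_frames" route, calling diffusers directly instead of the demo script. It assumes the data/FastMochi-diffusers folder loads with diffusers' MochiPipeline, that 85 frames (an arbitrary smaller value) fits on the GPU, and it leaves out the demo's custom pcm_linear_quadratic scheduler setup; none of that is confirmed by the maintainers here.

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

# Assumption: FastMochi-diffusers is a standard diffusers-format Mochi
# checkpoint that MochiPipeline can load.
pipe = MochiPipeline.from_pretrained(
    "data/FastMochi-diffusers", torch_dtype=torch.bfloat16
)

# Keep only the active sub-model on the GPU and decode the video latents in
# tiles; both trade some speed for a much lower peak VRAM footprint.
pipe.enable_model_cpu_offload()
pipe.enable_vae_tiling()

frames = pipe(
    prompt="a crowded street at night, neon signs, light rain",  # placeholder prompt
    num_frames=85,           # reduced from 163; raise it until you hit the limit again
    height=480,
    width=848,
    num_inference_steps=8,
    guidance_scale=1.5,
    generator=torch.Generator("cuda").manual_seed(1024),
).frames[0]

export_to_video(frames, "fastmochi_test.mp4", fps=30)
```

If even the reduced run is too large, pipe.enable_sequential_cpu_offload() is the more aggressive (and much slower) variant of the same idea.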
When I run 【python -u gradio_server.py --video-size 544 960 --video-length 129 --infer-steps 50 --flow-reverse --use-cpu-offload】 in HunyuanVideo, it generates the video fine.
But when I run
【python demo/gradio_web_demo.py \
    --model_path data/FastMochi-diffusers \
    --num_frames 163 \
    --height 480 \
    --width 848 \
    --num_inference_steps 8 \
    --guidance_scale 1.5 \
    --seed 1024 \
    --scheduler_type "pcm_linear_quadratic" \
    --linear_threshold 0.1 \
    --linear_range 0.75】
for FastMochi, I can't generate a video. How can I reduce memory here? Thanks!
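When trying smaller settings, it may also help to log how close each run gets to the 48 GB limit, so you can see whether a given num_frames leaves any headroom. The helper below is not part of FastVideo; it only uses standard torch.cuda calls.

```python
import torch

def report_vram(tag: str) -> None:
    """Print free/total device memory and the peak allocation seen so far."""
    free, total = torch.cuda.mem_get_info()
    peak = torch.cuda.max_memory_allocated()
    print(f"[{tag}] free={free / 1e9:.1f} GB / {total / 1e9:.1f} GB, "
          f"peak allocated={peak / 1e9:.1f} GB")

torch.cuda.reset_peak_memory_stats()
report_vram("before generation")
# ... run the pipeline / demo generation here ...
report_vram("after generation")
```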