
calculate_weight() got an unexpected keyword argument 'intermediate_dtype' #5766

Open
wangyizhi1213 opened this issue Nov 25, 2024 · 0 comments
Labels: User Support (A user needs help with something, probably not a bug.)

Comments

@wangyizhi1213

Your question

When I try to use FLUX, the KSampler node throws an error.
[Screenshot attached: Snipaste_2024-11-25_13-45-55]

Logs

2024-11-25 12:45:28,952 - root - ERROR - * SaveImage 139:
2024-11-25 12:45:28,952 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:28,952 - root - ERROR - Output will be ignored
2024-11-25 12:45:28,952 - root - ERROR - Failed to validate prompt for output 134:
2024-11-25 12:45:28,952 - root - ERROR - * (prompt):
2024-11-25 12:45:28,952 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:28,952 - root - ERROR - * SaveImage 134:
2024-11-25 12:45:28,952 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:28,952 - root - ERROR - Output will be ignored
2024-11-25 12:45:28,952 - root - ERROR - Failed to validate prompt for output 136:
2024-11-25 12:45:28,952 - root - ERROR - * (prompt):
2024-11-25 12:45:28,952 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:28,952 - root - ERROR - * SaveImage 136:
2024-11-25 12:45:28,952 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:28,952 - root - ERROR - Output will be ignored
2024-11-25 12:45:28,952 - root - ERROR - Failed to validate prompt for output 133:
2024-11-25 12:45:28,952 - root - ERROR - * (prompt):
2024-11-25 12:45:28,952 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:28,952 - root - ERROR - * SaveImage 133:
2024-11-25 12:45:28,952 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:28,952 - root - ERROR - Output will be ignored
2024-11-25 12:45:29,767 - root - INFO - Prompt executed in 0.81 seconds
2024-11-25 12:45:38,340 - root - INFO - got prompt
2024-11-25 12:45:38,341 - root - ERROR - Failed to validate prompt for output 135:
2024-11-25 12:45:38,341 - root - ERROR - * (prompt):
2024-11-25 12:45:38,341 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,341 - root - ERROR - * SaveImage 135:
2024-11-25 12:45:38,342 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,342 - root - ERROR - Output will be ignored
2024-11-25 12:45:38,342 - root - ERROR - Failed to validate prompt for output 138:
2024-11-25 12:45:38,342 - root - ERROR - * (prompt):
2024-11-25 12:45:38,342 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,342 - root - ERROR - * SaveImage 138:
2024-11-25 12:45:38,342 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,342 - root - ERROR - Output will be ignored
2024-11-25 12:45:38,390 - root - ERROR - Failed to validate prompt for output 137:
2024-11-25 12:45:38,391 - root - ERROR - * (prompt):
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - * SaveImage 137:
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - Output will be ignored
2024-11-25 12:45:38,391 - root - ERROR - Failed to validate prompt for output 139:
2024-11-25 12:45:38,391 - root - ERROR - * (prompt):
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - * SaveImage 139:
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - Output will be ignored
2024-11-25 12:45:38,391 - root - ERROR - Failed to validate prompt for output 134:
2024-11-25 12:45:38,391 - root - ERROR - * (prompt):
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - * SaveImage 134:
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - Output will be ignored
2024-11-25 12:45:38,391 - root - ERROR - Failed to validate prompt for output 136:
2024-11-25 12:45:38,391 - root - ERROR - * (prompt):
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - * SaveImage 136:
2024-11-25 12:45:38,391 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,391 - root - ERROR - Output will be ignored
2024-11-25 12:45:38,392 - root - ERROR - Failed to validate prompt for output 133:
2024-11-25 12:45:38,392 - root - ERROR - * (prompt):
2024-11-25 12:45:38,392 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,392 - root - ERROR - * SaveImage 133:
2024-11-25 12:45:38,392 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:38,392 - root - ERROR - Output will be ignored
2024-11-25 12:45:39,192 - root - INFO - Prompt executed in 0.80 seconds
2024-11-25 12:45:54,707 - root - INFO - got prompt
2024-11-25 12:45:54,755 - root - ERROR - Failed to validate prompt for output 135:
2024-11-25 12:45:54,755 - root - ERROR - * CheckpointLoaderSimple 56:
2024-11-25 12:45:54,755 - root - ERROR -   - Value not in list: ckpt_name: 'FLUX\flux1-dev-fp8-with_clip_vae.safetensors' not in (list of length 57)
2024-11-25 12:45:54,755 - root - ERROR - * VAELoader 39:
2024-11-25 12:45:54,755 - root - ERROR -   - Value not in list: vae_name: 'SD15\vaeFtMse840000Ema_v10.safetensors' not in ['FLUX1\\ae.safetensors', 'animevae.pt', 'chilloutmix_NiPrunedFp32Fix (1).safetensors', 'sdxl_vae.safetensors', 'sdxl_vae_fp16fix.safetensors', 'sdxl_vaefp16.safetensors', 'vae-ft-mse-840000-ema-pruned.safetensors', 'taesd', 'taesdxl', 'taesd3', 'taef1']
2024-11-25 12:45:54,755 - root - ERROR - * CheckpointLoaderSimple 30:
2024-11-25 12:45:54,756 - root - ERROR -   - Value not in list: ckpt_name: 'SD15\majicmixRealistic_v7.safetensors' not in (list of length 57)
2024-11-25 12:45:54,756 - root - ERROR - * LoadAndApplyICLightUnet 45:
2024-11-25 12:45:54,756 - root - ERROR -   - Value not in list: model_path: 'IC-Light\iclight_sd15_fc_unet_ldm.safetensors' not in ['FLUX1\\flux1-dev-fp8.safetensors', 'FLUX1\\pixelwave_flux1Dev03.safetensors', 'IC-Light\\iclight_sd15_fc.safetensors', 'ic-light\\IC-Light.SD15.FBC.safetensors', 'ic-light\\IC-Light.SD15.FC.safetensors']
2024-11-25 12:45:54,756 - root - ERROR - Output will be ignored
2024-11-25 12:45:54,756 - root - ERROR - Failed to validate prompt for output 138:
2024-11-25 12:45:54,756 - root - ERROR - Output will be ignored
2024-11-25 12:45:54,756 - root - ERROR - Failed to validate prompt for output 137:
2024-11-25 12:45:54,756 - root - ERROR - Output will be ignored
2024-11-25 12:45:54,756 - root - ERROR - Failed to validate prompt for output 139:
2024-11-25 12:45:54,756 - root - ERROR - Output will be ignored
2024-11-25 12:45:54,756 - root - ERROR - Failed to validate prompt for output 134:
2024-11-25 12:45:54,756 - root - ERROR - Output will be ignored
2024-11-25 12:45:54,756 - root - ERROR - Failed to validate prompt for output 136:
2024-11-25 12:45:54,756 - root - ERROR - Output will be ignored
2024-11-25 12:45:54,756 - root - ERROR - Failed to validate prompt for output 133:
2024-11-25 12:45:54,756 - root - ERROR - * (prompt):
2024-11-25 12:45:54,756 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:54,756 - root - ERROR - * SaveImage 133:
2024-11-25 12:45:54,756 - root - ERROR -   - Required input is missing: images
2024-11-25 12:45:54,756 - root - ERROR - Output will be ignored
2024-11-25 12:46:03,025 - root - INFO - Prompt executed in 8.27 seconds
2024-11-25 12:47:25,140 - root - INFO - got prompt
2024-11-25 12:47:25,193 - root - ERROR - Failed to validate prompt for output 133:
2024-11-25 12:47:25,193 - root - ERROR - * (prompt):
2024-11-25 12:47:25,194 - root - ERROR -   - Required input is missing: images
2024-11-25 12:47:25,194 - root - ERROR - * SaveImage 133:
2024-11-25 12:47:25,194 - root - ERROR -   - Required input is missing: images
2024-11-25 12:47:25,194 - root - ERROR - Output will be ignored
2024-11-25 12:47:30,839 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-25 12:47:30,865 - root - INFO - model_type FLUX
2024-11-25 12:48:49,029 - root - INFO - Using xformers attention in VAE
2024-11-25 12:48:49,097 - root - INFO - Using xformers attention in VAE
2024-11-25 12:51:06,650 - root - INFO - Processing interrupted
2024-11-25 12:51:07,847 - root - INFO - Prompt executed in 222.65 seconds
2024-11-25 12:52:34,878 - root - INFO - got prompt
2024-11-25 12:52:34,944 - root - ERROR - Failed to validate prompt for output 133:
2024-11-25 12:52:34,944 - root - ERROR - * (prompt):
2024-11-25 12:52:34,944 - root - ERROR -   - Required input is missing: images
2024-11-25 12:52:34,944 - root - ERROR - * SaveImage 133:
2024-11-25 12:52:34,944 - root - ERROR -   - Required input is missing: images
2024-11-25 12:52:34,944 - root - ERROR - Output will be ignored
2024-11-25 12:52:39,630 - root - INFO - Using xformers attention in VAE
2024-11-25 12:52:39,652 - root - INFO - Using xformers attention in VAE
2024-11-25 12:52:43,521 - root - INFO - Requested to load FluxClipModel_
2024-11-25 12:52:43,521 - root - INFO - Loading 1 new model
2024-11-25 12:52:45,218 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-11-25 12:52:53,453 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-11-25 12:52:54,024 - root - INFO - model_type EPS
2024-11-25 12:53:06,048 - root - INFO - Using xformers attention in VAE
2024-11-25 12:53:06,049 - root - INFO - Using xformers attention in VAE
2024-11-25 12:53:13,776 - root - INFO - Requested to load SD1ClipModel
2024-11-25 12:53:13,776 - root - INFO - Loading 1 new model
2024-11-25 12:53:13,940 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-11-25 12:53:14,073 - root - INFO - Requested to load AutoencoderKL
2024-11-25 12:53:14,073 - root - INFO - Loading 1 new model
2024-11-25 12:53:15,038 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-11-25 12:53:20,836 - root - INFO - Requested to load BaseModel
2024-11-25 12:53:20,836 - root - INFO - Loading 1 new model
2024-11-25 12:53:38,035 - root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320, 8, 3, 3]) != torch.Size([320, 4, 3, 3])
2024-11-25 12:53:38,228 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-11-25 12:53:56,338 - root - INFO - Requested to load AutoencodingEngine
2024-11-25 12:53:56,339 - root - INFO - Loading 1 new model
2024-11-25 12:53:56,985 - root - INFO - loaded completely 0.0 159.87335777282715 True
2024-11-25 12:53:57,419 - root - INFO - Requested to load Flux
2024-11-25 12:53:57,419 - root - INFO - Loading 1 new model
2024-11-25 12:55:47,019 - root - INFO - loaded partially 5872.07349319458 5871.990234375 0
2024-11-25 12:57:16,387 - root - INFO - Requested to load AutoencodingEngine
2024-11-25 12:57:16,387 - root - INFO - Loading 1 new model
2024-11-25 12:57:17,882 - root - INFO - loaded completely 0.0 159.87335777282715 True
2024-11-25 12:57:22,149 - root - INFO - Prompt executed in 287.19 seconds
2024-11-25 13:06:26,341 - root - INFO - got prompt
2024-11-25 13:06:26,418 - root - ERROR - Failed to validate prompt for output 174:
2024-11-25 13:06:26,418 - root - ERROR - * ControlNetLoader 60:
2024-11-25 13:06:26,418 - root - ERROR -   - Value not in list: control_net_name: 'control_v11p_sd15_openpose_fp16.safetensors' not in (list of length 30)
2024-11-25 13:06:26,418 - root - ERROR - Output will be ignored
2024-11-25 13:06:26,418 - root - ERROR - Failed to validate prompt for output 177:
2024-11-25 13:06:26,418 - root - ERROR - * (prompt):
2024-11-25 13:06:26,418 - root - ERROR -   - Required input is missing: images
2024-11-25 13:06:26,418 - root - ERROR - * SaveImage 177:
2024-11-25 13:06:26,418 - root - ERROR -   - Required input is missing: images
2024-11-25 13:06:26,418 - root - ERROR - Output will be ignored
2024-11-25 13:06:26,418 - root - ERROR - Failed to validate prompt for output 175:
2024-11-25 13:06:26,418 - root - ERROR - Output will be ignored
2024-11-25 13:07:00,705 - root - INFO - Prompt executed in 34.28 seconds
2024-11-25 13:07:27,926 - root - INFO - got prompt
2024-11-25 13:07:27,986 - root - ERROR - Failed to validate prompt for output 177:
2024-11-25 13:07:27,986 - root - ERROR - * (prompt):
2024-11-25 13:07:27,986 - root - ERROR -   - Required input is missing: images
2024-11-25 13:07:27,986 - root - ERROR - * SaveImage 177:
2024-11-25 13:07:27,986 - root - ERROR -   - Required input is missing: images
2024-11-25 13:07:27,986 - root - ERROR - Output will be ignored
2024-11-25 13:07:31,495 - root - INFO - model weight dtype torch.float16, manual cast: None
2024-11-25 13:07:31,495 - root - INFO - model_type EPS
2024-11-25 13:07:43,092 - root - INFO - Using xformers attention in VAE
2024-11-25 13:07:43,093 - root - INFO - Using xformers attention in VAE
2024-11-25 13:07:46,385 - root - INFO - Requested to load SD1ClipModel
2024-11-25 13:07:46,385 - root - INFO - Loading 1 new model
2024-11-25 13:07:46,580 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-11-25 13:07:52,800 - root - INFO - Requested to load ControlNet
2024-11-25 13:07:52,800 - root - INFO - Requested to load BaseModel
2024-11-25 13:07:52,800 - root - INFO - Loading 2 new models
2024-11-25 13:07:53,189 - root - INFO - loaded completely 0.0 689.0852355957031 True
2024-11-25 13:07:53,447 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-11-25 13:08:04,107 - root - INFO - Requested to load AutoencoderKL
2024-11-25 13:08:04,107 - root - INFO - Loading 1 new model
2024-11-25 13:08:04,916 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-11-25 13:08:05,650 - root - INFO - Requested to load BaseModel
2024-11-25 13:08:05,651 - root - INFO - Loading 1 new model
2024-11-25 13:08:05,934 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-11-25 13:09:21,382 - root - INFO - Unloading models for lowram load.
2024-11-25 13:09:21,412 - root - INFO - 1 models unloaded.
2024-11-25 13:09:21,412 - root - INFO - Loading 1 new model
2024-11-25 13:09:21,442 - root - INFO - loaded completely 0.0 159.55708122253418 True
2024-11-25 13:09:41,236 - root - INFO - Prompt executed in 133.25 seconds
2024-11-25 13:09:48,837 - root - INFO - got prompt
2024-11-25 13:09:48,895 - root - ERROR - Failed to validate prompt for output 177:
2024-11-25 13:09:48,895 - root - ERROR - * (prompt):
2024-11-25 13:09:48,895 - root - ERROR -   - Required input is missing: images
2024-11-25 13:09:48,895 - root - ERROR - * SaveImage 177:
2024-11-25 13:09:48,896 - root - ERROR -   - Required input is missing: images
2024-11-25 13:09:48,896 - root - ERROR - Output will be ignored
2024-11-25 13:09:48,896 - root - ERROR - Failed to validate prompt for output 175:
2024-11-25 13:09:48,896 - root - ERROR - * (prompt):
2024-11-25 13:09:48,896 - root - ERROR -   - Required input is missing: images
2024-11-25 13:09:48,896 - root - ERROR - * SaveImage 175:
2024-11-25 13:09:48,896 - root - ERROR -   - Required input is missing: images
2024-11-25 13:09:48,896 - root - ERROR - Output will be ignored
2024-11-25 13:09:59,445 - root - INFO - Requested to load SD1ClipModel
2024-11-25 13:09:59,445 - root - INFO - Loading 1 new model
2024-11-25 13:09:59,494 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-11-25 13:09:59,511 - root - INFO - Requested to load ControlNet
2024-11-25 13:09:59,511 - root - INFO - Requested to load BaseModel
2024-11-25 13:09:59,511 - root - INFO - Loading 2 new models
2024-11-25 13:09:59,646 - root - INFO - loaded completely 0.0 689.0852355957031 True
2024-11-25 13:09:59,962 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-11-25 13:10:10,956 - root - INFO - Unloading models for lowram load.
2024-11-25 13:10:10,993 - root - INFO - 0 models unloaded.
2024-11-25 13:10:11,771 - root - INFO - Prompt executed in 22.87 seconds
2024-11-25 13:11:15,059 - root - INFO - got prompt
2024-11-25 13:11:15,082 - root - ERROR - Failed to validate prompt for output 177:
2024-11-25 13:11:15,082 - root - ERROR - * (prompt):
2024-11-25 13:11:15,082 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:15,082 - root - ERROR - * SaveImage 177:
2024-11-25 13:11:15,082 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:15,082 - root - ERROR - Output will be ignored
2024-11-25 13:11:15,082 - root - ERROR - Failed to validate prompt for output 175:
2024-11-25 13:11:15,082 - root - ERROR - * (prompt):
2024-11-25 13:11:15,082 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:15,082 - root - ERROR - * SaveImage 175:
2024-11-25 13:11:15,082 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:15,082 - root - ERROR - Output will be ignored
2024-11-25 13:11:15,210 - root - INFO - Requested to load SD1ClipModel
2024-11-25 13:11:15,210 - root - INFO - Loading 1 new model
2024-11-25 13:11:15,364 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-11-25 13:11:15,530 - root - INFO - Requested to load ControlNet
2024-11-25 13:11:15,530 - root - INFO - Requested to load BaseModel
2024-11-25 13:11:15,530 - root - INFO - Loading 2 new models
2024-11-25 13:11:15,663 - root - INFO - loaded completely 0.0 689.0852355957031 True
2024-11-25 13:11:15,920 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-11-25 13:11:26,786 - root - INFO - loaded partially 38.73265075683594 38.73261833190918 0
2024-11-25 13:11:27,587 - root - INFO - Prompt executed in 12.49 seconds
2024-11-25 13:11:41,037 - root - INFO - got prompt
2024-11-25 13:11:41,064 - root - ERROR - Failed to validate prompt for output 177:
2024-11-25 13:11:41,064 - root - ERROR - * (prompt):
2024-11-25 13:11:41,064 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:41,064 - root - ERROR - * SaveImage 177:
2024-11-25 13:11:41,064 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:41,064 - root - ERROR - Output will be ignored
2024-11-25 13:11:41,064 - root - ERROR - Failed to validate prompt for output 175:
2024-11-25 13:11:41,064 - root - ERROR - * (prompt):
2024-11-25 13:11:41,064 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:41,064 - root - ERROR - * SaveImage 175:
2024-11-25 13:11:41,064 - root - ERROR -   - Required input is missing: images
2024-11-25 13:11:41,064 - root - ERROR - Output will be ignored
2024-11-25 13:11:41,186 - root - INFO - Requested to load SD1ClipModel
2024-11-25 13:11:41,186 - root - INFO - Loading 1 new model
2024-11-25 13:11:41,355 - root - INFO - loaded completely 0.0 235.84423828125 True
2024-11-25 13:11:41,521 - root - INFO - Requested to load ControlNet
2024-11-25 13:11:41,521 - root - INFO - Requested to load BaseModel
2024-11-25 13:11:41,521 - root - INFO - Loading 2 new models
2024-11-25 13:11:41,624 - root - INFO - loaded completely 0.0 689.0852355957031 True
2024-11-25 13:11:41,878 - root - INFO - loaded completely 0.0 1639.406135559082 True
2024-11-25 13:11:52,913 - root - INFO - loaded partially 38.73265075683594 38.73261833190918 0
2024-11-25 13:11:53,702 - root - INFO - Prompt executed in 12.63 seconds
2024-11-25 13:27:03,342 - root - INFO - got prompt
2024-11-25 13:27:03,982 - root - INFO - Using xformers attention in VAE
2024-11-25 13:27:03,984 - root - INFO - Using xformers attention in VAE
2024-11-25 13:27:11,618 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-25 13:27:11,650 - root - INFO - model_type FLUX
2024-11-25 13:33:35,709 - root - WARNING - clip missing: ['text_projection.weight']
2024-11-25 13:33:40,037 - root - INFO - Requested to load FluxClipModel_
2024-11-25 13:33:40,038 - root - INFO - Loading 1 new model
2024-11-25 13:33:42,149 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-11-25 13:33:48,977 - root - INFO - Requested to load Flux
2024-11-25 13:33:48,977 - root - INFO - Loading 1 new model
2024-11-25 13:35:27,440 - root - INFO - loaded partially 5196.476570648194 5196.386779785156 211
2024-11-25 13:35:31,589 - root - ERROR - !!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-25 13:35:32,870 - root - ERROR - Traceback (most recent call last):
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118, in forward_orig
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152, in forward
    img_qkv = self.img_attn.qkv(img_modulated)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295, in forward_comfy_cast_weights
    out = fp8_linear(self, input)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264, in fp8_linear
    w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57, in cast_bias_weight
    weight = s.weight_function(weight)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91, in __call__
    return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'

2024-11-25 13:35:33,110 - root - INFO - Prompt executed in 509.73 seconds
2024-11-25 13:36:23,623 - root - INFO - got prompt
2024-11-25 13:36:38,848 - root - INFO - loaded partially 5472.057758148194 5468.657287597656 0
2024-11-25 13:36:38,872 - root - ERROR - !!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-25 13:36:38,873 - root - ERROR - Traceback (most recent call last):
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118, in forward_orig
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152, in forward
    img_qkv = self.img_attn.qkv(img_modulated)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295, in forward_comfy_cast_weights
    out = fp8_linear(self, input)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264, in fp8_linear
    w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57, in cast_bias_weight
    weight = s.weight_function(weight)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91, in __call__
    return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'

2024-11-25 13:36:38,910 - root - INFO - Prompt executed in 15.19 seconds
2024-11-25 13:37:35,552 - root - INFO - got prompt
2024-11-25 13:37:35,852 - root - INFO - Requested to load FluxClipModel_
2024-11-25 13:37:35,852 - root - INFO - Loading 1 new model
2024-11-25 13:37:36,871 - root - INFO - loaded completely 0.0 4777.53759765625 True
2024-11-25 13:37:37,280 - root - INFO - Requested to load Flux
2024-11-25 13:37:37,281 - root - INFO - Loading 1 new model
2024-11-25 13:37:53,020 - root - INFO - loaded partially 5672.940570648194 5672.660217285156 201
2024-11-25 13:37:53,042 - root - ERROR - !!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-25 13:37:53,043 - root - ERROR - Traceback (most recent call last):
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118, in forward_orig
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152, in forward
    img_qkv = self.img_attn.qkv(img_modulated)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295, in forward_comfy_cast_weights
    out = fp8_linear(self, input)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264, in fp8_linear
    w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57, in cast_bias_weight
    weight = s.weight_function(weight)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91, in __call__
    return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'

2024-11-25 13:37:53,044 - root - INFO - Prompt executed in 17.43 seconds
2024-11-25 13:39:49,858 - aiohttp.server - ERROR - Error handling request
Traceback (most recent call last):
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\web_protocol.py", line 477, in _handle_request
    resp = await request_handler(request)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\web_app.py", line 559, in _handle
    return await handler(request)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\web_middlewares.py", line 117, in impl
    return await handler(request)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\server.py", line 61, in cache_control
    response: web.Response = await handler(request)
  File "<enhanced_experience patches.comfyui.html_resources_patcher>", line 26, in http_resource_injector
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\server.py", line 73, in cors_middleware
    response = await handler(request)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1175, in get_notice
    async with session.get(f"https://{url}{path}") as response:
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client.py", line 1357, in __aenter__
    self._resp: _RetType = await self._coro
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client.py", line 661, in _request
    conn = await self._connector.connect(
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 556, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 1007, in _create_connection
    _, proto = await self._create_proxy_connection(req, traces, timeout)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 1358, in _create_proxy_connection
    proxy_req = ClientRequest(
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client_reqrep.py", line 330, in __init__
    self.update_host(url)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client_reqrep.py", line 409, in update_host
    raise InvalidURL(url)
aiohttp.client_exceptions.InvalidURL: 127.0.0.1:7890
2024-11-25 13:42:47,352 - root - INFO - got prompt
2024-11-25 13:42:47,417 - root - ERROR - Failed to validate prompt for output 22:
2024-11-25 13:42:47,417 - root - ERROR - * KSampler 20:
2024-11-25 13:42:47,417 - root - ERROR -   - Required input is missing: latent_image
2024-11-25 13:42:47,417 - root - ERROR - Output will be ignored
2024-11-25 13:42:47,597 - root - INFO - Prompt executed in 0.18 seconds
2024-11-25 13:43:05,783 - root - INFO - got prompt
2024-11-25 13:43:06,017 - root - INFO - Unloading models for lowram load.
2024-11-25 13:43:06,035 - root - INFO - 0 models unloaded.
2024-11-25 13:43:06,366 - root - ERROR - !!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-25 13:43:06,367 - root - ERROR - Traceback (most recent call last):
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118, in forward_orig
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152, in forward
    img_qkv = self.img_attn.qkv(img_modulated)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295, in forward_comfy_cast_weights
    out = fp8_linear(self, input)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264, in fp8_linear
    w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57, in cast_bias_weight
    weight = s.weight_function(weight)
  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91, in __call__
    return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'

2024-11-25 13:43:06,368 - root - INFO - Prompt executed in 0.52 seconds

Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":33,"last_link_id":29,"nodes":[{"id":10,"type":"UNETLoader","pos":{"0":186.4521026611328,"1":213.6870880126953},"size":{"0":315,"1":82},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[10],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["FLUX1\\pixelwave_flux1Dev03.safetensors","fp8_e4m3fn"]},{"id":11,"type":"DualCLIPLoader","pos":{"0":186.90663146972656,"1":354.5138854980469},"size":{"0":315,"1":106},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[11],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["t5xxl_fp8_e4m3fn.safetensors","clip_l.safetensors","flux"]},{"id":16,"type":"CLIPTextEncode","pos":

Other

ComfyUI Error Report

Error Details

  • Node Type: KSampler
  • Exception Type: TypeError
  • Exception Message: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
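
For context on the exception above: this is the ordinary Python failure raised when a caller passes a keyword argument that the invoked function's signature does not accept. The traceback shows comfy/model_patcher.py calling comfy.lora.calculate_weight(..., intermediate_dtype=weight.dtype), so the calculate_weight that actually runs apparently has an older signature without that parameter (a plausible cause is a custom node or a partially updated install shadowing the newer core function, though the logs alone do not prove which component is stale). The snippet below is a minimal, self-contained sketch of that kind of mismatch; the names echo the traceback, but the bodies are placeholders, not ComfyUI's real code.

```python
# Hypothetical sketch of a keyword-argument mismatch like the one in the traceback.
# The names echo ComfyUI's traceback, but the bodies are placeholders, not real code.

def calculate_weight_old(patches, weight, key):
    """Older-style signature: no 'intermediate_dtype' parameter."""
    return weight

def calculate_weight_new(patches, weight, key, intermediate_dtype=None):
    """Newer-style signature: accepts 'intermediate_dtype'."""
    return weight

class FakeWeight:
    """Stand-in for a tensor; only the .dtype attribute matters here."""
    dtype = "float8_e4m3fn"

def weight_function(weight, patches, key, calculate_weight):
    # Mirrors the call site in comfy/model_patcher.py: the kwarg is always passed.
    return calculate_weight(patches, weight, key, intermediate_dtype=weight.dtype)

w = FakeWeight()

# A newer implementation accepts the kwarg and runs fine.
weight_function(w, [], "img_attn.qkv.weight", calculate_weight_new)

# An older implementation rejects it with exactly this kind of TypeError.
try:
    weight_function(w, [], "img_attn.qkv.weight", calculate_weight_old)
except TypeError as e:
    print(e)  # calculate_weight_old() got an unexpected keyword argument 'intermediate_dtype'
```

If that reading is correct, the usual remedy would be updating ComfyUI core and all custom nodes together so that every wrapper of calculate_weight agrees on the current signature; this is an assumption drawn from the traceback, not something confirmed in the report itself.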

Stack Trace

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
    return orig_comfy_sample(model, noise, *args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
    return orig_comfy_sample(model, *args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
    return orig_comfy_sample(model, *args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682, in __call__
    return self.predict_noise(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228, in calc_cond_batch
    output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
    return orig_apply_model(self, *args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159, in forward
    out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118, in forward_orig
    img, txt = block(img=img, txt=txt, vec=vec, pe=pe)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152, in forward
    img_qkv = self.img_attn.qkv(img_modulated)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76, in forward
    return self.forward_comfy_cast_weights(*args, **kwargs)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295, in forward_comfy_cast_weights
    out = fp8_linear(self, input)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264, in fp8_linear
    w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57, in cast_bias_weight
    weight = s.weight_function(weight)

  File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91, in __call__
    return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
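
The trace above ends with `comfy/model_patcher.py` calling `comfy.lora.calculate_weight(..., intermediate_dtype=weight.dtype)`, while the `calculate_weight` that actually gets imported does not accept that keyword. In plain Python terms this is an ordinary signature mismatch; a minimal sketch is below (the function and argument names are illustrative only, not ComfyUI's real code, and the "older vs. newer version" framing is an assumption drawn from the trace, not confirmed in this report):

```python
# Illustrative sketch of the failure mode in the trace: a caller passes a keyword
# that the function it resolves to does not declare, so Python raises TypeError.

def calculate_weight_old(patches, weight, key):
    # Stand-in for an older signature that has no 'intermediate_dtype' parameter.
    return weight

def patched_caller(weight):
    # Stand-in for the newer call site, which forwards the extra keyword
    # the same way model_patcher.py does in the trace above.
    return calculate_weight_old([], weight, "some.key", intermediate_dtype="float32")

try:
    patched_caller(weight=1.0)
except TypeError as e:
    # Prints: calculate_weight_old() got an unexpected keyword argument 'intermediate_dtype'
    print(e)
```

If that reading is right, the core files and at least one custom node (or a partially updated install) are out of sync, with one side still providing the older `calculate_weight` signature; that is a hypothesis based on the trace, not something this report verifies.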

System Information

  • ComfyUI Version: v0.2.2
  • Arguments: F:\ComfyUI-aki\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc --fast
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.4.0+cu124

Devices

  • Name: cuda:0 NVIDIA GeForce RTX 4070 Ti : cudaMallocAsync
    • Type: cuda
    • VRAM Total: 12878086144
    • VRAM Free: 1975248112
    • Torch VRAM Total: 8824815616
    • Torch VRAM Free: 123991280
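
For triage, one way to confirm which `calculate_weight` is actually loaded (and whether it accepts `intermediate_dtype`) is to inspect it from the same Python environment ComfyUI runs with. A small check, assuming it is run from the ComfyUI root shown in the paths above:

```python
# Run with ComfyUI's own Python from the ComfyUI root directory.
# Shows which file provides comfy.lora.calculate_weight and whether its
# signature includes 'intermediate_dtype'.
import inspect
import comfy.lora

print(inspect.getfile(comfy.lora.calculate_weight))
print(inspect.signature(comfy.lora.calculate_weight))
```

If the printed signature lacks `intermediate_dtype`, or the file path points somewhere other than the main ComfyUI install, that would support the version-mismatch reading above.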
@wangyizhi1213 added the "User Support" label on Nov 25, 2024.