## Logs

```
2024-11-2512:45:28,952- root - ERROR -* SaveImage 139:
2024-11-2512:45:28,952- root - ERROR -- Required input is missing: images
2024-11-2512:45:28,952- root - ERROR - Output will be ignored
2024-11-2512:45:28,952- root - ERROR - Failed to validate prompt for output 134:
2024-11-2512:45:28,952- root - ERROR -* (prompt):
2024-11-2512:45:28,952- root - ERROR -- Required input is missing: images
2024-11-2512:45:28,952- root - ERROR -* SaveImage 134:
2024-11-2512:45:28,952- root - ERROR -- Required input is missing: images
2024-11-2512:45:28,952- root - ERROR - Output will be ignored
2024-11-2512:45:28,952- root - ERROR - Failed to validate prompt for output 136:
2024-11-2512:45:28,952- root - ERROR -* (prompt):
2024-11-2512:45:28,952- root - ERROR -- Required input is missing: images
2024-11-2512:45:28,952- root - ERROR -* SaveImage 136:
2024-11-2512:45:28,952- root - ERROR -- Required input is missing: images
2024-11-2512:45:28,952- root - ERROR - Output will be ignored
2024-11-2512:45:28,952- root - ERROR - Failed to validate prompt for output 133:
2024-11-2512:45:28,952- root - ERROR -* (prompt):
2024-11-2512:45:28,952- root - ERROR -- Required input is missing: images
2024-11-2512:45:28,952- root - ERROR -* SaveImage 133:
2024-11-2512:45:28,952- root - ERROR -- Required input is missing: images
2024-11-2512:45:28,952- root - ERROR - Output will be ignored
2024-11-2512:45:29,767- root - INFO - Prompt executed in0.81 seconds
2024-11-2512:45:38,340- root - INFO - got prompt
2024-11-2512:45:38,341- root - ERROR - Failed to validate prompt for output 135:
2024-11-2512:45:38,341- root - ERROR -* (prompt):
2024-11-2512:45:38,341- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,341- root - ERROR -* SaveImage 135:
2024-11-2512:45:38,342- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,342- root - ERROR - Output will be ignored
2024-11-2512:45:38,342- root - ERROR - Failed to validate prompt for output 138:
2024-11-2512:45:38,342- root - ERROR -* (prompt):
2024-11-2512:45:38,342- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,342- root - ERROR -* SaveImage 138:
2024-11-2512:45:38,342- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,342- root - ERROR - Output will be ignored
2024-11-2512:45:38,390- root - ERROR - Failed to validate prompt for output 137:
2024-11-2512:45:38,391- root - ERROR -* (prompt):
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR -* SaveImage 137:
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR - Output will be ignored
2024-11-2512:45:38,391- root - ERROR - Failed to validate prompt for output 139:
2024-11-2512:45:38,391- root - ERROR -* (prompt):
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR -* SaveImage 139:
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR - Output will be ignored
2024-11-2512:45:38,391- root - ERROR - Failed to validate prompt for output 134:
2024-11-2512:45:38,391- root - ERROR -* (prompt):
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR -* SaveImage 134:
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR - Output will be ignored
2024-11-2512:45:38,391- root - ERROR - Failed to validate prompt for output 136:
2024-11-2512:45:38,391- root - ERROR -* (prompt):
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR -* SaveImage 136:
2024-11-2512:45:38,391- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,391- root - ERROR - Output will be ignored
2024-11-2512:45:38,392- root - ERROR - Failed to validate prompt for output 133:
2024-11-2512:45:38,392- root - ERROR -* (prompt):
2024-11-2512:45:38,392- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,392- root - ERROR -* SaveImage 133:
2024-11-2512:45:38,392- root - ERROR -- Required input is missing: images
2024-11-2512:45:38,392- root - ERROR - Output will be ignored
2024-11-2512:45:39,192- root - INFO - Prompt executed in0.80 seconds
2024-11-2512:45:54,707- root - INFO - got prompt
2024-11-2512:45:54,755- root - ERROR - Failed to validate prompt for output 135:
2024-11-2512:45:54,755- root - ERROR -* CheckpointLoaderSimple 56:
2024-11-2512:45:54,755- root - ERROR -- Value not in list: ckpt_name: 'FLUX\flux1-dev-fp8-with_clip_vae.safetensors' not in (list of length 57)
2024-11-2512:45:54,755- root - ERROR -* VAELoader 39:
2024-11-2512:45:54,755- root - ERROR -- Value not in list: vae_name: 'SD15\vaeFtMse840000Ema_v10.safetensors' not in ['FLUX1\\ae.safetensors','animevae.pt','chilloutmix_NiPrunedFp32Fix (1).safetensors','sdxl_vae.safetensors','sdxl_vae_fp16fix.safetensors','sdxl_vaefp16.safetensors','vae-ft-mse-840000-ema-pruned.safetensors','taesd','taesdxl','taesd3','taef1']
2024-11-2512:45:54,755- root - ERROR -* CheckpointLoaderSimple 30:
2024-11-2512:45:54,756- root - ERROR -- Value not in list: ckpt_name: 'SD15\majicmixRealistic_v7.safetensors' not in (list of length 57)
2024-11-2512:45:54,756- root - ERROR -* LoadAndApplyICLightUnet 45:
2024-11-2512:45:54,756- root - ERROR -- Value not in list: model_path: 'IC-Light\iclight_sd15_fc_unet_ldm.safetensors' not in ['FLUX1\\flux1-dev-fp8.safetensors','FLUX1\\pixelwave_flux1Dev03.safetensors','IC-Light\\iclight_sd15_fc.safetensors','ic-light\\IC-Light.SD15.FBC.safetensors','ic-light\\IC-Light.SD15.FC.safetensors']
2024-11-2512:45:54,756- root - ERROR - Output will be ignored
2024-11-2512:45:54,756- root - ERROR - Failed to validate prompt for output 138:
2024-11-2512:45:54,756- root - ERROR - Output will be ignored
2024-11-2512:45:54,756- root - ERROR - Failed to validate prompt for output 137:
2024-11-2512:45:54,756- root - ERROR - Output will be ignored
2024-11-2512:45:54,756- root - ERROR - Failed to validate prompt for output 139:
2024-11-2512:45:54,756- root - ERROR - Output will be ignored
2024-11-2512:45:54,756- root - ERROR - Failed to validate prompt for output 134:
2024-11-2512:45:54,756- root - ERROR - Output will be ignored
2024-11-2512:45:54,756- root - ERROR - Failed to validate prompt for output 136:
2024-11-2512:45:54,756- root - ERROR - Output will be ignored
2024-11-2512:45:54,756- root - ERROR - Failed to validate prompt for output 133:
2024-11-2512:45:54,756- root - ERROR -* (prompt):
2024-11-2512:45:54,756- root - ERROR -- Required input is missing: images
2024-11-2512:45:54,756- root - ERROR -* SaveImage 133:
2024-11-2512:45:54,756- root - ERROR -- Required input is missing: images
2024-11-2512:45:54,756- root - ERROR - Output will be ignored
2024-11-2512:46:03,025- root - INFO - Prompt executed in8.27 seconds
2024-11-2512:47:25,140- root - INFO - got prompt
2024-11-2512:47:25,193- root - ERROR - Failed to validate prompt for output 133:
2024-11-2512:47:25,193- root - ERROR -* (prompt):
2024-11-2512:47:25,194- root - ERROR -- Required input is missing: images
2024-11-2512:47:25,194- root - ERROR -* SaveImage 133:
2024-11-2512:47:25,194- root - ERROR -- Required input is missing: images
2024-11-2512:47:25,194- root - ERROR - Output will be ignored
2024-11-2512:47:30,839- root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-2512:47:30,865- root - INFO - model_type FLUX
2024-11-2512:48:49,029- root - INFO - Using xformers attention in VAE
2024-11-2512:48:49,097- root - INFO - Using xformers attention in VAE
2024-11-2512:51:06,650- root - INFO - Processing interrupted
2024-11-2512:51:07,847- root - INFO - Prompt executed in222.65 seconds
2024-11-2512:52:34,878- root - INFO - got prompt
2024-11-2512:52:34,944- root - ERROR - Failed to validate prompt for output 133:
2024-11-2512:52:34,944- root - ERROR -* (prompt):
2024-11-2512:52:34,944- root - ERROR -- Required input is missing: images
2024-11-2512:52:34,944- root - ERROR -* SaveImage 133:
2024-11-2512:52:34,944- root - ERROR -- Required input is missing: images
2024-11-2512:52:34,944- root - ERROR - Output will be ignored
2024-11-2512:52:39,630- root - INFO - Using xformers attention in VAE
2024-11-2512:52:39,652- root - INFO - Using xformers attention in VAE
2024-11-2512:52:43,521- root - INFO - Requested to load FluxClipModel_
2024-11-2512:52:43,521- root - INFO - Loading 1 new model
2024-11-2512:52:45,218- root - INFO - loaded completely 0.04777.53759765625 True
2024-11-2512:52:53,453- root - INFO - model weight dtype torch.float16, manual cast: None
2024-11-2512:52:54,024- root - INFO - model_type EPS
2024-11-2512:53:06,048- root - INFO - Using xformers attention in VAE
2024-11-2512:53:06,049- root - INFO - Using xformers attention in VAE
2024-11-2512:53:13,776- root - INFO - Requested to load SD1ClipModel
2024-11-2512:53:13,776- root - INFO - Loading 1 new model
2024-11-2512:53:13,940- root - INFO - loaded completely 0.0235.84423828125 True
2024-11-2512:53:14,073- root - INFO - Requested to load AutoencoderKL
2024-11-2512:53:14,073- root - INFO - Loading 1 new model
2024-11-2512:53:15,038- root - INFO - loaded completely 0.0159.55708122253418 True
2024-11-2512:53:20,836- root - INFO - Requested to load BaseModel
2024-11-2512:53:20,836- root - INFO - Loading 1 new model
2024-11-2512:53:38,035- root - WARNING - WARNING SHAPE MISMATCH diffusion_model.input_blocks.0.0.weight WEIGHT NOT MERGED torch.Size([320,8,3,3]) != torch.Size([320,4,3,3])
2024-11-2512:53:38,228- root - INFO - loaded completely 0.01639.406135559082 True
2024-11-2512:53:56,338- root - INFO - Requested to load AutoencodingEngine
2024-11-2512:53:56,339- root - INFO - Loading 1 new model
2024-11-2512:53:56,985- root - INFO - loaded completely 0.0159.87335777282715 True
2024-11-2512:53:57,419- root - INFO - Requested to load Flux
2024-11-2512:53:57,419- root - INFO - Loading 1 new model
2024-11-2512:55:47,019- root - INFO - loaded partially 5872.073493194585871.9902343750
2024-11-2512:57:16,387- root - INFO - Requested to load AutoencodingEngine
2024-11-2512:57:16,387- root - INFO - Loading 1 new model
2024-11-2512:57:17,882- root - INFO - loaded completely 0.0159.87335777282715 True
2024-11-2512:57:22,149- root - INFO - Prompt executed in287.19 seconds
2024-11-2513:06:26,341- root - INFO - got prompt
2024-11-2513:06:26,418- root - ERROR - Failed to validate prompt for output 174:
2024-11-2513:06:26,418- root - ERROR -* ControlNetLoader 60:
2024-11-2513:06:26,418- root - ERROR -- Value not in list: control_net_name: 'control_v11p_sd15_openpose_fp16.safetensors' not in (list of length 30)
2024-11-2513:06:26,418- root - ERROR - Output will be ignored
2024-11-2513:06:26,418- root - ERROR - Failed to validate prompt for output 177:
2024-11-2513:06:26,418- root - ERROR -* (prompt):
2024-11-2513:06:26,418- root - ERROR -- Required input is missing: images
2024-11-2513:06:26,418- root - ERROR -* SaveImage 177:
2024-11-2513:06:26,418- root - ERROR -- Required input is missing: images
2024-11-2513:06:26,418- root - ERROR - Output will be ignored
2024-11-2513:06:26,418- root - ERROR - Failed to validate prompt for output 175:
2024-11-2513:06:26,418- root - ERROR - Output will be ignored
2024-11-2513:07:00,705- root - INFO - Prompt executed in34.28 seconds
2024-11-2513:07:27,926- root - INFO - got prompt
2024-11-2513:07:27,986- root - ERROR - Failed to validate prompt for output 177:
2024-11-2513:07:27,986- root - ERROR -* (prompt):
2024-11-2513:07:27,986- root - ERROR -- Required input is missing: images
2024-11-2513:07:27,986- root - ERROR -* SaveImage 177:
2024-11-2513:07:27,986- root - ERROR -- Required input is missing: images
2024-11-2513:07:27,986- root - ERROR - Output will be ignored
2024-11-2513:07:31,495- root - INFO - model weight dtype torch.float16, manual cast: None
2024-11-2513:07:31,495- root - INFO - model_type EPS
2024-11-2513:07:43,092- root - INFO - Using xformers attention in VAE
2024-11-2513:07:43,093- root - INFO - Using xformers attention in VAE
2024-11-2513:07:46,385- root - INFO - Requested to load SD1ClipModel
2024-11-2513:07:46,385- root - INFO - Loading 1 new model
2024-11-2513:07:46,580- root - INFO - loaded completely 0.0235.84423828125 True
2024-11-2513:07:52,800- root - INFO - Requested to load ControlNet
2024-11-2513:07:52,800- root - INFO - Requested to load BaseModel
2024-11-2513:07:52,800- root - INFO - Loading 2 new models
2024-11-2513:07:53,189- root - INFO - loaded completely 0.0689.0852355957031 True
2024-11-2513:07:53,447- root - INFO - loaded completely 0.01639.406135559082 True
2024-11-2513:08:04,107- root - INFO - Requested to load AutoencoderKL
2024-11-2513:08:04,107- root - INFO - Loading 1 new model
2024-11-2513:08:04,916- root - INFO - loaded completely 0.0159.55708122253418 True
2024-11-2513:08:05,650- root - INFO - Requested to load BaseModel
2024-11-2513:08:05,651- root - INFO - Loading 1 new model
2024-11-2513:08:05,934- root - INFO - loaded completely 0.01639.406135559082 True
2024-11-2513:09:21,382- root - INFO - Unloading models for lowram load.
2024-11-2513:09:21,412- root - INFO -1 models unloaded.
2024-11-2513:09:21,412- root - INFO - Loading 1 new model
2024-11-2513:09:21,442- root - INFO - loaded completely 0.0159.55708122253418 True
2024-11-2513:09:41,236- root - INFO - Prompt executed in133.25 seconds
2024-11-2513:09:48,837- root - INFO - got prompt
2024-11-2513:09:48,895- root - ERROR - Failed to validate prompt for output 177:
2024-11-2513:09:48,895- root - ERROR -* (prompt):
2024-11-2513:09:48,895- root - ERROR -- Required input is missing: images
2024-11-2513:09:48,895- root - ERROR -* SaveImage 177:
2024-11-2513:09:48,896- root - ERROR -- Required input is missing: images
2024-11-2513:09:48,896- root - ERROR - Output will be ignored
2024-11-2513:09:48,896- root - ERROR - Failed to validate prompt for output 175:
2024-11-2513:09:48,896- root - ERROR -* (prompt):
2024-11-2513:09:48,896- root - ERROR -- Required input is missing: images
2024-11-2513:09:48,896- root - ERROR -* SaveImage 175:
2024-11-2513:09:48,896- root - ERROR -- Required input is missing: images
2024-11-2513:09:48,896- root - ERROR - Output will be ignored
2024-11-2513:09:59,445- root - INFO - Requested to load SD1ClipModel
2024-11-2513:09:59,445- root - INFO - Loading 1 new model
2024-11-2513:09:59,494- root - INFO - loaded completely 0.0235.84423828125 True
2024-11-2513:09:59,511- root - INFO - Requested to load ControlNet
2024-11-2513:09:59,511- root - INFO - Requested to load BaseModel
2024-11-2513:09:59,511- root - INFO - Loading 2 new models
2024-11-2513:09:59,646- root - INFO - loaded completely 0.0689.0852355957031 True
2024-11-2513:09:59,962- root - INFO - loaded completely 0.01639.406135559082 True
2024-11-2513:10:10,956- root - INFO - Unloading models for lowram load.
2024-11-2513:10:10,993- root - INFO -0 models unloaded.
2024-11-2513:10:11,771- root - INFO - Prompt executed in22.87 seconds
2024-11-2513:11:15,059- root - INFO - got prompt
2024-11-2513:11:15,082- root - ERROR - Failed to validate prompt for output 177:
2024-11-2513:11:15,082- root - ERROR -* (prompt):
2024-11-2513:11:15,082- root - ERROR -- Required input is missing: images
2024-11-2513:11:15,082- root - ERROR -* SaveImage 177:
2024-11-2513:11:15,082- root - ERROR -- Required input is missing: images
2024-11-2513:11:15,082- root - ERROR - Output will be ignored
2024-11-2513:11:15,082- root - ERROR - Failed to validate prompt for output 175:
2024-11-2513:11:15,082- root - ERROR -* (prompt):
2024-11-2513:11:15,082- root - ERROR -- Required input is missing: images
2024-11-2513:11:15,082- root - ERROR -* SaveImage 175:
2024-11-2513:11:15,082- root - ERROR -- Required input is missing: images
2024-11-2513:11:15,082- root - ERROR - Output will be ignored
2024-11-2513:11:15,210- root - INFO - Requested to load SD1ClipModel
2024-11-2513:11:15,210- root - INFO - Loading 1 new model
2024-11-2513:11:15,364- root - INFO - loaded completely 0.0235.84423828125 True
2024-11-2513:11:15,530- root - INFO - Requested to load ControlNet
2024-11-2513:11:15,530- root - INFO - Requested to load BaseModel
2024-11-2513:11:15,530- root - INFO - Loading 2 new models
2024-11-2513:11:15,663- root - INFO - loaded completely 0.0689.0852355957031 True
2024-11-2513:11:15,920- root - INFO - loaded completely 0.01639.406135559082 True
2024-11-2513:11:26,786- root - INFO - loaded partially 38.7326507568359438.732618331909180
2024-11-2513:11:27,587- root - INFO - Prompt executed in12.49 seconds
2024-11-2513:11:41,037- root - INFO - got prompt
2024-11-2513:11:41,064- root - ERROR - Failed to validate prompt for output 177:
2024-11-2513:11:41,064- root - ERROR -* (prompt):
2024-11-2513:11:41,064- root - ERROR -- Required input is missing: images
2024-11-2513:11:41,064- root - ERROR -* SaveImage 177:
2024-11-2513:11:41,064- root - ERROR -- Required input is missing: images
2024-11-2513:11:41,064- root - ERROR - Output will be ignored
2024-11-2513:11:41,064- root - ERROR - Failed to validate prompt for output 175:
2024-11-2513:11:41,064- root - ERROR -* (prompt):
2024-11-2513:11:41,064- root - ERROR -- Required input is missing: images
2024-11-2513:11:41,064- root - ERROR -* SaveImage 175:
2024-11-2513:11:41,064- root - ERROR -- Required input is missing: images
2024-11-2513:11:41,064- root - ERROR - Output will be ignored
2024-11-2513:11:41,186- root - INFO - Requested to load SD1ClipModel
2024-11-2513:11:41,186- root - INFO - Loading 1 new model
2024-11-2513:11:41,355- root - INFO - loaded completely 0.0235.84423828125 True
2024-11-2513:11:41,521- root - INFO - Requested to load ControlNet
2024-11-2513:11:41,521- root - INFO - Requested to load BaseModel
2024-11-2513:11:41,521- root - INFO - Loading 2 new models
2024-11-2513:11:41,624- root - INFO - loaded completely 0.0689.0852355957031 True
2024-11-2513:11:41,878- root - INFO - loaded completely 0.01639.406135559082 True
2024-11-2513:11:52,913- root - INFO - loaded partially 38.7326507568359438.732618331909180
2024-11-2513:11:53,702- root - INFO - Prompt executed in12.63 seconds
2024-11-2513:27:03,342- root - INFO - got prompt
2024-11-2513:27:03,982- root - INFO - Using xformers attention in VAE
2024-11-2513:27:03,984- root - INFO - Using xformers attention in VAE
2024-11-2513:27:11,618- root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
2024-11-2513:27:11,650- root - INFO - model_type FLUX
2024-11-2513:33:35,709- root - WARNING - clip missing: ['text_projection.weight']
2024-11-2513:33:40,037- root - INFO - Requested to load FluxClipModel_
2024-11-2513:33:40,038- root - INFO - Loading 1 new model
2024-11-2513:33:42,149- root - INFO - loaded completely 0.04777.53759765625 True
2024-11-2513:33:48,977- root - INFO - Requested to load Flux
2024-11-2513:33:48,977- root - INFO - Loading 1 new model
2024-11-2513:35:27,440- root - INFO - loaded partially 5196.4765706481945196.386779785156211
2024-11-2513:35:31,589- root - ERROR -!!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:35:32,870- root - ERROR - Traceback (most recent call last):
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317,in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192,in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169,in _map_node_over_list
process_inputs(input_dict, i)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158,in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429,in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396,in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9,in informative_sample
return original_sample(*args,**kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420,in motion_sample
return orig_comfy_sample(model, noise,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116,in acn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117,in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43,in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829,in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729,in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716,in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695,in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600,in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar,**self.extra_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116,in decorate_context
return func(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144,in sample_euler
denoised = model(x, sigma_hat * s_in,**extra_args)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299,in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682,in __call__
return self.predict_noise(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685,in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279,in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228,in calc_cond_batch
output = model.apply_model(input_x, timestep_,**c).chunk(batch_chunks)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69,in apply_model_uncond_cleanup_wrapper
return orig_apply_model(self,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142,in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options,**extra_conds).float()
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159,in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118,in forward_orig
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152,in forward
img_qkv = self.img_attn.qkv(img_modulated)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76,in forward
return self.forward_comfy_cast_weights(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295,in forward_comfy_cast_weights
out = fp8_linear(self, input)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264,in fp8_linear
w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57,in cast_bias_weight
weight = s.weight_function(weight)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91,in __call__
return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:35:33,110- root - INFO - Prompt executed in509.73 seconds
2024-11-2513:36:23,623- root - INFO - got prompt
2024-11-2513:36:38,848- root - INFO - loaded partially 5472.0577581481945468.6572875976560
2024-11-2513:36:38,872- root - ERROR -!!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:36:38,873- root - ERROR - Traceback (most recent call last):
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317,in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192,in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169,in _map_node_over_list
process_inputs(input_dict, i)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158,in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429,in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396,in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9,in informative_sample
return original_sample(*args,**kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420,in motion_sample
return orig_comfy_sample(model, noise,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116,in acn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117,in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43,in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829,in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729,in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716,in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695,in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600,in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar,**self.extra_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116,in decorate_context
return func(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144,in sample_euler
denoised = model(x, sigma_hat * s_in,**extra_args)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299,in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682,in __call__
return self.predict_noise(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685,in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279,in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228,in calc_cond_batch
output = model.apply_model(input_x, timestep_,**c).chunk(batch_chunks)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69,in apply_model_uncond_cleanup_wrapper
return orig_apply_model(self,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142,in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options,**extra_conds).float()
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159,in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118,in forward_orig
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152,in forward
img_qkv = self.img_attn.qkv(img_modulated)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76,in forward
return self.forward_comfy_cast_weights(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295,in forward_comfy_cast_weights
out = fp8_linear(self, input)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264,in fp8_linear
w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57,in cast_bias_weight
weight = s.weight_function(weight)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91,in __call__
return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:36:38,910- root - INFO - Prompt executed in15.19 seconds
2024-11-2513:37:35,552- root - INFO - got prompt
2024-11-2513:37:35,852- root - INFO - Requested to load FluxClipModel_
2024-11-2513:37:35,852- root - INFO - Loading 1 new model
2024-11-2513:37:36,871- root - INFO - loaded completely 0.04777.53759765625 True
2024-11-2513:37:37,280- root - INFO - Requested to load Flux
2024-11-2513:37:37,281- root - INFO - Loading 1 new model
2024-11-2513:37:53,020- root - INFO - loaded partially 5672.9405706481945672.660217285156201
2024-11-2513:37:53,042- root - ERROR -!!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:37:53,043- root - ERROR - Traceback (most recent call last):
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317,in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192,in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169,in _map_node_over_list
process_inputs(input_dict, i)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158,in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429,in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396,in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9,in informative_sample
return original_sample(*args,**kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420,in motion_sample
return orig_comfy_sample(model, noise,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116,in acn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117,in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43,in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829,in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729,in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716,in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695,in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600,in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar,**self.extra_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116,in decorate_context
return func(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144,in sample_euler
denoised = model(x, sigma_hat * s_in,**extra_args)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299,in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682,in __call__
return self.predict_noise(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685,in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279,in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228,in calc_cond_batch
output = model.apply_model(input_x, timestep_,**c).chunk(batch_chunks)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69,in apply_model_uncond_cleanup_wrapper
return orig_apply_model(self,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142,in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options,**extra_conds).float()
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159,in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118,in forward_orig
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152,in forward
img_qkv = self.img_attn.qkv(img_modulated)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76,in forward
return self.forward_comfy_cast_weights(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295,in forward_comfy_cast_weights
out = fp8_linear(self, input)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264,in fp8_linear
w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57,in cast_bias_weight
weight = s.weight_function(weight)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91,in __call__
return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:37:53,044- root - INFO - Prompt executed in17.43 seconds
2024-11-2513:39:49,858- aiohttp.server - ERROR - Error handling request
Traceback (most recent call last):
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\web_protocol.py", line 477,in _handle_request
resp = await request_handler(request)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\web_app.py", line 559,in _handle
return await handler(request)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\web_middlewares.py", line 117,in impl
return await handler(request)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\server.py", line 61,in cache_control
response: web.Response = await handler(request)
File "<enhanced_experience patches.comfyui.html_resources_patcher>", line 26,in http_resource_injector
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\server.py", line 73,in cors_middleware
response = await handler(request)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\glob\manager_server.py", line 1175,in get_notice
async with session.get(f"https://{url}{path}") as response:
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client.py", line 1357,in __aenter__
self._resp: _RetType = await self._coro
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client.py", line 661,in _request
conn = await self._connector.connect(
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 556,in connect
proto = await self._create_connection(req, traces, timeout)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 1007,in _create_connection
_, proto = await self._create_proxy_connection(req, traces, timeout)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\connector.py", line 1358,in _create_proxy_connection
proxy_req = ClientRequest(
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client_reqrep.py", line 330,in __init__
self.update_host(url)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\aiohttp\client_reqrep.py", line 409,in update_host
raise InvalidURL(url)
aiohttp.client_exceptions.InvalidURL: 127.0.0.1:7890
2024-11-2513:42:47,352- root - INFO - got prompt
2024-11-2513:42:47,417- root - ERROR - Failed to validate prompt for output 22:
2024-11-2513:42:47,417- root - ERROR -* KSampler 20:
2024-11-2513:42:47,417- root - ERROR -- Required input is missing: latent_image
2024-11-2513:42:47,417- root - ERROR - Output will be ignored
2024-11-2513:42:47,597- root - INFO - Prompt executed in0.18 seconds
2024-11-2513:43:05,783- root - INFO - got prompt
2024-11-2513:43:06,017- root - INFO - Unloading models for lowram load.
2024-11-2513:43:06,035- root - INFO -0 models unloaded.
2024-11-2513:43:06,366- root - ERROR -!!! Exception during processing !!! calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:43:06,367- root - ERROR - Traceback (most recent call last):
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317,in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192,in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169,in _map_node_over_list
process_inputs(input_dict, i)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158,in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429,in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396,in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9,in informative_sample
return original_sample(*args,**kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420,in motion_sample
return orig_comfy_sample(model, noise,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116,in acn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117,in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43,in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829,in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729,in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716,in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695,in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600,in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar,**self.extra_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116,in decorate_context
return func(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144,in sample_euler
denoised = model(x, sigma_hat * s_in,**extra_args)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299,in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682,in __call__
return self.predict_noise(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685,in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279,in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228,in calc_cond_batch
output = model.apply_model(input_x, timestep_,**c).chunk(batch_chunks)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69,in apply_model_uncond_cleanup_wrapper
return orig_apply_model(self,*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142,in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options,**extra_conds).float()
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159,in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118,in forward_orig
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152,in forward
img_qkv = self.img_attn.qkv(img_modulated)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553,in _wrapped_call_impl
return self._call_impl(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562,in _call_impl
return forward_call(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76,in forward
return self.forward_comfy_cast_weights(*args,**kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295,in forward_comfy_cast_weights
out = fp8_linear(self, input)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264,in fp8_linear
w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57,in cast_bias_weight
weight = s.weight_function(weight)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91,in __call__
return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
2024-11-2513:43:06,368- root - INFO - Prompt executed in0.52 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

```
{"last_node_id":33,"last_link_id":29,"nodes":[{"id":10,"type":"UNETLoader","pos":{"0":186.4521026611328,"1":213.6870880126953},"size":{"0":315,"1":82},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[10],"slot_index":0,"shape":3,"label":"模型"}],"properties":{"Node name for S&R":"UNETLoader"},"widgets_values":["FLUX1\\pixelwave_flux1Dev03.safetensors","fp8_e4m3fn"]},{"id":11,"type":"DualCLIPLoader","pos":{"0":186.90663146972656,"1":354.5138854980469},"size":{"0":315,"1":106},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[11],"slot_index":0,"shape":3,"label":"CLIP"}],"properties":{"Node name for S&R":"DualCLIPLoader"},"widgets_values":["t5xxl_fp8_e4m3fn.safetensors","clip_l.safetensors","flux"]},{"id":16,"type":"CLIPTextEncode","pos":
```

## Other
## ComfyUI Error Report
### Error Details
- Node Type: KSampler
- Exception Type: TypeError
- Exception Message: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
### Stack Trace

```
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1429, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\nodes.py", line 1396, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 420, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\sampling.py", line 116, in acn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 117, in uncond_multiplier_check_cn_sample
return orig_comfy_sample(model, *args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\sample.py", line 43, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 829, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 729, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 716, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 695, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 600, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\k_diffusion\sampling.py", line 144, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 299, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 682, in __call__
return self.predict_noise(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 685, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\utils.py", line 69, in apply_model_uncond_cleanup_wrapper
return orig_apply_model(self, *args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_base.py", line 142, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 159, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\model.py", line 118, in forward_orig
img, txt = block(img=img, txt=txt, vec=vec, pe=pe)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ldm\flux\layers.py", line 152, in forward
img_qkv = self.img_attn.qkv(img_modulated)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\python\lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 76, in forward
return self.forward_comfy_cast_weights(*args, **kwargs)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 295, in forward_comfy_cast_weights
out = fp8_linear(self, input)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 264, in fp8_linear
w, bias = cast_bias_weight(self, input, dtype=dtype, bias_dtype=input.dtype)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\ops.py", line 57, in cast_bias_weight
weight = s.weight_function(weight)
File "F:\ComfyUI-aki\ComfyUI-aki-v1.4\comfy\model_patcher.py", line 91, in __call__
return comfy.lora.calculate_weight(self.patches[self.key], weight, self.key, intermediate_dtype=weight.dtype)
```
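The last two frames point at the cause: `comfy/model_patcher.py` forwards an `intermediate_dtype` keyword to `comfy.lora.calculate_weight()`, but the `calculate_weight()` that is actually installed does not accept it, which usually indicates that files from two different ComfyUI versions are mixed together (for example after a partial update). The snippet below is only a minimal, self-contained illustration of that failure mode; it is not ComfyUI code, and every name in it is made up for the example.

```python
# Minimal illustration (not ComfyUI code): a newer call site forwards a keyword
# argument that an older function definition never declared, producing the same
# TypeError that appears in the report.

def calculate_weight(patches, weight, key):
    # Older-style signature: no 'intermediate_dtype' parameter.
    return weight


def patched_caller(patches, weight, key):
    # Newer-style call site, analogous to the comfy/model_patcher.py frame above,
    # passes an extra keyword the old definition does not know about.
    return calculate_weight(patches, weight, key, intermediate_dtype="bfloat16")


try:
    patched_caller([], 1.0, "img_attn.qkv.weight")
except TypeError as err:
    print(err)  # calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
```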
### System Information
- ComfyUI Version: v0.2.2
- Arguments: F:\ComfyUI-aki\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc --fast
## Your question

When I try to use FLUX, the KSampler throws the error above.
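One quick way to confirm the mismatch described in the report is to check whether the installed `comfy.lora.calculate_weight` actually declares the keyword that `model_patcher.py` passes. This is a diagnostic sketch only; it assumes it is run from the ComfyUI root directory with the same Python environment, so that the `comfy` package is importable:

```python
# Diagnostic sketch (assumption: run from the ComfyUI root with its own Python
# environment so that the 'comfy' package is importable).
import inspect

import comfy.lora

params = inspect.signature(comfy.lora.calculate_weight).parameters
print("accepts intermediate_dtype:", "intermediate_dtype" in params)
# If this prints False while comfy/model_patcher.py passes intermediate_dtype,
# the two files come from different ComfyUI versions.
```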