I get OOM on the example keyframe workflow. However, txt2vid and img2vid work perfectly. I tried connecting torch compile and it made no difference.
Starting server
To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
[]
[]
FETCH ComfyRegistry Data: 5/79
FETCH ComfyRegistry Data: 10/79
FETCH ComfyRegistry Data: 15/79
got prompt
Failed to validate prompt for output 111:
HyVideoVAELoader 7:
Value not in list: model_name: 'hyvid\hunyuan_video_vae_bf16.safetensors' not in ['ae.safetensors.safetensors', 'hunyuan_video_vae_bf16.safetensors', 'hunyuan_video_vae_fp32.safetensors.safetensors', 'sdxl_vae.safetensors']
HyVideoLoraSelect 101:
Value not in list: lora: 'hyvid\musubi-tuner\HunyuanVideo_dashtoon_keyframe_lora_converted_bf16.safetensors' not in ['8itchw4lk_2_40.safetensors', 'HunyuanVideo_dashtoon_keyframe_lora_converted_bf16.safetensors.safetensors', 'HunyuanVideo_dashtoon_keyframe_lora_converted_comfy_bf16.safetensors.safetensors', 'NeonFantasyFLUX.safetensors', 'RetroAnimeFluxV1.safetensors', 'SECRET SAUCE B3 Hunyuan.safetensors', 'cum_on_face_v1.0.safetensors', 'hyvideo_FastVideo_LoRA-fp8.safetensors.safetensors', 'img2vid.safetensors.safetensors', 'img2vid544p.safetensors.safetensors', 'pov_cumshot_v1.1.safetensors', 'tittydrop_v1.safetensors']
HyVideoModelLoader 1:
Value not in list: model: 'hyvideo\hunyuan_video_720_fp8_e4m3fn.safetensors' not in ['flux1-fill-dev.safetensors.safetensors', 'flux_dev.safetensors', 'hunyuan_video_720_cfgdistill_bf16.safetensors.safetensors', 'hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors', 'hunyuan_video_FastVideo_720_fp8_e4m3fn.safetensors', 'hunyuan_video_I2V_720_fixed_bf16.safetensors.safetensors', 'hunyuan_video_image_to_video_720p_bf16.safetensors.safetensors', 'mp_rank_00_model_states_fp8.pt']
Output will be ignored
invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
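(The validation failures above look like a path-prefix mismatch rather than missing files: the workflow stores model names with a subfolder prefix such as `hyvid\`, while this install lists the same files flat. A minimal sketch of why the exact-match check fails, using names taken from the log:)

```python
# Sketch: ComfyUI validates widget values by exact membership in the
# scanned file list, so a saved subfolder prefix ("hyvid\...") fails
# against a flat listing even though the file itself is present.
available = ['hunyuan_video_vae_bf16.safetensors', 'sdxl_vae.safetensors']
requested = 'hyvid\\hunyuan_video_vae_bf16.safetensors'

print(requested in available)                  # False: exact match fails
print(requested.split('\\')[-1] in available)  # True: same file, different folder
```

Re-selecting the models in the loader nodes (so the saved value matches the local listing) is the usual way this resolves, which matches the second `got prompt` succeeding below.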
FETCH ComfyRegistry Data: 20/79
FETCH ComfyRegistry Data: 25/79
FETCH ComfyRegistry Data: 30/79
got prompt
video_frames torch.Size([1, 3, 63, 960, 720])
FETCH ComfyRegistry Data: 35/79
FETCH ComfyRegistry Data: 40/79
FETCH ComfyRegistry Data: 45/79
FETCH ComfyRegistry Data: 50/79
FETCH ComfyRegistry Data: 55/79
encoded latents shape torch.Size([1, 16, 17, 120, 90])
Loading text encoder model (clipL) from: D:\AI\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14
Text encoder to dtype: torch.float16
Loading tokenizer (clipL) from: D:\AI\ComfyUI_windows_portable\ComfyUI\models\clip\clip-vit-large-patch14
Loading text encoder model (llm) from: D:\AI\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
FETCH ComfyRegistry Data: 60/79
Loading checkpoint shards: 25%|██████████████▎ | 1/4 [00:02<00:06, 2.19s/it]FETCH ComfyRegistry Data: 65/79
Loading checkpoint shards: 50%|████████████████████████████▌ | 2/4 [00:08<00:09, 4.58s/it]FETCH ComfyRegistry Data: 70/79
FETCH ComfyRegistry Data: 75/79
FETCH ComfyRegistry Data [DONE]
[ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
nightly_channel: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json [DONE]
[ComfyUI-Manager] All startup tasks have been completed.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:40<00:00, 10.13s/it]
Text encoder to dtype: torch.float16
Loading tokenizer (llm) from: D:\AI\ComfyUI_windows_portable\ComfyUI\models\LLM\llava-llama-3-8b-text-encoder-tokenizer
llm prompt attention_mask shape: torch.Size([1, 161]), masked tokens: 1
clipL prompt attention_mask shape: torch.Size([1, 77]), masked tokens: 2
Condition type: latent_concat
model_type FLOW
The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Scheduler config: FrozenDict({'num_train_timesteps': 1000, 'flow_shift': 7.0, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['num_train_timesteps', 'n_tokens']})
Using accelerate to load and assign model weights to device...
Loading LoRA: HunyuanVideo_dashtoon_keyframe_lora_converted_comfy_bf16 with strength: 1.0000000000000002
Different in_channels 32 vs 16, patching...
lora key not loaded: diffusion_model.guidance_in.in_layer.lora_A.weight
lora key not loaded: diffusion_model.guidance_in.in_layer.lora_B.weight
lora key not loaded: diffusion_model.guidance_in.out_layer.lora_A.weight
lora key not loaded: diffusion_model.guidance_in.out_layer.lora_B.weight
lora key not loaded: diffusion_model.time_in.in_layer.lora_A.weight
lora key not loaded: diffusion_model.time_in.in_layer.lora_B.weight
lora key not loaded: diffusion_model.time_in.out_layer.lora_A.weight
lora key not loaded: diffusion_model.time_in.out_layer.lora_B.weight
lora key not loaded: diffusion_model.txt_in.t_embedder.in_layer.lora_A.weight
lora key not loaded: diffusion_model.txt_in.t_embedder.in_layer.lora_B.weight
lora key not loaded: diffusion_model.txt_in.t_embedder.out_layer.lora_A.weight
lora key not loaded: diffusion_model.txt_in.t_embedder.out_layer.lora_B.weight
Requested to load HyVideoModel
loaded completely 9.5367431640625e+25 12556.328247070312 True
Input (height, width, video_length) = (960, 720, 65)
The config attributes {'use_flow_sigmas': True, 'prediction_type': 'flow_prediction'} were passed to FlowMatchDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Scheduler config: FrozenDict({'num_train_timesteps': 1000, 'flow_shift': 7.000000000000002, 'reverse': True, 'solver': 'euler', 'n_tokens': None, '_use_default_values': ['num_train_timesteps', 'n_tokens']})
Swapping 20 double blocks and 40 single blocks
Using random noise only
Sampling 65 frames in 17 latents at 720x960 with 30 inference steps
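(The frame/latent/resolution numbers in the log are internally consistent with HunyuanVideo's VAE compression — assuming the usual 4x temporal compression with one leading latent frame and 8x spatial compression, a sketch reproduces the shapes reported above:)

```python
# Sketch (assumption: 4x temporal compression plus one leading latent
# frame, 8x spatial compression, 16 latent channels) reproducing the
# "encoded latents shape" and "65 frames in 17 latents" lines above.
def latent_shape(frames, height, width, channels=16):
    t = (frames - 1) // 4 + 1      # 65 frames -> 17 latent frames
    return (1, channels, t, height // 8, width // 8)

print(latent_shape(65, 960, 720))  # -> (1, 16, 17, 120, 90)
```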
0%| | 0/30 [00:48<?, ?it/s]
Exception in thread Thread-13 (prompt_worker):
Traceback (most recent call last):
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 1440, in process
out_latents = model["pipe"](
^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\diffusion\pipelines\pipeline_hunyuan_video.py", line 842, in __call__
noise_pred = self.transformer( # For an input image (129, 192, 336) (1, 256, 256)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 1123, in forward
x = _process_single_blocks(x, vec, txt.shape[1], block_args, stg_mode, stg_block_idx)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\modules\models.py", line 975, in _process_single_blocks
block.to(self.offload_device, non_blocking=True)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1340, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 900, in _apply
module._apply(fn)
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 927, in _apply
param_applied = fn(param)
^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1326, in convert
return t.to(
^^^^^
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "threading.py", line 1075, in _bootstrap_inner
File "threading.py", line 1012, in run
File "D:\AI\ComfyUI_windows_portable\ComfyUI\main.py", line 173, in prompt_worker
e.execute(item[2], prompt_id, item[3], item[4])
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 504, in execute
result, error, ex = execute(self.server, dynamic_prompt, self.caches, node_id, extra_data, executed, prompt_id, execution_list, pending_subgraph_results)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 394, in execute
input_data_formatted[name] = [format_value(x) for x in inputs]
^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 246, in format_value
return str(x)
^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor.py", line 523, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 708, in _str
return _str_intern(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 625, in _str_intern
tensor_str = _tensor_str(self, indent)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 339, in _tensor_str
self = self.float()
^^^^^^^^^^^^
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
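(The error text itself suggests `CUDA_LAUNCH_BLOCKING=1` so the reported stack frame is the real failure site — relevant here, since the OOM surfaces inside `block.to(self.offload_device)` during block swapping and again while formatting a tensor for the error message. A minimal sketch of setting that plus PyTorch's documented allocator option to reduce fragmentation before relaunching; these are debugging aids and assumptions, not a confirmed fix:)

```shell
# Debugging sketch: make CUDA errors synchronous so the traceback points
# at the true failure site, and opt into expandable allocator segments
# (a documented PYTORCH_CUDA_ALLOC_CONF option) to reduce
# fragmentation-driven OOMs. On the Windows portable build, use `set`
# instead of `export` before running the launcher .bat as usual.
export CUDA_LAUNCH_BLOCKING=1
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
```

If the OOM persists at 960x720x65, lowering the resolution or frame count, or increasing the number of swapped blocks, narrows down whether it is a capacity problem or a fragmentation problem.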