Broken runtime with PyTorch and CUDA 12

#38 · opened by russfellows

Hardware: NVIDIA RTX 4090
Model: Stable Diffusion 3.5 Large, both the full-size and the quantized version
Error: `RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph.`

I have run both examples, one with the full-size model and the other with the bitsandbytes-quantized version. Both fail with the same CUDA runtime error, even after updating to the very latest versions of PyTorch and the CUDA libraries.
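For reference, the full-size run looks roughly like this (a minimal sketch based on the standard diffusers SD 3.5 quickstart; the model id, prompt, and parameters below are assumptions, not copied from my exact script):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the full-size model in bfloat16 and move it to the 4090.
# (Model id and generation parameters assumed from the standard quickstart.)
pipeline = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")

# The crash happens inside this call, while encoding the prompt.
image = pipeline(
    "A capybara holding a sign that reads Hello World",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("capybara.png")
```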

Full error details:

```
Traceback (most recent call last):
  File "/home/eval/Code/./sdiffusion3.5-quant.py", line 28, in <module>
    image = pipeline(
            ^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 834, in __call__
    ) = self.encode_prompt(
        ^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 413, in encode_prompt
    prompt_embed, pooled_prompt_embed = self._get_clip_prompt_embeds(
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py", line 301, in _get_clip_prompt_embeds
    prompt_embeds = text_encoder(text_input_ids.to(device), output_hidden_states=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/accelerate/hooks.py", line 170, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 1473, in forward
    text_outputs = self.text_model(
                   ^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 954, in forward
    encoder_outputs = self.encoder(
                      ^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 877, in forward
    layer_outputs = encoder_layer(
                    ^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 608, in forward
    hidden_states, attn_weights = self.self_attn(
                                  ^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/eval/anaconda3/envs/sdiffusion2/lib/python3.12/site-packages/transformers/models/clip/modeling_clip.py", line 540, in forward
    attn_output = torch.nn.functional.scaled_dot_product_attention(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: cuDNN Frontend error: [cudnn_frontend] Error: No execution plans support the graph.
```
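Since the crash comes from `torch.nn.functional.scaled_dot_product_attention` inside the CLIP text encoder, the cuDNN SDPA backend looks like the culprit. Below is a workaround sketch I intend to try (my assumption, not a confirmed fix): steer SDPA away from the cuDNN backend so it falls back to the other attention kernels. `sdpa_kernel` requires a fairly recent PyTorch 2.x.

```python
from torch.nn.attention import sdpa_kernel, SDPBackend

# Workaround sketch (assumption, not a confirmed fix): restrict
# scaled_dot_product_attention to the flash, memory-efficient, and math
# backends, excluding cuDNN, for every attention call inside the block.
# `pipeline` is the StableDiffusion3Pipeline loaded as in the sketch above.
with sdpa_kernel([SDPBackend.FLASH_ATTENTION,
                  SDPBackend.EFFICIENT_ATTENTION,
                  SDPBackend.MATH]):
    image = pipeline(
        "A capybara holding a sign that reads Hello World",
        num_inference_steps=28,
        guidance_scale=3.5,
    ).images[0]
```

A global alternative would be calling `torch.backends.cuda.enable_cudnn_sdp(False)` once before running the pipeline (available in recent PyTorch releases), which disables the cuDNN SDPA backend process-wide.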
