Getting error while loading model

#30
by Dipto084 - opened

File "../python3.10/site-packages/transformers/modeling_utils.py", line 1824, in _check_and_enable_sdpa
raise ValueError(
ValueError: ConditionalChatTTS does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please request the support for this architecture: https://github.com/huggingface/transformers/issues/28005. If you believe this error is a bug, please open an issue in Transformers GitHub repository and load your model with the argument attn_implementation="eager" meanwhile. Example: model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="eager")

It works if I switch to 'eager', but the usage note suggests using either flash_attn2 or sdpa. Any suggestions?
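For anyone hitting the same error, here is a minimal loading sketch, assuming the model is loaded via AutoModel with remote code; the repo id and dtype are placeholders, not confirmed for this model. It tries the implementations mentioned in the usage note first and only falls back to "eager" if neither is usable:

import torch
from transformers import AutoModel

MODEL_ID = "openbmb/MiniCPM-o-2_6"  # placeholder repo id; replace with the model you are actually loading

model = None
# Try the implementations suggested in the usage note first, then fall back to "eager".
for attn_impl in ("flash_attention_2", "sdpa", "eager"):
    try:
        model = AutoModel.from_pretrained(
            MODEL_ID,
            trust_remote_code=True,
            torch_dtype=torch.bfloat16,  # assumption: bf16 is suitable for your GPU
            attn_implementation=attn_impl,
        )
        print(f"loaded with attn_implementation={attn_impl}")
        break
    except (ImportError, ValueError) as err:
        # flash_attention_2 raises ImportError if flash-attn is missing;
        # unsupported sdpa raises the ValueError shown above.
        print(f"{attn_impl} not usable: {err}")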

OpenBMB org

Check your torch version.
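For reference, a quick way to print the relevant environment details; checking for the flash_attn package is just my assumption about what is worth verifying here:

import importlib.util
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("flash_attn installed:", importlib.util.find_spec("flash_attn") is not None)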

I have torch version 2.5.1. Is that an issue?

And one more question: if I run it with 'eager' and it works, what are the implications?
