runtime error

Exit code: 1. Reason:

FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_fwd")
/usr/local/lib/python3.10/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_bwd")
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
Loading pipeline components...:   0%|          | 0/6 [00:00<?, ?it/s]
/usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Loading pipeline components...: 100%|██████████| 6/6 [00:11<00:00,  1.91s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 11, in <module>
    pipe.load_lora_weights(lora_repo)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 88, in load_lora_weights
    raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
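The FutureWarnings above are harmless; the crash is the final ValueError. `diffusers`' `load_lora_weights()` needs its PEFT backend, which is only enabled when the `peft` package is importable, and the container apparently does not have it installed. A likely fix (an assumption, since the Space's dependency file is not shown here) is to add `peft` to the Space's requirements.txt and restart:

    # requirements.txt (sketch -- add the missing PEFT backend dependency)
    diffusers
    transformers
    peft

Equivalently, `pip install peft` in the image should make the same `load_lora_weights(lora_repo)` call succeed, assuming the LoRA repo itself is valid.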
