runtime error
Exit code: 1. Reason:
Downloading shards: 100%|██████████| 2/2 [00:24<00:00, 12.08s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 16, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 268, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4266, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1626, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1773, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
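The traceback shows the checkpoint's remote code enabling FlashAttention-2 while the container has no visible CUDA device, so `device_map="cuda"` and the FlashAttention-2 check both fail. Below is a minimal sketch of one way to load the model defensively; `model_id` stands in for the actual checkpoint name from app.py, and the fallback to PyTorch's built-in SDPA attention is an assumption about what is acceptable for this model, not the author's original code.

```python
import torch
from transformers import AutoModelForCausalLM

model_id = "your-org/your-model"  # hypothetical placeholder for the checkpoint in app.py

# Only request CUDA placement and FlashAttention-2 when a GPU is actually visible;
# otherwise fall back to CPU and the SDPA attention implementation.
has_cuda = torch.cuda.is_available()

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="cuda" if has_cuda else "cpu",
    attn_implementation="flash_attention_2" if has_cuda else "sdpa",
)
```

If the Space is meant to run on GPU hardware, the same error also appears when the selected hardware tier has no GPU attached, so checking the Space's hardware setting is another avenue worth ruling out.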