runtime error
CUDA not found.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 61, in <module>
    llama2_wrapper.init_tokenizer()
  File "/home/user/app/model.py", line 21, in init_tokenizer
    self.tokenizer = LLAMA2_WRAPPER.create_llama2_tokenizer(self.config)
  File "/home/user/app/model.py", line 65, in create_llama2_tokenizer
    tokenizer = AutoTokenizer.from_pretrained(model_name)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 652, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 496, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/utils/hub.py", line 417, in cached_file
    resolved_file = hf_hub_download(
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "/home/user/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '../LLM/Llama-2-7b-chat-hf'. Use `repo_type` argument if needed.
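The leading "CUDA not found" message is separate from the crash; it only indicates the container has no GPU. The crash itself happens because '../LLM/Llama-2-7b-chat-hf' is not an existing directory inside the container: when AutoTokenizer.from_pretrained receives a string that does not resolve to a local directory, transformers treats it as a Hugging Face Hub repo id, and validate_repo_id rejects relative paths, as the error message states. Below is a minimal sketch of a fix, assuming the intent was to load the Llama 2 chat tokenizer; the Hub id "meta-llama/Llama-2-7b-chat-hf" is the official (gated) repo and may require an access token.

import os

from transformers import AutoTokenizer

# Path taken from the traceback above; it does not exist in the container.
model_name = "../LLM/Llama-2-7b-chat-hf"

if os.path.isdir(model_name):
    # An existing local directory is loaded straight from disk, no Hub lookup.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
else:
    # Otherwise the string must be a valid Hub repo id of the form
    # 'repo_name' or 'namespace/repo_name'; relative paths fail validation.
    # The official repo id is assumed here; it is gated, so an access token
    # may be needed for the download to succeed.
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")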