Cannot access gated repo
Hi, I am Kiran. I have been granted access to use Llama 3 by Hugging Face, but when I try to access the model from VS Code it says "Cannot access gated repo". I am using a Hugging Face token (read-only) to access the model from VS Code.
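A quick sanity check (my suggestion, not part of the original question): you can confirm that the token you are passing really has access to the gated repo by asking the Hub directly with huggingface_hub. The model ID below is the one from this thread; the token value is a placeholder for your own read token.
from huggingface_hub import hf_hub_download, whoami

token = "your_token"  # placeholder: the same read token you use in VS Code
print(whoami(token=token))  # shows which account the token belongs to

# If this raises a 403 / GatedRepoError, that account does not have access yet
path = hf_hub_download("meta-llama/Meta-Llama-3-8B", "config.json", token=token)
print(path)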
Hi, this worked for me.
If you are using VS Code, follow the instructions below:
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

login(token="your_token")

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    cache_dir="/kaggle/working/",
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    cache_dir="/kaggle/working/",
    device_map="auto",
)
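If login() is not being picked up for some reason, you can also pass the token straight into from_pretrained. A minimal sketch (the token value is a placeholder; on older transformers releases the parameter was called use_auth_token instead of token):
tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    token="your_token",
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    token="your_token",
    device_map="auto",
)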
The model page shows "Gated model: You have been granted access to this model." However, I get: Exception has occurred: OSError
We couldn't connect to 'https://huggingface.co.' to load this file, couldn't find it in the cached files and it looks like meta-llama/Meta-Llama-3-8B is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co./docs/transformers/installation#offline-mode'.
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co./meta-llama/Meta-Llama-3-8B/resolve/main/config.json
The above exception was the direct cause of the following exception:
huggingface_hub.utils._errors.HfHubHTTPError: (Request ID: Root=1-66684564-4b77863a5ca2c26c4729723a;c2d90b76-e1ee-496b-8e5b-825662520211)
403 Forbidden: Authorization error..
Cannot access content at: https://huggingface.co./meta-llama/Meta-Llama-3-8B/resolve/main/config.json.
If you are trying to create or update content, make sure you have a token with the write role.
The above exception was the direct cause of the following exception:
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
The above exception was the direct cause of the following exception:
File "/home/hailay/Desktop/neumann/Finetune.py", line 10, in
tokenizer = AutoTokenizer.from_pretrained(
OSError: We couldn't connect to 'https://huggingface.co.' to load this file, couldn't find it in the cached files and it looks like meta-llama/Meta-Llama-3-8B is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co./docs/transformers/installation#offline-mode'.
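A note on the offline-mode hint in that error: it only helps once the files have been downloaded into the local cache at least once, and it will not get around the 403 itself. A minimal sketch, assuming the model is already cached under the cache_dir you used:
import os

# Must be set before importing transformers
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: do not hit the network
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: use the local cache only

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    cache_dir="/kaggle/working/",
)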
I have the same problem:
When I write:
from huggingface_hub import login
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

login(token="my token")

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    cache_dir="/kaggle/working/",
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B",
    cache_dir="/kaggle/working/",
    device_map="auto",
)
I am getting this message:
The token has not been saved to the git credentials helper. Pass add_to_git_credential=True
in this function directly or --add-to-git-credential
if using via huggingface-cli
if you want to set the git credential as well.
Token is valid (permission: fineGrained).
Your token has been saved to /Users/my user/.cache/huggingface/token
Login successful
HTTPError Traceback (most recent call last)
File /opt/anaconda3/envs/thesis/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py:304, in hf_raise_for_status(response, endpoint_name)
303 try:
--> 304 response.raise_for_status()
305 except HTTPError as e:
File /opt/anaconda3/envs/thesis/lib/python3.11/site-packages/requests/models.py:1024, in Response.raise_for_status(self)
1023 if http_error_msg:
-> 1024 raise HTTPError(http_error_msg, response=self)
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co./meta-llama/Meta-Llama-3-8B/resolve/main/config.json
The above exception was the direct cause of the following exception:
GatedRepoError Traceback (most recent call last)
File /opt/anaconda3/envs/thesis/lib/python3.11/site-packages/transformers/utils/hub.py:402, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_gated_repo, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)
400 try:
401 # Load from URL or cache if already cached
--> 402 resolved_file = hf_hub_download(
403 path_or_repo_id,
404 filename,
405 subfolder=None if len(subfolder) == 0 else subfolder,
406 repo_type=repo_type,
407 revision=revision,
...
Make sure to have access to it at https://huggingface.co./meta-llama/Meta-Llama-3-8B.
403 Client Error. (Request ID: Root=1-66b75bdd-24e5552314a689bc452c2afa;f41afa8a-51a6-4bb9-aa77-0101110d127f)
Cannot access gated repo for url https://huggingface.co./meta-llama/Meta-Llama-3-8B/resolve/main/config.json.
Access to model meta-llama/Meta-Llama-3-8B is restricted and you are not in the authorized list. Visit https://huggingface.co./meta-llama/Meta-Llama-3-8B to ask for access.
In mine it says this:
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co./meta-llama/Meta-Llama-3.1-8B.
403 Client Error. (Request ID: Root=1-66bde90e-307a3c0419da184f621cea2a;56d907ca-6175-4ec2-88c8-0ec3a17003d0)
Cannot access gated repo for url https://huggingface.co./meta-llama/Meta-Llama-3.1-8B/resolve/main/config.json.
Your request to access model meta-llama/Meta-Llama-3.1-8B is awaiting a review from the repo authors.
Very sad, I can't access this.
I had a similar problem, where I couldn't access the repo even though I had been granted access to it.
It throws an error: "unable to access gated repo".
Solution:
Log in with your Hugging Face token using
$ huggingface-cli login
then read the saved token in your script with:
from pathlib import Path

def get_huggingface_token():
    # Define the path to the token file written by `huggingface-cli login`
    token_file = Path.home() / ".cache" / "huggingface" / "token"
    # Check if the token file exists
    if token_file.exists():
        with open(token_file, "r") as file:
            token = file.read().strip()
        return token
    else:
        raise FileNotFoundError("Hugging Face token file not found. Please run 'huggingface-cli login'.")

# Fetch the token using the function
HF_TOKEN = get_huggingface_token()
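The token read this way can then be passed straight to from_pretrained. A minimal sketch (my addition, assuming a transformers version that accepts the token parameter; older releases called it use_auth_token):
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    token=HF_TOKEN,
)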
Hi, I also have a problem. I was granted access but I cannot download the model:
huggingface_hub.errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
Thanks.
If none of the above work, check your transformers version. I updated mine from 4.40 to 4.43 and it worked fine.
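For reference, checking and upgrading the version can be done like this (a sketch, not from the post above):
$ python -c "import transformers; print(transformers.__version__)"
$ pip install --upgrade transformers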
I'm not sure about that, but I just changed the type of my token from 'fine-grained' to 'read' and the issue was fixed.
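If you switch token types like this, keep in mind the old token may still be cached in ~/.cache/huggingface/token. A minimal sketch for refreshing it with the standard CLI commands:
$ huggingface-cli logout   # forget the cached token
$ huggingface-cli login    # paste the new 'read' token when prompted
$ huggingface-cli whoami   # confirm which account the active token belongs to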