Issue with accessing gated repo
Hello!
So I'm trying to run Meta-Llama-3.1-8B-Instruct in PyCharm using the transformers library.
When I run the code, it gives me the following error:
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co./meta-llama/Meta-Llama-3.1-8B-Instruct.
401 Client Error. (Request ID: Root=1-66c4bfd3-3cd2c93e7c7bc3e90a7ca1a3;1109938a-aead-4b20-ae49-09e9ba03fac4)
Cannot access gated repo for url https://huggingface.co./meta-llama/Meta-Llama-3.1-8B-Instruct/resolve/main/config.json.
Access to model meta-llama/Meta-Llama-3.1-8B-Instruct is restricted. You must be authenticated to access it.
However, I should already have access, since my request was granted yesterday.
What can I do? Thanks!
You can generate an access token and either install it on your development machine using huggingface-cli login:
https://huggingface.co./docs/huggingface_hub/en/guides/cli#huggingface-cli-login
Or add it to your application:
https://huggingface.co./docs/hub/en/security-tokens#how-to-use-user-access-tokens
from transformers import AutoModel
access_token = "hf_..."
model = AutoModel.from_pretrained("private/model", token=access_token)
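If you'd rather not hard-code the token in your script, here is a minimal sketch of a third option: huggingface_hub (and transformers through it) also picks up a token from the HF_TOKEN environment variable. The token value below is a placeholder.

```python
import os

# Placeholder token; set this to your real "hf_..." token (or export HF_TOKEN
# in your shell / PyCharm run configuration instead of setting it in code).
os.environ["HF_TOKEN"] = "hf_your_token_here"

# With HF_TOKEN set, no token= argument is needed:
# from transformers import AutoModel
# model = AutoModel.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
```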
Oh! Thank you very much :D
When I create a token in Hugging Face there are 3 options (fine-grained, read, write). Which one should I create for this purpose?
It's probably in the documentation, but as an example, I've made a fine-grained access token just for my development machine with a single permission:
User permissions (username here)
Repositories
- Read access to contents of all public gated repos you can access
Everything else left unchecked.
Then I've installed it on my desktop user account using huggingface-cli login.
If I recall correctly, it's saved in ~/.cache/huggingface/token, and since it's the token used by default for any hub operation, it's best for it to have the minimum possible permissions.
If you ever need to do something more (upload a model, for example), it's better to create a separate token just for that task and use it only there.
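A quick stdlib-only sketch of where to look for that saved token; note the default cache location can be relocated with the HF_HOME environment variable, so treat this as the common case rather than a guarantee:

```python
from pathlib import Path

# Default location where huggingface-cli login stores the token
# (may differ if HF_HOME or HF_TOKEN_PATH is customized).
token_path = Path.home() / ".cache" / "huggingface" / "token"

if token_path.exists():
    print("Saved token found at", token_path)
else:
    print("No saved token yet; run huggingface-cli login first")
```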
thank you very much! :D
I've encountered a similar issue, except it's a 403 Client Error rather than the 401 Error the OP received.
I created a fine-grained access token after I was granted access to the model. Afterwards, I validated the model's permissions with the new token and attempted to download just the config.json file to try to isolate the issue.
When I ran the code this is what I received:
"2024-09-14 07:39:33,751 - INFO - Model permissions verified successfully. Proceeding with model loading.
2024-09-14 07:39:33,751 - INFO - Attempting to manually download a file from meta-llama/Meta-Llama-3-8B-Instruct
2024-09-14 07:39:33,773 - ERROR - Error downloading file from model repository: 403 Client Error. (Request ID: Root=1-66e53db5-31e1f31a58a2480813642d59;697f1fc8-7225-42ef-b54d-fb63303d5839)
Cannot access gated repo for url https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct/resolve/main/config.json.
Access to model meta-llama/Meta-Llama-3-8B-Instruct is restricted and you are not in the authorized list. Visit https://huggingface.co./meta-llama/Meta-Llama-3-8B-Instruct to ask for access.
2024-09-14 07:39:33,773 - ERROR - This suggests an issue with repository access. "
Does anyone know how to resolve this? Thanks
Maybe access to Llama 3 and Llama 3.1 has to be requested separately?
You could try changing your model URL to download 3.1 (the repository this thread is about), or request access to 3.0 separately.
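One rough way to tell the two failure modes apart is to request a single file with your token and look at the status code: 401 generally means the token wasn't sent or accepted, while 403 means the token is valid but the account isn't on the repo's authorized list. This is a stdlib-only sketch; resolve_url and check_access are my own helper names, not library functions.

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def resolve_url(repo_id: str, filename: str) -> str:
    """Build the hub's raw-file URL for one file in a repo."""
    return f"https://huggingface.co./{repo_id}/resolve/main/{filename}"

def check_access(url: str, token: str) -> int:
    """Return the HTTP status for an authenticated request to the file."""
    req = Request(url, headers={"Authorization": f"Bearer {token}"}, method="HEAD")
    try:
        with urlopen(req) as resp:
            return resp.status  # 200: the token can reach the file
    except HTTPError as err:
        # 401: not authenticated; 403: authenticated but not authorized
        return err.code

url = resolve_url("meta-llama/Meta-Llama-3.1-8B-Instruct", "config.json")
# status = check_access(url, "hf_...")  # fill in your own token to probe
```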