url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/27950 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27950/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27950/comments | https://api.github.com/repos/huggingface/transformers/issues/27950/events | https://github.com/huggingface/transformers/pull/27950 | 2,035,788,688 | PR_kwDOCUB6oc5hsDNQ | 27,950 | [`Awq`] Enable the possibility to skip quantization for some target modules | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27950). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This PR enables compatiblity with Mixtral AWQ ! \r\nhttps://github.com/casper-hansen/AutoAWQ/pull/251 being merged, this PR is ready for review 🙏 ",
"Thanks a lot for the review @amyeroberts ! \r\nI will merge this as soon as AutoAWQ makes the 0.1.8 Mixtral release cc @casper-hansen just for your information",
"Release has been done! Merging!"
] | 1,702 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
Adds the possibility to load AWQ models if some modules of the model are skipped for quantization.
E.g. for Whisper, Llava, and Mixtral, we respectively don't want to quantize the encoder, the vision encoder, and the gate layer, to ensure inference stability.
Let's merge it once AWQ makes the 0.1.8 release
cc @ArthurZucker @casper-hansen @TheBloke @SunMarc
https://github.com/casper-hansen/AutoAWQ/pull/248
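As a rough sketch of the skipping logic itself (illustrative only; the predicate below is a hypothetical simplification, not the actual conversion pass in transformers' AWQ integration):

```python
def should_quantize(module_name: str, modules_to_not_convert: list) -> bool:
    # Leave a module in full precision when any skip pattern appears in its
    # qualified name, e.g. "gate" matches "model.layers.0.block_sparse_moe.gate".
    return not any(pattern in module_name for pattern in modules_to_not_convert)


# E.g. Mixtral's router stays unquantized while the expert projections convert:
print(should_quantize("model.layers.0.block_sparse_moe.gate", ["gate"]))          # False
print(should_quantize("model.layers.0.block_sparse_moe.experts.0.w1", ["gate"]))  # True
```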
This PR also makes it possible to run multi-modal models with AWQ:
```py
from transformers import pipeline
from PIL import Image
import requests
model_id = "ybelkada/llava-1.5-7b-hf-awq"
pipe = pipeline("image-to-text", model=model_id, device=0)
url = "https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nCan you please describe this image?\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 100})
print(outputs[0]["generated_text"])
```
![image](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png)
> USER: \nCan you please describe this image?\nASSISTANT: The image features a brown and white cat sitting on a green surface, possibly a carpet or a grassy area. The cat is holding a red ball in its paws, seemingly playing with it. The cat appears to be focused on the ball, possibly preparing to play or just enjoying the toy.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27950/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27950",
"html_url": "https://github.com/huggingface/transformers/pull/27950",
"diff_url": "https://github.com/huggingface/transformers/pull/27950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27950.patch",
"merged_at": 1703498817000
} |
https://api.github.com/repos/huggingface/transformers/issues/27949 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27949/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27949/comments | https://api.github.com/repos/huggingface/transformers/issues/27949/events | https://github.com/huggingface/transformers/pull/27949 | 2,035,770,836 | PR_kwDOCUB6oc5hr_Sj | 27,949 | In PreTrainedTokenizerBase add missing word in error message | {
"login": "petergtz",
"id": 3618401,
"node_id": "MDQ6VXNlcjM2MTg0MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3618401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petergtz",
"html_url": "https://github.com/petergtz",
"followers_url": "https://api.github.com/users/petergtz/followers",
"following_url": "https://api.github.com/users/petergtz/following{/other_user}",
"gists_url": "https://api.github.com/users/petergtz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petergtz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petergtz/subscriptions",
"organizations_url": "https://api.github.com/users/petergtz/orgs",
"repos_url": "https://api.github.com/users/petergtz/repos",
"events_url": "https://api.github.com/users/petergtz/events{/privacy}",
"received_events_url": "https://api.github.com/users/petergtz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27949). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,703 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This is a minor cosmetic change in the error message when invoking the tokenizer:
"text input must of type" -> "text input must be of type"
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
@sgugger
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27949",
"html_url": "https://github.com/huggingface/transformers/pull/27949",
"diff_url": "https://github.com/huggingface/transformers/pull/27949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27949.patch",
"merged_at": 1702307560000
} |
https://api.github.com/repos/huggingface/transformers/issues/27948 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27948/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27948/comments | https://api.github.com/repos/huggingface/transformers/issues/27948/events | https://github.com/huggingface/transformers/pull/27948 | 2,035,739,062 | PR_kwDOCUB6oc5hr4Ou | 27,948 | Hot-fix-mixstral-loss | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Actually will compute on GPU!"
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
Fixes
```python
load_balancing_loss_func
gate_logits = torch.cat(gate_logits, dim=0)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:7! (when checking argument for argument tensors in method wrapper_cat)
gate_logits = torch.cat(gate_logits, dim=0)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:6! (when checking argument for argument tensors in method wrapper_cat)
```
which appears when computing the loss in parallel settings (accelerate).
The actual tensors are pretty small, of shape (batch x seq_length, 2), so putting them all on CPU should be alright. There is no perfect solution for now.
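A minimal sketch of the CPU-gather option (hypothetical helper; the real fix lives in `load_balancing_loss_func` in the Mixtral modeling code, and the loss formula here is simplified from the Switch Transformers one):

```python
import torch
import torch.nn.functional as F


def load_balancing_loss(gate_logits, num_experts, top_k=2, device="cpu"):
    # Gather each layer's (small) router-logit tensor onto one device before
    # torch.cat, so model-parallel runs don't hit the device-mismatch error.
    logits = torch.cat([layer_logits.to(device) for layer_logits in gate_logits], dim=0)
    probs = F.softmax(logits, dim=-1)                       # (tokens, num_experts)
    _, selected = torch.topk(probs, top_k, dim=-1)          # (tokens, top_k)
    expert_mask = F.one_hot(selected, num_experts).float()  # (tokens, top_k, num_experts)
    tokens_per_expert = expert_mask.mean(dim=(0, 1))        # routing frequency per expert
    router_prob_per_expert = probs.mean(dim=0)              # mean router prob per expert
    return num_experts * torch.sum(tokens_per_expert * router_prob_per_expert)
```

With uniform routing this simplified loss evaluates to 1.0, its minimum for a balanced router.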
Either this, or we use some gather operation. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27948/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 7,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27948/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27948",
"html_url": "https://github.com/huggingface/transformers/pull/27948",
"diff_url": "https://github.com/huggingface/transformers/pull/27948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27948.patch",
"merged_at": 1702380029000
} |
https://api.github.com/repos/huggingface/transformers/issues/27947 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27947/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27947/comments | https://api.github.com/repos/huggingface/transformers/issues/27947/events | https://github.com/huggingface/transformers/pull/27947 | 2,035,719,098 | PR_kwDOCUB6oc5hrzzR | 27,947 | Fix test for auto_find_batch_size on multi-GPU | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
The new test added in https://github.com/huggingface/transformers/pull/27568 doesn't account for multi-GPU, where the `bs` is multiplied by `n_gpu` for the effective train batch size. This PR modifies the test slightly as a result to work on any number of GPUs (and CPU)
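For reference, the effective size the test now has to account for can be sketched as (illustrative helper, not Trainer's actual code):

```python
def effective_train_batch_size(per_device_batch_size, n_gpu, gradient_accumulation_steps=1):
    # Trainer's effective size multiplies the per-device value by max(1, n_gpu)
    # (data-parallel replicas) and by gradient accumulation; a test that asserts
    # on the raw per-device number therefore fails on multi-GPU runners.
    return per_device_batch_size * max(1, n_gpu) * gradient_accumulation_steps


print(effective_train_batch_size(16, 0))  # 16 on CPU
print(effective_train_batch_size(16, 4))  # 64 on a 4-GPU runner
```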
Fixes # (issue)
Failing nightly test
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27947/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27947",
"html_url": "https://github.com/huggingface/transformers/pull/27947",
"diff_url": "https://github.com/huggingface/transformers/pull/27947.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27947.patch",
"merged_at": 1702306661000
} |
https://api.github.com/repos/huggingface/transformers/issues/27946 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27946/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27946/comments | https://api.github.com/repos/huggingface/transformers/issues/27946/events | https://github.com/huggingface/transformers/pull/27946 | 2,035,678,553 | PR_kwDOCUB6oc5hrq4b | 27,946 | Update import message | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Currently, if you do have Accelerate installed, but its version is below the `min_version` specified [here](https://github.com/huggingface/transformers/blob/56be5e80e6cd5264012eb9ea84bd589233a503d9/src/transformers/utils/import_utils.py#L671), you will get a message saying Accelerate is not installed at all.
So I've improved the error message.
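A sketch of the two cases the improved message should distinguish (helper name, wording, and the 0.21.0 pin are all illustrative; the real check lives in `import_utils.py`):

```python
def _version_tuple(v):
    return tuple(int(part) for part in v.split("."))


def accelerate_import_message(installed, minimum="0.21.0"):
    # Distinguish the package being absent from it being present but too old.
    if installed is None:
        return "Accelerate is not installed."
    if _version_tuple(installed) < _version_tuple(minimum):
        return (f"Accelerate version {installed} is installed, "
                f"but version >= {minimum} is required.")
    return None  # check passes
```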
cc @muellerzr | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27946",
"html_url": "https://github.com/huggingface/transformers/pull/27946",
"diff_url": "https://github.com/huggingface/transformers/pull/27946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27946.patch",
"merged_at": 1702306686000
} |
https://api.github.com/repos/huggingface/transformers/issues/27945 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27945/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27945/comments | https://api.github.com/repos/huggingface/transformers/issues/27945/events | https://github.com/huggingface/transformers/pull/27945 | 2,035,613,327 | PR_kwDOCUB6oc5hrccI | 27,945 | Fix parameter count in readme for mixtral 45b | {
"login": "CyberTimon",
"id": 78795905,
"node_id": "MDQ6VXNlcjc4Nzk1OTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/78795905?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CyberTimon",
"html_url": "https://github.com/CyberTimon",
"followers_url": "https://api.github.com/users/CyberTimon/followers",
"following_url": "https://api.github.com/users/CyberTimon/following{/other_user}",
"gists_url": "https://api.github.com/users/CyberTimon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CyberTimon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CyberTimon/subscriptions",
"organizations_url": "https://api.github.com/users/CyberTimon/orgs",
"repos_url": "https://api.github.com/users/CyberTimon/repos",
"events_url": "https://api.github.com/users/CyberTimon/events{/privacy}",
"received_events_url": "https://api.github.com/users/CyberTimon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27945). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the parameter count in the readme. In the [mistral blog post](https://mistral.ai/news/mixtral-of-experts/) they state that it's a 45b model, not an 84b/85b one.
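A back-of-the-envelope check of why the total lands near 45-47B rather than 8 x 7B = 56B: only the MoE feed-forward experts are replicated eight times, while attention and embeddings are shared. The layer shapes below are the publicly reported ones and should be treated as assumptions, not the official config:

```python
# Assumed Mixtral-8x7B shapes (approximate back-of-the-envelope):
hidden, intermediate, layers = 4096, 14336, 32
num_experts, num_heads, num_kv_heads, vocab = 8, 32, 8, 32000

head_dim = hidden // num_heads
expert_ffn = 3 * hidden * intermediate                         # gate/up/down projections
moe = layers * num_experts * expert_ffn                        # replicated 8x per layer
attention = layers * (2 * hidden * hidden                      # q_proj, o_proj
                      + 2 * hidden * num_kv_heads * head_dim)  # k_proj, v_proj (GQA)
embeddings = 2 * vocab * hidden                                # input embeddings + lm_head
total = moe + attention + embeddings
print(f"{total / 1e9:.1f}B")  # roughly 46-47B, far from 84-85B
```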
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27945/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27945",
"html_url": "https://github.com/huggingface/transformers/pull/27945",
"diff_url": "https://github.com/huggingface/transformers/pull/27945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27945.patch",
"merged_at": 1702306729000
} |
https://api.github.com/repos/huggingface/transformers/issues/27944 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27944/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27944/comments | https://api.github.com/repos/huggingface/transformers/issues/27944/events | https://github.com/huggingface/transformers/pull/27944 | 2,035,356,548 | PR_kwDOCUB6oc5hqjWg | 27,944 | Update bounding box format everywhere | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27944). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes an issue pointed out by https://discuss.huggingface.co/t/owl-vit-postprocess-api-bbox-conversion/65309/2, namely we state that we postprocess to the COCO API, but effectively we're using the Pascal VOC format.
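For reference, converting between the two formats is a one-liner each way (COCO: `[x_min, y_min, width, height]`; Pascal VOC: `[x_min, y_min, x_max, y_max]`):

```python
def coco_to_pascal_voc(box):
    # COCO [x_min, y_min, width, height] -> Pascal VOC [x_min, y_min, x_max, y_max]
    x_min, y_min, width, height = box
    return [x_min, y_min, x_min + width, y_min + height]


def pascal_voc_to_coco(box):
    # Pascal VOC [x_min, y_min, x_max, y_max] -> COCO [x_min, y_min, width, height]
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]


print(coco_to_pascal_voc([10, 20, 30, 40]))  # [10, 20, 40, 60]
```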
A [very nice blog post](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#:~:text=albumentations%20is%20similar%20to%20pascal_voc,the%20height%20of%20the%20image) explaining all bounding box formats goes into more detail. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27944/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27944",
"html_url": "https://github.com/huggingface/transformers/pull/27944",
"diff_url": "https://github.com/huggingface/transformers/pull/27944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27944.patch",
"merged_at": 1702317822000
} |
https://api.github.com/repos/huggingface/transformers/issues/27943 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27943/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27943/comments | https://api.github.com/repos/huggingface/transformers/issues/27943/events | https://github.com/huggingface/transformers/pull/27943 | 2,035,267,751 | PR_kwDOCUB6oc5hqPz_ | 27,943 | Fix PatchTSMixer Docstrings | {
"login": "vijaye12",
"id": 25958261,
"node_id": "MDQ6VXNlcjI1OTU4MjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/25958261?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vijaye12",
"html_url": "https://github.com/vijaye12",
"followers_url": "https://api.github.com/users/vijaye12/followers",
"following_url": "https://api.github.com/users/vijaye12/following{/other_user}",
"gists_url": "https://api.github.com/users/vijaye12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vijaye12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vijaye12/subscriptions",
"organizations_url": "https://api.github.com/users/vijaye12/orgs",
"repos_url": "https://api.github.com/users/vijaye12/repos",
"events_url": "https://api.github.com/users/vijaye12/events{/privacy}",
"received_events_url": "https://api.github.com/users/vijaye12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@kashif ",
"thanks! @amyeroberts \r\n"
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | Fix PatchTSMixer Docstring indentation issues. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27943/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27943",
"html_url": "https://github.com/huggingface/transformers/pull/27943",
"diff_url": "https://github.com/huggingface/transformers/pull/27943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27943.patch",
"merged_at": 1702295817000
} |
https://api.github.com/repos/huggingface/transformers/issues/27942 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27942/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27942/comments | https://api.github.com/repos/huggingface/transformers/issues/27942/events | https://github.com/huggingface/transformers/pull/27942 | 2,035,251,962 | PR_kwDOCUB6oc5hqMaD | 27,942 | [`Add Mixtral`] Adds support for the Mixtral MoE | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Are there any other aux losses apart from the LM loss?",
"The auxiliary loss can be computed with `output_router_logits = True` automatically, other losses like `z_loss` can be imported from Switch transformers modeling code! Custom losses should be able to use the router_logits returned by the model"
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
Adds the latest MoE model from mistral AI | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27942/reactions",
"total_count": 62,
"+1": 26,
"-1": 0,
"laugh": 0,
"hooray": 18,
"confused": 0,
"heart": 18,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27942/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27942",
"html_url": "https://github.com/huggingface/transformers/pull/27942",
"diff_url": "https://github.com/huggingface/transformers/pull/27942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27942.patch",
"merged_at": 1702295428000
} |
https://api.github.com/repos/huggingface/transformers/issues/27941 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27941/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27941/comments | https://api.github.com/repos/huggingface/transformers/issues/27941/events | https://github.com/huggingface/transformers/issues/27941 | 2,035,248,886 | I_kwDOCUB6oc55T272 | 27,941 | The "source" button in docs points to 404 | {
"login": "R-N",
"id": 1442761,
"node_id": "MDQ6VXNlcjE0NDI3NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1442761?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/R-N",
"html_url": "https://github.com/R-N",
"followers_url": "https://api.github.com/users/R-N/followers",
"following_url": "https://api.github.com/users/R-N/following{/other_user}",
"gists_url": "https://api.github.com/users/R-N/gists{/gist_id}",
"starred_url": "https://api.github.com/users/R-N/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/R-N/subscriptions",
"organizations_url": "https://api.github.com/users/R-N/orgs",
"repos_url": "https://api.github.com/users/R-N/repos",
"events_url": "https://api.github.com/users/R-N/events{/privacy}",
"received_events_url": "https://api.github.com/users/R-N/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @R-N, \r\n\r\nThanks for raising this issue! The link should now work as v4.36 has been released. ",
"> Hi @R-N,\r\n> \r\n> Thanks for raising this issue! The link should now work as v4.36 has been released.\r\n\r\nConfirmed, it works now. Thank you"
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
Windows 11 64 bit 21h2
Google chrome latest
### Who can help?
@stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Open [latest Trainer docs](https://huggingface.co./docs/transformers/main_classes/trainer)
2. Scroll to training_step
3. Click "source"
For me, it opens [this link](https://github.com/huggingface/transformers/blob/v4.36.0/src/transformers/trainer.py#L2697), which shows 404 for me.
### Expected behavior
It opens the source code, for example, source code of training_step. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27941/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27940 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27940/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27940/comments | https://api.github.com/repos/huggingface/transformers/issues/27940/events | https://github.com/huggingface/transformers/pull/27940 | 2,035,220,555 | PR_kwDOCUB6oc5hqFdH | 27,940 | Fix SDPA dispatch & make SDPA CI compatible with torch<2.1.1 | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | As per title.
On torch==2.0.1, these do pass
```
RUN_SLOW=1 pytest tests/models/bart -s -vvvvv -k "torchscript"
RUN_SLOW=1 pytest tests/models/llama -s -vvvvv -k "torchscript"
RUN_SLOW=1 pytest tests/models/whisper -s -vvvvv -k "torchscript"
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bert -s -vvvvv
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/llama -s -vvvvv
```
On torch==2.1.1, these do pass (https://github.com/huggingface/transformers/pull/26572#issuecomment-1847774858)
```
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/ -s -vvvvv -k "flash or sdpa"
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/whisper -s -vvvvv -k "llama"
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/llama -s -vvvvv
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bart -s -vvvvv
RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/bert -s -vvvvv
```
There was a bug where, even though we manually request `attn_implementation="eager"`, we would still go into the SDPA control flow and hard-check that the requirements are met, which is not what we want. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27940",
"html_url": "https://github.com/huggingface/transformers/pull/27940",
"diff_url": "https://github.com/huggingface/transformers/pull/27940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27940.patch",
"merged_at": 1702288598000
} |
https://api.github.com/repos/huggingface/transformers/issues/27939 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27939/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27939/comments | https://api.github.com/repos/huggingface/transformers/issues/27939/events | https://github.com/huggingface/transformers/pull/27939 | 2,034,958,797 | PR_kwDOCUB6oc5hpLtK | 27,939 | fix cpm-ant tokenizer name | {
"login": "jq460494839",
"id": 4471203,
"node_id": "MDQ6VXNlcjQ0NzEyMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4471203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jq460494839",
"html_url": "https://github.com/jq460494839",
"followers_url": "https://api.github.com/users/jq460494839/followers",
"following_url": "https://api.github.com/users/jq460494839/following{/other_user}",
"gists_url": "https://api.github.com/users/jq460494839/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jq460494839/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jq460494839/subscriptions",
"organizations_url": "https://api.github.com/users/jq460494839/orgs",
"repos_url": "https://api.github.com/users/jq460494839/repos",
"events_url": "https://api.github.com/users/jq460494839/events{/privacy}",
"received_events_url": "https://api.github.com/users/jq460494839/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,704 | 1,703 | NONE | null | # What does this PR do?
After comparison, I found that the tokenizer names in the config file on HuggingFace and in the transformers library are inconsistent.
Fixes #27938
@ArthurZucker @zh-zheng
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27939/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27939",
"html_url": "https://github.com/huggingface/transformers/pull/27939",
"diff_url": "https://github.com/huggingface/transformers/pull/27939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27939.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27938 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27938/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27938/comments | https://api.github.com/repos/huggingface/transformers/issues/27938/events | https://github.com/huggingface/transformers/issues/27938 | 2,034,954,335 | I_kwDOCUB6oc55SvBf | 27,938 | ValueError: Tokenizer class CPMAntTokenizer does not exist or is not currently imported. | {
"login": "jq460494839",
"id": 4471203,
"node_id": "MDQ6VXNlcjQ0NzEyMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4471203?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jq460494839",
"html_url": "https://github.com/jq460494839",
"followers_url": "https://api.github.com/users/jq460494839/followers",
"following_url": "https://api.github.com/users/jq460494839/following{/other_user}",
"gists_url": "https://api.github.com/users/jq460494839/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jq460494839/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jq460494839/subscriptions",
"organizations_url": "https://api.github.com/users/jq460494839/orgs",
"repos_url": "https://api.github.com/users/jq460494839/repos",
"events_url": "https://api.github.com/users/jq460494839/events{/privacy}",
"received_events_url": "https://api.github.com/users/jq460494839/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"```python\r\nfrom transformers import CpmAntTokenizer\r\n```\r\n\r\nis what you are looking for",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,704 | 1,704 | NONE | null | ### System Info
transformers==4.35.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. download cpm-ant-10b from huggingface
2. load cpm-ant tokenizer locally
### Expected behavior
```
Traceback (most recent call last):
  File "/opt/projects/FastChat/fastchat/train/train.py", line 301, in <module>
    train()
  File "/opt/projects/FastChat/fastchat/train/train.py", line 273, in train
    tokenizer = transformers.AutoTokenizer.from_pretrained(
  File "/root/miniconda/envs/torch_npu/lib/python3.9/site-packages/transformers/models/auto/tokenization_auto.py", line 766, in from_pretrained
    raise ValueError(
ValueError: Tokenizer class CPMAntTokenizer does not exist or is not currently imported.
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27938/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27937 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27937/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27937/comments | https://api.github.com/repos/huggingface/transformers/issues/27937/events | https://github.com/huggingface/transformers/issues/27937 | 2,034,800,258 | I_kwDOCUB6oc55SJaC | 27,937 | Whisper Large-v3 has problems with language detection | {
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Left it running as this is model specific rather than transformers bug IMO",
"Indeed - feel free to post on the OpenAI repo since it looks like a model regression from large-v2 -> v3: https://github.com/openai/whisper/discussions",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,707 | 1,707 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The audio file: https://drive.google.com/file/d/1EFWm7GpP79NUEmUO6rsLo444OugGRDHP/view?usp=sharing
```python
from transformers import pipeline
pipe = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
output = pipe("sample.flac")
print(output)
```
v3 output: `{'text': ' Mari kita perlahan-lahan dengan penyelidikan baru dan baru yang telah dilakukan tahun lepas.'}`
v2 output: `{'text': " Let's go slow with this new and novel legalization passed last year."}`
tiny.en output: `{'text': " Let's go slow with this new and novel legalization past last year."}`
### Expected behavior
Output should be like v2 and the tiny.en model. I suspect there is something very wrong with language detection. I couldn't run v3 in the original repo (https://github.com/openai/whisper) due to OOM, so I'm not sure if this is a problem with the v3 model itself or with the inference code in HF.
Seems related: https://github.com/huggingface/transformers/issues/27368#issuecomment-1835564217 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27937/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27936 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27936/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27936/comments | https://api.github.com/repos/huggingface/transformers/issues/27936/events | https://github.com/huggingface/transformers/issues/27936 | 2,034,584,228 | I_kwDOCUB6oc55RUqk | 27,936 | Problems importing LlavaForConditionalGeneration | {
"login": "ppsmk388",
"id": 60417397,
"node_id": "MDQ6VXNlcjYwNDE3Mzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/60417397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ppsmk388",
"html_url": "https://github.com/ppsmk388",
"followers_url": "https://api.github.com/users/ppsmk388/followers",
"following_url": "https://api.github.com/users/ppsmk388/following{/other_user}",
"gists_url": "https://api.github.com/users/ppsmk388/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ppsmk388/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ppsmk388/subscriptions",
"organizations_url": "https://api.github.com/users/ppsmk388/orgs",
"repos_url": "https://api.github.com/users/ppsmk388/repos",
"events_url": "https://api.github.com/users/ppsmk388/events{/privacy}",
"received_events_url": "https://api.github.com/users/ppsmk388/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! 👋 llava is no part of the latest release! You need to install from source or wait until tomorrow! 🤗",
"ok, thank you for your reply"
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
transformers version: 4.35.2
Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
Python version: 3.9.2
Huggingface_hub version: 0.19.4
Safetensors version: 0.4.1
Accelerate version: 0.25.0
PyTorch version (GPU?): 2.1.1+cu121 (True)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import LlavaForConditionalGeneration
```
### Expected behavior
An error message will be displayed:
```
ImportError: cannot import name 'LlavaForConditionalGeneration' from 'transformers' (/data/kkk/anaconda3/envs/va/lib/python3.9/site-packages/transformers/__init__.py)
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27936/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27935 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27935/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27935/comments | https://api.github.com/repos/huggingface/transformers/issues/27935/events | https://github.com/huggingface/transformers/pull/27935 | 2,034,508,529 | PR_kwDOCUB6oc5hnrHn | 27,935 | Fix tensor-parallelism link | {
"login": "steilgedacht",
"id": 89748204,
"node_id": "MDQ6VXNlcjg5NzQ4MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/89748204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/steilgedacht",
"html_url": "https://github.com/steilgedacht",
"followers_url": "https://api.github.com/users/steilgedacht/followers",
"following_url": "https://api.github.com/users/steilgedacht/following{/other_user}",
"gists_url": "https://api.github.com/users/steilgedacht/gists{/gist_id}",
"starred_url": "https://api.github.com/users/steilgedacht/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/steilgedacht/subscriptions",
"organizations_url": "https://api.github.com/users/steilgedacht/orgs",
"repos_url": "https://api.github.com/users/steilgedacht/repos",
"events_url": "https://api.github.com/users/steilgedacht/events{/privacy}",
"received_events_url": "https://api.github.com/users/steilgedacht/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you run `make style`? ",
"running `make style` makes no changes, do you have any ideas on what to change?",
"Make sure you have `ruff==0.1.5` 🤗 and that you rebased on main! ",
"Okay, I have already `ruff=0.1.5` and rebased it, am I doing anything wrong here? 😅\r\n\r\n![image](https://github.com/huggingface/transformers/assets/89748204/f7e55e4a-aa1c-4a56-af64-6b223c31ef00)\r\n\r\n\r\n",
"Ah no I have a similar issue. just `make fixup` should help I think otherwise just reverting the changes ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | # What does this PR do?
Replaces the old link in the llama configuration file with a link to the new section on the website.
Clean PR of #27840
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27935/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27935",
"html_url": "https://github.com/huggingface/transformers/pull/27935",
"diff_url": "https://github.com/huggingface/transformers/pull/27935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27935.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27934 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27934/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27934/comments | https://api.github.com/repos/huggingface/transformers/issues/27934/events | https://github.com/huggingface/transformers/pull/27934 | 2,034,454,834 | PR_kwDOCUB6oc5hngpN | 27,934 | [BEiT] Fix test | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the failing test_forward_signature test for `BeitBackbone`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27934/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27934",
"html_url": "https://github.com/huggingface/transformers/pull/27934",
"diff_url": "https://github.com/huggingface/transformers/pull/27934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27934.patch",
"merged_at": 1702282622000
} |
https://api.github.com/repos/huggingface/transformers/issues/27933 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27933/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27933/comments | https://api.github.com/repos/huggingface/transformers/issues/27933/events | https://github.com/huggingface/transformers/issues/27933 | 2,034,416,293 | I_kwDOCUB6oc55Qrql | 27,933 | Migrate to Pydantic v2 | {
"login": "lmmx",
"id": 2979452,
"node_id": "MDQ6VXNlcjI5Nzk0NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2979452?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lmmx",
"html_url": "https://github.com/lmmx",
"followers_url": "https://api.github.com/users/lmmx/followers",
"following_url": "https://api.github.com/users/lmmx/following{/other_user}",
"gists_url": "https://api.github.com/users/lmmx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lmmx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmmx/subscriptions",
"organizations_url": "https://api.github.com/users/lmmx/orgs",
"repos_url": "https://api.github.com/users/lmmx/repos",
"events_url": "https://api.github.com/users/lmmx/events{/privacy}",
"received_events_url": "https://api.github.com/users/lmmx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"please migrate to pydantic v2 🤗",
"Please do this!",
"Hi @lmmx, thanks for raising this! \r\n\r\nWe can certainly think about having v2 with v1 fallback support. However, I suspect the previous difficulty being raised was that it's hard to manage fallbacks when there's an incompatibility with a third-party library we interface with. Completely agree that pinning to an old version is brittle and it's best we update if and when possible. \r\n\r\nWDYT @ydshieh?",
"Hi, thank you for asking!\r\n\r\nFor now, I can (at least) trigger a CI run with `Pydantic v2` and let's see how it goes to decide what to go!",
"A PR (and its CI) is opened #28728 🤞 !",
"CircleCI is good. I still need to check with docker image and other CI on github",
"Docker image is fine\r\n\r\nhttps://github.com/huggingface/transformers/actions/runs/7669130818/job/20902409442\r\n\r\n(the failure at the end is not related to `Pydantic` -> everything is installed successfully)"
] | 1,702 | 1,706 | 1,706 | NONE | null | ### Feature request
Pydantic v2 was released five months ago in June 2023.
Transformers has pinned to v1 (#24596), which should only be used as a temporary solution.
Leaving it this way means that the many new features of Pydantic 2 are missed, and leaves little hope for the library to keep pace as a roadmap to v3 is already emerging.
In #24597 it was mentioned that part of the barrier was (at the time) in external dependencies that couple Transformers to v1:
> Regarding using Pydantic V2, I am afraid that the involved places are not directly in `transformers` codebase.
>
> For example, in
>
> [#24596 (comment)](https://github.com/huggingface/transformers/pull/24596#issuecomment-1615176591)
>
> it shows
>
> ```shell
> 2023-06-30T20:07:31.9883431Z > [19/19] RUN python3 -c "from deepspeed.launcher.runner import main":
> 2023-06-30T20:07:31.9883916Z 1.621 from deepspeed.runtime.zero.config import DeepSpeedZeroConfig
> 2023-06-30T20:07:31.9884613Z 1.621 File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py", line 76, in <module>
> 2023-06-30T20:07:31.9885116Z 1.621 class DeepSpeedZeroConfig(DeepSpeedConfigModel):
> 2023-06-30T20:07:31.9885814Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 171, in __new__
> 2023-06-30T20:07:31.9886256Z 1.621 set_model_fields(cls, bases, config_wrapper, types_namespace)
> 2023-06-30T20:07:31.9886812Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 361, in set_model_fields
> 2023-06-30T20:07:31.9887329Z 1.621 fields, class_vars = collect_model_fields(cls, bases, config_wrapper, types_namespace, typevars_map=typevars_map)
> 2023-06-30T20:07:31.9888039Z 1.621 File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_fields.py", line 112, in collect_model_fields
> 2023-06-30T20:07:31.9888950Z 1.621 raise NameError(f'Field "{ann_name}" has conflict with protected namespace "{protected_namespace}"')
> 2023-06-30T20:07:31.9889546Z 1.621 NameError: Field "model_persistence_threshold" has conflict with protected namespace "
> ```
>
> which indicates `/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/config.py` using `pydantic`.
>
> It's the 3rd party libraries using pydantic have to do something in order to be run with pydantic V2. Right now, `transformers` can only pin v1 and wait.
These barriers should at the very least be enumerated; I’m sure there are ways to deal with them without holding the entire repo’s development back.
Libraries such as SQLModel have included support for both v1 and v2.
- https://github.com/tiangolo/sqlmodel/pull/722
- https://github.com/tiangolo/sqlmodel/pull/709
- https://github.com/tiangolo/sqlmodel/pull/699 (first draft, ultimately not merged)
The pin adopted in Transformers has already begun to cause clashes with other libraries on v2 such as Gradio (v2.4.2 as raised in #27273)
> Eventually, if `pydantic>=2` is used by many libraries, we might consider to update the requirement (as long as not so many things breaking 😄 )
I fully appreciate the need to maintain backwards compatibility, and it is possible to support both, as examples like SQLModel have demonstrated.
### Motivation
The syntax of Pydantic v1 is incompatible with v2. Backpinning should only be used as a temporary measure; it is not a sustainable long-term approach. Specifically, the pin would be relaxed to `pydantic<3.0.0` as in SQLModel.
### Your contribution
I am opening this feature request to begin discussion and hopefully contribute to its resolution. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27933/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27933/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27932 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27932/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27932/comments | https://api.github.com/repos/huggingface/transformers/issues/27932/events | https://github.com/huggingface/transformers/pull/27932 | 2,034,339,234 | PR_kwDOCUB6oc5hnJXa | 27,932 | Adds VIP-llava to transformers | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27932). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
VIP-Llava is a new Llava variant. It seems the only difference between Llava and VIP-Llava is that VIP-Llava uses a projector layernorm before passing the hidden states into the MM projector. It also concatenates several hidden states from the image encoder before passing them to the multi-modal projector.
```python
from transformers import pipeline
from PIL import Image
import requests
model_id = "ybelkada/vip-llava-7b"
pipe = pipeline("image-to-text", model=model_id, model_kwargs={"load_in_4bit": True, "use_flash_attention_2": True})
url = "https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nCan you please describe this image?\nASSISTANT:"
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 100})
print(outputs[0]["generated_text"])
>>> USER: <image>
Can you please describe this image?
ASSISTANT: The image features a brown and white cat sitting on a green surface, with a red ball in its paw. The cat appears to be playing with the ball, possibly a sports ball, as it is positioned in a relaxed manner. The cat's eyes are wide open, indicating that it is focused on the ball and possibly in the middle of a playful moment.
```
![image](https://huggingface.co./datasets/huggingface/documentation-images/resolve/main/diffusers/compel-neg.png)
> The image features a brown and white cat sitting on a green surface, with a red ball in its paw. The cat appears to be playing with the ball, possibly a sports ball, as it is positioned in a relaxed manner. The cat's eyes are wide open, indicating that it is focused on the ball and possibly in the middle of a playful moment.
Also compatible with Flash Attention 2.
https://github.com/mu-cai/ViP-LLaVA
cc @ArthurZucker @NielsRogge @mu-cai @haotian-liu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27932/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27932/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27932",
"html_url": "https://github.com/huggingface/transformers/pull/27932",
"diff_url": "https://github.com/huggingface/transformers/pull/27932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27932.patch",
"merged_at": 1702460544000
} |
https://api.github.com/repos/huggingface/transformers/issues/27931 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27931/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27931/comments | https://api.github.com/repos/huggingface/transformers/issues/27931/events | https://github.com/huggingface/transformers/pull/27931 | 2,034,296,431 | PR_kwDOCUB6oc5hnAoy | 27,931 | [`Core generation`] Adds support for static KV cache | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27931). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"If I understand correctly, this PR should close the existing gap between inference with transformers + AutoGPTQ and inference with ExLlama, as the VRAM usage would become much more controlled. I'm rooting for it :)",
"Thanks! 🤗",
"Exciting PR! ",
"1. This was some debug / tests but removed hehe\r\n2. The idea is to be able to save this and track usage + just make the api seamless. You can pass the cache instance but it should be able to be deduced. It's not a generation kwargs for me but a generation_config argument rather than that \r\n3. Attention mask is mostly what is left to adress TBH, because the control flow introduced are non trivial that is why I have a seperate PR that tried to take inspiration for JAX. Still working on it! Jax does update the attention mask with the cache see:\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_flax_llama.py#L225\r\n\r\nMain attention challenge is that we have different attention classes, which need different kind of masks. My goal is to only pass the 2d attention mask, and have each attention layers use it. ",
"How do I use this PR on npu device, here is my demo with errors.\r\n```python\r\nimport torch\r\nimport torch_npu\r\nfrom transformers import LlamaForCausalLM, LlamaTokenizer, TextStreamer\r\n\r\n\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(\r\n \"../Llama-2-7b-chat-hf\",\r\n device_map=\"npu:2\"\r\n)\r\nres = tokenizer(\"hello\")\r\n\r\nllama_model = LlamaForCausalLM.from_pretrained(\r\n \"../Llama-2-7b-chat-hf\",\r\n device_map=\"npu:2\"\r\n)\r\nstreamer = TextStreamer(tokenizer)\r\n\r\n\r\nprint(\"load successful\")\r\nwhile True:\r\n ins = input(\"user: \")\r\n res = tokenizer.encode(ins, return_tensors=\"pt\").to(\"npu:2\")\r\n print(res.dtype)\r\n outputs = llama_model.generate(\r\n inputs=res,\r\n streamer=streamer,\r\n max_new_tokens=10,\r\n )\r\n```\r\n\r\nTraceback (most recent call last):\r\n File \"test.py\", line 27, in <module>\r\n streamer=streamer,\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/transformers/generation/utils.py\", line 1790, in generate\r\n return self.sample(\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/transformers/generation/utils.py\", line 2887, in sample\r\n outputs = self(\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 1206, in forward\r\n outputs = self.model(\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1518, in 
_wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 1093, in forward\r\n layer_outputs = decoder_layer(\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 820, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/conda/envs/PyTorch2.1.0/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py\", line 438, in forward\r\n raise ValueError(\r\nValueError: Attention mask should be of size (1, 1, 2, 2), but is torch.Size([1, 2])",
"This PR is still very much in draft mode. A script can be found in the tests, but does not support everything yet! Would recommend you to wait a tad bit, should be in a better state by the end of the week",
"cc @oobabooga the PR is more or less ready, would love to get your feedback from using it in `oobabooga`! (this needs a bit of custom work to compile only the decoding step, but should already help!) ",
"I had a look at them, the main pain point here is: \r\n- the overhead on a generation is ~2 token /s \r\n- the overhead of not using the `position_ids` tensor in compiled mode is more than 40 tokens / s\r\ns",
"Alternative for ROPE that does not break cuda graphs + faster (5%) than vanilla recomputing. Issue is you save these tensor so loading will be wrong for sure.\r\n```python\r\nclass LlamaRotaryEmbedding(nn.Module):\r\n def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):\r\n super().__init__()\r\n\r\n self.dim = dim\r\n self.max_position_embeddings = max_position_embeddings\r\n self.base = base\r\n inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64).float().to(device) / self.dim))\r\n self.register_buffer(\"inv_freq\", inv_freq, persistent=False)\r\n # Build here to make `torch.jit.trace` work.\r\n self._set_cos_sin_cache(\r\n seq_len=max_position_embeddings, device=self.inv_freq.device, dtype=torch.get_default_dtype()\r\n )\r\n\r\n \r\n def _set_cos_sin_cache(self, seq_len, device, dtype):\r\n self.max_seq_len_cached = seq_len\r\n t = torch.arange(self.max_seq_len_cached, device=device, dtype=self.inv_freq.dtype)\r\n\r\n freqs = torch.outer(t, self.inv_freq)\r\n # Different from paper, but it uses a different permutation in order to obtain the same calculation\r\n emb = torch.cat((freqs, freqs), dim=-1)\r\n self.register_buffer(\"cos_cached\", emb.cos().to(dtype), persistent=False)\r\n self.register_buffer(\"sin_cached\", emb.sin().to(dtype), persistent=False)\r\n\r\n \r\n def forward(self, x, position_ids, seq_len=None):\r\n # x: [bs, num_attention_heads, seq_len, head_size]\r\n if seq_len > self.max_seq_len_cached:\r\n self._set_cos_sin_cache(seq_len=seq_len, device=x.device, dtype=x.dtype)\r\n cos = F.embedding(position_ids, self.cos_cached)[0,:,:].to(dtype=x.dtype)\r\n sin = F.embedding(position_ids, self.sin_cached)[0,:,:].to(dtype=x.dtype)\r\n return cos, sin\r\n```\r\n<img width=\"896\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/48595927/0e37d453-2712-462b-9e33-e6183e3bd983\">\r\n",
"@ArthurZucker when will the next minor release (4.37.3?) of transformers come out? Will this PR be included in 4.37.3? ",
"@tsengalb99 v4.37.3 would be a micro release. These are reserved for patch fixes which resolve newly introduced breaking changes in the repo or fixing regressions from the last release.\r\n\r\nThe next minor release is v4.38. We release on a roughly monthly schedule, so would be in around two weeks. \r\n\r\nIf you want to use this feature immediately, you can [install from source](https://huggingface.co./docs/transformers/installation#install-from-source). ",
"Is this working in general or only for some specific hardware, etc? \r\nI tried using fresh installs with WSL2, ubuntu, Cuda 11.8 (and again with 12.1) on an RTX A5000\r\n\r\n I installed from source today and tried the test script (https://gist.github.com/ArthurZucker/2dd607c4333ac4c489af30f54a1d8a2d)\r\n\r\n but the model.generate for compiled forward throws runtimeError: attn_bias is not correctly aligned (strideH). attn_bias.stride(1) = 21, and should be a multiple of 8. \r\n\r\n",
"What is the recommended way to use cuda graphs with this PR? The torch cuda graph wrapper does not appear to be working iirc due to this line `attention_mask is not None and not torch.all(attention_mask[..., 0] == 1) and q_len != 1` / the equivalent in all model files.",
"This line is probably gonne be removed in #28937 \r\nI'update once this is ready.\r\nI use torch nightly so torch 2.3! ",
"@ArthurZucker Great to see the PR has been merged. I tried with the latest nightlies on H100, and running into errors (full [logs](https://gist.github.com/chauhang/a0dabe50c9958af5c92a3b1a23d1e888))\r\n\r\nwith sdpa attention: \r\n`RuntimeError: Event device type CUDA does not match blocking stream's device type CPU.`\r\n\r\nwith flash_attention_2 (fails in import itself):\r\n`flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN3c104cuda9SetDeviceEi`",
"Nice, could you share a full stack trace? 🤗 \r\n",
"@chauhang \r\n\r\n> with flash_attention_2 (fails in import itself):\r\n\r\nThis means that the your local torch version does not match the torch version flash-attn was built against. You would need to uninstall & rebuild flash-attn."
] | 1,702 | 1,707 | 1,707 | COLLABORATOR | null | # ~4x speedups with cuda graphs! 🥳
Currently getting ~4x speedups compared to dynamic cache with torch.compile for a single forward pass (agnostic to batch but faster for smaller batch)
Forward is very very very fast, but materializing the input costs a bit!
~10ms / forward is what we get to!
- [x] Refactors the way we deal with attention mask:
- causal and padding are separated
- does not rely on the `past_key_values`
- merged in 2 line. No attention mask utils are needed, no extra complicated logic all explicit
- LlamaAttention is not self contained, this added 20% overhead in a simple forward
- Gets rid of the entire `mask_attn_utils` 😄
- [x] Save the cache class in the generation config
- [x] Init the cache with the batch size (from the generate call) and the `max_length` from the generation config (taking max_new_tokens) into account
- [x] torch.compile
## Benchmark using [af097af](https://github.com/huggingface/transformers/pull/27931/commits/af097af7582de01fcf73856e7fa37be972e96456)
## Use it in generate:
Use this: EDIT: TO COME
## Failing test left for @gante
Related to the fact that I don't return `past_key_values` / is None so the `test_new_cache_format` fails. I don't want to dive in this.
fixes #28075 , fixes #28610, fixes #28190 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27931/reactions",
"total_count": 21,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 12,
"rocket": 9,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27931/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27931",
"html_url": "https://github.com/huggingface/transformers/pull/27931",
"diff_url": "https://github.com/huggingface/transformers/pull/27931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27931.patch",
"merged_at": 1707389434000
} |
https://api.github.com/repos/huggingface/transformers/issues/27930 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27930/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27930/comments | https://api.github.com/repos/huggingface/transformers/issues/27930/events | https://github.com/huggingface/transformers/issues/27930 | 2,034,262,567 | I_kwDOCUB6oc55QGIn | 27,930 | An error when creating test_dataloader in Time series transformer | {
"login": "kkckk1110",
"id": 144304282,
"node_id": "U_kgDOCJnomg",
"avatar_url": "https://avatars.githubusercontent.com/u/144304282?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkckk1110",
"html_url": "https://github.com/kkckk1110",
"followers_url": "https://api.github.com/users/kkckk1110/followers",
"following_url": "https://api.github.com/users/kkckk1110/following{/other_user}",
"gists_url": "https://api.github.com/users/kkckk1110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkckk1110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkckk1110/subscriptions",
"organizations_url": "https://api.github.com/users/kkckk1110/orgs",
"repos_url": "https://api.github.com/users/kkckk1110/repos",
"events_url": "https://api.github.com/users/kkckk1110/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkckk1110/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
] | [
"Hey! Could you make sure you provide a reproducer, isolating the bug? We can't really debug your code in your stead 🤗 ",
"Thanks to your quick reply! Here are my detailed codes related to the bug.\r\n\r\n```python\r\nfrom transformers import TimeSeriesTransformerConfig, TimeSeriesTransformerForPrediction\r\n\r\nconfig = TimeSeriesTransformerConfig(\r\n prediction_length = prediction_length,\r\n context_length = prediction_length,\r\n lags_sequence = lags_sequence,\r\n num_time_features = len(time_features) + 1,\r\n num_static_categorical_features = 1,\r\n num_dynamic_real_features = 48,\r\n \r\n cardinality = [len(train_dataset)],\r\n embedding_dimension = [2],\r\n\r\n # transformer params:\r\n encoder_layers = 4,\r\n decoder_layers = 4,\r\n d_model = 32,\r\n)\r\n\r\nmodel = TimeSeriesTransformerForPrediction(config)\r\n\r\ndef create_test_dataloader(\r\n config: PretrainedConfig,\r\n freq,\r\n data,\r\n batch_size: int,\r\n **kwargs,\r\n):\r\n PREDICTION_INPUT_NAMES = [\r\n \"past_time_features\",\r\n \"past_values\", \r\n \"past_observed_mask\",\r\n \"future_time_features\",\r\n ]\r\n if config.num_static_categorical_features > 0:\r\n PREDICTION_INPUT_NAMES.append(\"static_categorical_features\")\r\n\r\n if config.num_static_real_features > 0:\r\n PREDICTION_INPUT_NAMES.append(\"static_real_features\")\r\n \r\n transformation = create_transformation(freq, config)\r\n transformed_data = transformation.apply(data, is_train=False) #!!!\r\n\r\n # we create a Test Instance splitter which will sample the very last\r\n # context window seen during training only for the encoder.\r\n instance_sampler = create_instance_splitter(config, \"test\")\r\n\r\n # we apply the transformations in test mode\r\n testing_instances = instance_sampler.apply(transformed_data, is_train=False)\r\n\r\n return as_stacked_batches(\r\n testing_instances,\r\n batch_size=batch_size,\r\n output_type=torch.tensor,\r\n field_names=PREDICTION_INPUT_NAMES,\r\n )\r\n```",
"Actually, most of the codes are identical with those in https://huggingface.co./blog/time-series-transformers, except that I set num_dynamic_real_features = 48 in the Config",
"Yep, but this changes the value of `self._number_of_features`, which affect `self.feature_size` in return. The `past_time_features` given to the model need to be updated as well. I'm betting the issue is with the dataset rather than anything. The `feat_dynamic_real`'s shape is expected to be `2,75` apparently. I'm not super familiar with this model but if you can share the dataset you are using as well it would help 🤗 ",
"Thank you very much! I will provide more details about my dataset. I am working on my own dataset, using the following codes.\r\n```python\r\nfrom gluonts.dataset.pandas import PandasDataset\r\n\r\nprediction_length = 8\r\ntrain = PandasDataset.from_long_dataframe(train, target=\"sales\", item_id=\"city\",\r\n feat_dynamic_real = [col for col in train.columns if ((col != 'sales') & (col != 'city'))])\r\ntest = PandasDataset.from_long_dataframe(test, target=\"sales\", item_id=\"city\",\r\n feat_dynamic_real = [col for col in test.columns if ((col != 'sales') & (col != 'city'))])\r\n\r\nclass ProcessStartField():\r\n ts_id = 0\r\n def __call__(self, data):\r\n data[\"start\"] = data[\"start\"].to_timestamp()\r\n data[\"feat_static_cat\"] = [self.ts_id]\r\n dynamic_feature_values = np.array(data[\"feat_dynamic_real\"])\r\n data[\"feat_dynamic_real\"] = dynamic_feature_values\r\n self.ts_id += 1\r\n return data\r\n\r\nfrom gluonts.itertools import Map\r\nprocess_start = ProcessStartField()\r\nlist_train = list(Map(process_start, train))\r\nprocess_start = ProcessStartField()\r\nlist_test = list(Map(process_start, test))\r\n\r\ntrainset = Dataset.from_list(list_train) #, features=features\r\ntestset = Dataset.from_list(list_test)\r\n```\r\nAnd my dataset looks like:\r\n```\r\ntrain_example: {'start': datetime.datetime(2017, 1, 1, 0, 0),\r\n 'target': [806, 806...] (a list with length of **59**)\r\n'item_id': cityname\r\n'feat_dynamic_real': [[215,219...],[217,192...]...] (a list comprising of _48_ sublists, each sublist has length of **59**)}\r\n'feat_static_cat': [0]}\r\n```\r\nThe corresponding test_example is :\r\n```\r\n {'start': datetime.datetime(2017, 1, 1, 0, 0),\r\n 'target': [806, 806...] (a list with length of **67**)\r\n'item_id': cityname\r\n'feat_dynamic_real': [[215,219...],[217,192...]...] 
(a list comprising of _48_ sublists, each sublist has length of **67**)}\r\n'feat_static_cat': [0]}\r\n```\r\nI also tried other number of context length, and I found that, the codes always transformed the testdata to the length of 67+context_length, which in turn incurred an error. \r\nI am really confused about the bug for a long time. I would really appreciate it if you can help! Thanks!",
"cc @NielsRogge if you have any idea?!",
"cc @kashif, the master of time series",
"@kkckk1110 so it could be that the arrays `feat_dynamic_real` in your dataset have to be transposed... let me double-check but you can quickly try it if that helps.",
"Thank you very much for your attention! I have already tried but came across errors though.",
"can you paste a bigger error trace? is it from the `VStack` transformation?\r\n",
"Exactly! the complete error information is as follows.\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[51], line 1\r\n----> 1 batch = next(iter(test_dataloader))\r\n 2 for k, v in batch.items():\r\n 3 print(k, v.shape, v.type())\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/itertools.py:415, in IterableSlice.__iter__(self)\r\n 414 def __iter__(self):\r\n--> 415 yield from itertools.islice(self.iterable, self.length)\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:111, in TransformedDataset.__iter__(self)\r\n 110 def __iter__(self) -> Iterator[DataEntry]:\r\n--> 111 yield from self.transformation(\r\n 112 self.base_dataset, is_train=self.is_train\r\n 113 )\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:132, in MapTransformation.__call__(self, data_it, is_train)\r\n 129 def __call__(\r\n 130 self, data_it: Iterable[DataEntry], is_train: bool\r\n 131 ) -> Iterator:\r\n--> 132 for data_entry in data_it:\r\n 133 try:\r\n 134 yield self.map_transform(data_entry.copy(), is_train)\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/dataset/loader.py:55, in Stack.__call__(self, data, is_train)\r\n 54 def __call__(self, data, is_train):\r\n---> 55 for batch in data:\r\n 56 yield rows_to_columns(batch, np.array)\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/dataset/loader.py:50, in Batch.__call__(self, data, is_train)\r\n 49 def __call__(self, data, is_train):\r\n---> 50 yield from batcher(data, self.batch_size)\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/itertools.py:131, in batcher.<locals>.get_batch()\r\n 130 def get_batch():\r\n--> 131 return list(itertools.islice(it, batch_size))\r\n\r\nFile 
~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:132, in MapTransformation.__call__(self, data_it, is_train)\r\n 129 def __call__(\r\n 130 self, data_it: Iterable[DataEntry], is_train: bool\r\n 131 ) -> Iterator:\r\n--> 132 for data_entry in data_it:\r\n 133 try:\r\n 134 yield self.map_transform(data_entry.copy(), is_train)\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:111, in TransformedDataset.__iter__(self)\r\n 110 def __iter__(self) -> Iterator[DataEntry]:\r\n--> 111 yield from self.transformation(\r\n 112 self.base_dataset, is_train=self.is_train\r\n 113 )\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:186, in FlatMapTransformation.__call__(self, data_it, is_train)\r\n 182 def __call__(\r\n 183 self, data_it: Iterable[DataEntry], is_train: bool\r\n 184 ) -> Iterator:\r\n 185 num_idle_transforms = 0\r\n--> 186 for data_entry in data_it:\r\n 187 num_idle_transforms += 1\r\n 188 for result in self.flatmap_transform(data_entry.copy(), is_train):\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:111, in TransformedDataset.__iter__(self)\r\n 110 def __iter__(self) -> Iterator[DataEntry]:\r\n--> 111 yield from self.transformation(\r\n 112 self.base_dataset, is_train=self.is_train\r\n 113 )\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:132, in MapTransformation.__call__(self, data_it, is_train)\r\n 129 def __call__(\r\n 130 self, data_it: Iterable[DataEntry], is_train: bool\r\n 131 ) -> Iterator:\r\n--> 132 for data_entry in data_it:\r\n 133 try:\r\n 134 yield self.map_transform(data_entry.copy(), is_train)\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:136, in MapTransformation.__call__(self, data_it, is_train)\r\n 134 yield self.map_transform(data_entry.copy(), is_train)\r\n 135 except 
Exception as e:\r\n--> 136 raise e\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:134, in MapTransformation.__call__(self, data_it, is_train)\r\n 132 for data_entry in data_it:\r\n 133 try:\r\n--> 134 yield self.map_transform(data_entry.copy(), is_train)\r\n 135 except Exception as e:\r\n 136 raise e\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/_base.py:149, in SimpleTransformation.map_transform(self, data, is_train)\r\n 148 def map_transform(self, data: DataEntry, is_train: bool) -> DataEntry:\r\n--> 149 return self.transform(data)\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/gluonts/transform/convert.py:219, in VstackFeatures.transform(self, data)\r\n 213 def transform(self, data: DataEntry) -> DataEntry:\r\n 214 r = [\r\n 215 data[fname]\r\n 216 for fname in self.input_fields\r\n 217 if data[fname] is not None\r\n 218 ]\r\n--> 219 output = np.vstack(r) if not self.h_stack else np.hstack(r)\r\n 220 data[self.output_field] = output\r\n 221 for fname in self.cols_to_drop:\r\n\r\nFile ~/miniforge3/envs/tensorflow/lib/python3.9/site-packages/numpy/core/shape_base.py:289, in vstack(tup, dtype, casting)\r\n 287 if not isinstance(arrs, list):\r\n 288 arrs = [arrs]\r\n--> 289 return _nx.concatenate(arrs, 0, dtype=dtype, casting=casting)\r\n\r\nValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 75 and the array at index 2 has size 67",
"so yes the shape of the `data[\"feat_dynamic_real\"].shape` has to be (`num_dynamic_real_features`, `time`), can you confirm that is the case in your dataset?\r\n",
"Sure! \r\nnp.array(train_example['feat_dynamic_real']).shape : (48, 59)\r\nnp.array(validation_example['feat_dynamic_real']).shape : (48, 67)\r\nwhich are expected and consistent with (num_dynamic_real_features, time)",
"@kkckk1110 ok I figured it out, my bad, i'll update the blog post: In the mean time use this for the back-testing:\r\n\r\n```python\r\n def create_backtest_dataloader(\r\n config: PretrainedConfig,\r\n freq,\r\n data,\r\n batch_size: int,\r\n **kwargs,\r\n):\r\n PREDICTION_INPUT_NAMES = [\r\n \"past_time_features\",\r\n \"past_values\",\r\n \"past_observed_mask\",\r\n \"future_time_features\",\r\n ]\r\n if config.num_static_categorical_features > 0:\r\n PREDICTION_INPUT_NAMES.append(\"static_categorical_features\")\r\n\r\n if config.num_static_real_features > 0:\r\n PREDICTION_INPUT_NAMES.append(\"static_real_features\")\r\n\r\n transformation = create_transformation(freq, config)\r\n transformed_data = transformation.apply(data, is_train=True)\r\n\r\n # we create a Test Instance splitter which will sample the very last\r\n # context window seen during training only for the encoder.\r\n instance_sampler = create_instance_splitter(config, \"validation\")\r\n\r\n # we apply the transformations in train mode\r\n backtesting_instances = instance_sampler.apply(transformed_data, is_train=True)\r\n \r\n return as_stacked_batches(\r\n backtesting_instances,\r\n batch_size=batch_size,\r\n output_type=torch.tensor,\r\n field_names=PREDICTION_INPUT_NAMES,\r\n )\r\n``` ",
"It works! Great! Can you elaborate on why the modified codes solved the problems?",
"yes the splitter for back-testing still needs to be applied in \"Training\"\r\n\r\n```\r\n backtesting_instances = instance_sampler.apply(transformed_data, is_train=True)\r\n ```\r\n \r\n else the dynamic features are not split while the target is... I'll check with the gluonts folks ",
"I really appreciate your efforts! I hate to bother you but I have another problem. I wonder how can I interpret time series models, such as analyzing feature importance? Are there any tools like SHAP, for time series forecasting?",
"there is none at the moment for transformer-based models as far as i know... one does have the attention weights so can figure out which time points were used to make a particular prediction"
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
I am running a time series transformer following the tutorial in Huggingface.
I have dynamic_features_real = 48 in my dataset. However, I came across an error when creating the test_dataloader:
```python
ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 75 and the array at index 2 has size 67.
```
Some settings in my code are:
```python
len(train_example['target']) = 59
len(validation_example['target']) = 67
np.array(validation_example['feat_dynamic_real']).shape = (48, 67)
prediction_length = 8
```
I think the problem is with `feat_dynamic_real`, because when I set it to 0 the code runs normally. However, I have tried and failed to solve the problem.
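My guess (unconfirmed) at what is happening: the test splitter stacks the time features, which span the full target plus prediction window (67 + 8 = 75 steps), together with `feat_dynamic_real`, which only covers the 67 observed steps. The exact error can be reproduced with a minimal NumPy sketch (shapes are illustrative):

```python
import numpy as np

# Illustrative shapes: time features cover target + prediction window
# (67 + 8 = 75 steps), while feat_dynamic_real covers only 67 steps.
time_features = np.zeros((2, 75))
feat_dynamic_real = np.zeros((48, 67))

try:
    np.concatenate([time_features, feat_dynamic_real], axis=0)
except ValueError as err:
    print(err)  # all dims except the concatenation axis must match

# Extending feat_dynamic_real across the prediction window makes it stack:
stacked = np.concatenate([time_features, np.zeros((48, 75))], axis=0)
print(stacked.shape)  # (50, 75)
```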
Can anyone help me fix the problem? Thanks a lot!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
batch = next(iter(test_dataloader))
for k, v in batch.items():
print(k, v.shape, v.type())
```
ValueError: all the input array dimensions except for the concatenation axis must match exactly, but along dimension 1, the array at index 0 has size 75 and the array at index 2 has size 67
### Expected behavior
I hope to fix the problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27930/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27929 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27929/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27929/comments | https://api.github.com/repos/huggingface/transformers/issues/27929/events | https://github.com/huggingface/transformers/pull/27929 | 2,034,210,195 | PR_kwDOCUB6oc5hmvSt | 27,929 | fix: handle multiprocess properly in trainer checkpointing | {
"login": "thundergolfer",
"id": 12058921,
"node_id": "MDQ6VXNlcjEyMDU4OTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/12058921?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thundergolfer",
"html_url": "https://github.com/thundergolfer",
"followers_url": "https://api.github.com/users/thundergolfer/followers",
"following_url": "https://api.github.com/users/thundergolfer/following{/other_user}",
"gists_url": "https://api.github.com/users/thundergolfer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thundergolfer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thundergolfer/subscriptions",
"organizations_url": "https://api.github.com/users/thundergolfer/orgs",
"repos_url": "https://api.github.com/users/thundergolfer/repos",
"events_url": "https://api.github.com/users/thundergolfer/events{/privacy}",
"received_events_url": "https://api.github.com/users/thundergolfer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Test failures are for \"documentation\" and \"transformers metadata\", same as last time (https://github.com/huggingface/transformers/pull/27820#issuecomment-1841186551)",
"> have you confirmed this works on a multi-GPU system?\r\n\r\nYes, that's detailed in the PR description, starting with the sentence: \"I didn't setup a multi-GPU VM to run the test, ...\"\r\n\r\nAlso if you agree with the TODO, I'm happy to make a follow-up PR addressing it 🙂 ",
"Sorry for missing it! I'll run it locally here and get back to you on if the solution indeed works. If so, yes a follow up PR for that would be great :) ",
"Any Update here? Thanks!",
"any update here.? waiting for the PR merge",
"I tried the changes from this PR, but I got other issue as follows:\r\n```\r\n[E ProcessGroupNCCL.cpp:475] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=387992, OpType=_ALLGATHER_BASE, NumelIn=6291456, NumelOut=25165824, Timeout(ms)=1800000) ran for 1800865 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:489] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:495] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:916] [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=387891, OpType=_ALLGATHER_BASE, NumelIn=1024, NumelOut=4096, Timeout(ms)=1800000) ran for 1801725 milliseconds before timing out.\r\nterminate called after throwing an instance of 'std::runtime_error'\r\n what(): [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=387891, OpType=_ALLGATHER_BASE, NumelIn=1024, NumelOut=4096, Timeout(ms)=1800000) ran for 1801725 milliseconds before timing out.\r\n```\r\n\r\n\r\nI realized that why we don't just check the existence of the folder like this:\r\n```python\r\n ...\r\n if os.path.exists(staging_output_dir):\r\n if self.args.should_save:\r\n self.state.save_to_json(os.path.join(staging_output_dir, TRAINER_STATE_NAME))\r\n \r\n if self.args.push_to_hub:\r\n self._push_from_checkpoint(staging_output_dir)\r\n \r\n # Place checkpoint in final location after all saving is finished.\r\n if staging_output_dir != output_dir:\r\n os.rename(staging_output_dir, output_dir)\r\n ...\r\n```\r\nThis works smoothly.",
"@thundergolfer I have a different fix coming in that works, the issue is you were not checking that the rename of the staging folder was happening just on the main process: https://github.com/huggingface/transformers/pull/28009",
"Closing in favor of https://github.com/huggingface/transformers/pull/28009 as this change still doesn't handle all multi-GPU scenarios. "
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Follow-up to https://github.com/huggingface/transformers/pull/27820 which is bugged for multi-device/multiprocess training. I made the error of thinking that in multiprocess training the `._save_checkpoint()` method was already restricted to a single writer.
I've fixed that now and augmented an existing multiprocess test to validate checkpointing functionality.
I've also noted with a `TODO` something I found pretty confusing in the current code. `store_flos()` isn't checkpointing related in my opinion, but it does an `all_gather` and thus if all processes don't enter the `store_flos()` fn the training program hangs. In my opinion this code should be moved out of the checkpointing method so that this method conceptually supports entrance and execution by a single writer (the process with `self.args.should_save == True`).
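The underlying failure is a plain filesystem race that needs no GPUs to reproduce: every rank was executing `os.rename(staging_output_dir, output_dir)`, and any rank arriving after the first finds the staging directory already moved. A minimal stdlib sketch (directory names are illustrative) of the symptom, and why gating on a single writer avoids it:

```python
import os
import tempfile

run_dir = tempfile.mkdtemp()
staging = os.path.join(run_dir, "tmp-checkpoint-500")
final = os.path.join(run_dir, "checkpoint-500")

os.makedirs(staging)
os.rename(staging, final)  # "rank 0" wins the race

try:
    # "rank 1" repeats the same rename and fails: this is the
    # FileNotFoundError users reported against the released code.
    os.rename(staging, final)
except FileNotFoundError:
    print("second rename failed: staging dir was already moved")

# Guarding the rename so that only one process (e.g. the one with
# args.should_save) performs it removes the collision entirely.
```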
I didn't setup a multi-GPU VM to run the test, but this multi-GPU Modal script runs and passes the test:
```python
import modal
import subprocess
GIT_SHA = "d867b232d46a0652e1bfe6eda7bc0804b9ad5ea4" # my fork's latest commit
image = (
modal.Image.debian_slim(python_version="3.10")
.apt_install("git").pip_install("pytest")
.run_commands(
"cd /root && git init .",
"cd /root && git remote add origin https://github.com/thundergolfer/transformers",
f"cd /root && git fetch --depth=1 origin {GIT_SHA} && git checkout {GIT_SHA}",
"cd /root && pip install -e \".[dev]\"",
)
)
stub = modal.Stub(image=image)
@stub.function(
gpu=modal.gpu.T4(count=2),
# Can uncomment this to quickly modify local test implementation
# and sync with remote container.
# mounts=[modal.Mount.from_local_file(
# local_path="./tests/trainer/test_trainer.py",
# remote_path="/root/tests/trainer/test_trainer.py",
# )],
secrets=[modal.Secret.from_dict({"RUN_SLOW": "1", "NCCL_P2P_LEVEL": "PIX"})],
timeout=600,
)
def run():
subprocess.run("nvidia-smi", shell=True, check=True)
test_module = "tests/trainer/test_trainer.py"
test_identifier = f"{test_module}::TrainerIntegrationTest::test_end_to_end_example"
subprocess.run(f"pytest -s -v {test_identifier}", shell=True, check=True)
```
**Fixes** https://github.com/huggingface/transformers/issues/27925
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@muellerzr, @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27929/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27929/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27929",
"html_url": "https://github.com/huggingface/transformers/pull/27929",
"diff_url": "https://github.com/huggingface/transformers/pull/27929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27929.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27928 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27928/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27928/comments | https://api.github.com/repos/huggingface/transformers/issues/27928/events | https://github.com/huggingface/transformers/issues/27928 | 2,034,199,269 | I_kwDOCUB6oc55P2rl | 27,928 | [Question] What is the main difference between "AutoModelForCasualLM" and "PeftModelForCausalLM"? | {
"login": "daehuikim",
"id": 40377750,
"node_id": "MDQ6VXNlcjQwMzc3NzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/40377750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/daehuikim",
"html_url": "https://github.com/daehuikim",
"followers_url": "https://api.github.com/users/daehuikim/followers",
"following_url": "https://api.github.com/users/daehuikim/following{/other_user}",
"gists_url": "https://api.github.com/users/daehuikim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/daehuikim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/daehuikim/subscriptions",
"organizations_url": "https://api.github.com/users/daehuikim/orgs",
"repos_url": "https://api.github.com/users/daehuikim/repos",
"events_url": "https://api.github.com/users/daehuikim/events{/privacy}",
"received_events_url": "https://api.github.com/users/daehuikim/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada ",
"Hmm looks interesting. I think, when you print it out, first thing is considering original+ adapter and last thing is considering only structure of it’s backbone. \r\nDid you considered a random state, before inference?\r\n\r\nSometimes, same input’s result can be different when you inference it. ",
"> Hmm looks interesting. I think, when you print it out, first thing is considering original+ adapter and last thing is considering only structure of it’s backbone. Did you considered a random state, before inference?\r\n> \r\n> Sometimes, same input’s result can be different when you inference it.\r\n\r\nThe reason why the structure of the last one feels exactly the same as the structure of the backbone model is because the lora adapter of this one has already been merged, so there is no numerical difference.\r\nThe only difference is the Object they are carried (```PeftModelForCausalLM```, ```AutoModelForCasualLM```).\r\n\r\nAnd 2 models are initialized from different sources. one from ```PeftModelForCausalLM.from_pretrained``` and one for from ```AutoModelForCasualLM.from_pretrained```. However, every params, configuration files are the same except the object.",
"Hi @daehuikim \r\nThanks for your issue, recall the formulation of the LoRA adapters in the figure below:\r\n![lora-animated.gif](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/EMtsDOf4Wh7aveX0wy-OZ.gif)\r\n\r\nThe fundamental difference between `AutoModelForCausalLM` and `PeftModelForCausalLM`, in your case is that the `AutoModelForCausalLM` contains the merged adapter whereas the second model contains the model with LoRA attached on them. Technically the inference results should be the same as the LoRA operations can be simply rewritten as a refactorization of simple matrix multiplication:\r\n\r\n![Screenshot 2023-11-02 at 20.12.17.png](https://cdn-uploads.huggingface.co/production/uploads/62441d1d9fdefb55a0b7d12c/Oc_cbRvTyGZIOp_SyAX7b.png)\r\n\r\nNow regarding your issue, it seems you have further trained your `lm_head` as well. I think `merge_and_unload` does not properly take care of that, but I am not sure. Can you confirm which PEFT version are you using now?",
"> Hi @daehuikim Thanks for your issue, recall the formulation of the LoRA adapters in the figure below: ![lora-animated.gif](https://camo.githubusercontent.com/0c4168380b2d5a2d93f963838ccd920bc6384e56ebea50f4904031289db70673/68747470733a2f2f63646e2d75706c6f6164732e68756767696e67666163652e636f2f70726f64756374696f6e2f75706c6f6164732f3632343431643164396664656662353561306237643132632f454d7473444f6634576837617665583077792d4f5a2e676966) [ ![lora-animated.gif](https://camo.githubusercontent.com/0c4168380b2d5a2d93f963838ccd920bc6384e56ebea50f4904031289db70673/68747470733a2f2f63646e2d75706c6f6164732e68756767696e67666163652e636f2f70726f64756374696f6e2f75706c6f6164732f3632343431643164396664656662353561306237643132632f454d7473444f6634576837617665583077792d4f5a2e676966) ](https://camo.githubusercontent.com/0c4168380b2d5a2d93f963838ccd920bc6384e56ebea50f4904031289db70673/68747470733a2f2f63646e2d75706c6f6164732e68756767696e67666163652e636f2f70726f64756374696f6e2f75706c6f6164732f3632343431643164396664656662353561306237643132632f454d7473444f6634576837617665583077792d4f5a2e676966) [ ](https://camo.githubusercontent.com/0c4168380b2d5a2d93f963838ccd920bc6384e56ebea50f4904031289db70673/68747470733a2f2f63646e2d75706c6f6164732e68756767696e67666163652e636f2f70726f64756374696f6e2f75706c6f6164732f3632343431643164396664656662353561306237643132632f454d7473444f6634576837617665583077792d4f5a2e676966)\r\n> \r\n> The fundamental difference between `AutoModelForCausalLM` and `PeftModelForCausalLM`, in your case is that the `AutoModelForCausalLM` contains the merged adapter whereas the second model contains the model with LoRA attached on them. 
Technically the inference results should be the same as the LoRA operations can be simply rewritten as a refactorization of simple matrix multiplication:\r\n> \r\n> ![Screenshot 2023-11-02 at 20.12.17.png](https://camo.githubusercontent.com/212b6ba2d32f654813e329fc0caf83873681a059ea11300c383f42905411b7db/68747470733a2f2f63646e2d75706c6f6164732e68756767696e67666163652e636f2f70726f64756374696f6e2f75706c6f6164732f3632343431643164396664656662353561306237643132632f4f635f636252765479475a494f705f5379415837622e706e67)\r\n> \r\n> Now regarding your issue, it seems you have further trained your `lm_head` as well. I think `merge_and_unload` does not properly take care of that, but I am not sure. Can you confirm which PEFT version are you using now?\r\n\r\nThanks for describing the issue correctly.\r\nMy peftconfig for training is like below: (```lm_head``` is not trained by trainer) \r\n```\r\npeft_config = LoraConfig(\r\n lora_alpha=lora_alpha,\r\n lora_dropout=lora_dropout,\r\n r=lora_r,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n target_modules= [\r\n \"q_proj\",\r\n \"k_proj\",\r\n \"v_proj\",\r\n \"o_proj\",\r\n \"gate_proj\",\r\n \"up_proj\",\r\n \"down_proj\",\r\n ],\r\n modules_to_save=[\r\n \"embed_tokens\",\r\n \"lm_head\"\r\n ]\r\n )\r\n```\r\nAnd this is my peft version\r\n```\r\n$ pip freeze | grep peft\r\npeft==0.6.2\r\n```\r\n\r\nThanks for answering! @younesbelkada ",
"Thanks @daehuikim \r\nYou are adding \r\n\r\n```py\r\n modules_to_save=[\r\n \"embed_tokens\",\r\n \"lm_head\"\r\n ]\r\n```\r\nin the lora config, this will make the lm head and the input embedding layer trainable, hence the strange behaviour you are facing (I am not sure we support merge and unload for models that have `modules_to_save`). Can you try without `modules_to_save`?",
"> Thanks @daehuikim You are adding\r\n> \r\n> ```python\r\n> modules_to_save=[\r\n> \"embed_tokens\",\r\n> \"lm_head\"\r\n> ]\r\n> ```\r\n> \r\n> in the lora config, this will make the lm head and the input embedding layer trainable, hence the strange behaviour you are facing (I am not sure we support merge and unload for models that have `modules_to_save`). Can you try without `modules_to_save`?\r\n\r\nThanks again @younesbelkada \r\n\r\n```modules_to_save``` wrapper save the modules not to be trained from trainer.\r\nAlso, ```embed_tokens, lm_head``` must to be saved when there exist added_tokens.\r\nTherefore, ```modules_to_save``` block is necessary for my task.\r\nI will see what happens in ```merge_and_unload``` for finding this out.\r\nThank you again!",
"https://github.com/huggingface/peft/blob/21c304f6f6ea62c7dcbef8e201d178a7575b471d/src/peft/tuners/lora/model.py#L419\r\n\r\n```\r\nelif isinstance(target, ModulesToSaveWrapper):\r\n # save any additional trainable modules part of `modules_to_save`\r\n setattr(parent, target_name, target.modules_to_save[target.active_adapter])\r\n```\r\n@younesbelkada \r\n```modules_to_save``` preserve modules from parent model.\r\nTheoretically both model should reproduce same outputs.\r\nBut why this still happen? I am curious.",
"Hi, I'm having the same problem as you, in my scenario I also need to add some tokens for training and I've also set the\r\n```\r\nmodules_to_save=[\r\n \"embed_tokens\",\r\n \"lm_head\"\r\n ]\r\n```\r\nMay I ask if the problem is solved now? How did you end up solving it, looking forward to your reply.",
"> Hi, I'm having the same problem as you, in my scenario I also need to add some tokens for training and I've also set the\r\n> \r\n> ```\r\n> modules_to_save=[\r\n> \"embed_tokens\",\r\n> \"lm_head\"\r\n> ]\r\n> ```\r\n> \r\n> May I ask if the problem is solved now? How did you end up solving it, looking forward to your reply.\r\n\r\nI have no idea yet. whole codes seems same in peft and transformers. I guess it could be some weight initialization problem",
"> > Hi, I'm having the same problem as you, in my scenario I also need to add some tokens for training and I've also set the\r\n> > ```\r\n> > modules_to_save=[\r\n> > \"embed_tokens\",\r\n> > \"lm_head\"\r\n> > ]\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > May I ask if the problem is solved now? How did you end up solving it, looking forward to your reply.\r\n> \r\n> I have no idea yet. whole codes seems same in peft and transformers. I guess it could be some weight initialization problem\r\n\r\nRoger that. Thank you for your reply.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi there! Is the issue solved? Let me know if you need more clarification",
"@younesbelkada I think we can close this issue by now. Thanks :)",
"Thanks @daehuikim !"
] | 1,702 | 1,706 | 1,706 | NONE | null | I also wrote it down in peft repo. However this issue is also related to transformers. So i write my question here again.
The PEFT issue is here: https://github.com/huggingface/peft/issues/1245
Hello, sorry for the naive question.
I noticed that the `model.generate()` function performs differently when running inference right after training with `trainer.model` versus after merge-and-unload. (All params are the same.)
So I checked the two different objects with a simple print function.
The difference was the object that contains the model.
1. ```model = trainer.model```
```
PeftModelForCausalLM(
(base_model): LoraModel(
(model): LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): ModulesToSaveWrapper(
(original_module): Embedding(32008, 5120)
(modules_to_save): ModuleDict(
(default): Embedding(32008, 5120)
)
)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(k_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(v_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(o_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)
)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)
)
(up_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=5120, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=13824, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)
)
(down_proj): Linear4bit(
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=13824, out_features=64, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=64, out_features=5120, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
(base_layer): Linear4bit(in_features=13824, out_features=5120, bias=False)
)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): ModulesToSaveWrapper(
(original_module): Linear(in_features=5120, out_features=32008, bias=False)
(modules_to_save): ModuleDict(
(default): Linear(in_features=5120, out_features=32008, bias=False)
)
)
)
)
)
```
2. ```AutoModelForCausalLM.from_pretrained``` (after merging the LoRA adapter)
```
LlamaForCausalLM(
(model): LlamaModel(
(embed_tokens): Embedding(32008, 5120)
(layers): ModuleList(
(0-39): 40 x LlamaDecoderLayer(
(self_attn): LlamaAttention(
(q_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(k_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(v_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(o_proj): Linear4bit(in_features=5120, out_features=5120, bias=False)
(rotary_emb): LlamaRotaryEmbedding()
)
(mlp): LlamaMLP(
(gate_proj): Linear4bit(in_features=5120, out_features=13824, bias=False)
(up_proj): Linear4bit(in_features=5120, out_features=13824, bias=False)
(down_proj): Linear4bit(in_features=13824, out_features=5120, bias=False)
(act_fn): SiLUActivation()
)
(input_layernorm): LlamaRMSNorm()
(post_attention_layernorm): LlamaRMSNorm()
)
)
(norm): LlamaRMSNorm()
)
(lm_head): Linear(in_features=5120, out_features=32008, bias=False)
)
```
I think both models should work exactly the same way, but when I ran inference with the `model.generate` function, I found that #1 (`PeftModelForCausalLM`) works much more accurately. I'd like to know why: is there a theoretical or engineering reason for this?
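For reference, the merge itself is mathematically lossless: folding the adapter into the base weight as W' = W + s·BA gives x·W'ᵀ exactly equal to the base-plus-adapter path, up to float rounding. A minimal NumPy check, independent of PEFT and with made-up shapes (note that 4-bit base layers, like the `Linear4bit` modules printed above, add quantization rounding on top of this, which could plausibly be a source of small output differences):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, scale = 16, 12, 4, 2.0

W = rng.normal(size=(d_out, d_in))  # base weight
A = rng.normal(size=(r, d_in))      # lora_A
B = rng.normal(size=(d_out, r))     # lora_B
x = rng.normal(size=(3, d_in))      # a batch of inputs

unmerged = x @ W.T + scale * (x @ A.T) @ B.T  # base + adapter path
merged = x @ (W + scale * B @ A).T            # after folding the adapter in

print(np.allclose(unmerged, merged))  # True: identical up to float rounding
```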
Thanks for watching my long long question! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27928/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27927 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27927/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27927/comments | https://api.github.com/repos/huggingface/transformers/issues/27927/events | https://github.com/huggingface/transformers/issues/27927 | 2,034,137,763 | I_kwDOCUB6oc55Pnqj | 27,927 | Terminate TextIteratorStreamer Before Done | {
"login": "fakerybakery",
"id": 76186054,
"node_id": "MDQ6VXNlcjc2MTg2MDU0",
"avatar_url": "https://avatars.githubusercontent.com/u/76186054?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fakerybakery",
"html_url": "https://github.com/fakerybakery",
"followers_url": "https://api.github.com/users/fakerybakery/followers",
"following_url": "https://api.github.com/users/fakerybakery/following{/other_user}",
"gists_url": "https://api.github.com/users/fakerybakery/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fakerybakery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fakerybakery/subscriptions",
"organizations_url": "https://api.github.com/users/fakerybakery/orgs",
"repos_url": "https://api.github.com/users/fakerybakery/repos",
"events_url": "https://api.github.com/users/fakerybakery/events{/privacy}",
"received_events_url": "https://api.github.com/users/fakerybakery/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! I don't think this is possible through the TextIteratorStreamer but should be handled within gradio 🤗 ",
"Ok, thanks for the suggestion! I'll close this issue then"
] | 1,702 | 1,702 | 1,702 | NONE | null | Hi,
Is there any way to terminate a TextIteratorStreamer before the text has finished generating? Related to [this](https://github.com/gradio-app/gradio/issues/6724).
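One workaround I have seen suggested (the streamer itself exposes no cancel API, so this is a pattern rather than a built-in feature) is cooperative cancellation: have the generation step check a `threading.Event`, and set that event from the consuming side. The stdlib sketch below mimics the thread-plus-queue shape of `TextIteratorStreamer` with a stand-in generation loop, since the real classes need a model to run:

```python
import queue
import threading

stop_event = threading.Event()
stream = queue.Queue(maxsize=2)  # small buffer, like a live token stream

def fake_generate(n_tokens: int) -> None:
    # Stand-in for the generation thread; in the real API, a custom
    # StoppingCriteria passed to model.generate could return
    # stop_event.is_set() at each decoding step to end generation early.
    for i in range(n_tokens):
        if stop_event.is_set():
            break
        stream.put(f"tok{i} ")
    stream.put(None)  # end-of-stream sentinel, as the streamer uses

thread = threading.Thread(target=fake_generate, args=(1000,))
thread.start()

received = []
for chunk in iter(stream.get, None):
    received.append(chunk)
    if len(received) == 5:  # the consumer decides to cancel early
        stop_event.set()
thread.join()
print(len(received) < 1000)  # True: generation stopped long before 1000
```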
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27927/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27925 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27925/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27925/comments | https://api.github.com/repos/huggingface/transformers/issues/27925/events | https://github.com/huggingface/transformers/issues/27925 | 2,033,911,870 | I_kwDOCUB6oc55Owg- | 27,925 | Save model checkpoint error when multi-gpu training | {
"login": "Cospui",
"id": 36847795,
"node_id": "MDQ6VXNlcjM2ODQ3Nzk1",
"avatar_url": "https://avatars.githubusercontent.com/u/36847795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cospui",
"html_url": "https://github.com/Cospui",
"followers_url": "https://api.github.com/users/Cospui/followers",
"following_url": "https://api.github.com/users/Cospui/following{/other_user}",
"gists_url": "https://api.github.com/users/Cospui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cospui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cospui/subscriptions",
"organizations_url": "https://api.github.com/users/Cospui/orgs",
"repos_url": "https://api.github.com/users/Cospui/repos",
"events_url": "https://api.github.com/users/Cospui/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cospui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"I had this same issue, I temporarily fixed it by neutering the different staging directory:\r\n\r\n```python\r\nif os.path.exists(output_dir) and len(os.listdir(output_dir)) > 0:\r\n logger.warning(\r\n f\"Checkpoint destination directory {output_dir} already exists and is non-empty.\"\r\n \"Saving will proceed but saved results may be invalid.\"\r\n )\r\n staging_output_dir = output_dir\r\nelse:\r\n # staging_output_dir = os.path.join(run_dir, f\"tmp-{checkpoint_folder}\")\r\n staging_output_dir = output_dir\r\n```",
"> I had this same issue, I temporarily fixed it by neutering the different staging directory:\r\n> \r\n> ```python\r\n> if os.path.exists(output_dir) and len(os.listdir(output_dir)) > 0:\r\n> logger.warning(\r\n> f\"Checkpoint destination directory {output_dir} already exists and is non-empty.\"\r\n> \"Saving will proceed but saved results may be invalid.\"\r\n> )\r\n> staging_output_dir = output_dir\r\n> else:\r\n> # staging_output_dir = os.path.join(run_dir, f\"tmp-{checkpoint_folder}\")\r\n> staging_output_dir = output_dir\r\n> ```\r\n\r\nWhere did you insert this?",
"Facing same issue in multi-node training:\r\n`File \"/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py\", line 2353, in _save_checkpoint\r\n self.save_model(staging_output_dir, _internal_call=True)\r\nRuntimeError: Parent directory tmp-checkpoint-200 does\r\nnot exist.`\r\nIt added annoying tmp- in front of the checkpoint",
"This is a showstopper for training on multi-GPU nodes. The culprit seems to be the following merged PR #27820.",
"There is an open PR #27929, which seems to fix the issue.\r\n@ArthurZucker @sgugger @younesbelkada ",
"Hi all, can you please do `pip install git+https://github.com/huggingface/transformers` and rerun your code? This should fix your issue now.\r\n\r\nThank you very much for your patience and flagging this!",
"@muellerzr @thundergolfer I still get the same issue of saving checkpoint using the latest version of transformers `4.36` and even with `‘4.37.0.dev0’`\r\n\r\nI used three workers each one has two GPUs, I tried fine-tuning to be saved on a shared storage and a non-shared storage, and for both cases I still got the same error!\r\n\r\n **FileNotFoundError: [Errno 2] No such file or directory: 'model/tmp-checkpoint-49' -> 'model/checkpoint-49'**\r\n\r\n ```\r\nFile \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 1537, in train\r\n return inner_training_loop(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 1929, in _inner_training_loop\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 2279, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/trainer.py\", line 2395, in _save_checkpoint\r\n os.rename(staging_output_dir, output_dir)\r\nFileNotFoundError: [Errno 2] No such file or directory: 'model/tmp-checkpoint-49' -> 'model/checkpoint-49'\r\n```\r\n\r\nalthough the `model/checkpoint-49 `is already created!",
"@hahmad2008 can you try doing either `pip install transformers -U` or reinstall from git? From the line numbers it's not adding up that you're using a version that includes the fix",
"I encountered this issue with the trainer with the following command-line. This was after recently updating transformers with pip install transformers --upgrade\r\n\r\n```--save_strategy epoch --save_total_limit 1```\r\n\r\ntransformers==4.36.2\r\n\r\nEdit: \r\nOne thing to note this was with 2 nodes with 8x A100s per node.\r\nLooking at the code around the error, I have a feeling this was because I may have used local=True when using with main_process_first. Going to try disabling save_on_each_node.\r\n```\r\n if staging_output_dir != output_dir:\r\n with self.args.main_process_first(\r\n desc=\"Renaming model checkpoint folder to true location\", local=self.args.save_on_each_node\r\n ):\r\n if os.path.exists(staging_output_dir):\r\n os.rename(staging_output_dir, output_dir)\r\n```\r\n\r\nedit edit: \r\nLooks like its still not working even when specifying save_on_each_node to false.\r\n\r\nHere is the full command, launched from a slurm sbatch job:\r\n```\r\nsrun --kill-on-bad-exit=1 --jobid $SLURM_JOB_ID bash -c \"accelerate launch --use_deepspeed --zero_stage 1 --deepspeed_hostfile hostfile --deepspeed_multinode_launcher openmpi --gradient_accumulation_steps 1 --num_processes $(( $NUM_GPUS * $COUNT_NODE )) --num_machines $COUNT_NODE --num_cpu_threads_per_process $CPU_COUNT --mixed_precision bf16 --machine_rank \\$SLURM_PROCID --main_process_ip $MASTER_ADDR --main_process_port $MASTER_PORT main.py --source_datasets_filepath source_data/clm --output_dir testing_output_cluster --model_number 2 --overwrite_output_dir --dataloader_num_workers 10 --bf16 --data_fraction 0.1 --save_strategy steps --save_total_limit 1 --save_on_each_node false --dataloader_num_workers 2 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --max_token_length 1024 --num_train_epochs 1\"\r\n```\r\n\r\n",
"I encountered a similar error when using the trainer from DeepSpeed.\r\nThe error occurs at the exact moment after `if os.path.exists(staging_output_dir):` is evaluated and another process finishes renaming.\r\n\r\nI had no other choice, so I resorted to using a try block to get around it.\r\n\r\n```python\r\nif staging_output_dir != output_dir:\r\n with self.args.main_process_first(\r\n desc=\"Renaming model checkpoint folder to true location\", local=self.args.save_on_each_node\r\n ):\r\n if os.path.exists(staging_output_dir):\r\n try:\r\n os.rename(staging_output_dir, output_dir)\r\n except Exception as e:\r\n logger.info(f\"Could not rename checkpoint directory from {staging_output_dir} to {output_dir}. Reason: {e}\")\r\n \r\n```\r\n\r\ntransformers-4.37.0.dev0",
"Hi, @snowyday , @tblattner , and @muellerzr . I think `main_process_first` may be broken. \r\n\r\nI run the trainer with 2 nodes X 8 V100 GPUs and deepspeed. When I turned on `log_level=debug`, I found that only one process entered the waiting mode, while all other processes tried to save the checkpoint.\r\n\r\nThe log from process that waited: \r\n\r\n```\r\n[DEBUG|training_args.py:2119] 2023-12-27 15:11:30,917 >> 4: waiting for the main process to perform Renaming model checkpoint folder to true location\r\n```",
"I also encounter this with 4.36.2 and HEAD in a multi-node multi-GPU setup. Looks like an obvious race condition, as it happens indeterminately (sometimes 2nd save, sometimes 7th save etc).",
"Hi Any update or final conclusion here? :>",
"any solutions? facing the same issue on multinode training using deepspeed",
"same here, any solutions?",
"I've been using a try-except approach for bypassing the issue, and it's been working well for me. However, as xk-huang mentioned, it seems that the root cause is that self.args.main_process_first is not handling multi-node setups properly.",
"Curious if there is any reason why we must do ```os.path.exists``` and ```os.rename``` for each process, why not just the main process(es)?\r\n\r\n\r\nHaven't tested this code yet as my compute resources are currently filled and I have a long-running experiment set to finish in a couple days, but wanted to get some thoughts on this potential solution.\r\n```\r\n # Only rename from main process to avoid race condition from other processes especially for distributed filesystems\r\n if staging_output_dir != output_dir:\r\n if self.args.distributed_state.is_local_main_process if self.args.save_on_each_node else self.args.distributed_state.is_main_process:\r\n if os.path.exists(staging_output_dir):\r\n os.rename(staging_output_dir, output_dir)\r\n\r\n self.args.distributed_state.wait_for_everyone()\r\n```",
"I'm using transformers's Trainer, is there any work around for this?",
"For work around with Trainer, I just subclassed it and replace the _save_checkpoint method that added try exception.\r\n\r\n```\r\nclass CustomTrainer(Trainer):\r\n def _save_checkpoint(self, model, trial, metrics=None):\r\n # In all cases, including ddp/dp/deepspeed, self.model is always a reference to the model we\r\n # want to save except FullyShardedDDP.\r\n # assert unwrap_model(model) is self.model, \"internal model should be a reference to self.model\"\r\n\r\n # Save model checkpoint\r\n checkpoint_folder = f\"{PREFIX_CHECKPOINT_DIR}-{self.state.global_step}\"\r\n\r\n if self.hp_search_backend is None and trial is None:\r\n self.store_flos()\r\n\r\n run_dir = self._get_output_dir(trial=trial)\r\n output_dir = os.path.join(run_dir, checkpoint_folder)\r\n if os.path.exists(output_dir) and len(os.listdir(output_dir)) > 0:\r\n logger.warning(\r\n f\"Checkpoint destination directory {output_dir} already exists and is non-empty.\"\r\n \"Saving will proceed but saved results may be invalid.\"\r\n )\r\n staging_output_dir = output_dir\r\n else:\r\n staging_output_dir = os.path.join(\r\n run_dir, f\"tmp-{checkpoint_folder}\")\r\n self.save_model(staging_output_dir, _internal_call=True)\r\n\r\n if not self.args.save_only_model:\r\n # Save optimizer and scheduler\r\n self._save_optimizer_and_scheduler(staging_output_dir)\r\n # Save RNG state\r\n self._save_rng_state(staging_output_dir)\r\n\r\n # Determine the new best metric / best model checkpoint\r\n if metrics is not None and self.args.metric_for_best_model is not None:\r\n metric_to_check = self.args.metric_for_best_model\r\n if not metric_to_check.startswith(\"eval_\"):\r\n metric_to_check = f\"eval_{metric_to_check}\"\r\n metric_value = metrics[metric_to_check]\r\n\r\n operator = np.greater if self.args.greater_is_better else np.less\r\n if (\r\n self.state.best_metric is None\r\n or self.state.best_model_checkpoint is None\r\n or operator(metric_value, self.state.best_metric)\r\n ):\r\n self.state.best_metric = 
metric_value\r\n self.state.best_model_checkpoint = output_dir\r\n\r\n # Save the Trainer state\r\n if self.args.should_save:\r\n self.state.save_to_json(os.path.join(\r\n staging_output_dir, TRAINER_STATE_NAME))\r\n\r\n if self.args.push_to_hub:\r\n self._push_from_checkpoint(staging_output_dir)\r\n\r\n # Place checkpoint in final location after all saving is finished.\r\n # First wait for everyone to finish writing\r\n self.args.distributed_state.wait_for_everyone()\r\n # Then go through the rewriting process starting on process 0\r\n try:\r\n if staging_output_dir != output_dir:\r\n with self.args.main_process_first(\r\n desc=\"Renaming model checkpoint folder to true location\", local=self.args.save_on_each_node\r\n ):\r\n if os.path.exists(staging_output_dir):\r\n os.rename(staging_output_dir, output_dir)\r\n\r\n # Maybe delete some older checkpoints.\r\n if self.args.should_save:\r\n self._rotate_checkpoints(use_mtime=True, output_dir=run_dir)\r\n except Exception:\r\n print(\"Error rotating checkpoints skipping\")\r\n pass\r\n```",
"I've checked the `main_process_first` using the code snippet below:\r\nNumber of nodes: 3\r\nProcesses per node (GPUs): 4\r\nTotal: 12 processes\r\n\r\n```python\r\nimport logging\r\n\r\nimport deepspeed\r\nimport transformers\r\nimport torch\r\n\r\n\r\nlogging.basicConfig(level=logging.INFO)\r\nlogger = logging.getLogger()\r\n\r\nif __name__ == \"__main__\":\r\n deepspeed.init_distributed()\r\n node_rank = torch.distributed.get_rank() \r\n training_args = transformers.TrainingArguments(per_device_train_batch_size=8,\r\n gradient_accumulation_steps=2,\r\n num_train_epochs=3,\r\n deepspeed=\"ds_config/ds_config_zero3.json\",\r\n output_dir=\"logs\")\r\n\r\n with training_args.main_process_first():\r\n logger.info(f\"Check `main_process_first`. Node rank {node_rank}\")\r\n```\r\n\r\n```\r\nAddress family not supported by protocol).\r\n[INFO:root:Check `main_process_first`. Node rank 8\r\nINFO:root:Check `main_process_first`. Node rank 0\r\nINFO:root:Check `main_process_first`. Node rank 4\r\nINFO:root:Check `main_process_first`. Node rank 6\r\nINFO:root:Check `main_process_first`. Node rank 10\r\nINFO:root:Check `main_process_first`. Node rank 5\r\nINFO:root:Check `main_process_first`. Node rank 9\r\nINFO:root:Check `main_process_first`. Node rank 1\r\nINFO:root:Check `main_process_first`. Node rank 2\r\nINFO:root:Check `main_process_first`. Node rank 3\r\nINFO:root:Check `main_process_first`. Node rank 7\r\nINFO:root:Check `main_process_first`. Node rank 11\r\n```\r\n\r\nThe node rankings appear to be correctly allocated, with Node rank 0 going to node 1, Node rank 4 to node 2, and Node rank 8 to node 3; however, there are inaccuracies with the global rankings. 
In the context of a shared filesystem, if we proceed without waiting for the result from global rank 0, it could cause conflicts during the os.rename operation.\r\n\r\n```python\r\nif staging_output_dir != output_dir:\r\n with self.args.main_process_first(\r\n desc=\"Renaming model checkpoint folder to true location\", local=self.args.save_on_each_node\r\n ):\r\n if os.path.exists(staging_output_dir):\r\n os.rename(staging_output_dir, output_dir)\r\n \r\n ```",
"> however, there are inaccuracies with the global rankings.\r\n\r\n@snowyday as indicated by the fact that `rank 8` is printed first?",
"@thundergolfer \r\n`Rank 0` should pop up first, and the others should hang tight until the renaming wraps up.\r\nI should set `args.save_on_each_node=False`:\r\n\r\n```python\r\nwith self.args.main_process_first(\r\n desc=\"Renaming model checkpoint folder to true location\", local=self.args.save_on_each_node\r\n ):\r\n```",
"Without having tested, this looks like the right direction.",
"In the end, simply setting `save_on_each_node=False` worked out for everything.\r\n\r\n```python\r\ntraining_args = transformers.TrainingArguments(..., save_on_each_node=False, ...)\r\n```\r\n\r\nBy setting `save_on_each_node=False` in `TrainingArguments`, it ensures that in the `Trainer`’s `def _save_checkpoint method`, `main_process_first`'s `local` will be set to `False`. Consequently, following the explanation provided, it works correctly.\r\n\r\n_**if `False` first means process of rank 0 of node rank 0 In multi-node environment with a shared filesystem you most likely will want to use `local=False` so that only the main process of the first node will do the processing.**_\r\n\r\n```python\r\n @contextlib.contextmanager\r\n def main_process_first(self, local=True, desc=\"work\"):\r\n \"\"\"\r\n A context manager for torch distributed environment where on needs to do something on the main process, while\r\n blocking replicas, and when it's finished releasing the replicas.\r\n\r\n One such use is for `datasets`'s `map` feature which to be efficient should be run once on the main process,\r\n which upon completion saves a cached version of results and which then automatically gets loaded by the\r\n replicas.\r\n\r\n Args:\r\n local (`bool`, *optional*, defaults to `True`):\r\n if `True` first means process of rank 0 of each node if `False` first means process of rank 0 of node\r\n rank 0 In multi-node environment with a shared filesystem you most likely will want to use\r\n `local=False` so that only the main process of the first node will do the processing. If however, the\r\n filesystem is not shared, then the main process of each node will need to do the processing, which is\r\n the default behavior.\r\n desc (`str`, *optional*, defaults to `\"work\"`):\r\n a work description to be used in debug logs\r\n\r\n \"\"\"\r\n```",
"Would this work in a setting without a shared file system?",
"I'v checked it on a GPU cluster with a shared file system. For multi-node setups with independent file systems, the default `save_on_each_node=True` is fine; `main_process_first` make sure to serialize the execution for each node. If that still doesn't work, then I think there might still be an issue with `main_process_first`.",
"I don't think there is an issue with main_process_first as I've been using it across a lot of dataset processing steps. \r\n\r\nI believe that on network/shared file systems os.rename is not atomic. So its possible that the file system in this case might not be reflected after os.rename returns, causing other processes to observe the wrong state. I haven't found a good way to ensure the rename is completed. Catching the exception would handle it though, but not my ideal way to deal with the race condition.",
"In the case of processes sharing a filesystem, it seems prudent for only one process to wait for a rename operation to complete. However, why `main_process_first` is being used? On a shared filesystem, if the `rename()` fails, options are limited. Is this why multiple processes are making repeated attempts?",
"I'm not sure if it fails or not. From what I understand, the network attached storage node might not actually complete the operation before the next process comes to check if the path exists. It will complete, just not in the timeframe allowed (sometimes). But that outlines the core issue here.\r\n\r\nMy suggestion is to use something like this:\r\n```if self.args.distributed_state.is_local_main_process if self.args.save_on_each_node else self.args.distributed_state.is_main_process:```\r\n\r\nThen ```self.args.distributed_state.wait_for_everyone()``` to synchronize everyone afterwards. \r\n\r\nThis would only use the main process if save_on_each_node is false, otherwise only the local main processes. Which I think is the intended behavior. The part I'm not sure of is if the renamed file is used later downstream, then that could introduce a race condition there...\r\n\r\nIt would be nice if we could have an fsync for the shared filesystem to ensure the rename actually completed.\r\n",
"It is, so we could have a race condition. An `fsync` could be done certainly and your logic makes sense. @tblattner would you like to open a PR on this by chance? "
] | 1,702 | 1,705 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-6.2.0-1017-azure-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
@muellerzr and @pacman100 I found that when launching the example trainer code on multiple nodes, it raises a FileNotFoundError when saving a checkpoint. After debugging, I think the cause is in `trainer.py` L2382:
```python
if staging_output_dir != output_dir:
os.rename(staging_output_dir, output_dir)
```
When one process renames the folder, the other processes encounter the FileNotFoundError. Maybe one can modify the code like this to avoid the error:
```python
if self.args.should_save and staging_output_dir != output_dir:
os.rename(staging_output_dir, output_dir)
```
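As an illustration of why the guard helps, here is a minimal, self-contained sketch; the function name `finalize_checkpoint` and the `is_main_process` flag are placeholders for this example, not Trainer internals:

```python
import os
import tempfile

def finalize_checkpoint(staging_dir, final_dir, is_main_process):
    # Only the main process renames; replicas skip it, so they never race
    # on a directory that another rank has already moved away.
    if staging_dir != final_dir and is_main_process:
        os.rename(staging_dir, final_dir)
    # In a real run, every rank would synchronize here (e.g. with
    # torch.distributed.barrier()) before touching final_dir.

run_dir = tempfile.mkdtemp()
staging = os.path.join(run_dir, "tmp-checkpoint-10")
final = os.path.join(run_dir, "checkpoint-10")
os.makedirs(staging)

finalize_checkpoint(staging, final, is_main_process=True)   # rank 0 renames
finalize_checkpoint(staging, final, is_main_process=False)  # replica: no-op, no FileNotFoundError
print(os.path.isdir(final))  # True
```

Without the rank guard, every process executes the `os.rename`, and whichever ranks arrive after the first one find the staging directory already gone.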
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the MAE training code from the example folder.
### Expected behavior
Solve the FileNotFound error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27925/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/27925/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27924 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27924/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27924/comments | https://api.github.com/repos/huggingface/transformers/issues/27924/events | https://github.com/huggingface/transformers/pull/27924 | 2,033,894,083 | PR_kwDOCUB6oc5hltZg | 27,924 | Adding FA2 support for MusicGen | {
"login": "staghado",
"id": 84044788,
"node_id": "MDQ6VXNlcjg0MDQ0Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/84044788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/staghado",
"html_url": "https://github.com/staghado",
"followers_url": "https://api.github.com/users/staghado/followers",
"following_url": "https://api.github.com/users/staghado/following{/other_user}",
"gists_url": "https://api.github.com/users/staghado/gists{/gist_id}",
"starred_url": "https://api.github.com/users/staghado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/staghado/subscriptions",
"organizations_url": "https://api.github.com/users/staghado/orgs",
"repos_url": "https://api.github.com/users/staghado/repos",
"events_url": "https://api.github.com/users/staghado/events{/privacy}",
"received_events_url": "https://api.github.com/users/staghado/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Hey @staghado, thanks for taking care of this, let us know when it's ready to be reviewed!",
"I have conducted some tests on an A10 GPU : \r\n - The code seems to work without errors when `_supports_flash_attn_2` is set to `True` for `MusicgenForConditionalGeneration` but does not load the model with FA2 if not specified by hand. Maybe it needs to be added at the class level in MusicgenForConditionalGeneration?\r\n - There is no difference in generation speed between eager attention and FA2 : \r\n \r\n![Screenshot from 2024-01-06 21-50-49](https://github.com/huggingface/transformers/assets/84044788/d93f298d-32e5-4345-b054-e5335ac266fe)\r\n\r\n",
"cc @ylacombe could you possibly circle back here when you get the chance!",
"Hi @ylacombe,\r\n\r\nI confirm that the model was instantiated as described [here](https://huggingface.co./docs/transformers/v4.36.1/en/perf_infer_gpu_one#flashattention-2) with the exception of `torch_dtype=torch.float16` instead of `torch_dtype=torch.bfloat16` because some operations did not seem to implement bfloat16.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,708 | null | CONTRIBUTOR | null | # What does this PR do?
This PR adds Flash Attention 2 support for the MusicGen model. It is based on the Bart example and is a WIP for now.
I could not test the model because FA2 is not yet supported on T4 GPUs.
Fixes #27552
@sanchit-gandhi @ylacombe | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27924/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27924",
"html_url": "https://github.com/huggingface/transformers/pull/27924",
"diff_url": "https://github.com/huggingface/transformers/pull/27924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27924.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27926 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27926/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27926/comments | https://api.github.com/repos/huggingface/transformers/issues/27926/events | https://github.com/huggingface/transformers/issues/27926 | 2,033,979,191 | I_kwDOCUB6oc55PA83 | 27,926 | Can't get add_generation_prompt to work correctly in apply_chat_template | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"repos_url": "https://api.github.com/users/odellus/repos",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @Rocketknight1 if you have time to look at this! ",
"I'm using `transformers==4.34.0`",
"Upgrading to 4.35.2 fixed this issue for me. Closing."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | I'm having trouble getting the `add_generation_prompt` feature working with `tokenizer.apply_chat_template`. I'm working with stablelm-zephyr-3b right now. I raised an issue on their HF model page, but I don't think the problem is with their chat template. Their chat template looks correct.
https://huggingface.co./stabilityai/stablelm-zephyr-3b/discussions/9
Discussion reproduced here so you don't have to click through:
Not able to get `tokenizer.apply_chat_template` to append the generation prompt for stablelm-zephyr-3b
```python
print(tokenizer.chat_template)
"{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] == 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{ '<|assistant|>' }}\n{% endif %}\n{% endfor %}"
chat = [{'role': 'system', 'content': 'You are an excellent C++ programmer'}, {'role': 'user', 'content': 'Write a program to compute pairwise distances between atoms in a PDB file'}]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
'<|system|>\nYou are an excellent C++ programmer<|endoftext|>\n<|user|>\nWrite a program to compute pairwise distances between atoms in a PDB file<|endoftext|>\n'
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
'<|system|>\nYou are an excellent C++ programmer<|endoftext|>\n<|user|>\nWrite a program to compute pairwise distances between atoms in a PDB file<|endoftext|>\n'
```
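On a fixed version, the template does honor `add_generation_prompt`. As a sketch (an illustration, not the tokenizer code path), the reported template can be rendered standalone with jinja2, using `trim_blocks`/`lstrip_blocks` as `transformers` does when compiling chat templates:

```python
from jinja2 import Environment

# Template copied from the report above; '\n' inside the Jinja string
# literals stands for a real newline, as in the stored chat_template.
template_str = (
    "{% for message in messages %}\n"
    "{% if message['role'] == 'user' %}\n"
    "{{ '<|user|>\n' + message['content'] + eos_token }}\n"
    "{% elif message['role'] == 'system' %}\n"
    "{{ '<|system|>\n' + message['content'] + eos_token }}\n"
    "{% elif message['role'] == 'assistant' %}\n"
    "{{ '<|assistant|>\n' + message['content'] + eos_token }}\n"
    "{% endif %}\n"
    "{% if loop.last and add_generation_prompt %}\n"
    "{{ '<|assistant|>' }}\n"
    "{% endif %}\n"
    "{% endfor %}"
)

tmpl = Environment(trim_blocks=True, lstrip_blocks=True).from_string(template_str)
chat = [{"role": "user", "content": "hi"}]
out_true = tmpl.render(messages=chat, add_generation_prompt=True, eos_token="<|endoftext|>")
out_false = tmpl.render(messages=chat, add_generation_prompt=False, eos_token="<|endoftext|>")
print(repr(out_true))
print(repr(out_false))
```

Rendered this way, `add_generation_prompt=True` appends a trailing `<|assistant|>`, so the silent no-op reported above points at the installed `transformers` version rather than at the template.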
Could this be an issue with the tokenizer module? The chat template looks right. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27926/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27923 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27923/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27923/comments | https://api.github.com/repos/huggingface/transformers/issues/27923/events | https://github.com/huggingface/transformers/issues/27923 | 2,033,804,530 | I_kwDOCUB6oc55OWTy | 27,923 | SafetensorError: Error while deserializing header: HeaderTooLarge | {
"login": "KyrieCui",
"id": 37808472,
"node_id": "MDQ6VXNlcjM3ODA4NDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/37808472?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KyrieCui",
"html_url": "https://github.com/KyrieCui",
"followers_url": "https://api.github.com/users/KyrieCui/followers",
"following_url": "https://api.github.com/users/KyrieCui/following{/other_user}",
"gists_url": "https://api.github.com/users/KyrieCui/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KyrieCui/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KyrieCui/subscriptions",
"organizations_url": "https://api.github.com/users/KyrieCui/orgs",
"repos_url": "https://api.github.com/users/KyrieCui/repos",
"events_url": "https://api.github.com/users/KyrieCui/events{/privacy}",
"received_events_url": "https://api.github.com/users/KyrieCui/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, could you try to use the latest release of safetensors? \r\nOtherwise not sure I can help you with a checkpoint that is private ! ",
"Hello, Sir. \r\nI met this error when loading the 6th checkpoint, so I re-downloaded the sixth safetensors file. But still the same error. Should I try to re- download all of the safetensors files?",
"If you download them from the hub anyway, you should just use from_pretrained, they will automatically be cache in your .cache and re-used accordingly. ",
"@KyrieCui this might be a storage issue as well, can you make sure you have enough Disk space on your device?",
"thanks for all of your support! I fixed this issue by re-downloading all of the safetensors files. "
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
transformers version: 4.35.0
Platform: Linux-4.18.0-477.27.1.el8_8.x86_64.x86_64-x86_64-with-glibc2.28
Python version: 3.9.16
Huggingface_hub version: 0.16.4
Accelerate version: 0.21.0
Safetensors version: 0.3.1
PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@ArthurZucker @SunMarc @younesbelkada
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
base_model = '/llm/llama2-2-70b-chat-hf'
model = AutoModelForCausalLM.from_pretrained(base_model, load_in_8bit=True, device_map={"": 0}, use_safetensors=True)
in load_state_dict(checkpoint_file)
    462 """
    463 Reads a PyTorch checkpoint file, returning properly formatted errors if they arise.
    464 """
    465 if checkpoint_file.endswith(".safetensors") and is_safetensors_available():
--> 466     with safe_open(checkpoint_file, framework="pt") as f:
    467         metadata = f.metadata()
SafetensorError: Error while deserializing header: HeaderTooLarge
```
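When this error comes from a corrupt or partial download (as it did here — re-downloading fixed it), a shard can be sanity-checked without loading it. The sketch below relies only on the published safetensors layout (an 8-byte little-endian header length followed by that many bytes of JSON); `check_safetensors_header` is a hypothetical helper, not part of the `safetensors` API:

```python
import json
import os
import struct
import tempfile

def check_safetensors_header(path, max_header_bytes=100 * 1024 * 1024):
    # A .safetensors file begins with an 8-byte little-endian length N,
    # followed by N bytes of JSON metadata. A truncated download or an HTML
    # error page saved under the .safetensors name yields a nonsense N,
    # which safetensors reports as "HeaderTooLarge".
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            return False
        (header_len,) = struct.unpack("<Q", prefix)
        if header_len > max_header_bytes:
            return False
        try:
            json.loads(f.read(header_len))
        except (ValueError, UnicodeDecodeError):
            return False
    return True

tmp = tempfile.mkdtemp()
good = os.path.join(tmp, "good.safetensors")
header = json.dumps({"__metadata__": {}}).encode()
with open(good, "wb") as f:
    f.write(struct.pack("<Q", len(header)) + header)

bad = os.path.join(tmp, "bad.safetensors")
with open(bad, "wb") as f:
    f.write(b"<html>404 Not Found</html>")  # what a failed download can look like

ok_good = check_safetensors_header(good)
ok_bad = check_safetensors_header(bad)
print(ok_good, ok_bad)  # True False
```

Any shard that fails this check should be re-downloaded; fetching with `from_pretrained` re-uses the copy cached under `~/.cache` once it is intact.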
### Expected behavior
Expected to load the model successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27923/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27922 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27922/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27922/comments | https://api.github.com/repos/huggingface/transformers/issues/27922/events | https://github.com/huggingface/transformers/issues/27922 | 2,033,681,878 | I_kwDOCUB6oc55N4XW | 27,922 | add system prompt option in .apply_chat_template() | {
"login": "ONE-THING-9",
"id": 123763769,
"node_id": "U_kgDOB2B8OQ",
"avatar_url": "https://avatars.githubusercontent.com/u/123763769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ONE-THING-9",
"html_url": "https://github.com/ONE-THING-9",
"followers_url": "https://api.github.com/users/ONE-THING-9/followers",
"following_url": "https://api.github.com/users/ONE-THING-9/following{/other_user}",
"gists_url": "https://api.github.com/users/ONE-THING-9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ONE-THING-9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ONE-THING-9/subscriptions",
"organizations_url": "https://api.github.com/users/ONE-THING-9/orgs",
"repos_url": "https://api.github.com/users/ONE-THING-9/repos",
"events_url": "https://api.github.com/users/ONE-THING-9/events{/privacy}",
"received_events_url": "https://api.github.com/users/ONE-THING-9/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, I think it already supports this! see [here](https://huggingface.co./docs/transformers/chat_templating) (you just need to customize the roles) ",
"Got it, i missed that.",
"There is a big problem here. Some of these models reject the system prompt in their chat template. It's wildly inconsistent and there doesn't seem to be a way to tell if a model accepts a system prompt or not. You end up getting this error if you pass in a system prompt and format it using the chat template function:\r\n\r\njinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/\r\n\r\nI'm really surprised I haven't seen anybody raise this. Am I missing something? How do you tell if a model takes a system prompt or not? ",
"Thanks for the feedback! cc @Rocketknight1 if that's something you noticed",
"Hi @unoriginalscreenname - unfortunately, this is an unavoidable consequence of how these models were trained. Some models were trained with system prompts as part of the training data, and other models were not. When a model was not trained with a system prompt, it will not have any tokens that it can use to represent a system prompt, and trying to insert a prompt will confuse the model and probably significantly reduce the output quality.\r\n\r\nIn the cases when a model was trained without a system prompt, the model's chat template can be configured to raise an error if a `system` message is included in the input, and this is indeed what happens with some models (e.g. LLaMA/Mistral/Mixtral). This is correct and intended behaviour, and there isn't really any way to \"fix\" it without retraining the models!\r\n\r\nThe only solution I can suggest is that there is usually a different fine-tune of most models that supports a system prompt. For example, instead of Mistral-instruct you can use [Zephyr-beta](https://huggingface.co./HuggingFaceH4/zephyr-7b-beta), and instead of Mixtral-instruct you can use [Dolphin](https://huggingface.co./cognitivecomputations/dolphin-2.7-mixtral-8x7b). Both of these models were trained with system prompts, and will understand them correctly (and apply them in their chat template)."
] | 1,702 | 1,704 | 1,703 | NONE | null | ### Feature request
Currently, I cannot find the option of adding a system prompt while doing tokenizer.apply_chat_template().
### Motivation
Because of this, I have to avoid using apply_chat_template().
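For illustration, a minimal pure-Python sketch of how a chat template typically renders a leading `system` message; this is a simplified, hypothetical stand-in for `tokenizer.apply_chat_template`, not the real implementation (the `<|role|>` tags are made up):

```python
def render_chat(messages):
    # Simplified stand-in for tokenizer.apply_chat_template: every message,
    # including an optional leading system message, becomes a tagged block.
    return "".join(f"<|{m['role']}|>\n{m['content']}\n" for m in messages)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(render_chat(messages))
```

With a real checkpoint whose template was trained with system prompts, the same `messages` list goes straight into `tokenizer.apply_chat_template(messages, tokenize=False)`.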
### Your contribution
we can add this in {'role':'system'..........} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27922/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27921 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27921/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27921/comments | https://api.github.com/repos/huggingface/transformers/issues/27921/events | https://github.com/huggingface/transformers/pull/27921 | 2,033,645,816 | PR_kwDOCUB6oc5hk5mU | 27,921 | Add LayoutLM processor | {
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The tests are failing because when I copy LayoutLMv2Tokenizer to LayoutLMTokenizer, `bbox` input is required. To preserve backward-compatibility, we need to check if `bbox` is None, and apply the old logic in that case. I'm not familiar with the tokenizer code, so help would be appreciated 🙏.",
"(feel free to ping me again for another review 😉 )",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27826
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@ArthurZucker and @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27921/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27921",
"html_url": "https://github.com/huggingface/transformers/pull/27921",
"diff_url": "https://github.com/huggingface/transformers/pull/27921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27921.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27920 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27920/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27920/comments | https://api.github.com/repos/huggingface/transformers/issues/27920/events | https://github.com/huggingface/transformers/pull/27920 | 2,033,635,445 | PR_kwDOCUB6oc5hk3VO | 27,920 | fixed typos (issue 27919) | {
"login": "asusevski",
"id": 77211520,
"node_id": "MDQ6VXNlcjc3MjExNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/77211520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asusevski",
"html_url": "https://github.com/asusevski",
"followers_url": "https://api.github.com/users/asusevski/followers",
"following_url": "https://api.github.com/users/asusevski/following{/other_user}",
"gists_url": "https://api.github.com/users/asusevski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asusevski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asusevski/subscriptions",
"organizations_url": "https://api.github.com/users/asusevski/orgs",
"repos_url": "https://api.github.com/users/asusevski/repos",
"events_url": "https://api.github.com/users/asusevski/events{/privacy}",
"received_events_url": "https://api.github.com/users/asusevski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes #27919
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu and @MKhalusova and @merve
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27920/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27920",
"html_url": "https://github.com/huggingface/transformers/pull/27920",
"diff_url": "https://github.com/huggingface/transformers/pull/27920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27920.patch",
"merged_at": 1702338263000
} |
https://api.github.com/repos/huggingface/transformers/issues/27919 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27919/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27919/comments | https://api.github.com/repos/huggingface/transformers/issues/27919/events | https://github.com/huggingface/transformers/issues/27919 | 2,033,624,429 | I_kwDOCUB6oc55NqVt | 27,919 | Typos with Knowledge Distillation for Computer Vision documentation | {
"login": "asusevski",
"id": 77211520,
"node_id": "MDQ6VXNlcjc3MjExNTIw",
"avatar_url": "https://avatars.githubusercontent.com/u/77211520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asusevski",
"html_url": "https://github.com/asusevski",
"followers_url": "https://api.github.com/users/asusevski/followers",
"following_url": "https://api.github.com/users/asusevski/following{/other_user}",
"gists_url": "https://api.github.com/users/asusevski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asusevski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asusevski/subscriptions",
"organizations_url": "https://api.github.com/users/asusevski/orgs",
"repos_url": "https://api.github.com/users/asusevski/repos",
"events_url": "https://api.github.com/users/asusevski/events{/privacy}",
"received_events_url": "https://api.github.com/users/asusevski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.15.133.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@stevhliu and @MKhalusova and @merve
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Issue 1**: ```NameError: name 'teacher_extractor' is not defined```
```
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
trainer = ImageDistilTrainer(
    student_model=student_model,
    teacher_model=teacher_model,
    training_args=training_args,
    train_dataset=processed_datasets["train"],
    eval_dataset=processed_datasets["validation"],
    data_collator=data_collator,
    tokenizer=teacher_extractor,
    compute_metrics=compute_metrics,
    temperature=5,
    lambda_param=0.5
)
```
**Issue 2**: Trainer doesn't initialize
```
class ImageDistilTrainer(Trainer):
    def __init__(self, *args, teacher_model=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.teacher = teacher_model
        self.student = student_model
        self.loss_function = nn.KLDivLoss(reduction="batchmean")
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.teacher.to(device)
        self.teacher.eval()
        self.temperature = temperature
        self.lambda_param = lambda_param
```
### Expected behavior
**Issue 1**: ```teacher_extractor``` should be ```teacher_processor```
**Issue 2**: ```ImageDistilTrainer``` should be:
```
class ImageDistilTrainer(Trainer):
    def __init__(self, teacher_model=None, student_model=None, temperature=None, lambda_param=None, *args, **kwargs):
        super().__init__(model=student_model, *args, **kwargs)
        self.teacher = teacher_model
        self.student = student_model
        self.loss_function = nn.KLDivLoss(reduction="batchmean")
        device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
        self.teacher.to(device)
        self.teacher.eval()
        self.temperature = temperature
        self.lambda_param = lambda_param
```
Will raise PR for both fixes! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27919/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27918 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27918/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27918/comments | https://api.github.com/repos/huggingface/transformers/issues/27918/events | https://github.com/huggingface/transformers/pull/27918 | 2,033,540,619 | PR_kwDOCUB6oc5hkjQ5 | 27,918 | Fix typo | {
"login": "f4hy",
"id": 36440,
"node_id": "MDQ6VXNlcjM2NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f4hy",
"html_url": "https://github.com/f4hy",
"followers_url": "https://api.github.com/users/f4hy/followers",
"following_url": "https://api.github.com/users/f4hy/following{/other_user}",
"gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f4hy/subscriptions",
"organizations_url": "https://api.github.com/users/f4hy/orgs",
"repos_url": "https://api.github.com/users/f4hy/repos",
"events_url": "https://api.github.com/users/f4hy/events{/privacy}",
"received_events_url": "https://api.github.com/users/f4hy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"thanks"
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27918/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27918",
"html_url": "https://github.com/huggingface/transformers/pull/27918",
"diff_url": "https://github.com/huggingface/transformers/pull/27918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27918.patch",
"merged_at": 1702119564000
} |
https://api.github.com/repos/huggingface/transformers/issues/27917 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27917/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27917/comments | https://api.github.com/repos/huggingface/transformers/issues/27917/events | https://github.com/huggingface/transformers/issues/27917 | 2,033,530,579 | I_kwDOCUB6oc55NTbT | 27,917 | LLava not working with accelerate dispatch: "Expected all tensors to be on the same device" | {
"login": "py4",
"id": 747819,
"node_id": "MDQ6VXNlcjc0NzgxOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/747819?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/py4",
"html_url": "https://github.com/py4",
"followers_url": "https://api.github.com/users/py4/followers",
"following_url": "https://api.github.com/users/py4/following{/other_user}",
"gists_url": "https://api.github.com/users/py4/gists{/gist_id}",
"starred_url": "https://api.github.com/users/py4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/py4/subscriptions",
"organizations_url": "https://api.github.com/users/py4/orgs",
"repos_url": "https://api.github.com/users/py4/repos",
"events_url": "https://api.github.com/users/py4/events{/privacy}",
"received_events_url": "https://api.github.com/users/py4/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada I think this worked / works on main no? \r\n\r\nFYI @gante and @tomaarsen we'll work on a fix with @younesbelkada "
] | 1,702 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- PyTorch version (GPU?): 2.1.1+cu121 (True)
### Who can help?
@pacman100 @ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "llava-hf/llava-1.5-7b-hf"
prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map='auto'
)
processor = AutoProcessor.from_pretrained(model_id)
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to('cuda', torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
```
### Expected behavior
It should produce the output, but instead I get the following error. I believe something [similar to this](https://github.com/huggingface/transformers/issues/24410#issuecomment-1603133017) is needed to fix it:
```
return func(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/generation/utils.py", line 1718, in generate
return self.greedy_search(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/generation/utils.py", line 2579, in greedy_search
outputs = self(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llava/modeling_llava.py", line 433, in forward
outputs = self.language_model(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 1174, in forward
outputs = self.model(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 1061, in forward
layer_outputs = decoder_layer(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 789, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward
output = module._old_forward(*args, **kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 408, in forward
key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/cache_utils.py", line 127, in update
self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=-2)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27917/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27916 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27916/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27916/comments | https://api.github.com/repos/huggingface/transformers/issues/27916/events | https://github.com/huggingface/transformers/issues/27916 | 2,033,389,088 | I_kwDOCUB6oc55Mw4g | 27,916 | Question about the output of the decision transformer | {
"login": "Pulsar110",
"id": 125087940,
"node_id": "U_kgDOB3SwxA",
"avatar_url": "https://avatars.githubusercontent.com/u/125087940?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pulsar110",
"html_url": "https://github.com/Pulsar110",
"followers_url": "https://api.github.com/users/Pulsar110/followers",
"following_url": "https://api.github.com/users/Pulsar110/following{/other_user}",
"gists_url": "https://api.github.com/users/Pulsar110/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pulsar110/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pulsar110/subscriptions",
"organizations_url": "https://api.github.com/users/Pulsar110/orgs",
"repos_url": "https://api.github.com/users/Pulsar110/repos",
"events_url": "https://api.github.com/users/Pulsar110/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pulsar110/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"Thank you. I have created a post here: https://discuss.huggingface.co/t/question-about-the-output-of-the-decision-transformer/65384\r\nSo far no one has commented there. I'm not sure if there is a bug in the code, or maybe I do not understand it correctly, that's also why I wanted to post here. ",
"I don't know this model at all so pinging @edbeeching the author of the PR! ",
"Hi @Pulsar110 , thanks for your question. It would probably be best to reach out to the authors with this question as our implementation aims to match the author's codebase: https://github.com/kzl/decision-transformer/blob/e2d82e68f330c00f763507b3b01d774740bee53f/gym/decision_transformer/models/decision_transformer.py#L97\r\n\r\nIf I were to hazard a guess I would think that there is a mistake in their implementation and we should be indexing entry 0 at some point. \r\n\r\nLet us know what they say and perhaps we can update our implementation with any changes they suggest. I will close the issue for now but feel free to reopen it with more questions or if you hear back from them.\r\n"
] | 1,702 | 1,703 | 1,703 | NONE | null | From the code in here: https://github.com/huggingface/transformers/blob/v4.35.2/src/transformers/models/decision_transformer/modeling_decision_transformer.py#L920-L927
```
# reshape x so that the second dimension corresponds to the original
# returns (0), states (1), or actions (2); i.e. x[:,1,t] is the token for s_t
x = x.reshape(batch_size, seq_length, 3, self.hidden_size).permute(0, 2, 1, 3)
# get predictions
return_preds = self.predict_return(x[:, 2]) # predict next return given state and action
state_preds = self.predict_state(x[:, 2]) # predict next state given state and action
action_preds = self.predict_action(x[:, 1]) # predict next action given state
```
I'm not sure I understand why ` self.predict_return(x[:, 2])` or `self.predict_state(x[:, 2])` is predicting the return/next state given the state and action. From the comment on the top, `x[:, 2]` is only the action? Am I missing something?
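For anyone following the indexing, here is a minimal NumPy sketch (made-up shapes, mirroring the torch `reshape`/`permute` above) of how the interleaved return/state/action tokens end up separated along axis 1:

```python
import numpy as np

# The GPT-2 backbone sees the interleaved sequence R_1, s_1, a_1, R_2, s_2, a_2, ...
# Label each position with a unique value so we can track where it lands.
batch_size, seq_length, hidden_size = 2, 4, 8
x = np.arange(batch_size * 3 * seq_length * hidden_size, dtype=np.float64)
x = x.reshape(batch_size, 3 * seq_length, hidden_size)  # interleaved token stream

# Same operation as x.reshape(b, t, 3, h).permute(0, 2, 1, 3) in torch:
x = x.reshape(batch_size, seq_length, 3, hidden_size).transpose(0, 2, 1, 3)

# Now x[:, 0, t] is the return token R_t, x[:, 1, t] the state token s_t,
# and x[:, 2, t] the action token a_t.
assert x.shape == (batch_size, 3, seq_length, hidden_size)
assert x[0, 1, 0, 0] == 8.0  # s_1 was the 2nd token in the interleaved stream
```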
And if this code is correct, what is the use of `x[:, 0]`? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27916/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27915 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27915/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27915/comments | https://api.github.com/repos/huggingface/transformers/issues/27915/events | https://github.com/huggingface/transformers/issues/27915 | 2,033,135,988 | I_kwDOCUB6oc55LzF0 | 27,915 | dMoE support | {
"login": "AlpinDale",
"id": 52078762,
"node_id": "MDQ6VXNlcjUyMDc4NzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/52078762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AlpinDale",
"html_url": "https://github.com/AlpinDale",
"followers_url": "https://api.github.com/users/AlpinDale/followers",
"following_url": "https://api.github.com/users/AlpinDale/following{/other_user}",
"gists_url": "https://api.github.com/users/AlpinDale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AlpinDale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlpinDale/subscriptions",
"organizations_url": "https://api.github.com/users/AlpinDale/orgs",
"repos_url": "https://api.github.com/users/AlpinDale/repos",
"events_url": "https://api.github.com/users/AlpinDale/events{/privacy}",
"received_events_url": "https://api.github.com/users/AlpinDale/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"I've been using a quick drop-in replacement lifted from the dmoe.py implementation from megablocks.\r\n\r\nRunning `torch.cat` for the expert weights on each forward pass adds a ~5% overhead, since I didn't want to deal with managing the state dicts. Overall training is 2-3x faster.\r\n\r\n```python\r\nclass MixtralSparseMoeBlock(torch.nn.Module):\r\n def __init__(self, config: MixtralConfig):\r\n super(MixtralSparseMoeBlock, self).__init__()\r\n\r\n self.config = config\r\n\r\n self.hidden_dim = config.hidden_size\r\n self.ffn_dim = config.intermediate_size\r\n self.num_experts = config.num_local_experts\r\n self.top_k = config.num_experts_per_tok\r\n\r\n self.gate = nn.Linear(config.hidden_size, config.num_local_experts, bias=False)\r\n self.experts = nn.ModuleList(\r\n [MixtralBLockSparseTop2MLP(config) for _ in range(self.num_experts)]\r\n )\r\n\r\n self.sort_end_bit = max(int(np.ceil(np.log2(self.num_experts))), 1)\r\n self.blocking = 128\r\n self.quantize_scatter_num_bits = -1\r\n max_column_index = (self.ffn_dim * self.num_experts) // self.blocking\r\n self.transpose_sort_end_bit = max(int(np.ceil(np.log2(max_column_index))), 1)\r\n\r\n # From https://github.com/stanford-futuredata/megablocks/blob/7c25169ce87c32c31e8845ef34785d3095b1a2cb/megablocks/layers/dmoe.py#L31\r\n def sparse_transpose(self, size, row_indices, column_indices):\r\n block_columns = size[1] // self.blocking\r\n\r\n # Sort row indices by column indices to get the transposed matrix's\r\n # column indices.\r\n #\r\n # NOTE: Our sort operation uses the same width indices as the input values.\r\n # To avoid overflow when we have large activation matrices we cast to\r\n # 32-bit before sorting.\r\n _, gather_indices = ops.sort(column_indices.int(), self.transpose_sort_end_bit)\r\n\r\n # There are a constant number of blocks in every row of the sparse matrix.\r\n # A blocks offset is:\r\n #\r\n # row_index * blocks_per_row + column_index % blocks_per_row\r\n #\r\n # Once we have the block 
offsets ordered for transposition we can divide\r\n # by blocks_per_row to get the transposed column indices.\r\n column_indices_t = row_indices.gather(0, gather_indices.long())\r\n block_offsets_t = gather_indices.int()\r\n\r\n zero = torch.zeros((1,), dtype=torch.int32, device=row_indices.device)\r\n nnz_per_column = ops.histogram(column_indices, block_columns)\r\n nnz_per_column = ops.inclusive_cumsum(nnz_per_column, 0)\r\n offsets_t = torch.cat([zero, nnz_per_column])\r\n return column_indices_t, offsets_t, block_offsets_t\r\n\r\n # From https://github.com/stanford-futuredata/megablocks/blob/7c25169ce87c32c31e8845ef34785d3095b1a2cb/megablocks/layers/dmoe.py#L59\r\n def topology(self, x: torch.Tensor, padded_bins: torch.Tensor):\r\n padded_tokens, _ = x.size()\r\n assert padded_tokens % self.blocking == 0\r\n assert self.ffn_dim % self.blocking == 0\r\n\r\n # Offsets for the sparse matrix. All rows have the\r\n # same number of nonzero blocks dictated by the\r\n # dimensionality of a single expert.\r\n block_rows = padded_tokens // self.blocking\r\n blocks_per_row = self.ffn_dim // self.blocking\r\n offsets = torch.arange(\r\n 0,\r\n block_rows * blocks_per_row + 1,\r\n blocks_per_row,\r\n dtype=torch.int32,\r\n device=x.device,\r\n )\r\n\r\n # Indices for the sparse matrix. The indices for\r\n # the intermediate matrix are dynamic depending\r\n # on the mapping of tokens to experts.\r\n column_indices = ops.topology(\r\n padded_bins, self.blocking, block_rows, blocks_per_row\r\n )\r\n\r\n # TODO(tgale): This is unused. 
Remove the need for this in stk.\r\n # For now, use meta init to save the device memory.\r\n data = torch.empty(\r\n column_indices.numel(),\r\n self.blocking,\r\n self.blocking,\r\n dtype=x.dtype,\r\n device=\"meta\",\r\n )\r\n shape = (padded_tokens, self.ffn_dim * self.num_experts)\r\n row_indices = stk.ops.row_indices(shape, data, offsets, column_indices)\r\n column_indices_t, offsets_t, block_offsets_t = self.sparse_transpose(\r\n shape, row_indices, column_indices\r\n )\r\n return stk.Matrix(\r\n shape,\r\n data,\r\n row_indices,\r\n column_indices,\r\n offsets,\r\n column_indices_t,\r\n offsets_t,\r\n block_offsets_t,\r\n )\r\n\r\n # From https://github.com/stanford-futuredata/megablocks/blob/7c25169ce87c32c31e8845ef34785d3095b1a2cb/megablocks/layers/dmoe.py#L103\r\n def indices_and_padded_bins(self, top_experts: torch.Tensor):\r\n # Sort the expert ids to produce the scatter/gather\r\n # indices for the permutation.\r\n top_experts = top_experts.int()\r\n bin_ids, indices = ops.sort(top_experts, self.sort_end_bit)\r\n\r\n # Histogram the expert ids to identify the number of\r\n # tokens routed to each expert.\r\n tokens_per_expert = ops.histogram(top_experts, self.num_experts)\r\n\r\n # Round the token counts up to the block size used in\r\n # the matrix muliplications. 
Caculate the starting\r\n # position of each bin.\r\n padded_tokens_per_expert = ops.round_up(tokens_per_expert, self.blocking)\r\n padded_bins = ops.inclusive_cumsum(padded_tokens_per_expert, 0)\r\n padded_bins = promote_scalar(padded_bins)\r\n\r\n # Calculate the bin bounds for the sorted tokens.\r\n bins = ops.inclusive_cumsum(tokens_per_expert, 0)\r\n bins = promote_scalar(bins)\r\n return indices, bin_ids, bins, padded_bins, tokens_per_expert\r\n\r\n # From https://github.com/stanford-futuredata/megablocks/blob/7c25169ce87c32c31e8845ef34785d3095b1a2cb/megablocks/layers/dmoe.py#L126\r\n def sparse_forward(\r\n self,\r\n hidden_states: torch.Tensor,\r\n expert_weights: torch.Tensor,\r\n top_experts: torch.Tensor,\r\n ):\r\n # x: [sl, bs, hs]\r\n # expert_weights: [sl * bs, top-k]\r\n # top_experts: [sl * bs, top-k]\r\n expert_weights = expert_weights.flatten().to(hidden_states.dtype)\r\n top_experts = top_experts.flatten()\r\n\r\n with torch.no_grad():\r\n (\r\n indices,\r\n bin_ids,\r\n bins,\r\n padded_bins,\r\n _,\r\n ) = self.indices_and_padded_bins(top_experts)\r\n\r\n # Permute tokens and pad to prepare expert computation\r\n # (top_k * sequence_length padding, model_dim)\r\n # Route the tokens for MoE computation.\r\n hidden_states = ops.padded_gather(\r\n hidden_states, indices, bin_ids, bins, padded_bins, self.top_k\r\n )\r\n\r\n # Create the sparse matrix topology\r\n with torch.no_grad():\r\n topo = self.topology(hidden_states, padded_bins)\r\n\r\n w1 = torch.cat([expert.w1.weight.T for expert in self.experts], dim=1)\r\n w2 = torch.cat([expert.w2.weight for expert in self.experts], dim=1).T\r\n w3 = torch.cat([expert.w3.weight.T for expert in self.experts], dim=1)\r\n\r\n # Perform the expert computation\r\n hidden_states = stk.Matrix( # type: ignore\r\n topo.size(),\r\n F.silu(stk.ops.sdd(hidden_states, w1, topo).data)\r\n * stk.ops.sdd(hidden_states, w3, topo).data,\r\n topo.row_indices,\r\n topo.column_indices,\r\n topo.offsets,\r\n 
topo.column_indices_t,\r\n topo.offsets_t,\r\n topo.block_offsets_t,\r\n )\r\n hidden_states = stk.ops.dsd(hidden_states, w2)\r\n\r\n # Permute back and remove padding\r\n # (top_k * sequence_length, model_dim)\r\n hidden_states: torch.Tensor = ops.padded_scatter( # type: ignore\r\n hidden_states,\r\n indices,\r\n bin_ids,\r\n expert_weights,\r\n bins,\r\n padded_bins,\r\n self.top_k,\r\n self.quantize_scatter_num_bits,\r\n )\r\n return hidden_states\r\n\r\n def forward(self, hidden_states: torch.Tensor):\r\n orig_shape = hidden_states.shape\r\n batch_size, sequence_length, hidden_dim = orig_shape\r\n\r\n hidden_states = hidden_states.view(-1, hidden_dim)\r\n\r\n router_logits = self.gate(hidden_states)\r\n\r\n routing_weights = router_logits.softmax(dim=-1).to(hidden_states.dtype)\r\n routing_weights, expert_indices = torch.topk(\r\n routing_weights, self.top_k, dim=-1\r\n )\r\n routing_weights /= routing_weights.sum(dim=-1, keepdim=True)\r\n\r\n hidden_states = self.sparse_forward(\r\n hidden_states, routing_weights, expert_indices\r\n )\r\n\r\n return hidden_states.view(*orig_shape), router_logits\r\n```",
"@kevinhu what do you think would be the best way to modify the state dict to avoid the `.cat`s? Merging the individual w1, w2, and w3 tensors that are currently in a list of `MixtralBlockSparseTop2MLP` into `w1, w3=nn.Linear(self.hidden_dim, self.ffn_dim * self.num_experts)` and `w2=nn.Linear(self.ffn_dim * self.num_experts, self.hidden_dim)` under the `MixtralSparseMoeBlock` and then using `.view`s of it for each expert? Since you're effectively just using the `MixtralBLockSparseTop2MLP` class as a dataclass for storing the expert weights and not actually using its `forward()` method."
] | 1,702 | 1,703 | null | CONTRIBUTOR | null | ### Feature request
MistralAI recently [released their new model](https://twitter.com/MistralAI/status/1733150512395038967), a Mixture of Experts based on [megablocks](https://github.com/stanford-futuredata/megablocks), a type of dropless Mixture of Experts.
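For readers unfamiliar with the mechanism megablocks accelerates, here is a minimal NumPy sketch of top-2 expert routing (illustrative only; shapes and names are made up, and this is not Mixtral's or megablocks' implementation):

```python
import numpy as np

# Minimal top-2 routing: a router scores each token against each expert,
# the top-k experts per token are selected, and their weights renormalized.
rng = np.random.default_rng(0)
tokens, num_experts, top_k = 6, 8, 2

logits = rng.normal(size=(tokens, num_experts))            # router output
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
top_idx = np.argsort(-probs, axis=-1)[:, :top_k]           # chosen experts
top_w = np.take_along_axis(probs, top_idx, axis=-1)
top_w = top_w / top_w.sum(-1, keepdims=True)               # renormalize

# A "dropless" MoE routes every token to its chosen experts with no capacity
# limit, instead of dropping tokens that overflow a fixed per-expert buffer.
tokens_per_expert = np.bincount(top_idx.ravel(), minlength=num_experts)
assert tokens_per_expert.sum() == tokens * top_k
```

The megablocks kernels turn this variable-sized routing into block-sparse matrix multiplications so no tokens are dropped and no padding to a fixed capacity is needed.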
### Motivation
It's very likely that the future of open source LLMs will be MoEs. Having it in HF transformers would allow us to use the built-in trainer, as it's unwieldy to use Megatron-LM for the average user who's only ever done QLoRA.
### Your contribution
No clue for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27915/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27915/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27914 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27914/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27914/comments | https://api.github.com/repos/huggingface/transformers/issues/27914/events | https://github.com/huggingface/transformers/pull/27914 | 2,032,919,310 | PR_kwDOCUB6oc5hibuH | 27,914 | Fix: [SeamlessM4T - S2TT] Bug in batch loading of audio in torch.Tensor format in the SeamlessM4TFeatureExtractor class | {
"login": "nicholasneo78",
"id": 45549785,
"node_id": "MDQ6VXNlcjQ1NTQ5Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/45549785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nicholasneo78",
"html_url": "https://github.com/nicholasneo78",
"followers_url": "https://api.github.com/users/nicholasneo78/followers",
"following_url": "https://api.github.com/users/nicholasneo78/following{/other_user}",
"gists_url": "https://api.github.com/users/nicholasneo78/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nicholasneo78/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nicholasneo78/subscriptions",
"organizations_url": "https://api.github.com/users/nicholasneo78/orgs",
"repos_url": "https://api.github.com/users/nicholasneo78/repos",
"events_url": "https://api.github.com/users/nicholasneo78/events{/privacy}",
"received_events_url": "https://api.github.com/users/nicholasneo78/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Awesome @nicholasneo78!",
"cc @ylacombe ",
"Hi @ylacombe,\r\n\r\nYes this works for me. Have updated the code `feature_extraction_seamless_m4t.py` and added test in `test_feature_extraction_seamless_m4t.py`\r\n\r\nThanks for the suggestion!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27914). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,703 | 1,703 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Based on the documentation for the [SeamlessM4TProcessor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/seamless_m4t/processing_seamless_m4t.py#L22) class, it is supposed to accept either `List[np.ndarray]` or `List[torch.Tensor]` for batch decoding when `processor(audios=...)` is called, and then return a batch of transcriptions (the S2TT task of SeamlessM4T). However, when a `List[torch.Tensor]` is passed to the `audios` arg, only one translated transcript is returned even though a batch of audio is passed in. After adding a check for `torch.Tensor` in the `SeamlessM4TFeatureExtractor` class, the translated batch transcripts are returned as expected.
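A hedged sketch of the kind of normalization such a fix adds (illustrative only, not the exact diff merged into `feature_extraction_seamless_m4t.py`): convert torch tensors to NumPy *before* the batch check, so a list of tensors is recognized as a batch instead of being collapsed into a single example.

```python
import numpy as np
import torch

def normalize_audio(raw_speech):
    # Single tensor -> single array.
    if isinstance(raw_speech, torch.Tensor):
        raw_speech = raw_speech.numpy()
    # List of tensors -> list of arrays, BEFORE deciding whether the input
    # is batched; without this step raw_speech[0] is a Tensor, the batch
    # check fails, and the whole list is treated as one example.
    if isinstance(raw_speech, (list, tuple)) and all(isinstance(s, torch.Tensor) for s in raw_speech):
        raw_speech = [s.numpy() for s in raw_speech]
    is_batched = isinstance(raw_speech, (list, tuple)) and isinstance(raw_speech[0], np.ndarray)
    if not is_batched:
        raw_speech = [np.asarray(raw_speech, dtype=np.float32)]
    return raw_speech

batch = [torch.zeros(16000), torch.zeros(16000)]
assert len(normalize_audio(batch)) == 2  # both clips survive as a batch
```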
Below is a code snippet that I use to test the issue:
```python
from transformers import SeamlessM4Tv2Model, SeamlessM4TProcessor
import torch
from datasets import load_dataset
processor = SeamlessM4TProcessor.from_pretrained("facebook/seamless-m4t-v2-large")
model = SeamlessM4Tv2Model.from_pretrained("facebook/seamless-m4t-v2-large", use_safetensors=True)
dataset = load_dataset("arabic_speech_corpus", split="test", streaming=True)
# numpy array as audio_inputs
audio_sample = next(iter(dataset))["audio"]
print(type(audio_sample["array"])) # <class 'numpy.ndarray'>
# get a list of two numpy arrays to simulate batch size=2 when loading the audio arrays
audio_sample_batch = [audio_sample["array"], audio_sample["array"]]
audio_inputs = processor(audios=audio_sample_batch, return_tensors="pt", sampling_rate=16000)
output_tokens = model.generate(**audio_inputs, tgt_lang="eng", generate_speech=False)
translated_text_from_audio = processor.batch_decode(output_tokens[0].tolist(), skip_special_tokens=True)
print(f"Translated text from audio (numpy array): {translated_text_from_audio}\n")
# >>> Translated text from audio (numpy array): ['The first is the fact that the sun is shining brightly on the moon.', 'The first is the fact that the sun is shining brightly on the moon.']
# torch tensors as audio_inputs
torch_tensor_audio_sample = torch.from_numpy(audio_sample["array"])
print(type(torch_tensor_audio_sample)) # <class 'torch.Tensor'>
# get a list of two torch tensors to simulate batch size=2 when loading the audio arrays
torch_tensor_audio_sample_batch = [torch_tensor_audio_sample,torch_tensor_audio_sample]
audio_inputs = processor(audios=torch_tensor_audio_sample_batch, return_tensors="pt", sampling_rate=16000)
output_tokens = model.generate(**audio_inputs, tgt_lang="eng", generate_speech=False)
translated_text_from_audio = processor.batch_decode(output_tokens[0].tolist(), skip_special_tokens=True)
print(f"Translated text from audio (torch tensors): {translated_text_from_audio}")
# >>> Translated text from audio (torch tensors): ['The first is the fact that the sun is shining brightly on the moon.']
# expects two translated sentences just like the numpy array inputs but only one sentence is translated
```
Environment:
```shell
- `transformers` version: 4.36.0.dev0
- Platform: Linux-6.1.11-76060111-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
cc: @sanchit-gandhi
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27914/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27914/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27914",
"html_url": "https://github.com/huggingface/transformers/pull/27914",
"diff_url": "https://github.com/huggingface/transformers/pull/27914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27914.patch",
"merged_at": 1703242051000
} |
https://api.github.com/repos/huggingface/transformers/issues/27913 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27913/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27913/comments | https://api.github.com/repos/huggingface/transformers/issues/27913/events | https://github.com/huggingface/transformers/pull/27913 | 2,032,904,981 | PR_kwDOCUB6oc5hiYby | 27,913 | Fixing Value error question_answering.py | {
"login": "khyatikhandelwal",
"id": 65815098,
"node_id": "MDQ6VXNlcjY1ODE1MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/65815098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/khyatikhandelwal",
"html_url": "https://github.com/khyatikhandelwal",
"followers_url": "https://api.github.com/users/khyatikhandelwal/followers",
"following_url": "https://api.github.com/users/khyatikhandelwal/following{/other_user}",
"gists_url": "https://api.github.com/users/khyatikhandelwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/khyatikhandelwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/khyatikhandelwal/subscriptions",
"organizations_url": "https://api.github.com/users/khyatikhandelwal/orgs",
"repos_url": "https://api.github.com/users/khyatikhandelwal/repos",
"events_url": "https://api.github.com/users/khyatikhandelwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/khyatikhandelwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | On running this pipeline, the value error is always raised even if a dict/SquadExample is passed as there was no 'else' condition. Now it will only be raised when input is not dict/SquadExample.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27913/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27913",
"html_url": "https://github.com/huggingface/transformers/pull/27913",
"diff_url": "https://github.com/huggingface/transformers/pull/27913.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27913.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27912 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27912/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27912/comments | https://api.github.com/repos/huggingface/transformers/issues/27912/events | https://github.com/huggingface/transformers/pull/27912 | 2,032,860,518 | PR_kwDOCUB6oc5hiOtH | 27,912 | Skip `UnivNetModelTest::test_multi_gpu_data_parallel_forward` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
`test_multi_gpu_data_parallel_forward` is known to fail, and it uses `nn.DataParallel`, which is not recommended by PyTorch.
Let's skip it for now. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27912/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27912",
"html_url": "https://github.com/huggingface/transformers/pull/27912",
"diff_url": "https://github.com/huggingface/transformers/pull/27912.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27912.patch",
"merged_at": 1702282658000
} |
https://api.github.com/repos/huggingface/transformers/issues/27911 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27911/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27911/comments | https://api.github.com/repos/huggingface/transformers/issues/27911/events | https://github.com/huggingface/transformers/pull/27911 | 2,032,822,044 | PR_kwDOCUB6oc5hiGTN | 27,911 | Fix M4T v2 integration tests | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Of course, I've already verified that the tests pass (on a 24GB TITAN RTX)! I've also monitored GPU utilization by hand (around 10GB), so I believe it should pass on a 16GB GPU.\r\n\r\nFYI, the tests that I've changed verify that every task-specific model has a 1-to-1 correspondence with the task-agnostic model"
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
Some M4T-v2 integration tests are [causing GPU OOMs](https://github.com/huggingface/transformers/actions/runs/7054968060/job/19204890066). This happens when two models are loaded together, so I shifted some integration tests to half precision, which should solve the issue.
cc @ydshieh @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27911/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27911",
"html_url": "https://github.com/huggingface/transformers/pull/27911",
"diff_url": "https://github.com/huggingface/transformers/pull/27911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27911.patch",
"merged_at": 1702282722000
} |
https://api.github.com/repos/huggingface/transformers/issues/27910 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27910/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27910/comments | https://api.github.com/repos/huggingface/transformers/issues/27910/events | https://github.com/huggingface/transformers/pull/27910 | 2,032,739,953 | PR_kwDOCUB6oc5hh0DZ | 27,910 | Llama conversion script: adjustments for Llama Guard | {
"login": "pcuenca",
"id": 1177582,
"node_id": "MDQ6VXNlcjExNzc1ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1177582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcuenca",
"html_url": "https://github.com/pcuenca",
"followers_url": "https://api.github.com/users/pcuenca/followers",
"following_url": "https://api.github.com/users/pcuenca/following{/other_user}",
"gists_url": "https://api.github.com/users/pcuenca/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcuenca/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcuenca/subscriptions",
"organizations_url": "https://api.github.com/users/pcuenca/orgs",
"repos_url": "https://api.github.com/users/pcuenca/repos",
"events_url": "https://api.github.com/users/pcuenca/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcuenca/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | MEMBER | null | # What does this PR do?
Small adjustments to the Llama 2 conversion script so it works with the original Llama Guard weights.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27910/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27910",
"html_url": "https://github.com/huggingface/transformers/pull/27910",
"diff_url": "https://github.com/huggingface/transformers/pull/27910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27910.patch",
"merged_at": 1702047770000
} |
https://api.github.com/repos/huggingface/transformers/issues/27909 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27909/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27909/comments | https://api.github.com/repos/huggingface/transformers/issues/27909/events | https://github.com/huggingface/transformers/pull/27909 | 2,032,734,928 | PR_kwDOCUB6oc5hhy59 | 27,909 | fix llava | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27909). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
Fix the prepare inputs for generation after the cache PR. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27909/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27909",
"html_url": "https://github.com/huggingface/transformers/pull/27909",
"diff_url": "https://github.com/huggingface/transformers/pull/27909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27909.patch",
"merged_at": 1702053154000
} |
https://api.github.com/repos/huggingface/transformers/issues/27908 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27908/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27908/comments | https://api.github.com/repos/huggingface/transformers/issues/27908/events | https://github.com/huggingface/transformers/issues/27908 | 2,032,726,259 | I_kwDOCUB6oc55KPDz | 27,908 | Mistral: CUDA error when generating text with a batch of inputs | {
"login": "plroit",
"id": 1734563,
"node_id": "MDQ6VXNlcjE3MzQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1734563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plroit",
"html_url": "https://github.com/plroit",
"followers_url": "https://api.github.com/users/plroit/followers",
"following_url": "https://api.github.com/users/plroit/following{/other_user}",
"gists_url": "https://api.github.com/users/plroit/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plroit/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plroit/subscriptions",
"organizations_url": "https://api.github.com/users/plroit/orgs",
"repos_url": "https://api.github.com/users/plroit/repos",
"events_url": "https://api.github.com/users/plroit/events{/privacy}",
"received_events_url": "https://api.github.com/users/plroit/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, you added a new token, thus you need to resize the token embedding layer of the model with `model.resize_token_embeddings(len(tokenizer))`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
I'm trying to decode a batch of outputs from a batch of inputs, with code that is working correctly with any encoder-decoder model (i.e. T5). I get the following error when I'm using Mistral:
` CUDA error: device-side assert triggered`
stack trace:
```python
File ~/miniconda3/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py:84, in MistralRMSNorm.forward(self, hidden_states)
82 input_dtype = hidden_states.dtype
83 hidden_states = hidden_states.to(torch.float32)
---> 84 variance = hidden_states.pow(2).mean(-1, keepdim=True)
85 hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
86 return self.weight * hidden_states.to(input_dtype)
```
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: neither, single device
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Example script to recreate:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
# required if I want a padded batch (Mistral does not define a padding token)
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
device = "cuda"
model_inputs = tokenizer(["Hi there how are you? What's your name?", "Hi, sup?"], return_tensors="pt", padding=True).to(device)
generated_ids = model.generate(**model_inputs, max_new_tokens=10)
```
### Expected behavior
I should be able to use tokenizer.batch_decode on the outputs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27908/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27907 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27907/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27907/comments | https://api.github.com/repos/huggingface/transformers/issues/27907/events | https://github.com/huggingface/transformers/pull/27907 | 2,032,684,189 | PR_kwDOCUB6oc5hhnqy | 27,907 | Generate: SinkCache can handle iterative prompts | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Some additional results @gante \r\n\r\n![image](https://github.com/huggingface/transformers/assets/37621491/36b8ef6c-c5fa-418a-acc4-6193c4abc2c3)\r\n\r\n* `transformers_old` refers to before #27700 (thanks @kevinhu for the change!)\r\n* `transformers_56be5e80` refers to commit 56be5e80, i.e. near main\r\n* `transformers_attn_sink_1024_4_pr-27907` refers to this PR, with `SinkCache(1024, 4)`\r\n\r\nThis experiment is with calling the models with individual tokens exclusively. Using `SinkCache` makes the memory usage linear at a very low cost in perplexity.\r\n\r\n---\r\n\r\nAdditionally, I'm doing a test with calling Mistral-7B in this case with this PR using multiple tokens at once. I took indices 0 to 63k of a book from pg19, only kept 20% of all indices, and then fed the model with the tokens between subsequent indices. The running cache is also included. The NLL loss is then converted to perplexity.\r\n**Note:** We can't compare this with the perplexities from the previous graph: we should only try and observe whether the model eventually increases in perplexity.\r\n\r\n![image](https://github.com/huggingface/transformers/assets/37621491/6606dbbd-079c-4028-81df-2f9195a68a91)\r\n\r\nThe same script crashes on `main`. In this test, the perplexity stays constant, which is good. \r\nEdit: I have now continued with more tests:\r\n![image](https://github.com/huggingface/transformers/assets/37621491/a9f47b15-55ae-4564-a1f7-82701ff21936)\r\n\r\n\r\n* `transformers_multi_attn_sink_1024_4_pr-27907_left_cache`: This PR, with `SinkCache(1024, 4)`.\r\n* `transformers_multi_attn_sink_1024_4_pr-27907_right_cache`: This PR, with `SinkCache(1024, 4)` & the change @ArthurZucker proposed regarding slicing the cache from the right.\r\n* `transformers_multi_b31905d1`: `main` without a special SinkCache.\r\n\r\nThe perplexity diverges quite heavily between the SinkCache and non, which is not ideal. Perhaps this is indicative of some error/bug, or perhaps not. It's a bit hard to tell. Beyond that, the left and right cache implementations behave identically (unless I made some measuring mistake), which is a bit odd. I don't have 100% confidence in this fix anymore I'm afraid.\r\n\r\n- Tom Aarsen",
"@ArthurZucker as per your suggestion, I've reworked the PR to avoid post hoc `attention_mask` slicing -- there is a new function to get the usable cache length, and that function is used to obtain `kv_seq_len`\r\n\r\n@tomaarsen the rework seems to have resulted in a qualitative result upgrade (e.g. see the test case), so I suspect that I've inadvertently fixed a bug 👀 Would you be able to rerun your benchmarks for `SinkCache`? ",
"@gante I get `ValueError: Attention weights should be of size (1, 32, 5, 1027), but is torch.Size([1, 32, 5, 1024])` upon running my modified script. Do you get an error like this with the multi-step generation script from your PR?",
"@tomaarsen no, the script in the PR header runs endlessly without issues 🤔 LMK if you can find a reproducer",
"I have the same, that script works fine. Hmmm",
"Got a reproducer: change `max_new_tokens` in the script above to 512 👀 having a look!",
"@tomaarsen should be fixed",
"@gante Works great now!\r\n![image](https://github.com/huggingface/transformers/assets/37621491/3ec53db0-49eb-483f-8611-ac01312c6a54)\r\n\r\nRed is the baseline, I can only run it to about ~15k seq length until my PC completely freezes.",
"🙌 "
] | 1,702 | 1,702 | 1,702 | MEMBER | null | # What does this PR do?
Fixes the case where `SinkCache` is used in a chat bot, receiving new prompts after giving an answer. Fix developed with @tomaarsen
Here's an example of a script that works after this PR:
```py
from transformers import AutoTokenizer, SinkCache, AutoModelForCausalLM, TextStreamer
import torch
from datasets import load_dataset
# Loading the model & tokenizer
model_id = "HuggingFaceH4/zephyr-7b-beta"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loading the prompts to simulate user interactions
prompt_dataset = load_dataset("HuggingFaceH4/mt_bench_prompts", split="train")
prompts = [prompt for prompts in prompt_dataset["prompt"] for prompt in prompts]
# Prepare generation settings
cache = SinkCache(window_length=1024, num_sink_tokens=4)
streamer = TextStreamer(tokenizer)
input_ids = torch.tensor([], device=model.device, dtype=torch.int)
for prompt in prompts:
# Tokenize the prompt with the correct chat template
chat = [{"role": "user", "content": prompt}]
input_ids = torch.cat((input_ids, tokenizer.apply_chat_template(chat, return_tensors="pt", add_generation_prompt=True).to(model.device)), dim=1)
# input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
# Perform the generation
gen_out = model.generate(input_ids, do_sample=False, max_new_tokens=100, past_key_values=cache, use_cache=True, streamer=streamer)
# input_ids = torch.cat((input_ids, gen_out), dim=1)
input_ids = gen_out
# If desired, decode the output from this prompt
decoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True)
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27907/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27907",
"html_url": "https://github.com/huggingface/transformers/pull/27907",
"diff_url": "https://github.com/huggingface/transformers/pull/27907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27907.patch",
"merged_at": 1702065740000
} |
https://api.github.com/repos/huggingface/transformers/issues/27906 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27906/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27906/comments | https://api.github.com/repos/huggingface/transformers/issues/27906/events | https://github.com/huggingface/transformers/pull/27906 | 2,032,423,360 | PR_kwDOCUB6oc5hgvQh | 27,906 | mark `test_initialization` as flaky in 2 model tests | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"For the record: below code shows the issue\r\n\r\n```python\r\nimport torch\r\n\r\nDIM = 32\r\nSTD = 1e-10\r\nB = 2\r\nN_ITER = 100\r\n\r\n\r\nfor _ in range(100):\r\n for pow in range(4, 15):\r\n m = 0\r\n seq_len = 2 ** pow\r\n for i in range(N_ITER):\r\n t = torch.zeros(size=(seq_len, DIM))\r\n o = torch.abs(torch.nn.init.trunc_normal_(t.to(torch.float32), mean=0.0, std=STD, a=-B, b=B))\r\n n = torch.sum(o >= B)\r\n m += n\r\n\r\n print(f\"seq_len = {seq_len}: {m / seq_len / DIM / N_ITER}\")\r\n\r\n print(\"-\" * 80)\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27906). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
`torch.nn.init.trunc_normal_` is flaky and sometimes produces large values even if `mean=0.0` and `std=1e-10`.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27906/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27906",
"html_url": "https://github.com/huggingface/transformers/pull/27906",
"diff_url": "https://github.com/huggingface/transformers/pull/27906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27906.patch",
"merged_at": 1702043672000
} |
https://api.github.com/repos/huggingface/transformers/issues/27905 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27905/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27905/comments | https://api.github.com/repos/huggingface/transformers/issues/27905/events | https://github.com/huggingface/transformers/pull/27905 | 2,032,418,365 | PR_kwDOCUB6oc5hguKt | 27,905 | [Seamless] Fix links in docs | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27905). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Relative links were broken, updated to absolute URL ones. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27905/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27905",
"html_url": "https://github.com/huggingface/transformers/pull/27905",
"diff_url": "https://github.com/huggingface/transformers/pull/27905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27905.patch",
"merged_at": 1702566853000
} |
https://api.github.com/repos/huggingface/transformers/issues/27904 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27904/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27904/comments | https://api.github.com/repos/huggingface/transformers/issues/27904/events | https://github.com/huggingface/transformers/issues/27904 | 2,032,397,437 | I_kwDOCUB6oc55I-x9 | 27,904 | ERROR: Could not build wheels for safetensors, tokenizers, which is required to install pyproject.toml-based projects | {
"login": "zhaosheng-thu",
"id": 144892591,
"node_id": "U_kgDOCKLirw",
"avatar_url": "https://avatars.githubusercontent.com/u/144892591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaosheng-thu",
"html_url": "https://github.com/zhaosheng-thu",
"followers_url": "https://api.github.com/users/zhaosheng-thu/followers",
"following_url": "https://api.github.com/users/zhaosheng-thu/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaosheng-thu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaosheng-thu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaosheng-thu/subscriptions",
"organizations_url": "https://api.github.com/users/zhaosheng-thu/orgs",
"repos_url": "https://api.github.com/users/zhaosheng-thu/repos",
"events_url": "https://api.github.com/users/zhaosheng-thu/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaosheng-thu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"hey which version of python are you working with? And what hardware are you using? ",
"> hey which version of python are you working with? And what hardware are you using?\r\n\r\nthanks a lot! I have just solved this problem, and wish you a good day!",
"How did you solve this?\r\n",
"> hey which version of python are you working with? And what hardware are you using?\r\n\r\ni am getting same issue. i am using python 3.12"
] | 1,702 | 1,705 | 1,703 | NONE | null | ### System Info
`pip install transformers`
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When I am installing transformers-4.35.2, the problems happen.
Building wheels for collected packages: safetensors, tokenizers
Building wheel for safetensors (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for safetensors (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [45 lines of output]
Running `maturin pep517 build-wheel -i D:\pycharm312\venv\Scripts\python.exe --compatibility off`
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
🐍 Found CPython 3.12 at D:\pycharm312\venv\Scripts\python.exe
📡 Using build options features, bindings from pyproject.toml
Compiling proc-macro2 v1.0.70
Compiling target-lexicon v0.12.12
Compiling unicode-ident v1.0.12
Compiling autocfg v1.1.0
Compiling once_cell v1.18.0
Compiling windows_x86_64_msvc v0.48.5
Compiling syn v1.0.109
Compiling libc v0.2.150
Compiling parking_lot_core v0.9.9
Compiling serde v1.0.193
Compiling cfg-if v1.0.0
Compiling scopeguard v1.2.0
Compiling smallvec v1.11.2
Compiling serde_json v1.0.108
Compiling itoa v1.0.9
Compiling ryu v1.0.15
Compiling unindent v0.1.11
error: linker `link.exe` not found
|
= note: program not found
note: the msvc targets depend on the msvc linker but `link.exe` was not found
note: please ensure that Visual Studio 2017 or later, or Build Tools for Visual Studio were installed with the Visual C++ option.
note: VS Code is a different product, and is not sufficient.
error: could not compile `proc-macro2` (build script) due to previous error
warning: build failed, waiting for other jobs to finish...
error: could not compile `target-lexicon` (build script) due to previous error
error: could not compile `windows_x86_64_msvc` (build script) due to previous error
error: could not compile `syn` (build script) due to previous error
error: could not compile `libc` (build script) due to previous error
error: could not compile `parking_lot_core` (build script) due to previous error
error: could not compile `serde` (build script) due to previous error
error: could not compile `serde_json` (build script) due to previous error
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit code: 101": `"cargo" "rustc" "--features" "pyo3/extension-module" "--message-format" "json-render-diagnostics"
"--manifest-path" "C:\\Users\\86186\\AppData\\Local\\Temp\\pip-install-b9fju0sq\\safetensors_2de969c81fb9425fbfef7449546ec30d\\bindings\\python\\Cargo.toml" "--release" "--lib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', 'D:\\pycharm312\\venv\\Scripts\\python.exe', '--compatibility', 'off'] returned non-zero exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for safetensors
Building wheel for tokenizers (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for tokenizers (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [43 lines of output]
Running `maturin pep517 build-wheel -i D:\pycharm312\venv\Scripts\python.exe --compatibility off`
🍹 Building a mixed python/rust project
🔗 Found pyo3 bindings
🐍 Found CPython 3.12 at D:\pycharm312\venv\Scripts\python.exe
📡 Using build options features, bindings from pyproject.toml
Compiling autocfg v1.1.0
Compiling proc-macro2 v1.0.69
Compiling unicode-ident v1.0.12
Compiling windows_x86_64_msvc v0.48.5
Compiling cfg-if v1.0.0
Compiling syn v1.0.109
Compiling target-lexicon v0.12.12
Compiling scopeguard v1.2.0
Compiling libc v0.2.150
Compiling crossbeam-utils v0.8.16
Compiling cc v1.0.83
Compiling once_cell v1.18.0
Compiling memchr v2.6.4
Compiling fnv v1.0.7
Compiling windows_x86_64_msvc v0.42.2
Compiling strsim v0.10.0
error: linker `link.exe` not found
|
= note: program not found
note: the msvc targets depend on the msvc linker but `link.exe` was not found
note: please ensure that Visual Studio 2017 or later, or Build Tools for Visual Studio were installed with the Visual C++ option.
note: VS Code is a different product, and is not sufficient.
error: could not compile `windows_x86_64_msvc` (build script) due to previous error
warning: build failed, waiting for other jobs to finish...
error: could not compile `proc-macro2` (build script) due to previous error
error: could not compile `windows_x86_64_msvc` (build script) due to previous error
error: could not compile `crossbeam-utils` (build script) due to previous error
error: could not compile `target-lexicon` (build script) due to previous error
error: could not compile `libc` (build script) due to previous error
error: could not compile `syn` (build script) due to previous error
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit code: 101": `"cargo" "rustc" "--features" "pyo3/extension-module" "--message-format" "json-render-diagnostics"
"--manifest-path" "C:\\Users\\86186\\AppData\\Local\\Temp\\pip-install-b9fju0sq\\tokenizers_1ea38977042a4a4194501ec96394a1a0\\bindings\\python\\Cargo.toml" "--release" "--lib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', 'D:\\pycharm312\\venv\\Scripts\\python.exe', '--compatibility', 'off'] returned non-zero exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tokenizers
Failed to build safetensors tokenizers
ERROR: Could not build wheels for safetensors, tokenizers, which is required to install pyproject.toml-based projects
versions:
pip: 23.3.1
setuptools: 69.0.2
wheel: 0.38.4
they are updated to the latest version.
how can i solve it? Thanks!
### Expected behavior
how can i solve it? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27904/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27903 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27903/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27903/comments | https://api.github.com/repos/huggingface/transformers/issues/27903/events | https://github.com/huggingface/transformers/pull/27903 | 2,032,323,741 | PR_kwDOCUB6oc5hgZaW | 27,903 | Fix `notification_service.py` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
Fix a tiny issue in #27881 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27903/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27903",
"html_url": "https://github.com/huggingface/transformers/pull/27903",
"diff_url": "https://github.com/huggingface/transformers/pull/27903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27903.patch",
"merged_at": 1702043702000
} |
https://api.github.com/repos/huggingface/transformers/issues/27902 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27902/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27902/comments | https://api.github.com/repos/huggingface/transformers/issues/27902/events | https://github.com/huggingface/transformers/issues/27902 | 2,032,276,433 | I_kwDOCUB6oc55IhPR | 27,902 | Trainer logging_first_step not evaluate on first step as it is documented | {
"login": "RmZeta2718",
"id": 42400165,
"node_id": "MDQ6VXNlcjQyNDAwMTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RmZeta2718",
"html_url": "https://github.com/RmZeta2718",
"followers_url": "https://api.github.com/users/RmZeta2718/followers",
"following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}",
"gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions",
"organizations_url": "https://api.github.com/users/RmZeta2718/orgs",
"repos_url": "https://api.github.com/users/RmZeta2718/repos",
"events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}",
"received_events_url": "https://api.github.com/users/RmZeta2718/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 3551105283,
"node_id": "LA_kwDOCUB6oc7TqZED",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Documentation%20Issue",
"name": "Good First Documentation Issue",
"color": "AB0BA8",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Gently pinging @muellerzr and @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Marking as good first issue, doc can be updated 😉 ",
"Hey, @pacman100. Can I work on this issue?",
"Sure feel free to open a PR to update the doc 😉 "
] | 1,702 | 1,707 | 1,707 | NONE | null | ### System Info
`transformers` version: 4.35.2
### Who can help?
trainer: @muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The documentation says that [logging_first_step](https://huggingface.co./docs/transformers/main_classes/trainer#transformers.TrainingArguments.logging_first_step) will evaluate on the first global_step. However, it only logs on the first step; it does not evaluate.
Related code: [link](https://github.com/huggingface/transformers/blob/633215ba58fe5114d8c8d32e415a04600e010701/src/transformers/trainer_callback.py#L435)
### Expected behavior
Either fix the documentation (remove "evaluate") or add an evaluate feature to `logging_first_step` (I would prefer the latter).
Or if it's confusing for `logging_first_step` to evaluate, maybe we can add an `evaluate_first_step` argument. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27902/timeline | completed | null | null |
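The behavior requested in the issue above can be sketched as a small condition check. This is a hypothetical sketch: `evaluate_first_step` is the flag name proposed in the issue, not an actual `TrainingArguments` field, and the function only mimics the flow-callback logic rather than using the real `Trainer` API.

```python
def should_evaluate(global_step, eval_steps, evaluate_first_step=False):
    """Hypothetical evaluation trigger mirroring how `logging_first_step`
    gates logging in the default flow callback."""
    # The proposed flag would force an evaluation on the very first step.
    if evaluate_first_step and global_step == 1:
        return True
    # Otherwise evaluate on the usual step interval.
    return eval_steps > 0 and global_step % eval_steps == 0

# With the current behavior, nothing evaluates on step 1 unless
# eval_steps happens to be 1; the flag would change that.
assert should_evaluate(1, 500, evaluate_first_step=True)
assert not should_evaluate(1, 500)
assert should_evaluate(500, 500)
```

This mirrors how `logging_first_step` already works for logging, so adding the evaluation counterpart would be a small, symmetric change.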
https://api.github.com/repos/huggingface/transformers/issues/27901 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27901/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27901/comments | https://api.github.com/repos/huggingface/transformers/issues/27901/events | https://github.com/huggingface/transformers/pull/27901 | 2,032,120,031 | PR_kwDOCUB6oc5hfsP8 | 27,901 | [Bugfix] non_attended_tokens index | {
"login": "okotaku",
"id": 24734142,
"node_id": "MDQ6VXNlcjI0NzM0MTQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24734142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/okotaku",
"html_url": "https://github.com/okotaku",
"followers_url": "https://api.github.com/users/okotaku/followers",
"following_url": "https://api.github.com/users/okotaku/following{/other_user}",
"gists_url": "https://api.github.com/users/okotaku/gists{/gist_id}",
"starred_url": "https://api.github.com/users/okotaku/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/okotaku/subscriptions",
"organizations_url": "https://api.github.com/users/okotaku/orgs",
"repos_url": "https://api.github.com/users/okotaku/repos",
"events_url": "https://api.github.com/users/okotaku/events{/privacy}",
"received_events_url": "https://api.github.com/users/okotaku/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker Here is the minimum code.\r\n\r\n```\r\nimport torch\r\nfrom transformers.models.llava.modeling_llava import LlavaForConditionalGeneration\r\n\r\n\r\nclass TestLlavaForConditionalGeneration(LlavaForConditionalGeneration):\r\n def forward(\r\n self,\r\n input_ids= None,\r\n pixel_values= None,\r\n attention_mask = None,\r\n position_ids= None,\r\n past_key_values= None,\r\n inputs_embeds= None,\r\n vision_feature_layer= None,\r\n vision_feature_select_strategy= None,\r\n labels= None,\r\n use_cache= None,\r\n output_attentions= None,\r\n output_hidden_states= None,\r\n return_dict= None,\r\n ):\r\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\r\n output_hidden_states = (\r\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\r\n )\r\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\r\n vision_feature_layer = (\r\n vision_feature_layer if vision_feature_layer is not None else self.config.vision_feature_layer\r\n )\r\n vision_feature_select_strategy = (\r\n vision_feature_select_strategy\r\n if vision_feature_select_strategy is not None\r\n else self.config.vision_feature_select_strategy\r\n )\r\n\r\n if inputs_embeds is None:\r\n # 1. Extra the input embeddings\r\n inputs_embeds = self.get_input_embeddings()(input_ids)\r\n\r\n # 2. 
Merge text and images\r\n if pixel_values is not None and input_ids.shape[1] != 1:\r\n image_outputs = self.vision_tower(pixel_values, output_hidden_states=True)\r\n # this is not memory efficient at all (output_hidden_states=True) will save all the hidden stated.\r\n selected_image_feature = image_outputs.hidden_states[vision_feature_layer]\r\n\r\n if vision_feature_select_strategy == \"default\":\r\n selected_image_feature = selected_image_feature[:, 1:]\r\n elif vision_feature_select_strategy == \"full\":\r\n selected_image_feature = selected_image_feature\r\n else:\r\n raise ValueError(\r\n f\"Unexpected select feature strategy: {self.config.vision_feature_select_strategy}\"\r\n )\r\n\r\n image_features = self.multi_modal_projector(selected_image_feature)\r\n inputs_embeds, attention_mask, position_ids = self._merge_input_ids_with_image_features(\r\n image_features, inputs_embeds, input_ids, attention_mask, position_ids\r\n )\r\n if labels is None:\r\n labels = torch.full_like(attention_mask, self.config.ignore_index).to(torch.long)\r\n else:\r\n # In case input_ids.shape[1] == 1 & pixel_values==None & past_key_values != None, we are in the case of\r\n # generation with cache\r\n if past_key_values is not None and pixel_values is not None and input_ids.shape[1] == 1:\r\n # Retrieve the first layer to inspect the logits and mask out the hidden states\r\n # that are set to 0\r\n first_layer_past_key_value = past_key_values[0][0][:, 0, :, 0]\r\n batch_index, non_attended_tokens = torch.where(first_layer_past_key_value == 0)\r\n\r\n ############################\r\n # Add here\r\n # non_attended_tokens = non_attended_tokens - attention_mask.shape[1]\r\n ############################\r\n\r\n # Get the target length\r\n target_seqlen = first_layer_past_key_value.shape[-1] + 1\r\n\r\n extended_attention_mask = torch.ones(\r\n (attention_mask.shape[0], target_seqlen - attention_mask.shape[1]),\r\n dtype=attention_mask.dtype,\r\n device=attention_mask.device,\r\n 
)\r\n\r\n # Zero-out the places where we don't need to attend\r\n extended_attention_mask[batch_index, non_attended_tokens] = 0\r\n\r\n attention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1)\r\n position_ids = torch.sum(attention_mask, dim=1).unsqueeze(-1) - 1\r\n print(position_ids)\r\n\r\nllava = TestLlavaForConditionalGeneration.from_pretrained(\r\n \"llava-hf/llava-1.5-13b-hf\", torch_dtype=torch.float16)\r\nllava.to(\"cuda\")\r\ninput_ids = torch.tensor([[319]]).cuda().long()\r\npixel_values = torch.zeros((1, 3, 224, 224)).cuda().half()\r\npast_key_values = torch.rand((1, 1, 1, 40, 642, 128)).cuda().half()\r\npast_key_values[:, :, :, 0, 600, 0] = 0.\r\nattention_mask = torch.ones((1, 68)).cuda().half()\r\nllava(input_ids=input_ids,\r\n pixel_values=pixel_values,\r\n past_key_values=past_key_values,\r\n attention_mask=attention_mask)\r\n```\r\n\r\nOutput is:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/workspace/a.py\", line 99, in <module>\r\n llava(input_ids=input_ids,\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/workspace/a.py\", line 87, in forward\r\n attention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1)\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```",
"cc @younesbelkada ",
"@younesbelkada \r\nThank you!\r\n\r\nI feel there is an indexing error between `extended_attention_mask` and `non_attended_tokens`.\r\n\r\n```\r\nbatch_index, non_attended_tokens = torch.where(first_layer_past_key_value == 0)\r\n# Get the target length\r\ntarget_seqlen = first_layer_past_key_value.shape[-1] + 1\r\n\r\n# Now the index of `non_attended_tokens` corresponds to the index of `target_seqlen` == first_layer_past_key_value.axis(-1).\r\n\r\n# However, the index of extended_attention_mask is target_seqlen - attention_mask.shape[1].\r\n# Are the indices of `non_attended_tokens` and `extended_attention_mask` different?\r\nextended_attention_mask = torch.ones(\r\n (attention_mask.shape[0], target_seqlen - attention_mask.shape[1]),\r\n dtype=attention_mask.dtype,\r\n device=attention_mask.device,\r\n)\r\n\r\n# Zero-out the places where we don't need to attend\r\nextended_attention_mask[batch_index, non_attended_tokens] = 0\r\nattention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1)\r\n```",
"Hi @okotaku \r\nI think https://github.com/huggingface/transformers/pull/28032 fixes the same issue, can you try out on transformers main ? 🙏 "
] | 1,702 | 1,705 | 1,705 | NONE | null | # What does this PR do?
```
batch_index, non_attended_tokens = torch.where(first_layer_past_key_value == 0)
# Get the target length
target_seqlen = first_layer_past_key_value.shape[-1] + 1
extended_attention_mask = torch.ones(
(attention_mask.shape[0], target_seqlen - attention_mask.shape[1]),
dtype=attention_mask.dtype,
device=attention_mask.device,
)
# Zero-out the places where we don't need to attend
extended_attention_mask[batch_index, non_attended_tokens] = 0
attention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1)
```
The shape of `extended_attention_mask` is `(attention_mask.shape[0], target_seqlen - attention_mask.shape[1])`, but `first_layer_past_key_value` is `(attention_mask.shape[0], target_seqlen)`.
This causes an index error for `non_attended_tokens`.
I added a line to fix the index:
```
non_attended_tokens = non_attended_tokens - attention_mask.shape[1]
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27901/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27901",
"html_url": "https://github.com/huggingface/transformers/pull/27901",
"diff_url": "https://github.com/huggingface/transformers/pull/27901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27901.patch",
"merged_at": null
} |
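The index mismatch described in the PR above can be illustrated without torch. This is a minimal sketch using the shapes from the reproducer (assumed values, plain Python integers standing in for tensor dimensions); it demonstrates the PR's proposed shift, not necessarily the fix that was ultimately merged.

```python
# Shapes taken from the reproducer: past length 642, prompt length 68.
target_seqlen = 642 + 1          # first_layer_past_key_value.shape[-1] + 1
attn_len = 68                    # attention_mask.shape[1]
non_attended_tokens = [600]      # indices into the *full* past sequence

# The extended mask is only (target_seqlen - attn_len) wide.
ext_len = target_seqlen - attn_len
# Without the fix, index 600 is out of range for a width-575 mask,
# which triggers the device-side assert seen in the report.
assert non_attended_tokens[0] >= ext_len

# The PR's proposed correction shifts the indices into the
# extended mask's coordinate frame before using them.
shifted = [t - attn_len for t in non_attended_tokens]
assert shifted == [532] and 0 <= shifted[0] < ext_len
```

The key observation is that `non_attended_tokens` is computed against the full past-key-value length, while `extended_attention_mask` covers only the tail beyond the current attention mask, so the two index spaces differ by `attention_mask.shape[1]`.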
https://api.github.com/repos/huggingface/transformers/issues/27900 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27900/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27900/comments | https://api.github.com/repos/huggingface/transformers/issues/27900/events | https://github.com/huggingface/transformers/issues/27900 | 2,032,092,580 | I_kwDOCUB6oc55H0Wk | 27,900 | Weird Tokenization when Training New Tokenizer from Llama 2 Tokenizer using `train_new_from_iterator` | {
"login": "phoongkhangzhie",
"id": 25717121,
"node_id": "MDQ6VXNlcjI1NzE3MTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/25717121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phoongkhangzhie",
"html_url": "https://github.com/phoongkhangzhie",
"followers_url": "https://api.github.com/users/phoongkhangzhie/followers",
"following_url": "https://api.github.com/users/phoongkhangzhie/following{/other_user}",
"gists_url": "https://api.github.com/users/phoongkhangzhie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phoongkhangzhie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phoongkhangzhie/subscriptions",
"organizations_url": "https://api.github.com/users/phoongkhangzhie/orgs",
"repos_url": "https://api.github.com/users/phoongkhangzhie/repos",
"events_url": "https://api.github.com/users/phoongkhangzhie/events{/privacy}",
"received_events_url": "https://api.github.com/users/phoongkhangzhie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ahhh I'll have a look that looks a bit nasty indeed\r\n",
"Hi @ArthurZucker , any updates on this? Thank you!",
"Hey, I can't reproduce this yet. I don't have your local dataset, and I don't have the loading script so \r\n```python\r\ndef python_generator():\r\n # Load local files for code_search_net/python\r\n # https://huggingface.co./datasets/code_search_net\r\n dataset = load_dataset(\"code_search_net/python.py\", \"python\")\r\n dataset = dataset[\"train\"]\r\n for start_idx in range(0, len(dataset), 1000):\r\n samples = dataset[start_idx: start_idx + 1000]\r\n yield samples[\"whole_func_string\"]\r\n```\r\n\r\nfails with \r\n`FileNotFoundError: Couldn't find a dataset script at /Users/arthurzucker/Work/transformers/deci-7b/code_search_net/python.py`",
"I cannot help you without a proper reproducer\r\n",
"One thing that is certain is that Bytefallback does not seem to be activated (properly) because the bytes should be part of the vocab, the trainer should have a logic to handle that which it does not at the moment ",
"> I cannot help you without a proper reproducer\r\n\r\nI've updated the script above. Hopefully it works now!",
"Same here! There are tokens in the vocabulary that consist of some joined words, like `this▁is▁a▁test`",
"What did you train your tokenizer on? ",
"@phoongkhangzhie I had to update your script it does not work out of the box, ",
"@ArthurZucker on batches of strings. It seems it's not splitting words",
"I think a quick fix would be to disable the normalizer and use a metaspace pre-tokenizer instead. \r\n```python3\r\nfrom tokenizers import pre_tokenizers, normalizers\r\nfrom transformers import AutoTokenizer\r\nold_tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\nold_tokenizer._tokenizer.normalizer = normalizers.Sequence([])\r\nold_tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Metaspace(\"▁\", True, prepend_scheme = \"first\")\r\n```\r\n",
"It works, the vocabulary is correctly generated now. However, it does not pretokenize punctuation:\r\n\r\n```bash\r\n(Pdb) old_tokenizer.convert_ids_to_tokens(old_tokenizer(\"This is a test.\")[\"input_ids\"])\r\n['<s>', '▁This', '▁is', '▁a', '▁test', '.']\r\n(Pdb) new_tokenizer.convert_ids_to_tokens(new_tokenizer(\"This is a test.\")[\"input_ids\"])\r\n['<s>', '▁Th', 'is', '▁is', '▁a', '▁tes', 't.']\r\n```",
"That's because it is probably missing a replace normalizer. so something like this: \r\n```python \r\nfrom tokenizers import pre_tokenizers, normalizers\r\nfrom transformers import AutoTokenizer\r\nold_tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\nold_tokenizer._tokenizer.normalizer = normalizers.Sequence([normalizers.Strip(left=False, right=True), normalizers.Replace(Regex(\" {2,}\"), \"▁\")])\r\nold_tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Metaspace(\"▁\", True, prepend_scheme = \"first\")\r\n```\r\n(make sure you don't use \"_\" but \"▁\"",
"#26678 should provide the fix. \r\ncc @xenova as this seems to give us a headache hahaa ",
"I've added the noramlizer as you said. I solves the final dot issue. However, inner punctuation is not tokenized. There are tokens like `▁(house)` in the final vocabulary: I think we need to add `pre_tokenizers.Punctuation()` in the `pre_tokenizers`: \r\n\r\n```python\r\nold_tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)\r\nold_tokenizer._tokenizer.normalizer = normalizers.Sequence([normalizers.Strip(left=False, right=True), normalizers.Replace(tokenizers.Regex(\" {2,}\"), \"▁\")])\r\nold_tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Sequence([pre_tokenizers.Punctuation(), pre_tokenizers.Metaspace(prepend_scheme=\"first\")])\r\n```",
"Thank you @ArthurZucker and @anderleich for your inputs.\r\n\r\nI think there are still issues with the tokenizer even after the various fixes.\r\n\r\n> I think a quick fix would be to disable the normalizer and use a metaspace pre-tokenizer instead.\r\n> \r\n> ```python\r\n> from tokenizers import pre_tokenizers, normalizers\r\n> from transformers import AutoTokenizer\r\n> old_tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\n> old_tokenizer._tokenizer.normalizer = normalizers.Sequence([])\r\n> old_tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Metaspace(\"▁\", True, prepend_scheme = \"first\")\r\n> ```\r\n\r\nWith the above fix, the outputs are:\r\n```\r\nExample 1:\r\ndef add_numbers(a, b):\r\n \"\"\"Add the two numbers `a` and `b`.\"\"\"\r\n return a + b\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁def', '▁add', '_', 'numbers', '(', 'a', ',', '▁b', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁\"\"\"', 'Add', '▁the', '▁two', '▁numbers', '▁`', 'a', '`', '▁and', '▁`', 'b', '`', '.\"', '\"\"', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁a', '▁+', '▁b', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁def', '▁add_', 'number', 's(', 'a,', '▁b):\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁\"\"\"Add', '▁the', '▁two', '▁numbers', '▁`a`', '▁and', '▁`b', '`.\"\"\"\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁return', '▁a', '▁+', '▁b\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁']\r\n\r\nExample 2:\r\nclass LinearLayer():\r\n def __init__(self, input_size, output_size):\r\n self.weight = torch.randn(input_size, output_size)\r\n self.bias = torch.zeros(output_size)\r\n\r\n def __call__(self, x):\r\n return x @ self.weights + self.bias\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁class', '▁Linear', 'Layer', '():', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'init', '__(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'weight', '▁=', '▁tor', 'ch', 
'.', 'rand', 'n', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'b', 'ias', '▁=', '▁tor', 'ch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', '<0x0A>', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'call', '__(', 'self', ',', '▁x', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁x', '▁@', '▁self', '.', 'we', 'ights', '▁+', '▁self', '.', 'b', 'ias', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁class', '▁Linear', 'Layer', '():\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁def', '▁__init__(self,', '▁input', '_size,', '▁output', '_size):\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁self.weight', '▁=', '▁torch', '.randn', '(input', '_size,', '▁output', '_size)\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁self.b', 'ias', '▁=', '▁torch.', 'zeros(', 'output', '_size)\\n\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁def', '▁__call', '__(self,', '▁x):\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁return', '▁x', '▁@', '▁self.', 'weights', '▁+', '▁self.b', 'ias', '\\n', '▁', '▁', '▁', '▁', '▁', '▁', '▁', '▁']\r\n```\r\nThis fix prepends all whitespace characters with `'▁'`, but all of them are separate tokens in the final output where instead some of them should be merged instead to represent indentations or double indentation in code. Also, the newline character `\\n` is not treated as a whitespace character.\r\n\r\n> That's because it is probably missing a replace normalizer. 
so something like this:\r\n> \r\n> ```python\r\n> from tokenizers import pre_tokenizers, normalizers\r\n> from transformers import AutoTokenizer\r\n> old_tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\n> old_tokenizer._tokenizer.normalizer = normalizers.Sequence([normalizers.Strip(left=False, right=True), normalizers.Replace(Regex(\" {2,}\"), \"▁\")])\r\n> old_tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Metaspace(\"▁\", True, prepend_scheme = \"first\")\r\n> ```\r\n> \r\n> (make sure you don't use \"_\" but \"▁\"\r\n\r\nWith the above fix, the outputs are:\r\n```\r\nExample 1:\r\ndef add_numbers(a, b):\r\n \"\"\"Add the two numbers `a` and `b`.\"\"\"\r\n return a + b\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁def', '▁add', '_', 'numbers', '(', 'a', ',', '▁b', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁\"\"\"', 'Add', '▁the', '▁two', '▁numbers', '▁`', 'a', '`', '▁and', '▁`', 'b', '`', '.\"', '\"\"', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁a', '▁+', '▁b', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁def', '▁add_', 'number', 's(', 'a,', '▁b):\\n', '▁\"\"\"Add', '▁the', '▁two', '▁numbers', '▁`a`', '▁and', '▁`b', '`.\"\"\"\\n', '▁return', '▁a', '▁+', '▁b']\r\n\r\nExample 2:\r\nclass LinearLayer():\r\n def __init__(self, input_size, output_size):\r\n self.weight = torch.randn(input_size, output_size)\r\n self.bias = torch.zeros(output_size)\r\n\r\n def __call__(self, x):\r\n return x @ self.weights + self.bias\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁class', '▁Linear', 'Layer', '():', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'init', '__(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'weight', '▁=', '▁tor', 'ch', '.', 'rand', 'n', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'b', 'ias', '▁=', '▁tor', 'ch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', '<0x0A>', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'call', '__(', 'self', 
',', '▁x', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁x', '▁@', '▁self', '.', 'we', 'ights', '▁+', '▁self', '.', 'b', 'ias', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁class', '▁Linear', 'Layer', '():\\n', '▁def', '▁__init__(self,', '▁input', '_size,', '▁output', '_size):\\n', '▁self.weight', '▁=', '▁torch', '.randn', '(input', '_size,', '▁output', '_size)\\n', '▁self.b', 'ias', '▁=', '▁torch.', 'zeros(', 'output', '_size)\\n\\n', '▁def', '▁__call', '__(self,', '▁x):\\n', '▁return', '▁x', '▁@', '▁self.', 'weights', '▁+', '▁self.b', 'ias']\r\n```\r\nThis fix collapses all the whitespace characters into a single `'▁'` character. However, this removes the importance of whitespace in code such as the different indentation levels. Again, the newline character `\\n` is not treated as a whitespace character.\r\n\r\n> I've added the noramlizer as you said. I solves the final dot issue. However, inner punctuation is not tokenized. There are tokens like `▁(house)` in the final vocabulary: I think we need to add `pre_tokenizers.Punctuation()` in the `pre_tokenizers`:\r\n> \r\n> ```python\r\n> old_tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)\r\n> old_tokenizer._tokenizer.normalizer = normalizers.Sequence([normalizers.Strip(left=False, right=True), normalizers.Replace(tokenizers.Regex(\" {2,}\"), \"▁\")])\r\n> old_tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Sequence([pre_tokenizers.Punctuation(), pre_tokenizers.Metaspace(prepend_scheme=\"first\")])\r\n> ```\r\n\r\nAnd with this fix, the outputs are:\r\n```\r\nExample 1:\r\ndef add_numbers(a, b):\r\n \"\"\"Add the two numbers `a` and `b`.\"\"\"\r\n return a + b\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁def', '▁add', '_', 'numbers', '(', 'a', ',', '▁b', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁\"\"\"', 'Add', '▁the', '▁two', '▁numbers', '▁`', 'a', '`', '▁and', '▁`', 'b', '`', '.\"', '\"\"', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁a', '▁+', '▁b', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁def', 
'▁add', '_', 'numbers', '(', 'a', ',', '▁b', ')', ':', '\\n', '▁', '\"', '\"', '\"', 'Add', '▁the', '▁two', '▁numbers', '▁', '`', 'a', '`', '▁and', '▁', '`', 'b', '`', '.', '\"', '\"', '\"', '\\n', '▁return', '▁a', '▁', '+', '▁b']\r\n\r\nExample 2:\r\nclass LinearLayer():\r\n def __init__(self, input_size, output_size):\r\n self.weight = torch.randn(input_size, output_size)\r\n self.bias = torch.zeros(output_size)\r\n\r\n def __call__(self, x):\r\n return x @ self.weights + self.bias\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁class', '▁Linear', 'Layer', '():', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'init', '__(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'weight', '▁=', '▁tor', 'ch', '.', 'rand', 'n', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'b', 'ias', '▁=', '▁tor', 'ch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', '<0x0A>', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'call', '__(', 'self', ',', '▁x', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁x', '▁@', '▁self', '.', 'we', 'ights', '▁+', '▁self', '.', 'b', 'ias', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁class', '▁Linear', 'Layer', '(', ')', ':', '\\n', '▁def', '▁', '_', '_', 'init', '_', '_', '(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', ')', ':', '\\n', '▁self', '.', 'weight', '▁', '=', '▁torch', '.', 'randn', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '\\n', '▁self', '.', 'bias', '▁', '=', '▁torch', '.', 'zeros', '(', 'output', '_', 'size', ')', '\\n\\n', '▁def', '▁', '_', '_', 'call', '_', '_', '(', 'self', ',', '▁x', ')', ':', '\\n', '▁return', '▁x', '▁', '@', '▁self', '.', 'weights', '▁', '+', '▁self', '.', 'bias']\r\n```\r\nWhile this tokenization might be better than the above one, I think it is too aggressive with the splitting of the punctuation. 
Like the above fixes, the newline character `\\n` is not treated as a whitespace character.\r\n\r\nIdeally, the outputs should be like this (similar to the GPT2 tokenization):\r\n```\r\nExample 1:\r\ndef add_numbers(a, b):\r\n \"\"\"Add the two numbers `a` and `b`.\"\"\"\r\n return a + b\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁def', '▁add', '_', 'numbers', '(', 'a', ',', '▁b', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁\"\"\"', 'Add', '▁the', '▁two', '▁numbers', '▁`', 'a', '`', '▁and', '▁`', 'b', '`', '.\"', '\"\"', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁a', '▁+', '▁b', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁def', '▁add', '_', 'numbers', '(', 'a', ',', '▁b', ')', ':', '▁\\n', '▁\"\"\"', 'Add', '▁the', '▁two', '▁numbers', '▁', '`', 'a', '`', '▁and', '▁', '`', 'b', '`', '.\"\"\"', '▁\\n', '▁return', '▁a', '▁+', '▁b']\r\n\r\nExample 2:\r\nclass LinearLayer():\r\n def __init__(self, input_size, output_size):\r\n self.weight = torch.randn(input_size, output_size)\r\n self.bias = torch.zeros(output_size)\r\n\r\n def __call__(self, x):\r\n return x @ self.weights + self.bias\r\n\r\nold: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁class', '▁Linear', 'Layer', '():', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'init', '__(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'weight', '▁=', '▁tor', 'ch', '.', 'rand', 'n', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'b', 'ias', '▁=', '▁tor', 'ch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', '<0x0A>', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'call', '__(', 'self', ',', '▁x', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁x', '▁@', '▁self', '.', 'we', 'ights', '▁+', '▁self', '.', 'b', 'ias', '<0x0A>', '▁▁▁▁▁▁▁▁']\r\nnew: ['▁\\n', '▁class', '▁Linear', 'Layer', '():', '▁\\n', '▁def', '▁__', 'init', '__(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', '):', '▁\\n', '▁self', '.', 'weight', '▁=', '▁torch', '.', 
'randn', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '▁\\n', '▁self', '.', 'bias', '▁=', '▁torch', '.', 'zeros', '(', 'output', '_', 'size', ')', '▁\\n\\n', '▁def', '▁__', 'call', '__(', 'self', ',', '▁x', '):', '▁\\n', '▁return', '▁x', '▁@', '▁self', '.', 'weights', '▁+', '▁self', '.', 'bias']\r\n```\r\nWill there be any other fixes for this?",
"If you want to keep the white space, `normalizers.Replace(Regex(\" {2,}\"), \"▁\")` should not be used indeed. \r\nLeftStripping can be kept, but you would need to also add the bytefallback tokens to the vocab ('<0x0A>' is a new line via bytefallback) if you want it to have the same behaviour! \r\n\r\nRegarding the merges, it might be the frequency of the `▁▁▁▁▁▁▁` token that prevents the model from learning it but should not be related to the pre-processing. \r\n\r\nSo the last issue is probably the bytefallback. \r\n\r\n",
"@ArthurZucker are there any plans to add all those fixes to the `train_new_from_iterator` function for Llama2 models?",
"There are plans to add these fixes to the LlamaTokenizer as a whole (specifically the pretokenizer vs normalizer) here #26678. The bytefallback thing needs to be adde to `tokenizers` and there is a plan but I don't have bandwidth just yet! 🤗 "
] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import os
import argparse
from datasets import load_dataset
from transformers import (
    AutoTokenizer
)


def python_generator():
    # Load local files for code_search_net/python
    # https://huggingface.co./datasets/code_search_net
    dataset = load_dataset("code_search_net", "python")
    dataset = dataset["train"]
    for start_idx in range(0, len(dataset), 1000):
        samples = dataset[start_idx: start_idx + 1000]
        yield samples["whole_func_string"]


def main(args):
    model_paths = [
        "gpt2",
        "meta-llama/Llama-2-70b-hf",
    ]
    access_token = ""
    for model_path in model_paths:
        print(f"\n\n{model_path}")
        save_dir = (
            f"{model_path}-python-52K_vocab"
        )
        os.makedirs(os.path.join(os.getcwd(), "tokenizers"), exist_ok=True)
        save_path = os.path.join(os.getcwd(), "tokenizers", save_dir)
        old_tokenizer = AutoTokenizer.from_pretrained(
            model_path,
            token=access_token
        )
        assert old_tokenizer.is_fast
        if os.path.exists(save_path):
            new_tokenizer = AutoTokenizer.from_pretrained(save_path)
        else:
            new_tokenizer = old_tokenizer.train_new_from_iterator(
                python_generator(),
                vocab_size=52000
            )
            new_tokenizer.save_pretrained(save_path)
        example_1 = '''
        def add_numbers(a, b):
            """Add the two numbers `a` and `b`."""
            return a + b
        '''
        print(f"\n{example_1}")
        old_tokens = old_tokenizer.tokenize(example_1)
        print(f"old: {old_tokens}")
        new_tokens = new_tokenizer.tokenize(example_1)
        print(f"new: {new_tokens}")
        example_2 = """
        class LinearLayer():
            def __init__(self, input_size, output_size):
                self.weight = torch.randn(input_size, output_size)
                self.bias = torch.zeros(output_size)

            def __call__(self, x):
                return x @ self.weights + self.bias
        """
        print(f"\n{example_2}")
        old_tokens = old_tokenizer.tokenize(example_2)
        print(f"old: {old_tokens}")
        new_tokens = new_tokenizer.tokenize(example_2)
        print(f"new: {new_tokens}")
```
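To study the GPT-2-style byte-level behavior discussed above without any checkpoints or network access, here is a minimal from-scratch sketch using the standalone `tokenizers` library (the toy corpus and `vocab_size=300` are made-up illustration values, not taken from the report):

```python
from tokenizers import Tokenizer, decoders, models, pre_tokenizers, trainers

# Tiny byte-level BPE trained from scratch, mirroring what a GPT-2-style
# tokenizer does: multi-space indentation can be learned as merged tokens.
corpus = [
    "def add_numbers(a, b):\n    return a + b\n",
    "class LinearLayer():\n    def __call__(self, x):\n        return x\n",
] * 50

tokenizer = Tokenizer(models.BPE(unk_token=None))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=300,
    show_progress=False,
    # Seeding the vocab with all 256 byte symbols guarantees lossless coverage.
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),
)
tokenizer.train_from_iterator(corpus, trainer=trainer)

encoding = tokenizer.encode("def add_numbers(a, b):\n    return a + b")
print(encoding.tokens)
```

Because every byte is in the initial alphabet, decoding the ids reproduces the input exactly, which is the lossless property the GPT-2 retraining above relies on.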
### Expected behavior
The function `train_new_from_iterator` works as expected when training a new tokenizer from a gpt2 tokenizer as demonstrated in the [example](https://huggingface.co./learn/nlp-course/chapter6/2), but does not work for training a new tokenizer from a Llama-2 tokenizer.
With the code snippet above, training a tokenizer from gpt2 gives the output:
```
Example 1:
def add_numbers(a, b):
    """Add the two numbers `a` and `b`."""
    return a + b
old: ['Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġdef', 'Ġadd', '_', 'n', 'umbers', '(', 'a', ',', 'Ġb', '):', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ"""', 'Add', 'Ġthe', 'Ġtwo', 'Ġnumbers', 'Ġ`', 'a', '`', 'Ġand', 'Ġ`', 'b', '`', '."', '""', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġreturn', 'Ġa', 'Ġ+', 'Ġb', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ']
new: ['ĊĠĠĠĠĠĠĠ', 'Ġdef', 'Ġadd', '_', 'numbers', '(', 'a', ',', 'Ġb', '):', 'ĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġ"""', 'Add', 'Ġthe', 'Ġtwo', 'Ġnumbers', 'Ġ`', 'a', '`', 'Ġand', 'Ġ`', 'b', '`."""', 'ĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġreturn', 'Ġa', 'Ġ+', 'Ġb', 'ĊĠĠĠĠĠĠĠĠ']
Example 2:
class LinearLayer():
    def __init__(self, input_size, output_size):
        self.weight = torch.randn(input_size, output_size)
        self.bias = torch.zeros(output_size)

    def __call__(self, x):
        return x @ self.weights + self.bias
old: ['Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġclass', 'ĠLinear', 'Layer', '():', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġdef', 'Ġ__', 'init', '__', '(', 'self', ',', 'Ġinput', '_', 'size', ',', 'Ġoutput', '_', 'size', '):', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġself', '.', 'weight', 'Ġ=', 'Ġtorch', '.', 'rand', 'n', '(', 'input', '_', 'size', ',', 'Ġoutput', '_', 'size', ')', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġself', '.', 'b', 'ias', 'Ġ=', 'Ġtorch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', 'ĊĊ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġdef', 'Ġ__', 'call', '__', '(', 'self', ',', 'Ġx', '):', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġreturn', 'Ġx', 'Ġ@', 'Ġself', '.', 'weights', 'Ġ+', 'Ġself', '.', 'b', 'ias', 'Ċ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ', 'Ġ']
new: ['ĊĠĠĠĠĠĠĠ', 'Ġclass', 'ĠLinear', 'Layer', '():', 'ĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġdef', 'Ġ__', 'init', '__(', 'self', ',', 'Ġinput', '_', 'size', ',', 'Ġoutput', '_', 'size', '):', 'ĊĠĠĠĠĠĠĠĠĠĠĠĠĠĠĠ', 'Ġself', '.', 'weight', 'Ġ=', 'Ġtorch', '.', 'randn', '(', 'input', '_', 'size', ',', 'Ġoutput', '_', 'size', ')', 'ĊĠĠĠĠĠĠĠĠĠĠĠĠĠĠĠ', 'Ġself', '.', 'bias', 'Ġ=', 'Ġtorch', '.', 'zeros', '(', 'output', '_', 'size', ')', 'ĊĊĠĠĠĠĠĠĠĠĠĠĠ', 'Ġdef', 'Ġ__', 'call', '__(', 'self', ',', 'Ġx', '):', 'ĊĠĠĠĠĠĠĠĠĠĠĠĠĠĠĠ', 'Ġreturn', 'Ġx', 'Ġ@', 'Ġself', '.', 'weights', 'Ġ+', 'Ġself', '.', 'bias', 'ĊĠĠĠĠĠĠĠĠ']
```
However, training Llama-2's tokenizer gives:
```
Example 1:
def add_numbers(a, b):
    """Add the two numbers `a` and `b`."""
    return a + b
old: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁def', '▁add', '_', 'numbers', '(', 'a', ',', '▁b', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁"""', 'Add', '▁the', '▁two', '▁numbers', '▁`', 'a', '`', '▁and', '▁`', 'b', '`', '."', '""', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁a', '▁+', '▁b', '<0x0A>', '▁▁▁▁▁▁▁▁']
new: ['▁', '\n▁▁▁▁▁▁▁▁def▁', 'add_', 'number', 's(', 'a,▁b', '):\n▁▁▁▁▁▁▁▁▁▁▁▁"""', 'Add▁the▁', 'two▁', 'number', 's▁`', 'a', '`▁and▁`', 'b', '`', '."""', '\n▁▁▁▁▁▁▁▁▁▁▁▁return▁', 'a▁+▁', 'b', '\n▁▁▁▁▁▁▁▁']
Example 2:
class LinearLayer():
    def __init__(self, input_size, output_size):
        self.weight = torch.randn(input_size, output_size)
        self.bias = torch.zeros(output_size)

    def __call__(self, x):
        return x @ self.weights + self.bias
old: ['▁', '<0x0A>', '▁▁▁▁▁▁▁', '▁class', '▁Linear', 'Layer', '():', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'init', '__(', 'self', ',', '▁input', '_', 'size', ',', '▁output', '_', 'size', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'weight', '▁=', '▁tor', 'ch', '.', 'rand', 'n', '(', 'input', '_', 'size', ',', '▁output', '_', 'size', ')', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁self', '.', 'b', 'ias', '▁=', '▁tor', 'ch', '.', 'zer', 'os', '(', 'output', '_', 'size', ')', '<0x0A>', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁', '▁def', '▁__', 'call', '__(', 'self', ',', '▁x', '):', '<0x0A>', '▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁', '▁return', '▁x', '▁@', '▁self', '.', 'we', 'ights', '▁+', '▁self', '.', 'b', 'ias', '<0x0A>', '▁▁▁▁▁▁▁▁']
new: ['▁', '\n▁▁▁▁▁▁▁▁', 'class▁', 'Linear', 'Layer(', '):\n▁▁▁▁▁▁▁▁▁▁▁▁', 'def▁__init__(self,▁', 'input_', 'size,▁', 'output_', 'size', '):\n▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁self.', 'weight▁=▁', 'torch', '.r', 'and', 'n(', 'input_', 'size,▁', 'output_', 'size', ')\n▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁self.', 'bi', 'as▁=▁', 'torch.', 'zeros(', 'output_', 'size', ')\n\n▁▁▁▁▁▁▁▁▁▁▁▁', 'def▁__', 'call__', '(self,▁x', '):\n▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁return▁', 'x▁', '@▁', 'self.', 'weight', 's▁+▁', 'self.', 'bias', '\n▁▁▁▁▁▁▁▁']
```
The underscores `_` should be prepended at the front of new words, but they seem to be inserted at the back of words or in between words. In fact, the retrained tokenizer seems to be worse than the original tokenizer on the new data. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27900/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27900/timeline | completed | null | null |
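As a footnote to the `normalizers.Replace(Regex(" {2,}"), "▁")` point raised in the comments above, the normalizer's effect on indentation can be observed directly with the standalone `tokenizers` library (minimal sketch):

```python
from tokenizers import Regex, normalizers

# Collapse any run of two or more spaces into a single metaspace character,
# which is why multi-space Python indentation cannot survive this normalizer.
collapse = normalizers.Replace(Regex(" {2,}"), "\u2581")
print(collapse.normalize_str("def f():\n    return 1"))
```

Single spaces are left untouched; only runs of two or more are collapsed, so all indentation depth information is lost.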
https://api.github.com/repos/huggingface/transformers/issues/27899 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27899/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27899/comments | https://api.github.com/repos/huggingface/transformers/issues/27899/events | https://github.com/huggingface/transformers/pull/27899 | 2,031,983,998 | PR_kwDOCUB6oc5hfOn0 | 27,899 | fix typo in image_processing_blip.py Wwhether -> Whether | {
"login": "zhc7",
"id": 53651354,
"node_id": "MDQ6VXNlcjUzNjUxMzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/53651354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhc7",
"html_url": "https://github.com/zhc7",
"followers_url": "https://api.github.com/users/zhc7/followers",
"following_url": "https://api.github.com/users/zhc7/following{/other_user}",
"gists_url": "https://api.github.com/users/zhc7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhc7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhc7/subscriptions",
"organizations_url": "https://api.github.com/users/zhc7/orgs",
"repos_url": "https://api.github.com/users/zhc7/repos",
"events_url": "https://api.github.com/users/zhc7/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhc7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,702 | 1,702 | 1,702 | CONTRIBUTOR | null | # fix typo in image_processing_blip.py Wwhether -> Whether
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
vision models: @amyeroberts
Documentation: @stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27899/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27899",
"html_url": "https://github.com/huggingface/transformers/pull/27899",
"diff_url": "https://github.com/huggingface/transformers/pull/27899.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27899.patch",
"merged_at": 1702060368000
} |
https://api.github.com/repos/huggingface/transformers/issues/27898 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27898/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27898/comments | https://api.github.com/repos/huggingface/transformers/issues/27898/events | https://github.com/huggingface/transformers/issues/27898 | 2,031,880,725 | I_kwDOCUB6oc55HAoV | 27,898 | optimizer.param_groups[0]['params'][0].dtype IndexError: list index out of range | {
"login": "wandouguo",
"id": 9603390,
"node_id": "MDQ6VXNlcjk2MDMzOTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9603390?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wandouguo",
"html_url": "https://github.com/wandouguo",
"followers_url": "https://api.github.com/users/wandouguo/followers",
"following_url": "https://api.github.com/users/wandouguo/following{/other_user}",
"gists_url": "https://api.github.com/users/wandouguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wandouguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wandouguo/subscriptions",
"organizations_url": "https://api.github.com/users/wandouguo/orgs",
"repos_url": "https://api.github.com/users/wandouguo/repos",
"events_url": "https://api.github.com/users/wandouguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/wandouguo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, if you want our help we are going to need a full reproducer (as the contribution guidelines point out)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,702 | 1,705 | 1,705 | NONE | null | ### System Info
error occurs with:
deepspeed 0.12.3
transformers 4.35.2
works normally with:
transformers 4.33.2
deepspeed 0.12.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1.
deepspeed 0.12.3
transformers 4.35.2
2. {
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 100,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1e-10
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": true,
        "allgather_bucket_size": 1e8,
        "overlap_comm": true,
        "reduce_scatter": true,
        "reduce_bucket_size": 1e8,
        "contiguous_gradients": true
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
3. trainer.train()
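The traceback in the title points at an access of the form `optimizer.param_groups[0]['params'][0].dtype`; the failure mode and a defensive variant can be sketched in isolation (illustrative only; this is not the actual `transformers` source, and the toy model stands in for whatever DeepSpeed wraps):

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Indexing param_groups[0]["params"][0] raises IndexError whenever the first
# group comes back empty (as can happen under ZeRO partitioning); scanning for
# the first non-empty group avoids that.
non_empty = [g for g in optimizer.param_groups if len(g["params"]) > 0]
first_dtype = non_empty[0]["params"][0].dtype if non_empty else None
print(first_dtype)  # torch.float32 for this toy optimizer
```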
### Expected behavior
normal training | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27898/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27897 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27897/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27897/comments | https://api.github.com/repos/huggingface/transformers/issues/27897/events | https://github.com/huggingface/transformers/pull/27897 | 2,031,752,315 | PR_kwDOCUB6oc5hedag | 27,897 | fix bug in mask2former: cost matrix is infeasible | {
"login": "xuchenhao001",
"id": 20737194,
"node_id": "MDQ6VXNlcjIwNzM3MTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/20737194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xuchenhao001",
"html_url": "https://github.com/xuchenhao001",
"followers_url": "https://api.github.com/users/xuchenhao001/followers",
"following_url": "https://api.github.com/users/xuchenhao001/following{/other_user}",
"gists_url": "https://api.github.com/users/xuchenhao001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xuchenhao001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xuchenhao001/subscriptions",
"organizations_url": "https://api.github.com/users/xuchenhao001/orgs",
"repos_url": "https://api.github.com/users/xuchenhao001/repos",
"events_url": "https://api.github.com/users/xuchenhao001/events{/privacy}",
"received_events_url": "https://api.github.com/users/xuchenhao001/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
As per the title, this fixes a bug that causes a ``ValueError: cost matrix is infeasible`` error.
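For context, `cost matrix is infeasible` is raised by SciPy's Hungarian solver when no finite-cost assignment exists, for instance when an entire row of the matcher's cost matrix turns infinite. A standalone reproduction, independent of Mask2Former:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Any complete assignment must use one entry from the all-inf row, so no
# finite-cost matching exists and SciPy rejects the matrix.
cost = np.array([[np.inf, np.inf], [1.0, 2.0]])
try:
    linear_sum_assignment(cost)
except ValueError as err:
    print(err)
```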
Fixes #21644
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27897/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27897/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27897",
"html_url": "https://github.com/huggingface/transformers/pull/27897",
"diff_url": "https://github.com/huggingface/transformers/pull/27897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27897.patch",
"merged_at": 1702311556000
} |
https://api.github.com/repos/huggingface/transformers/issues/27896 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27896/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27896/comments | https://api.github.com/repos/huggingface/transformers/issues/27896/events | https://github.com/huggingface/transformers/pull/27896 | 2,031,737,201 | PR_kwDOCUB6oc5heaTe | 27,896 | [docs] Fused AWQ modules | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,704 | 1,702 | MEMBER | null | Streamlines the quantization docs a bit with the latest AWQ-fused module update :)
- I think the reintroduced Benchmarks in the AWQ section is a duplicate because it's already currently at the end of the doc. I moved the Benchmarks to the end because I think comparing the results of all the quantization schemes makes more sense after having read through them. Looking at the benchmarks now in the AWQ section, you don't have as much context about the other quantization schemes (AutoGPTQ, bitsandbytes), making it a bit more difficult to compare IMO.
- condense the fused/unfused code with the code-switching feature in the docs
- try to improve the table titles of the fused/unfused benchmarks | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27896/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27896",
"html_url": "https://github.com/huggingface/transformers/pull/27896",
"diff_url": "https://github.com/huggingface/transformers/pull/27896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27896.patch",
"merged_at": 1702320094000
} |
https://api.github.com/repos/huggingface/transformers/issues/27895 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27895/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27895/comments | https://api.github.com/repos/huggingface/transformers/issues/27895/events | https://github.com/huggingface/transformers/pull/27895 | 2,031,459,951 | PR_kwDOCUB6oc5hdeXJ | 27,895 | [LLaVa] Some improvements | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"They are failing on my setup:\r\n```\r\nFAILED tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_batch - AssertionError: Lists differ: ['USER: \\nWhat are the things I should be [267 chars]ock'] != ['\\nUSER: What are the things I should be c[306 chars] R.']\r\nFAILED tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama - AssertionError: 'USER[116 chars]hich appears to be a dock or pier extending ov[562 chars]ier.' != 'USER[116 chars]hich is a pier or dock extending over a body o[572 chars]ies.'\r\nFAILED tests/models/llava/test_modeling_llava.py::LlavaForConditionalGenerationIntegrationTest::test_small_model_integration_test_llama_batched - AssertionError: Lists differ: ['USE[114 chars]ANT: When visiting this serene location, one s[177 chars]ed.'] != ['USE[114 chars]ANT: the water is calm and clear\\n\\nThe image [154 chars]ed.']\r\n```\r\nCould possibly also be the case on main, will check",
"Yeah it still fails for me both on main and my branch even after #27909. So @ArthurZucker could you perhaps run the slow tests from my branch before merging? But I'm pretty sure it doesn't affect integration tests. "
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Some minor improvements when going over the LLaVa code. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27895/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27895",
"html_url": "https://github.com/huggingface/transformers/pull/27895",
"diff_url": "https://github.com/huggingface/transformers/pull/27895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27895.patch",
"merged_at": 1702286546000
} |
https://api.github.com/repos/huggingface/transformers/issues/27894 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27894/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27894/comments | https://api.github.com/repos/huggingface/transformers/issues/27894/events | https://github.com/huggingface/transformers/issues/27894 | 2,031,129,801 | I_kwDOCUB6oc55EJTJ | 27,894 | Group beam search decoded result depends on pad_token_id even though it's not printable | {
"login": "Wovchena",
"id": 10669582,
"node_id": "MDQ6VXNlcjEwNjY5NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/10669582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wovchena",
"html_url": "https://github.com/Wovchena",
"followers_url": "https://api.github.com/users/Wovchena/followers",
"following_url": "https://api.github.com/users/Wovchena/following{/other_user}",
"gists_url": "https://api.github.com/users/Wovchena/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wovchena/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wovchena/subscriptions",
"organizations_url": "https://api.github.com/users/Wovchena/orgs",
"repos_url": "https://api.github.com/users/Wovchena/repos",
"events_url": "https://api.github.com/users/Wovchena/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wovchena/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, both padding tokens are not `non-printable` you just decided to skip them using `skip_special_tokens = True`. \r\nYou should try to set the padding to the left as it is recommended for generation and the padding token on the right will always impact the model. more details [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535) ",
"Hi. `skip_special_tokens=True` is intentional. How do I modify the reproducer to set the padding to the left?",
"`tokenizer.padding_side = \"left\"`",
"The results are still different:\r\n```py\r\nimport transformers\r\n\r\ntokenizer = transformers.LlamaTokenizer.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')\r\ntokenizer.padding_side = \"left\"\r\ninput_ids = tokenizer('Hi', return_tensors='pt')['input_ids']\r\nmodel = transformers.LlamaForCausalLM.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')\r\nprint(model.generation_config.eos_token_id)\r\npad_token_id = 0\r\nassert pad_token_id != model.generation_config.eos_token_id\r\nzero = [tokenizer.decode(beam, skip_special_tokens=True) for beam in model.generate(input_ids, max_new_tokens=25, num_beam_groups=9, num_beams=99, num_return_sequences=99, diversity_penalty=1.0, no_repeat_ngram_size=3, do_sample=False, pad_token_id=pad_token_id)]\r\nprint(zero)\r\npad_token_id = 1\r\nassert pad_token_id != model.generation_config.eos_token_id\r\none = [tokenizer.decode(beam, skip_special_tokens=True) for beam in model.generate(input_ids, max_new_tokens=25, num_beam_groups=9, num_beams=99, num_return_sequences=99, diversity_penalty=1.0, no_repeat_ngram_size=3, do_sample=False, pad_token_id=pad_token_id)]\r\nprint(one)\r\nassert zero == one\r\n```\r\nMy explanation why your suggestion didn't help is that the problem I'm referring to lies inside of `model.generate()`, which doesn't involve `tokenizer`.",
"Hi @Wovchena 👋 \r\n\r\nThe issue with your script was that you were not passing the attention mask to `generate` :) \r\n\r\nThroughout transformers, we do our best effort to infer the attention mask when it is not passed: if the token is equal to the pad token the attention mask is 0 (and 1 otherwise). In your particular example, you were setting the pad token to the bos token (both `1`), the token that the tokenizer uses to signal the beginning of the sequence. As such, the inferred attention mask was different in the two inputs, leading to a different output.\r\n\r\nWe always recommend passing the attention mask 🤗 \r\n\r\nWorking example (passing the attention mask):\r\n```py\r\nimport transformers\r\n\r\ntokenizer = transformers.LlamaTokenizer.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')\r\nmodel = transformers.LlamaForCausalLM.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')\r\n\r\ntokenizer.padding_side = \"left\"\r\ninput_ids = tokenizer('Hi', return_tensors='pt')\r\nprint(model.generation_config.eos_token_id)\r\n\r\npad_token_id = 0\r\nassert pad_token_id != model.generation_config.eos_token_id\r\ngen_zero = model.generate(**input_ids, max_new_tokens=25, num_beam_groups=9, num_beams=99, num_return_sequences=99, diversity_penalty=1.0, no_repeat_ngram_size=3, do_sample=False, pad_token_id=pad_token_id)\r\nzero = [tokenizer.decode(beam, skip_special_tokens=True) for beam in gen_zero]\r\n\r\npad_token_id = 1\r\nassert pad_token_id != model.generation_config.eos_token_id\r\ngen_one = model.generate(**input_ids, max_new_tokens=25, num_beam_groups=9, num_beams=99, num_return_sequences=99, diversity_penalty=1.0, no_repeat_ngram_size=3, do_sample=False, pad_token_id=pad_token_id)\r\none = [tokenizer.decode(beam, skip_special_tokens=True) for beam in gen_one]\r\n\r\nassert zero == one\r\n```",
"Thank you!"
] | 1,701 | 1,702 | 1,702 | NONE | null | ### System Info
transformers Version: 4.35.2
Windows and Ubuntu20
Python 3.11.3
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```py
import transformers
tokenizer = transformers.LlamaTokenizer.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')
input_ids = tokenizer('Hi', return_tensors='pt')['input_ids']
model = transformers.LlamaForCausalLM.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')
print(model.generation_config.eos_token_id)
pad_token_id = 0
assert pad_token_id != model.generation_config.eos_token_id
zero = [tokenizer.decode(beam, skip_special_tokens=True) for beam in model.generate(input_ids, max_new_tokens=25, num_beam_groups=9, num_beams=99, num_return_sequences=99, diversity_penalty=1.0, no_repeat_ngram_size=3, do_sample=False, pad_token_id=pad_token_id)]
print(zero)
pad_token_id = 1
assert pad_token_id != model.generation_config.eos_token_id
one = [tokenizer.decode(beam, skip_special_tokens=True) for beam in model.generate(input_ids, max_new_tokens=25, num_beam_groups=9, num_beams=99, num_return_sequences=99, diversity_penalty=1.0, no_repeat_ngram_size=3, do_sample=False, pad_token_id=pad_token_id)]
print(one)
assert zero == one
```
### Expected behavior
Using a non-printable `pad_token_id` should result in the same generated text.
When a group is completed, group beam search keeps padding the ongoing beams from this group. That, in turn, affects how the diversity penalty is applied to tokens in other groups, so different tokens are chosen.
I believe a completed group should not affect log probabilities for other groups. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27894/timeline | completed | null | null |
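The attention-mask inference described in the maintainer's answer above can be illustrated with a short sketch. `infer_attention_mask` below is a hypothetical stand-in for the logic inside `generate` (the real implementation operates on tensors), shown only to make the pad-token ambiguity concrete:

```python
# Sketch of how an attention mask is inferred when none is passed:
# any position whose token equals pad_token_id is treated as padding (0).
# Hypothetical helper; not the actual transformers implementation.
def infer_attention_mask(input_ids, pad_token_id):
    """Return 0 for positions equal to pad_token_id, 1 otherwise."""
    return [[0 if tok == pad_token_id else 1 for tok in seq] for seq in input_ids]

# A Llama-style prompt: [bos, "Hi"], where the bos token id is 1.
input_ids = [[1, 6324]]

# With pad_token_id = 0 the bos token stays visible...
mask_pad0 = infer_attention_mask(input_ids, pad_token_id=0)
# ...but with pad_token_id = 1 (== bos) it is masked out,
# which is why the two generations in the issue differed.
mask_pad1 = infer_attention_mask(input_ids, pad_token_id=1)

print(mask_pad0)  # [[1, 1]]
print(mask_pad1)  # [[0, 1]]
```

Passing the attention mask returned by the tokenizer (`model.generate(**inputs, ...)`) sidesteps this inference entirely, which is the recommended fix.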
https://api.github.com/repos/huggingface/transformers/issues/27893 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27893/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27893/comments | https://api.github.com/repos/huggingface/transformers/issues/27893/events | https://github.com/huggingface/transformers/issues/27893 | 2,031,120,434 | I_kwDOCUB6oc55EHAy | 27,893 | Insanely-Fast-Whisper model: large - Cantonese KeyError "yue" | {
"login": "asusdisciple",
"id": 138434950,
"node_id": "U_kgDOCEBZhg",
"avatar_url": "https://avatars.githubusercontent.com/u/138434950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asusdisciple",
"html_url": "https://github.com/asusdisciple",
"followers_url": "https://api.github.com/users/asusdisciple/followers",
"following_url": "https://api.github.com/users/asusdisciple/following{/other_user}",
"gists_url": "https://api.github.com/users/asusdisciple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asusdisciple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asusdisciple/subscriptions",
"organizations_url": "https://api.github.com/users/asusdisciple/orgs",
"repos_url": "https://api.github.com/users/asusdisciple/repos",
"events_url": "https://api.github.com/users/asusdisciple/events{/privacy}",
"received_events_url": "https://api.github.com/users/asusdisciple/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey could you share a full reproducer, with a traceback and the output of `transformers-cli env`",
"```\r\nfrom transformers import pipeline\r\nimport torch\r\n\r\nmodel = \"openai/whisper-large\"\r\n\r\nobj = pipeline(model=model,\r\n torch_dtype=torch.float16,\r\n device=\"cuda:0\", # or mps for Mac devices\r\n chunk_length_s=30,\r\n batch_size=8,\r\n return_timestamps=False,\r\n model_kwargs={\"use_flash_attention_2\": False},\r\n\r\n )\r\nobj.model = obj.model.to_bettertransformer()\r\n\r\naudio = obj(\"/.../csmd024.wav\", generate_kwargs={\"language\": \"yue\"})\r\n\r\nprint(audio)\r\n```\r\n\r\nError log:\r\n```\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nThe BetterTransformer implementation does not support padding during training, as the fused kernels do not support attention masks. Beware that passing padded batched data during training may result in unexpected outputs. Please refer to https://huggingface.co./docs/optimum/bettertransformer/overview for more details.\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/contextlib.py\", line 153, in __exit__\r\n self.gen.throw(typ, value, traceback)\r\n File \"/.../Whisper/venv/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 924, in device_placement\r\n yield\r\n File \"/.../Whisper/venv/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1046, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/.../Whisper/venv/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 569, in _forward\r\n tokens = self.model.generate(\r\n File \"/.../Whisper/venv/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py\", line 2050, in generate\r\n forced_decoder_ids.append((1, generation_config.lang_to_id[language_token]))\r\nKeyError: '<|yue|>'\r\n```\r\n",
"That is expected you are not using `whisper-large-v3`. Other version do not support this language",
"I think the behaviour is still strange because usually if the language is not supported it gives you an error with the languages that are supported.",
"That's because it's supported in the actual code but not in the lang to id. But for sure the error message should be improved! ",
"The issue is with this:\r\n```python \r\n if generation_config.language in generation_config.lang_to_id.keys():\r\n language_token = generation_config.language\r\n elif generation_config.language in TO_LANGUAGE_CODE.keys():\r\n language_token = f\"<|{TO_LANGUAGE_CODE[generation_config.language]}|>\"\r\n elif generation_config.language in TO_LANGUAGE_CODE.values():\r\n language_token = f\"<|{generation_config.language}|>\"\r\n```\r\nthe language token is in the supported language but not in the lang to id. "
] | 1,701 | 1,702 | 1,702 | NONE | null | ### System Info
Using Newest transformers from Main. Ubuntu Linux 22.04
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Load Insanely Fast Whisper with large-v1 in HF
2. Try to transcribe something in Cantonese, "yue"
3. KeyError |yue| not found.
### Expected behavior
I think the problem is that Cantonese (`yue`) was only added as a supported language in Whisper large-v3. When insanely-fast-whisper is used with an earlier checkpoint, it somehow tries to use this language instead of rejecting it, even though `yue` is not a valid language for that model.
"url": "https://api.github.com/repos/huggingface/transformers/issues/27893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27893/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27892 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27892/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27892/comments | https://api.github.com/repos/huggingface/transformers/issues/27892/events | https://github.com/huggingface/transformers/pull/27892 | 2,031,076,271 | PR_kwDOCUB6oc5hcKAU | 27,892 | Fix pos_mask application and update tests accordingly | {
"login": "ferjorosa",
"id": 24965845,
"node_id": "MDQ6VXNlcjI0OTY1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/24965845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ferjorosa",
"html_url": "https://github.com/ferjorosa",
"followers_url": "https://api.github.com/users/ferjorosa/followers",
"following_url": "https://api.github.com/users/ferjorosa/following{/other_user}",
"gists_url": "https://api.github.com/users/ferjorosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ferjorosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ferjorosa/subscriptions",
"organizations_url": "https://api.github.com/users/ferjorosa/orgs",
"repos_url": "https://api.github.com/users/ferjorosa/repos",
"events_url": "https://api.github.com/users/ferjorosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ferjorosa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker do you know which steps are left in the process?"
] | 1,701 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27855 (https://github.com/huggingface/transformers/issues/27855)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @younesbelkada @amyeroberts
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27892/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27892",
"html_url": "https://github.com/huggingface/transformers/pull/27892",
"diff_url": "https://github.com/huggingface/transformers/pull/27892.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27892.patch",
"merged_at": 1704454570000
} |
https://api.github.com/repos/huggingface/transformers/issues/27891 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27891/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27891/comments | https://api.github.com/repos/huggingface/transformers/issues/27891/events | https://github.com/huggingface/transformers/pull/27891 | 2,030,729,484 | PR_kwDOCUB6oc5ha9U4 | 27,891 | fix resuming from ckpt when using FSDP with FULL_STATE_DICT | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27891). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
1. Fixes https://github.com/huggingface/transformers/issues/27878 introduced by https://github.com/huggingface/transformers/pull/27652
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27891/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27891",
"html_url": "https://github.com/huggingface/transformers/pull/27891",
"diff_url": "https://github.com/huggingface/transformers/pull/27891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27891.patch",
"merged_at": 1702735903000
} |
https://api.github.com/repos/huggingface/transformers/issues/27890 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27890/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27890/comments | https://api.github.com/repos/huggingface/transformers/issues/27890/events | https://github.com/huggingface/transformers/pull/27890 | 2,030,698,018 | PR_kwDOCUB6oc5ha2PI | 27,890 | [Doc] Spanish translation of pad_truncation.md | {
"login": "aaronjimv",
"id": 67152883,
"node_id": "MDQ6VXNlcjY3MTUyODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aaronjimv",
"html_url": "https://github.com/aaronjimv",
"followers_url": "https://api.github.com/users/aaronjimv/followers",
"following_url": "https://api.github.com/users/aaronjimv/following{/other_user}",
"gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions",
"organizations_url": "https://api.github.com/users/aaronjimv/orgs",
"repos_url": "https://api.github.com/users/aaronjimv/repos",
"events_url": "https://api.github.com/users/aaronjimv/events{/privacy}",
"received_events_url": "https://api.github.com/users/aaronjimv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello. I am open to any feedback.",
"Hi @stevhliu, thanks!",
"Hi @osanseviero, thanks for the help. Please let me know anything else."
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
Add the Spanish version of `pad_truncation.md` to `transformers/docs/source/es`
Fix a typo in the table of `en/pad_truncation.md`
Fixes #15947
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@omarespejel @sgugger @osanseviero @stevhliu | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27890/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27890",
"html_url": "https://github.com/huggingface/transformers/pull/27890",
"diff_url": "https://github.com/huggingface/transformers/pull/27890.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27890.patch",
"merged_at": 1702060338000
} |
https://api.github.com/repos/huggingface/transformers/issues/27889 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27889/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27889/comments | https://api.github.com/repos/huggingface/transformers/issues/27889/events | https://github.com/huggingface/transformers/pull/27889 | 2,030,653,912 | PR_kwDOCUB6oc5haseU | 27,889 | Fix 2 tests in `FillMaskPipelineTests` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,702 | 1,702 | COLLABORATOR | null | # What does this PR do?
The tests were introduced in #26234 but don't work as expected.
This PR updates the expected values.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27889/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27889",
"html_url": "https://github.com/huggingface/transformers/pull/27889",
"diff_url": "https://github.com/huggingface/transformers/pull/27889.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27889.patch",
"merged_at": 1702043729000
} |
https://api.github.com/repos/huggingface/transformers/issues/27888 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27888/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27888/comments | https://api.github.com/repos/huggingface/transformers/issues/27888/events | https://github.com/huggingface/transformers/issues/27888 | 2,030,650,995 | I_kwDOCUB6oc55CUZz | 27,888 | Adding the same special token twice cause the additional_special_tokens to be set to an empty list inside tokenizer | {
"login": "dblakely",
"id": 20539855,
"node_id": "MDQ6VXNlcjIwNTM5ODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/20539855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dblakely",
"html_url": "https://github.com/dblakely",
"followers_url": "https://api.github.com/users/dblakely/followers",
"following_url": "https://api.github.com/users/dblakely/following{/other_user}",
"gists_url": "https://api.github.com/users/dblakely/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dblakely/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dblakely/subscriptions",
"organizations_url": "https://api.github.com/users/dblakely/orgs",
"repos_url": "https://api.github.com/users/dblakely/repos",
"events_url": "https://api.github.com/users/dblakely/events{/privacy}",
"received_events_url": "https://api.github.com/users/dblakely/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"On going through the source code for `add_special_tokens` in the link [here](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L873), it seems like this function has a param `replace_additional_special_tokens` which defaults to `True`.\r\n\r\nThis is an excerpt from the docstring for this param:\r\n\r\nIf `True`, the existing list of additional special tokens will be replaced by the list provided in `special_tokens_dict`. Otherwise, `self._additional_special_tokens` is just extended. In the former case, the tokens will NOT be removed from the tokenizer's full vocabulary - they are only being flagged as non-special tokens.\r\n\r\nSo, what happened is after you added the same token again, as you expected, it wasn't added, which is why an empty list was outputted.\r\n\r\nSo, it seems like what you wanna do is explicitly set this param to `False`.\r\n\r\nAlso, you can execute `print(tokenizer_2)` to verify that the token you added prior to saving is retained. This is what I see using your code sample:\r\n\r\n```\r\nLlamaTokenizer(name_or_path='tokenizer_1', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>'}, clean_up_tokenization_spaces=False), added_tokens_decoder={\r\n\t0: AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n\t1: AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n\t2: AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n\t32000: AddedToken(\"<tok>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n}\r\n```\r\nClearly your special token is retained after loading.",
"I actually discovered that as well and that is how I solved the problem on my end. But I still felt like the situation seemed like a bug--they initialize an empty set of additional special tokens but then don't end up adding a special token that already exists. And so the set remains empty, which in effect removes the special token from the set. My guess is that this wasn't intended.",
"Yep, I can reproduce and that's indeed a good catch. I have no idea why this is happening yet, opening a pr now 🤗 ",
"Sorry! Christmas delays 🤗 making sure this is fixed for next release "
] | 1,701 | 1,705 | 1,705 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): NA
- Jax version: NA
- JaxLib version: NA
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The issue is that if you re-add a particular special token to the tokenizer, it'll make that token no longer an "additional_special_token" (the token will still exist in the vocabulary, but it'll cease to be treated as a special token).
Here's an example to reproduce the issue:
```python
from transformers import LlamaTokenizer
base_model = "meta-llama/Llama-2-7b-chat-hf"
# Create tokenizer and add an additional special token
tokenizer_1 = LlamaTokenizer.from_pretrained(base_model)
tokenizer_1.add_special_tokens({"additional_special_tokens": ["<tok>"]})
print(tokenizer_1.additional_special_tokens) # outputs `['<tok>']` as expected
tokenizer_1_checkpoint = "tokenizer_1"
tokenizer_1.save_pretrained(tokenizer_1_checkpoint)
# Load the above tokenizer and add the same special token a second time
tokenizer_2 = LlamaTokenizer.from_pretrained(tokenizer_1_checkpoint)
tokenizer_2.add_special_tokens({"additional_special_tokens": ["<tok>"]})
print(tokenizer_2.additional_special_tokens) # outputs `[]` which seems incorrect
```
This might seem contrived, but I have a training script where I add a special token by default to the tokenizer to ensure that it's there (I don't always know a priori that it's already been added). However, I found that if it is already there, the behavior of the tokenizer changes a bit. For example, the tokenizer will cease to remove the token when decoding automatically. After digging into it, I realized it's because the `additional_special_tokens` list was getting reset.
### Expected behavior
I'd expect adding the exact same special token a second time to have no effect whatsoever on the tokenizer. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27888/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27887 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27887/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27887/comments | https://api.github.com/repos/huggingface/transformers/issues/27887/events | https://github.com/huggingface/transformers/pull/27887 | 2,030,592,470 | PR_kwDOCUB6oc5hae1Q | 27,887 | Fix device of masks in tests | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Strange that we don't see the issue on our CI. Do you know why @fxmarty?",
"@ydshieh Can you try `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/llama -s -vvvvv -k \"test_resize_tokens_embeddings\"` on main?",
"I am not sure it's weird. The `push` tests should run on GPU so those should have failed...\r\nhttps://github.com/huggingface/transformers/blob/fc71e815f6ff5fc8b743786e2146cff9644ec598/.github/workflows/self-push.yml#L133",
"As discussed offline it did actually fail (https://github.com/huggingface/transformers/actions/runs/7124821720) so all good!"
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | As per title, fix a bug introduced in https://github.com/huggingface/transformers/pull/24587 in the GPU tests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27887/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27887",
"html_url": "https://github.com/huggingface/transformers/pull/27887",
"diff_url": "https://github.com/huggingface/transformers/pull/27887.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27887.patch",
"merged_at": 1701952484000
} |
https://api.github.com/repos/huggingface/transformers/issues/27886 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27886/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27886/comments | https://api.github.com/repos/huggingface/transformers/issues/27886/events | https://github.com/huggingface/transformers/pull/27886 | 2,030,591,304 | PR_kwDOCUB6oc5haeku | 27,886 | [WIP] add copied from in test files when adding new model like | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,701 | 1,706 | null | COLLABORATOR | null | # What does this PR do?
Automatically adds "Copied from" comments in the test files. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27886/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27886",
"html_url": "https://github.com/huggingface/transformers/pull/27886",
"diff_url": "https://github.com/huggingface/transformers/pull/27886.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27886.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27885 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27885/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27885/comments | https://api.github.com/repos/huggingface/transformers/issues/27885/events | https://github.com/huggingface/transformers/issues/27885 | 2,030,571,134 | I_kwDOCUB6oc55CA5- | 27,885 | [i18n-jp] Translating docs tojp | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | closed | false | null | [] | [] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to the whole Japanese-speaking community 🌐 (currently 0 out of 267 complete)
Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `jp` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `jp/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
## Get Started section
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27885/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27884 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27884/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27884/comments | https://api.github.com/repos/huggingface/transformers/issues/27884/events | https://github.com/huggingface/transformers/pull/27884 | 2,030,531,160 | PR_kwDOCUB6oc5haQ9W | 27,884 | Add models from cpmant to derformable_detr | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu, I built docs locally, it should pass now.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27884). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
  "I made the final change as well. As we discussed over email, can you open a separate issue listing the `model_docs` files that are yet to be translated?"
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #27885
[#27885](https://github.com/huggingface/transformers/issues/27885)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27884/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27884",
"html_url": "https://github.com/huggingface/transformers/pull/27884",
"diff_url": "https://github.com/huggingface/transformers/pull/27884.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27884.patch",
"merged_at": 1702490550000
} |
https://api.github.com/repos/huggingface/transformers/issues/27883 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27883/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27883/comments | https://api.github.com/repos/huggingface/transformers/issues/27883/events | https://github.com/huggingface/transformers/pull/27883 | 2,030,503,533 | PR_kwDOCUB6oc5haKm3 | 27,883 | [`ChatGlm`] Adds support for the ChatGLM model | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27883). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
  "Hi~ o(* ̄▽ ̄*)ブ Could you tell me what your plans are? May I know when the PR can be merged? I have PRs that depend on this feature.",
"@younesbelkada and I just came back from holidays, we are hoping end of the week maybe later! ",
"cc @ArthurZucker could you give a first pass ? 🙏 "
] | 1,701 | 1,708 | null | COLLABORATOR | null | # What does this PR do?
Drafts support for the ChatGLM model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27883/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27883",
"html_url": "https://github.com/huggingface/transformers/pull/27883",
"diff_url": "https://github.com/huggingface/transformers/pull/27883.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27883.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27882 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27882/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27882/comments | https://api.github.com/repos/huggingface/transformers/issues/27882/events | https://github.com/huggingface/transformers/pull/27882 | 2,030,456,683 | PR_kwDOCUB6oc5haAG5 | 27,882 | Md 4 | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27882/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27882",
"html_url": "https://github.com/huggingface/transformers/pull/27882",
"diff_url": "https://github.com/huggingface/transformers/pull/27882.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27882.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27881 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27881/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27881/comments | https://api.github.com/repos/huggingface/transformers/issues/27881/events | https://github.com/huggingface/transformers/pull/27881 | 2,030,437,906 | PR_kwDOCUB6oc5hZ8BC | 27,881 | Show new failing tests in a more clear way in slack report | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27881). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
It's now difficult to see the **new** failing tests in our Slack CI report, as there are > 100 failures. Also, scrolling through the many Slack reply responses to get to the right place is time-consuming.
This PR adds the new failing tests in 2 places:
- the end of the post
- the end of the replies to the post
Links are provided too.
With this, it's easy and fast to access the new failing tests.
Screenshot
<img width="1016" alt="Screenshot 2023-12-07 113217" src="https://github.com/huggingface/transformers/assets/2521628/9b0c85c9-589c-46f9-af7e-04c168294b88">
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27881/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27881",
"html_url": "https://github.com/huggingface/transformers/pull/27881",
"diff_url": "https://github.com/huggingface/transformers/pull/27881.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27881.patch",
"merged_at": 1701958171000
} |
https://api.github.com/repos/huggingface/transformers/issues/27880 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27880/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27880/comments | https://api.github.com/repos/huggingface/transformers/issues/27880/events | https://github.com/huggingface/transformers/issues/27880 | 2,030,179,409 | I_kwDOCUB6oc55AhRR | 27,880 | Model generation is not stopping on eos token | {
"login": "hengjiUSTC",
"id": 25971665,
"node_id": "MDQ6VXNlcjI1OTcxNjY1",
"avatar_url": "https://avatars.githubusercontent.com/u/25971665?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hengjiUSTC",
"html_url": "https://github.com/hengjiUSTC",
"followers_url": "https://api.github.com/users/hengjiUSTC/followers",
"following_url": "https://api.github.com/users/hengjiUSTC/following{/other_user}",
"gists_url": "https://api.github.com/users/hengjiUSTC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hengjiUSTC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hengjiUSTC/subscriptions",
"organizations_url": "https://api.github.com/users/hengjiUSTC/orgs",
"repos_url": "https://api.github.com/users/hengjiUSTC/repos",
"events_url": "https://api.github.com/users/hengjiUSTC/events{/privacy}",
"received_events_url": "https://api.github.com/users/hengjiUSTC/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! This is a duplicate of a lot of issue, the eos token is set to `AddedToken(content = \"</s>\", normalized = True,...)`. This will work:\r\n```python \r\nfrom transformers import AutoTokenizer, AddedToken \r\ntokenizer = AutoTokenizer.from_pretrained(\"NousResearch/Llama-2-7b-chat-hf\", eos_token = AddedToken(\"</s>\", normalized = False, special=True))\r\n```"
] | 1,701 | 1,702 | 1,702 | NONE | null | ### System Info
I have a finetuned llama2 model: "HenryJJ/tangshi-llama2-7b-chat-qlora" loaded by:
```
from peft import AutoPeftModelForCausalLM
# Local directory where the model is saved
local_model_path = "HenryJJ/tangshi-llama2-7b-chat-qlora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
model = AutoPeftModelForCausalLM.from_pretrained(
local_model_path,
low_cpu_mem_usage=True,
torch_dtype=torch.float16,
load_in_4bit=True,
)
```
when i am running the inference:
```
prompt = f"""<s>[INST] <<SYS>>你是一个唐诗助手,帮助用户写一首对应要求的唐诗<</SYS>>
作者:李商隱
标签:黄河;咏物;抒情;鼓吹曲辞;乐府;咏物诗
[/INST]
"""
input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.cuda()
# with torch.inference_mode():
outputs = model.generate(
input_ids=input_ids,
max_new_tokens=400,
top_p=0.9,
temperature=0.7,
pad_token_id= tokenizer.eos_token_id,
eos_token_id= tokenizer.eos_token_id,
)
print(f"Prompt:\n{prompt}\n")
print(f"Generated output:\n{tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0][len(prompt):]}")
```
The result does not stop at the eos token `</s>`:
```
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:381: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.7` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:386: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
warnings.warn(
Prompt:
<s>[INST] <<SYS>>你是一个唐诗助手,帮助用户写一首对应要求的唐诗<</SYS>>
作者:李商隱
标签:黄河;咏物;抒情;鼓吹曲辞;乐府;咏物诗
[/INST]
Generated output:
辭 黃河
黃河出塞西流幾千里,歷代將士戰死不知道。
黃河入海東流幾千里,歷代船員採藥不知道。
吹吹鼓鼓,喧喧鼓鼓。
黃河內流外流,兩岸淚水不相逢。
吹吹鼓鼓,哭哭鼓鼓。</s>
[/INST]鼓吹曲辭 黃河
黃河出塞西流幾千里,歷代將士戰死不知道。
黃河入海東流幾千里,歷代船員採藥不知道。
吹吹鼓鼓,喧喧鼓鼓。
黃河內流外流,兩岸淚水不相逢。
吹吹鼓鼓,哭哭鼓鼓。
黃河誰人知,從今古今新。
黃河誰人識,從今古今新。
吹吹鼓鼓,哭哭鼓鼓。</s>
黃河出塞西流幾
```
Same thing happens when I use pipeline
```
from transformers import (
pipeline,
)
pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_length=300)
prompt = f"""[INST] <<SYS>>你是一个唐诗助手,帮助用户写一首对应要求的唐诗<</SYS>>
作者:駱賓王
标签:思念;七言律诗;秋天;咏物
[/INST]
"""
result = pipe(f"{prompt}", eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)
print(result[0]['generated_text'])
```
I got :
```
[INST] <<SYS>>你是一个唐诗助手,帮助用户写一首对应要求的唐诗<</SYS>>
作者:駱賓王
标签:思念;七言律诗;秋天;咏物
[/INST]
秋夜懷愁
薰茵寒煙濃,暮鴈飛斜堪。
漠漠江頭月,孤城殘鐘鐔。
歸來寄與憶,欲斷欲切斷。
欲斷欲切斷,欲斷欲折折。</s>
[/INST]題名:秋夜懷愁
[/INST]題名:秋夜懷愁
[/INST]題名:秋夜懷愁
[/INST]題名:秋夜懷愁
[/INST]題名:秋夜��
```
Is this a bug, or am I using it wrong? Reproduction colab: [https://colab.research.google.com/drive/1PkD_isiH7BA1pjgoB_1a55dcxQAkPCKJ?usp=sharing](https://colab.research.google.com/drive/1PkD_isiH7BA1pjgoB_1a55dcxQAkPCKJ?usp=sharing). Please ignore the training code at the top; the reproduction code is at the bottom.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1PkD_isiH7BA1pjgoB_1a55dcxQAkPCKJ?usp=sharing
### Expected behavior
Output should stop on `</s>`.
Example:
```
[INST] <<SYS>>你是一个唐诗助手,帮助用户写一首对应要求的唐诗<</SYS>>
作者:駱賓王
标签:思念;七言律诗;秋天;咏物
[/INST]
秋夜懷愁
薰茵寒煙濃,暮鴈飛斜堪。
漠漠江頭月,孤城殘鐘鐔。
歸來寄與憶,欲斷欲切斷。
欲斷欲切斷,欲斷欲折折。
```
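In the meantime I trim the output myself. This is a model-agnostic sketch in plain Python (no `transformers` calls; `truncate_at_eos` is my own illustrative helper):

```python
def truncate_at_eos(token_ids, eos_token_id):
    """Keep tokens up to and including the first eos token.

    This mimics the early stop I expected from `model.generate`
    when passing `eos_token_id=tokenizer.eos_token_id`.
    """
    for i, tok in enumerate(token_ids):
        if tok == eos_token_id:
            return token_ids[: i + 1]
    return token_ids


# e.g. Llama 2's </s> has id 2
print(truncate_at_eos([5, 7, 9, 2, 11, 13], 2))  # → [5, 7, 9, 2]
```

Everything after the slice can then be dropped before calling `tokenizer.batch_decode`.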
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27880/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27879 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27879/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27879/comments | https://api.github.com/repos/huggingface/transformers/issues/27879/events | https://github.com/huggingface/transformers/issues/27879 | 2,029,990,560 | I_kwDOCUB6oc54_zKg | 27,879 | FuyuProcessor broken and causes infinite loop | {
"login": "grahamannett",
"id": 7343667,
"node_id": "MDQ6VXNlcjczNDM2Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7343667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/grahamannett",
"html_url": "https://github.com/grahamannett",
"followers_url": "https://api.github.com/users/grahamannett/followers",
"following_url": "https://api.github.com/users/grahamannett/following{/other_user}",
"gists_url": "https://api.github.com/users/grahamannett/gists{/gist_id}",
"starred_url": "https://api.github.com/users/grahamannett/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/grahamannett/subscriptions",
"organizations_url": "https://api.github.com/users/grahamannett/orgs",
"repos_url": "https://api.github.com/users/grahamannett/repos",
"events_url": "https://api.github.com/users/grahamannett/events{/privacy}",
"received_events_url": "https://api.github.com/users/grahamannett/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"if anyone runs into this, it's because of the `if end != start` then `continue` will just keep going. both these functions should be rewritten ",
"cc @molbap for visibility, and @grahamannett if you can share a full reproducer would be helpful! ",
"@ArthurZucker its hard to reproduce since the encoding throws an error if you try to create a similar input but here:\r\n\r\n```python\r\nprocessor = AutoProcessor.from_pretrained(\"adept/fuyu-8b\")\r\nx = torch.tensor([[71011, 71019, 1, 138997, 75694, 71374, 71118, 70004, 70050, 70013, 70004, 70013, 71119, 71122]])\r\noutput = processor.post_process_box_coordinates(x)\r\n```\r\n\r\nthe tokens are supposed to resemble something like the model outputting 'here is the example <box>1, 2, 11, 12, 13</box>' (or something similar think x in my example is 2's + 21).\r\n\r\nI closed the issue tho so think we are all good 👍",
"Yep thanks I was able to reproduce. It's a bit of an edge case, but we could / should have a check for this. Do you want to open a PR for that? 🤗 ",
"@ArthurZucker Its not really an edge case as the generated output from the model can actually generate this sort of output and anecdotally seems like it becomes much more common if you further finetune the model on OCR related tasks/instructions. \r\n\r\nI do not think the PR I make will be merged so not sure if I should PR (hence closed this issue).",
"If this should be fix, we'll merge the PR that fixed it! We'll iterate together on the PR if you want! \r\nOpening this again and marking as good difficult issue ",
"I think a lot of the current FuyuProcessor and FuyuImageProcessor (the image processor preprocess/preprocess_with_tokenizer_info I believe has some big issues and unless I am told it is how they trained it, it will make fine-tuning with images not really work) are broken/not working correctly. \r\n\r\nI am just going to close this as it seems like the best way to fix this would be to use a different implementation.",
"It is how it was trained / the code that was shared by the authors 🤗 ",
"@ArthurZucker interesting. There are a lot of parts in it that seem to not work/be confusing to me e.g. how `get_sample_encoding` https://github.com/huggingface/transformers/blob/7e0ddf89f483f53107870cddabb2e1cc93069705/src/transformers/models/fuyu/processing_fuyu.py#L385 is done makes me think that because of https://github.com/huggingface/transformers/blob/7e0ddf89f483f53107870cddabb2e1cc93069705/src/transformers/models/fuyu/processing_fuyu.py#L246 the processor is not really able to be used for training? Or why `_tokenize_prompts_with_image_and_batch` adds `\"|ENDOFTEXT|\"` token to a text prompt even though it then is removed by just using the original prompt size?\r\n"
] | 1,701 | 1,705 | 1,705 | NONE | null | https://github.com/huggingface/transformers/blob/75336c17945c6b1b5552dbf0236d25f869168aab/src/transformers/models/fuyu/processing_fuyu.py#L618
I am not sure exactly why/how it is but localized an issue where this segment of the processor goes into an infinite loop.
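For anyone debugging this, the stall pattern is a scan loop whose cursor can fail to advance. A minimal defensive sketch of such a scan (the helper name and token-pair logic are hypothetical, not the library's actual code) that forces progress on every iteration:

```python
def find_delimited_spans(tokens, start_token, end_token):
    """Collect (start, end) index pairs for start_token ... end_token spans.

    Defensive sketch: the cursor advances on every path through the loop,
    so a degenerate span (end == start) or an unmatched opener cannot make
    the scan spin forever.
    """
    spans = []
    i = 0
    while i < len(tokens):
        if tokens[i] != start_token:
            i += 1
            continue
        try:
            end = tokens.index(end_token, i + 1)
        except ValueError:
            break  # unmatched opener: stop instead of re-scanning the same slice
        spans.append((i, end))
        i = end + 1  # always moves past the matched span
    return spans
```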
Could be another segment but if I add a timeout to `post_process_box_coordinates` it seems to point to this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27879/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27878 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27878/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27878/comments | https://api.github.com/repos/huggingface/transformers/issues/27878/events | https://github.com/huggingface/transformers/issues/27878 | 2,029,968,111 | I_kwDOCUB6oc54_trv | 27,878 | Fail to resume training from Pytorch FSDP checkpoint. | {
"login": "Aria-K-Alethia",
"id": 22995879,
"node_id": "MDQ6VXNlcjIyOTk1ODc5",
"avatar_url": "https://avatars.githubusercontent.com/u/22995879?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aria-K-Alethia",
"html_url": "https://github.com/Aria-K-Alethia",
"followers_url": "https://api.github.com/users/Aria-K-Alethia/followers",
"following_url": "https://api.github.com/users/Aria-K-Alethia/following{/other_user}",
"gists_url": "https://api.github.com/users/Aria-K-Alethia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aria-K-Alethia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aria-K-Alethia/subscriptions",
"organizations_url": "https://api.github.com/users/Aria-K-Alethia/orgs",
"repos_url": "https://api.github.com/users/Aria-K-Alethia/repos",
"events_url": "https://api.github.com/users/Aria-K-Alethia/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aria-K-Alethia/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, please let us know if the above PR resolves the issue.",
"Thank you for your follow up.\r\nI tried the new branch, but it gave another error:\r\n```python\r\nTraceback (most recent call last):\r\n File \"/home/user/lm_finetune/train_tts.py\", line 183, in <module>\r\n train()\r\n File \"/home/user/lm_finetune/train_tts.py\", line 175, in train\r\n trainer.train(resume_from_checkpoint=ckpt_dir)\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py\", line 1533, in train\r\n return inner_training_loop(\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py\", line 1691, in _inner_training_loop\r\n self._load_optimizer_and_scheduler(resume_from_checkpoint)\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py\", line 2518, in _load_optimizer_and_scheduler\r\n load_fsdp_optimizer(\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/accelerate/utils/fsdp_utils.py\", line 190, in load_fsdp_optimizer\r\n optimizer.load_state_dict(flattened_osd)\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/accelerate/optimizer.py\", line 107, in load_state_dict\r\n self.optimizer.load_state_dict(state_dict)\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/torch/_compile.py\", line 24, in inner\r\n return torch._dynamo.disable(fn, recursive)(*args, **kwargs)\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py\", line 328, in _fn\r\n return fn(*args, **kwargs)\r\n File \"/opt/conda/envs/alpaca/lib/python3.10/site-packages/torch/optim/optimizer.py\", line 735, in load_state_dict\r\n raise ValueError(\"loaded state dict contains a parameter group \"\r\nValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group\r\n```"
] | 1,701 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.35.2 (I also tried the development version 4.36.0, but the same problem arose)
- Platform: Linux-5.15.0-1033-azure-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+rocm5.6 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: I'm not sure the meaning of this term but I use 4 GPU cards
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Problem description**
There is no direct code sample to reproduce, so I will try my best to describe how the problem appeared.
1. I fine-tuned a LLM using `transformers.trainer` with pytorch FSDP arguments. The cmd is like:
```python
torchrun --nproc_per_node=4 --master_port=20001 train.py \
--model_name_or_path meta-llama/Llama-2-7b-hf \
--bf16 True \
--output_dir output \
--num_train_epochs 5 \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 16 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 100 \
--save_total_limit 8 \
--learning_rate 1e-4 \
--weight_decay 0. \
--warmup_steps 1024 \
--lr_scheduler_type "cosine" \
--fsdp "full_shard auto_wrap" \
--fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
--tf32 False \
--logging_steps 1 \
--model_max_length 4096 \
```
2. The training was interrupted so I tried to resume it from a checkpoint by `trainer.train(resume_from_checkpoint)`, but this gave the following error message:
```python
Traceback (most recent call last):
File "/home/user/lm_finetune/train.py", line 183, in <module>
train()
File "/home/user/lm_finetune/train.py", line 175, in train
trainer.train(resume_from_checkpoint=ckpt_dir)
File "/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in train
Traceback (most recent call last):
File "/home/user/lm_finetune/train.py", line 183, in <module>
return inner_training_loop(
File "/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py", line 1712, in _inner_training_loop
train()
File "/home/user/lm_finetune/train.py", line 175, in train
trainer.train(resume_from_checkpoint=ckpt_dir)
File "/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in train
self._load_from_checkpoint(resume_from_checkpoint, self.model_wrapped)
File "/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py", line 2064, in _load_from_checkpoint
return inner_training_loop(
File "/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py", line 1712, in _inner_training_loop
raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
ValueError: Can't find a valid checkpoint at /home/user/lm_finetune/output/checkpoint-900
self._load_from_checkpoint(resume_from_checkpoint, self.model_wrapped)
File "/opt/conda/envs/alpaca/lib/python3.10/site-packages/transformers/trainer.py", line 2064, in _load_from_checkpoint
raise ValueError(f"Can't find a valid checkpoint at {resume_from_checkpoint}")
ValueError: Can't find a valid checkpoint at /home/user/lm_finetune/output/checkpoint-900
```
**Possible solutions I tried**
1. I first tried to update `transformers` to the development version `4.36.0dev`, but the problem still exists
2. I checked the source code; it looks like it requires the model ckpt to be a folder, but in my case, the FSDP ckpt is only a file named `pytorch_model_fsdp.bin`. I think this should be the key point, but I don't know how to solve it.
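A small sketch of the check I'd expect here (hypothetical helper — `looks_like_checkpoint` and the filename tuple are my own; only `pytorch_model_fsdp.bin` comes from the observation above): treat a directory as a valid checkpoint if it contains any known weight file, including the single-file FSDP layout.

```python
import os

# Filenames a checkpoint directory may contain, depending on configuration.
# "pytorch_model_fsdp.bin" is the single-file FSDP case described above; the
# others are the usual folder-style layouts.
CANDIDATE_WEIGHT_FILES = (
    "pytorch_model_fsdp.bin",
    "pytorch_model.bin",
    "model.safetensors",
)

def looks_like_checkpoint(ckpt_dir):
    """Return True if ckpt_dir contains any recognizable weight file."""
    if not os.path.isdir(ckpt_dir):
        return False
    return any(
        os.path.isfile(os.path.join(ckpt_dir, name))
        for name in CANDIDATE_WEIGHT_FILES
    )
```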
### Expected behavior
The training should be resumed. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27878/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27877 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27877/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27877/comments | https://api.github.com/repos/huggingface/transformers/issues/27877/events | https://github.com/huggingface/transformers/pull/27877 | 2,029,700,435 | PR_kwDOCUB6oc5hXb0R | 27,877 | Format code | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, which version of `ruff` are you using? The main CI usually make sure that we have the correct format and it does not seem to be failing. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27877). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> Hey, which version of `ruff` are you using? The main CI usually make sure that we have the correct format and it does not seem to be failing.\r\n\r\nruff 0.1.7",
"you need to have 0.1.5 we pinned it for now",
":joy: Any plan to update?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Format some files with `make style`
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27877/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27877",
"html_url": "https://github.com/huggingface/transformers/pull/27877",
"diff_url": "https://github.com/huggingface/transformers/pull/27877.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27877.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27876 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27876/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27876/comments | https://api.github.com/repos/huggingface/transformers/issues/27876/events | https://github.com/huggingface/transformers/issues/27876 | 2,029,550,828 | I_kwDOCUB6oc54-Hzs | 27,876 | `aggregations_strategies` for TokenClassificationPipeline seem broken when not `simple` | {
"login": "antoine-lizee",
"id": 2957716,
"node_id": "MDQ6VXNlcjI5NTc3MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2957716?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antoine-lizee",
"html_url": "https://github.com/antoine-lizee",
"followers_url": "https://api.github.com/users/antoine-lizee/followers",
"following_url": "https://api.github.com/users/antoine-lizee/following{/other_user}",
"gists_url": "https://api.github.com/users/antoine-lizee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antoine-lizee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoine-lizee/subscriptions",
"organizations_url": "https://api.github.com/users/antoine-lizee/orgs",
"repos_url": "https://api.github.com/users/antoine-lizee/repos",
"events_url": "https://api.github.com/users/antoine-lizee/events{/privacy}",
"received_events_url": "https://api.github.com/users/antoine-lizee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello Antoine,\r\n\r\nThanks for reporting this issue. I don't have the time to look at it in details. I think this is due to the tokenizer not supporting \"real words\" hence falling into the heuristic [here](https://github.com/huggingface/transformers/blob/75336c17945c6b1b5552dbf0236d25f869168aab/src/transformers/pipelines/token_classification.py#L387).\r\n\r\nFor now, I can suggest you to change your model to use a compatible tokenizer. If you need it absolutely, you can get the entity from the text with the offset mapping like `word = text[entity[\"start\"]:entity[\"end\"]]`. For \"Dermatologie\" fused with \".\", I don't think we can do something immediately with this type of tokenizer.\r\n\r\nHave a good day,\r\nLuc\r\n",
"Hello Luc,\r\n\r\nI do remember seeing the warning, so you're likely right.\r\nI'm a bit surprised that the heuristic can't do better but I won't have to look in depth either - tbh this was more of a bug report as `simple` is good enough in general and I'm moving to more suitable tokenizers.\r\n\r\nThank you for your answer, closing as won't fix."
] | 1,701 | 1,702 | 1,702 | NONE | null | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.35.2
- Platform: macOS-13.6.2-arm64-arm-64bit
- Python version: 3.11.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
Blame gives roughly: @luccailliau @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
from pprint import pprint
tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
model = AutoModelForTokenClassification.from_pretrained("Jean-Baptiste/camembert-ner-with-dates")
nlp_no_agg = pipeline('ner', model=model, tokenizer=tokenizer)
nlp_simple = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
nlp_first = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
nlp_avg = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="average")
nlp_max = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="max")
for example in [
"Bonjour,je suis le docteur Brice Saintclair",
"Je vous renvoie en Dermatologie.",
]:
print(example)
print("no agg")
pprint(nlp_no_agg(example))
print("simple")
pprint(nlp_simple(example))
print("first")
pprint(nlp_first(example))
print("avg")
pprint(nlp_avg(example))
print("max")
pprint(nlp_max(example))
```
Result:
```
Bonjour,je suis le docteur Brice Saintclair
no agg
[{'end': 30,
'entity': 'I-PER',
'index': 7,
'score': 0.9949898,
'start': 26,
'word': '▁Bri'},
{'end': 32,
'entity': 'I-PER',
'index': 8,
'score': 0.99483263,
'start': 30,
'word': 'ce'},
{'end': 38,
'entity': 'I-PER',
'index': 9,
'score': 0.9943815,
'start': 32,
'word': '▁Saint'},
{'end': 43,
'entity': 'I-PER',
'index': 10,
'score': 0.9938929,
'start': 38,
'word': 'clair'}]
simple
[{'end': 43,
'entity_group': 'PER',
'score': 0.9945242,
'start': 26,
'word': 'Brice Saintclair'}]
first
[{'end': 43,
'entity_group': 'PER',
'score': 0.99468565,
'start': 26,
'word': 'BriceSaintclair'}]
avg
[{'end': 43,
'entity_group': 'PER',
'score': 0.9945242,
'start': 26,
'word': 'BriceSaintclair'}]
max
[{'end': 43,
'entity_group': 'PER',
'score': 0.99468565,
'start': 26,
'word': 'BriceSaintclair'}]
Je vous renvoie en Dermatologie.
no agg
[{'end': 22,
'entity': 'I-ORG',
'index': 5,
'score': 0.46623757,
'start': 18,
'word': '▁Der'},
{'end': 25,
'entity': 'I-ORG',
'index': 6,
'score': 0.4892864,
'start': 22,
'word': 'mat'},
{'end': 31,
'entity': 'I-ORG',
'index': 7,
'score': 0.49201807,
'start': 25,
'word': 'ologie'}]
simple
[{'end': 31,
'entity_group': 'ORG',
'score': 0.48251402,
'start': 18,
'word': 'Dermatologie'}]
first
[{'end': 32,
'entity_group': 'ORG',
'score': 0.46623757,
'start': 18,
'word': 'Dermatologie.'}]
avg
[{'end': 32,
'entity_group': 'ORG',
'score': 0.3619019,
'start': 18,
'word': 'Dermatologie.'}]
max
[]
```
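Whatever the aggregation strategy, the `start`/`end` offsets in the outputs above index into the original string, so the surface form — interior space included — can be recovered by slicing the text (a sketch; the helper name is mine):

```python
def words_from_offsets(text, entities):
    """Recover each entity's surface form from its character offsets,
    keeping interior spaces that token fusion may drop."""
    return [text[e["start"]:e["end"]].strip() for e in entities]

text = "Bonjour,je suis le docteur Brice Saintclair"
entities = [{"entity_group": "PER", "start": 26, "end": 43}]
print(words_from_offsets(text, entities))  # → ['Brice Saintclair']
```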
### Expected behavior
Given the non-aggregated results, it seems that there are 2 bugs:
- 1/ The space between `Brice Saintclair` is omitted when the tokens are fused by any aggregation strategy that is not "simple". I would expect the space to remain given that it's part of the tagged token.
- 2/ The period after "Dermatologie" is fused with it. It makes the whole word be classified as "0" with `max`. I would expect the period to be counted as outside the word given that it is its own token. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27876/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27875 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27875/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27875/comments | https://api.github.com/repos/huggingface/transformers/issues/27875/events | https://github.com/huggingface/transformers/pull/27875 | 2,029,519,080 | PR_kwDOCUB6oc5hW1FC | 27,875 | Fix for stochastic depth decay rule in the TimeSformer implementation | {
"login": "atawari",
"id": 26859544,
"node_id": "MDQ6VXNlcjI2ODU5NTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/atawari",
"html_url": "https://github.com/atawari",
"followers_url": "https://api.github.com/users/atawari/followers",
"following_url": "https://api.github.com/users/atawari/following{/other_user}",
"gists_url": "https://api.github.com/users/atawari/gists{/gist_id}",
"starred_url": "https://api.github.com/users/atawari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/atawari/subscriptions",
"organizations_url": "https://api.github.com/users/atawari/orgs",
"repos_url": "https://api.github.com/users/atawari/repos",
"events_url": "https://api.github.com/users/atawari/events{/privacy}",
"received_events_url": "https://api.github.com/users/atawari/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"tagging @amyeroberts as the change is in the vision module.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27875). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | Fixing typo to correct the stochastic depth decay rule
# What does this PR do?
The implementation of the `drop_path` module in `TimesformerLayer` uses a constant `config.drop_path` as opposed to the stochastic depth decay expected - see comments [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/timesformer/modeling_timesformer.py#L305) and the authors' implementation [here](https://github.com/facebookresearch/TimeSformer/blob/a5ef29a7b7264baff199a30b3306ac27de901133/timesformer/models/vit.py#L205)
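For reference, the decay rule being restored here is a linear ramp over layer depth (sketched below following the common timm-style convention; the helper name is mine, not part of the library): layer 0 gets rate 0 and the last layer gets the full configured rate.

```python
def stochastic_depth_rates(drop_path_rate, num_layers):
    """Linear stochastic-depth decay: rate_i = drop_path_rate * i / (L - 1),
    so shallow layers are almost never dropped and the final layer is
    dropped with the full configured probability."""
    if num_layers == 1:
        return [drop_path_rate]
    return [drop_path_rate * i / (num_layers - 1) for i in range(num_layers)]
```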
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@amyeroberts
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27875/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27875",
"html_url": "https://github.com/huggingface/transformers/pull/27875",
"diff_url": "https://github.com/huggingface/transformers/pull/27875.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27875.patch",
"merged_at": 1702311631000
} |
https://api.github.com/repos/huggingface/transformers/issues/27874 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27874/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27874/comments | https://api.github.com/repos/huggingface/transformers/issues/27874/events | https://github.com/huggingface/transformers/issues/27874 | 2,029,450,995 | I_kwDOCUB6oc549vbz | 27,874 | When using HF trainer + PEFT + DeepSpeed ZeRO3, there's only hacky way to save the base model | {
"login": "yundai424",
"id": 43726198,
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yundai424",
"html_url": "https://github.com/yundai424",
"followers_url": "https://api.github.com/users/yundai424/followers",
"following_url": "https://api.github.com/users/yundai424/following{/other_user}",
"gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yundai424/subscriptions",
"organizations_url": "https://api.github.com/users/yundai424/orgs",
"repos_url": "https://api.github.com/users/yundai424/repos",
"events_url": "https://api.github.com/users/yundai424/events{/privacy}",
"received_events_url": "https://api.github.com/users/yundai424/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Gently pinging @pacman100 and @younesbelkada ",
"i will let @pacman100 answer on this as he is more familiar than I am on DS, @yundai424 can you try on latest transformers and peft ? `pip install -U transformers peft`",
"@younesbelkada that gives the same result. Though I found https://huggingface.co./docs/accelerate/usage_guides/deepspeed#saving-and-loading gives a more official solution (i.e. using accelerate `unwrap_model` so if this is the ultimate solution i can close this issue :D"
] | 1,701 | 1,706 | null | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.1
- Platform: Linux-3.10.0-1160.102.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.12
- Huggingface_hub version: 0.17.1
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
deepspeed: HF Trainer/Accelerate: @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Train any PEFT model with DeepSpeed ZeRO3. For simplicity, here is a sample script that uses `trl.SFTTrainer` to fine-tune on the alpaca dataset:
```python
import transformers
import datasets
import trl
from dataclasses import dataclass, field
import peft
import torch
import time
import callbacks


@dataclass
class CustomArguments():
    model_path: str
    data_path: str = field(default="alpaca_data.json")
    max_seq_length: int = field(default=512)
    lora: bool = field(default=False)
    lora_r: int = field(default=8)
    lora_alpha: int = field(default=32)
    lora_dropout: float = field(default=0.1)


def formatting_func(example):
    output_texts = []
    for i in range(len(example['instruction'])):
        output_texts.append(f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{example["instruction"][i]}
### Input:
{example["input"][i]}
### Response:
{example["output"][i]}""")
    return output_texts


def main():
    parser = transformers.HfArgumentParser((transformers.TrainingArguments, CustomArguments))
    training_args, custom_args = parser.parse_args_into_dataclasses()
    dataset = datasets.load_dataset("json", data_files=custom_args.data_path,
                                    split=['train'])[0].train_test_split(test_size=0.2, shuffle=True)
    dataset_train, dataset_eval = dataset['train'], dataset['test']
    if torch.distributed.get_rank() == 0:
        print(custom_args, training_args)
    model = transformers.AutoModelForCausalLM.from_pretrained(custom_args.model_path,
                                                              trust_remote_code=True,
                                                              use_cache=False)
    peft_config = peft.LoraConfig(task_type=peft.TaskType.CAUSAL_LM,
                                  inference_mode=False,
                                  r=custom_args.lora_r,
                                  lora_alpha=custom_args.lora_alpha,
                                  lora_dropout=custom_args.lora_dropout) if custom_args.lora else None
    tokenizer = transformers.AutoTokenizer.from_pretrained(custom_args.model_path)
    tokenizer.pad_token = tokenizer.eos_token
    trainer = trl.SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset_train,
        eval_dataset=dataset_eval,
        formatting_func=formatting_func,
        max_seq_length=custom_args.max_seq_length,
        peft_config=peft_config,
        args=training_args,
    )
    trainer.train()
    tokenizer.save_pretrained(training_args.output_dir)
    trainer.save_model()
    model.base_model.save_pretrained(training_args.output_dir)


if __name__ == "__main__":
    main()
```
If we run it with DS ZeRO3, this line `model.base_model.save_pretrained(training_args.output_dir)` will only save out a tiny model bin (several megabytes). I believe that's because when initializing with ZeRO3, the original model's `state_dict` will mostly contain [empty torch tensors](https://github.com/microsoft/DeepSpeed/blob/9dfb06de36bb29293b1e94dc1e48d6f2adf54d2c/deepspeed/runtime/zero/partition_parameters.py#L534), except for the layer-norm stats, while the actual weights live in DeepSpeed-internal variables.
There's certainly a hack to work around:
```python
import collections

state_dict = trainer.accelerator.get_state_dict(trainer.deepspeed)
if torch.distributed.get_rank() == 0:
    renamed_state_dict = collections.OrderedDict()
    for key, value in state_dict.items():
        # str.removeprefix trims the literal prefix; str.lstrip would strip any of its characters
        renamed_state_dict[key.removeprefix("base_model.model.")] = value
    model.base_model.save_pretrained(training_args.output_dir, state_dict=renamed_state_dict)
```
to get the original full state dict of the lora model by `trainer.accelerator.get_state_dict(trainer.deepspeed)`, and then rename the keys in the state dict to trim the `base_model.model.` path so that the base model can be accessed.
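The renaming step can be sketched as a small standalone helper (hypothetical keys for illustration; note that `str.removeprefix`, available since Python 3.9, trims the literal prefix, whereas `str.lstrip` would strip any of its characters):

```python
import collections

def strip_peft_prefix(state_dict, prefix="base_model.model."):
    """Return a copy of state_dict with the PEFT wrapper prefix trimmed from each key."""
    renamed = collections.OrderedDict()
    for key, value in state_dict.items():
        renamed[key.removeprefix(prefix)] = value
    return renamed
```

Passing the result as `state_dict=` to `save_pretrained` then writes out weights under the names the base model expects.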
(We understand that saving the base model is not strictly necessary: as long as the PEFT model is saved, reloading it will also load the base model specified by `"base_model_name_or_path"` in `adapter_config.json`. However, in our production setting the base model path is not always available and may be cleaned up from time to time, so a copy of the base model is needed alongside the PEFT model.)
### Expected behavior
There may be a public API to get the full state dict of the base model that will work with ZeRO3, might be implemented in a way similar to the hack we're having now | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27874/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27873 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27873/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27873/comments | https://api.github.com/repos/huggingface/transformers/issues/27873/events | https://github.com/huggingface/transformers/pull/27873 | 2,029,321,430 | PR_kwDOCUB6oc5hWJQa | 27,873 | [Proposal, open for discussion] Better way of extracting hidden states | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Yep I like this optimisation, non breaking overall ",
"@ArthurZucker it is a breaking change in its current state, since `out_indices` currently defaults to the last stage index if the user doesn't specify them (think @amyeroberts added that [here](https://github.com/huggingface/transformers/blob/0410a29a2d5c798b2c0c1ca28398e0ddcf3384f2/src/transformers/utils/backbone_utils.py#L77-L79)). So if we were to add this with backwards compatibility, we would have to update the default `out_indices` to all stages in case they are not specified.",
"We can set it to `-1` to return everything maybe but I mean we can make it BC! ",
"I'd like to have @amyeroberts's opinion on this one",
"> it is a breaking change in its current state, since out_indices currently defaults to the last stage index if the user doesn't specify them (think @amyeroberts added that [here](https://github.com/huggingface/transformers/blob/0410a29a2d5c798b2c0c1ca28398e0ddcf3384f2/src/transformers/utils/backbone_utils.py#L77-L79)).\r\n\r\nThis was just matching the logic that was originally implemented for the `out_features` (selecting the last layer). As you added this @NielsRogge you'll know the motivation for this better than me :) \r\n\r\nAs it stands this, I'm not in favour of this as this requires adding in backbone API / logic into standard model APIs. This is essentially making things leaky: why do I need to know about `out_indices` to get my hidden states if I'm not loading a backbone?\r\n\r\nMoreover, this is going to break a tonne of stuff, as users who have created checkpoints which are not backbones will still have `out_indices` set in the model config. This isn't easy to rectify: how would we know if the values in the config are what the user wanted e.g. just the last hidden state, or it just happened to be the default when the config was created? \r\n\r\nIt introduces inconsistencies in our models forward passes, which makes the code harder to understand and is tying non-backbone logic to an API which still isn't 100% stable at the moment. \r\n\r\nAn alternative approach would be to have a different argument in the config which defaults to all the layers but then can be overridden by the config's `out_indices` when loading a backbone. ",
"Actually, not another config parameter - because then the source of truth isn't clear and behaviour for the user can be unexpected.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Currently, our `AutoBackbone` classes allow users to extract specific feature maps from a given vision model. For example:
```
from transformers import ConvNextBackbone
import torch
model = ConvNextBackbone.from_pretrained("facebook/convnext-small-224", out_indices=[0,1,2,3])
pixel_values = torch.randn(1, 3, 224, 224)
feature_maps = model(pixel_values)
for i in feature_maps:
    print(i.shape)
```
However, they currently [extract all intermediate hidden states](https://github.com/huggingface/transformers/blob/main/src/transformers/models/convnext/modeling_convnext.py#L531), store them in memory, and return the ones required by the user. This is not efficient; we should keep in memory only the activations the user actually requires.
This PR proposes to return only the hidden states specified by `config.out_indices` when the user sets `output_hidden_states=True`. However, this is not backwards compatible (by default we currently return all hidden states), so I'm open to suggestions on how we could improve this. Alternatively, we could make it backwards compatible by setting `out_indices` to all stages by default.
I think this could be an argument that is part of all configs, or at least vision encoders, which typically only require certain hidden states to be extracted.
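The proposed behaviour boils down to gating which activations are kept during the forward loop. A minimal, model-agnostic sketch (a hypothetical stage loop, not the actual ConvNext code):

```python
def forward_stages(x, stages, out_indices):
    """Run stages sequentially, keeping only the activations whose index is requested."""
    kept = []
    for idx, stage in enumerate(stages):
        x = stage(x)
        if idx in out_indices:
            kept.append(x)
    return kept
```

With `out_indices` covering all stages this matches today's behaviour; with fewer indices, the unneeded activations are simply never stored.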
Curious to hear opinions of @ArthurZucker @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27873/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27873",
"html_url": "https://github.com/huggingface/transformers/pull/27873",
"diff_url": "https://github.com/huggingface/transformers/pull/27873.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27873.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27872 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27872/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27872/comments | https://api.github.com/repos/huggingface/transformers/issues/27872/events | https://github.com/huggingface/transformers/pull/27872 | 2,029,159,599 | PR_kwDOCUB6oc5hVlum | 27,872 | Fix lr_scheduler in no_trainer training scripts | {
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Kindly ping @muellerzr ",
"Hi @muellerzr, thanks for the review! Just updated other no_trainer scripts which handle gradient accumulation by `with accelerator.accumulate(model)`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,706 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Hi!
I ran into unexpected behaviors with the LR scheduler when using gradient accumulation in a distributed environment within this script. I managed to fix it on my end with the following changes:
1. `AcceleratedScheduler` currently steps only when an actual optimizer update is performed (i.e., not during gradient accumulation). As a result, there's no need to multiply by `gradient_accumulation_steps` when setting up `num_warmup_steps` and `num_training_steps` for `get_scheduler`, since this is handled inside `AcceleratedScheduler`.
https://github.com/huggingface/accelerate/blob/6a4857fec2bbae83880014cfd834d8a3e22de68b/src/accelerate/scheduler.py#L60-L64
2. In distributed settings, the scheduler steps `accelerator.num_processes` times per update, so `num_warmup_steps` must be multiplied by `accelerator.num_processes` here to guarantee a proper warm-up.
3. When handling distributed training and setting `max_train_steps` for the total number of training steps instead of `num_train_epochs`, the scheduler's `num_training_steps` should be multiplied by `accelerator.num_processes` as well, to ensure the correct number of update steps.
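The arithmetic in the points above can be summarised in a tiny helper (a hedged sketch of the intended scaling, not Accelerate's actual implementation; the names are illustrative):

```python
def scheduler_step_counts(max_train_steps, num_warmup_steps, num_processes):
    """Warmup/total step counts to pass to get_scheduler.

    gradient_accumulation_steps is deliberately absent: AcceleratedScheduler
    already skips its step() calls during accumulation, and in distributed
    runs it steps once per process for every optimizer update.
    """
    return num_warmup_steps * num_processes, max_train_steps * num_processes
```

For example, with `max_train_steps=1000`, `num_warmup_steps=100` and 4 processes, the scheduler would be configured with 400 warmup steps and 4000 total steps.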
I could also update other scripts facing a similar issue, if this modification is validated :)
Thanks in advance!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @muellerzr @pacman100 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27872/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27872",
"html_url": "https://github.com/huggingface/transformers/pull/27872",
"diff_url": "https://github.com/huggingface/transformers/pull/27872.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27872.patch",
"merged_at": 1705933338000
} |
https://api.github.com/repos/huggingface/transformers/issues/27871 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27871/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27871/comments | https://api.github.com/repos/huggingface/transformers/issues/27871/events | https://github.com/huggingface/transformers/pull/27871 | 2,029,147,230 | PR_kwDOCUB6oc5hVjAg | 27,871 | Add serialization logic to pytree types | {
"login": "angelayi",
"id": 10901756,
"node_id": "MDQ6VXNlcjEwOTAxNzU2",
"avatar_url": "https://avatars.githubusercontent.com/u/10901756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/angelayi",
"html_url": "https://github.com/angelayi",
"followers_url": "https://api.github.com/users/angelayi/followers",
"following_url": "https://api.github.com/users/angelayi/following{/other_user}",
"gists_url": "https://api.github.com/users/angelayi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/angelayi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angelayi/subscriptions",
"organizations_url": "https://api.github.com/users/angelayi/orgs",
"repos_url": "https://api.github.com/users/angelayi/repos",
"events_url": "https://api.github.com/users/angelayi/events{/privacy}",
"received_events_url": "https://api.github.com/users/angelayi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @angelayi Thank you for opening a PR. I am not really familiar with this topic, so here are a few questions:\r\n\r\n> To ensure that the serialized name is consistent and doesn't change often\r\n\r\nCould you show us an example that we can see the serialized name being changed (without the change in this PR)?\r\n\r\nAnd could you share a documentation (I guess it is pytorch doc) that providing a serialized type name is a good/recommended practice name and why.\r\n\r\nFinally, an example demonstrate that the change in this PR does its job as described.\r\n\r\nThank you in advance!\r\n\r\n",
"@ydshieh Sorry for the delay! The PR context changed a little, but I added more information to the PR summary, and added a test. Please let me know if you need any more clarifications!",
"Hi @angelayi \r\n\r\nThanks for the update. I would still love to see a code snippet (that we can run easily) in action that address the following\r\n\r\n> Could you show us an example that we can see the serialized name being changed (without the change in this PR)?\r\n\r\n🙏 ",
"Hi @ydshieh, I updated the PR so that comment is no longer really relevant!",
"bump @ydshieh ",
"Hi @angelayi .\r\n\r\nAlthough I get the idea overall, and it LGTM, I still have a few questions and comments (like previously) and would love to see more detailed explanations.\r\n\r\n- I still don't get why this change is `necessary`:\r\n - without this change, would we get some BC/FC issue, say a graph created with old torch version could not be used with newer torch version?\r\n - if this is the case, could you elaborate what the situation(s) is/are?\r\n- And since this (the code content been added in this PR) is not something (very) public that a lot of users will use (it is more internal), I personally would appreciate a lot if an example that uses `torch.export` (I assume this is more public) that you mentioned in the PR description to demonstrate the situation (instead of just using `pytree.treespec_dumps` - it's fine to use this in the testing, but would help a lot to put an example in the PR description using `torch.export`).\r\n\r\n\r\nLooking forward to your comments !",
"@ydshieh \r\n\r\n> I still don't get why this change is necessary:\r\n\r\nWithout this change, we cannot serialize exported programs for models with outputs of type `ModelOutput`, aka most HuggingFace models. We would like for users to be able to save their exported programs and load it in another time.\r\n\r\n> I personally would appreciate a lot if an example that uses torch.export\r\n\r\nI added an example to the PR description which exports AlbertMaskedLM, saves, and loads it. Under the hood in torch.export.save, we use pytree.treespec_dumps to save the treespec, and in torch.export.load, we use pytree.treespec_loads to load the treespec. I also linked an issue where some of our benchmarks are failing due to not having pytree serialization support.",
"Thank you a lot @angelayi I will check the example ❤️ .\r\n",
"review this today",
"Hi! I reviewed the change along with the example you provided, I have one major question and another minor one.\r\n\r\nMinor one: So there is no way to make `torch.export.save` work with torch < 2.2 for HF models?\r\n\r\n### Major issue:\r\n\r\nA variable `SERIALIZED_CLASS_TO_PYTHON_CLASS` is created and used. It works well with your script as both saving and loading are done in the same python process (which uses a buffer).\r\n\r\nHowever, despite I don't have much experience on this topic, I think a common scenario/use case is: A user saves the exported program to a file. And later on, they will load the file, but **in another python process**. \r\n\r\nSo the 2 scripts (saving/loading) will give `KeyError: 'transformers.modeling_outputs.MaskedLMOutput'` (see the full log at the end).\r\n\r\n**Question**: Do you have any idea how to address this common use case?\r\n\r\n-------------------------------------------------------------------------------------------------------------------------\r\n\r\n#### saving\r\n```python\r\nimport torch\r\nimport io\r\nfrom transformers import AlbertForMaskedLM\r\nfrom transformers import file_utils\r\n\r\nmodel_cls = AlbertForMaskedLM\r\nmodel_config = model_cls.config_class()\r\nmodel = model_cls(model_config)\r\nmodel.to(\"cuda\", torch.float32)\r\n\r\ninput_dict = {\"input_ids\": torch.randint(0, 30000, (1, 512), device=\"cuda\", dtype=torch.int64, requires_grad=False)}\r\nep = torch.export.export(model, (), input_dict)\r\n\r\nfn = 'my_exported'\r\ntorch.export.save(ep, fn)\r\n```\r\n\r\n#### loading\r\n```python\r\nimport torch\r\nimport io\r\nfrom transformers import AlbertForMaskedLM\r\nfrom transformers import file_utils\r\n\r\nmodel_cls = AlbertForMaskedLM\r\nmodel_config = model_cls.config_class()\r\nmodel = model_cls(model_config)\r\nmodel.to(\"cuda\", torch.float32)\r\n\r\ninput_dict = {\"input_ids\": torch.randint(0, 30000, (1, 512), device=\"cuda\", dtype=torch.int64, requires_grad=False)}\r\n\r\nfn = 
'my_exported'\r\nloaded_ep = torch.export.load(fn)\r\n\r\n\r\ninput_dict = {\"input_ids\": torch.randint(0, 30000, (1, 512), device=\"cuda\", dtype=torch.int64, requires_grad=False)}\r\nassert(torch.allclose(model(**input_dict).logits, loaded_ep(**input_dict).logits))\r\n```\r\n\r\n#### error log\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"temp3.py\", line 14, in <module>\r\n loaded_ep = torch.export.load(buffer)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/export/__init__.py\", line 582, in load\r\n return load(\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_export/__init__.py\", line 1079, in load\r\n ep = deserialize(artifact)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_export/serde/serialize.py\", line 1714, in deserialize\r\n ExportedProgramDeserializer(expected_opset_version)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_export/serde/serialize.py\", line 1565, in deserialize\r\n GraphModuleDeserializer()\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_export/serde/serialize.py\", line 1275, in deserialize\r\n module_call_graph = self.deserialize_module_call_graph(serialized_graph_module.module_call_graph)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_export/serde/serialize.py\", line 1513, in deserialize_module_call_graph\r\n return [\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_export/serde/serialize.py\", line 1516, in <listcomp>\r\n signature=self.deserialize_module_call_signature(entry.signature) if entry.signature else None,\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_export/serde/serialize.py\", line 1509, in deserialize_module_call_signature\r\n out_spec=treespec_loads(module_call_signature.out_spec),\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/utils/_pytree.py\", line 991, in treespec_loads\r\n return _SUPPORTED_PROTOCOLS[protocol].json_to_treespec(json_schema)\r\n File 
\"/usr/local/lib/python3.8/dist-packages/torch/utils/_pytree.py\", line 953, in _json_to_treespec\r\n context = serialize_node_def.from_dumpable_context(json_schema[\"context\"])\r\n File \"/transformers/src/transformers/utils/generic.py\", line 465, in _model_output_from_dumpable_context\r\n python_class = SERIALIZED_CLASS_TO_PYTHON_CLASS[serialized_class]\r\nKeyError: 'transformers.modeling_outputs.MaskedLMOutput'\r\n```\r\n ",
"@ydshieh \r\n\r\nThanks for the detailed review!\r\n\r\n> So there is no way to make torch.export.save work with torch < 2.2 for HF models?\r\n\r\nNo sorry, torch.export.save won't work with torch < 2.2 for HF models. But also, torch.export.save was introduced in 2.1.\r\n\r\n> A user saves the exported program to a file. And later on, they will load the file, but in another python process.\r\n\r\nThanks for catching this! I made a minor change to how the flattening/unflattening is implemented, and this issue should no longer occur. Please let me know what you think!",
"Hi @angelayi \r\n\r\nThanks a lot! The save/load works with the latest change. But the last line\r\n\r\n> assert(torch.allclose(model(**input_dict).logits, loaded_ep(**input_dict).logits))\r\n\r\nin `loading` script fails as the diff is `> 1.0`\r\n\r\n(I am using this block at the end)\r\n```python\r\ndiff = model(**input_dict).logits - loaded_ep(**input_dict).logits\r\n\r\ndiff = torch.amax(torch.abs(diff))\r\nprint(diff)\r\n```\r\n\r\nHowever, if we put the saving/loading logic in the same script, the diff is `0.0`.\r\n\r\nI am feeling something very strange happens.\r\n\r\nLet me know if you are able to reproduce with the scripts.\r\n\r\nThe bash output looks like\r\n\r\n```bash\r\nroot@e6558c895956:/transformers# python3 temp1.py\r\ntensor(0., device='cuda:0', grad_fn=<AmaxBackward0>)\r\n\r\nroot@e6558c895956:/transformers# python3 temp2.py\r\ntensor(1.3523, device='cuda:0', grad_fn=<AmaxBackward0>)\r\n```",
"sorry, it's my bad! Let me check again.\r\n\r\n(I was using 2 models with different initialized weights)",
"thanks so much for the detailed reviews!!",
"@amyeroberts do you when we could get this merged into the repo? ",
"@amyeroberts I should've addressed the issues you mentioned, except I'm failing the formatting CI job. I tried running `make quality --fix` but `--fix` does not seem to be a valid option 😅 Do you know how I could fix this? ",
"@angelayi To apply the quality changes you can run `make fixup` and push the changes applies",
"@amyeroberts ah it seemed to work now! thanks!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27871). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks so much for all the help and reviews!"
] | 1,701 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
[`torch.export`](https://pytorch.org/docs/main/export.html) produces a graph representation of the program which contains a pytree `TreeSpec` specifying how to flatten the inputs from whatever structure they were in before to a flattened list of inputs to pass to the graph, and unflatten the outputs from the graph to the original structure that eager mode PyTorch produces. We would like to serialize the artifact from `torch.export` and later pass it to a runtime. This means that we need a good BC/FC surface.
The pytree `TreeSpec` contains a structure telling us what the original input/output looked like. For example, if we had an input containing a tuple of a class `InputClass`, the `TreeSpec` would look something like: `TreeSpec(tuple, None, [TreeSpec[InputClass, None, []], TreeSpec[InputClass, None, []]])`. We want to serialize this into a JSON string consisting of the serialized type, serialized context, and recursively serialized children. Since we want to convert to a JSON format, only JSON types are allowed (strings, ints...). Therefore in this PR you can see logic to serialize a Python type to a fully qualified string name.
Note that this field only exists starting from PyTorch version 2.2.
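The serialized type name is essentially the fully qualified import path of the class; a minimal sketch of the idea (not the exact `torch.utils._pytree` code):

```python
def serialized_type_name(cls):
    """Stable, human-readable serialization key for a pytree node type."""
    return f"{cls.__module__}.{cls.__qualname__}"
```

For instance, `MaskedLMOutput` serializes to the string `"transformers.modeling_outputs.MaskedLMOutput"`.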
Here's an example:
```
import torch
import io
from transformers import AlbertForMaskedLM
from transformers import file_utils
model_cls = AlbertForMaskedLM
model_config = model_cls.config_class()
model = model_cls(model_config)
model.to("cuda", torch.float32)
input_dict = {"input_ids": torch.randint(0, 30000, (1, 512), device="cuda", dtype=torch.int64, requires_grad=False)}
ep = torch.export.export(model, (), input_dict)
print(ep.module_call_graph[0].signature.in_spec)
# TreeSpec(tuple, None, [TreeSpec(tuple, None, []), TreeSpec(dict, ['input_ids'], [*])])
print(ep.module_call_graph[0].signature.out_spec)
# TreeSpec(MaskedLMOutput, (<class 'transformers.modeling_outputs.MaskedLMOutput'>, ['logits']), [*])
buffer = io.BytesIO()
torch.export.save(ep, buffer)
buffer.seek(0)
loaded_ep = torch.export.load(buffer)
print(loaded_ep.module_call_graph[0].signature.in_spec)
# TreeSpec(tuple, None, [TreeSpec(tuple, None, []), TreeSpec(dict, ['input_ids'], [*])])
print(loaded_ep.module_call_graph[0].signature.out_spec)
# TreeSpec(MaskedLMOutput, (<class 'transformers.modeling_outputs.MaskedLMOutput'>, ['logits']), [*])
input_dict = {"input_ids": torch.randint(0, 30000, (1, 512), device="cuda", dtype=torch.int64, requires_grad=False)}
assert(torch.allclose(model(**input_dict).logits, loaded_ep(**input_dict).logits))
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh who seems to have reviewed pytree-related stuff before :) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27871/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27871",
"html_url": "https://github.com/huggingface/transformers/pull/27871",
"diff_url": "https://github.com/huggingface/transformers/pull/27871.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27871.patch",
"merged_at": 1706521280000
} |
https://api.github.com/repos/huggingface/transformers/issues/27870 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27870/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27870/comments | https://api.github.com/repos/huggingface/transformers/issues/27870/events | https://github.com/huggingface/transformers/issues/27870 | 2,029,082,336 | I_kwDOCUB6oc548Vbg | 27,870 | The model 'T5ForConditionalGeneration' is not supported for text-generation. | {
"login": "nickjtay",
"id": 52182102,
"node_id": "MDQ6VXNlcjUyMTgyMTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/52182102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickjtay",
"html_url": "https://github.com/nickjtay",
"followers_url": "https://api.github.com/users/nickjtay/followers",
"following_url": "https://api.github.com/users/nickjtay/following{/other_user}",
"gists_url": "https://api.github.com/users/nickjtay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickjtay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickjtay/subscriptions",
"organizations_url": "https://api.github.com/users/nickjtay/orgs",
"repos_url": "https://api.github.com/users/nickjtay/repos",
"events_url": "https://api.github.com/users/nickjtay/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickjtay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1771187924,
"node_id": "MDU6TGFiZWwxNzcxMTg3OTI0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Core:%20Pipeline",
"name": "Core: Pipeline",
"color": "FF7066",
"default": false,
"description": "Internals of the library; Pipeline."
}
] | closed | false | null | [] | [
"Hey! The pipeline to use is `\"text2text-generation\"` for T5. Yeah not super intuitive 😓 ",
"Hey @nickjtay I am curious about why did you choose `text-generation` Please point me out to the source. I'll work on it. ",
"Thank you I will use `text2text-generation`",
"> Hey @nickjtay I am curious about why did you choose `text-generation` Please point me out to the source. I'll work on it.\r\n\r\nI didn't see it in documentation, so I made my best guess."
] | 1,701 | 1,702 | 1,702 | NONE | null | Immediately below is taken directly from huggingface's library documentation and is included in my script:
https://huggingface.co./docs/transformers/model_doc/flan-t5
```
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
```
However, when I run the full script within the context of the pipeline function, `pipeline` returns an error stating that the model is incompatible. T5 is a text-generation model, but it looks like the pipeline function is doing something with which T5 is incompatible. My question is: how is T5 supposed to be used for a problem such as this, since there is only one documented example of text generation? Ultimately, I would like to get the T5 model working locally, since it is small enough to run on my machine.
Script:
```
model_basename = "model"
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
DEFAULT_SYSTEM_PROMPT = """
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
""".strip()
def generate_prompt(prompt: str, system_prompt: str = DEFAULT_SYSTEM_PROMPT) -> str:
return f"""
[INST] <<SYS>>
{system_prompt}
<</SYS>>
{prompt} [/INST]
""".strip()
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
text_pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=1024,
temperature=0,
top_p=0.95,
repetition_penalty=1.15,
streamer=streamer,
)
```
Error message:
```
The model 'T5ForConditionalGeneration' is not supported for text-generation. Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'LlamaForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'FalconForCausalLM', 'FuyuForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MistralForCausalLM', 'MptForCausalLM', 'MusicgenForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PersimmonForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'WhisperForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27870/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27869 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27869/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27869/comments | https://api.github.com/repos/huggingface/transformers/issues/27869/events | https://github.com/huggingface/transformers/issues/27869 | 2,029,053,734 | I_kwDOCUB6oc548Ocm | 27,869 | Adding truncation to text-generation pipeline | {
"login": "thedamnedrhino",
"id": 8396998,
"node_id": "MDQ6VXNlcjgzOTY5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8396998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thedamnedrhino",
"html_url": "https://github.com/thedamnedrhino",
"followers_url": "https://api.github.com/users/thedamnedrhino/followers",
"following_url": "https://api.github.com/users/thedamnedrhino/following{/other_user}",
"gists_url": "https://api.github.com/users/thedamnedrhino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thedamnedrhino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thedamnedrhino/subscriptions",
"organizations_url": "https://api.github.com/users/thedamnedrhino/orgs",
"repos_url": "https://api.github.com/users/thedamnedrhino/repos",
"events_url": "https://api.github.com/users/thedamnedrhino/events{/privacy}",
"received_events_url": "https://api.github.com/users/thedamnedrhino/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Hey! Sounds good, actually passing tokenizer arguments would be great in general, and better to have full args than kwargs. \r\nOne of the issue with this is that it does not really go well with the hierarchy because this means we cannot pass custom argument of custom tokenizers, but we could just have the args that are in the __call__ + some custom_tokenizer_kwargs for specific tokenizers",
"Are you thinking something like this?\r\n```\r\noutputs = pipeline(docs, truncation=True, ..., custom_tokenizer_kwargs={'eos_token_id': ...})\r\n```\r\n\r\nWhich tokenizer args should we add as args to this? Thinking about [not] making the function prototype too big...",
"Prototyping might be better than just supporting any kwargs! The relevant arguments depend on the task, text generation for example would need padding, max_length, add_special_tokens etc. ",
"👍 . Will push a PR next week!"
] | 1,701 | 1,704 | null | CONTRIBUTOR | null | ### Feature request
Passing along the `truncation` argument from the `text-generation` pipeline to the tokenizer.
### Motivation
If you're using a `text-generation` pipeline with input text from the user, it is likely that the input text is too long. This doesn't cause problems for some models, e.g. `t5`-based ones, but for other models, e.g. `BERT`-based ones, it raises an `IndexError`.
The workaround is to use the tokenizer and model manually and omit the pipeline. #25994 solves this issue for `fill-mask` pipelines.
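In effect, the requested feature is just forwarding a dict of tokenizer arguments from the pipeline call down to the tokenizer call. A toy mock of that plumbing, runnable without `transformers` (`FakeTokenizer` and `run_pipeline` are illustrative stand-ins, not the real API):

```python
class FakeTokenizer:
    """Toy stand-in tokenizer: one 'token' per whitespace-separated word."""
    def __call__(self, text, truncation=False, max_length=None):
        ids = list(range(len(text.split())))
        if truncation and max_length is not None:
            ids = ids[:max_length]
        return {"input_ids": ids}

def run_pipeline(texts, tokenizer, tokenizer_kwargs=None):
    # The requested plumbing: forward user-supplied kwargs to the tokenizer call.
    tokenizer_kwargs = tokenizer_kwargs or {}
    return [tokenizer(t, **tokenizer_kwargs) for t in texts]

out = run_pipeline(["one two three four five"], FakeTokenizer(),
                   tokenizer_kwargs={"truncation": True, "max_length": 3})
print(len(out[0]["input_ids"]))  # 3: the over-long input is capped before the model sees it
```

With the real library the same idea would surface as a `tokenizer_kwargs` dict on `__call__()`, as proposed under "Your contribution" below.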
### Your contribution
#27683. I added `tokenizer_kwargs` to the text-generation pipeline's `__call__()`. This is similar to #26234, which did this for `fill-mask` pipelines. I personally think it's better to add `truncation` and `max_length` as top-level args to the pipeline's constructor and pass them to the tokenizer when it's time.
I think a refactoring of text based pipelines wouldn't hurt to unify tokenizer calls so you don't have to code this feature for each pipeline separately. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27869/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27869/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27868 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27868/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27868/comments | https://api.github.com/repos/huggingface/transformers/issues/27868/events | https://github.com/huggingface/transformers/pull/27868 | 2,028,997,408 | PR_kwDOCUB6oc5hVCF7 | 27,868 | [⚠️ removed a default argument] Make `AttentionMaskConverter` compatible with `torch.compile(..., fullgraph=True)` | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Also, `AttentionMaskConverter` is not in the documentation so not really user-facing."
] | 1,701 | 1,702 | 1,702 | COLLABORATOR | null | As per title, fixes https://github.com/huggingface/transformers/issues/27789.
This issue is only for PyTorch 2.1 and has been fixed in torch nightly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27868/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27868",
"html_url": "https://github.com/huggingface/transformers/pull/27868",
"diff_url": "https://github.com/huggingface/transformers/pull/27868.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27868.patch",
"merged_at": 1702028687000
} |
https://api.github.com/repos/huggingface/transformers/issues/27867 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27867/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27867/comments | https://api.github.com/repos/huggingface/transformers/issues/27867/events | https://github.com/huggingface/transformers/pull/27867 | 2,028,282,487 | PR_kwDOCUB6oc5hSjwh | 27,867 | Avoid class attribute _keep_in_fp32_modules being modified | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada "
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
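This PR guards against the pitfall named in the title: mutating a list stored as a class attribute, so the change leaks into every later instance and model load. The bug class can be reproduced in a few lines (`Model` and the module names are illustrative):

```python
class Model:
    _keep_in_fp32_modules = ["lm_head"]  # a list stored on the class, shared everywhere

a = Model()
a._keep_in_fp32_modules.append("wo")  # attribute lookup finds the CLASS list...
print(Model._keep_in_fp32_modules)    # ['lm_head', 'wo']: mutation leaked to the class

# Safe pattern: give the instance its own copy before modifying.
b = Model()
b._keep_in_fp32_modules = list(type(b)._keep_in_fp32_modules)
b._keep_in_fp32_modules.append("wi")
print(Model._keep_in_fp32_modules)    # still ['lm_head', 'wo']
```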
Another approach to #26433. Fix #25910 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27867/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27867",
"html_url": "https://github.com/huggingface/transformers/pull/27867",
"diff_url": "https://github.com/huggingface/transformers/pull/27867.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27867.patch",
"merged_at": 1701879584000
} |
https://api.github.com/repos/huggingface/transformers/issues/27866 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27866/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27866/comments | https://api.github.com/repos/huggingface/transformers/issues/27866/events | https://github.com/huggingface/transformers/issues/27866 | 2,028,079,212 | I_kwDOCUB6oc544ghs | 27,866 | Make `output_dir` optional in TrainingArguments | {
"login": "ChanderG",
"id": 6350377,
"node_id": "MDQ6VXNlcjYzNTAzNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6350377?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChanderG",
"html_url": "https://github.com/ChanderG",
"followers_url": "https://api.github.com/users/ChanderG/followers",
"following_url": "https://api.github.com/users/ChanderG/following{/other_user}",
"gists_url": "https://api.github.com/users/ChanderG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ChanderG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ChanderG/subscriptions",
"organizations_url": "https://api.github.com/users/ChanderG/orgs",
"repos_url": "https://api.github.com/users/ChanderG/repos",
"events_url": "https://api.github.com/users/ChanderG/events{/privacy}",
"received_events_url": "https://api.github.com/users/ChanderG/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @muellerzr sounds good to me wdyt? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | null | NONE | null | ### Feature request
Currently, there is only one required param when creating a TrainingArguments object - `output_dir`. HFTrainer manually creates an object with the default value "tmp_trainer" if no Args object is passed to it.
Instead, we should make even this one param optional in the TrainingArguments class (and use a default inside the class implementation).
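The change boils down to giving the field a default instead of leaving it required. A toy dataclass sketch (`Args` is a stand-in for the real `TrainingArguments`, not the actual class):

```python
from dataclasses import dataclass, field

@dataclass
class Args:
    """Toy stand-in for TrainingArguments."""
    # Today output_dir is a required field; the request is a sensible default like this:
    output_dir: str = field(default="tmp_trainer")

print(Args().output_dir)                   # tmp_trainer
print(Args(output_dir="runs").output_dir)  # runs
```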
### Motivation
This is useful when creating and passing TrainingArguments in other runners - e.g., trl/SFTTrainer. I would like sensible defaults for all params, so that I only specify the particular arguments I am interested in.
### Your contribution
I can open a PR, if this is of interest. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27866/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27865 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27865/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27865/comments | https://api.github.com/repos/huggingface/transformers/issues/27865/events | https://github.com/huggingface/transformers/pull/27865 | 2,027,990,316 | PR_kwDOCUB6oc5hRjww | 27,865 | Support PeftModel signature inspect | {
"login": "dancingpipi",
"id": 20511825,
"node_id": "MDQ6VXNlcjIwNTExODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/20511825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dancingpipi",
"html_url": "https://github.com/dancingpipi",
"followers_url": "https://api.github.com/users/dancingpipi/followers",
"following_url": "https://api.github.com/users/dancingpipi/following{/other_user}",
"gists_url": "https://api.github.com/users/dancingpipi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dancingpipi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dancingpipi/subscriptions",
"organizations_url": "https://api.github.com/users/dancingpipi/orgs",
"repos_url": "https://api.github.com/users/dancingpipi/repos",
"events_url": "https://api.github.com/users/dancingpipi/events{/privacy}",
"received_events_url": "https://api.github.com/users/dancingpipi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @amyeroberts for a second look."
] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
If we set `remove_unused_columns` to `True` while training a LoRA model, all dataset columns will be removed.
This is because the `_set_signature_columns_if_needed` function directly checks the signature of `self.model`. If `self.model` is a `PeftModel`, the signature will become `['args', 'kwargs']`, causing the valid columns in the dataset to be deleted.
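The root cause is easy to reproduce without PEFT: any wrapper whose `forward` delegates via `*args, **kwargs` hides the wrapped model's parameter names from `inspect.signature`. A minimal sketch with toy classes (standing in for the base model and `PeftModel`):

```python
import inspect

class BaseModel:
    def forward(self, input_ids=None, attention_mask=None, labels=None):
        return None

class Wrapper:
    """Toy stand-in for PeftModel: delegates forward() via *args/**kwargs."""
    def __init__(self, model):
        self.model = model
    def forward(self, *args, **kwargs):
        return self.model.forward(*args, **kwargs)

base = BaseModel()
print(list(inspect.signature(base.forward).parameters))
# ['input_ids', 'attention_mask', 'labels']
print(list(inspect.signature(Wrapper(base).forward).parameters))
# ['args', 'kwargs']  -> no dataset column matches, so every column is dropped
```

That is why the signature check needs to look through to the underlying base model rather than inspecting the wrapper.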
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerz @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27865/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27865",
"html_url": "https://github.com/huggingface/transformers/pull/27865",
"diff_url": "https://github.com/huggingface/transformers/pull/27865.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27865.patch",
"merged_at": 1702323011000
} |
https://api.github.com/repos/huggingface/transformers/issues/27864 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27864/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27864/comments | https://api.github.com/repos/huggingface/transformers/issues/27864/events | https://github.com/huggingface/transformers/issues/27864 | 2,027,948,396 | I_kwDOCUB6oc544Als | 27,864 | IndexError: index out of range in self when using text2text-generation pipeline with encoder-decoder model | {
"login": "ZimoLoveShuang",
"id": 32027313,
"node_id": "MDQ6VXNlcjMyMDI3MzEz",
"avatar_url": "https://avatars.githubusercontent.com/u/32027313?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZimoLoveShuang",
"html_url": "https://github.com/ZimoLoveShuang",
"followers_url": "https://api.github.com/users/ZimoLoveShuang/followers",
"following_url": "https://api.github.com/users/ZimoLoveShuang/following{/other_user}",
"gists_url": "https://api.github.com/users/ZimoLoveShuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZimoLoveShuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZimoLoveShuang/subscriptions",
"organizations_url": "https://api.github.com/users/ZimoLoveShuang/orgs",
"repos_url": "https://api.github.com/users/ZimoLoveShuang/repos",
"events_url": "https://api.github.com/users/ZimoLoveShuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZimoLoveShuang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,701 | 1,701 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27864/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27863 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27863/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27863/comments | https://api.github.com/repos/huggingface/transformers/issues/27863/events | https://github.com/huggingface/transformers/pull/27863 | 2,027,862,438 | PR_kwDOCUB6oc5hRJCK | 27,863 | Choose appropriate loss function (MSE if regression) during forward as the doc suggests so that audio models could do regression | {
"login": "nevikw39",
"id": 20489759,
"node_id": "MDQ6VXNlcjIwNDg5NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/20489759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nevikw39",
"html_url": "https://github.com/nevikw39",
"followers_url": "https://api.github.com/users/nevikw39/followers",
"following_url": "https://api.github.com/users/nevikw39/following{/other_user}",
"gists_url": "https://api.github.com/users/nevikw39/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nevikw39/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nevikw39/subscriptions",
"organizations_url": "https://api.github.com/users/nevikw39/orgs",
"repos_url": "https://api.github.com/users/nevikw39/repos",
"events_url": "https://api.github.com/users/nevikw39/events{/privacy}",
"received_events_url": "https://api.github.com/users/nevikw39/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"## Updated\r\n\r\nI also found other audio classification models, such as wav2vec 2.0, HuBERT, WavLM, etc., except Audio Spectrogram Transformer, have similar issues. Therefore, I mended them as well.\r\n\r\n~~After applying the patch, `make fixup` failed for `data2vec_audio` despite the fact that `data2vec_text` and `data2vec_vision` support MSE for regression. `make fix-copies` seem to solve the problem by removing the fix of `Data2VecAudioForAudioFrameClassification`.~~\r\n\r\n---\r\n\r\nP.S. When rebasing the main branch from upstream, I made some mistakes and accindentally merge the main branch into my one, which included others' commits into my PR. I reset the merge and pushed to my branch forcibly. Sorry for the mess.",
"## Updated Again\r\n\r\nI just found that the shape of logits used to compute MSE loss should be reshaped in different way rather than cross entropy.\r\n\r\nFor the copy issue reported by `make fixup`, I realized that in this case, it checked if `Data2VecAudioForAudioFrameClassification.forward() ` is identical to `Wav2Vec2ForAudioFrameClassification.forward()` since the former was marked as copied from the latter. The reason why the issue was reported was that I overlooked\r\n `Wav2Vec2ForAudioFrameClassification` previously.\r\n\r\nThis time, I was certainly sure that I executed `git pull --rebase` yet I included others' commits in my PR again... As a result, I had to push forcibly once more. I sincerely apologize to my naïve lack of practical git experience.",
"Thanks, I'll ping @ylacombe and @sanchit-gandhi for a first review 😉 ",
"> Thanks for these fixes @nevikw39 and apologies for the late review! The code looks perfect - in the style of the Transformers loss functions 🙌 May I request that we implement a test for at least one of these models to confirm that we get the correct loss in the case that `config.num_labels>1`? Otherwise, this PR looks good to go!\r\n\r\nHi HuggingFace team,\r\n\r\nApologies for that I was busy in recent weeks.\r\n\r\nI believe it's a great chance for me to learn to design Python unit testing. Could you please provide some kind instructions or examples for what the test should be? I checked existing testings and fount that only Whisper had tester for audio classification (with config `num_labels=2`), whereas Wav2Vec2, WavLM or HuBERT didn't.\r\n\r\nThank!",
"For testing I would recommend you to follow what is usually done in the code base so check that a loss is properly computed by simply passing dummy inputs and dummy labels! Make sure the CI is green as well an rebase on main as it hase been a while! 🤗 ",
"Hi HuggingFace team,\r\n\r\nI resolved the suggestion of code review and the CI tests, and synced with the master branch (by merging rather than rebasing though I don't know whether it is appropriate, yet hopefully the PR would be squashed into a single commit, right?).\r\n\r\nAs for the test, I tried to take a close look on current unit tests. Sanchit (6th Jan.) said that we need \"a test for at least one of these models to confirm that we get the correct loss in the case that `config.num_labels>1`\". I fount that inside `tests/models/whisper/test_modeling_whisper.py`, class `WhisperEncoderModelTest` and `WhisperEncoderModelTester` seems to test the `forward()` method of `WhisperForAudioClassification` with [config `num_labels=2`](../tree/main/tests/models/whisper/test_modeling_whisper.py#L2597). So do these existing tests suffice?\r\n\r\nFor the remaining models, I haven't figured out how the existing tests deal with `num_labels` config. It appears that these models aren't covered by tests of forward, are they? Could we just leave those models as is?\r\n\r\nThanks!",
"> As for the test, I tried to take a close look on current unit tests. Sanchit (6th Jan.) said that we need \"a test for at least one of these models to confirm that we get the correct loss in the case that config.num_labels>1\". I fount that inside tests/models/whisper/test_modeling_whisper.py, class WhisperEncoderModelTest and WhisperEncoderModelTester seems to test the forward() method of WhisperForAudioClassification with [config num_labels=2](https://github.com/huggingface/transformers/tree/main/tests/models/whisper/test_modeling_whisper.py#L2597). So do these existing tests suffice?\r\n\r\nyes it suffice if the `WhisperForAudioClassification` passes in that test! I can have a look and merge if it's alright! "
] | 1,701 | 1,707 | null | NONE | null | # What does this PR do?
The general documentation of `transformers` says that a classifier specializes in regression if `num_labels` in the config is set to 1. The doc-string comments of the `WhisperForAudioClassification` class also suggest that it does so. Nevertheless, the actual implementation of the `forward()` method always computes cross-entropy loss regardless of the config.
The issue is found in other audio classification models as well, such as wav2vec 2.0, HuBERT, WavLM, etc.
So this PR assigns the appropriate loss function to `loss_fct` in the `forward()` method of the audio classification model classes (mean squared error loss for regression), as the documentation suggests.
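The selection logic follows the usual `transformers` dispatch: infer the problem type from `num_labels` when it is unset, then pick the matching loss. A pure-Python sketch for illustration (`pick_loss_name` is a hypothetical helper, not the PR's actual code):

```python
def pick_loss_name(num_labels, problem_type=None, labels_are_int=True):
    """Hypothetical helper mirroring the usual transformers dispatch."""
    if problem_type is None:
        if num_labels == 1:
            problem_type = "regression"
        elif labels_are_int:
            problem_type = "single_label_classification"
        else:
            problem_type = "multi_label_classification"
    return {
        "regression": "MSELoss",
        "single_label_classification": "CrossEntropyLoss",
        "multi_label_classification": "BCEWithLogitsLoss",
    }[problem_type]

print(pick_loss_name(num_labels=1))  # MSELoss: regression, as the docs promise
print(pick_loss_name(num_labels=5))  # CrossEntropyLoss
```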
Fixes #27862
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- No, this PR fixed a small bug.
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- Yes of course!
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- I submitted an issue (#27862) earlier.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- Not necessary.
- [x] Did you write any new necessary tests?
- No.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
Hello @sanchit-gandhi, @patrickvonplaten, @anton-l, and @huggingface team, thanks for your efforts to bring Whisper to the awesome library. Please help review this PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27863/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27863",
"html_url": "https://github.com/huggingface/transformers/pull/27863",
"diff_url": "https://github.com/huggingface/transformers/pull/27863.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27863.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27862 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27862/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27862/comments | https://api.github.com/repos/huggingface/transformers/issues/27862/events | https://github.com/huggingface/transformers/issues/27862 | 2,027,809,668 | I_kwDOCUB6oc543euE | 27,862 | Audio Classification fails to do regression even though the documentation says it should under certain config | {
"login": "nevikw39",
"id": 20489759,
"node_id": "MDQ6VXNlcjIwNDg5NzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/20489759?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nevikw39",
"html_url": "https://github.com/nevikw39",
"followers_url": "https://api.github.com/users/nevikw39/followers",
"following_url": "https://api.github.com/users/nevikw39/following{/other_user}",
"gists_url": "https://api.github.com/users/nevikw39/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nevikw39/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nevikw39/subscriptions",
"organizations_url": "https://api.github.com/users/nevikw39/orgs",
"repos_url": "https://api.github.com/users/nevikw39/repos",
"events_url": "https://api.github.com/users/nevikw39/events{/privacy}",
"received_events_url": "https://api.github.com/users/nevikw39/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"pinging @ylacombe and @sanchit-gandhi ",
"Great catch @nevikw39 and many thanks for the PR - just left a review: https://github.com/huggingface/transformers/pull/27863#pullrequestreview-1806592151"
] | 1,701 | 1,706 | null | NONE | null | ### System Info
- `transformers` version: 4.35.1
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: True
- num_processes: 8
- machine_rank: 0
- num_machines: 2
- gpu_ids: all
- main_process_ip: 10.18.18.1
- main_process_port: 8080
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help?
Seems like @sanchit-gandhi would be of help when it comes to Whisper.
In fact, this issue can be fixed easily, and I have made it work on our machine by directly modifying the source code of the `transformers` library. Though I am going to create a pull request, I think I should still submit an issue here.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
#### Code Sample
The dataset used below is private due to its license, so anyone who wants to reproduce this might need to find a suitable dataset for audio regression.
```python
#!/home/nevikw/miniconda3/envs/ml-project/bin/python
from argparse import ArgumentParser
from random import randint
import warnings
from datasets import load_dataset, Audio, Value
from transformers import (
AutoFeatureExtractor,
AutoModelForAudioClassification,
TrainingArguments,
Trainer,
EarlyStoppingCallback,
)
import numpy as np
from sklearn.metrics import mean_squared_error
warnings.filterwarnings("ignore")
ap = ArgumentParser()
ap.add_argument("-m", "--base-model", type=str, default="openai/whisper-large-v3")
ap.add_argument("-d", "--sample-duration", type=int, default=30)
ap.add_argument("-b", "--batch-size", type=int, default=4)
ap.add_argument("-g", "--grad-accu-step", type=int, default=8)
args = ap.parse_args()
feature_extractor = AutoFeatureExtractor.from_pretrained(args.base_model)
preprocess = lambda examples: feature_extractor(
[i["array"][(n := randint(0, len(i["array"]) - (m := min(len(i["array"]), feature_extractor.sampling_rate*args.sample_duration)))) : n + m] for i in examples["audio"]],
sampling_rate=feature_extractor.sampling_rate,
do_normalize=True,
)
dataset = (
load_dataset("nevikw39/ADReSSo")
.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
.cast_column("mmse", Value("float"))
)
dataset["train"], dataset["valid"] = dataset["train"].train_test_split(.25).values()
mean = np.mean(dataset["train"]["mmse"])
std = np.std(dataset["train"]["mmse"])
encoded_dataset = (
dataset
.map(preprocess, remove_columns=["audio"], batched=True, load_from_cache_file=False)
.map(lambda batch: {"label": (np.array(batch["mmse"]) - mean) / std}, remove_columns=["label"], batched=True, load_from_cache_file=False)
)
model = AutoModelForAudioClassification.from_pretrained(args.base_model, num_labels=1)
training_args = TrainingArguments(
output_dir="models/" + args.base_model[args.base_model.index('/') + 1 :] + "_ADReSSo-MMSE",
evaluation_strategy="epoch",
save_strategy="epoch",
per_device_train_batch_size=args.batch_size,
per_device_eval_batch_size=args.batch_size*2,
gradient_accumulation_steps=args.grad_accu_step,
num_train_epochs=100,
warmup_ratio=.05,
logging_steps=10,
load_best_model_at_end=True,
metric_for_best_model="rmse",
greater_is_better=False,
push_to_hub_organization="NTHU-ML-2023-team19",
push_to_hub=False,
hub_private_repo=True,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=encoded_dataset["train"],
eval_dataset=encoded_dataset["valid"],
tokenizer=feature_extractor,
compute_metrics=lambda eval_pred: {
"rmse": mean_squared_error(eval_pred.label_ids, eval_pred.predictions, squared=False) * std,
},
callbacks=[EarlyStoppingCallback(10)],
)
trainer.train()
print(trainer.evaluate(encoded_dataset["test"]))
trainer.save_model("models/" + args.base_model[args.base_model.index('/') + 1 :] + "_ADReSSo-MMSE")
```
#### Error Message
```
Traceback (most recent call last):
File "/home/nevikw/ML_Project/./acoustic_ft_mmse.py", line 106, in <module>
trainer.train()
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/transformers/trainer.py", line 1555, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/transformers/trainer.py", line 1860, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/transformers/trainer.py", line 2725, in training_step
loss = self.compute_loss(model, inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/transformers/trainer.py", line 2748, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 185, in forward
outputs = self.parallel_apply(replicas, inputs, module_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py", line 200, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py", line 110, in parallel_apply
output.reraise()
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/_utils.py", line 694, in reraise
raise exception
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py", line 85, in _worker
output = module(*input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/transformers/models/whisper/modeling_whisper.py", line 2419, in forward
loss = loss_fct(logits.view(-1, self.config.num_labels), labels.view(-1))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/modules/loss.py", line 1179, in forward
return F.cross_entropy(input, target, weight=self.weight,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/nevikw/miniconda3/envs/ml-project/lib/python3.11/site-packages/torch/nn/functional.py", line 3053, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: "nll_loss_forward_reduce_cuda_kernel_2d_index" not implemented for 'Float'
```
#### Proposed Solution
I found that the issue can be resolved by assigning the appropriate loss function to `loss_fct` in the `forward()` method of the `WhisperForAudioClassification` class. The pull request will be created later.
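To make the failure mode concrete, here is a pure-Python toy (no torch; the names and the raised error message are illustrative): a cross-entropy-style loss indexes the logits with an integer class id, so a float regression label such as a normalized MMSE score cannot be used, while mean-squared error handles floats naturally.

```python
import math

def toy_cross_entropy(logits, label):
    # class label must be an integer index -- mimics the CUDA kernel error
    if not isinstance(label, int):
        raise TypeError("nll_loss not implemented for 'Float'")
    exps = [math.exp(x) for x in logits]
    return -math.log(exps[label] / sum(exps))

def toy_mse(prediction, label):
    return (prediction - label) ** 2

print(round(toy_mse(0.3, 0.5), 2))  # regression path works: 0.04
try:
    toy_cross_entropy([0.3], 0.5)   # classification path rejects float labels
except TypeError as err:
    print(err)                      # nll_loss not implemented for 'Float'
```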
### Expected behavior
We should be able to perform the regression task, and the mean-squared-error loss should be computed during the forward pass if `config.num_labels` is set to 1, as the documentation suggests. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27862/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27861 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27861/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27861/comments | https://api.github.com/repos/huggingface/transformers/issues/27861/events | https://github.com/huggingface/transformers/issues/27861 | 2,027,626,944 | I_kwDOCUB6oc542yHA | 27,861 | Error in run_mae.py when specifying arguments | {
"login": "SiyCHENG",
"id": 153052861,
"node_id": "U_kgDOCR9mvQ",
"avatar_url": "https://avatars.githubusercontent.com/u/153052861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SiyCHENG",
"html_url": "https://github.com/SiyCHENG",
"followers_url": "https://api.github.com/users/SiyCHENG/followers",
"following_url": "https://api.github.com/users/SiyCHENG/following{/other_user}",
"gists_url": "https://api.github.com/users/SiyCHENG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SiyCHENG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SiyCHENG/subscriptions",
"organizations_url": "https://api.github.com/users/SiyCHENG/orgs",
"repos_url": "https://api.github.com/users/SiyCHENG/repos",
"events_url": "https://api.github.com/users/SiyCHENG/events{/privacy}",
"received_events_url": "https://api.github.com/users/SiyCHENG/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"```diff\r\n(base) mima0000@mima0000deMacBook-Air ViTMAE % python run_mae.py \\\r\n --model_name_or_path /Users/mima0000/本地文档/ViTMAE/mae_base \\\r\n --dataset_name /Users/mima0000/本地文档/ViTMAE/Nuclear/ \\\r\n- --train_dir /Users/mima0000/本地文档/ViTMAE/Nuclear/train/* \\\r\n+ --train_dir /Users/mima0000/本地文档/ViTMAE/Nuclear/train \\\r\n --output_dir /Users/mima0000/本地文档/ViTMAE/Model \\\r\n --remove_unused_columns False \\\r\n --label_names pixel_values \\\r\n --do_train \\\r\n --do_eval\r\n```\r\nhey! Could you try with this? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | NONE | null | ### System Info
I encountered an error while running the run_mae.py script with the following command:
```bash
(base) mima0000@mima0000deMacBook-Air ViTMAE % python run_mae.py \
--model_name_or_path /Users/mima0000/本地文档/ViTMAE/mae_base \
--dataset_name /Users/mima0000/本地文档/ViTMAE/Nuclear/ \
--train_dir /Users/mima0000/本地文档/ViTMAE/Nuclear/train/* \
--output_dir /Users/mima0000/本地文档/ViTMAE/Model \
--remove_unused_columns False \
--label_names pixel_values \
--do_train \
--do_eval
```
I received the following traceback:
```
Traceback (most recent call last):
File "/Users/mima0000/本地文档/ViTMAE/run_mae.py", line 412, in <module>
main()
File "/Users/mima0000/本地文档/ViTMAE/run_mae.py", line 182, in main
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mima0000/anaconda3/lib/python3.11/site-packages/transformers/hf_argparser.py", line 347, in parse_args_into_dataclasses
raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}")
ValueError: Some specified arguments are not used by the HfArgumentParser: ['/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_10.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_2.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_3.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_4.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_5.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_6.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_7.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_8.jpg', '/Users/mima0000/本地文档/ViTMAE/Nuclear/train/sample_9.jpg']
```
It seems that the HfArgumentParser is reporting that some of the specified arguments are not used.
Could you please help me understand why these arguments are not recognized by the HfArgumentParser and how I can resolve this issue?
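A minimal sketch of the likely cause (an assumption based on the file list in the error): the shell expands the trailing `/*` before Python runs, so each image becomes a separate argument that `HfArgumentParser` cannot place. The paths below are temporary stand-ins, not the actual dataset.

```python
# Demonstrates shell-style glob expansion: `--train_dir /path/train/*`
# becomes one argument per matching file instead of a single directory.
import glob
import os
import tempfile

train_dir = tempfile.mkdtemp()
for name in ("sample_1.jpg", "sample_2.jpg", "sample_3.jpg"):
    open(os.path.join(train_dir, name), "w").close()

expanded = sorted(glob.glob(os.path.join(train_dir, "*")))
print(len(expanded))             # 3 separate arguments after expansion
print(os.path.isdir(train_dir))  # True -- the bare directory is one argument
```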
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
(base) mima0000@mima0000deMacBook-Air ViTMAE % python run_mae.py \
--model_name_or_path /Users/mima0000/本地文档/ViTMAE/mae_base \
--dataset_name /Users/mima0000/本地文档/ViTMAE/Nuclear/ \
--train_dir /Users/mima0000/本地文档/ViTMAE/Nuclear/train/* \
--output_dir /Users/mima0000/本地文档/ViTMAE/Model \
--remove_unused_columns False \
--label_names pixel_values \
--do_train \
--do_eval
```
### Expected behavior
Train a mae_vit model with my own dataset | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27861/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27860 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27860/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27860/comments | https://api.github.com/repos/huggingface/transformers/issues/27860/events | https://github.com/huggingface/transformers/pull/27860 | 2,027,459,151 | PR_kwDOCUB6oc5hPxvV | 27,860 | Added passing parameters to "reduce_lr_on_plateau" scheduler | {
"login": "CharbelAD",
"id": 45701489,
"node_id": "MDQ6VXNlcjQ1NzAxNDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/45701489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CharbelAD",
"html_url": "https://github.com/CharbelAD",
"followers_url": "https://api.github.com/users/CharbelAD/followers",
"following_url": "https://api.github.com/users/CharbelAD/following{/other_user}",
"gists_url": "https://api.github.com/users/CharbelAD/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CharbelAD/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CharbelAD/subscriptions",
"organizations_url": "https://api.github.com/users/CharbelAD/orgs",
"repos_url": "https://api.github.com/users/CharbelAD/repos",
"events_url": "https://api.github.com/users/CharbelAD/events{/privacy}",
"received_events_url": "https://api.github.com/users/CharbelAD/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,702 | 1,702 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Follow-up to #27595: added support for passing `lr_scheduler_kwargs` to the scheduler of type `"reduce_lr_on_plateau"`.
Code example:
```python
train_dataset = RegressionDataset(length=64)
eval_dataset = RegressionDataset(length=64)
args = TrainingArguments(
"./regression",
evaluation_strategy = "epoch",
lr_scheduler_type="reduce_lr_on_plateau",
lr_scheduler_kwargs={"factor": 0.5, "mode": 'max'},
learning_rate=0.2,
warmup_steps=2,
)
model = RegressionModel()
trainer = Trainer(model, args, train_dataset=train_dataset, eval_dataset=eval_dataset)
trainer.create_optimizer_and_scheduler(num_training_steps=10)
print(trainer.lr_scheduler.factor, trainer.lr_scheduler.mode)
```
Current behavior: prints `0.1 min`, the default values of the `factor` and `mode` parameters, because `lr_scheduler_kwargs` is not being passed through.
Expected behavior: prints `0.5 max`.
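The forwarding itself can be sketched without a `Trainer` (pure Python; the names below are illustrative, while the real change passes `lr_scheduler_kwargs` through to the scheduler constructor):

```python
# Stand-in for the fix: merge user kwargs over the scheduler defaults
# instead of dropping them for the "reduce_lr_on_plateau" type.
REDUCE_LR_DEFAULTS = {"factor": 0.1, "mode": "min"}

def build_scheduler_config(lr_scheduler_kwargs=None):
    config = dict(REDUCE_LR_DEFAULTS)
    config.update(lr_scheduler_kwargs or {})
    return config

before = build_scheduler_config()                               # kwargs ignored
after = build_scheduler_config({"factor": 0.5, "mode": "max"})  # kwargs applied
print(before["factor"], before["mode"])  # 0.1 min  (current behavior)
print(after["factor"], after["mode"])    # 0.5 max  (expected behavior)
```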
I did not write any test cases for this since a test case for passing extra kwargs was already added in #27595; however, I can add one if any maintainer finds it necessary.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@muellerzr, @pacman100, and @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27860/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27860",
"html_url": "https://github.com/huggingface/transformers/pull/27860",
"diff_url": "https://github.com/huggingface/transformers/pull/27860.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27860.patch",
"merged_at": 1702040771000
} |
https://api.github.com/repos/huggingface/transformers/issues/27859 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27859/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27859/comments | https://api.github.com/repos/huggingface/transformers/issues/27859/events | https://github.com/huggingface/transformers/pull/27859 | 2,027,232,050 | PR_kwDOCUB6oc5hO_fZ | 27,859 | [docs] Custom semantic segmentation dataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,701 | 1,701 | 1,701 | MEMBER | null | Adds a brief section in the semantic segmentation guide for creating a custom dataset to use with the `run_semantic_segmentation.py` script. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27859/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27859",
"html_url": "https://github.com/huggingface/transformers/pull/27859",
"diff_url": "https://github.com/huggingface/transformers/pull/27859.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27859.patch",
"merged_at": 1701974856000
} |
https://api.github.com/repos/huggingface/transformers/issues/27857 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27857/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27857/comments | https://api.github.com/repos/huggingface/transformers/issues/27857/events | https://github.com/huggingface/transformers/issues/27857 | 2,026,945,992 | I_kwDOCUB6oc540L3I | 27,857 | Roundtrip Tokenization Failure | {
"login": "YehowshuaScaled",
"id": 152646622,
"node_id": "U_kgDOCRkz3g",
"avatar_url": "https://avatars.githubusercontent.com/u/152646622?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YehowshuaScaled",
"html_url": "https://github.com/YehowshuaScaled",
"followers_url": "https://api.github.com/users/YehowshuaScaled/followers",
"following_url": "https://api.github.com/users/YehowshuaScaled/following{/other_user}",
"gists_url": "https://api.github.com/users/YehowshuaScaled/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YehowshuaScaled/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YehowshuaScaled/subscriptions",
"organizations_url": "https://api.github.com/users/YehowshuaScaled/orgs",
"repos_url": "https://api.github.com/users/YehowshuaScaled/repos",
"events_url": "https://api.github.com/users/YehowshuaScaled/events{/privacy}",
"received_events_url": "https://api.github.com/users/YehowshuaScaled/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"As an example, <s> could show up when having a model generate or classify HTML. I think this is a very valid use case.",
"The slow tokenizer supports the `encode_special_tokens` option, which is set to false by default because the most common use cases need special tokens to be recognized. Fast tokenizers are going to support that in the near future! 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.0.dev0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.37
- Python version: 3.11.4
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0.dev20231126+rocm5.7 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run the following:
```python3
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
"mistralai/Mistral-7B-v0.1",
add_eos_token=False,
add_bos_token=False)
pre_roundtrip_seq = [523, 28713, 28767]
print(f"{pre_roundtrip_seq=}")
print(f"{tokenizer.decode(pre_roundtrip_seq)=}")
post_roundtrip_seq = tokenizer.encode(tokenizer.decode(pre_roundtrip_seq))
print(f"{post_roundtrip_seq=}")
print(f"{tokenizer.decode(post_roundtrip_seq)=}")
```
```
$ python3 bug.py
pre_roundtrip_seq=[523, 28713, 28767]
tokenizer.decode(pre_roundtrip_seq)='<s>'
post_roundtrip_seq=[1]
tokenizer.decode(post_roundtrip_seq)='<s>'
```
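The non-invertibility can be illustrated with a toy vocabulary; the ids come from the run above, but the greedy matcher below is only a sketch of how a special-token-aware tokenizer folds the literal text `<s>` back into the single BOS id:

```python
# Toy tokenizer, not the real Mistral one: encode(decode(ids)) != ids
# whenever the decoded pieces spell out a special token's surface form.
vocab = {"<s>": 1, "<": 523, "s": 28713, ">": 28767}
inverse = {token_id: piece for piece, token_id in vocab.items()}

def toy_decode(ids):
    return "".join(inverse[token_id] for token_id in ids)

def toy_encode(text):
    out, i = [], 0
    pieces = sorted(vocab, key=len, reverse=True)  # longest match first
    while i < len(text):
        for piece in pieces:
            if text.startswith(piece, i):
                out.append(vocab[piece])
                i += len(piece)
                break
        else:
            raise ValueError(f"no vocabulary piece matches at offset {i}")
    return out

print(toy_decode([523, 28713, 28767]))              # <s>
print(toy_encode(toy_decode([523, 28713, 28767])))  # [1], not [523, 28713, 28767]
```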
### Expected behavior
It's quite conceivable, and sometimes desirable, that a language model could emit the literal sequence `<s>` (a less-than sign, followed by `s`, followed by a greater-than sign), as shown above.
As is, the tokenizer interprets this as a start token, which is clearly incorrect behavior. To get around it, I've been manually instantiating tokenizers from `SentencePiece` and passing them a model binary.
Clearly, by doing this, I can no longer take advantage of much of the Transformers library, including `generate` or `Trainer`, as both expect a `Transformer.Tokenizer`.
I'm not sure what a clean way to get around this is; it could involve redoing much of the `Transformer.Tokenizer` infrastructure, unless there is some special-token escape sequence included with `Transformer.Tokenizer` that I'm simply not aware of. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27857/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27856 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27856/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27856/comments | https://api.github.com/repos/huggingface/transformers/issues/27856/events | https://github.com/huggingface/transformers/pull/27856 | 2,026,535,407 | PR_kwDOCUB6oc5hMkkx | 27,856 | [`Docs`] Update broken image on fused modules | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27856). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
As per title, as pointed out by @SunMarc offline
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27856/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27856",
"html_url": "https://github.com/huggingface/transformers/pull/27856",
"diff_url": "https://github.com/huggingface/transformers/pull/27856.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27856.patch",
"merged_at": 1701808439000
} |
https://api.github.com/repos/huggingface/transformers/issues/27855 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27855/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27855/comments | https://api.github.com/repos/huggingface/transformers/issues/27855/events | https://github.com/huggingface/transformers/issues/27855 | 2,026,514,481 | I_kwDOCUB6oc54yigx | 27,855 | Index error while pretraining Flava | {
"login": "ferjorosa",
"id": 24965845,
"node_id": "MDQ6VXNlcjI0OTY1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/24965845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ferjorosa",
"html_url": "https://github.com/ferjorosa",
"followers_url": "https://api.github.com/users/ferjorosa/followers",
"following_url": "https://api.github.com/users/ferjorosa/following{/other_user}",
"gists_url": "https://api.github.com/users/ferjorosa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ferjorosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ferjorosa/subscriptions",
"organizations_url": "https://api.github.com/users/ferjorosa/orgs",
"repos_url": "https://api.github.com/users/ferjorosa/repos",
"events_url": "https://api.github.com/users/ferjorosa/events{/privacy}",
"received_events_url": "https://api.github.com/users/ferjorosa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for reporting 🤗 would you like to open a PR for a fix? ",
"Hi, yes. I have created a PR. Could you take a look into it?\r\n\r\nThanks\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,701 | 1,704 | 1,704 | CONTRIBUTOR | null | ### System Info
`transformers==4.35.2`
### Who can help?
@ArthurZucker @younesbelkada @amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Error is thrown when doing pretraining with an `itm_labels` tensor that contains both 0s and 1s. Just as a reminder, to execute the ITM task successfully, pairs of image descriptions that do not match are required. The unmatched pairs are identified with a `1` in the `itm_labels` list.
```
itm_labels = torch.tensor([0,1,0,0,0])
itm_outputs = model(
# Text
input_ids=text_inputs["input_ids"], # Text input
token_type_ids=text_inputs["token_type_ids"],
attention_mask=text_inputs["attention_mask"], # Text attention mask
input_ids_masked=text_inputs["input_ids_masked"], # MLM masked inputs
mlm_labels=text_inputs["mlm_labels"], # MLM labels, has a different value than -100 if masked in input_ids_masked
# Image
pixel_values=image_inputs["pixel_values"], # Image input
bool_masked_pos=image_inputs["bool_masked_pos"], # MIM mask (part of DALLE output), indicates which patches are masked (1) and which are not (0)
codebook_pixel_values=image_inputs["codebook_pixel_values"], # Information necessary for MIM labels
# Pure Multimodal
itm_labels=itm_labels
)
itm_outputs.loss_info
```
Error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-11-d659e68fc452>](https://localhost:8080/#) in <cell line: 3>()
1 itm_labels = torch.tensor([0,1,0,0,0])
2
----> 3 itm_outputs = model(
4 # Text
5 input_ids=text_inputs["input_ids"], # Text input
2 frames
[/usr/local/lib/python3.10/dist-packages/transformers/models/flava/modeling_flava.py](https://localhost:8080/#) in forward(self, input_ids, input_ids_masked, pixel_values, codebook_pixel_values, attention_mask, token_type_ids, bool_masked_pos, position_ids, image_attention_mask, skip_unmasked_multimodal_encoder, mlm_labels, mim_labels, itm_labels, output_attentions, output_hidden_states, return_dict, return_loss)
1966
1967 if pos_mask is not None:
-> 1968 sequence_for_image = sequence_for_image[pos_mask]
1969 if mim_labels is not None:
1970 mim_labels = self._resize_to_2d(mim_labels)
IndexError: The shape of the mask [5] at index 0 does not match the shape of the indexed tensor [1, 196, 768] at index 0
```
[In order to properly reproduce this error, I have also prepared a Google colab notebook, which can be found here](https://colab.research.google.com/drive/14QCNWUb_DFCNfQEwPZSB_NqX-sQeA95p?usp=sharing)
As a side note, this error may go unnoticed if all items in `itm_labels` are 0s, indicating that they all match, or if they are all 1s, signifying that none of them match. However, it's important to note that in the code, when `itm_labels` contains all 1s, it is automatically translated into all 0s. This automatic "translation" may result in unexpected behaviours for the user.
### Expected behavior
The error occurs because inside Flava's code the `pos_mask` is applied multiple times. It is first applied on line 1953 and then on lines 1968 (MMM-image) and 1991 (MMM-text) of **modeling_flava.py**. I think it would be fixed by just removing the second and third application of the mask. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27855/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/27854 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27854/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27854/comments | https://api.github.com/repos/huggingface/transformers/issues/27854/events | https://github.com/huggingface/transformers/pull/27854 | 2,026,282,819 | PR_kwDOCUB6oc5hLsyw | 27,854 | delete `delete_doc_comment_trigger.yml` workflow | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Superceded by #27852",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27854). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | COLLABORATOR | null | # What does this PR do?
delete `delete_doc_comment_trigger.yml` workflow as `huggingface/doc-builder/.github/workflows/delete_doc_comment_trigger.yml` no longer exists for security reasons. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27854/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27854",
"html_url": "https://github.com/huggingface/transformers/pull/27854",
"diff_url": "https://github.com/huggingface/transformers/pull/27854.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27854.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/27853 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27853/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27853/comments | https://api.github.com/repos/huggingface/transformers/issues/27853/events | https://github.com/huggingface/transformers/pull/27853 | 2,026,238,736 | PR_kwDOCUB6oc5hLi6F | 27,853 | Update CUDA versions for DeepSpeed | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27853). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
DeepSpeed tests are failing because they require higher CUDA versions
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27853/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27853",
"html_url": "https://github.com/huggingface/transformers/pull/27853",
"diff_url": "https://github.com/huggingface/transformers/pull/27853.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27853.patch",
"merged_at": 1701810922000
} |
https://api.github.com/repos/huggingface/transformers/issues/27852 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27852/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27852/comments | https://api.github.com/repos/huggingface/transformers/issues/27852/events | https://github.com/huggingface/transformers/pull/27852 | 2,026,234,761 | PR_kwDOCUB6oc5hLiEM | 27,852 | removed the delete doc workflows | {
"login": "MKhalusova",
"id": 1065417,
"node_id": "MDQ6VXNlcjEwNjU0MTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MKhalusova",
"html_url": "https://github.com/MKhalusova",
"followers_url": "https://api.github.com/users/MKhalusova/followers",
"following_url": "https://api.github.com/users/MKhalusova/following{/other_user}",
"gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions",
"organizations_url": "https://api.github.com/users/MKhalusova/orgs",
"repos_url": "https://api.github.com/users/MKhalusova/repos",
"events_url": "https://api.github.com/users/MKhalusova/events{/privacy}",
"received_events_url": "https://api.github.com/users/MKhalusova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27852). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | Removed the delete doc workflows as per request by @mishig25 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27852/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27852",
"html_url": "https://github.com/huggingface/transformers/pull/27852",
"diff_url": "https://github.com/huggingface/transformers/pull/27852.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27852.patch",
"merged_at": 1701855057000
} |
https://api.github.com/repos/huggingface/transformers/issues/27851 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27851/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27851/comments | https://api.github.com/repos/huggingface/transformers/issues/27851/events | https://github.com/huggingface/transformers/pull/27851 | 2,026,110,359 | PR_kwDOCUB6oc5hLGry | 27,851 | [`ClipVision`] `accelerate` support for clip-vision | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27851). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | CONTRIBUTOR | null | # What does this PR do?
As per title, all accelerate tests pass
addresses: https://github.com/huggingface/transformers/pull/27662#discussion_r1415445563 for llava
cc @ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27851/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27851",
"html_url": "https://github.com/huggingface/transformers/pull/27851",
"diff_url": "https://github.com/huggingface/transformers/pull/27851.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27851.patch",
"merged_at": 1701781460000
} |
https://api.github.com/repos/huggingface/transformers/issues/27850 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/27850/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/27850/comments | https://api.github.com/repos/huggingface/transformers/issues/27850/events | https://github.com/huggingface/transformers/pull/27850 | 2,025,912,950 | PR_kwDOCUB6oc5hKbI4 | 27,850 | Generate: Update VisionEncoderDecoder test value | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27850). All of your documentation changes will be reflected on that endpoint."
] | 1,701 | 1,701 | 1,701 | MEMBER | null | # What does this PR do?
#27351 fixes a bug in beam search: the prompt length was being included in the length penalty computation, and this penalty should only be applied on newly generated tokens (otherwise decoder-only models would often see a big penalty, as the prompt is part of the output)
This PR updates the test results to account for the bug fix. I've double-checked that reverting those changes produces the old results!
(All tests in `RUN_SLOW=1 py.test tests/models/vision_encoder_decoder/test_modeling_vision_encoder_decoder.py -vv` pass) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/27850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/27850/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/27850",
"html_url": "https://github.com/huggingface/transformers/pull/27850",
"diff_url": "https://github.com/huggingface/transformers/pull/27850.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/27850.patch",
"merged_at": 1701775620000
} |