| Column | Feature type | Range / values |
|---|---|---|
| url | stringlengths | 62–66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76–80 |
| comments_url | stringlengths | 71–75 |
| events_url | stringlengths | 69–73 |
| html_url | stringlengths | 50–56 |
| id | int64 | 377M–2.15B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–29.2k |
| title | stringlengths | 1–487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k–1.71k |
| updated_at | int64 | 1.54k–1.71k |
| closed_at | int64 | 1.54k–1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0–234k |
| reactions | dict | |
| timeline_url | stringlengths | 71–75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/28256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28256/comments
https://api.github.com/repos/huggingface/transformers/issues/28256/events
https://github.com/huggingface/transformers/pull/28256
2,057,035,270
PR_kwDOCUB6oc5iz7e_
28,256
Fix load balancing loss func for mixtral
{ "login": "liangxuZhang", "id": 57205192, "node_id": "MDQ6VXNlcjU3MjA1MTky", "avatar_url": "https://avatars.githubusercontent.com/u/57205192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liangxuZhang", "html_url": "https://github.com/liangxuZhang", "followers_url": "https://api.github.com/users/liangxuZhang/followers", "following_url": "https://api.github.com/users/liangxuZhang/following{/other_user}", "gists_url": "https://api.github.com/users/liangxuZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/liangxuZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liangxuZhang/subscriptions", "organizations_url": "https://api.github.com/users/liangxuZhang/orgs", "repos_url": "https://api.github.com/users/liangxuZhang/repos", "events_url": "https://api.github.com/users/liangxuZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/liangxuZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "How does this differ from https://github.com/huggingface/transformers/pull/28115 ?", "What is the impact of this issue on Mixtral training? Will this fix conceivability improve the quality of training? Is it likely that previous Mixtral trainings are not as good as they could be?\r\n\r\nIt seems like an important issue for those working with Mixtral that has been waiting on merge approval for a while.\r\n", "I personally finds the loss to be much lower with the new implementation. But I wasn't sure if it has to do with the (num_experts**2) instead of just N. I'm pretty sure this is an error on original mixtral side. So far still waiting for the training result on new implemented balance loss to finish. Deepspeed also has an implementation of top-2 which we might be able to reference.", "#28255 has information that could help, I am down to merge this for the release planned this week, just the comments that need to be adressed cc @liangxuZhang do you need help to finish this? ", "> #28255 has information that could help, I am down to merge this for the release planned this week, just the comments that need to be adressed cc @liangxuZhang do you need help to finish this?\r\n\r\n@ArthurZucker LGTM. The new implementation is correct and concise, and I've made a new commit. In #28255, maybe we can have a deep discuss whether to concatenate gate logits of all layers.", "Alright! Pretty sure the math shows it's equivalent to compute on individual layers then sum vs concate and compute, but let's merge this for now !", "> Thanks! Failing test seems unrelated let's just rebase on main\r\n\r\n@ArthurZucker I've just rebase on the main branch, but I'm not sure if I'm doing it right. Please tell me what else I need to do", "@liangxuZhang @ArthurZucker opinions about https://github.com/huggingface/transformers/pull/28403 ? It looks complementary to this PR", "Something like `git pull upstream main` if the remote is `upstream`, the exotic CI was fixed on main! I'll merge without it ", "Thanks a lot @liangxuZhang for this fix! 🤗 ", "great!", "Impressive work!" ]
1,703
1,705
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28255 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ArthurZucker and @younesbelkada Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28256/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28256/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28256", "html_url": "https://github.com/huggingface/transformers/pull/28256", "diff_url": "https://github.com/huggingface/transformers/pull/28256.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28256.patch", "merged_at": 1704986173000 }
https://api.github.com/repos/huggingface/transformers/issues/28255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28255/comments
https://api.github.com/repos/huggingface/transformers/issues/28255/events
https://github.com/huggingface/transformers/issues/28255
2,057,030,017
I_kwDOCUB6oc56m8mB
28,255
Incorrect implementation of auxiliary loss
{ "login": "liangxuZhang", "id": 57205192, "node_id": "MDQ6VXNlcjU3MjA1MTky", "avatar_url": "https://avatars.githubusercontent.com/u/57205192?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liangxuZhang", "html_url": "https://github.com/liangxuZhang", "followers_url": "https://api.github.com/users/liangxuZhang/followers", "following_url": "https://api.github.com/users/liangxuZhang/following{/other_user}", "gists_url": "https://api.github.com/users/liangxuZhang/gists{/gist_id}", "starred_url": "https://api.github.com/users/liangxuZhang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liangxuZhang/subscriptions", "organizations_url": "https://api.github.com/users/liangxuZhang/orgs", "repos_url": "https://api.github.com/users/liangxuZhang/repos", "events_url": "https://api.github.com/users/liangxuZhang/events{/privacy}", "received_events_url": "https://api.github.com/users/liangxuZhang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, liangxu! I also noticed this difference between the implementation by mixtral and the equations 4-5 in (https://arxiv.org/pdf/2101.03961.pdf). I am wondering if this has something to do with \"The objective can also be differentiated as the P-vector is differentiable, but the f-vector is not.\" (line1, page8)?", "Hey! Thanks you for the deep-dive. \r\nTLDR: there is still a small bug in our implementation, I'll review the PR, and yes given that the random model is init with uniform distrib the test values should be closer to unifrom distrib. \r\n\r\nHowever this is wrong : \r\n> in fact the routing outputs of each layer should be mapped to its own layer of experts \r\n\r\nThe paper states the following: (page 7)\r\n> For each Switch layer, this auxiliary loss is added to the total model loss during training. \r\n\r\nThis means that as long as we sum the auxiliary losses of each layers, we should be good to go.\r\n\r\nThe total auxiliary loss across all layers $\\(K\\)$ is given by:\r\n\r\n$$\r\n\\text{total loss} = \\sum_{k=1}^{K} \\left( \\alpha \\cdot N \\cdot \\sum_{i=1}^{N} f_{i,k} \\cdot P_{i,k} \\right)\r\n$$\r\n\r\nNow, let's factorize the expression and use the associativity of the addition:\r\n\r\n$$\r\n\\text{total loss} = \\alpha \\cdot N \\cdot \\sum_{i=1}^{N} \\left( \\sum_{k=1}^{K} f_{i,k} \\cdot P_{i,k} \\right)\r\n$$\r\n\r\n## The actual question\r\nNow your question can basically be summed up as: **how do we deal with top2 vs top1** ?\r\n\r\n- **top1 | top2** either we balance the distribution of top1 + top2 routing, what we are doing. (meaning that top1 and top2 should overall give a uniform repartition, on average, the fraction of tokens dispatched to each expert across both top-1 and top-2 will be balanced)\r\n- **top1 & top2** or we separately balance the distribution of top1 and the distribution of top2 (meaning that top1 should uniformly route to the 8 experts, and top2 should also uniformly route to 8 experts: the fraction of tokens dispatched to each expert is balanced independently for top-1 and top-2).\r\n\r\nMistral did not really share what they did so I am down to support both (meaning let the user decide what he want to optimize!). \r\n\r\nAt the end of the day what is important is that we have:\r\n```python\r\ntokens_per_expert * router_prob_per_expert.unsqueeze(0)\r\ntensor([[0.0180, 0.0179, 0.0157, 0.0152, 0.0117, 0.0244, 0.0053, 0.0172], # top1 \r\n [0.0256, 0.0172, 0.0130, 0.0186, 0.0151, 0.0098, 0.0094, 0.0165]], # top2\r\n grad_fn=<MulBackward0>)\r\n```\r\nLet's write explicitly the sums,either we want: (**top1 | top2** ) \r\n```python\r\ntorch.sum(tokens_per_expert * router_prob_per_expert.unsqueeze(0), dim=0).sum() * num_experts\r\n```\r\nclose to 1. Or we want: (**top1 & top2** ) \r\n```python\r\n(torch.sum(tokens_per_expert * router_prob_per_expert.unsqueeze(0), dim=0)/top_k).sum() * num_experts\r\n```\r\nto be close to 1. 
\r\n\r\nI used this implementation:\r\n```python\r\ndef load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2) -> float:\r\n if isinstance(gate_logits, tuple):\r\n compute_device = gate_logits[0].device\r\n concatenated_gate_logits = torch.cat([layer_gate.to(compute_device) for layer_gate in gate_logits], dim=0)\r\n\r\n routing_weights = torch.nn.functional.softmax(concatenated_gate_logits, dim=-1)\r\n _, selected_experts = torch.topk(routing_weights, top_k, dim=-1) # [batch_size X sequence_length, top_k]\r\n expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts) # [batch_size X sequence_length, top_k, num_experts]\r\n tokens_per_expert = torch.mean(expert_mask.float(), dim=0) # [top_k, num_experts]\r\n # Compute the average probability of routing to these experts\r\n router_prob_per_expert = torch.mean(routing_weights, dim=0) # [num_experts]\r\n overall_loss = torch.sum(tokens_per_expert * router_prob_per_expert.unsqueeze(0)) # / top_k\r\n return overall_loss * num_experts\r\n```", "Mark", "I have been following this discussion about the load balancing loss loosely, since I have my own custom training script where I'm trying to add proper support for Mixtral. Isn't the load balancing loss still wrong, since it should be computed per layer and then averaged at the end?\r\n\r\nThe [paper](https://arxiv.org/abs/2101.03961) states \"For each Switch layer, this auxiliary loss is added to the total model loss during training\". To me, this seems to imply computing the loss separately per layer, then summing or averaging at the end.\r\n\r\nI found [this implementation](https://github.com/google/flaxformer/blob/main/flaxformer/architectures/moe/routing.py) from Google that supports this. It looks like the auxiliary load balancing loss is computed per-layer as a scalar, which must then be combined at some point (I haven't read the code that closely yet).\r\n\r\nIntuitively, it also makes sense: you would not want unbalanced expert assignments from one layer to \"cancel out\" those from another layer, which can happen (and seems likely to happen) if you just concatenate everything across all layers at the beginning, which is what is done now.\r\n\r\n@ArthurZucker I know that in those equations you can compute either the sum over experts or the sum over layers first. But, you still need the $f_{i,k}$ per layer. If you concatenate all the gate logits at the beginning, then do a one_hot somewhere and average to get expert assignment fractions, you are losing the per-layer breakdown. So I think we need to explicitly keep everything separated by layer until the very end.\r\n\r\nFor my custom training script, I implemented the load balancing loss like this, it seems to work well:\r\n```python\r\ndef load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2) -> float:\r\n if isinstance(gate_logits, tuple):\r\n compute_device = gate_logits[0].device\r\n stacked_gate_logits = torch.stack([layer_gate.to(compute_device) for layer_gate in gate_logits], dim=0)\r\n\r\n routing_weights = torch.nn.functional.softmax(stacked_gate_logits, dim=-1) # [num_layers, num_tokens, num_experts]\r\n _, selected_experts = torch.topk(routing_weights, top_k, dim=-1) # [num_layers, num_tokens, top_k]\r\n expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts) # [num_layers, num_tokens, top_k, num_experts]\r\n # For a given token, determine if it was routed to a given expert. 
Think of this as a collection of top_k-hot vectors.\r\n expert_mask = torch.max(expert_mask, dim=-2).values.float() # [num_layers, num_tokens, num_experts]\r\n tokens_per_layer_and_expert = torch.mean(expert_mask, dim=-2) # [num_layers, num_experts]\r\n router_prob_per_layer_and_expert = torch.mean(routing_weights, dim=-2) # [num_layers, num_experts]\r\n return torch.mean(tokens_per_layer_and_expert * router_prob_per_layer_and_expert) * num_experts**2\r\n```\r\n\r\nI'm not 100% sure this is correct, so please check closely. Feel free to use this or reference it if it looks right. Note that the minimum loss with the above implementation is top_k, not sure if that is desired or if there should be a divide by top_k somewhere in there so the minimum loss is always 1.", "@ArthurZucker Sorry to reply so late. The new solution looks good to me. About how do we deal with top2 vs top1, I investigated some other implementations. In Google's implementation, the final loss is not divided by $top_k$ https://github.com/google/flaxformer/blob/main/flaxformer/architectures/moe/routing.py#L744.\r\n\r\n```python\r\nexpert_mask = jax.nn.one_hot(expert_indices, num_experts, dtype=jnp.int32)\r\n# For a given token, determine if it was routed to a given expert.\r\n# Shape: [num_groups, tokens_per_group, num_experts]\r\nexpert_mask = jnp.max(expert_mask, axis=-2)\r\n''' \r\ntop1 and top2\r\n tensor([[0, 0, 0, 0, 0, 1, 0, 1], \r\n [1, 0, 1, 0, 0, 0, 0, 0],\r\n [1, 0, 0, 0, 0, 0, 0, 1]])\r\n'''\r\ntokens_per_group_and_expert = jnp.mean(\r\n expert_mask, dtype=jnp.float32, axis=-2)\r\n# tensor([0.6667, 0.0000, 0.3333, 0.0000, 0.0000, 0.3333, 0.0000, 0.6667]) \r\nrouter_prob_per_group_and_expert = jnp.mean(\r\n router_probs, dtype=jnp.float32, axis=-2)\r\n# tensor([0.1267, 0.1277, 0.1109, 0.1204, 0.1274, 0.1236, 0.1479, 0.1153])\r\nreturn (\r\n jnp.mean( # pytype: disable=bad-return-type # jnp-type\r\n tokens_per_group_and_expert * router_prob_per_group_and_expert,\r\n dtype=jnp.float32,\r\n )\r\n * num_experts**2\r\n )\r\n# tensor(2.0203) close to top2\r\n```\r\n\r\nIn the implementation of deepspeed, only the top1 tokens participate in calculating the auxiliary loss. \r\nhttps://github.com/microsoft/DeepSpeed/blob/1787673edc7e45cd79fe10b95f92a02d3eb91505/deepspeed/moe/sharded_moe.py#L282C1-L314C60\r\n\r\n```python\r\ngates = F.softmax(logits, dim=1)\r\nindices1_s = torch.argmax(gates, dim=1)\r\nnum_experts = int(gates.shape[1])\r\nmask1 = F.one_hot(indices1_s, num_classes=num_experts)\r\n...\r\n# Compute l_aux\r\nme = torch.mean(gates, dim=0)\r\nce = torch.mean(mask1.float(), dim=0)\r\nl_aux = torch.mean(me * ce) * num_experts * num_experts\r\n\r\n```", "> I have been following this discussion about the load balancing loss loosely, since I have my own custom training script where I'm trying to add proper support for Mixtral. Isn't the load balancing loss still wrong, since it should be computed per layer and then averaged at the end?\r\n> \r\n> The [paper](https://arxiv.org/abs/2101.03961) states \"For each Switch layer, this auxiliary loss is added to the total model loss during training\". To me, this seems to imply computing the loss separately per layer, then summing or averaging at the end.\r\n> \r\n> I found [this implementation](https://github.com/google/flaxformer/blob/main/flaxformer/architectures/moe/routing.py) from Google that supports this. 
It looks like the auxiliary load balancing loss is computed per-layer as a scalar, which must then be combined at some point (I haven't read the code that closely yet).\r\n> \r\n> Intuitively, it also makes sense: you would not want unbalanced expert assignments from one layer to \"cancel out\" those from another layer, which can happen (and seems likely to happen) if you just concatenate everything across all layers at the beginning, which is what is done now.\r\n> \r\n> @ArthurZucker I know that in those equations you can compute either the sum over experts or the sum over layers first. But, you still need the fi,k per layer. If you concatenate all the gate logits at the beginning, then do a one_hot somewhere and average to get expert assignment fractions, you are losing the per-layer breakdown. So I think we need to explicitly keep everything separated by layer until the very end.\r\n> \r\n> For my custom training script, I implemented the load balancing loss like this, it seems to work well:\r\n> \r\n> ```python\r\n> def load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2) -> float:\r\n> if isinstance(gate_logits, tuple):\r\n> compute_device = gate_logits[0].device\r\n> stacked_gate_logits = torch.stack([layer_gate.to(compute_device) for layer_gate in gate_logits], dim=0)\r\n> \r\n> routing_weights = torch.nn.functional.softmax(stacked_gate_logits, dim=-1) # [num_layers, num_tokens, num_experts]\r\n> _, selected_experts = torch.topk(routing_weights, top_k, dim=-1) # [num_layers, num_tokens, top_k]\r\n> expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts) # [num_layers, num_tokens, top_k, num_experts]\r\n> # For a given token, determine if it was routed to a given expert. Think of this as a collection of top_k-hot vectors.\r\n> expert_mask = torch.max(expert_mask, dim=-2).values.float() # [num_layers, num_tokens, num_experts]\r\n> tokens_per_layer_and_expert = torch.mean(expert_mask, dim=-2) # [num_layers, num_experts]\r\n> router_prob_per_layer_and_expert = torch.mean(routing_weights, dim=-2) # [num_layers, num_experts]\r\n> return torch.mean(tokens_per_layer_and_expert * router_prob_per_layer_and_expert) * num_experts**2\r\n> ```\r\n> \r\n> I'm not 100% sure this is correct, so please check closely. Feel free to use this or reference it if it looks right. Note that the minimum loss with the above implementation is top_k, not sure if that is desired or if there should be a divide by top_k somewhere in there so the minimum loss is always 1.\r\n\r\n@tdrussell @ArthurZucker I also have doubts about whether to concatenate the gate logits of all layers to one tensor. \r\nConsider that there are only two load-unbalanced moe layers, the toekn in the first layer is routed to the first two experts, and the token in the second layer is routed to the last two experts. I did a test. If calculated individually, each layer has a loss of 2.6796(average or sum), and if made a concatenation, the final loss is 2.1001. Concatenating the outputs of each layer into a tensor would make the unbalanced load more balanced, which seems unreasonable.\r\n\r\nHowever, the above test is based on the premise that the experts between layers are independent of each other. In MOE, I don't know if a token is routed to the same expert in different layers. If it is routed to the same expert, then concatenating the routing outputs of each layer into the same tensor is actually the same as computing them separately. 
However if not the final loss will be inaccurate, although the difference is small when the hidden state dimension is large.", "1. @tdrussell, pretty sure my previous answer covers the interpretation of `For each Switch layer, this auxiliary loss is added to the total model loss during training` which means that you sum the loss computed for each layer, and I showed that this can be computed in a vectorized manner. \r\n2. @liangxuZhang I don't understand the concept of first and second layer of unbalanced MoE expert. But in the Mistral implementation, the routing is \"independant\" (compared to switch transformers where you have expert capacity for example). So mentioning my previous comment, concatenating is IMO the way to go 😉 \r\n", "Maybe I am missing something. There are two statistics needed for the loss: expert assignment fractions, and average router probability per expert. If you concatenate all layers at the beginning (it is a tuple of layer logits as input to the function, right?), then these two statistics are computed without regard to which layer the logits came from. This basically says, we don't care if individual layers have unbalanced expert assignments, as long as across the whole model, the assignments are balanced on average.\r\n\r\nIt might be a matter of interpretation as to whether the loss should be that, but other implementations look like they balance the expert assignments per layer. [Here](https://gist.github.com/tdrussell/0529afd8d280fbe2c1c582d8f865e909) is a script comparing the two versions of the loss, showing the difference. I made some contrived logits that are unbalanced for each layer, but cancel out if you combine the logits from all layers.", "I don't mind updating it! Will also prevent having to gather the logits on a single device, compute on the layer device then cpu transfer. It's really up to the community / the results! As I am not sure the paper / mistral shared their implementation! 🤗 \r\n\r\nComputing this way is simpler, feel free to open a PR if you want for the per-layer computation! WOuld be nice to have results!" ]
1,703
1,704
1,704
CONTRIBUTOR
null
### System Info - `transformers` version: 4.37.0.dev0 - Platform: macOS-13.5-arm64-arm-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Two issues were found: ## mixtral's implementation of auxiliary loss is not correct. I think `load_balancing_loss_func` in `modeling_mixtral` computes auxiliary loss incorrectly https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/models/mixtral/modeling_mixtral.py#L77-L119 Auxiliary loss is implemented as multiply fraction of tokens dispatched to expert by fraction of the router probability allocated for expert. The fraction of tokens dispatched to expert is calculated as the number of tokens routed to expert divided by the total number of tokens. The actual implementation is as follows: https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/models/mixtral/modeling_mixtral.py#L109-L113 As we know, the shape of `selected_experts` is `top_k X [batch_size X sequence_length]`,so the shape of `expert_mask` is `[top_k X batch_size X sequence_length, num_experts]`. When we excute `expert_mask = torch.max(expert_mask, dim=-2).values` , the shape of the `expert_mask` becomes `[num_experts]`, which means that whenever a token is routed to an expert, that expert has a value of 1. After the operation of `torch.mean(expert_mask.float(), dim=0)`, `tokens_per_expert` becomes a scaler, which is clearly incorrect, since tokens_per_expert should have a shape of `[num_experts]`. 
**Example** Example Inputs: ```python T = 3 # number of tokens [B X S] num_experts = 8 top_k = 2 # top_2 gate_logits = torch.randn(T,num_experts) routing_weights = torch.nn.functional.softmax(gate_logits, dim=-1) ``` Each row of `routing_weights` represents the probability that a token will be routed to an expert ```python tensor([[0.2551, 0.2519, 0.0357, 0.0830, 0.0897, 0.0981, 0.1565, 0.0299], [0.0728, 0.0593, 0.0948, 0.1708, 0.0098, 0.0848, 0.3884, 0.1192], [0.0292, 0.0387, 0.0696, 0.1331, 0.6699, 0.0049, 0.0442, 0.0104]]) ``` next select experts ```python _, selected_experts = torch.topk(routing_weights, top_k, dim=-1) # treat `top_k` as tokens (shape is `top_k X [batch_size X sequence_length]`) selected_experts = selected_experts.reshape(-1) expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts) ``` we get the following result (shape `[top_k X batch_size X sequence_length, num_experts]`): ```python tensor([[1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0]]) ``` The final results are as follows ```python expert_mask = torch.max(expert_mask, dim=-2).values # tensor([1, 1, 0, 1, 1, 0, 1, 0]) tokens_per_expert = torch.mean(expert_mask.float(), dim=0) # tensor(0.6250) router_prob_per_expert = torch.mean(routing_weights, dim=0) # tensor([0.0746, 0.1031, 0.0448, 0.0804, 0.1823, 0.1216, 0.3112, 0.0820]) overall_loss = torch.sum(tokens_per_expert * router_prob_per_expert.unsqueeze(-1)) # tensor(0.6250) ``` Because the sum of the `router_prob_per_expert` is 1, the final loss value is actually the value of `tokens_per_expert`. As the total number of tokens increases, the value of `tokens_per_expert` will be 1 (each expert has tokens routed to). **Solution** The `tokens_per_expert` calculation should divide the tokens that are routed per expert by the total number of tokens. Specifically, we can sum the columns of `expert_mask` and divide by the total number of tokens. The following is an implementation ```python expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts) expert_mask = expert_mask.reshape(-1, top_k, num_experts) expert_mask = torch.max(expert_mask, dim=-2).values # Compute the percentage of tokens routed to each experts tokens_per_expert = torch.mean(expert_mask.float(), dim=0) / top_k # Compute the average probability of routing to these experts router_prob_per_expert = torch.mean(routing_weights, dim=0) overall_loss = torch.sum(tokens_per_expert * router_prob_per_expert) return overall_loss * num_experts ``` **Example** ```python expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts) expert_mask = expert_mask.reshape(T,top_k,-1) ''' tensor([[[1, 0, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0]], [[0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 1, 0, 0, 0, 0]], [[0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0]]]) ''' expert_mask = torch.max(expert_mask, dim=-2).values ''' tensor([[1, 1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 1, 0], [0, 0, 0, 1, 1, 0, 0, 0]]) ''' tokens_per_expert = torch.mean(expert_mask.float(), dim=0) / top_k # tensor([0.1667, 0.1667, 0.0000, 0.3333, 0.1667, 0.0000, 0.1667, 0.0000]) ``` ## Note On the other hand, in switch transformer ((https://arxiv.org/abs/2101.03961), auxiliary loss should converge to 1 when the load is balanced. However, the top 1 strategy is used in the paper, so the maximum value is taken when calculating tokens_per_expert. 
In the top_k strategy, this corresponds to `top_k*T` tokens being routed to the experts, so tokens_per_expert should be divided by `top_k`. Otherwise the final converged value should be `top_k`. By the way, the unit test should determine if the loss is close to 1 instead of 8. https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/tests/models/mixtral/test_modeling_mixtral.py#L477 ## Should the output of each layer of the gated network be concatenated into a tensor? https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/models/mixtral/modeling_mixtral.py#L98C5-L101C1 Before calculating the auxiliary loss, the routing outputs of the different transformer layers of the expert layer are concatenated into a tensor. This implies that the routing outputs of different layers are mapped to the same expert, and in fact the routing outputs of each layer should be mapped to its own layer of experts. So should the auxiliary loss be calculated for each layer independently, rather than concatenated into a tensor? ### Expected behavior I expect to examine the problem and review my solution for the first issue and have a discussion about the second issue, as I'm not sure if it makes more sense to calculate loss separately.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28255/reactions", "total_count": 11, "+1": 11, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28255/timeline
completed
null
null
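To make the numbers in the discussion above concrete, the corrected formulation (per-expert token fractions divided by `top_k`, scaled by `num_experts`) can be sanity-checked with a small standalone script. This is only an illustrative sketch following the fix proposed in this thread, not the merged transformers implementation; with roughly balanced routing the loss should land near 1, and dropping the division by `top_k` moves the balanced value to `top_k`.

```python
import torch

def load_balancing_loss(gate_logits: torch.Tensor, num_experts: int = 8, top_k: int = 2) -> torch.Tensor:
    # Corrected formulation from the thread: keep a per-expert token fraction
    # instead of collapsing the expert mask to a scalar.
    routing_weights = torch.softmax(gate_logits, dim=-1)                      # [tokens, num_experts]
    _, selected_experts = torch.topk(routing_weights, top_k, dim=-1)          # [tokens, top_k]
    expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts)  # [tokens, top_k, num_experts]
    expert_mask = torch.max(expert_mask, dim=-2).values.float()               # [tokens, num_experts], top_k-hot per token
    tokens_per_expert = expert_mask.mean(dim=0) / top_k                       # fraction of tokens routed to each expert
    router_prob_per_expert = routing_weights.mean(dim=0)                      # average routing probability per expert
    return torch.sum(tokens_per_expert * router_prob_per_expert) * num_experts

# Random logits are balanced on average, so the loss should sit near 1.0;
# without the "/ top_k" the balanced value would be top_k instead.
print(load_balancing_loss(torch.randn(4096, 8)))  # ~1.0
```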
https://api.github.com/repos/huggingface/transformers/issues/28254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28254/comments
https://api.github.com/repos/huggingface/transformers/issues/28254/events
https://github.com/huggingface/transformers/pull/28254
2,056,681,551
PR_kwDOCUB6oc5iyyeq
28,254
Add TinyViT
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? This PR adds TinyViT.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28254/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28254/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28254", "html_url": "https://github.com/huggingface/transformers/pull/28254", "diff_url": "https://github.com/huggingface/transformers/pull/28254.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28254.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28253/comments
https://api.github.com/repos/huggingface/transformers/issues/28253/events
https://github.com/huggingface/transformers/pull/28253
2,056,333,824
PR_kwDOCUB6oc5ixoGp
28,253
Update quantization_config.py
{ "login": "manas95826", "id": 74074241, "node_id": "MDQ6VXNlcjc0MDc0MjQx", "avatar_url": "https://avatars.githubusercontent.com/u/74074241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manas95826", "html_url": "https://github.com/manas95826", "followers_url": "https://api.github.com/users/manas95826/followers", "following_url": "https://api.github.com/users/manas95826/following{/other_user}", "gists_url": "https://api.github.com/users/manas95826/gists{/gist_id}", "starred_url": "https://api.github.com/users/manas95826/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manas95826/subscriptions", "organizations_url": "https://api.github.com/users/manas95826/orgs", "repos_url": "https://api.github.com/users/manas95826/repos", "events_url": "https://api.github.com/users/manas95826/events{/privacy}", "received_events_url": "https://api.github.com/users/manas95826/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,706
1,706
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28253/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28253", "html_url": "https://github.com/huggingface/transformers/pull/28253", "diff_url": "https://github.com/huggingface/transformers/pull/28253.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28253.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28252/comments
https://api.github.com/repos/huggingface/transformers/issues/28252/events
https://github.com/huggingface/transformers/pull/28252
2,056,145,171
PR_kwDOCUB6oc5iw_dU
28,252
Dev/tomato
{ "login": "cliangyu", "id": 45140242, "node_id": "MDQ6VXNlcjQ1MTQwMjQy", "avatar_url": "https://avatars.githubusercontent.com/u/45140242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cliangyu", "html_url": "https://github.com/cliangyu", "followers_url": "https://api.github.com/users/cliangyu/followers", "following_url": "https://api.github.com/users/cliangyu/following{/other_user}", "gists_url": "https://api.github.com/users/cliangyu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cliangyu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cliangyu/subscriptions", "organizations_url": "https://api.github.com/users/cliangyu/orgs", "repos_url": "https://api.github.com/users/cliangyu/repos", "events_url": "https://api.github.com/users/cliangyu/events{/privacy}", "received_events_url": "https://api.github.com/users/cliangyu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,703
1,703
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28252/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28252", "html_url": "https://github.com/huggingface/transformers/pull/28252", "diff_url": "https://github.com/huggingface/transformers/pull/28252.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28252.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28251/comments
https://api.github.com/repos/huggingface/transformers/issues/28251/events
https://github.com/huggingface/transformers/pull/28251
2,056,078,896
PR_kwDOCUB6oc5iwxan
28,251
fix bug:divide by zero in _maybe_log_save_evaluate()
{ "login": "frankenliu", "id": 7486431, "node_id": "MDQ6VXNlcjc0ODY0MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/7486431?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankenliu", "html_url": "https://github.com/frankenliu", "followers_url": "https://api.github.com/users/frankenliu/followers", "following_url": "https://api.github.com/users/frankenliu/following{/other_user}", "gists_url": "https://api.github.com/users/frankenliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/frankenliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankenliu/subscriptions", "organizations_url": "https://api.github.com/users/frankenliu/orgs", "repos_url": "https://api.github.com/users/frankenliu/repos", "events_url": "https://api.github.com/users/frankenliu/events{/privacy}", "received_events_url": "https://api.github.com/users/frankenliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Thanks for fixing @frankenliu !\r\n> \r\n> For future PRs, could you make sure to link to the relevant code in the PR description? This will make it quicker and easier to review.\r\n\r\nOkay !" ]
1,703
1,704
1,704
CONTRIBUTOR
null
Hi, everyone. **This is a new clean PR for previous PR #28102.** set logging_strategy="steps" and logging_steps=10, when that one epoch have 100 steps, the should_log will be set to True in last step. And self._globalstep_last_logged will be assign to self.state.global_step in _maybe_log_save_evaluate() method by line 1917 in trainer.py. the line 1933 in trainer.py , self.callback_handler.on_epoch_end() will keep the should_log=True, then in line 1934 run _maybe_log_save_evaluate() method (self.state.global_step - self._globalstep_last_logged) will be zero in line 2247. @muellerzr @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28251/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28251", "html_url": "https://github.com/huggingface/transformers/pull/28251", "diff_url": "https://github.com/huggingface/transformers/pull/28251.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28251.patch", "merged_at": 1704205183000 }
https://api.github.com/repos/huggingface/transformers/issues/28250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28250/comments
https://api.github.com/repos/huggingface/transformers/issues/28250/events
https://github.com/huggingface/transformers/pull/28250
2,056,044,016
PR_kwDOCUB6oc5iwqGw
28,250
Fix model code to accurately convert fairseq wav2vec2 model
{ "login": "upskyy", "id": 54731898, "node_id": "MDQ6VXNlcjU0NzMxODk4", "avatar_url": "https://avatars.githubusercontent.com/u/54731898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/upskyy", "html_url": "https://github.com/upskyy", "followers_url": "https://api.github.com/users/upskyy/followers", "following_url": "https://api.github.com/users/upskyy/following{/other_user}", "gists_url": "https://api.github.com/users/upskyy/gists{/gist_id}", "starred_url": "https://api.github.com/users/upskyy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/upskyy/subscriptions", "organizations_url": "https://api.github.com/users/upskyy/orgs", "repos_url": "https://api.github.com/users/upskyy/repos", "events_url": "https://api.github.com/users/upskyy/events{/privacy}", "received_events_url": "https://api.github.com/users/upskyy/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@ylacombe @sanchit-gandhi \r\nI opened a PR and I would appreciate it if you could check it out when you have time. \r\nThanks : )\r\n\r\n", "@ylacombe @sanchit-gandhi \r\n\r\nThe test error seems to change as the model parameters change. Do you have any next steps for me to take?\r\n\r\n[workflow test error](https://app.circleci.com/pipelines/github/huggingface/transformers/81359/workflows/60d2764c-fa01-4547-934e-df2cb2915d0d/jobs/1044174)", "@sanchit-gandhi \r\nThank you for a detailed description.\r\nAs you said, the 'official' Wav2Vec2 checkpoints do not exist if `config.conv_dim[-1] != config.hidden_size`.\r\nHowever, if `config.conv_dim[-1]` and `config.hidden_size` are the same like mine, they cannot use the huggingface inference. What if we took steps to make these cases available as well?\r\n\r\nAnd another story, I think the layernorm location of the 'official' Wav2Vec2 fairseq implementation and the huggingface layernorm location are different. So I added the `layer_norm_first` option. What do you think about this?" ]
1,703
1,706
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28174 ## Reasons and benefits: - fairseq model weight can be converted correctly. ## Current state: - fairseq uses nn.Linear only when the dimension of convolution subsampling and the dimension of the encoder block are different. So, if it is the same, nn.Linear is not used. - But the huggingface implementation unconditionally uses nn.Linear, so when converting, unused weight doesn't appears, but in reality, a random weight nn.Linear is added. - fairseq uses the layer_norm position dynamically using the layer_norm_first argument. - However, the implementation of huggingface is different from fairseq because the layer norm position is fixed. Fixed so that this can be controlled as an option. ## Related code **No. 1** **fairseq** - [facebookresearch/fairseq@main/fairseq/models/wav2vec/wav2vec2.py#L324-L328](https://github.com/facebookresearch/fairseq/blob/main/fairseq/models/wav2vec/wav2vec2.py?rgh-link-date=2023-12-21T02%3A52%3A07Z#L324-L328) **huggingface** - [main/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L536](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py?rgh-link-date=2023-12-21T02%3A52%3A07Z#L536) **No. 2** **fairseq** - https://github.com/facebookresearch/fairseq/blob/main/fairseq/models/wav2vec/wav2vec2.py#L1230-L1231 **huggingface** - https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L929 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28250/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28250", "html_url": "https://github.com/huggingface/transformers/pull/28250", "diff_url": "https://github.com/huggingface/transformers/pull/28250.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28250.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28249/comments
https://api.github.com/repos/huggingface/transformers/issues/28249/events
https://github.com/huggingface/transformers/issues/28249
2,056,016,481
I_kwDOCUB6oc56jFJh
28,249
For V30.0 and after, save_pretrained() doesn't correctly save values when using set_input_embeddings()
{ "login": "bjascob", "id": 22728060, "node_id": "MDQ6VXNlcjIyNzI4MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/22728060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bjascob", "html_url": "https://github.com/bjascob", "followers_url": "https://api.github.com/users/bjascob/followers", "following_url": "https://api.github.com/users/bjascob/following{/other_user}", "gists_url": "https://api.github.com/users/bjascob/gists{/gist_id}", "starred_url": "https://api.github.com/users/bjascob/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bjascob/subscriptions", "organizations_url": "https://api.github.com/users/bjascob/orgs", "repos_url": "https://api.github.com/users/bjascob/repos", "events_url": "https://api.github.com/users/bjascob/events{/privacy}", "received_events_url": "https://api.github.com/users/bjascob/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The answer to this is posted on the [forum here](https://discuss.huggingface.co/t/set-input-embeddings-values-not-being-saved-with-save-pretrained/67020).\r\n\r\nIn short, gpt2 has tied input and output weights and on saving, only one of the two values are used. Which one seems to have changed between transformer versions.\r\n" ]
1,703
1,703
1,703
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes / RTX3090 - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The code below works fine in transformers 4.29 but doesn't work correctly starting at 4.30 and including the latest, 4.36.2 I’m preloading the embeddings on an untrained gpt2 model. This works fine and the model trains well, but after saving and reloading, the model it doesn’t contain any of the inserted weights. Here’s some simplified code that doesn’t require training to demonstrate the issue… ``` #!/usr/bin/env python3 import torch import numpy from transformers import AutoConfig, AutoModelForCausalLM if __name__ == '__main__': model_name = 'gpt2' # 'bert-base-cased' # Build the untrained model from config model_config = AutoConfig.from_pretrained(model_name) model = AutoModelForCausalLM.from_config(model_config) print('Original weights', model.get_input_embeddings().weight[0][:5]) # Load embeddings to use in the model embed_weights = AutoModelForCausalLM.from_pretrained(model_name).get_input_embeddings().weight.detach().numpy() #embed_weights = numpy.load('data/embeddings/gpt2_input_embeddings.npz')['embed_weights'] print('Loaded embed_weights', embed_weights[0][:5]) embed_module = torch.nn.Embedding(embed_weights.shape[0], embed_weights.shape[1], _weight=torch.from_numpy(embed_weights), _freeze=True) model.set_input_embeddings(embed_module) print('Modified embeds', model.get_input_embeddings().weight[0][:5]) # Save the model save_directory = '/tmp/custom_model' model.save_pretrained(save_directory) # Reload the model model_reloaded = AutoModelForCausalLM.from_pretrained(save_directory) print('Reloaded embeds', model_reloaded.get_input_embeddings().weight[0][:5]) ``` ### Expected behavior Incorrect output when using transformers 4.36.2 ``` Original weights tensor([-0.0108, -0.0068, -0.0210, 0.0049, -0.0241], grad_fn=<SliceBackward0>) Loaded embed_weights [-0.11010301 -0.03926672 0.03310751 0.13382645 -0.04847569] Modified embeds tensor([-0.1101, -0.0393, 0.0331, 0.1338, -0.0485]) Reloaded embeds tensor([-0.0108, -0.0068, -0.0210, 0.0049, -0.0241], grad_fn=<SliceBackward0>) ``` As you can see, the model internally shows the modified weight values but after saving and reloading it’s back to the uninitialized values. Correct output using transformers 4.29.0 ``` Original weights tensor([0.0343, 0.0311, 0.0101, 0.0098, 0.0070], grad_fn=<SliceBackward0>) Loaded embed_weights [-0.11010301 -0.03926672 0.03310751 0.13382645 -0.04847569] Modified embeds tensor([-0.1101, -0.0393, 0.0331, 0.1338, -0.0485]) Reloaded embeds tensor([-0.1101, -0.0393, 0.0331, 0.1338, -0.0485], grad_fn=<SliceBackward0>) ``` In the older version of the code this works correctly.
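A hedged workaround sketch based on the tied-weights explanation in the comment above (not an official fix; it reuses the variable names from the reproduction script):

```python
# GPT-2 ties the LM head to the input embeddings, so re-tie them after swapping the
# embedding module; save_pretrained() then serializes the injected values rather than
# the stale lm_head copy.
model.set_input_embeddings(embed_module)
model.tie_weights()  # lm_head.weight now aliases embed_module.weight
model.save_pretrained(save_directory)
```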
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28249/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28248/comments
https://api.github.com/repos/huggingface/transformers/issues/28248/events
https://github.com/huggingface/transformers/issues/28248
2,055,841,176
I_kwDOCUB6oc56iaWY
28,248
`transformers==4.36.2` not available from `huggingface` conda channel
{ "login": "kevherro", "id": 10460086, "node_id": "MDQ6VXNlcjEwNDYwMDg2", "avatar_url": "https://avatars.githubusercontent.com/u/10460086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevherro", "html_url": "https://github.com/kevherro", "followers_url": "https://api.github.com/users/kevherro/followers", "following_url": "https://api.github.com/users/kevherro/following{/other_user}", "gists_url": "https://api.github.com/users/kevherro/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevherro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevherro/subscriptions", "organizations_url": "https://api.github.com/users/kevherro/orgs", "repos_url": "https://api.github.com/users/kevherro/repos", "events_url": "https://api.github.com/users/kevherro/events{/privacy}", "received_events_url": "https://api.github.com/users/kevherro/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "On a related note: it appears that upgrading to the latest version of `transformers` (`4.36.2`) resolves the `KeyError: 'mixtral'` [bug](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1/discussions/9). This means that anyone installing `transformers` from the `huggingface` conda channel (as described in the official docs [here](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1/discussions/9)) will run into this bug.\r\n\r\nAs a workaround, folks can get it from the [conda-forge channel](https://anaconda.org/conda-forge/transformers).", "So was your problem with the `mistral` bug or with not able to install `transformers` (`4.36.2`) from conda channel as the latest version there is still `4.33.3`.", "The latter. Until the latest `transformers` version (`4.36.2`) is available from the `huggingface` conda channel, the `mistral` bug is likely unavoidable on that channel.", "Yeah, true since `conda` channel is running quite behind the `conda-forge`.", "In the future we'll likely remove the `huggingface` channel support for conda unless we see explicit demand here; if that's the case, please put a thumbs up here (we'll monitor), otherwise, the conda-forge channel is usually very up to date. \r\n\r\nThanks for raising the issue!", "Thanks for following up!\r\n\r\nI'm channel-indifferent, but the `README` should reflect the channel that offers the latest version of `transformers`.\r\n\r\nI went ahead and opened a PR to use the `conda-forge` channel in the `README`.", "> In the future we'll likely remove the `huggingface` channel support for conda unless we see explicit demand here; if that's the case, please put a thumbs up here (we'll monitor), otherwise, the conda-forge channel is usually very up to date.\r\n> \r\n> Thanks for raising the issue!\r\n\r\nI would prefer pointing my CI pipelines to an official HuggingFace channel if conda is supported way to install." ]
1,703
1,707
1,704
CONTRIBUTOR
null
### System Info - `transformers` version: 4.33.3 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.0 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. Install the latest version of [**Miniconda**](https://docs.conda.io/projects/conda/en/stable/user-guide/install/download.html) 2. Create a `conda` environment: `conda create -n ENVNAME python=3.10` 3. Activate the environment: `conda activate ENVNAME` 4. Search for the `transformers` package: ``` (ENVNAME) % conda search huggingface::transformers Loading channels: done # Name Version Build Channel transformers v4.0.0 pyh39e3cac_0 huggingface ... transformers 4.33.3 py_0 huggingface ``` Note that the highest version available is `4.33.3`. When I attempt to install the _latest_ version (at the time of this writing, `4.36.2`): ``` (ENVNAME) % conda install huggingface::transformers==4.36.2 Channels: - defaults - huggingface - pytorch Platform: osx-64 Collecting package metadata (repodata.json): done Solving environment: failed PackagesNotFoundError: The following packages are not available from current channels: - huggingface::transformers==4.36.2 ``` ### Expected behavior I expected the latest version of `transformers` (`4.36.2`) to be available from the `huggingface` conda channel.
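For reference, the workaround mentioned in the comments is to install from the conda-forge channel:

```bash
conda install -c conda-forge transformers
```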
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28248/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28247/comments
https://api.github.com/repos/huggingface/transformers/issues/28247/events
https://github.com/huggingface/transformers/issues/28247
2,055,792,277
I_kwDOCUB6oc56iOaV
28,247
[WIP] Translate 'Contribute' Section into Chinese
{ "login": "Mayfsz", "id": 145372523, "node_id": "U_kgDOCKo1aw", "avatar_url": "https://avatars.githubusercontent.com/u/145372523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mayfsz", "html_url": "https://github.com/Mayfsz", "followers_url": "https://api.github.com/users/Mayfsz/followers", "following_url": "https://api.github.com/users/Mayfsz/following{/other_user}", "gists_url": "https://api.github.com/users/Mayfsz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mayfsz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mayfsz/subscriptions", "organizations_url": "https://api.github.com/users/Mayfsz/orgs", "repos_url": "https://api.github.com/users/Mayfsz/repos", "events_url": "https://api.github.com/users/Mayfsz/events{/privacy}", "received_events_url": "https://api.github.com/users/Mayfsz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
CONTRIBUTOR
null
Hello, I'm new here and this is my first time contributing. If there are any issues with my work, please let me know. I'll do my best to make any necessary changes. This is part of #26803. My plan is to translate the 6 sections of the 'Contribute' chapter into Chinese. The files that need translation include: - [x] contributing.md #28243 - [ ] add_new_model.md - [ ] add_tensorflow_model.md - [ ] add_new_pipeline.md - [ ] testing.md - [ ] pr_checks.md There are interlinkages among these sections. For instance, in `contributing.md`, there are links referring to `testing.md`. Before translating `testing.md`, I'll temporarily link to the original language file. After translating `testing.md`, I'll update the links accordingly. Similar adjustments will be made for other interconnected files. I'll address these link adjustments at the end: - [ ] Link adjustments for translated files Looking forward to any feedback and suggestions!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28247/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28246/comments
https://api.github.com/repos/huggingface/transformers/issues/28246/events
https://github.com/huggingface/transformers/pull/28246
2,055,780,934
PR_kwDOCUB6oc5iv0Gx
28,246
[WIP] Add CAV-MAE model, initially cloned from VitMAE
{ "login": "rationalism", "id": 813306, "node_id": "MDQ6VXNlcjgxMzMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/813306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rationalism", "html_url": "https://github.com/rationalism", "followers_url": "https://api.github.com/users/rationalism/followers", "following_url": "https://api.github.com/users/rationalism/following{/other_user}", "gists_url": "https://api.github.com/users/rationalism/gists{/gist_id}", "starred_url": "https://api.github.com/users/rationalism/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rationalism/subscriptions", "organizations_url": "https://api.github.com/users/rationalism/orgs", "repos_url": "https://api.github.com/users/rationalism/repos", "events_url": "https://api.github.com/users/rationalism/events{/privacy}", "received_events_url": "https://api.github.com/users/rationalism/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Created an initial PR to add the CAV-MAE model by cloning the visual-audio multimodal model TVLT\r\n\r\nPaper: https://arxiv.org/abs/2210.07839\r\n\r\nOriginal model repo: https://github.com/YuanGongND/cav-mae\r\n\r\n(haven't actually added the CAV-MAE code yet! this is just a scaffold)", "Re-basing this on the ViTMAE model for ease of development (https://huggingface.co./docs/transformers/model_doc/vit_mae)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,706
1,706
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/28236 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28246/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28246", "html_url": "https://github.com/huggingface/transformers/pull/28246", "diff_url": "https://github.com/huggingface/transformers/pull/28246.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28246.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28245/comments
https://api.github.com/repos/huggingface/transformers/issues/28245/events
https://github.com/huggingface/transformers/pull/28245
2,055,766,865
PR_kwDOCUB6oc5ivxKU
28,245
Fix initialization for missing parameters in `from_pretrained` under ZeRO-3
{ "login": "XuehaiPan", "id": 16078332, "node_id": "MDQ6VXNlcjE2MDc4MzMy", "avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuehaiPan", "html_url": "https://github.com/XuehaiPan", "followers_url": "https://api.github.com/users/XuehaiPan/followers", "following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}", "gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions", "organizations_url": "https://api.github.com/users/XuehaiPan/orgs", "repos_url": "https://api.github.com/users/XuehaiPan/repos", "events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}", "received_events_url": "https://api.github.com/users/XuehaiPan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I add a simple test case in this PR. I also test it with and without the patch in this PR:\r\n\r\n- with the patch: PASS\r\n- without the patch (main branch): FAIL", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28245). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds `deepspeed.zero.GatheredParameters` context during initialization of the missing parameters. These parameters are already partitioned, so their sizes are `torch.Size([])`. The `torch.nn.init.func_()` functions will have no effect on these parameters. This PR gathers the parameters before initialization and repartition them after initialization. Fixes #28244 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @pacman100 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
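An illustrative sketch of the gather → initialize → re-partition pattern this PR describes (function and variable names here are illustrative; the real change lives in `modeling_utils.py` and only takes effect when ZeRO-3 is enabled):

```python
import deepspeed
from transformers.integrations.deepspeed import is_deepspeed_zero3_enabled


def init_missing_modules(model, modules_to_init):
    # Sketch only: gather the ZeRO-3 partitioned parameters so the torch.nn.init
    # functions see full tensors, then let DeepSpeed re-partition them on context exit.
    for module in modules_to_init:
        params = list(module.parameters(recurse=False))
        if is_deepspeed_zero3_enabled() and len(params) > 0:
            with deepspeed.zero.GatheredParameters(params, modifier_rank=0):
                model._initialize_weights(module)
        else:
            model._initialize_weights(module)
```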
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28245/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28245/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28245", "html_url": "https://github.com/huggingface/transformers/pull/28245", "diff_url": "https://github.com/huggingface/transformers/pull/28245.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28245.patch", "merged_at": 1704812302000 }
https://api.github.com/repos/huggingface/transformers/issues/28244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28244/comments
https://api.github.com/repos/huggingface/transformers/issues/28244/events
https://github.com/huggingface/transformers/issues/28244
2,055,760,635
I_kwDOCUB6oc56iGr7
28,244
[BUG] `from_pretrained` does not properly initialize missing parameters under DeepSpeed ZeRO-3
{ "login": "XuehaiPan", "id": 16078332, "node_id": "MDQ6VXNlcjE2MDc4MzMy", "avatar_url": "https://avatars.githubusercontent.com/u/16078332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XuehaiPan", "html_url": "https://github.com/XuehaiPan", "followers_url": "https://api.github.com/users/XuehaiPan/followers", "following_url": "https://api.github.com/users/XuehaiPan/following{/other_user}", "gists_url": "https://api.github.com/users/XuehaiPan/gists{/gist_id}", "starred_url": "https://api.github.com/users/XuehaiPan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuehaiPan/subscriptions", "organizations_url": "https://api.github.com/users/XuehaiPan/orgs", "repos_url": "https://api.github.com/users/XuehaiPan/repos", "events_url": "https://api.github.com/users/XuehaiPan/events{/privacy}", "received_events_url": "https://api.github.com/users/XuehaiPan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,704
1,704
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35 - Python version: 3.11.5 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.0 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: True ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Reproduce script: ```python import deepspeed import torch.distributed as dist from transformers import AutoModelForSequenceClassification from transformers.integrations.deepspeed import HfDeepSpeedConfig, is_deepspeed_zero3_enabled def main() -> None: deepspeed.init_distributed() _hfdsc = HfDeepSpeedConfig( { 'zero_optimization': {'stage': 3}, 'train_batch_size': 128, 'train_micro_batch_size_per_gpu': 16, 'gradient_accumulation_steps': None, }, ) assert is_deepspeed_zero3_enabled() model = AutoModelForSequenceClassification.from_pretrained('gpt2') with deepspeed.zero.GatheredParameters(params=[model.score.weight]): if dist.get_rank() == 0: print('weight', model.score.weight) if __name__ == '__main__': main() ``` Commad line: ```console $ torchrun --nnode 1 --nproc-per-node 8 test.py ... [2023-12-25 23:49:31,983] [INFO] [partition_parameters.py:348:__exit__] finished initializing model - num_params = 149, num_elems = 0.12B weight Parameter containing: tensor([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], device='cuda:0', requires_grad=True) ``` ### Expected behavior The parameters that are missing in the checkpoint should be randomly initialized. These parameters are not initialized under the `_fast_init` setting: https://github.com/huggingface/transformers/blob/fa21ead73db473d88f8eca1ec244aba776fd9047/src/transformers/modeling_utils.py#L3477-L3485 After the model successfully being partitioned under ZeRO-3, the parameter size is `torch.Size([])`. It will have no effect on the statement `model.apply(model._initialize_weights)`: https://github.com/huggingface/transformers/blob/fa21ead73db473d88f8eca1ec244aba776fd9047/src/transformers/modeling_utils.py#L3995-L4005
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28244/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28243/comments
https://api.github.com/repos/huggingface/transformers/issues/28243/events
https://github.com/huggingface/transformers/pull/28243
2,055,745,633
PR_kwDOCUB6oc5ivspF
28,243
Translate contributing.md into Chinese
{ "login": "Mayfsz", "id": 145372523, "node_id": "U_kgDOCKo1aw", "avatar_url": "https://avatars.githubusercontent.com/u/145372523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mayfsz", "html_url": "https://github.com/Mayfsz", "followers_url": "https://api.github.com/users/Mayfsz/followers", "following_url": "https://api.github.com/users/Mayfsz/following{/other_user}", "gists_url": "https://api.github.com/users/Mayfsz/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mayfsz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mayfsz/subscriptions", "organizations_url": "https://api.github.com/users/Mayfsz/orgs", "repos_url": "https://api.github.com/users/Mayfsz/repos", "events_url": "https://api.github.com/users/Mayfsz/events{/privacy}", "received_events_url": "https://api.github.com/users/Mayfsz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Part of #28247 Translate contributing.md into Chinese. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @stevhliu @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28243", "html_url": "https://github.com/huggingface/transformers/pull/28243", "diff_url": "https://github.com/huggingface/transformers/pull/28243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28243.patch", "merged_at": 1704321303000 }
https://api.github.com/repos/huggingface/transformers/issues/28242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28242/comments
https://api.github.com/repos/huggingface/transformers/issues/28242/events
https://github.com/huggingface/transformers/pull/28242
2,055,709,028
PR_kwDOCUB6oc5ivk1c
28,242
[Nougat] Fix pipeline
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28242). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? This PR ensures [Nougat](https://huggingface.co./docs/transformers/main/model_doc/nougat) works with the image-to-text pipeline. Fixes #27475
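A hedged usage sketch of what this PR enables (the checkpoint name comes from the Nougat docs; the image path is a placeholder):

```python
from transformers import pipeline

pipe = pipeline("image-to-text", model="facebook/nougat-base")
outputs = pipe("path/to/scanned_page.png")  # placeholder path to a document image
print(outputs[0]["generated_text"])
```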
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28242/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28242", "html_url": "https://github.com/huggingface/transformers/pull/28242", "diff_url": "https://github.com/huggingface/transformers/pull/28242.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28242.patch", "merged_at": 1707729675000 }
https://api.github.com/repos/huggingface/transformers/issues/28241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28241/comments
https://api.github.com/repos/huggingface/transformers/issues/28241/events
https://github.com/huggingface/transformers/issues/28241
2,055,635,695
I_kwDOCUB6oc56hoLv
28,241
why max_position_embeddings = 2048 for llama2
{ "login": "ckfgihub", "id": 44078448, "node_id": "MDQ6VXNlcjQ0MDc4NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/44078448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ckfgihub", "html_url": "https://github.com/ckfgihub", "followers_url": "https://api.github.com/users/ckfgihub/followers", "following_url": "https://api.github.com/users/ckfgihub/following{/other_user}", "gists_url": "https://api.github.com/users/ckfgihub/gists{/gist_id}", "starred_url": "https://api.github.com/users/ckfgihub/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ckfgihub/subscriptions", "organizations_url": "https://api.github.com/users/ckfgihub/orgs", "repos_url": "https://api.github.com/users/ckfgihub/repos", "events_url": "https://api.github.com/users/ckfgihub/events{/privacy}", "received_events_url": "https://api.github.com/users/ckfgihub/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @ckfgihub, thanks for raising this issue! \r\n\r\nThe conversion script was originally written for the original Llama, which I suspect is why we have this discrepancy. cc @ArthurZucker to confirm. If so, it'll be a simple case of updating the condition in the script. Happy to review a PR for anyone who would like to fix this. ", "Yes, Llama1 one had 2048. I don't think that this is an issue as you should only need to update this in the config. Feel free to open a PR for a fix! ", "Hi @ArthurZucker , I can take this one!" ]
1,703
1,706
null
NONE
null
### System Info When I use the command below to convert the Llama 2 7B model, the generated config has max_position_embeddings = 2048, but for Llama 2 it should be 4096. ``` python src/transformers/models/llama/convert_llama_weights_to_hf.py \ --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir /output/path ``` { "architectures": [ "LlamaForCausalLM" ], "attention_bias": false, "bos_token_id": 1, "eos_token_id": 2, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 11008, "max_position_embeddings": 2048, "model_type": "llama", "num_attention_heads": 32, "num_hidden_layers": 32, "num_key_value_heads": 32, "pretraining_tp": 1, "rms_norm_eps": 1e-05, "rope_scaling": null, "rope_theta": 10000.0, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.34.1", "use_cache": true, "vocab_size": 32000 } The 2048 fallback comes from this snippet in https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert_llama_weights_to_hf.py ``` if base > 10000.0: max_position_embeddings = 16384 else: max_position_embeddings = 2048 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the conversion command above on the Llama 2 7B weights and inspect the generated config.json (its content is shown above). ### Expected behavior The converted config should contain max_position_embeddings = 4096 for Llama 2 instead of the hard-coded 2048 fallback in the conversion script.
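A hypothetical sketch of the fix discussed in the comments (the `llama_version` flag is illustrative; the exact argument name is up to whichever PR fixes this):

```python
# Illustrative only: choose the context length per Llama generation instead of
# always falling back to the Llama 1 value.
if base > 10000.0:
    max_position_embeddings = 16384  # Code Llama (rope_theta = 1e6)
elif llama_version == 2:
    max_position_embeddings = 4096   # Llama 2
else:
    max_position_embeddings = 2048   # Llama 1
```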
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28241/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28240/comments
https://api.github.com/repos/huggingface/transformers/issues/28240/events
https://github.com/huggingface/transformers/pull/28240
2,055,633,282
PR_kwDOCUB6oc5ivUdJ
28,240
[`Mixtral` / `Awq`] Add mixtral fused modules for Awq
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for your review @amyeroberts ! I left few comments and open questions, let me know wdyt! 🙏 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28240). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks @amyeroberts for all your reviews! I just added the more general test with a tiny model ! I will merge the PR and address potential comments in a follow up PR ! 🙏 " ]
1,703
1,705
1,705
CONTRIBUTOR
null
# What does this PR do? Adds Mixtral + AWQ fused modules for blazing fast text generation! ```python from transformers import MixtralForCausalLM, AwqConfig, AutoTokenizer model_path = "casperhansen/mixtral-instruct-awq" quantization_config = AwqConfig( do_fuse=True, fuse_max_seq_len=1024, ) model = MixtralForCausalLM.from_pretrained(model_path, quantization_config=quantization_config, device_map="auto") tokenizer = AutoTokenizer.from_pretrained(model_path) tokenizer.pad_token = tokenizer.eos_token inputs = ["Here are the top 10 useful Hindi phrases for your upcoming trip to India:\n1. ", "Hello my name is"] inputs = tokenizer(inputs, return_tensors="pt", padding=True).to(0) outputs = model.generate(**inputs, max_new_tokens=100, do_sample=False) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ``` I introduced the same changes in modeling_utils as https://github.com/huggingface/transformers/pull/28239 for a tiny issue with respect to `modules_to_not_convert` not being handled correctly for fused module. Users needs autoawq>=0.1.8 to use this feature cc @casper-hansen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28240", "html_url": "https://github.com/huggingface/transformers/pull/28240", "diff_url": "https://github.com/huggingface/transformers/pull/28240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28240.patch", "merged_at": 1705066176000 }
https://api.github.com/repos/huggingface/transformers/issues/28239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28239/comments
https://api.github.com/repos/huggingface/transformers/issues/28239/events
https://github.com/huggingface/transformers/pull/28239
2,055,607,043
PR_kwDOCUB6oc5ivOzF
28,239
[`Awq`] Add llava fused modules support
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you soooo much, this PR and #28032 helped me work well now!", "Thanks for your reviews @ArthurZucker ! Merging ! I'll address the points you shared in https://github.com/huggingface/transformers/pull/28239#discussion_r1448621388 in another PR as stated in my reply" ]
1,703
1,705
1,705
CONTRIBUTOR
null
# What does this PR do? This PR adds Llava + fused modules support for blazing fast text generation using Llava + AWQ! This PR also fixes the issue https://github.com/huggingface/transformers/pull/28032#issuecomment-1868262141 pointed out by a user: a custom past key value is passed to the model, and filtering out the indices that fall inside the range of `extended_attention_mask` fixes it. A slow test was also added, and all Llava slow tests pass! cc @casper-hansen
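A hedged usage sketch mirroring the Mixtral fused-modules example above (the checkpoint path is a placeholder, not a specific released repo):

```python
from transformers import AutoProcessor, AwqConfig, LlavaForConditionalGeneration

model_id = "path/to/a-llava-1.5-awq-checkpoint"  # placeholder
quantization_config = AwqConfig(do_fuse=True, fuse_max_seq_len=1024)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, quantization_config=quantization_config, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
```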
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28239/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28239/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28239", "html_url": "https://github.com/huggingface/transformers/pull/28239", "diff_url": "https://github.com/huggingface/transformers/pull/28239.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28239.patch", "merged_at": 1705038954000 }
https://api.github.com/repos/huggingface/transformers/issues/28238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28238/comments
https://api.github.com/repos/huggingface/transformers/issues/28238/events
https://github.com/huggingface/transformers/issues/28238
2,055,478,265
I_kwDOCUB6oc56hBv5
28,238
Finetuning Whisper with adapter with MAML
{ "login": "LYPinASR", "id": 112866899, "node_id": "U_kgDOBro2Uw", "avatar_url": "https://avatars.githubusercontent.com/u/112866899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LYPinASR", "html_url": "https://github.com/LYPinASR", "followers_url": "https://api.github.com/users/LYPinASR/followers", "following_url": "https://api.github.com/users/LYPinASR/following{/other_user}", "gists_url": "https://api.github.com/users/LYPinASR/gists{/gist_id}", "starred_url": "https://api.github.com/users/LYPinASR/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LYPinASR/subscriptions", "organizations_url": "https://api.github.com/users/LYPinASR/orgs", "repos_url": "https://api.github.com/users/LYPinASR/repos", "events_url": "https://api.github.com/users/LYPinASR/events{/privacy}", "received_events_url": "https://api.github.com/users/LYPinASR/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" }, { "id": 6470596964, "node_id": "LA_kwDOCUB6oc8AAAABga15ZA", "url": "https://api.github.com/repos/huggingface/transformers/labels/Audio", "name": "Audio", "color": "760453", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @sanchit-gandhi @ylacombe " ]
1,703
1,706
null
NONE
null
### Feature request MAML is a widely used meta-learning method that learns a good parameter initialization and can effectively cope with low-resource settings. Since Whisper is a pre-trained model, its parameters should not be reinitialized; instead, bottleneck adapters can be inserted into the encoder and decoder layers and trained with MAML. This is a request for example code for fine-tuning Whisper with adapters using MAML, e.g. meta-training on 6 languages and final fine-tuning on 4 other languages. ### Motivation Low-resource ASR. ### Your contribution Anything you need that I can help with.
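A generic illustrative sketch of the bottleneck adapter the request describes (this is not an existing transformers API; the class name and dimensions are assumptions):

```python
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    # Generic residual bottleneck adapter: down-project, non-linearity, up-project.
    # In the requested setup it would be inserted after each Whisper encoder/decoder
    # layer and trained with a MAML outer loop while the Whisper weights stay frozen.
    def __init__(self, hidden_size: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_size)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```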
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28238/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28237/comments
https://api.github.com/repos/huggingface/transformers/issues/28237/events
https://github.com/huggingface/transformers/issues/28237
2,055,447,081
I_kwDOCUB6oc56g6Ip
28,237
Add JAX implementation of Phi
{ "login": "shivance", "id": 51750587, "node_id": "MDQ6VXNlcjUxNzUwNTg3", "avatar_url": "https://avatars.githubusercontent.com/u/51750587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shivance", "html_url": "https://github.com/shivance", "followers_url": "https://api.github.com/users/shivance/followers", "following_url": "https://api.github.com/users/shivance/following{/other_user}", "gists_url": "https://api.github.com/users/shivance/gists{/gist_id}", "starred_url": "https://api.github.com/users/shivance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivance/subscriptions", "organizations_url": "https://api.github.com/users/shivance/orgs", "repos_url": "https://api.github.com/users/shivance/repos", "events_url": "https://api.github.com/users/shivance/events{/privacy}", "received_events_url": "https://api.github.com/users/shivance/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
closed
false
null
[]
[ "@shivance This issue is marked as completed but I can neither find the implementation in the [code tree](https://github.com/huggingface/transformers/tree/main/src/transformers/models/phi), nor load the model via `FlaxAutoModel`. Can you please point me to the implementation? ", "@dfdx The Flax implementation of Phi hasn't been added yet. If you or anyone else in the community would like to contribute, we'd be happy to review a PR! ", "FYI: I'm working on Flax implementation of Phi [here](https://github.com/dfdx/transformers/blob/dfdx/phi-2-flax/src/transformers/models/phi/modeling_flax_phi.py) (based on Flax Llama code). Attention is implemented, the rest should be pretty straightforward, so I expect a PR to be ready in a couple of weeks. " ]
1,703
1,706
1,704
NONE
null
### Model description Transformers already has Phi #26110 . I want to contribute the JAX/Flax implementation of the same. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28237/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28236/comments
https://api.github.com/repos/huggingface/transformers/issues/28236/events
https://github.com/huggingface/transformers/issues/28236
2,055,389,388
I_kwDOCUB6oc56gsDM
28,236
Add CAV-MAE audio-image encoder model
{ "login": "rationalism", "id": 813306, "node_id": "MDQ6VXNlcjgxMzMwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/813306?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rationalism", "html_url": "https://github.com/rationalism", "followers_url": "https://api.github.com/users/rationalism/followers", "following_url": "https://api.github.com/users/rationalism/following{/other_user}", "gists_url": "https://api.github.com/users/rationalism/gists{/gist_id}", "starred_url": "https://api.github.com/users/rationalism/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rationalism/subscriptions", "organizations_url": "https://api.github.com/users/rationalism/orgs", "repos_url": "https://api.github.com/users/rationalism/repos", "events_url": "https://api.github.com/users/rationalism/events{/privacy}", "received_events_url": "https://api.github.com/users/rationalism/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "I totally support this proposal. The code and checkpoints are available at https://github.com/YuanGongND/cav-mae.", "Created an initial PR here by cloning the visual-audio multimodal model TVLT: https://github.com/huggingface/transformers/pull/28246\r\n\r\n(haven't actually added the CAV-MAE code yet! this is just a scaffold)" ]
1,703
1,703
null
NONE
null
### Model description Contrastive Audio-Visual Masked Autoencoder (CAV-MAE) combines two major self-supervised learning frameworks: contrastive learning and masked data modeling, to learn a joint and coordinated audio-visual representation. It appears to be the open source SOTA on the AudioSet and VGGSound datasets (the OmniVec and Facebook MAViL models seem to have never had weights released). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation https://github.com/YuanGongND/cav-mae @YuanGongND
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28236/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28235/comments
https://api.github.com/repos/huggingface/transformers/issues/28235/events
https://github.com/huggingface/transformers/pull/28235
2,055,227,277
PR_kwDOCUB6oc5it-Ll
28,235
if apply_ocr is False, then don't pass any image
{ "login": "fawazahmed0", "id": 20347013, "node_id": "MDQ6VXNlcjIwMzQ3MDEz", "avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fawazahmed0", "html_url": "https://github.com/fawazahmed0", "followers_url": "https://api.github.com/users/fawazahmed0/followers", "following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}", "gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}", "starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions", "organizations_url": "https://api.github.com/users/fawazahmed0/orgs", "repos_url": "https://api.github.com/users/fawazahmed0/repos", "events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}", "received_events_url": "https://api.github.com/users/fawazahmed0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nCould you clarify your PR? You still need to prepare the image for the model (by resizing + normalizing it), even if apply_ocr is set to False.", "sorry, I was using [LayoutLMV1](https://huggingface.co./docs/transformers/model_doc/layoutlm) with LayoutLMv2Processor and in my specific case, I doesn't have to pass any image" ]
1,703
1,703
1,703
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28235/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28235", "html_url": "https://github.com/huggingface/transformers/pull/28235", "diff_url": "https://github.com/huggingface/transformers/pull/28235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28235.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28234/comments
https://api.github.com/repos/huggingface/transformers/issues/28234/events
https://github.com/huggingface/transformers/issues/28234
2,055,222,287
I_kwDOCUB6oc56gDQP
28,234
Phi for sequence classfication gives nan
{ "login": "Alymostafa", "id": 34445689, "node_id": "MDQ6VXNlcjM0NDQ1Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/34445689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alymostafa", "html_url": "https://github.com/Alymostafa", "followers_url": "https://api.github.com/users/Alymostafa/followers", "following_url": "https://api.github.com/users/Alymostafa/following{/other_user}", "gists_url": "https://api.github.com/users/Alymostafa/gists{/gist_id}", "starred_url": "https://api.github.com/users/Alymostafa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Alymostafa/subscriptions", "organizations_url": "https://api.github.com/users/Alymostafa/orgs", "repos_url": "https://api.github.com/users/Alymostafa/repos", "events_url": "https://api.github.com/users/Alymostafa/events{/privacy}", "received_events_url": "https://api.github.com/users/Alymostafa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Alymostafa \r\nI suspect there might be something off with the trust_remote_code version of phi_1-5, can you try with the official transformers version which is pushed here: https://huggingface.co./susnato/phi-1_5_dev ?\r\nCan you also try to train with bf16 mixed precision? i.e. you need to load the model in fp32 and pass `bf16=True` in `TrainingArguments`", "> Hi @Alymostafa\r\n> I suspect there might be something off with the trust_remote_code version of phi_1-5, can you try with the official transformers version which is pushed here: https://huggingface.co./susnato/phi-1_5_dev ?\r\n> Can you also try to train with bf16 mixed precision? i.e. you need to load the model in fp32 and pass `bf16=True` in `TrainingArguments`\r\n\r\nHello, may I ask: is it necessary to load model in fp32 for amp?", "I think it would work for both scenoarios (loading it in fp32 or fp16) but is preferable to load it in fp32", "> I think it would work for both scenoarios (loading it in fp32 or fp16) but is preferable to load it in fp32\r\n\r\nThanks.👍", "@younesbelkada Thanks, it works.\r\n Is it possible to add a note on the Microsoft repo page to use the model you mentioned? That would be useful for other people." ]
1,703
1,706
1,706
NONE
null
### System Info I tried fine-tuning the Phi-1_5 model for a sequence classification task, but it gives ```nan``` in the validation loss and ```no log``` in the training loss. I tried fp16 & float 32, and both have the same output. @younesbelkada ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint,torch_dtype=torch.float32, device_map="cuda", num_labels=1, trust_remote_code=True) args = TrainingArguments( f"{model_name}-finetuned-{task}", evaluation_strategy = "steps", eval_steps=100, save_strategy = "steps", learning_rate=2e-5, per_device_train_batch_size=4, per_device_eval_batch_size=4, num_train_epochs=6, fp16=True, load_best_model_at_end=True, metric_for_best_model=metric_name, ) trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset['validation'], tokenizer=tokenizer, compute_metrics=compute_metrics_for_regression ) trainer.train() ### Expected behavior loss as a numbers not nan or no logging
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28234/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28232/comments
https://api.github.com/repos/huggingface/transformers/issues/28232/events
https://github.com/huggingface/transformers/pull/28232
2,055,031,094
PR_kwDOCUB6oc5itXk1
28,232
something
{ "login": "dirwolf", "id": 92151410, "node_id": "U_kgDOBX4ecg", "avatar_url": "https://avatars.githubusercontent.com/u/92151410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dirwolf", "html_url": "https://github.com/dirwolf", "followers_url": "https://api.github.com/users/dirwolf/followers", "following_url": "https://api.github.com/users/dirwolf/following{/other_user}", "gists_url": "https://api.github.com/users/dirwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/dirwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dirwolf/subscriptions", "organizations_url": "https://api.github.com/users/dirwolf/orgs", "repos_url": "https://api.github.com/users/dirwolf/repos", "events_url": "https://api.github.com/users/dirwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/dirwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks! https://github.com/huggingface/transformers/pull/28231 should fix this issue and also for `text_generation.py`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28232/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28232", "html_url": "https://github.com/huggingface/transformers/pull/28232", "diff_url": "https://github.com/huggingface/transformers/pull/28232.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28232.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28231/comments
https://api.github.com/repos/huggingface/transformers/issues/28231/events
https://github.com/huggingface/transformers/pull/28231
2,055,015,523
PR_kwDOCUB6oc5itUlk
28,231
Update docs related to generate() method in transformers/pipelines
{ "login": "coolyashas", "id": 32161167, "node_id": "MDQ6VXNlcjMyMTYxMTY3", "avatar_url": "https://avatars.githubusercontent.com/u/32161167?v=4", "gravatar_id": "", "url": "https://api.github.com/users/coolyashas", "html_url": "https://github.com/coolyashas", "followers_url": "https://api.github.com/users/coolyashas/followers", "following_url": "https://api.github.com/users/coolyashas/following{/other_user}", "gists_url": "https://api.github.com/users/coolyashas/gists{/gist_id}", "starred_url": "https://api.github.com/users/coolyashas/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/coolyashas/subscriptions", "organizations_url": "https://api.github.com/users/coolyashas/orgs", "repos_url": "https://api.github.com/users/coolyashas/repos", "events_url": "https://api.github.com/users/coolyashas/events{/privacy}", "received_events_url": "https://api.github.com/users/coolyashas/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I believe all necessary changes have been made @stevhliu ", "Hey, can you also apply the changes to the `text2text_generation.py` and `text_generation.py` files please?", "> Hey, can you also apply the changes to the `text2text_generation.py` and `text_generation.py` files please?\r\n\r\n![image](https://github.com/huggingface/transformers/assets/32161167/92b28c41-8fa3-41f8-bdad-e0a15a513872)\r\n\r\nHey, the changes have already been made in the files you mentioned in this PR itself [here](https://github.com/huggingface/transformers/pull/28231/files). Are there any other changes you wanted implemented?\r\n", "@stevhliu I deleted this repo mistakenly. Could you pls reopen it in order to finish the PR?", "@coolyashas Unfortunately we can't reopen this PR, as the repo through which this PR was submitted has been deleted. If you reopen another PR with the same changes I'll be happy to do a quick review. ", "> @coolyashas Unfortunately we can't reopen this PR, as the repo through which this PR was submitted has been deleted. If you reopen another PR with the same changes I'll be happy to do a quick review.\r\n\r\nSure will do, thanks for the reply!" ]
1,703
1,706
1,704
NONE
null
# What does this PR do? Fixes #28224 In the issue, it is suggested to change the link to [this](https://huggingface.co./docs/transformers/generation_strategies), but I find [this](https://huggingface.co./docs/transformers/v4.36.1/en/main_classes/text_generation) to be a better fit. @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28231/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28231", "html_url": "https://github.com/huggingface/transformers/pull/28231", "diff_url": "https://github.com/huggingface/transformers/pull/28231.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28231.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28230
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28230/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28230/comments
https://api.github.com/repos/huggingface/transformers/issues/28230/events
https://github.com/huggingface/transformers/issues/28230
2,055,009,660
I_kwDOCUB6oc56fPV8
28,230
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback) [undefined symbol]
{ "login": "ramyaprabhu-alt", "id": 138777240, "node_id": "U_kgDOCEWSmA", "avatar_url": "https://avatars.githubusercontent.com/u/138777240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ramyaprabhu-alt", "html_url": "https://github.com/ramyaprabhu-alt", "followers_url": "https://api.github.com/users/ramyaprabhu-alt/followers", "following_url": "https://api.github.com/users/ramyaprabhu-alt/following{/other_user}", "gists_url": "https://api.github.com/users/ramyaprabhu-alt/gists{/gist_id}", "starred_url": "https://api.github.com/users/ramyaprabhu-alt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ramyaprabhu-alt/subscriptions", "organizations_url": "https://api.github.com/users/ramyaprabhu-alt/orgs", "repos_url": "https://api.github.com/users/ramyaprabhu-alt/repos", "events_url": "https://api.github.com/users/ramyaprabhu-alt/events{/privacy}", "received_events_url": "https://api.github.com/users/ramyaprabhu-alt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "solved it", "@ramyaprabhu-alt Hi, I meet the same question. Can you tell me how did you solve it? thx", "pip uninstall transformer-engine worked for me", "Not sure why I was tagged. Glad you found a solution" ]
1,703
1,704
1,703
NONE
null
### System Info transformers==4.36.2 torch==2.1.2+cu121 Python==3.10.12 ### Who can help? @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I installed eleutherai's lm-eval-harness and vllm on Pytorch's NGC docker [23:10-py3] container. Once I'm done with the installation, i get the following error message: ``` File "/usr/local/bin/transformers-cli", line 5, in <module> from transformers.commands.transformers_cli import main File "/usr/local/lib/python3.10/dist-packages/transformers/commands/transformers_cli.py", line 25, in <module> from .run import RunCommand File "/usr/local/lib/python3.10/dist-packages/transformers/commands/run.py", line 17, in <module> from ..pipelines import Pipeline, PipelineDataFormat, get_supported_tasks, pipeline File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py", line 49, in <module> from .audio_classification import AudioClassificationPipeline File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/audio_classification.py", line 21, in <module> from .base import PIPELINE_INIT_ARGS, Pipeline File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 34, in <module> from ..modelcard import ModelCard File "/usr/local/lib/python3.10/dist-packages/transformers/modelcard.py", line 48, in <module> from .training_args import ParallelMode File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 69, in <module> from accelerate.state import AcceleratorState, PartialState File "/usr/local/lib/python3.10/dist-packages/accelerate/__init__.py", line 3, in <module> from .accelerator import Accelerator File "/usr/local/lib/python3.10/dist-packages/accelerate/accelerator.py", line 35, in <module> from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state File "/usr/local/lib/python3.10/dist-packages/accelerate/checkpointing.py", line 24, in <module> from .utils import ( File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/__init__.py", line 150, in <module> from .launch import ( File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/launch.py", line 32, in <module> from ..utils.other import is_port_in_use, merge_dicts File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/other.py", line 36, in <module> from .transformer_engine import convert_model File "/usr/local/lib/python3.10/dist-packages/accelerate/utils/transformer_engine.py", line 21, in <module> import transformer_engine.pytorch as te File "/usr/local/lib/python3.10/dist-packages/transformer_engine/pytorch/__init__.py", line 6, in <module> from .module import LayerNormLinear File "/usr/local/lib/python3.10/dist-packages/transformer_engine/pytorch/module/__init__.py", line 6, in <module> from .layernorm_linear import LayerNormLinear File "/usr/local/lib/python3.10/dist-packages/transformer_engine/pytorch/module/layernorm_linear.py", line 15, in <module> from .. import cpp_extensions as tex File "/usr/local/lib/python3.10/dist-packages/transformer_engine/pytorch/cpp_extensions/__init__.py", line 6, in <module> from transformer_engine_extensions import * ImportError: /usr/local/lib/python3.10/dist-packages/transformer_engine_extensions.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE ``` infact i run into this error if I run transformers-cli env too ### Expected behavior It should run without any error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28230/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28229
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28229/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28229/comments
https://api.github.com/repos/huggingface/transformers/issues/28229/events
https://github.com/huggingface/transformers/pull/28229
2,054,981,100
PR_kwDOCUB6oc5itOI3
28,229
small typo
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,703
1,703
CONTRIBUTOR
null
fixing a small typo
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28229/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28229", "html_url": "https://github.com/huggingface/transformers/pull/28229", "diff_url": "https://github.com/huggingface/transformers/pull/28229.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28229.patch", "merged_at": 1703623931000 }
https://api.github.com/repos/huggingface/transformers/issues/28228
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28228/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28228/comments
https://api.github.com/repos/huggingface/transformers/issues/28228/events
https://github.com/huggingface/transformers/issues/28228
2,054,969,638
I_kwDOCUB6oc56fFkm
28,228
Prompt_ids vs. decoder_input_ids in Whisper
{ "login": "vymao", "id": 18024303, "node_id": "MDQ6VXNlcjE4MDI0MzAz", "avatar_url": "https://avatars.githubusercontent.com/u/18024303?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vymao", "html_url": "https://github.com/vymao", "followers_url": "https://api.github.com/users/vymao/followers", "following_url": "https://api.github.com/users/vymao/following{/other_user}", "gists_url": "https://api.github.com/users/vymao/gists{/gist_id}", "starred_url": "https://api.github.com/users/vymao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vymao/subscriptions", "organizations_url": "https://api.github.com/users/vymao/orgs", "repos_url": "https://api.github.com/users/vymao/repos", "events_url": "https://api.github.com/users/vymao/events{/privacy}", "received_events_url": "https://api.github.com/users/vymao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @vymao, `prompt_ids` basically refers to the input tokens or token IDs provided to the model before generating text and it serves as an initial context for the model to begin generating text.\r\n\r\nOn the other hand, `decoder_input_ids` are mainly used in sequence-to-sequence models or in models with a decoder part. For example, Transformer architectures with encoder-decoder structure. So `decoder_input_ids` are inputs provided to the decoder part of a sequence-to-sequence model and they help guide the generation of subsequent tokens in the sequence.\r\n\r\nWhen it comes to generating text via `Whisper`, both `prompt_ids` and `decoder_input_ids` can be used to provide context to guide the model's text generation. Also, the `prefix` feature in `Whisper` mainly uses either `prompt_ids` or `decoder_input_ids` or a combination of both to provide context to the model.\r\n\r\nThe implementation difference between `prompt_ids` and `decoder_input_ids` is that `prompt_ids` usually provide the initial context while `decoder_input_ids` guides the decoding or generation process, mostly when it involves encoder-decoder architectures.\r\n\r\n@254guru\r\n", "Thanks. I'm still slightly confused: when you say `prompt_ids` are used to provide initial context, isn't that still on the decoder side before the actual generated text? How is this different from using `decoder_input_ids`? ", "Maybe for @sanchit-gandhi or @ylacombe ", "Hey @vymao, I'm not a Whisper expert yet but as I understand and as the [documentation](https://huggingface.co./docs/transformers/v4.36.1/en/model_doc/whisper#transformers.WhisperForConditionalGeneration.generate.prompt_ids) suggests, `prompt_ids` are created by using the tokenizer's or the processor's `get_prompt_ids`.\r\n\r\nhttps://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/models/whisper/tokenization_whisper.py#L827-L836\r\n\r\nAs you can see from the code, get_prompt_ids handles the input text so you don't have to worry about special tokens that need to be inserted to tell the model that this text is context and not the start of the transcription.\r\n\r\nThen the code processes the `prompt_ids` in place of the `decoder_input_ids`.\r\n\r\nIn other words, you can use `prompt_ids` obtained from `get_prompt_ids` if you want to pass a context to Whisper. `decoder_input_ids` is much more flexible: you could reproduce `prompt_ids` obtained from `get_prompt_ids` or use it to have a more advanced use of Whisper.\r\n\r\nI hope that it helps!\r\n\r\ncc @sanchit-gandhi or @ArthurZucker if you want to correct me or give a more advanced explanation\r\n\r\n", "Hey @vymao,\r\n\r\nThat's a very good question! In a nutshell, `decoder_input_ids` and `prompt_ids` are the same thing. The allow you to prompt Whisper on a specific prefix just like it's explained here: https://platform.openai.com/docs/guides/speech-to-text/prompting\r\n\r\nPlease use `prompt_ids` for the moment and don't use `decoder_input_ids`. I'm working on improving the docs and usability of Whisper at the moment which this PR: https://github.com/huggingface/transformers/pull/27658", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### Feature request I am trying to understand the different between adding prior text to `prompt_ids` vs. `decoder_input_ids` when [generating text via Whisper](https://huggingface.co./docs/transformers/v4.36.1/en/model_doc/whisper#transformers.WhisperForConditionalGeneration). The documentation is not very clear on how these differ implementation-wise; AFAIK, it seems like using `prompt_ids` will lead to `forced_input_ids` being modified [here](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/whisper/modeling_whisper.py#L2211-L2215). But I'm not sure how exactly using `decoder_input_ids` differs from this. ### Motivation To add context to the whisper transcription. For example, if the model previously transcribed `I have a` in a streaming fashion, I would like to add this as "context" into the model to help it predict the next word. I believe the actual OpenAI Whisper implementation has a feature called "prefix" that does this. ### Your contribution Will try.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28228/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28228/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28227
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28227/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28227/comments
https://api.github.com/repos/huggingface/transformers/issues/28227/events
https://github.com/huggingface/transformers/issues/28227
2,054,953,571
I_kwDOCUB6oc56fBpj
28,227
TrOCR wrong output after fine-tuning on specific tokenizer
{ "login": "magistermilitum", "id": 37591896, "node_id": "MDQ6VXNlcjM3NTkxODk2", "avatar_url": "https://avatars.githubusercontent.com/u/37591896?v=4", "gravatar_id": "", "url": "https://api.github.com/users/magistermilitum", "html_url": "https://github.com/magistermilitum", "followers_url": "https://api.github.com/users/magistermilitum/followers", "following_url": "https://api.github.com/users/magistermilitum/following{/other_user}", "gists_url": "https://api.github.com/users/magistermilitum/gists{/gist_id}", "starred_url": "https://api.github.com/users/magistermilitum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/magistermilitum/subscriptions", "organizations_url": "https://api.github.com/users/magistermilitum/orgs", "repos_url": "https://api.github.com/users/magistermilitum/repos", "events_url": "https://api.github.com/users/magistermilitum/events{/privacy}", "received_events_url": "https://api.github.com/users/magistermilitum/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @NielsRogge ", "This was resolved here: https://github.com/huggingface/transformers/issues/19329#issuecomment-1869144991", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,706
1,706
NONE
null
### System Info I get funny outputs like that: '叡 叡 叡ělělěllioliolio 揮 揮 揮ത്തിത്തിത്തി ὴ ὴ ὴ Dans Dans Dans Wesley Wesley Wesley contaba contaba contaba ...... ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have following Nielsr and replacing the decoder of the pretrained trocr-base-handwritten with a custom one as follows: ```python processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten") processor.tokenizer = AutoTokenizer.from_pretrained("magistermilitum/bert_medieval_multilingual") #or any other bert or roberta processor.save_pretrained('./processor') model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") model.decoder = AutoModelForCausalLM.from_pretrained("magistermilitum/bert_medieval_multilingual", is_decoder=True, add_cross_attention=True) model.config.decoder_start_token_id = processor.tokenizer.cls_token_id model.config.pad_token_id = processor.tokenizer.pad_token_id model.config.vocab_size = model.config.decoder.vocab_size model.save_pretrained("trainer_trocr/model") ``` In this fashion training works fine and at the end I have a model achiving CER = 0.09 and WER = 0.17 on the development set, which are overall good results (in any case better that using only trocr-base-handwritten). But, when using the generated model on inference there is a possible miss-alignement as using in this way: ```python device=torch.device("cpu") processor = TrOCRProcessor.from_pretrained("./processor") model = VisionEncoderDecoderModel.from_pretrained("trainer_trocr/model").to(device) ### load image url = "D:\\trocr\\1538983940018_363.png" image = Image.open(url).convert("RGB") pixel_values = processor(image, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print(generated_ids) print("OCR output:", generated_text) ``` I get funny outputs like that: ``` 叡 叡 叡ělělěllioliolio 揮 揮 揮ത്തിത്തിത്തി ὴ ὴ ὴ Dans Dans Dans Wesley Wesley Wesley contaba contaba contaba ...... ``` Even worse, each time i pass the inference the output change but always using this patttern of 3 by 3 characters. Maybe i have not using inference from a tokenizer-modified model in the proper way but i have not be able to find any readme indication about this. ### Expected behavior a logical output
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28227/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28226
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28226/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28226/comments
https://api.github.com/repos/huggingface/transformers/issues/28226/events
https://github.com/huggingface/transformers/pull/28226
2,054,911,110
PR_kwDOCUB6oc5itA4w
28,226
[Whisper] Fix word-level timestamps error in DTW - `ValueError: too many values to unpack (expected 2)`
{ "login": "guy1992l", "id": 83535508, "node_id": "MDQ6VXNlcjgzNTM1NTA4", "avatar_url": "https://avatars.githubusercontent.com/u/83535508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guy1992l", "html_url": "https://github.com/guy1992l", "followers_url": "https://api.github.com/users/guy1992l/followers", "following_url": "https://api.github.com/users/guy1992l/following{/other_user}", "gists_url": "https://api.github.com/users/guy1992l/gists{/gist_id}", "starred_url": "https://api.github.com/users/guy1992l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/guy1992l/subscriptions", "organizations_url": "https://api.github.com/users/guy1992l/orgs", "repos_url": "https://api.github.com/users/guy1992l/repos", "events_url": "https://api.github.com/users/guy1992l/events{/privacy}", "received_events_url": "https://api.github.com/users/guy1992l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Just tested this, can confirm it fixes the issue 👍 ", "Hey @guy1992l, thanks for opening the PR, this is indeed something that slipped through my net with my PR!\r\n\r\n#28288 is offering the same fix + support with MPS backend so we'll probably go with that PR ! Thanks again for your work on this ", "No problem, then I'm closing this PR. Thanks!" ]
1,703
1,704
1,704
NONE
null
# What does this PR do? When trying to use the ASR with a whisper model, `return_timestamps="word"` and `batch_size>1`, there was an error in the method `_dynamic_time_warping`: ``` File "/transformers/models/whisper/modeling_whisper.py", line 258, in _dynamic_time_warping output_length, input_length = matrix.shape ValueError: too many values to unpack (expected 2) ``` This PR resolves it. It is a small (or better suited for whisper - tiny) fix for checking the `num_frames` for its type (the `np.ndarray` type was being ignored) in order to enter the correct condition before calling the method `_dynamic_time_warping`. If the `num_frames` shouldn't be a `np.ndarray` in the first place, then this PR should be closed and ignored. Otherwise, it is working as expected. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. I think @ylacombe can review as it is a continuation to this great fix (https://github.com/huggingface/transformers/pull/28114) Otherwise, then by the suggestions in the PR: @sanchit-gandhi <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> I hope it is ok, Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28226/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28226", "html_url": "https://github.com/huggingface/transformers/pull/28226", "diff_url": "https://github.com/huggingface/transformers/pull/28226.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28226.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28225
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28225/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28225/comments
https://api.github.com/repos/huggingface/transformers/issues/28225/events
https://github.com/huggingface/transformers/pull/28225
2,054,864,723
PR_kwDOCUB6oc5is3hX
28,225
HF_quantizers draft PR (created in error)
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,703
1,703
CONTRIBUTOR
null
created by error. pls ignore
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28225/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28225", "html_url": "https://github.com/huggingface/transformers/pull/28225", "diff_url": "https://github.com/huggingface/transformers/pull/28225.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28225.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28224
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28224/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28224/comments
https://api.github.com/repos/huggingface/transformers/issues/28224/events
https://github.com/huggingface/transformers/issues/28224
2,054,864,441
I_kwDOCUB6oc56er45
28,224
[DOCS] - Wrong link to model#generative-models in Pipeline docs
{ "login": "BrunoGomesCoelho", "id": 17727737, "node_id": "MDQ6VXNlcjE3NzI3NzM3", "avatar_url": "https://avatars.githubusercontent.com/u/17727737?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BrunoGomesCoelho", "html_url": "https://github.com/BrunoGomesCoelho", "followers_url": "https://api.github.com/users/BrunoGomesCoelho/followers", "following_url": "https://api.github.com/users/BrunoGomesCoelho/following{/other_user}", "gists_url": "https://api.github.com/users/BrunoGomesCoelho/gists{/gist_id}", "starred_url": "https://api.github.com/users/BrunoGomesCoelho/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BrunoGomesCoelho/subscriptions", "organizations_url": "https://api.github.com/users/BrunoGomesCoelho/orgs", "repos_url": "https://api.github.com/users/BrunoGomesCoelho/repos", "events_url": "https://api.github.com/users/BrunoGomesCoelho/events{/privacy}", "received_events_url": "https://api.github.com/users/BrunoGomesCoelho/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @BrunoGomesCoelho, thanks for raising this issue! \r\n\r\nWould you like to open a PR to fix these links? This way you get the github contribution", "I think @coolyashas had attempted this in #28231 - It seems ready to go other than minor comments - Do you want to re-open and submit @coolyashas? ", "> I think @coolyashas had attempted this in #28231 - It seems ready to go other than minor comments - Do you want to re-open and submit @coolyashas?\r\n\r\nYes, I would like to re-open and submit it! @amyeroberts " ]
1,703
1,708
null
NONE
null
### System Info Latest docs on https://huggingface.co./docs/transformers/en/main_classes/pipelines#transformers.TextGenerationPipeline ### Who can help? @stevhliu and @MKhalusova ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Inside the documentation for [pipelines#transformers.TextGenerationPipeline.__call__](https://huggingface.co./docs/transformers/en/main_classes/pipelines#transformers.TextGenerationPipeline.__call__) the docs read: > generate_kwargs — Additional keyword arguments to pass along to the generate method of the model (see the generate method corresponding to your framework [here](https://huggingface.co./docs/transformers/en/main_classes/model#generative-models)). The header part of the link (#generative-models) does not exist in the [main model page](https://huggingface.co./docs/transformers/en/main_classes/model) though - I'm not sure any of the headers there are appropriate actually. The problem is repeated for two other tasks at least: Conversational, text2textgeneration. For the text-generation case, the corresponding docs link is [pipelines/text_generation.py#L198](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/text_generation.py#L198) ### Expected behavior The link sends me to documentations about common model-generation parameters - Maybe [this](https://huggingface.co./docs/transformers/generation_strategies)?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28224/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28223
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28223/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28223/comments
https://api.github.com/repos/huggingface/transformers/issues/28223/events
https://github.com/huggingface/transformers/pull/28223
2,054,843,227
PR_kwDOCUB6oc5iszYp
28,223
Update docs around mixing hf scheduler with deepspeed optimizer
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? Looks like the docs around mixing and matching deepspeed optimizers with HF schedulers were not updated with https://github.com/huggingface/transformers/pull/25863. I think there's probably an analogous update to make in [`accelerate`](https://github.com/huggingface/accelerate/blob/ceb7c699bc36bdb3bbf32cceaaca2d1ceaf62dae/docs/source/usage_guides/deepspeed.md?plain=1#L418-L419) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @muellerzr, @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28223/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28223", "html_url": "https://github.com/huggingface/transformers/pull/28223", "diff_url": "https://github.com/huggingface/transformers/pull/28223.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28223.patch", "merged_at": 1704196097000 }
https://api.github.com/repos/huggingface/transformers/issues/28222
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28222/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28222/comments
https://api.github.com/repos/huggingface/transformers/issues/28222/events
https://github.com/huggingface/transformers/issues/28222
2,054,797,174
I_kwDOCUB6oc56ebd2
28,222
torch.export & save doesn't work
{ "login": "MrParosk", "id": 35773375, "node_id": "MDQ6VXNlcjM1NzczMzc1", "avatar_url": "https://avatars.githubusercontent.com/u/35773375?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MrParosk", "html_url": "https://github.com/MrParosk", "followers_url": "https://api.github.com/users/MrParosk/followers", "following_url": "https://api.github.com/users/MrParosk/following{/other_user}", "gists_url": "https://api.github.com/users/MrParosk/gists{/gist_id}", "starred_url": "https://api.github.com/users/MrParosk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MrParosk/subscriptions", "organizations_url": "https://api.github.com/users/MrParosk/orgs", "repos_url": "https://api.github.com/users/MrParosk/repos", "events_url": "https://api.github.com/users/MrParosk/events{/privacy}", "received_events_url": "https://api.github.com/users/MrParosk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, You are totally right, All hugging face models have outputs that are instances of subclasses of [ModelOutput](https://huggingface.co./docs/transformers/v4.36.1/en/main_classes/output#transformers.utils.ModelOutput) which as you mentioned aren't `JSON serializable`.\r\n\r\nThankfully, You can opt not to use them and just return tuples instead by setting the kwarg `return_dict=False` in the model loading method.\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForSequenceClassification\r\n\r\nhf_model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=5, return_dict=False)\r\ndummy_input = torch.randint(0, 1000, size=(2, 512))\r\n\r\nwith torch.no_grad():\r\n print(hf_model(dummy_input))\r\n\r\n# (tensor([[-0.1271, -0.2149, -0.0251, 0.3655, 0.2407],\r\n# [-0.1448, -0.1787, -0.0458, 0.3854, 0.2657]]),)\r\n\r\nexported_model = torch.export.export(hf_model, args=(dummy_input,))\r\ntorch.export.save(exported_model, \"model.pt\")\r\npt_model = torch.export.load('model.pt')\r\n\r\nwith torch.no_grad():\r\n print(pt_model(dummy_input))\r\n\r\n# (tensor([[-0.1271, -0.2149, -0.0251, 0.3655, 0.2407],\r\n# [-0.1448, -0.1787, -0.0458, 0.3854, 0.2657]]),)\r\n\r\n```", "Yes seems to be working, also works with the Trainer API, thanks a lot! @IbrahimAmin1 " ]
1,703
1,703
1,703
NONE
null
### System Info ## Description PyTorch 2.x supports torch.export.export to export a model into a graph representation and later serialize it with torch.export.save. However, trying to do that with HF transformers models doesn't work (see reproduction section below). I think the main issue is that HF transformers seems to return dataclasses (e.g. DetrObjectDetectionOutput) when doing a forward pass. However, these dataclasses cannot be serialized to json (and hence the errors). Are there any plans to support this in the near future? If so, is it possible to contribute to it? ## System Info - `transformers` version: 4.36.1 - Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35 - Python version: 3.11.7 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Reproduction ```python import torch from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5) dummy_input = torch.randint(0, 1000, size=(2, 512)) exported_model = torch.export.export(model, args=(dummy_input,)) torch.export.save(exported_model, "model.pt") ``` Gives: ```python Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
SequenceClassifierOutput(loss=None, logits=tensor([[-0.1969, 0.6469, -0.4259, 0.0733, -0.4104], [-0.0467, 0.5932, -0.3885, 0.1355, -0.4127]], grad_fn=<AddmmBackward0>), hidden_states=None, attentions=None) Traceback (most recent call last): File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/utils/_pytree.py", line 462, in _treespec_to_json serialized_context = json.dumps(spec.context) ^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/.pyenv/versions/3.11.7/lib/python3.11/json/__init__.py", line 231, in dumps return _default_encoder.encode(obj) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/.pyenv/versions/3.11.7/lib/python3.11/json/encoder.py", line 200, in encode chunks = self.iterencode(o, _one_shot=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/.pyenv/versions/3.11.7/lib/python3.11/json/encoder.py", line 258, in iterencode return _iterencode(o, 0) ^^^^^^^^^^^^^^^^^ File "/home/erik/.pyenv/versions/3.11.7/lib/python3.11/json/encoder.py", line 180, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type type is not JSON serializable The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/erik/code/tutorials/huggingface/example.py", line 12, in <module> torch.export.save(exported_model, "model.pt") File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/export/__init__.py", line 1075, in save save(ep, f, extra_files=extra_files, opset_version=opset_version) File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/_export/__init__.py", line 473, in save serialized_program, serialized_state_dict = serialize(ep, opset_version) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 1438, in serialize ExportedProgramSerializer(opset_version).serialize(exported_program) File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 810, in serialize ).serialize(exported_program.graph_module) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 790, in serialize call_spec=serialize_call_spec(self.call_spec), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/_export/serde/serialize.py", line 194, in serialize_call_spec out_spec=treespec_dumps(call_spec.out_spec, TREESPEC_VERSION) if call_spec.out_spec else "", ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/utils/_pytree.py", line 517, in treespec_dumps json_spec = _SUPPORTED_PROTOCOLS[protocol].treespec_to_json(treespec) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/erik/code/tutorials/huggingface/.venv/lib/python3.11/site-packages/torch/utils/_pytree.py", line 464, in _treespec_to_json raise TypeError( TypeError: Unable to serialize context. Please make the context json dump-able, or register a custom serializer using _register_pytree_node. 
``` Similar results are given when trying other models, like DETR: ```python from transformers import DetrImageProcessor, DetrForObjectDetection import torch from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50", revision="no_timm") model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50", revision="no_timm") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) exported_model = torch.export.export(model, args=(inputs["pixel_values"], inputs["pixel_mask"])) torch.export.save(exported_model, "model.pt") ``` ### Who can help? @ArthurZucker @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction See above ### Expected behavior That we can export the models with torch.export.export & torch.export.save
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28222/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28221
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28221/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28221/comments
https://api.github.com/repos/huggingface/transformers/issues/28221/events
https://github.com/huggingface/transformers/pull/28221
2,054,790,352
PR_kwDOCUB6oc5ispS_
28,221
Make VideoMAEImageProcessor much faster
{ "login": "ikergarcia1996", "id": 18737249, "node_id": "MDQ6VXNlcjE4NzM3MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/18737249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ikergarcia1996", "html_url": "https://github.com/ikergarcia1996", "followers_url": "https://api.github.com/users/ikergarcia1996/followers", "following_url": "https://api.github.com/users/ikergarcia1996/following{/other_user}", "gists_url": "https://api.github.com/users/ikergarcia1996/gists{/gist_id}", "starred_url": "https://api.github.com/users/ikergarcia1996/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ikergarcia1996/subscriptions", "organizations_url": "https://api.github.com/users/ikergarcia1996/orgs", "repos_url": "https://api.github.com/users/ikergarcia1996/repos", "events_url": "https://api.github.com/users/ikergarcia1996/events{/privacy}", "received_events_url": "https://api.github.com/users/ikergarcia1996/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@amyeroberts The suggestion makes sense, although we also need to define `data` if `return_tensors` is None. I have slightly modified the suggestion.\r\n\r\nRegarding extending this logic to other image processors. I think that the root of the issue is here: https://github.com/huggingface/transformers/blob/4a66c0d95219bbeb91bebd010d75a29a5e5f3f43/src/transformers/feature_extraction_utils.py#L138-L141\r\n\r\nThe `as_tensor` function in `_get_is_as_tensor_fns` from the `BatchFeature` class already takes into account that if the data is a list of `np.ndarray` is should first be converted into an `np.array`. However, this is not the case with the VideoMAE data becase it is a `list of lists of ndarrays`. Doing a recursive type check would fix the issue, altough I am not sure if it would break any other functionallity. Maybe I should open a separate PR for this? \r\n\r\n```python\r\ndef recursive_ndarray_check(value):\r\n if isinstance(value, (list, tuple)) and len(value) > 0:\r\n return recursive_ndarray_check(value[0])\r\n return isinstance(value, np.ndarray)\r\n\r\nif recursive_ndarray_check(value):\r\n value = np.array(value)\r\nreturn torch.tensor(value)\r\n```", "@ikergarcia1996 Thanks for the detailed explanation! In principle, I'd be pro adding in the recursive check. However, there's other models e.g. audio models which rely on this logic which might also be affected by this change so we'd have to make sure it's well tested for these models too. \r\n\r\nLet's add it now, we can iron out any issues it might flag for the vision models and then I can ask the audio team if they think they're be any issues. ", "Hi @ikergarcia1996, are you still working on this? It would be great to have this contribution! The next steps would be adding in the recursive logic you proposed. ", "Sorry, @amyeroberts, I had other urgent matters to attend to and forgot about this. I have updated the code; it works for VideoMAEImageProcessor, although I am not sure if it may cause issues with other models. The tests are failing, but the error seems unrelated to the changes in the code.\r\n\r\n```\r\nRuntimeError: Failed to import transformers.models.nat.modeling_nat because of the following error (look up to see its traceback):\r\nE Failed to import NATTEN's CPP backend. This could be due to an invalid/incomplete install. Please uninstall NATTEN (pip uninstall natten) and re-install with the correct torch build: shi-labs.com/natten\r\n```", "@ikergarcia1996 Thanks for updating! \r\n\r\nThe natten issues aren't related to this PR - we recently had issues on our CI runs because of recent package releases and incompatible versions. A fix has been pushed to the `main` branch. Rebasing to include these changes should resolve them. ", "@amyeroberts I have updated my branch, but the tests still fail. ", "@ikergarcia1996 Apolgies. There's been some continued issues with handling compatibility between packages. A final fix _should_ have been merged into main now. Could you try rebasing again?", "@amyeroberts done :smiley: ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28221). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,707
null
CONTRIBUTOR
null
# What does this PR do? Currently, `VideoMAEImageProcessor` is extremely slow. In fact, during inference, it takes longer to preprocess a video than to run the model. After investigating the code, I discovered that this issue can be easily fixed. Currently, the `preprocess()` function in `VideoMAEImageProcessor` creates a `list of list of ndarrays` (as `self._preprocess_image()` returns an ndarray), which is then sent to `BatchFeature` to be converted into a `torch.tensor`. The issue arises because creating a tensor from a list of ndarrays is extremely slow. Additional information on this problem can be found here: [https://github.com/pytorch/pytorch/issues/13918](https://github.com/pytorch/pytorch/issues/13918). By converting the list of ndarrays into a single ndarray, a significant speedup can be achieved. Here is a minimal example for demonstration. We create two processors: one using the current code and another with the modified code. ```python from transformers import VideoMAEImageProcessor as original_image_processor from image_processing_videomae import VideoMAEImageProcessor as new_image_processor import torch IMAGE_MEAN: list[float] = [0.33363932, 0.32581538, 0.31566033] IMAGE_STD: list[float] = [0.1914285, 0.18449214, 0.1853477] image_processor_og = original_image_processor( do_resize=False, do_center_crop=False, do_rescale=True, do_normalize=True, image_mean=IMAGE_MEAN, image_std=IMAGE_STD, ) image_processor_new = new_image_processor( do_resize=False, do_center_crop=False, do_rescale=True, do_normalize=True, image_mean=IMAGE_MEAN, image_std=IMAGE_STD, ) ``` We then create a video of 128 frames with a 200x200 resolution ```python image_sequences = np.asarray( [ np.random.rand(200, 200, 3)*255, ]*128, dtype=np.uint8, ) image_sequences = list(image_sequences) print(len(image_sequences)) print(image_sequences[0].shape) print(image_sequences[0][0][0][:10]) ## OUTPUT 128 (200, 200, 3) [119 129 132] ``` I have run both processors in a Jupyter notebook ```python %%timeit model_inputs_og = image_processor_og( images=image_sequences, input_data_format="channels_last", return_tensors="pt", ) # 1.11 s ± 123 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` ```python %%timeit model_inputs_new = image_processor_new( images=image_sequences, input_data_format="channels_last", return_tensors="pt", ) # 154 ms ± 2.26 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) ``` Just to be sure that nothing changes ```python model_inputs_og["pixel_values"].size() # torch.Size([1, 128, 3, 200, 200]) model_inputs_new["pixel_values"].size() # torch.Size([1, 128, 3, 200, 200]) torch.all(model_inputs_new["pixel_values"]==model_inputs_og["pixel_values"]) # tensor(True) ``` With this small change, we reduce the video preprocessing time from 1.11 seconds to 154 ms :smiley: ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
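As a minimal, self-contained illustration of the effect described above (independent of the processor classes in this PR; the frame count and shapes are arbitrary, and the timings will vary by machine), the slowdown comes entirely from calling `torch.tensor` on a Python list of ndarrays instead of a single stacked ndarray:

```python
import time

import numpy as np
import torch

# 128 synthetic "frames" of shape (200, 200, 3), similar to the video used above
frames = [np.random.rand(200, 200, 3).astype(np.float32) for _ in range(128)]

# Slow path: build the tensor directly from a Python list of ndarrays
start = time.perf_counter()
slow = torch.tensor(frames)
print(f"list of ndarrays -> tensor: {time.perf_counter() - start:.3f} s")

# Fast path: stack into a single ndarray first, then convert once
start = time.perf_counter()
fast = torch.tensor(np.array(frames))
print(f"single ndarray -> tensor: {time.perf_counter() - start:.3f} s")

assert torch.equal(slow, fast)  # identical values, very different construction time
```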
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28221/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28221/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28221", "html_url": "https://github.com/huggingface/transformers/pull/28221", "diff_url": "https://github.com/huggingface/transformers/pull/28221.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28221.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28220
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28220/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28220/comments
https://api.github.com/repos/huggingface/transformers/issues/28220/events
https://github.com/huggingface/transformers/pull/28220
2,054,790,206
PR_kwDOCUB6oc5ispRK
28,220
remove two deprecated function
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @statelesshz Thanks a lot for this PR! Let's wait a bit the response from #25948 🙏 ", "@ydshieh \r\nThanks for the notification.\r\n\r\nThis is handled by someone else currently, so I don't hold strong opinion against it.\r\nMy understanding is if there's breaking change they'll likely to pin down version instead of patching.\r\n\r\nHowever all previous points regarding API stability still holds.\r\nIt'll unavoidably break a lot of existing community code, without clear benefit.\r\nIt there's any strong benefit I missed, please state that in PR desc.", "The points are valid, but I am not sure about ` break a lot of existing community code`.\r\n\r\nKeep something in the codebase we don't use anymore is not great. Having it in a legacy dir is only worth it if that code is really used in the wild (for which I am not sure it's the case). Keep it with a warning is also not ideal - we have a lot of discussion about if to keep warnings or not etc.\r\n\r\nI will move on - and if we eventually see it is indeed used a lot by the community, we can put it back (but in a legacy dir).", "> Keep something in the codebase we don't use anymore is not great\r\n\r\nAgree\r\n\r\n> Having it in a legacy dir is only worth it if that code is really used in the wild (for which I am not sure it's the case)\r\n\r\n`@torch_required` is currently used by 704 files just on GitHub:\r\n- https://github.com/search?q=%40torch_required&type=code\r\n\r\n![image](https://github.com/huggingface/transformers/assets/5203025/ea661c53-5fb7-451d-bd3d-94878786ffd7)\r\n\r\n66 files for `@tf_required`:\r\n- https://github.com/search?q=%40tf_required&type=code\r\n", "Thanks a lot for this information!\r\n\r\nBut some (if not many) of them are actually including the definitions of those 2 functions in a file in their own repository." ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> we are now at 4.37.0.dev0, it's time to say goodbye to `torch_required` and `tf_required` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> cc @ydshieh
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28220/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28220", "html_url": "https://github.com/huggingface/transformers/pull/28220", "diff_url": "https://github.com/huggingface/transformers/pull/28220.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28220.patch", "merged_at": 1704713638000 }
https://api.github.com/repos/huggingface/transformers/issues/28219
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28219/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28219/comments
https://api.github.com/repos/huggingface/transformers/issues/28219/events
https://github.com/huggingface/transformers/pull/28219
2,054,714,019
PR_kwDOCUB6oc5isah-
28,219
Fix trainer saving safetensors: metadata is None
{ "login": "hiyouga", "id": 16256802, "node_id": "MDQ6VXNlcjE2MjU2ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/16256802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hiyouga", "html_url": "https://github.com/hiyouga", "followers_url": "https://api.github.com/users/hiyouga/followers", "following_url": "https://api.github.com/users/hiyouga/following{/other_user}", "gists_url": "https://api.github.com/users/hiyouga/gists{/gist_id}", "starred_url": "https://api.github.com/users/hiyouga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hiyouga/subscriptions", "organizations_url": "https://api.github.com/users/hiyouga/orgs", "repos_url": "https://api.github.com/users/hiyouga/repos", "events_url": "https://api.github.com/users/hiyouga/events{/privacy}", "received_events_url": "https://api.github.com/users/hiyouga/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/hiyouga/LLaMA-Factory/issues/1959 If we use `Trainer` to train a model that does not belong to the `PreTrainedModel` class, such as the `PreTrainedModelwithValuehead` from the TRL library, the trainer will not save the metadata. This leads to errors in reading the metadata when using `AutoModelforCausalLM.from_pretrained` to load the model. https://github.com/huggingface/transformers/blob/29e7a1e1834f331a4916853ecd58549ed78235d6/src/transformers/trainer.py#L2911 https://github.com/huggingface/transformers/blob/29e7a1e1834f331a4916853ecd58549ed78235d6/src/transformers/modeling_utils.py#L3403-L3407 Although it may sound strange to load a model that does not belong to the `PreTrainedModel` class using `AutoModelForCausalLM.from_pretrained`, this approach benefits model loading by utilizing features such as low_cpu_mem_usage if the model checkpoints share the same structure. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @muellerzr @pacman100
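For context on the metadata involved, here is a minimal sketch (not the Trainer patch itself) of writing a safetensors checkpoint that carries the `{"format": "pt"}` entry that `from_pretrained` looks for; the `torch.nn.Linear` module is just a stand-in for a custom, non-`PreTrainedModel` model:

```python
import torch
from safetensors import safe_open
from safetensors.torch import save_file

# Stand-in for a custom (non-PreTrainedModel) module whose weights we want to save
model = torch.nn.Linear(4, 2)

# Save the state dict with the metadata transformers expects when reloading
save_file(model.state_dict(), "model.safetensors", metadata={"format": "pt"})

# The metadata can be inspected without loading any tensors
with safe_open("model.safetensors", framework="pt") as f:
    print(f.metadata())  # {'format': 'pt'}
```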
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28219/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28219/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28219", "html_url": "https://github.com/huggingface/transformers/pull/28219", "diff_url": "https://github.com/huggingface/transformers/pull/28219.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28219.patch", "merged_at": 1704200309000 }
https://api.github.com/repos/huggingface/transformers/issues/28218
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28218/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28218/comments
https://api.github.com/repos/huggingface/transformers/issues/28218/events
https://github.com/huggingface/transformers/issues/28218
2,054,646,556
I_kwDOCUB6oc56d2sc
28,218
Tokenizer adds an additional space after the added token
{ "login": "kitkhai", "id": 71968397, "node_id": "MDQ6VXNlcjcxOTY4Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/71968397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kitkhai", "html_url": "https://github.com/kitkhai", "followers_url": "https://api.github.com/users/kitkhai/followers", "following_url": "https://api.github.com/users/kitkhai/following{/other_user}", "gists_url": "https://api.github.com/users/kitkhai/gists{/gist_id}", "starred_url": "https://api.github.com/users/kitkhai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kitkhai/subscriptions", "organizations_url": "https://api.github.com/users/kitkhai/orgs", "repos_url": "https://api.github.com/users/kitkhai/repos", "events_url": "https://api.github.com/users/kitkhai/events{/privacy}", "received_events_url": "https://api.github.com/users/kitkhai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! Thanks for raising the issue. This is pretty much a duplicated of #26318 and will be fixed by #27883! \r\nYou can already work around this following the tutorial here: https://github.com/huggingface/tokenizers/pull/1357 ", "PR Was merged, let me check if this is fixed! ", "Okay not fixed yet I' ll include it in #27717 ", "Hi @ArthurZucker excited to hear about the progress! :)\r\n\r\nAlso, I realised that if there are multiple added tokens (`abcd `& `gyma`) that could in a sense overlap and be segmented from the same word `gymabcd`. It seems like the current implementation seems to just go **from left to right** and **look for any added token that appears** first. \r\nInstead of doing a left to right, **is there a way to control which added token should take precedence and be segmented first**?\r\n\r\nJust a reminder that I am thinking of Chinese language where words are not separated by space and hence my seemingly weird example of `gymabcd `", "There is no real way to do that yet, I think we check the longest first. " ]
1,703
1,706
null
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer checkpoint = "facebook/nllb-200-distilled-600M" tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang = "eng_Latn", tgt_lang = "zho_Hans") tokenizer.add_tokens(["abcd"]) sent = 'I like to walk abcdgym along the beach' print("tokenizer: ", tokenizer.tokenize(sent)) print("tokenizer: ", tokenizer.decode(tokenizer.encode(sent)[1:-1])) sent = 'I like to walk gymabcd along the beach' print("tokenizer: ", tokenizer.tokenize(sent)) print("tokenizer: ", tokenizer.decode(tokenizer.encode(sent)[1:-1])) ``` ### Expected behavior The output from my code: ![image](https://github.com/huggingface/transformers/assets/71968397/5ba945f5-eb79-4c7d-b82f-8b74c2db0321) The original post where I raised this potential bug and was asked to file an issue would be at: https://discuss.huggingface.co/t/tokenizer-shrinking-recipes/8564/5 For context, I am originally trying to add Chinese tokens to the tokenizer. However, for illustration purposes, I have demonstrated the “bug” in English. Chinese words are not separated by spaces and hence in the example you will see me trying to add a token that is a subword. Evidently, tokenizer.add_tokens() works well if there will always be space after the added token but it doesn’t work as intended if there isn’t space after the added token (where the tokenizer will then introduce the additional space on its own). I read the [docs](https://huggingface.co./docs/transformers/v4.36.1/en/internal/tokenization_utils#transformers.SpecialTokensMixin.add_tokens) and figured out it is probably because the added tokens are isolated before the tokenization algorithm is applied, hence I am not 100% sure this behaviour by the tokenizer is intended.
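A workaround along the lines of the PRs linked in the comments is to register the new token as an `AddedToken` with its whitespace-handling flags set explicitly. Whether this fully removes the extra space depends on the `transformers`/`tokenizers` versions involved, so treat this as a sketch rather than a confirmed fix:

```python
from transformers import AddedToken, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M", src_lang="eng_Latn", tgt_lang="zho_Hans"
)

# Register the token without letting it strip or normalize the surrounding text
tokenizer.add_tokens(AddedToken("abcd", lstrip=False, rstrip=False, normalized=False))

sent = "I like to walk gymabcd along the beach"
print(tokenizer.tokenize(sent))
print(tokenizer.decode(tokenizer.encode(sent)[1:-1]))
```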
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28218/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28217
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28217/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28217/comments
https://api.github.com/repos/huggingface/transformers/issues/28217/events
https://github.com/huggingface/transformers/issues/28217
2,054,557,872
I_kwDOCUB6oc56dhCw
28,217
Huggingface Agents Tutorial has an error when using Huggingface models
{ "login": "mohit-raghavendra", "id": 42749143, "node_id": "MDQ6VXNlcjQyNzQ5MTQz", "avatar_url": "https://avatars.githubusercontent.com/u/42749143?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mohit-raghavendra", "html_url": "https://github.com/mohit-raghavendra", "followers_url": "https://api.github.com/users/mohit-raghavendra/followers", "following_url": "https://api.github.com/users/mohit-raghavendra/following{/other_user}", "gists_url": "https://api.github.com/users/mohit-raghavendra/gists{/gist_id}", "starred_url": "https://api.github.com/users/mohit-raghavendra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mohit-raghavendra/subscriptions", "organizations_url": "https://api.github.com/users/mohit-raghavendra/orgs", "repos_url": "https://api.github.com/users/mohit-raghavendra/repos", "events_url": "https://api.github.com/users/mohit-raghavendra/events{/privacy}", "received_events_url": "https://api.github.com/users/mohit-raghavendra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! I don't have an openAI key, could you try with the latest version of transformers and / or share the exact traceback as where this failed> ", "I tied running the notebook with the main version of Transformers, with the Huggingface StarCoder model. \r\n\r\nI set the Huggingface token (hf_xyz) as follows:\r\n![image](https://github.com/huggingface/transformers/assets/42749143/d917969f-2ef1-4adc-b0bb-7ba6e2349d61)\r\n\r\nThis is the error I am getting now.\r\n\r\n![image](https://github.com/huggingface/transformers/assets/42749143/5b979014-1041-44fa-8e73-bfae84092946)\r\n", "You cannot use a huggingface token for the openai API 😓 You need to also pass your openAI api token to the agent like it's done in the notebook I think", "Isn't StarCoder an HF model? Do I need an OpenAI key to use even that?", "Right sorry for the confusion, your colab is not defaulting on starcoder. \r\nNot 100% sure what's at play, maybe @Wauplin for the colab integration? 🤗 ", "Hi @mohit-raghavendra could you regenerate a token in https://huggingface.co./settings/tokens and save it in your colab secrets? This issue can happen if the token is outdated/revoked for some reason.", "I tried it with a new token, same error. ", "@mohit-raghavendra \r\nI had the same issue with the Error 400 token for the Huggingface StarCoder model. It was resolved by passing the token directly to the agent initialization like:\r\n\r\n`agent = HfAgent(url_endpoint=\"https://api-inference.huggingface.co/models/bigcode/starcoder\", token='hf_my_token')`\r\n\r\n@Wauplin @ArthurZucker \r\nThere is a new error with the StarCoder agent:\r\n![image](https://github.com/huggingface/transformers/assets/54349415/dd940949-4a49-443a-846c-5bc89ac40155)\r\n\r\nTo my mind, it seems like there is a strict limitation on `max_new_tokens` [here](https://github.com/huggingface/transformers/blob/a7cab3c283312b8d4de5df3bbe719971e24f4281/src/transformers/tools/agents.py#L640C40-L640C40)", "Thanks for the workaround @dashapetr \r\n\r\nRegarding the other issue, could you open in a separate issue so that we can discuss it further? It is just to avoid mixing topics. Thanks in advance!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info Transformers v4.29.0 ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1c7MHD-T1forUPGcC_jlwsIptOzpG3hSj ### Expected behavior I believe it is supposed to generate the results that are in the notebook but it returns the error: ValueError: Error 422: {'error': 'Input validation error: `inputs` must have less than 1024 tokens. Given: 1545', 'error_type': 'validation'}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28217/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28216
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28216/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28216/comments
https://api.github.com/repos/huggingface/transformers/issues/28216/events
https://github.com/huggingface/transformers/pull/28216
2,054,515,980
PR_kwDOCUB6oc5irxrb
28,216
Image Feature Extraction pipeline
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28216). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "We will need to make some changes in \r\n\r\nhttps://github.com/amyeroberts/transformers/blob/d048cee154ddb36d4eaf0a072f5ffabd9c0bef2d/tests/models/align/test_modeling_align.py#L43\r\n\r\nto make the test of `\"image-feature-extraction\"` works.", "> Remark (just FYI): the attribute pipeline_model_mapping is generated via a script. And usually we don't modify them manually. It's fine here in this PR. But next time when I need to run the script again, it will add back \"feature-extraction\" anyway (despite the corresponding test will be skip).\r\n\r\n@ydshieh OK, thanks for flagging. Is there anything else I need to do to make sure that `\"image-feature-extraction\"` will be automatically added / not deleted when running this script? ", "> > Remark (just FYI): the attribute pipeline_model_mapping is generated via a script. And usually we don't modify them manually. It's fine here in this PR. But next time when I need to run the script again, it will add back \"feature-extraction\" anyway (despite the corresponding test will be skip).\r\n> \r\n> @ydshieh OK, thanks for flagging. Is there anything else I need to do to make sure that `\"image-feature-extraction\"` will be automatically added / not deleted when running this script?\r\n\r\nNo, no action required from your side :-) in this PR\r\n", "FYI @Rocketknight1 " ]
1,703
1,707
1,707
COLLABORATOR
null
# What does this PR do? Adds an ImageFeatureExtractor pipeline, which uses vision models for feature extraction. This is based off the `FeatureExtractionPipeline`, but is compatible with models which use image processors. ```py from transformers import pipeline extractor = pipeline(model="google/vit-base-patch16-224", task="image-feature-extraction") result = extractor("https://huggingface.co./datasets/Narsil/image_dummy/raw/main/parrots.png", return_tensors=True) ``` This was added instead of adapting the current feature-extractor pipeline because of the conditional handling required for the tokenizers versus image processors ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
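Downstream, the extracted features can be used directly as image embeddings. A small usage sketch follows; it assumes the returned tensor is the backbone's last hidden state with shape `(1, sequence_length, hidden_size)`, which is an assumption about the output format rather than something stated in this PR:

```python
import torch
from transformers import pipeline

extractor = pipeline(model="google/vit-base-patch16-224", task="image-feature-extraction")
url = "https://huggingface.co./datasets/Narsil/image_dummy/raw/main/parrots.png"

# Assumed shape: (1, sequence_length, hidden_size)
features = extractor(url, return_tensors=True)

# Mean-pool over the patch/sequence dimension to get a single embedding vector
embedding = features[0].mean(dim=0)
print(embedding.shape)  # e.g. torch.Size([768]) for a ViT-base backbone
```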
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28216/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28216", "html_url": "https://github.com/huggingface/transformers/pull/28216", "diff_url": "https://github.com/huggingface/transformers/pull/28216.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28216.patch", "merged_at": 1707144608000 }
https://api.github.com/repos/huggingface/transformers/issues/28215
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28215/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28215/comments
https://api.github.com/repos/huggingface/transformers/issues/28215/events
https://github.com/huggingface/transformers/issues/28215
2,054,492,113
I_kwDOCUB6oc56dQ_R
28,215
causal_mask in GPT2Attention should not be broadcastable across the seq_len
{ "login": "bknyaz", "id": 3225366, "node_id": "MDQ6VXNlcjMyMjUzNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/3225366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bknyaz", "html_url": "https://github.com/bknyaz", "followers_url": "https://api.github.com/users/bknyaz/followers", "following_url": "https://api.github.com/users/bknyaz/following{/other_user}", "gists_url": "https://api.github.com/users/bknyaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/bknyaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bknyaz/subscriptions", "organizations_url": "https://api.github.com/users/bknyaz/orgs", "repos_url": "https://api.github.com/users/bknyaz/repos", "events_url": "https://api.github.com/users/bknyaz/events{/privacy}", "received_events_url": "https://api.github.com/users/bknyaz/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @ArthurZucker ", "Hey! The GPT2Attention layer is not part of the public documentation and is not self contained. This is thus expected. You are in luck as #28132 will probably adresse this. " ]
1,703
1,706
null
NONE
null
### System Info Python : 3.8.2 torch : 2.2.0.dev20231207+cu121 transformers : 4.31.0 torchvision : 0.17.0.dev20231207+cu121 cuda version : 12.1 In `transformers.models.gpt2.modeling_gpt2.GPT2Attention` https://github.com/huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_gpt2.py#L202 the `causal_mask` must have the same shape for the last 2 dims, otherwise if the `max_position_embeddings=1` while the sequence length is longer than 1, the resulted attention weights leads to attending the future tokens. See the steps to reproduce the behavior for details. Normally, one wouldn't set `max_position_embeddings=1`, but nevertheless the broadcasting should not happen. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code to reproduce the issue: ``` import torch import transformers import matplotlib.pyplot as plt from mpl_toolkits.axes_grid1 import make_axes_locatable bsz, seq_len, hid = 2, 3, 4 fig, axes = plt.subplots(ncols=3, figsize=(9,2)) for n_positions, ax in zip([1, 2, seq_len], axes): attn = transformers.models.gpt2.modeling_gpt2.GPT2Attention(transformers.GPT2Config(n_embd=hid, n_layer=1, n_head=1, n_positions=n_positions)) ax.axis(False) ax.set_title('attn_weights, n_positions=%d' % n_positions, fontsize=9) attn_input = torch.randn(bsz, seq_len, hid) try: attn_output, _, attn_weights = attn(attn_input, output_attentions=True) except Exception as e: print('n_positions=%d' % n_positions, attn_input.shape, 'attn_output', 'ERROR:', e) continue print('n_positions=%d' % n_positions, 'attn_input', attn_input.shape, 'attn_output', attn_output.shape, 'attn_weights', attn_weights.shape) divider = make_axes_locatable(ax) cax = divider.append_axes('right', size='5%', pad=0.05) im = ax.imshow(attn_weights[0, 0].data.cpu().numpy()) fig.colorbar(im, cax=cax, orientation='vertical') plt.show() ``` Output: ``` n_positions=1 attn_input torch.Size([2, 3, 4]) attn_output torch.Size([2, 3, 4]) attn_weights torch.Size([2, 1, 3, 3]) n_positions=2 torch.Size([2, 3, 4]) attn_output ERROR: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 3 n_positions=3 attn_input torch.Size([2, 3, 4]) attn_output torch.Size([2, 3, 4]) attn_weights torch.Size([2, 1, 3, 3]) ``` ![image](https://github.com/huggingface/transformers/assets/3225366/c644271f-b3c7-44ce-b236-cc857597708e) ### Expected behavior There should be some error message, for example triggered by `assert attn_weights.shape[-2:] == causal_mask.shape[-2:], 'attn_weights and causal_mask must have the same seq length'`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28215/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28214
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28214/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28214/comments
https://api.github.com/repos/huggingface/transformers/issues/28214/events
https://github.com/huggingface/transformers/pull/28214
2,054,484,317
PR_kwDOCUB6oc5irq9X
28,214
Enable instantiating model with pretrained backbone weights
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28214). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @NielsRogge ", "As an update - when running slow tests for DETR, the instantiated model fails on the integration tests: output logits are similar, but not exactly the same. Digging into it 🕵️ ", "Good news- failures were just an artefact of running on the ubuntu machine (they fail running on main too). Running on my mac, everything's good. Just one test value had a 6th decimal place change. Updated in [06ff4f6](https://github.com/huggingface/transformers/pull/28214/commits/06ff4f66ad618e033a4b2d0925b9924649c3f39c). \r\n\r\n@ArthurZucker Only outstanding thing to address is for [this comment](https://github.com/huggingface/transformers/pull/28214#discussion_r1440543327). I've added tests in `test_load_backbone_in_new_model` to include checking loading with a timm backbone too. Is there anything else you'd like me to add / cover? ", "Currently failing tests are flaky. Opened a PR #28458 to address" ]
1,703
1,706
1,706
COLLABORATOR
null
# What does this PR do? At the moment, there's two ways backbone weights can be initialized in a model: * Pretrained weights that are already saved alongside the parent model's weights in a checkpoint * Randomly initialized from config However, the main use-case of backbones is to be able to leverage pretrained weights of a feature extractor which then feed features to the neck/head. It's therefore necessary to be able instantiate a new model e.g. a MaskFormer model with pretrained backbone weights and the rest of the weights randomly initialized. This PR adds a new function `load_backbone` which enables loading a backbone from either a backbone config or using the parent model's config which contains information about the backbone e.g. the checkpoint. To reduce the number of changes, this PR doesn't update the modeling code to use `load_backbone`. Once this PR is added and modeling code updated, it will be possible to load backbones in a variety of ways: Initialize a new MaskFormerModel with a pretrained resnet backbone ```py from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig config = MaskFormerConfig(backbone="microsoft/resnet50", use_pretrained_backbone=True) model = MaskFormerForInstanceSegmentation(config) ``` Initialize a new MaskFormerModel with a randomly initalized resnet backbone ```py from transformers import MaskFormerConfig, MaskFormerForInstanceSegmentation, ResNetConfig config = MaskFormerConfig(backbone="microsoft/resnet50", use_pretrained_backbone=False) model = MaskFormerForInstanceSegmentation(config) ``` Initialize a new MaskFormerModel with a backbone config ```py backbone_config = ResNetConfig() config = MaskFormerConfig(backbone_config=backbone_config) model = MaskFormerForInstanceSegmentation(config) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28214/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28214", "html_url": "https://github.com/huggingface/transformers/pull/28214", "diff_url": "https://github.com/huggingface/transformers/pull/28214.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28214.patch", "merged_at": 1706007710000 }
https://api.github.com/repos/huggingface/transformers/issues/28213
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28213/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28213/comments
https://api.github.com/repos/huggingface/transformers/issues/28213/events
https://github.com/huggingface/transformers/pull/28213
2,054,480,750
PR_kwDOCUB6oc5irqLN
28,213
Remove fast tokenization warning in Data Collators
{ "login": "dbuos", "id": 68216, "node_id": "MDQ6VXNlcjY4MjE2", "avatar_url": "https://avatars.githubusercontent.com/u/68216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dbuos", "html_url": "https://github.com/dbuos", "followers_url": "https://api.github.com/users/dbuos/followers", "following_url": "https://api.github.com/users/dbuos/following{/other_user}", "gists_url": "https://api.github.com/users/dbuos/gists{/gist_id}", "starred_url": "https://api.github.com/users/dbuos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dbuos/subscriptions", "organizations_url": "https://api.github.com/users/dbuos/orgs", "repos_url": "https://api.github.com/users/dbuos/repos", "events_url": "https://api.github.com/users/dbuos/events{/privacy}", "received_events_url": "https://api.github.com/users/dbuos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts" ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? Modifies the data collators (DataCollatorWithPadding, DataCollatorForTokenClassification, DataCollatorForSeq2Seq, ...) to pad without warning the user to instead calling tokenizer.__call__ when using fast tokenizers. The modification in the Data collators: - we save the state of the tokenizer with regards to the warning - disable the warning - pad - restore the state of whether we want to warn or not. See #22638 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19471 and #22638 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ -->
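A minimal sketch of the save/disable/pad/restore pattern described above, assuming the fast-tokenizer padding warning is gated by the tokenizer's `deprecation_warnings` dict; the helper name and exact structure in the merged PR may differ.

```python
def pad_without_fast_tokenizer_warning(tokenizer, *pad_args, **pad_kwargs):
    """Call tokenizer.pad() while temporarily silencing the 'use __call__ instead' warning."""
    if not hasattr(tokenizer, "deprecation_warnings"):
        # Tokenizers without the warning-tracking dict can just pad normally.
        return tokenizer.pad(*pad_args, **pad_kwargs)

    # Save the current state of the warning flag.
    warning_state = tokenizer.deprecation_warnings.get("Asking-to-pad-a-fast-tokenizer", False)
    # Disable the warning by marking it as already emitted.
    tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = True
    try:
        padded = tokenizer.pad(*pad_args, **pad_kwargs)
    finally:
        # Restore whether we want to warn or not.
        tokenizer.deprecation_warnings["Asking-to-pad-a-fast-tokenizer"] = warning_state
    return padded
```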
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28213/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28213", "html_url": "https://github.com/huggingface/transformers/pull/28213", "diff_url": "https://github.com/huggingface/transformers/pull/28213.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28213.patch", "merged_at": 1704220344000 }
https://api.github.com/repos/huggingface/transformers/issues/28212
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28212/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28212/comments
https://api.github.com/repos/huggingface/transformers/issues/28212/events
https://github.com/huggingface/transformers/issues/28212
2,054,411,482
I_kwDOCUB6oc56c9Ta
28,212
Bark logits_warper(...) 'list' object not calllable while sampling
{ "login": "Colt-Zero", "id": 5473644, "node_id": "MDQ6VXNlcjU0NzM2NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5473644?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Colt-Zero", "html_url": "https://github.com/Colt-Zero", "followers_url": "https://api.github.com/users/Colt-Zero/followers", "following_url": "https://api.github.com/users/Colt-Zero/following{/other_user}", "gists_url": "https://api.github.com/users/Colt-Zero/gists{/gist_id}", "starred_url": "https://api.github.com/users/Colt-Zero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Colt-Zero/subscriptions", "organizations_url": "https://api.github.com/users/Colt-Zero/orgs", "repos_url": "https://api.github.com/users/Colt-Zero/repos", "events_url": "https://api.github.com/users/Colt-Zero/events{/privacy}", "received_events_url": "https://api.github.com/users/Colt-Zero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I managed to solve my own problem by editing `transformers/generation/utils.py` to reconstruct the logits_warper lists within the function it's called like this:\r\n```\r\ndef beam_sample:\r\n...\r\nlogits_warper = LogitsProcessorList(logits_warper)\r\n...\r\n\r\ndef sample:\r\n...\r\nlogits_warper = LogitsProcessorList(logits_warper)\r\n...\r\n```\r\nI don't understand why this was necessary. It doesn't make any sense to me.", "cc @ylacombe 🙌", "Hey @Colt-Zero, thanks for opening this issue, I don't seem to be able to reproduce the bug with the following code:\r\n```python\r\nfrom transformers import AutoProcessor, pipeline\r\nimport torch\r\n\r\nprocessor = AutoProcessor.from_pretrained(\"suno/bark\")\r\nsynthesiser = pipeline(\"text-to-speech\", \"suno/bark\", torch_dtype=torch.float16, device=0)\r\n\r\ninputs = processor(\"hey gen\", voice_preset=\"v2/en_speaker_6\").to(\"cuda\")\r\n\r\nforward_params = { \"history_prompt\": inputs[\"history_prompt\"], \"num_beams\":6, \"do_sample\":True}\r\n\r\n\r\nspeech_output = synthesiser(\"hey gen\", forward_params=forward_params)\r\naudio_data = speech_output[\"audio\"]\r\nsampling_rate = speech_output[\"sampling_rate\"]\r\n```\r\nLet me know if that's help\r\n\r\nCan you send a full end-to-end script to reproduce the issue ?\r\nMany thanks\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,708
1,708
NONE
null
### System Info transformers 4.36.2, ubuntu 20.04, python 3.10 ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Import transformers pipeline 2. Setup Bark pipeline and processor: ``` isSmall = "" if isV2 else "-small" synthesiser = pipeline("text-to-speech", f"suno/bark{isSmall}", torch_dtype=torch.float16, framework="pt", device=torch.device(params["mode"])) if better_transformer: synthesiser.model = synthesiser.model.to_bettertransformer() if cpu_offload: synthesiser.model.enable_cpu_offload() processor = AutoProcessor.from_pretrained(f"suno/bark{isSmall}") tts = (synthesiser, processor) ``` 3. Inference Bark: ``` synthesiser, processor = tts inputs = processor(text, voice_preset="v2/en_speaker_6").to(params["mode"]) forward_params = { "history_prompt": inputs["history_prompt"], "num_beams":6, "do_sample":True } speech_output = synthesiser(turn, forward_params=forward_params) audio_data = speech_output["audio"] sampling_rate = speech_output["sampling_rate"] ``` The code seems to work fine up until `speech_output = synthesiser(text, forward_params=forward_params)` Where it gets the error: ``` Traceback (most recent call last): File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/gradio/routes.py", line 427, in run_predict output = await app.get_blocks().process_api( File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/gradio/blocks.py", line 1323, in process_api result = await self.call_function( File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/gradio/blocks.py", line 1067, in call_function prediction = await utils.async_iteration(iterator) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/gradio/utils.py", line 336, in async_iteration return await iterator.__anext__() File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/gradio/utils.py", line 329, in __anext__ return await anyio.to_thread.run_sync( File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync return await get_async_backend().run_sync_in_worker_thread( File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2106, in run_sync_in_worker_thread return await future File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 833, in run result = context.run(func, *args) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/gradio/utils.py", line 312, in run_sync_iterator_async return next(iterator) File "/storage/OngoingWork/Llama-2-13b/text-generation-webui/extensions/history_extension_applier/script.py", line 213, in apply_to_history new_lines[line_ind%2] = apply_extensions(settings['ext_mode'], history[processed_history][line_ind//2][line_ind%2], gradio('interface_state'), is_chat=True) File "/storage/OngoingWork/Llama-2-13b/text-generation-webui/extensions/history_extension_applier/script.py", line 194, in apply_extensions return EXTENSION_MAP[typ](*args, **kwargs) File "/storage/OngoingWork/Llama-2-13b/text-generation-webui/extensions/history_extension_applier/script.py", line 47, in 
_apply_string_extensions text = func(*args, **kwargs) File "/storage/OngoingWork/Llama-2-13b/text-generation-webui/extensions/text_generation_webui_xtts/script.py", line 502, in output_modifier return tts_char(string) File "/storage/OngoingWork/Llama-2-13b/text-generation-webui/extensions/text_generation_webui_xtts/script.py", line 358, in tts_char speech_output = model(text, forward_params=forward_params) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/pipelines/text_to_audio.py", line 182, in __call__ return super().__call__(text_inputs, **forward_params) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1140, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1147, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1046, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/pipelines/text_to_audio.py", line 143, in _forward output = self.model.generate(**model_inputs, **forward_params) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/models/bark/modeling_bark.py", line 1821, in generate semantic_output = self.semantic.generate( File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/models/bark/modeling_bark.py", line 1008, in generate semantic_output = super().generate( File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 1834, in generate return self.beam_sample( File "/storage/OngoingWork/Llama-2-13b/installer_files/env/lib/python3.10/site-packages/transformers/generation/utils.py", line 3533, in beam_sample next_token_scores_processed = logits_warper(input_ids, next_token_scores_processed) TypeError: 'list' object is not callable ``` ### Expected behavior I'm expecting it to be able to be able to properly perform a sampling inference operation with the bark model, but it fails no matter what I've tried. And that includes: No history prompt. No beam sampling (so just normal sampling). I've tried the BarkModel/AutoModel class with a manual call to generate, but I run into the same error. I'm really not sure what's going wrong here. It's supposed to be treating the logits_warper as a LogitsProcessorList retrieved via GenerationMixin._get_logits_warper(...) but it seems to sometimes regard it as a normal list instead. Help would be appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28212/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28211
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28211/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28211/comments
https://api.github.com/repos/huggingface/transformers/issues/28211/events
https://github.com/huggingface/transformers/pull/28211
2,054,322,047
PR_kwDOCUB6oc5irGlQ
28,211
[Phi2] Add support for phi2 models
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> Hey! Thanks for the PR, alright overall, but let's not add the phi2 changes here since it's not supported!\r\n\r\nHi @ArthurZucker, phi2 has the same modeling code as the library phi model...so I guess it's supported. \r\nI also created [susnato/phi-2](https://huggingface.co./susnato/phi-2) to verify the outputs and they are ok.\r\n\r\nI am sorry if I am missing something here...", "Oh no I checked #28163 and thought we needed a lot more changes! If this is enough let's rename this one to `[Phi2] Add support for phi2 models` and close the other one? \r\n(We agree that this is the only changes needed to support phi2 ?)\r\n", "Hi @ArthurZucker, yes this is enough for support of `phi2`(weights conversion script and verify logits in test) but #28163 is for adding more features for both phi1 and phi2 models so we should not close that IMO. ", "Alright! Got it! ", "Hi @ArthurZucker, I have pushed the changes. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28211). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Btw it’s mostly nits we can merge without them as the community is pretty eager to have it!🤗", "Hi @ArthurZucker, sorry I forgot to update the doc file to include the `phi2` example, I guess we should add an example and show it because many people were unable to use it with the library and raised issues in `microsoft/phi-2` (until the weights are in appropriate order we should use `susnato/phi-2` instead of the official repo to load it successfully). If you agree, I would create a quick PR to add the example. WDYT? ", "Should be alright,. we'll do a release this week as well so that everything will be included and the weights will have to be changed as well! cc @gugarosa for visibility " ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes the conversion script for `phi` to include the changes made in this [commit](https://huggingface.co./microsoft/phi-1/commit/bbace889388b8bb0ba6ec3d28dcccca00f962062). It also adds a integration test for `phi2`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gugarosa @ArthurZucker Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28211/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28211", "html_url": "https://github.com/huggingface/transformers/pull/28211", "diff_url": "https://github.com/huggingface/transformers/pull/28211.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28211.patch", "merged_at": 1704611954000 }
https://api.github.com/repos/huggingface/transformers/issues/28210
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28210/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28210/comments
https://api.github.com/repos/huggingface/transformers/issues/28210/events
https://github.com/huggingface/transformers/pull/28210
2,054,213,644
PR_kwDOCUB6oc5iqtqJ
28,210
add context-free grammar constrained decoding (ebnf interface) into research project directory
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello @gante ,\r\n Thank you for your feedback. I think the first option is better. I will close this PR and keep my repo. \r\n My rationales are:\r\n The grammar constrained generation is not yet complete. It doesn't support all tokenizers and unicode. I will continue to improve it. \r\n As a standalone repo, it is easier to maintain and update.\r\n\r\n Thank you for offering to amplify my work on social media. \r\n I will let you know when I have a more complete version of the grammar constrained generation. Now it's working but no thorough testing has been done yet.\r\n I know that it may have unexpected behavior in some cases, tho users may even not notice it.", "@Saibo-creator perfect!\r\n\r\nLet me know when you think it's ready, so we can post on social media about it 🙌 " ]
1,703
1,705
1,705
CONTRIBUTOR
null
This PR is a follow-up from PR #27557 where @gante suggested adding this feature as a research project. It adds a new feature: Context Free Grammar Constrained Decoding, similar to what llama-cpp has. It provides the (almost) same interface as llama-cpp. Fixes #25778 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/pull/27557 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
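The EBNF machinery lives in the linked research project and is not shown in this record. The underlying idea, masking out tokens the grammar does not currently allow at each generation step, can be sketched with a custom `LogitsProcessor`; the fixed allowed-token set below is a placeholder for a real incremental CFG parser state.

```python
import torch
from transformers import LogitsProcessor


class AllowedTokensProcessor(LogitsProcessor):
    """Toy stand-in for a grammar constraint: only `allowed_token_ids` may be sampled.

    A real CFG-constrained processor would recompute the allowed set from the
    parser state after every generated token instead of using a fixed set.
    """

    def __init__(self, allowed_token_ids):
        self.allowed_token_ids = list(allowed_token_ids)

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed_token_ids] = 0.0  # allowed tokens keep their score, others go to -inf
        return scores + mask
```

In use, such a processor would be passed to generation as `model.generate(..., logits_processor=LogitsProcessorList([AllowedTokensProcessor(allowed_ids)]))`.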
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28210/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28210/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28210", "html_url": "https://github.com/huggingface/transformers/pull/28210", "diff_url": "https://github.com/huggingface/transformers/pull/28210.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28210.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28209
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28209/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28209/comments
https://api.github.com/repos/huggingface/transformers/issues/28209/events
https://github.com/huggingface/transformers/pull/28209
2,054,146,717
PR_kwDOCUB6oc5iqePc
28,209
update warning for image processor loading
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @NielsRogge ", "Thanks for opening this @ydshieh! \r\n\r\nAs mentioned in the internal slack channel - I'd rather we had a transition with this: at least two releases instructing the users how to update before downgrading the warnings. Unlike the text_config for CLIP, loading feature extractors for vision is deprecated and it's necessary for users to change their configs to ensure future compatibility. ", "Thanks @amyeroberts Sound fair.\r\n\r\nDo you want me to change the warning to include the instructions and the version number involved (i.e. 2 release after since the current one)?\r\n\r\n", "@amyeroberts I'm not sure it's doable to expect users to update their config? That would upset a lot of users.\r\n\r\nWe have 8000+ `preprocessor_config.json` [files on the hub](https://huggingface.co./models?pipeline_tag=image-classification&library=transformers&sort=trending) => we can't update all of these right?\r\n\r\nSo my suggestion would be to gracefully handle this internally in the `from_pretrained` method without informing users.", "@ydshieh Yes please! \r\n\r\n> We have 8000+ preprocessor_config.json [files on the hub](https://huggingface.co./models?pipeline_tag=image-classification&library=transformers&sort=trending) => we can't update all of these right?\r\n\r\n@NielsRogge Certainly not! :) This is why we will downgrade the warning after a cycle. However, we should give users the opportunity to correctly update their configs as necessary by giving them the necessary information as they will not be supported in the future. ", "I have to change a bit of the warning message - if you would like to take another look." ]
1,703
1,704
1,704
COLLABORATOR
null
# What does this PR do? Similar to #28108, there is not much we or users can do with the files on the Hub. For this image processor loading, IMO, the attributes `image_processor_class` / `feature_extractor_class` are very unlikely to be specified by users directly (they are created automatically during saving so that the auto classes work). So `logger.info` is fine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28209/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28209", "html_url": "https://github.com/huggingface/transformers/pull/28209", "diff_url": "https://github.com/huggingface/transformers/pull/28209.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28209.patch", "merged_at": 1704786698000 }
https://api.github.com/repos/huggingface/transformers/issues/28208
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28208/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28208/comments
https://api.github.com/repos/huggingface/transformers/issues/28208/events
https://github.com/huggingface/transformers/pull/28208
2,054,115,320
PR_kwDOCUB6oc5iqXPA
28,208
Convert `torch_dtype` as `str` to actual torch data type (i.e. "float16" …to `torch.float16`)
{ "login": "KossaiSbai", "id": 35923560, "node_id": "MDQ6VXNlcjM1OTIzNTYw", "avatar_url": "https://avatars.githubusercontent.com/u/35923560?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KossaiSbai", "html_url": "https://github.com/KossaiSbai", "followers_url": "https://api.github.com/users/KossaiSbai/followers", "following_url": "https://api.github.com/users/KossaiSbai/following{/other_user}", "gists_url": "https://api.github.com/users/KossaiSbai/gists{/gist_id}", "starred_url": "https://api.github.com/users/KossaiSbai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KossaiSbai/subscriptions", "organizations_url": "https://api.github.com/users/KossaiSbai/orgs", "repos_url": "https://api.github.com/users/KossaiSbai/repos", "events_url": "https://api.github.com/users/KossaiSbai/events{/privacy}", "received_events_url": "https://api.github.com/users/KossaiSbai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts committed your suggestion, thanks for making me understand the difference between `is` and `isinstance`", "@KossaiSbai There's still some failing tests on the CI run. Rebasing on the most recent version of main should resolve this", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28208). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts I have just merged my branch into master, let's see if the CI tests now pass" ]
1,703
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes [#27087](https://github.com/huggingface/transformers/issues/27087) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28208/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28208", "html_url": "https://github.com/huggingface/transformers/pull/28208", "diff_url": "https://github.com/huggingface/transformers/pull/28208.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28208.patch", "merged_at": 1707746693000 }
https://api.github.com/repos/huggingface/transformers/issues/28207
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28207/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28207/comments
https://api.github.com/repos/huggingface/transformers/issues/28207/events
https://github.com/huggingface/transformers/pull/28207
2,054,075,588
PR_kwDOCUB6oc5iqOfh
28,207
Byebye torch 1.10
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "OK, going to merge. Feel free to drop a more detailed comment on tolerances if it's something needs to be fixed." ]
1,703
1,704
1,704
COLLABORATOR
null
# What does this PR do? Byebye torch 1.10 ☃️
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28207/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 3 }
https://api.github.com/repos/huggingface/transformers/issues/28207/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28207", "html_url": "https://github.com/huggingface/transformers/pull/28207", "diff_url": "https://github.com/huggingface/transformers/pull/28207.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28207.patch", "merged_at": 1704986308000 }
https://api.github.com/repos/huggingface/transformers/issues/28206
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28206/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28206/comments
https://api.github.com/repos/huggingface/transformers/issues/28206/events
https://github.com/huggingface/transformers/issues/28206
2,054,033,239
I_kwDOCUB6oc56bg9X
28,206
ValueError: too many values to unpack (expected 2) when Fine-tuning an LLM
{ "login": "0920GX", "id": 94618005, "node_id": "U_kgDOBaPBlQ", "avatar_url": "https://avatars.githubusercontent.com/u/94618005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/0920GX", "html_url": "https://github.com/0920GX", "followers_url": "https://api.github.com/users/0920GX/followers", "following_url": "https://api.github.com/users/0920GX/following{/other_user}", "gists_url": "https://api.github.com/users/0920GX/gists{/gist_id}", "starred_url": "https://api.github.com/users/0920GX/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/0920GX/subscriptions", "organizations_url": "https://api.github.com/users/0920GX/orgs", "repos_url": "https://api.github.com/users/0920GX/repos", "events_url": "https://api.github.com/users/0920GX/events{/privacy}", "received_events_url": "https://api.github.com/users/0920GX/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @0920GX, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm learning how to fine-tune LLMs. However, I've been encountering a consistent error when executing Trainer.train() in my fine-tuning process, regardless of how I modify my data processing approach. The error is as follows: `--------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-11-3435b262f1ae> in <cell line: 1>() ----> 1 trainer.train() 12 frames /usr/local/lib/python3.10/dist-packages/transformers/modeling_attn_mask_utils.py in _expand_mask(mask, dtype, tgt_len) 152 Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. 153 """ --> 154 bsz, src_len = mask.size() 155 tgt_len = tgt_len if tgt_len is not None else src_len 156 ValueError: too many values to unpack (expected 2)` I've checked the dimensions of my attention mask, and it appears to be two-dimensional, which I believe is correct. Here's an example of one of my data entries: `{'input_ids': tensor([[ 1, 1788, 29901, 29871, 30417, 31915, 30313, 13, 13496, 7451, 29901, 29871, 233, 133, 171, 31076, 233, 176, 164, 235, 194, 145, 30867, 235, 138, 171, 30214, 31482, 30408, 31522, 232, 153, 160, 236, 190, 161, 231, 190, 131, 236, 189, 191, 232, 148, 165, 29973, 13, 15539, 29901, 29871, 30672, 30698, 30287, 233, 160, 178, 232, 137, 186, 233, 182, 150, 30986, 13, 13496, 7451, 29901, 29871, 233, 133, 171, 31522, 30698, 234, 151, 157, 236, 189, 191, 234, 151, 159, 30898, 233, 189, 174, 30898, 232, 148, 165, 29973, 13, 15539, 29901, 29871, 232, 188, 174, 30672, 232, 132, 157, 31209, 234, 182, 153, 31935, 232, 137, 179, 13, 13496, 7451, 29901, 29871, 236, 131, 156, 233, 171, 166, 30392, 29946, 29900, 30824, 30214, 31383, 30698, 235, 191, 140, 232, 136, 186, 3542, 10472, 232, 154, 145, 29973, 13, 15539, 29901, 29871, 30769, 30698, 13, 13496, 7451, 29901, 29871, 31356, 235, 191, 140, 232, 136, 186, 235, 174, 142, 232, 179, 144, 233, 189, 153, 236, 131, 156, 232, 131, 142, 236, 146, 164, 31876, 30214, 3542, 10472, 235, 174, 142, 233, 145, 134, 233, 146, 146, 31803, 30557, 31432, 30210, 233, 165, 160, 234, 165, 191, 13, 5205, 29901, 29871, 31689, 233, 175, 193, 30494, 31134, 13, 13496, 7451, 29901, 29871, 31076, 30210, 236, 131, 156, 233, 171, 166, 31238, 30682, 30651, 30743, 30214, 235, 174, 142, 233, 154, 132, 236, 133, 141, 234, 171, 144, 31184, 30287, 30557, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])}` I am working with a dataset formatted like this: { "id": "chat1", "conversations": [ {"role": "user", "content": "Hello, how are you?"}, {"role": "assistant", "content": "I'm doing great. How can I help you today?"}, {"role": "user", "content": "I'd like to show off how chat templating works!"} ] } I am executing this script in Google Colab. 
Here's my script: ``` import json from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments from torch.utils.data import Dataset import torch file_path = '/content/gdrive/MyDrive/kirin.json' with open(file_path, 'r') as file: data = json.load(file) tokenizer = AutoTokenizer.from_pretrained("/content/gdrive/MyDrive/ColabNotebooks/Taiwan-LLM-7B-v2.0.1-chat") model = AutoModelForCausalLM.from_pretrained("/content/gdrive/MyDrive/ColabNotebooks/Taiwan-LLM-7B-v2.0.1- chat") def format_conversation(conversation): formatted = "" for turn in conversation['conversations']: formatted += turn['role'] + ": " + turn['content'] + "\n" return formatted.strip() formatted_data = [format_conversation(conv) for conv in data] class ConversationDataset(Dataset): def __init__(self, convs, tokenizer, max_length=512): self.tokenizer = tokenizer self.inputs = [tokenizer(text, return_tensors="pt", max_length=max_length, truncation=True, padding="max_length") for text in convs] def __len__(self): return len(self.inputs) def __getitem__(self, idx): return self.inputs[idx] dataset = ConversationDataset(formatted_data, tokenizer) training_args = TrainingArguments( output_dir="./llama2-finetuned", num_train_epochs=3, per_device_train_batch_size=1, save_steps=10_000, save_total_limit=2, ) trainer = Trainer( model=model, args=training_args, train_dataset=dataset ) trainer.train()``` ### Expected behavior The script is expected to run the training process smoothly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28206/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28205
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28205/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28205/comments
https://api.github.com/repos/huggingface/transformers/issues/28205/events
https://github.com/huggingface/transformers/issues/28205
2,053,923,404
I_kwDOCUB6oc56bGJM
28,205
loss did not change in expert loss of mixtral model
{ "login": "LinB203", "id": 62638829, "node_id": "MDQ6VXNlcjYyNjM4ODI5", "avatar_url": "https://avatars.githubusercontent.com/u/62638829?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LinB203", "html_url": "https://github.com/LinB203", "followers_url": "https://api.github.com/users/LinB203/followers", "following_url": "https://api.github.com/users/LinB203/following{/other_user}", "gists_url": "https://api.github.com/users/LinB203/gists{/gist_id}", "starred_url": "https://api.github.com/users/LinB203/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LinB203/subscriptions", "organizations_url": "https://api.github.com/users/LinB203/orgs", "repos_url": "https://api.github.com/users/LinB203/repos", "events_url": "https://api.github.com/users/LinB203/events{/privacy}", "received_events_url": "https://api.github.com/users/LinB203/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "have you fix this problem, i just met the same issue", "> have you fix this problem, i just met the same issue\r\n\r\nI change to use the deepspeed implementation of moe. I don't know how to fix it in HF. HF's loss is always a constant, while deepspeed's converges to change.", "@LinB203 could you point me to the deepspeed implementation of moe? Is it this one : [deepspeed/moe/layer.py](https://github.com/microsoft/DeepSpeed/blob/1787673edc7e45cd79fe10b95f92a02d3eb91505/deepspeed/moe/layer.py#L16) ?", "> @LinB203 could you point me to the deepspeed implementation of moe? Is it this one : [deepspeed/moe/layer.py](https://github.com/microsoft/DeepSpeed/blob/1787673edc7e45cd79fe10b95f92a02d3eb91505/deepspeed/moe/layer.py#L16) ?\r\n\r\nYes.", "@LinB203 when i run demo ,cannot find preprocessor_config.json on LanguageBind_Video_merge and LanguageBind_Video_Image huggingface.co, can you offen it ?\r\n\r\nTraceback (most recent call last):\r\n File \"/home/guowl/.local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/home/guowl/.local/lib/python3.10/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co./LanguageBind/LanguageBind_Video_merge/resolve/main/preprocessor_config.json\r\n", "Alright, I'll test again for the masked language modeling loss which should no be None. \r\nThe expert loss is gonna be fixed by #28256 " ]
1,703
1,704
1,703
NONE
null
### System Info FROM THIS: https://github.com/huggingface/transformers/issues/28093 If I understand correctly, the loss consists of two parts: the autoregressive categorical loss on the one hand, and the expert load-balancing loss on the other. I print each of the two losses before they are added into the final one; however, only the first one has a `grad_fn` of `NllLossBackward0`, while the second one is just a tensor without a `grad_fn`. That's why the `grad_fn` of the final loss is `NllLossBackward0` instead of `AddBackward`. And the expert-balancing loss doesn't change during the training process. Maybe the changes you see are just due to the autoregressive categorical loss. ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction print each of the above two losses before adding them into the final one ### Expected behavior The expert-balancing loss should converge and have a `grad_fn`.
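For background on why the auxiliary term should carry a `grad_fn`: a Switch-Transformers-style load-balancing loss is built from the softmax of the router logits, which is differentiable, even though the hard top-k expert assignment is not. The snippet below is a schematic single-layer version for illustration, not necessarily how the library computes it.

```python
import torch
import torch.nn.functional as F


def load_balancing_loss_sketch(router_logits: torch.Tensor, top_k: int = 2) -> torch.Tensor:
    """Schematic auxiliary loss for one MoE layer.

    router_logits: [num_tokens, num_experts] raw gate outputs.
    Returns a scalar that is differentiable w.r.t. the router weights.
    """
    num_experts = router_logits.shape[-1]
    routing_probs = F.softmax(router_logits, dim=-1)        # differentiable
    _, selected = torch.topk(routing_probs, top_k, dim=-1)  # hard assignment, no gradient
    expert_mask = F.one_hot(selected, num_experts).float()  # [tokens, top_k, experts]

    tokens_per_expert = expert_mask.mean(dim=(0, 1))  # f_e: fraction of routing slots per expert
    prob_per_expert = routing_probs.mean(dim=0)       # P_e: mean router probability per expert

    return num_experts * torch.sum(tokens_per_expert * prob_per_expert)


logits = torch.randn(16, 8, requires_grad=True)  # 16 tokens, 8 experts
aux = load_balancing_loss_sketch(logits)
print(aux.grad_fn is not None)  # True: the auxiliary term is differentiable
```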
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28205/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28204
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28204/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28204/comments
https://api.github.com/repos/huggingface/transformers/issues/28204/events
https://github.com/huggingface/transformers/issues/28204
2,053,897,139
I_kwDOCUB6oc56a_uz
28,204
```RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Invalid --2a886c8_slice_builder_worker_addresses specified. Expected 4 worker addresses, got 1.``` when using kaggle tpu
{ "login": "yongjer", "id": 54315206, "node_id": "MDQ6VXNlcjU0MzE1MjA2", "avatar_url": "https://avatars.githubusercontent.com/u/54315206?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yongjer", "html_url": "https://github.com/yongjer", "followers_url": "https://api.github.com/users/yongjer/followers", "following_url": "https://api.github.com/users/yongjer/following{/other_user}", "gists_url": "https://api.github.com/users/yongjer/gists{/gist_id}", "starred_url": "https://api.github.com/users/yongjer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yongjer/subscriptions", "organizations_url": "https://api.github.com/users/yongjer/orgs", "repos_url": "https://api.github.com/users/yongjer/repos", "events_url": "https://api.github.com/users/yongjer/events{/privacy}", "received_events_url": "https://api.github.com/users/yongjer/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "cc @muellerzr ", "Hi! Looking at the first 2 lines in the error log (after `The above exception was the direct cause of the following exception:`), it looks like the error occurs at the very early stage, which is relevant with `xla_spawn.py` rather than inside the modeling part.\r\n\r\n```bash\r\n File \"/kaggle/input/4-36-2/examples/pytorch/xla_spawn.py\", line 83, in <module>\r\n main()\r\n File \"/kaggle/input/4-36-2/examples/pytorch/xla_spawn.py\", line 79, in main\r\n xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)\r\n```\r\n\r\nI am not sure if this is relevant to `transformers` (or even `accelerate`), but let's wait @muellerz back.\r\n\r\n(there are people having the same issue, for example [here](https://github.com/google/jax/issues/13260))", "I'd recommend using `accelerate launch` and not using `python`. We've done work to make sure that spawn should still work fine, can you try running:\r\n\r\n```bash\r\n!accelerate launch --tpu --num_processes 8 \\\r\n/kaggle/input/examples/pytorch/text-classification/run_classification.py \\\r\n--model_name_or_path ckip-joint/bloom-1b1-zh \\\r\n--do_train \\\r\n--do_eval \\\r\n--output_dir /kaggle/working/ \\\r\n--train_file /kaggle/input/dataset/train.csv \\\r\n--validation_file /kaggle/input/dataset/test.csv \\\r\n--text_column_names sentence \\\r\n--label_column_name label \\\r\n--overwrite_output_dir \\\r\n--torch_compile \\\r\n--fp16 \\\r\n--auto_find_batch_size\r\n```", "But isn't that accelerate can only use with no-trainer version script? Or I misunderstood ?", "No, accelerate is used always now as Accelerate is the heart of the Trainer :) ", "and here is the error:\r\n```\r\nWARNING:accelerate.commands.launch:The following values were not passed to `accelerate launch` and had defaults used instead:\r\n\t`--num_machines` was set to a value of `1`\r\n\t`--mixed_precision` was set to a value of `'no'`\r\n\t`--dynamo_backend` was set to a value of `'no'`\r\nTo avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.\r\n2023-12-22 16:23:10.038215: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-12-22 16:23:10.038284: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-12-22 16:23:10.040151: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py\", line 47, in main\r\n args.func(args)\r\n File \"/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 1013, in launch_command\r\n tpu_launcher(args)\r\n File \"/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 745, in tpu_launcher\r\n if not hasattr(mod, args.main_training_function):\r\nTypeError: hasattr(): attribute name must be string\r\n```\r\n\r\n> I'd recommend using `accelerate launch` and not using `python`. 
We've done work to make sure that spawn should still work fine, can you try running:\r\n> \r\n> ```shell\r\n> !accelerate launch --tpu --num_processes 8 \\\r\n> /kaggle/input/examples/pytorch/text-classification/run_classification.py \\\r\n> --model_name_or_path ckip-joint/bloom-1b1-zh \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --output_dir /kaggle/working/ \\\r\n> --train_file /kaggle/input/dataset/train.csv \\\r\n> --validation_file /kaggle/input/dataset/test.csv \\\r\n> --text_column_names sentence \\\r\n> --label_column_name label \\\r\n> --overwrite_output_dir \\\r\n> --torch_compile \\\r\n> --fp16 \\\r\n> --auto_find_batch_size\r\n> ```\r\n\r\n", "You need to define a `main_training_function` as part of the command, so try doing the following:\r\n\r\n(And thanks for your patience!)\r\n\r\n```bash\r\n!accelerate launch --tpu --num_processes 8 \\\r\n--main_training_function main \\\r\n/kaggle/input/examples/pytorch/text-classification/run_classification.py \\\r\n--model_name_or_path ckip-joint/bloom-1b1-zh \\\r\n--do_train \\\r\n--do_eval \\\r\n--output_dir /kaggle/working/ \\\r\n--train_file /kaggle/input/dataset/train.csv \\\r\n--validation_file /kaggle/input/dataset/test.csv \\\r\n--text_column_names sentence \\\r\n--label_column_name label \\\r\n--overwrite_output_dir \\\r\n--torch_compile \\\r\n--fp16 \\\r\n--auto_find_batch_size\r\n```", "```\r\nWARNING:accelerate.commands.launch:The following values were not passed to `accelerate launch` and had defaults used instead:\r\n\t`--num_machines` was set to a value of `1`\r\n\t`--mixed_precision` was set to a value of `'no'`\r\n\t`--dynamo_backend` was set to a value of `'no'`\r\nTo avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py\", line 47, in main\r\n args.func(args)\r\n File \"/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 1013, in launch_command\r\n tpu_launcher(args)\r\n File \"/usr/local/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 744, in tpu_launcher\r\n mod = importlib.import_module(mod_name)\r\n File \"/usr/local/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1004, in _find_and_load_unlocked\r\nModuleNotFoundError: No module named 'run_classification'\r\n```\r\n\r\n> You need to define a `main_training_function` as part of the command, so try doing the following:\r\n> \r\n> (And thanks for your patience!)\r\n> \r\n> ```shell\r\n> !accelerate launch --tpu --num_processes 8 \\\r\n> --main_training_function main \\\r\n> /kaggle/input/examples/pytorch/text-classification/run_classification.py \\\r\n> --model_name_or_path ckip-joint/bloom-1b1-zh \\\r\n> --do_train \\\r\n> --do_eval \\\r\n> --output_dir /kaggle/working/ \\\r\n> --train_file /kaggle/input/dataset/train.csv \\\r\n> --validation_file /kaggle/input/dataset/test.csv \\\r\n> --text_column_names sentence \\\r\n> --label_column_name label \\\r\n> --overwrite_output_dir \\\r\n> --torch_compile \\\r\n> --fp16 \\\r\n> --auto_find_batch_size\r\n> ```\r\n\r\n", "Thanks, I'll try and take a look at 
this though it will probably not be until after the holidays", "thanks for your help, wish you happy holidays ", "I encountered the same error on kaggle's TPU VM v3-8 when using lit-gpt project's example finetuning code today. is there any progress on this issue?" ]
1,703
1,706
null
NONE
null
### System Info - `transformers` version: 4.37.0.dev0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.36 - Python version: 3.10.13 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.4 (tpu) - Jax version: 0.4.17 - JaxLib version: 0.4.17 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction run in Kaggle with tpu VM v3-8 accelerator ``` !python3 \ /kaggle/input/examples/pytorch/xla_spawn.py --num_cores 8 \ /kaggle/input/examples/pytorch/text-classification/run_classification.py \ --model_name_or_path ckip-joint/bloom-1b1-zh \ --do_train \ --do_eval \ --output_dir /kaggle/working/ \ --train_file /kaggle/input/dataset/train.csv \ --validation_file /kaggle/input/dataset/test.csv \ --text_column_names sentence \ --label_column_name label \ --overwrite_output_dir \ --torch_compile \ --fp16 \ --auto_find_batch_size ``` error: ``` 2023-12-22 12:57:36.401695: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-12-22 12:57:36.401755: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-12-22 12:57:36.403454: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered WARNING:root:Unsupported nprocs (8), ignoring... /usr/local/lib/python3.10/site-packages/jax/_src/cloud_tpu_init.py:75: UserWarning: JAX_USE_PJRT_C_API_ON_TPU no longer has an effect (the new TPU runtime is always enabled now). Unset the environment variable to disable this warning. warnings.warn( /usr/local/lib/python3.10/site-packages/jax/_src/cloud_tpu_init.py:75: UserWarning: JAX_USE_PJRT_C_API_ON_TPU no longer has an effect (the new TPU runtime is always enabled now). Unset the environment variable to disable this warning. warnings.warn( /usr/local/lib/python3.10/site-packages/jax/_src/cloud_tpu_init.py:75: UserWarning: JAX_USE_PJRT_C_API_ON_TPU no longer has an effect (the new TPU runtime is always enabled now). Unset the environment variable to disable this warning. warnings.warn( /usr/local/lib/python3.10/site-packages/jax/_src/cloud_tpu_init.py:75: UserWarning: JAX_USE_PJRT_C_API_ON_TPU no longer has an effect (the new TPU runtime is always enabled now). Unset the environment variable to disable this warning. 
warnings.warn( 2023-12-22 12:57:43.680522: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-12-22 12:57:43.680522: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-12-22 12:57:43.680585: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-12-22 12:57:43.680589: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-12-22 12:57:43.682210: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2023-12-22 12:57:43.682211: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2023-12-22 12:57:43.727851: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-12-22 12:57:43.727908: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-12-22 12:57:43.728235: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2023-12-22 12:57:43.728283: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2023-12-22 12:57:43.729554: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2023-12-22 12:57:43.729728: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered concurrent.futures.process._RemoteTraceback: """ Traceback (most recent call last): File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 246, in _process_worker r = call_item.fn(*call_item.args, **call_item.kwargs) File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 205, in _process_chunk return [fn(*args) for args in chunk] File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 205, in <listcomp> return [fn(*args) for args in chunk] File "/usr/local/lib/python3.10/site-packages/torch_xla/runtime.py", line 82, in wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 56, in _run_thread_per_device initializer_fn(local_rank, local_world_size) File "/usr/local/lib/python3.10/site-packages/torch_xla/runtime.py", line 82, in wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 115, in initialize_multiprocess devices = 
xm.get_xla_supported_devices() File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_model.py", line 91, in get_xla_supported_devices xla_devices = _DEVICES.value File "/usr/local/lib/python3.10/site-packages/torch_xla/utils/utils.py", line 29, in value self._value = self._gen_fn() File "/usr/local/lib/python3.10/site-packages/torch_xla/core/xla_model.py", line 19, in <lambda> _DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices()) RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Invalid --2a886c8_slice_builder_worker_addresses specified. Expected 4 worker addresses, got 1. """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/kaggle/input/4-36-2/examples/pytorch/xla_spawn.py", line 83, in <module> main() File "/kaggle/input/4-36-2/examples/pytorch/xla_spawn.py", line 79, in main xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) File "/usr/local/lib/python3.10/site-packages/torch_xla/runtime.py", line 82, in wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 38, in spawn return pjrt.spawn(fn, nprocs, start_method, args) File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 202, in spawn run_multiprocess(spawn_fn, start_method=start_method) File "/usr/local/lib/python3.10/site-packages/torch_xla/runtime.py", line 82, in wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 159, in run_multiprocess replica_results = list( File "/usr/local/lib/python3.10/site-packages/torch_xla/_internal/pjrt.py", line 160, in <genexpr> itertools.chain.from_iterable( File "/usr/local/lib/python3.10/concurrent/futures/process.py", line 575, in _chain_from_iterable_of_lists for element in iterable: File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator yield _result_or_cancel(fs.pop()) File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel return fut.result(timeout) File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result return self.__get_result() File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception RuntimeError: Bad StatusOr access: UNKNOWN: TPU initialization failed: Invalid --2a886c8_slice_builder_worker_addresses specified. Expected 4 worker addresses, got 1. ``` ### Expected behavior train without error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28204/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28203
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28203/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28203/comments
https://api.github.com/repos/huggingface/transformers/issues/28203/events
https://github.com/huggingface/transformers/pull/28203
2,053,810,910
PR_kwDOCUB6oc5ipUb4
28,203
fix FA2 when using quantization
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28203). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi, this behavior of casting to `float16` is still present in these models - `whisper`, `bart`, `phi`, `distilbert`...I will create a PR to fix it." ]
1,703
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? 1. when I use QLoRA+Flash Attention with bf16, I get the following warning of casting to `float16` which is incorrect as it should be casting to bf16: ```bash The input hidden states seems to be silently casted in float32, this might be related to the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in torch.float16. ``` This PR resolves this issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28203/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28203", "html_url": "https://github.com/huggingface/transformers/pull/28203", "diff_url": "https://github.com/huggingface/transformers/pull/28203.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28203.patch", "merged_at": 1703560001000 }
https://api.github.com/repos/huggingface/transformers/issues/28202
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28202/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28202/comments
https://api.github.com/repos/huggingface/transformers/issues/28202/events
https://github.com/huggingface/transformers/pull/28202
2,053,675,463
PR_kwDOCUB6oc5io2r6
28,202
Fix the check of models supporting FA/SDPA not run
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "i create the pr(https://github.com/huggingface/transformers/pull/28201) check failed casue by `tests/utils/test_doc_samples.py` faild.\r\nshould i merge this commit, repush pr?", "@inkinworld Yes, once this PR has been merged into `main` then you can rebase on main or merge in the main branch into yours to include this fix and then push your branch again to trigger a re-run of the CI " ]
1,703
1,703
1,703
COLLABORATOR
null
# What does this PR do? The original check (as test methods) in `tests/utils/test_doc_samples.py` won't run (and didn't run like in #28133) as that file is not impacted by the modeling files (in terms of import relation). Those 2 checks don't need `torch` at all and could be done in the first stage of check (`check_repository_consistency`)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28202/reactions", "total_count": 4, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28202/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28202", "html_url": "https://github.com/huggingface/transformers/pull/28202", "diff_url": "https://github.com/huggingface/transformers/pull/28202.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28202.patch", "merged_at": 1703246171000 }
https://api.github.com/repos/huggingface/transformers/issues/28201
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28201/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28201/comments
https://api.github.com/repos/huggingface/transformers/issues/28201/events
https://github.com/huggingface/transformers/pull/28201
2,053,671,852
PR_kwDOCUB6oc5io15K
28,201
[BUG] BarkEosPrioritizerLogitsProcessor eos_token_id use list, tensor size mismatch
{ "login": "inkinworld", "id": 12553724, "node_id": "MDQ6VXNlcjEyNTUzNzI0", "avatar_url": "https://avatars.githubusercontent.com/u/12553724?v=4", "gravatar_id": "", "url": "https://api.github.com/users/inkinworld", "html_url": "https://github.com/inkinworld", "followers_url": "https://api.github.com/users/inkinworld/followers", "following_url": "https://api.github.com/users/inkinworld/following{/other_user}", "gists_url": "https://api.github.com/users/inkinworld/gists{/gist_id}", "starred_url": "https://api.github.com/users/inkinworld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/inkinworld/subscriptions", "organizations_url": "https://api.github.com/users/inkinworld/orgs", "repos_url": "https://api.github.com/users/inkinworld/repos", "events_url": "https://api.github.com/users/inkinworld/events{/privacy}", "received_events_url": "https://api.github.com/users/inkinworld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "(cc @ylacombe )", "Thanks for fixing @inkinworld !" ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? Fixes bug about `transformers.generation.logits_process.BarkEosPrioritizerLogitsProcessor`. when `BarkEosPrioritizerLogitsProcessor` eos_token_id use list, tensor size mismatch. such as below test case: ``` def test_early_stop_processor_multi_eos(self): input_ids = None eos_token_id = [2, 3] min_eos_p = 0.1 ## some small float scores = self._get_uniform_logits(2, 4) scores[0][eos_token_id] = -6 ## less than log(min_eos_p) esp = BarkEosPrioritizerLogitsProcessor(eos_token_id=eos_token_id, min_eos_p=min_eos_p) actual_scores = esp(input_ids, scores) expected_scores_list = [ scores[0].tolist(), [float("-inf"), float("-inf"), scores[0][0], scores[0][0]], ] self.assertListEqual(actual_scores.tolist(), expected_scores_list) ``` will occur this exception ``` self = <transformers.generation.logits_process.BarkEosPrioritizerLogitsProcessor object at 0x12f1e0220> input_ids = None scores = tensor([[ 0.2500, 0.2500, -6.0000, -6.0000], [ 0.2500, 0.2500, 0.2500, 0.2500]]) @add_start_docstrings(LOGITS_PROCESSOR_INPUTS_DOCSTRING) def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor: if self.min_eos_p: probs = torch.nn.functional.softmax(scores.float(), dim=-1) # create scores full of -inf except for the eos_token_id early_stop_scores = torch.ones_like(scores) * -float("inf") early_stop_scores[:, self.eos_token_id] = scores[:, self.eos_token_id] do_early_stop = probs[:, self.eos_token_id] > self.min_eos_p # do_early_stop = torch.any(do_early_stop, dim=1, keepdim=True) > scores = torch.where(do_early_stop, early_stop_scores, scores) E RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 1 src/transformers/generation/logits_process.py:2142: RuntimeError ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28201/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28201", "html_url": "https://github.com/huggingface/transformers/pull/28201", "diff_url": "https://github.com/huggingface/transformers/pull/28201.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28201.patch", "merged_at": 1704883609000 }
https://api.github.com/repos/huggingface/transformers/issues/28200
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28200/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28200/comments
https://api.github.com/repos/huggingface/transformers/issues/28200/events
https://github.com/huggingface/transformers/issues/28200
2,053,668,491
I_kwDOCUB6oc56aH6L
28,200
RuntimeError: Failed to import transformers.models.mistral.modeling_mistral because of the following error (look up to see its traceback): cannot import name 'is_flash_attn_greater_or_equal_2_10' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py)
{ "login": "Jaykumaran", "id": 60032500, "node_id": "MDQ6VXNlcjYwMDMyNTAw", "avatar_url": "https://avatars.githubusercontent.com/u/60032500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jaykumaran", "html_url": "https://github.com/Jaykumaran", "followers_url": "https://api.github.com/users/Jaykumaran/followers", "following_url": "https://api.github.com/users/Jaykumaran/following{/other_user}", "gists_url": "https://api.github.com/users/Jaykumaran/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jaykumaran/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jaykumaran/subscriptions", "organizations_url": "https://api.github.com/users/Jaykumaran/orgs", "repos_url": "https://api.github.com/users/Jaykumaran/repos", "events_url": "https://api.github.com/users/Jaykumaran/events{/privacy}", "received_events_url": "https://api.github.com/users/Jaykumaran/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Jaykumaran, thanks for raising this issue! \r\n\r\nCould you run the following in the command line to check the version of flash-attn being run in your python environment: \r\n\r\n`python -c \"import flash_attn; from transformers.utils.import_utils import is_flash_attn_greater_or_equal_2_10; print(flash_attn.__version__); print(is_flash_attn_greater_or_equal_2_10())\"`\r\n\r\n?" ]
1,703
1,703
1,703
NONE
null
### System Info # !pip install trl transformers==4.35.2 accelerate peft==0.6.2 -Uqqq !pip install trl transformers accelerate peft==0.6.2 -Uqqq !pip install datasets bitsandbytes einops wandb -Uqqq !pip install flash-attn --no-build-isolation -Uqq ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction # !pip install trl transformers==4.35.2 accelerate peft==0.6.2 -Uqqq !pip install trl transformers accelerate peft==0.6.2 -Uqqq !pip install datasets bitsandbytes einops wandb -Uqqq !pip install flash-attn --no-build-isolation -Uqq MODEL_NAME = "HuggingFaceH4/zephyr-7b-beta" bnb_config = BitsAndBytesConfig( load_in_4bit=True, # load model in 4-bit precision bnb_4bit_quant_type="nf4", # pre-trained model should be quantized in 4-bit NF format bnb_4bit_use_double_quant=True, # Using double quantization as mentioned in QLoRA paper bnb_4bit_compute_dtype=torch.bfloat16, # During computation, pre-trained model should be loaded in BF16 format ) model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, quantization_config = bnb_config, device_map = 0, use_cache=True, trust_remote_code=True, use_flash_attention_2 = True ) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "right" ### Expected behavior when trying to load the model,it results in following error. RuntimeError: Failed to import transformers.models.mistral.modeling_mistral because of the following error (look up to see its traceback): cannot import name 'is_flash_attn_greater_or_equal_2_10' from 'transformers.utils' (/usr/local/lib/python3.10/dist-packages/transformers/utils/__init__.py)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28200/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28199
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28199/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28199/comments
https://api.github.com/repos/huggingface/transformers/issues/28199/events
https://github.com/huggingface/transformers/pull/28199
2,053,656,288
PR_kwDOCUB6oc5ioyeT
28,199
Autocast
{ "login": "jiqing-feng", "id": 107918818, "node_id": "U_kgDOBm614g", "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiqing-feng", "html_url": "https://github.com/jiqing-feng", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @jiqing-feng, thanks for opening a PR and contributing to the library! Let us know when it's ready for final review. \r\n\r\nCould you provide some more context on what this addresses and how this should be used - ideally with an example snippet? I'm assuming to enable FA2?\r\n\r\nAs it's already possible to pass in a pretrained model which has been instantiated with a specific dtype e.g. `torch_dtype=torch.float16` or `torch_dtype=\"auto\", there might be an interplay which should be tested to make sure no unexpected behaviour happens. \r\n\r\ncc @Narsil ", "@yao-matrix", "Hi @amyeroberts . I am sorry that I don't understand what is FA2 (Flash Attention?).\r\n\r\nThe motivation is from a bug, and we can reproduce it by the following script:\r\n```python\r\nfrom transformers import pipeline\r\nfrom datasets import load_dataset\r\nfrom datasets import Audio\r\nimport torch\r\n\r\nminds = load_dataset(\"PolyAI/minds14\", name=\"de-DE\", split=\"train\")\r\nminds = minds.cast_column(\"audio\", Audio(sampling_rate=16_000))\r\nexample = minds[0]\r\n\r\nasr = pipeline(\"automatic-speech-recognition\", model=\"maxidl/wav2vec2-large-xlsr-german\", torch_dtype=torch.bfloat16)\r\noutput = asr(example[\"audio\"][\"array\"])\r\n```\r\n\r\nThe ASR pipeline does not support bf16 `torch_dtype`, so I think we could enable `autocast` to fix the problem.\r\n\r\nWith my changes, we can load the `pipeline` by:\r\n```python\r\nasr = pipeline(\"automatic-speech-recognition\", model=\"maxidl/wav2vec2-large-xlsr-german\", enable_autocast=True, dtype=torch.bfloat16)\r\n``` \r\nso we can run the script.\r\n\r\nWith `torch_dtype=torch.bfloat16`, we expected that the input tensors with float data type must be bfloat16, but `torch.autocast` don't have this limitation.", "Hi @amyeroberts @Narsil . What do you think about this PR? I could enable autocast or full low-precision on the ASR pipeline.", "Hi @jiqing-feng,\r\n\r\nWhy cannot you wrap `autocast` directly on the pipeline calls themselves ?\r\n\r\nAlso note that I've bitten hard by autocast it creates slower that it should inference pretty fast because of all the data movement. In my experience it' s almost always better to add the proper types where needed manually to get good performance." ]
1,703
1,705
1,705
CONTRIBUTOR
null
Enable autocast in the pipeline.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28199/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28199", "html_url": "https://github.com/huggingface/transformers/pull/28199", "diff_url": "https://github.com/huggingface/transformers/pull/28199.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28199.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28198
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28198/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28198/comments
https://api.github.com/repos/huggingface/transformers/issues/28198/events
https://github.com/huggingface/transformers/pull/28198
2,053,617,836
PR_kwDOCUB6oc5ioqFd
28,198
Update `docs/source/en/perf_infer_gpu_one.md`
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,703
1,703
COLLABORATOR
null
# What does this PR do? Update `docs/source/en/perf_infer_gpu_one.md` to fix > FAILED tests/utils/test_doc_samples.py::TestDocLists::test_sdpa_support_list - ValueError: mixtral should be in listed in the SDPA documentation but is not. Please update the documentation.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28198/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28198", "html_url": "https://github.com/huggingface/transformers/pull/28198", "diff_url": "https://github.com/huggingface/transformers/pull/28198.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28198.patch", "merged_at": 1703238022000 }
https://api.github.com/repos/huggingface/transformers/issues/28197
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28197/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28197/comments
https://api.github.com/repos/huggingface/transformers/issues/28197/events
https://github.com/huggingface/transformers/issues/28197
2,053,583,464
I_kwDOCUB6oc56ZzJo
28,197
LLaVA: index error when computing extended_attention_mask
{ "login": "TideDra", "id": 92413813, "node_id": "U_kgDOBYIfdQ", "avatar_url": "https://avatars.githubusercontent.com/u/92413813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TideDra", "html_url": "https://github.com/TideDra", "followers_url": "https://api.github.com/users/TideDra/followers", "following_url": "https://api.github.com/users/TideDra/following{/other_user}", "gists_url": "https://api.github.com/users/TideDra/gists{/gist_id}", "starred_url": "https://api.github.com/users/TideDra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TideDra/subscriptions", "organizations_url": "https://api.github.com/users/TideDra/orgs", "repos_url": "https://api.github.com/users/TideDra/repos", "events_url": "https://api.github.com/users/TideDra/events{/privacy}", "received_events_url": "https://api.github.com/users/TideDra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @TideDra, thanks for reporting this! \r\n\r\nThere's an ongoing PR which aims to address this issue: #28032 \r\n\r\ncc @younesbelkada for reference", "Thanks! Yes, I second what @amyeroberts said, I will put that PR as high prio and merge it asap" ]
1,703
1,703
1,703
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-1042-azure-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @younesbelkad ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm evaluating llava-1.5-7b-hf on MM-Vet using batch generation with `use_cache=True`, here is my script: ```python import json from PIL import Image from transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer from torch.utils.data import Dataset,DataLoader import torch import os from tqdm import tqdm DATA_ROOT = "/mnt/gozhang/code/LLaVA/playground/data/eval/mm-vet" processor = AutoProcessor.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf") tokenizer = AutoTokenizer.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf") processor.tokenizer.pad_token = processor.tokenizer.bos_token class MMVetDataset(Dataset): def __init__(self,data_root) -> None: super().__init__() self.data_root = data_root with open(os.path.join(data_root, "mm-vet.json"), "r") as f: data = json.load(f) self.data = [(k,v) for k,v in data.items()] def __len__(self): return len(self.data) def __getitem__(self, index): return {'id':self.data[index][0], 'image':os.path.join(self.data_root,'images',self.data[index][1]['imagename']), 'question':"USER: <image>\n"+self.data[index][1]['question']+" ASSISTANT:"} def collator(batch): ids = [b['id'] for b in batch] questions = [b['question'] for b in batch] images = [Image.open(b['image']) for b in batch] inputs = processor(text=questions,images=images,return_tensors="pt",padding=True) return ids,inputs model = LlavaForConditionalGeneration.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf",torch_dtype=torch.float16) model.to('cuda') #model.to(torch.float16) dataset = MMVetDataset(DATA_ROOT) dataloader = DataLoader(dataset,batch_size=16,collate_fn=collator) results = {} bar = tqdm(total=len(dataset)) model.eval() with torch.inference_mode(): for ids, inputs in dataloader: inputs.to('cuda') inputs['pixel_values'] = inputs['pixel_values'].half() outputs = model.generate(**inputs,temperature=0.2,do_sample=True,max_new_tokens=1024,use_cache=True) input_token_len = inputs['input_ids'].shape[1] responses=tokenizer.batch_decode(outputs[:, input_token_len:], skip_special_tokens=True, clean_up_tokenization_spaces=False) for id,res in zip(ids,responses): results[id]=res bar.update(len(responses)) with open('mmvet_result.json','w') as f: json.dump(results,f,indent=4) ``` However, it occasionally raises `RuntimeError: CUDA error: device-side assert triggered` when computing `extended_attention_mask`. This error happens randomly during the whole evaluation, sometimes happens in the third batch, sometimes in the last batch, etc. I print some shapes in the `model.forward()` method and I think the `extended_attention_mask` is wrongly computed. 
```python def forward( self, input_ids: torch.LongTensor = None, pixel_values: torch.FloatTensor = None, attention_mask: Optional[torch.Tensor] = None, position_ids: Optional[torch.LongTensor] = None, past_key_values: Optional[List[torch.FloatTensor]] = None, inputs_embeds: Optional[torch.FloatTensor] = None, vision_feature_layer: Optional[int] = None, vision_feature_select_strategy: Optional[str] = None, labels: Optional[torch.LongTensor] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, LlavaCausalLMOutputWithPast]: output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict vision_feature_layer = ( vision_feature_layer if vision_feature_layer is not None else self.config.vision_feature_layer ) vision_feature_select_strategy = ( vision_feature_select_strategy if vision_feature_select_strategy is not None else self.config.vision_feature_select_strategy ) if inputs_embeds is None: # 1. Extra the input embeddings inputs_embeds = self.get_input_embeddings()(input_ids) # 2. Merge text and images if pixel_values is not None and input_ids.shape[1] != 1: image_outputs = self.vision_tower(pixel_values, output_hidden_states=True) # this is not memory efficient at all (output_hidden_states=True) will save all the hidden stated. selected_image_feature = image_outputs.hidden_states[vision_feature_layer] if vision_feature_select_strategy == "default": selected_image_feature = selected_image_feature[:, 1:] elif vision_feature_select_strategy == "full": selected_image_feature = selected_image_feature else: raise ValueError( f"Unexpected select feature strategy: {self.config.vision_feature_select_strategy}" ) image_features = self.multi_modal_projector(selected_image_feature) inputs_embeds, attention_mask, position_ids = self._merge_input_ids_with_image_features( image_features, inputs_embeds, input_ids, attention_mask, position_ids ) if labels is None: labels = torch.full_like(attention_mask, self.config.ignore_index).to(torch.long) else: # In case input_ids.shape[1] == 1 & pixel_values==None & past_key_values != None, we are in the case of # generation with cache if past_key_values is not None and pixel_values is not None and input_ids.shape[1] == 1: # Retrieve the first layer to inspect the logits and mask out the hidden states # that are set to 0 first_layer_past_key_value = past_key_values[0][0][:, 0, :, 0] batch_index, non_attended_tokens = torch.where(first_layer_past_key_value == 0) # Get the target length target_seqlen = first_layer_past_key_value.shape[-1] + 1 extended_attention_mask = torch.ones( (attention_mask.shape[0], target_seqlen - attention_mask.shape[1]), dtype=attention_mask.dtype, device=attention_mask.device, ) # Zero-out the places where we don't need to attend print(extended_attention_mask.shape) # torch.Size([16,575]) print(len(past_key_values)) # 32 print(len(past_key_values[0])) # 2 print(past_key_values[0][0].shape) # torch.Size([16,32,688,128]) print(attention_mask.shape) # torch.Size(16,114) print(batch_index) #tensor([2],device='cuda:0') print(non_attended_tokens) #tensor([687],device='cuda:0') try: extended_attention_mask[batch_index, non_attended_tokens] = 0 except: pdb.set_trace() 
attention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1) position_ids = torch.sum(attention_mask, dim=1).unsqueeze(-1) - 1 ####Following code is ignored ``` Apparently, `extended_attention_mask` has a constant sequence length of 575 (target_seqlen - attention_mask.shape[1]), which I think is roughly the number of image tokens, while the index of `non_attended_tokens` may exceed this length and then raise the CUDA error. Maybe the sequence length of `extended_attention_mask` should just be `target_seqlen`, and don't need to be concatenate with `attention_mask`? Honestly I don't understand the code here, it's really weird. ### Expected behavior The generation should always work fine when using cache.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28197/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28196
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28196/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28196/comments
https://api.github.com/repos/huggingface/transformers/issues/28196/events
https://github.com/huggingface/transformers/pull/28196
2,053,577,492
PR_kwDOCUB6oc5iohTk
28,196
Add CogVLM (cleaner)
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I have a branch [add_cogvlm_cleaner_address_comments](https://github.com/NielsRogge/transformers/tree/add_cogvlm_cleaner_address_comments) which I can merge once I can verify the conversion", "Thanks for your review, so 3 comments are about adding a `text_config` to `CogvlmConfig` and leverage `AutoModelForCausalLM` in `modeling_cogvlm.py`, however this is not really possible for this model (I've considered it). Unlike other models like BLIP-2 and Llava, the authors modify the text model to include \"vision-expert attention\". One would need to work with hooks etc. to make it work directly with `AutoModelForCausalLM`, to inject the vision expert attention layers inside the language model. Rather, the authors [defined](https://huggingface.co./THUDM/cogvlm-chat-hf/blob/main/modeling_cogvlm.py#L275) a new `CogvlmDecoderLayer`, which includes both text- and vision attention. I'm not sure the model is plug-and-play. Refer to [this figure](https://github.com/THUDM/CogVLM/raw/main/assets/method-min.png) for details.\r\n\r\n\r\n" ]
1,703
1,708
null
CONTRIBUTOR
null
# What does this PR do? This PR adds CogVLM, in a cleaner way. Follow-up of #27718.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28196/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28196", "html_url": "https://github.com/huggingface/transformers/pull/28196", "diff_url": "https://github.com/huggingface/transformers/pull/28196.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28196.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28195
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28195/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28195/comments
https://api.github.com/repos/huggingface/transformers/issues/28195/events
https://github.com/huggingface/transformers/pull/28195
2,053,571,469
PR_kwDOCUB6oc5iogAr
28,195
Drop `feature_extractor_type` when loading an image processor file
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "failure is also on `main`.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28195). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,703
1,703
COLLABORATOR
null
# What does this PR do? `preprocessor_config.json` created in old days like [this](https://huggingface.co./openai/clip-vit-large-patch14/blob/main/preprocessor_config.json) has, for example, `"feature_extractor_type": "CLIPFeatureExtractor",` in it. If that file is for an image processor, during the loading (in `__init__`), it is added as the object's attribute. This is already misleading. If we save the image processor again, the file will contain `feature_extractor_type` and `image_processor_type`, which is even more confusing. See the example below. **This PR pop up this attribute during the loading, so it won't be an attribute of the loaded object.** ### To reproduce ```python from transformers import CLIPImageProcessor import json p = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14") print(getattr(p, "feature_extractor_type", None)) print(getattr(p, "image_processor_type", None)) print("-" * 40) p.save_pretrained("myclip") p = CLIPImageProcessor.from_pretrained("myclip") print(getattr(p, "feature_extractor_type", None)) print(getattr(p, "image_processor_type", None)) ``` ### Output **before this PR** ```bash CLIPFeatureExtractor None ---------------------------------------- CLIPFeatureExtractor CLIPImageProcessor ``` **after this PR** ```bash None None ---------------------------------------- None CLIPImageProcessor ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28195/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28195", "html_url": "https://github.com/huggingface/transformers/pull/28195", "diff_url": "https://github.com/huggingface/transformers/pull/28195.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28195.patch", "merged_at": 1703247544000 }
https://api.github.com/repos/huggingface/transformers/issues/28194
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28194/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28194/comments
https://api.github.com/repos/huggingface/transformers/issues/28194/events
https://github.com/huggingface/transformers/issues/28194
2,053,555,305
I_kwDOCUB6oc56ZsRp
28,194
Can you please provide the longformer version of the torch to tf file?
{ "login": "Struggle-lsl", "id": 109401083, "node_id": "U_kgDOBoVT-w", "avatar_url": "https://avatars.githubusercontent.com/u/109401083?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Struggle-lsl", "html_url": "https://github.com/Struggle-lsl", "followers_url": "https://api.github.com/users/Struggle-lsl/followers", "following_url": "https://api.github.com/users/Struggle-lsl/following{/other_user}", "gists_url": "https://api.github.com/users/Struggle-lsl/gists{/gist_id}", "starred_url": "https://api.github.com/users/Struggle-lsl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Struggle-lsl/subscriptions", "organizations_url": "https://api.github.com/users/Struggle-lsl/orgs", "repos_url": "https://api.github.com/users/Struggle-lsl/repos", "events_url": "https://api.github.com/users/Struggle-lsl/events{/privacy}", "received_events_url": "https://api.github.com/users/Struggle-lsl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lsl200032, \r\n\r\nThere isn't a specific file for Longformer to convert from torch to TF weights. The conversion happens when using `from_pt` in the `from_pretrained` call: \r\n\r\n```\r\nfrom transformers import TFLongformerModel \r\n\r\n# Loads the pytorch weights and converts them to the equivalent TF format then loads into the TF model\r\nmodel = TFLongformerModel.from_pretrained(checkpoint, from_pt=True)\r\n```", "I have a longformer weight file that has been retrained in a downstream task, but it is a .bin file. I want to load it under the tf framework. How to solve it?\r\n\r\n> 你好@lsl200032,\r\n> \r\n> Longformer 没有将火炬权重转换为 TF 权重的特定文件。`from_pt`在调用中使用时会发生转换`from_pretrained`:\r\n> \r\n> ```\r\n> from transformers import TFLongformerModel \r\n> \r\n> # Loads the pytorch weights and converts them to the equivalent TF format then loads into the TF model\r\n> model = TFLongformerModel.from_pretrained(checkpoint, from_pt=True)\r\n> ```\r\n\r\nI have a longformer weight file that has been retrained in a downstream task, but it is a .bin file. I want to load it under the tf framework. How to solve it?", "If the model was trained using Hugging Face's Longformer pytorch architecture, then you can use the code I mentioned above to load it as a TF model\r\n\r\n```\r\nfrom transformers import TFLongformerModel \r\n\r\n# Loads the pytorch weights and converts them to the equivalent TF format then loads into the TF model\r\nmodel = TFLongformerModel.from_pretrained(path/to/model/folder, from_pt=True)\r\n```", "> If the model was trained using Hugging Face's Longformer pytorch architecture, then you can use the code I mentioned above to load it as a TF model\r\n> \r\n> ```\r\n> from transformers import TFLongformerModel \r\n> \r\n> # Loads the pytorch weights and converts them to the equivalent TF format then loads into the TF model\r\n> model = TFLongformerModel.from_pretrained(path/to/model/folder, from_pt=True)\r\n> ```\r\n\r\nThank you beautiful lady, your answer has given me confidence in my scientific research path. I hope to keep in touch with you.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,706
1,706
NONE
null
### Feature request Can you please provide the longformer version of the torch to tf file? ### Motivation Can you please provide the longformer version of the torch to tf file? ### Your contribution Can you please provide the longformer version of the torch to tf file?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28194/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28193
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28193/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28193/comments
https://api.github.com/repos/huggingface/transformers/issues/28193/events
https://github.com/huggingface/transformers/issues/28193
2,053,500,334
I_kwDOCUB6oc56Ze2u
28,193
ValueError: Target module WQLinear_GEMM is not supported. Currently, only `torch.nn.Linear` and `Conv1D` are supported.- AWQ Quantisation Issues
{ "login": "Vasanth03", "id": 59615743, "node_id": "MDQ6VXNlcjU5NjE1NzQz", "avatar_url": "https://avatars.githubusercontent.com/u/59615743?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Vasanth03", "html_url": "https://github.com/Vasanth03", "followers_url": "https://api.github.com/users/Vasanth03/followers", "following_url": "https://api.github.com/users/Vasanth03/following{/other_user}", "gists_url": "https://api.github.com/users/Vasanth03/gists{/gist_id}", "starred_url": "https://api.github.com/users/Vasanth03/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vasanth03/subscriptions", "organizations_url": "https://api.github.com/users/Vasanth03/orgs", "repos_url": "https://api.github.com/users/Vasanth03/repos", "events_url": "https://api.github.com/users/Vasanth03/events{/privacy}", "received_events_url": "https://api.github.com/users/Vasanth03/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Vasanth03, AutoAWQ is not compatible with training yet. It will be in the future though.", "Thank you so much @casper-hansen. And I look forward to its compatibility in the future." ]
1,703
1,703
1,703
NONE
null
Hi @casper-hansen -> I am trying to train the AWQ quantised model using hugging face trainer. While using PEFT (LoRA adaptor) the following error pops up. ![Screenshot 2023-12-22 at 12 47 36 PM](https://github.com/huggingface/transformers/assets/59615743/0bf7d637-04a9-4830-8d74-da7fd02b128c) -> This is the version that I have used !pip install -q -U https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl Any help is much appreciated. Thanks _Originally posted by @Vasanth03 in https://github.com/huggingface/transformers/issues/27321#issuecomment-1867330086_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28193/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28192
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28192/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28192/comments
https://api.github.com/repos/huggingface/transformers/issues/28192/events
https://github.com/huggingface/transformers/pull/28192
2,053,434,683
PR_kwDOCUB6oc5ioCSy
28,192
don't initialize the output embeddings if we're going to tie them to input embeddings
{ "login": "tom-p-reichel", "id": 43631024, "node_id": "MDQ6VXNlcjQzNjMxMDI0", "avatar_url": "https://avatars.githubusercontent.com/u/43631024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tom-p-reichel", "html_url": "https://github.com/tom-p-reichel", "followers_url": "https://api.github.com/users/tom-p-reichel/followers", "following_url": "https://api.github.com/users/tom-p-reichel/following{/other_user}", "gists_url": "https://api.github.com/users/tom-p-reichel/gists{/gist_id}", "starred_url": "https://api.github.com/users/tom-p-reichel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tom-p-reichel/subscriptions", "organizations_url": "https://api.github.com/users/tom-p-reichel/orgs", "repos_url": "https://api.github.com/users/tom-p-reichel/repos", "events_url": "https://api.github.com/users/tom-p-reichel/events{/privacy}", "received_events_url": "https://api.github.com/users/tom-p-reichel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @tom-p-reichel - thanks for opening this PR! \r\n\r\nVery nice if there's such a big speedup 🔥 cc @sanchit-gandhi \r\n\r\nCould you share some snippets on how this was tested - speed and outputs? ", "@amyeroberts Sure. Here's a minimal example:\r\n`python -m timeit -s \"from transformers import pipeline; import torch\" \"pipeline('automatic-speech-recognition',model='openai/whisper-large-v3', torch_dtype=torch.float16)\"`\r\n\r\nBefore patch (origin/main):\r\n`1 loop, best of 5: 6.91 sec per loop`\r\n\r\nAfter patch:\r\n`1 loop, best of 5: 1.72 sec per loop`\r\n\r\nNote: If we're not using `dtype=torch.float16` as insanely-fast-whisper does then both versions run much faster and the patch only gives a marginal improvement. It seems like randomly initializing something in float16 is really expensive!\r\n\r\nAs a sanity check that nothing terribly bad has happened to the weights as a result of this patch, we can also download some public domain audio and test that the downstream tool works as intended:\r\n\r\n```\r\nwget https://www.archive.org/download/metamorphosis_librivox/metamorphosis_librivox_64kb_mp3.zip\r\nunzip metamorphosis_librivox_64kb_mp3.zip\r\ninsanely-fast-whisper --file-name metamorphosis_1_kafka_64kb.mp3\r\n```\r\nThe resultant `output.json` pre and post patch are identical for the 49 minutes of audio transcribed.\r\n\r\nEDIT: Although the tests for this PR are failing, the only test that failed appears to be a documentation check for mistral, which was not touched in this PR.\r\n", "Rebase took care of the one failing documentation test. Anything else that needs to be done for this PR?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28192). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi, sorry for the delay, I was doing graduate student things. Here is a rebase onto the current main and a new test.\r\n\r\n@ArthurZucker is this what you intended for the test case? There are already test cases that ensure that the model is unchanged while loading and saving, even during fast_init, so I only test the new behavior I added -- which is that the output embedding is never initialized if it's tied.\r\n\r\nThe commit is broken into two parts, the first which adds the test and the second which implements the fix. You can check out the commit where the test is added (`e1b99fde912d90b50f8d70145bc6d5b75058f6a0`) and run something to the effect of `python -m pytest tests/models/whisper/test_modeling_whisper.py -k \"test_fast_init_tied_embeddings\"` to see that the test fails on the current transformers `main` branch. Checking out the second commit in this PR causes the test to pass.", "@ArthurZucker Good to hear it, rebased. ", "Thanks for the PR! ", "FYI @pacman100 if we have any deepspeed issues 🤗 ", "FYI I ran into an issue with this for the ElectraModel with this [test](https://github.com/huggingface/transformers/blob/d628664688b05cabdd69f4e7e295bc4aee0a8d31/tests/test_modeling_common.py#L434). For some reason, the failure only showed up when I modified the source for ElectraModel for an unrelated change. \r\n\r\nThe issue appears to be that the `bias` term in the `output_embeddings` is not tied in [`_tie_or_clone_weights()`](https://github.com/huggingface/transformers/blob/d628664688b05cabdd69f4e7e295bc4aee0a8d31/src/transformers/modeling_utils.py#L1712). 
The test ended up failing because initialization was skipped for the `bias` as well.\r\n\r\nI added this [commit](https://github.com/huggingface/transformers/pull/28802/commits/9fa20df3faba1a72fb21cb7eca55e4065836a5f3) to get the test to pass. Let me know if you have any concerns. AFAIT, the whisper model doesn't have `bias` in their embeddings, and so my fix shouldn't affect the gains from this change. ", "@hackyon Good catch! Yes, that is an oversight. Curious about needing to edit ElectraModel to see the failure and the passing tests in this PR." ]
1,703
1,707
1,706
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This small change marks the output embeddings for a model as initialized if we will be tying them to the input embeddings. Without this change, the output embeddings are usually randomly initialized every time affected models (models that tie the output embeddings to input embeddings and do not otherwise initialize the output embeddings) are loaded. This seems to be responsible for *multiple second* startup delays in downstream tools, e.g. insanely-fast-whisper, as every single time the whisper model is loaded a very massive matrix is unnecessarily filled with uniformly random numbers before it is replaced with another matrix. Before and after applying this patch, downstream tool insanely-fast-whisper transcribed a short audio file in 18 and 13 seconds respectively for a 5 second improvement. The patch does not seem to change the behavior of the tool-- a test transcription of an hour of audio remains unchanged before and after the patch. I suspect other applications using models that tie their input/output embeddings together will experience a small speedup in loading from this patch. I ran a portion of the transformers testing locally, which passed, but we'll see how the full test suite fares soon enough. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28192/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28192", "html_url": "https://github.com/huggingface/transformers/pull/28192", "diff_url": "https://github.com/huggingface/transformers/pull/28192.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28192.patch", "merged_at": 1706663958000 }
https://api.github.com/repos/huggingface/transformers/issues/28191
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28191/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28191/comments
https://api.github.com/repos/huggingface/transformers/issues/28191/events
https://github.com/huggingface/transformers/issues/28191
2,053,399,431
I_kwDOCUB6oc56ZGOH
28,191
ImportError: Using the Trainer with PyTorch requires accelerate>=0.20.1
{ "login": "Ompramod9921", "id": 86967995, "node_id": "MDQ6VXNlcjg2OTY3OTk1", "avatar_url": "https://avatars.githubusercontent.com/u/86967995?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ompramod9921", "html_url": "https://github.com/Ompramod9921", "followers_url": "https://api.github.com/users/Ompramod9921/followers", "following_url": "https://api.github.com/users/Ompramod9921/following{/other_user}", "gists_url": "https://api.github.com/users/Ompramod9921/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ompramod9921/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ompramod9921/subscriptions", "organizations_url": "https://api.github.com/users/Ompramod9921/orgs", "repos_url": "https://api.github.com/users/Ompramod9921/repos", "events_url": "https://api.github.com/users/Ompramod9921/events{/privacy}", "received_events_url": "https://api.github.com/users/Ompramod9921/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "I see that you are using the accelerated runtime from Google Colaboratory. Have you tried restarting the session after installing the accelerate package? I know, it is obvious, but just to be completely foolproof.", "I encountered the same issue. When I executed the command !pip install accelerate -U and then went to Runtime -> Restart Session, the problem was resolved by not running any further pip commands afterwards.", "@GinUTE's directions are correct. I believe at some point you should have gotten a red warning message when installing the accelerate version later saying some packages were installed that were already imported in the environment directing you to restart your runtime. If this is not the case please provide us with a fully working colab notebook with the full execution order etc in-tact. I may be able to find some way to guard this warning better. Thanks!", "I am getting the same error for TrainingArguments when running locally on an M2 Mac. My \"accelerate\" version is 0.25.0 and restarting the kernel does not work. Google Colab works fine though.", "cc @SunMarc for MPS ", "Hi @RealCyclotomic, could you please provide a snippet ? Can you show me what the following snippet returns ? \r\n```python\r\nfrom transformers.utils.import_utils import _is_package_available\r\n_accelerate_available, _accelerate_version = _is_package_available(\"accelerate\", return_version=True)\r\nprint(_accelerate_available)\r\nprint(_accelerate_version)\r\n```\r\nThis is what is used to get check if one has the right version of accelerate inside transformers. ", "It was false. But I just upgraded \"accelerate\" from 0.25.0 to 0.26.1 and now it says true, and the error for TrainingArguments() has gone away. Thank you.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,708
1,708
NONE
null
### System Info @muellerzr and @pacman100 I'm trying to use the Trainer with PyTorch in my Python project, but I'm encountering an ImportError stating that accelerate>=0.20.1 is required. Despite having installed the accelerate package, I'm still getting this error. Here's the error message I'm seeing: ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U` ![MergedImages](https://github.com/huggingface/transformers/assets/86967995/a6e7dff3-1738-4fa3-8750-3490fc75614a) I have tried both suggested solutions (pip install transformers[torch] and pip install accelerate -U), but the issue persists. Could anyone please provide guidance on how to resolve this issue? Thank you! ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here's a minimal code snippet that reproduces the issue: from transformers import TrainingArguments, Trainer # Define the training arguments training_args = TrainingArguments( output_dir='./results', num_train_epochs=3, per_device_train_batch_size=16, per_device_eval_batch_size=64, warmup_steps=500, weight_decay=0.01, logging_dir='./logs', logging_steps=10, ) When running this code, I receive the following error: ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U` Despite having installed the accelerate package, I continue to encounter this error. I have attempted to upgrade the accelerate package using pip install --upgrade accelerate, and cleared the pip cache using pip cache purge, but the issue remains unresolved. The versions of the relevant packages I'm using are as follows: import transformers import accelerate print(transformers.__version__) print(accelerate.__version__) Output: 4.12.5 0.21.0 As you can see, I'm using transformers version 4.12.5 and accelerate version 0.21.0, both of which should be compatible with each other ### Expected behavior Expected Behavior: I expect the `Trainer` to work seamlessly with `PyTorch` without any import errors. Specifically, I expect the `accelerate` package to be correctly recognized by the `Trainer`, allowing me to run my code without encountering the `ImportError` stating that `accelerate>=0.20.1` is required. The `accelerate` package is a key dependency for the `Trainer` to function properly, and despite having installed it, I continue to face this issue. I have tried both suggested solutions (`pip install transformers[torch]` and `pip install accelerate -U`) to no avail. Therefore, I believe there might be a compatibility issue between the `Trainer` and the `accelerate` package, or perhaps an issue with my current Python environment setup. I would appreciate any guidance on how to troubleshoot and resolve this issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28191/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28190
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28190/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28190/comments
https://api.github.com/repos/huggingface/transformers/issues/28190/events
https://github.com/huggingface/transformers/issues/28190
2,053,318,726
I_kwDOCUB6oc56YyhG
28,190
torch.compile() silently fails when used on HuggingFace pipeline inference code
{ "login": "rosario-purple", "id": 123594463, "node_id": "U_kgDOB13m3w", "avatar_url": "https://avatars.githubusercontent.com/u/123594463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rosario-purple", "html_url": "https://github.com/rosario-purple", "followers_url": "https://api.github.com/users/rosario-purple/followers", "following_url": "https://api.github.com/users/rosario-purple/following{/other_user}", "gists_url": "https://api.github.com/users/rosario-purple/gists{/gist_id}", "starred_url": "https://api.github.com/users/rosario-purple/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rosario-purple/subscriptions", "organizations_url": "https://api.github.com/users/rosario-purple/orgs", "repos_url": "https://api.github.com/users/rosario-purple/repos", "events_url": "https://api.github.com/users/rosario-purple/events{/privacy}", "received_events_url": "https://api.github.com/users/rosario-purple/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[ "No this can't work yet at we do not have a static KV cache. #27931 will fix this", "While working on benchmarking and improving our model's performance I experienced the same issue with `torch.compile`. It does not bring any speed up at all and can confirm the issue. \r\n\r\n@ArthurZucker Looking forward to having `torch.compile` supported in transformers!", "This is still broken", "Indeed 🤗 marking as WIP until the PR is merged" ]
1,703
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.21 - JaxLib version: 0.4.21 - Using GPU in script?: A100 ### Who can help? @Narsil @gante @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the following Python code: ``` model = AutoModelForCausalLM.from_pretrained( MODEL_ID, torch_dtype=torch.bfloat16, device_map=device, use_flash_attention_2=True, ) model.eval() tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) tokenizer.pad_token_id = tokenizer.eos_token_id model = torch.compile(model) generation_pipeline = pipeline( "text-generation", model=model, tokenizer=tokenizer, batch_size=10, ) batch_results = generation_pipeline( ["foo", "bar", "bin", "baz"], max_new_tokens=200, temperature=0.6, do_sample=True, repetition_penalty=1.05, num_return_sequences=20, ) ``` (in my case, MODEL_ID is set to `"Open-Orca/Mistral-7B-OpenOrca"`, which is a fine-tune of Mistral-7B, but any LLM should work) ### Expected behavior torch.compile() should compile the model, print some compilation messages, and then cause inference/text generation to be run faster. Instead, torch.compile() appears to not run at all, no messages are printed, and it has no effect on inference/generation speed. There is no error message, it just silently doesn't compile, effectively acting as if the line `model = torch.compile(model)` doesn't exist.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28190/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28190/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28189
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28189/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28189/comments
https://api.github.com/repos/huggingface/transformers/issues/28189/events
https://github.com/huggingface/transformers/issues/28189
2,053,227,321
I_kwDOCUB6oc56YcM5
28,189
Text-to-speech data collator exhibits weird batching behavior with Seq2SeqTrainer
{ "login": "GinUTE", "id": 91470404, "node_id": "MDQ6VXNlcjkxNDcwNDA0", "avatar_url": "https://avatars.githubusercontent.com/u/91470404?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GinUTE", "html_url": "https://github.com/GinUTE", "followers_url": "https://api.github.com/users/GinUTE/followers", "following_url": "https://api.github.com/users/GinUTE/following{/other_user}", "gists_url": "https://api.github.com/users/GinUTE/gists{/gist_id}", "starred_url": "https://api.github.com/users/GinUTE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GinUTE/subscriptions", "organizations_url": "https://api.github.com/users/GinUTE/orgs", "repos_url": "https://api.github.com/users/GinUTE/repos", "events_url": "https://api.github.com/users/GinUTE/events{/privacy}", "received_events_url": "https://api.github.com/users/GinUTE/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi @ylacombe ", "Hello again, I wish to provide more information.\r\n\r\nFirst, I followed the stack trace and landed on this specific line throwing the error: \r\n```\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/speecht5/modeling_speecht5.py](https://localhost:8080/#) in forward(self, input_values, speaker_embeddings)\r\n 699 speaker_embeddings = speaker_embeddings.expand(-1, inputs_embeds.size(1), -1)\r\n 700 speaker_embeddings = speaker_embeddings.repeat(inputs_embeds.size(0), 1, 1)\r\n--> 701 inputs_embeds = torch.cat([inputs_embeds, speaker_embeddings], dim=-1)\r\n 702 inputs_embeds = nn.functional.relu(self.speaker_embeds_layer(inputs_embeds))\r\n 703 \r\n\r\nRuntimeError: Sizes of tensors must match except in dimension 2. Expected size 16 but got size 256 for tensor number 1 in the list.\r\n```\r\n\r\nIt seems the error is thrown by the `speaker_embeddings` within the batch. I tested this by not including it in the batch returned by the data collator, and the training proceeds as per usual. As far as I am concerned, however, `speaker_embeddings` is a required component, both during fine-tuning and inference.\r\n\r\nI did triple-check the shape of the `speaker_embeddings` tensors within the batch returned by the data collator, and the size is still the same as what I reported in my post.\r\n\r\nWhat should I do now? Any help is much appreciated.", "I have the same error. Any solution?", "> Hello again, I wish to provide more information.\r\n> \r\n> First, I followed the stack trace and landed on this specific line throwing the error:\r\n> \r\n> ```\r\n> [/usr/local/lib/python3.10/dist-packages/transformers/models/speecht5/modeling_speecht5.py](https://localhost:8080/#) in forward(self, input_values, speaker_embeddings)\r\n> 699 speaker_embeddings = speaker_embeddings.expand(-1, inputs_embeds.size(1), -1)\r\n> 700 speaker_embeddings = speaker_embeddings.repeat(inputs_embeds.size(0), 1, 1)\r\n> --> 701 inputs_embeds = torch.cat([inputs_embeds, speaker_embeddings], dim=-1)\r\n> 702 inputs_embeds = nn.functional.relu(self.speaker_embeds_layer(inputs_embeds))\r\n> 703 \r\n> \r\n> RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 16 but got size 256 for tensor number 1 in the list.\r\n> ```\r\n> \r\n> It seems the error is thrown by the `speaker_embeddings` within the batch. I tested this by not including it in the batch returned by the data collator, and the training proceeds as per usual. As far as I am concerned, however, `speaker_embeddings` is a required component, both during fine-tuning and inference.\r\n> \r\n> I did triple-check the shape of the `speaker_embeddings` tensors within the batch returned by the data collator, and the size is still the same as what I reported in my post.\r\n> \r\n> What should I do now? Any help is much appreciated.\r\n\r\nHave you solved your problem?", "Hey @GinUTE and @yasamanhbn, thanks for flagging this issue!\r\n\r\n@GinUTE did you manage to solve this ? 
If not, could you send a snippet to reproduce the issue (e.g with a dataset on the hub) ?\r\n\r\n", "Dear Friends,\r\n\r\nI have the same problem, and I am fine-tuning the Amharic language.\r\n\r\nI can't handle the error when I try to run \"trainer.train().\"\r\n\r\n18 frames\r\n/usr/local/lib/python3.10/dist-packages/transformers/models/speecht5/modeling_speecht5.py in forward(self, input_values, speaker_embeddings)\r\n 699 speaker_embeddings = speaker_embeddings.expand(-1, inputs_embeds.size(1), -1)\r\n 700 speaker_embeddings = speaker_embeddings.repeat(inputs_embeds.size(0), 1, 1)\r\n--> 701 inputs_embeds = torch.cat([inputs_embeds, speaker_embeddings], dim=-1)\r\n 702 inputs_embeds = nn.functional.relu(self.speaker_embeds_layer(inputs_embeds))\r\n 703 \r\n\r\nRuntimeError: Sizes of tensors must match except in dimension 2. Expected size 16 but got size 256 for tensor number 1 in the list.\r\n\r\nAny help is much appreciated.\r\nThank you\r\n", "My apologies, I forgot to leave a comment detailing how I solve the issue after I closed it. To help with understanding the issue, I am devising a short notebook to reproduce the error, the original is a mess.\r\n\r\nFollowing my comment on Dec 26, I knew that the problem was with the speaker embedding batch, but I did not understand why. Thus, I inspected the two code cells involving speaker embedding extraction and batching: my batch-processing function (to use with dataset mapping) and data collator.\r\n\r\nI only made modifications to the batch-processing function though because it was more likely to be faulty. The two changes I made are:\r\n- Instead of directly extracting speaker embeddings from audio arrays, I used `torchaudio` to load the audio signals from their paths first\r\n- I converted the labels returned by the processor within the batch from numpy arrays to lists\r\n\r\nHonestly, I think the changes are orthogonal to the issue though. I was just blindly changing where I think the issue originated. I will make another comment when I can reproduce the error.", "I tried my best to clean up the original inferior notebook. I was able to reproduce the error. I managed to pinpoint the culprit, which is not the batch-processing function or data collator. The issue was, in fact, how I installed the `transformers` package.\r\n\r\nI installed `transformers` as follows:\r\n```\r\n!pip3 install git+https://github.com/huggingface/transformers.git # to install transformers\r\n!pip3 install transformers[torch] # to install accelerate>=0.21.0\r\n```\r\n\r\nThe second `pip3 install` was because of the import error thrown when instantiating `Seq2SeqTrainingArguments` without it:\r\n```\r\nUsing the `Trainer` with `PyTorch` requires `accelerate>=0.21.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`\r\n```\r\n\r\nThis environment setup will somehow break the trainer and produce the batch size issue. I confirmed this by installing `transformers` using only ```!pip3 install transformers[torch]```, and the error does not persist.", "> I tried my best to clean up the original inferior notebook. I was able to reproduce the error [here](https://colab.research.google.com/drive/1I1mMkBTW-Og-ulTJ2XnHAzvYu74qUOsh). You will find the code cells to batch-process the dataset in the first section. 
I pushed the processed dataset onto my Hugging Face Hub though, so you only need to run the second code section to reproduce the error.\r\n> \r\n> You might find the only difference between the working and inferior notebooks lies within the batch-processing function.\r\n> \r\n> ### Edit\r\n> I managed to pinpoint the culprit, which is (unsurprisingly) not the batch-processing function or data collator. The issue was, in fact, how I installed the `transformers` package.\r\n> \r\n> In the inferior notebook, I installed `transformers` as follows:\r\n> \r\n> ```\r\n> !pip3 install git+https://github.com/huggingface/transformers.git # to install transformers\r\n> !pip3 install transformers[torch] # to install accelerate>=0.21.0\r\n> ```\r\n> \r\n> The second `pip3 install` is because of the import error thrown when instantiating `Seq2SeqTrainingArguments` without it:\r\n> \r\n> ```\r\n> Using the `Trainer` with `PyTorch` requires `accelerate>=0.21.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`\r\n> ```\r\n> \r\n> This environment setup will somehow break the trainer and produce the batch size issue. I confirmed this by installing `transformers` using only `!pip3 install transformers[torch]`, and the error does not persist. This behavior is also evidenced by my working fine-tuning notebook, in which I installed `transformers` by the second method.\r\n\r\nThanks a lot. It solves my problem." ]
1,703
1,704
1,703
NONE
null
### System Info - transformers version: 4.37.0.dev0 - platform: Linux-6.1.58+-x86_64-with-glibc2.35 (Colaboratory free accelerated runtime) - python version: 3.10.12 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am currently fine-tuning SpeechT5 on Vietnamese TTS. I followed the official fine-tuning guide [here](https://colab.research.google.com/drive/1i7I5pzBcU3WDFarDnzweIj4-sVVoIUFJ). The only difference I made is that I changed the tokenizer wrapped in SpeechT5Processor with my own Vietnamese SentencePiece character-level tokenizer. I made sure to add the same special tokens in the original tokenizer, and it is working as expected. I used the following code snippet: ``` processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts") tokenizer = SpeechT5Tokenizer("spm-char.model") processor.tokenizer = tokenizer model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts") model.resize_token_embeddings(new_num_tokens=len(tokenizer), pad_to_multiple_of=8) ``` The issue arises when I got to the training phase at `trainer.train()`. It throws the following error: `Sizes of tensors must match except in dimension 2. Expected size 16 but got size 256 for tensor number 1 in the list.` I found that the error changes according to batch size. Specifically, the second sentence always throws: `Expect size <batch size> but got size <batch size to the power of 2> for tensor number 1 in the list.` Batch size other than 1 will throw such an error. I made no change to the original data collator, here is the code snippet: ``` @dataclass class TTSDataCollatorWithPadding: processor: Any def __call__( self, features: List[Dict[str, Union[List[int], torch.Tensor]]] ) -> Dict[str, torch.Tensor]: input_ids = [{"input_ids": feature["input_ids"]} for feature in features] label_features = [{"input_values": feature["labels"]} for feature in features] speaker_features = [feature["speaker_embeddings"] for feature in features] batch = processor.pad( input_ids=input_ids, labels=label_features, return_tensors="pt" ) batch["labels"] = batch["labels"].masked_fill( batch.decoder_attention_mask.unsqueeze(-1).ne(1), -100 ) del batch["decoder_attention_mask"] if model.config.reduction_factor > 1: target_lengths = torch.tensor( [len(feature["input_values"]) for feature in label_features] ) target_lengths = target_lengths.new( [ length - length % model.config.reduction_factor for length in target_lengths ] ) max_length = max(target_lengths) batch["labels"] = batch["labels"][:, :max_length] batch["speaker_embeddings"] = torch.tensor(speaker_features) return batch data_collator = TTSDataCollatorWithPadding(processor=processor) ``` I checked the batch returned by the data collator with 16 examples and it seems to check out: ``` {'input_ids': torch.Size([16, 188]), 'attention_mask': torch.Size([16, 188]), 'labels': torch.Size([16, 628, 80]), 'speaker_embeddings': torch.Size([16, 512])} ``` I suspect it must be something to do with the DataLoader, or something else obvious that I just cannot wrap my head around. Any help is appreciated. ### Expected behavior The fine-tuning should proceed as per usual. I fine-tuned SpeechT5 on Vietnamese TTS once before but not with a custom tokenizer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28189/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28188
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28188/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28188/comments
https://api.github.com/repos/huggingface/transformers/issues/28188/events
https://github.com/huggingface/transformers/issues/28188
2,052,983,589
I_kwDOCUB6oc56Xgsl
28,188
RuntimeError: FlashAttention only supports Ampere GPUs or newer.
{ "login": "bilalghanem", "id": 47889448, "node_id": "MDQ6VXNlcjQ3ODg5NDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/47889448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bilalghanem", "html_url": "https://github.com/bilalghanem", "followers_url": "https://api.github.com/users/bilalghanem/followers", "following_url": "https://api.github.com/users/bilalghanem/following{/other_user}", "gists_url": "https://api.github.com/users/bilalghanem/gists{/gist_id}", "starred_url": "https://api.github.com/users/bilalghanem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bilalghanem/subscriptions", "organizations_url": "https://api.github.com/users/bilalghanem/orgs", "repos_url": "https://api.github.com/users/bilalghanem/repos", "events_url": "https://api.github.com/users/bilalghanem/events{/privacy}", "received_events_url": "https://api.github.com/users/bilalghanem/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I got the same error on kaggle's notebook.", "Hey! Not sure I understand the issue? \r\n> RuntimeError: FlashAttention only supports Ampere GPUs or newer. \r\n\r\nMeans that flash attention implementation that you install does not support your GPU yet! (either too old or too new). \r\nI would rather look into the flash attention repo for the support to specific hardware not here! 🤗 ", "I hit the same issue on Kaggle notebook. I'm using FlashAttention2 and the official repo README.md said\r\n\r\n```\r\nFlashAttention-2 currently supports:\r\n\r\nAmpere, Ada, or Hopper GPUs (e.g., A100, RTX 3090, RTX 4090, H100). Support for Turing GPUs (T4, RTX 2080) is coming soon, please use FlashAttention 1.x for Turing GPUs for now.\r\nDatatype fp16 and bf16 (bf16 requires Ampere, Ada, or Hopper GPUs).\r\nAll head dimensions up to 256. Head dim > 192 backward requires A100/A800 or H100/H800.\r\n```\r\n\r\nhttps://github.com/Dao-AILab/flash-attention\r\n\r\nSo, we cannot use it on Kaggle environment, especially with b16.", "Is there a way to check through python if flash attention is supported?\r\nI just want my code to use this parameter when deployed in a machine where flash attention is supported and vice versa.\r\n", "@ahassaine If a models supports flash attention, it will have the private attribute `_supports_flash_attn_2` set to `True` e.g. [like here for bark](https://github.com/huggingface/transformers/blob/39c3c0a72af6fbda5614dde02ff236069bb79827/src/transformers/models/bark/modeling_bark.py#L487).", "> @ahassaine If a models supports flash attention, it will have the private attribute `_supports_flash_attn_2` set to `True` e.g. [like here for bark](https://github.com/huggingface/transformers/blob/39c3c0a72af6fbda5614dde02ff236069bb79827/src/transformers/models/bark/modeling_bark.py#L487).\r\n\r\nI think he means, to see if the gpu supports flash attention imp.", "@bilalghanem My mistake! Read too quickly. I don't think there's an easy way to do this as [the checks are in cpp](https://github.com/Dao-AILab/flash-attention/blob/197f2083a2f0953af9319cf4ce32d0bf2aae4bd8/csrc/flash_attn/flash_api.cpp#L303). ", "@amyeroberts \r\nThat's actually very helpful. So in python, that would be:\r\n```\r\nimport torch\r\ndef supports_flash_attention(device_id):\r\n \"\"\"Check if a GPU supports FlashAttention.\"\"\"\r\n major, minor = torch.cuda.get_device_capability(device_id)\r\n \r\n # Check if the GPU architecture is Ampere (SM 8.x) or newer (SM 9.0)\r\n is_sm8x = major == 8 and minor >= 0\r\n is_sm90 = major == 9 and minor == 0\r\n\r\n return is_sm8x or is_sm90\r\n```\r\nwith `device_id` being `0` for the first gpu, `1` for the second...", "@ahassaine Right! You can certainly translate to get a python equivalent. One area of difficulty would be making sure the custom validation is up-to-date with the same checks in the FA2 library, but it will still give you a minimal set of compatible hardware. " ]
1,703
1,706
1,704
NONE
null
### System Info I am trying to run the following code: ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer # Configs device = "cuda:7" model_name = "openchat/openchat_3.5" model = AutoModelForCausalLM.from_pretrained(model_name, device_map=device, load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16, attn_implementation="flash_attention_2") tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left') ``` I can load the model completely fine, but when I want to generate, I get this error: > --------------------------------------------------------------------------- > RuntimeError Traceback (most recent call last) > Cell In[3], [line 76](vscode-notebook-cell:?execution_count=3&line=76) > [74](vscode-notebook-cell:?execution_count=3&line=74) model_input_text = template.format(start, html_, end) > [75](vscode-notebook-cell:?execution_count=3&line=75) model_inputs = tokenizer([model_input_text], return_tensors="pt", padding=False).to(device) > ---> [76](vscode-notebook-cell:?execution_count=3&line=76) generated_ids = model.generate(**model_inputs, do_sample=True, top_p=1.0, temperature=0.8, top_k=50, max_new_tokens=1024) > [77](vscode-notebook-cell:?execution_count=3&line=77) model_outputs_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] > [78](vscode-notebook-cell:?execution_count=3&line=78) print(model_outputs_text[model_input_text.rindex("GPT4 Correct Assistant:")+10:]) > > File [~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:115](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:115), in context_decorator.<locals>.decorate_context(*args, **kwargs) > [112](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:112) @functools.wraps(func) > [113](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:113) def decorate_context(*args, **kwargs): > [114](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:114) with ctx_factory(): > --> [115](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/torch/utils/_contextlib.py:115) return func(*args, **kwargs) > > File [~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1764](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1764), in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs) > [1756](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1756) input_ids, model_kwargs = self._expand_inputs_for_generation( > [1757](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1757) input_ids=input_ids, > [1758](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1758) expand_size=generation_config.num_return_sequences, > [1759](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1759) is_encoder_decoder=self.config.is_encoder_decoder, > [1760](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1760) **model_kwargs, > [1761](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1761) ) > [1763](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1763) # 13. run sample > -> [1764](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1764) return self.sample( > [1765](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/transformers/generation/utils.py:1765) input_ids, > ... > [58](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/flash_attn/flash_attn_interface.py:58) None, > [59](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/flash_attn/flash_attn_interface.py:59) ) > [60](https://vscode-remote+ssh-002dremote-002b7b22686f73744e616d65223a22544e414970726d676d743031227d.vscode-resource.vscode-cdn.net/PATH/notebooks/~/PATH/venv/lib/python3.8/site-packages/flash_attn/flash_attn_interface.py:60) return out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state > > RuntimeError: FlashAttention only supports Ampere GPUs or newer. I am working on Ubuntu 20.04 with NVIDIA Quadro RTX 5000. Cuda version: 12.2 NVIDIA-SMI 535.129.03 torch==2.1.2 transformers==4.36.2 ### Who can help? @SunMarc @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Loading an LLM model with enabling fast attention. ### Expected behavior Generate text.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28188/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28188/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28187
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28187/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28187/comments
https://api.github.com/repos/huggingface/transformers/issues/28187/events
https://github.com/huggingface/transformers/pull/28187
2,052,849,458
PR_kwDOCUB6oc5imDdC
28,187
Update YOLOS slow test values
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,703
1,703
COLLABORATOR
null
# What does this PR do? Updates the test values for YOLOS after the merging in of #27663 to resolve failing slow model tests on nightly. Some small value changes are expected because of the change of output image size from the image processor. As a sense check, plotted the output of the object detection model in the tests to visualise differences to confirm they are small and still sensible: **Old detections** ![yolos_old](https://github.com/huggingface/transformers/assets/22614925/009b8800-356e-4f80-8813-b1b3579abe39) **New detections** ![yolos_new](https://github.com/huggingface/transformers/assets/22614925/aebf728f-5582-4260-933d-eee2fce87785)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28187/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28187/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28187", "html_url": "https://github.com/huggingface/transformers/pull/28187", "diff_url": "https://github.com/huggingface/transformers/pull/28187.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28187.patch", "merged_at": 1703182627000 }
https://api.github.com/repos/huggingface/transformers/issues/28186
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28186/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28186/comments
https://api.github.com/repos/huggingface/transformers/issues/28186/events
https://github.com/huggingface/transformers/pull/28186
2,052,807,186
PR_kwDOCUB6oc5il6Gk
28,186
Fix slow backbone tests - out_indices must match stage name ordering
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28186). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,703
1,703
COLLABORATOR
null
# What does this PR do? Fixes slow autobackbone tests failing on nightly after #27606 #27606 enforces the out_indices and out_features to be in the same order as the stage names. This ensures backbone selects the correct features in its forward pass.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28186/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28186/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28186", "html_url": "https://github.com/huggingface/transformers/pull/28186", "diff_url": "https://github.com/huggingface/transformers/pull/28186.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28186.patch", "merged_at": 1703182611000 }
https://api.github.com/repos/huggingface/transformers/issues/28185
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28185/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28185/comments
https://api.github.com/repos/huggingface/transformers/issues/28185/events
https://github.com/huggingface/transformers/pull/28185
2,052,665,966
PR_kwDOCUB6oc5ila8v
28,185
Cache: dynamic cache with cross attention and UMT5 `Cache` support
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28185). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,706
null
MEMBER
null
# What does this PR do? #28065 was becoming messy due to all Bart "copied from" dependencies, so this PR is a tiny version of it. This PR: 1. Introduces `DynamicCacheWithCrossAttention`, which expands `DynamicCache` [cache object equivalent to the previous `past_key_values` input/output] with the ability to hold a cross-attention cache. This design was intentional: most LLMs (and now even multimodel models) tend to be decoder-only, so this separation will keep the cache class for decoder-only models simpler. It also enables us to be more strict -- in #28065 I've caught an unintended cache deletion in Whisper thanks to the increased specificity! 2. Adds `Cache` support to `modeling_umt5.py`, which is a form to test whether `DynamicCacheWithCrossAttention` is equivalent to the previous cache. These changes are the equivalent of the modeling changes in #26681, but for encoder-decoder models. ______________________________________ Local tests run: 1. `RUN_SLOW=1 py.test tests/models/umt5/test_modeling_umt5.py -vv` [Note: adds a test to ensure we keep the same results as in `main`]
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28185/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28185/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28185", "html_url": "https://github.com/huggingface/transformers/pull/28185", "diff_url": "https://github.com/huggingface/transformers/pull/28185.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28185.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28184
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28184/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28184/comments
https://api.github.com/repos/huggingface/transformers/issues/28184/events
https://github.com/huggingface/transformers/issues/28184
2,052,603,134
I_kwDOCUB6oc56WDz-
28,184
LLaVa Left Padding Got Weird Results
{ "login": "SeungyounShin", "id": 20262536, "node_id": "MDQ6VXNlcjIwMjYyNTM2", "avatar_url": "https://avatars.githubusercontent.com/u/20262536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SeungyounShin", "html_url": "https://github.com/SeungyounShin", "followers_url": "https://api.github.com/users/SeungyounShin/followers", "following_url": "https://api.github.com/users/SeungyounShin/following{/other_user}", "gists_url": "https://api.github.com/users/SeungyounShin/gists{/gist_id}", "starred_url": "https://api.github.com/users/SeungyounShin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SeungyounShin/subscriptions", "organizations_url": "https://api.github.com/users/SeungyounShin/orgs", "repos_url": "https://api.github.com/users/SeungyounShin/repos", "events_url": "https://api.github.com/users/SeungyounShin/events{/privacy}", "received_events_url": "https://api.github.com/users/SeungyounShin/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada @ArthurZucker ", "hi @SeungyounShin \r\nWhat transformers version are you using?\r\nin the first input `prompt1 = \"<image>\\n<image>\\nUSER: What's the the difference of two images?\\nASSISTANT:\"` you passed two images; note multi-image query is not well supported for Llava-like models as they have not excplicitly trained for that according to the authors.", "btw you can also to `inputs = inputs.to(\"cuda\")`", "I am currently using `4.37.0.dev0`\r\n\r\n\r\n```python\r\nprompt1 = \"<image>\\n<image>\\nUSER: What's the the difference of two images?\\nASSISTANT:\"\r\nprompt2 = \"<image>\\n<image>\\nUSER: Describe the two images.\\nASSISTANT:\"\r\n# prompt3 = \"<image>\\nUSER: Describe the image.\\nASSISTANT:\"\r\nurl1 = \"https://images.unsplash.com/photo-1552053831-71594a27632d?q=80&w=3062&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nurl2 = \"https://images.unsplash.com/photo-1617258683320-61900b281ced?q=80&w=3087&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nimage1 = Image.open(requests.get(url1, stream=True).raw)\r\nimage2 = Image.open(requests.get(url2, stream=True).raw)\r\n\r\ninputs = processor(\r\n text=[prompt1, prompt2],\r\n images=[image1, image2, image1, image2],\r\n return_tensors=\"pt\",\r\n padding=True,\r\n)\r\n```\r\nThis will output :\r\n```text\r\n [1]\r\nUSER: What's the the difference of two images?\r\nASSISTANT: In the two images, the primary difference is the presence of a flower in the dog's mouth. In the first image, the dog is holding a flower in its mouth, while in the second image, the dog is not holding a flower. This subtle change in the scene highlights the dog's interaction with the flower, and it may evoke different emotions or interpretations depending on the viewer's perspective.\r\n\r\n [2]\r\nUSER: Describe the two images.\r\nASSISTANT: The two images show a cute brown and white dog standing on a grassy hill. In one image, the dog is holding a green leaf in its mouth, while in the other, it is holding a yellow flower. Both images capture the dog's playful and curious nature as it interacts with its surroundings.\r\n```\r\n\r\n\r\nThe implementation appears to be functioning correctly. Upon reviewing, I noticed that the [final embeddingl](https://github.com/huggingface/transformers/blob/260b9d2179ea9592b24ff102ab9ea672f6a4f3ef/src/transformers/models/llava/modeling_llava.py#L304) effectively supports multiple images.", "[modeling_llava.py#L304](https://github.com/huggingface/transformers/blob/260b9d2179ea9592b24ff102ab9ea672f6a4f3ef/src/transformers/models/llava/modeling_llava.py#L304) is this expected behavior?\r\n\r\nConsidering the relationship between image patches. Specifically, if image patch 100 references image patch 84, it appears there shouldn't be any issue. I haven't come across any mention of masking related to image patches in the LLaVa paper. Is this approach used in the official implementation of `LLaVa`?\r\n\r\n\r\n**It would be beneficial to have an example of fine-tuning for multi-images. Would you be open to accepting a Pull Request (PR) that includes an example of fine-tuning on multi-images?\r\n\r\n", "Hi @SeungyounShin \r\nIndeed it seems you are correct, despite the model not being explicitly trained for this, it seems to perform well on some examples as you shared, which is very nice! cc @haotian-liu for visibility! 
\r\nI suspect something is off with SDPA (`torch.scaled_dot_product_attention` not being able to deal with arbitraty attention masks. I need some time to properly investigate how to fix this. Meanwhile you can do two things\r\n1- Use the `eager` attention implementation:\r\n\r\n```diff\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import AutoProcessor, LlavaForConditionalGeneration\r\n\r\nmodel = LlavaForConditionalGeneration.from_pretrained(\"llava-hf/llava-1.5-7b-hf\").to(\r\n+ model = LlavaForConditionalGeneration.from_pretrained(\"llava-hf/llava-1.5-7b-hf\", attn_implementation=\"eager\").to(\r\n \"cuda\"\r\n)\r\nprocessor = AutoProcessor.from_pretrained(\"llava-hf/llava-1.5-7b-hf\")\r\n\r\nprompt1 = \"<image>\\n<image>\\nUSER: What's the the difference of two images?\\nASSISTANT:\"\r\nprompt2 = \"<image>\\nUSER: Describe the image.\\nASSISTANT:\"\r\nprompt3 = \"<image>\\nUSER: Describe the image.\\nASSISTANT:\"\r\nurl1 = \"https://images.unsplash.com/photo-1552053831-71594a27632d?q=80&w=3062&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nurl2 = \"https://images.unsplash.com/photo-1617258683320-61900b281ced?q=80&w=3087&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nimage1 = Image.open(requests.get(url1, stream=True).raw)\r\nimage2 = Image.open(requests.get(url2, stream=True).raw)\r\n\r\ninputs = processor(\r\n text=[prompt1, prompt2, prompt3],\r\n images=[image1, image2, image1, image2],\r\n return_tensors=\"pt\",\r\n padding=True,\r\n)\r\nfor key in inputs:\r\n inputs[key] = inputs[key].to(\"cuda\")\r\n print(key, inputs[key].shape)\r\n\r\n# Generate\r\ngenerate_ids = model.generate(**inputs, max_length=512)\r\noutputs = processor.batch_decode(\r\n generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False\r\n)\r\n\r\nprint(outputs)\r\n```\r\n2- Process the prompts one-by-one instead of performing batched generation\r\n\r\ncc @fxmarty as well as this is about SDPA", "@younesbelkada is this in the end not related to sdpa?", "@fxmarty I think it is related to SDPA as Llava model creates non-standard attention mask and the script fails for SDPA ", "@younesbelkada i also found similar issue when i tried to implement batch inference. do you know why it creates non-standard attention mask? it should theoretically use the standard autoregressive mask?", "@haotian-liu I think this happens in the case you try to have different numbers of images per prompt + multi-turn chat. If let's say you have 2 images in the first prompt and one image on the second prompt, your attention mask will look like\r\n\r\n```bash\r\n[image 1] [prompt 1] [image 2] [prompt 2]\r\n0 0 0.. 0 1 1 1 1 1 .. 1 0 0 0 ... 0 1 1 1 1 1 ... 1\r\n[image 3] [prompt 3]\r\n0 0 0.. 0 1 1 1 1 1 .. 
1\r\n```\r\n\r\nI think the reason that for the prompt\r\n```python\r\nprompt1 = \"<image>\\n<image>\\nUSER: What's the the difference of two images?\\nASSISTANT:\"\r\nprompt2 = \"<image>\\nUSER: Describe the image.\\nASSISTANT:\"\r\nprompt3 = \"<image>\\nUSER: Describe the image.\\nASSISTANT:\"\r\nurl1 = \"https://images.unsplash.com/photo-1552053831-71594a27632d?q=80&w=3062&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nurl2 = \"https://images.unsplash.com/photo-1617258683320-61900b281ced?q=80&w=3087&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nimage1 = Image.open(requests.get(url1, stream=True).raw)\r\nimage2 = Image.open(requests.get(url2, stream=True).raw)\r\n\r\ninputs = processor(\r\n text=[prompt1, prompt2, prompt3],\r\n images=[image1, image2, image1, image2],\r\n return_tensors=\"pt\",\r\n padding=True,\r\n)\r\n```\r\nWe are getting a non-standard attention mask is the presence of `\\n` between the two `<image>` tokens for `prompt1`. Can you try out the following:\r\n```diff\r\n- prompt1 = \"<image>\\n<image>\\nUSER: What's the the difference of two images?\\nASSISTANT:\"\r\n+ prompt1 = \"<image><image>\\nUSER: What's the the difference of two images?\\nASSISTANT:\"\r\nprompt2 = \"<image>\\nUSER: Describe the image.\\nASSISTANT:\"\r\nprompt3 = \"<image>\\nUSER: Describe the image.\\nASSISTANT:\"\r\nurl1 = \"https://images.unsplash.com/photo-1552053831-71594a27632d?q=80&w=3062&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nurl2 = \"https://images.unsplash.com/photo-1617258683320-61900b281ced?q=80&w=3087&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D\"\r\nimage1 = Image.open(requests.get(url1, stream=True).raw)\r\nimage2 = Image.open(requests.get(url2, stream=True).raw)\r\n\r\ninputs = processor(\r\n text=[prompt1, prompt2, prompt3],\r\n images=[image1, image2, image1, image2],\r\n return_tensors=\"pt\",\r\n padding=True,\r\n)\r\n```\r\nThat way the attention mask will become standard I believe cc @haotian-liu what do you think?", "@younesbelkada Thank you! i thought it may be due to a different reason, as the strange behavior occured when I previously tried to do batch inference with one image for each sample. I'll try to find another example later to see if it still exists." ]
1,703
1,707
null
NONE
null
### System Info Reproduce : ```python from PIL import Image import requests from transformers import AutoProcessor, LlavaForConditionalGeneration model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-7b-hf").to( "cuda" ) processor = AutoProcessor.from_pretrained("llava-hf/llava-1.5-7b-hf") prompt1 = "<image>\n<image>\nUSER: What's the the difference of two images?\nASSISTANT:" prompt2 = "<image>\nUSER: Describe the image.\nASSISTANT:" prompt3 = "<image>\nUSER: Describe the image.\nASSISTANT:" url1 = "https://images.unsplash.com/photo-1552053831-71594a27632d?q=80&w=3062&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" url2 = "https://images.unsplash.com/photo-1617258683320-61900b281ced?q=80&w=3087&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" image1 = Image.open(requests.get(url1, stream=True).raw) image2 = Image.open(requests.get(url2, stream=True).raw) inputs = processor( text=[prompt1, prompt2, prompt3], images=[image1, image2, image1, image2], return_tensors="pt", padding=True, ) for key in inputs: inputs[key] = inputs[key].to("cuda") print(key, inputs[key].shape) # Generate generate_ids = model.generate(**inputs, max_length=512) outputs = processor.batch_decode( generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print(outputs) ``` This will outputs : ```Result ["\n \nUSER: What's the the difference of two images?\nASSISTANT: In the two images, the primary difference is the presence of a flower in the dog's mouth. In the first image, the dog is holding a flower in its mouth, while in the second image, the dog is not holding a flower. This subtle change in the scene highlights the dog's interaction with the flower, and it may evoke different emotions or interpretations depending on the viewer's perspective.", '\nUSER: Describe the image.\nASSISTANT: The dog is a \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n', '\nUSER: Describe the image.\nASSISTANT: The \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nЪ schließ'] ``` I checked images are rightly placed. but for batch2 and 3 It's consist of lots of padding (False x 583) [False x 583, False, True x 576 , False, False, False, False, False, False, False, False, False, False, False, False, False, False] I guess llava doesn't see this kind of prefix on training phase would result in weird behavior. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction stated at above ### Expected behavior skip
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28184/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28184/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28183
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28183/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28183/comments
https://api.github.com/repos/huggingface/transformers/issues/28183/events
https://github.com/huggingface/transformers/issues/28183
2,052,577,262
I_kwDOCUB6oc56V9fu
28,183
Bug in new version transformers 4.34.0-4.36.2
{ "login": "JAX627", "id": 113168400, "node_id": "U_kgDOBr7QEA", "avatar_url": "https://avatars.githubusercontent.com/u/113168400?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JAX627", "html_url": "https://github.com/JAX627", "followers_url": "https://api.github.com/users/JAX627/followers", "following_url": "https://api.github.com/users/JAX627/following{/other_user}", "gists_url": "https://api.github.com/users/JAX627/gists{/gist_id}", "starred_url": "https://api.github.com/users/JAX627/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JAX627/subscriptions", "organizations_url": "https://api.github.com/users/JAX627/orgs", "repos_url": "https://api.github.com/users/JAX627/repos", "events_url": "https://api.github.com/users/JAX627/events{/privacy}", "received_events_url": "https://api.github.com/users/JAX627/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @JAX627, thanks for opening an issue! \r\n\r\nBy default, a `pytorch_model.bin` file will no longer be saved out as `safetensors` is now the default format. To save a pytorch file out instead, you can explicitly use the `safe_serialization` argument when calling `save_pretrained`:\r\n\r\n```py\r\nmodel.save_pretrained(checkpoint_name, safe_serialization=False)\r\n``` ", "pytorch 2.1.2, tokenizer 0.14.1, transformer 4.36.2. \r\n\r\nUsing \r\n\r\n```\r\nmodel.save_pretrained(checkpoint_name, safe_serialization=False)\r\n```\r\n\r\nThe `pytorch_model.bin` is still missing. \r\n\r\ngenerated files\r\n\r\n```\r\nmodel.safetensors optimizer.pt rng_state.pth scheduler.pt trainer_state.json training_args.bin\r\n```\r\n", "Issue resolved. The problem is that when constructing the trainer, `save_safetensors=False` should be set. Otherwise, the above `safe_serialization=False` will not work. \r\n\r\nhttps://huggingface.co./docs/transformers/v4.36.1/en/main_classes/trainer#transformers.TrainingArguments.save_safetensors", "Happy New Year!\r\n\r\n@WilliamYi96 + amyeroberts\r\nMany thanks for your information, helps a lot! \r\nI'm having the same issue when fine-tuning whisper models.\r\nDo you maybe know: \r\n 1. Does 'save_safetensors=False' also work for older transformers<4.34 - i.e. is it simply ignored or does it cause an error?\r\n 2. Is there any possibility (tool, converter, code) to extract the (original) pytorch_model.bin from a model.safetensors file??\r\n many thanks for any hint!\r\n ", "Hi @welliX, happy new year! \r\n\r\n1. Yes, it works for older versions of transformers. It was added as an argument to `save_retrained` in #19175, which was part of v4.23. However, its first iterations were as an experimental feature and not guaranteed to be bug free. We strongly advise using the most recent versions of transformers, as many different issues in the saving and serialization of the models have been ironed out since then. \r\n\r\n2. You can simply load a model that was saved in safetensors and save it out again: \r\n```py\r\nfrom transformers import LlavaForConditionalGeneration\r\n\r\ncheckpoint = \"llava-hf/llava-1.5-7b-hf\"\r\nmodel = LlavaForConditionalGeneration.from_pretrained(checkpoint)\r\n\r\nmodel.save_pretrained('new_llava_model', safe_serialization=False)\r\n```", "Dear Amy, \r\nmany thanks for your speedy response; this was quite helpful!\r\nI tried your proposal 2. and in principle it does s.th., i.e. 
it produces following directory ;-) \r\nHowever with 6 *huuuge* pytorch*bin files:\r\n\r\n> 1024 -rwxr-xr-x 1 akiessling users 1746 Jan 3 16:00 config.json*\r\n> 1024 -rwxr-xr-x 1 akiessling users 3531 Jan 3 16:00 generation_config.json*\r\n> 4766720 -rwxr-xr-x 1 akiessling users 4880137238 Jan 3 16:01 pytorch_model-00001-of-00006.bin*\r\n> 4744192 -rwxr-xr-x 1 akiessling users 4857219539 Jan 3 16:03 pytorch_model-00002-of-00006.bin*\r\n> 4744192 -rwxr-xr-x 1 akiessling users 4857219603 Jan 3 16:06 pytorch_model-00003-of-00006.bin*\r\n> 4744192 -rwxr-xr-x 1 akiessling users 4857219603 Jan 3 16:08 pytorch_model-00004-of-00006.bin*\r\n> 4744192 -rwxr-xr-x 1 akiessling users 4857219603 Jan 3 16:11 pytorch_model-00005-of-00006.bin*\r\n> 3851264 -rwxr-xr-x 1 akiessling users 3942840241 Jan 3 16:13 pytorch_model-00006-of-00006.bin*\r\n> 1024 -rwxr-xr-x 1 akiessling users 70128 Jan 3 16:13 pytorch_model.bin.index.json*\r\n\r\nIs this what as expected?\r\nSo far my fine-tuning delivered a model dir/ with only a single file (and smaller) 'pytorch_model.bin' ,\r\nwhich I than used directly for inference (following transcribe_audio.py from https://github.com/vasistalodagala/whisper-finetune).\r\nAre all the six files necessary for inference? Can they be merged into one single pytorch_model.bin ?\r\n\r\nmany thanks again for your help!\r\n /Andi\r\n\r\n", "Hi amyeroberts,\r\nI've tried to load/use one of the 6 pytorch_model-*.bin just as before the pytorch_model.bin model for infrerence - of course it is not working:\r\n\r\n File \"/home/run/icsf/icsf_inference/index.py\", line 618, in loadASRmodel\r\n ASRtranscriber=ASRinf.Init_ASRinference(\"openai/whisper-tiny\", icsfitems['ASR_Activemodel'], 'English', 0)\r\n File \"/home/run/icsf/icsf_inference/./whisper/ASRinference.py\", line 48, in Init_ASRinference\r\n transcribe = loadmodel(args.ckpt_dir, args.device, tokenizer)\r\n File \"/home/run/icsf/icsf_inference/./whisper/ASRinference.py\", line 25, in loadmodel\r\n t = pipeline(task=\"automatic-speech-recognition\",\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/pipelines/__init__.py\", line 741, in pipeline\r\n config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py\", line 1039, in from_pretrained\r\n config_class = CONFIG_MAPPING[config_dict[\"model_type\"]]\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py\", line 734, in __getitem__\r\n raise KeyError(key)\r\nKeyError: 'llava'\r\n\r\nProbably it's the better idea to use the (new) model.safetensors file directly for inference. I guess I have to use s.th. like\r\n use_safetensors=True\r\nin the pipeline() function call, right? Or do I have to switch this somewhere else?\r\n\r\nIn my last try some weeks ago when doing that I got an error \r\nTypeError: AutomaticSpeechRecognitionPipeline._sanitize_parameters() got an unexpected keyword argument 'use_safetensors'\r\n\r\nmaybe it was because of a (still) older transformer version?\r\n\r\nKind regards, Andi\r\n\r\n", "Hi @welliX \r\n\r\n>I've tried to load/use one of the 6 pytorch_model-*.bin just as before the pytorch_model.bin model for infrerence - of course it is not working:\r\n\r\njust from the error messages, it looks like you're trying to load a llava model into the ASR pipeline. Llava is a vision-language model and so isn't compatible with this task. 
The llava checkpoint I provided in my example was just for demonstration. You'll want to use the checkpoint for whichever ASR model you were trying to get the pytorch files for.\r\n\r\n> Probably it's the better idea to use the (new) model.safetensors file directly for inference. \r\n\r\nyes\r\n\r\n> I guess I have to use s.th. like use_safetensors=True in the pipeline() function call, right? Or do I have to switch this somewhere else?\r\n\r\nThat shouldn't be necessary. Using safetensors is the default behaviour in the transformers library\r\n\r\n> In my last try some weeks ago when doing that I got an error ... \r\n> maybe it was because of a (still) older transformer version?\r\n\r\nI really couldn't tell you without know the code you were running and the versions. However, I suspect you'd get the same error now. The error is being raised in `_sanitze_parameters` which is a method responsible for allocating kwargs to different parts of the pipeline. This is because passing kwargs to the pipeline can be ambiguous - what is this argument supposed to be configuring? If it's raising an error, it means the pipeline doesn't know how to use the argument. In general, if you want to start customizing parts of the pipeline, it might be easiest to work with the code more directly e.g. [in this ASR example](https://huggingface.co./docs/transformers/model_doc/whisper#transformers.WhisperForCausalLM.forward.example). ", "Hi @amyeroberts,\r\nmany thanks for your detailed answer - yes, you are right, I'll try the (new) way via \r\nmodel.safetensors file now.\r\nHope it'll work out.\r\n\r\nAnother question: The whisper models are pretty huge. \r\nThus when loading more than one model (for switching between different inference models) it often happens that CUDA is out of memory (see example below). \r\nDo you know perhaps an easy way how to free the CUDA memory of an earlier used model before loading a new one? \r\nI found the following but it doesn't really hAve a big effect:\r\n import torch, gc\r\n gc.collect()\r\n torch.cuda.empty_cache()\r\n\r\nkind regards!\r\n\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 7.75 GiB of which 30.19 MiB is free. Process 71391 has 3.39 GiB memory in use. Including non-PyTorch memory, this process has 1.95 GiB memory in use. Of the allocated memory 1.80 GiB is allocated by PyTorch, and 2.84 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n", "Hi @welliX, these are questions best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
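The fix discussed in this thread comes down to two settings; the following is a minimal, hedged sketch in Python (the "whisper-finetuned" directory names are placeholders, and it assumes a recent transformers release where safetensors is the default format):

```python
# Sketch of the two fixes from the thread above; checkpoint/directory names are placeholders.
from transformers import TrainingArguments, WhisperForConditionalGeneration

# 1) When training with Trainer, opt out of safetensors at the TrainingArguments level,
#    otherwise checkpoints keep being written as model.safetensors and
#    save_pretrained(..., safe_serialization=False) alone is not enough.
args = TrainingArguments(output_dir="whisper-finetuned", save_safetensors=False)

# 2) To recover pytorch_model.bin files from an existing safetensors checkpoint,
#    load the model and save it again without safe serialization.
model = WhisperForConditionalGeneration.from_pretrained("whisper-finetuned")
model.save_pretrained("whisper-finetuned-pt", safe_serialization=False)
```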
1,703
1,707
1,707
NONE
null
### System Info ver: transformers 4.34.0-4.36.2 problem: when finetuning the chatglm3 model, finetune.py doesn't generate a pytorch_model.bin file in the output dir, as pointed out in https://github.com/THUDM/ChatGLM3/discussions/253#discussioncomment-7837093 it seems like a problem in the modeling_utils.py file, and it can be solved by pip install transformers==4.33.0; it seems like higher transformers versions are not fully compatible with chatglm3 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. download the chatglm3-6b-32k model 2. pip install transformers 4.34.0-4.36.2 3. follow the finetune steps in https://github.com/THUDM/ChatGLM3/tree/main/finetune_chatmodel_demo 4. after finetuning finishes, there is no pytorch_model.bin file in the output dir 5. pip install transformers==4.33.0 6. follow the finetune steps in https://github.com/THUDM/ChatGLM3/tree/main/finetune_chatmodel_demo 7. after finetuning finishes, the pytorch_model.bin file is in the output dir ### Expected behavior solve the problem in the new transformers versions
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28183/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28183/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28182
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28182/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28182/comments
https://api.github.com/repos/huggingface/transformers/issues/28182/events
https://github.com/huggingface/transformers/pull/28182
2,052,427,965
PR_kwDOCUB6oc5ikmUG
28,182
[`Docs`] Add 4-bit serialization docs
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28182). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? Follow-up work from: https://github.com/huggingface/transformers/pull/26037 Adds a few lines to the documentation about serializing 4-bit models on the Hub cc @amyeroberts @stevhliu
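As context for the docs change, a hedged sketch of what 4-bit serialization looks like from the user side (the model id and Hub repo name below are placeholders; this assumes bitsandbytes and accelerate are installed and a CUDA device is available):

```python
# Sketch only: load a model in 4-bit with bitsandbytes and push the quantized weights to the Hub.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",                      # placeholder checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
model.push_to_hub("my-user/opt-350m-bnb-4bit")  # placeholder repo name
```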
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28182/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28182/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28182", "html_url": "https://github.com/huggingface/transformers/pull/28182", "diff_url": "https://github.com/huggingface/transformers/pull/28182.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28182.patch", "merged_at": 1703236713000 }
https://api.github.com/repos/huggingface/transformers/issues/28181
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28181/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28181/comments
https://api.github.com/repos/huggingface/transformers/issues/28181/events
https://github.com/huggingface/transformers/pull/28181
2,052,412,895
PR_kwDOCUB6oc5iki_r
28,181
update the logger message with accordant weights_file_name
{ "login": "izyForever", "id": 43177954, "node_id": "MDQ6VXNlcjQzMTc3OTU0", "avatar_url": "https://avatars.githubusercontent.com/u/43177954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/izyForever", "html_url": "https://github.com/izyForever", "followers_url": "https://api.github.com/users/izyForever/followers", "following_url": "https://api.github.com/users/izyForever/following{/other_user}", "gists_url": "https://api.github.com/users/izyForever/gists{/gist_id}", "starred_url": "https://api.github.com/users/izyForever/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/izyForever/subscriptions", "organizations_url": "https://api.github.com/users/izyForever/orgs", "repos_url": "https://api.github.com/users/izyForever/repos", "events_url": "https://api.github.com/users/izyForever/events{/privacy}", "received_events_url": "https://api.github.com/users/izyForever/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts \r\n\r\nI can't figure out why I failed the tests_torch for I only changed the logger info.😖 \r\n\r\n![)6)H WRK7EVWMBBDTCFV 8](https://github.com/huggingface/transformers/assets/43177954/e8e7294c-ef53-4265-a486-5401b56db14a)\r\n", "> Thanks for updating this and contributing to improving the code!\r\n> \r\n> For the failing (unrelated) tests, there was a fix merge into main recently - #28202. Could you rebase and push the updated branch? This should resolve and trigger another CI run\r\n\r\n@amyeroberts It works!It seems the workflow need extra approval to finish merge.", "@izyForever Yes - the doc tests aren't automatically run for security reasons. I've approved the workflow now - once that passes I'll merge in. \r\n\r\nThanks again for your contribution! " ]
1,703
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? Update the logger message to use the corresponding weights_file_name. Fixes # (issue) https://github.com/huggingface/transformers/issues/28076 @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28181/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28181/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28181", "html_url": "https://github.com/huggingface/transformers/pull/28181", "diff_url": "https://github.com/huggingface/transformers/pull/28181.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28181.patch", "merged_at": 1703257510000 }
https://api.github.com/repos/huggingface/transformers/issues/28180
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28180/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28180/comments
https://api.github.com/repos/huggingface/transformers/issues/28180/events
https://github.com/huggingface/transformers/issues/28180
2,052,332,919
I_kwDOCUB6oc56VB13
28,180
Verify interpolation of image processors
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
open
false
null
[]
[ "@NielsRogge Thanks for opening the issue! \r\n\r\nIt's fine to open up to the community but you'll need to add a checklist of the image processors so it's clear who is working on what and what's done as well as ideally some instructions on what it means for each one to be \"done\" e.g. making sure to run slow tests for models. ", "@NielsRogge ,\r\n\r\nIf I understand it correctly, we need to match the interpolation:\r\n\r\nFor example for convnext: [convnext](https://github.com/huggingface/transformers/blob/main/src/transformers/models/convnext/image_processing_convnext.py#L93) should be changed to Bicubic as per [timm/convnext](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/convnext.py#L498) .\r\n\r\nIf that's correct , I can take this up for all the models. Let me know.", "Yes that is correct, see also the [original implementation](https://github.com/facebookresearch/ConvNeXt/blob/048efcea897d999aed302f2639b6270aedf8d4c8/main.py#L107). Thanks for spotting that. Hence feel free to open a PR to update this, along with the image processor created in the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/convnext/convert_convnext_to_pytorch.py#L149-L150). Ideally we assert the pixel values created by it against the original implementation, like done [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/dinov2/convert_dinov2_to_hf.py#L197-L220) for DINOv2.", "Sure! thanks for the pointers, will work on it.", "DieT and DPT default interpolation types matches with the original implementation types to BICUBIC . That's what I see it. Let me know if I overlooked.\r\n\r\n", "@NielsRogge ,\r\n\r\nwould you have a look ?" ]
1,703
1,707
null
CONTRIBUTOR
null
### Feature request As pointed out in https://github.com/huggingface/transformers/pull/27742, some image processors might need a correction on the default interpolation method being used (resampling in Pillow). We could check this on a per-model basis. ### Motivation Interpolation methods have a slight (often minimal) impact on performance. However it would be great to verify this on a per-model basis. e.g. [ViT](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py#L52)'s image processor defaults to BILINEAR but should use BICUBIC as seen [here](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/vision_transformer.py#L1062). We can update the default values of the image processors, but can't update the configs on the hub as this would break people's fine-tuned models. ### Your contribution I could work on this, but this seems like a good first issue for first contributors. To be checked (by comparing against original implementation): - [ ] ViT - [ ] ConvNext - [ ] DeiT - [ ] DPT - [ ] LeViT - [ ] Swin - [ ] Swinv2
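Until the defaults are verified and corrected, a hedged workaround for users of existing checkpoints (hub configs cannot be changed without breaking fine-tuned models) is to override the resampling method when loading the image processor; `google/vit-base-patch16-224` below is just an example checkpoint:

```python
# Sketch: override the default resampling locally instead of editing the hub config.
from transformers import ViTImageProcessor
from transformers.image_utils import PILImageResampling

processor = ViTImageProcessor.from_pretrained(
    "google/vit-base-patch16-224",
    resample=PILImageResampling.BICUBIC,  # instead of the BILINEAR default
)
```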
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28180/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28180/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28179
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28179/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28179/comments
https://api.github.com/repos/huggingface/transformers/issues/28179/events
https://github.com/huggingface/transformers/issues/28179
2,052,091,367
I_kwDOCUB6oc56UG3n
28,179
How to fine tune facebook/esm2_t33_650M_UR50D
{ "login": "Admire7494", "id": 98265794, "node_id": "U_kgDOBdtqwg", "avatar_url": "https://avatars.githubusercontent.com/u/98265794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Admire7494", "html_url": "https://github.com/Admire7494", "followers_url": "https://api.github.com/users/Admire7494/followers", "following_url": "https://api.github.com/users/Admire7494/following{/other_user}", "gists_url": "https://api.github.com/users/Admire7494/gists{/gist_id}", "starred_url": "https://api.github.com/users/Admire7494/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Admire7494/subscriptions", "organizations_url": "https://api.github.com/users/Admire7494/orgs", "repos_url": "https://api.github.com/users/Admire7494/repos", "events_url": "https://api.github.com/users/Admire7494/events{/privacy}", "received_events_url": "https://api.github.com/users/Admire7494/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Admire7494, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,706
1,706
NONE
null
### System Info How to fine tune facebook/esm2_t33_650M_UR50D?It's too big and the model.half() couldn't work. Besids, i always met the error : CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc). Is it possible that the model in the huggingface is wrong? The following is the script: from os.path import join import os import pandas as pd import numpy as np import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torch.utils.data as data import transformers from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer from datasets import Dataset,load_metric from sklearn.model_selection import train_test_split #os.environ['CUDA_VISIBLE_DEVICES'] = '1' CURRENT_DIR = os.getcwd() check_point = join(CURRENT_DIR,"esm1b_t33_650M_UR50S") #Data processing def process_tsv(file): sequences = list() labels = list() df = pd.read_csv(file,sep="\t") for ind in df.index: sequences.append(df["sequence"][ind]) labels.append(df["label"][ind]) return sequences,labels def tokenize_add_label(sequences, labels, tokenizer): """This function takes sequences and labels creates a Dataset containing tokenized sequences and add labels to it args: sequences (str): a list of sequences labels (int): a list of labels tokenizer : a pre-trained tokenizer return: Dataset: tokenized sequences and associated labels)""" sequences_tokenized = tokenizer(sequences, padding=True, truncation=True) sequences_tokenized = torch.float16(sequences_tokenized) labels = torch.tensor(labels) labels = labels.long() sequences_dataset = Dataset.from_dict(sequences_tokenized) sequences_dataset = sequences_dataset.add_column("labels", labels) return sequences_dataset sequences,labels = process_tsv(join(CURRENT_DIR,"example.tsv")) tokenizer = AutoTokenizer.from_pretrained(check_point) sequences_dataset = tokenize_add_label(sequences,labels,tokenizer) num_labels = max(labels)+1 model = AutoModelForSequenceClassification.from_pretrained(check_point,num_labels=num_labels) #device = "cuda" if torch.cuda.is_available() else "cpu" #model.to(device) model.cuda() #model = model.half() #model.enable_input_require_grads() model_name = check_point.split("/")[-1] trainer_dir = f"{model_name}-finetuned-model_esm-1b_on_7beta" if not os.path.exists(trainer_dir): os.mkdir(trainer_dir) batch_size = 1 training_args = transformers.TrainingArguments( output_dir=trainer_dir, # output directory overwrite_output_dir=True, num_train_epochs=3, # total number of training epochs per_device_train_batch_size=batch_size, # batch size per device during training per_device_eval_batch_size=batch_size, # batch size for evaluation learning_rate=2e-5, warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=trainer_dir, # directory for storing logs logging_steps=10, load_best_model_at_end=True, evaluation_strategy="epoch", save_strategy="epoch", save_total_limit=1, metric_for_best_model="accuracy", greater_is_better=True, disable_tqdm=True, gradient_accumulation_steps = 2, gradient_checkpointing=True ) metric = load_metric(join(CURRENT_DIR,"metrics","accuracy/accuracy.py")) def compute_metrics(eval_pred): logits, labels = eval_pred print("logits",logits) print("labels",labels) predictions = np.argmax(logits, axis=-1) print("predictions",predictions) return metric.compute(predictions=predictions, references=labels) trainer = Trainer( model = model, args = 
training_args, train_dataset=sequences_dataset, eval_dataset=sequences_dataset, tokenizer=tokenizer, compute_metrics=compute_metrics, ) model.config.problem_type trainer.train() trainer.state.log_history ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. Some weights of EsmForSequenceClassification were not initialized from the model checkpoint at /home/wangmuqiang/fine_tune_esm2/esm1b_t33_650M_UR50S and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. /home/wangmuqiang/fine_tune_esm2/fine_tune_esm1b_7beta.py:87: FutureWarning: load_metric is deprecated and will be removed in the next major version of datasets. Use 'evaluate.load' instead, from the new library 馃 Evaluate: https://huggingface.co./docs/evaluate metric = load_metric(join(CURRENT_DIR,"metrics","accuracy/accuracy.py")) Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher. /home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/utils/checkpoint.py:429: UserWarning: torch.utils.checkpoint: please pass in use_reentrant=True or use_reentrant=False explicitly. The default value of use_reentrant will be updated to be False in the future. To maintain current behavior, pass use_reentrant=True. It is recommended that you use use_reentrant=False. Refer to docs for more details on the differences between the two variants. warnings.warn( /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [64,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [65,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [66,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [67,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [68,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [69,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292: indexSelectLargeIndex: block: [102,0,0], thread: [70,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
[... the same indexSelectLargeIndex "Assertion `srcIndex < srcSelectDimSize` failed" line from /opt/conda/conda-bld/pytorch_1699449181081/work/aten/src/ATen/native/cuda/Indexing.cu:1292 repeats for many more block/thread indices; the remaining repetitions are omitted ...]
Traceback (most recent call last): File "/home/wangmuqiang/fine_tune_esm2/fine_tune_esm1b_7beta.py", line 108, in <module> trainer.train() File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/transformers/trainer.py", line 1537, in train return inner_training_loop( File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/transformers/trainer.py", line 1854, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/transformers/trainer.py", line 2737, in training_step self.accelerator.backward(loss) File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/accelerate/accelerator.py", line 1905, in backward loss.backward(**kwargs) File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/_tensor.py", line 492, in backward torch.autograd.backward( File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/autograd/function.py", line 288, in apply return user_fn(self, *args) File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 288, in backward torch.autograd.backward(outputs_with_grad, args_with_grad) File "/home/wangmuqiang/.conda/envs/esm/lib/python3.9/site-packages/torch/autograd/__init__.py", line 251, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)` ### Expected behavior The script should run successfully, as it previously did on an RTX 3090.
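A hedged diagnostic sketch for the failure above: repeated `srcIndex < srcSelectDimSize` assertions from Indexing.cu usually mean an embedding lookup received a token id outside the embedding table, and the later CUBLAS error is just the fallout. Checking the tokenized batch against the model's vocabulary size on CPU often pinpoints the mismatch; the checkpoint name and input sequence below are assumptions, not taken from the issue.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical ESM-2 checkpoint; substitute the model actually being fine-tuned.
checkpoint = "facebook/esm2_t33_650M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

batch = tokenizer(["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"], return_tensors="pt")
vocab_size = model.get_input_embeddings().num_embeddings

# If this fails, the tokenizer and model vocabularies are out of sync, which is
# what triggers the device-side index assertions seen in the logs above.
assert int(batch["input_ids"].max()) < vocab_size, "token id exceeds the embedding table"
```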
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28179/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28179/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28178
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28178/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28178/comments
https://api.github.com/repos/huggingface/transformers/issues/28178/events
https://github.com/huggingface/transformers/issues/28178
2,052,081,383
I_kwDOCUB6oc56UEbn
28,178
Call `.destroy()` on `DeepSpeedEngine` somewhere post training
{ "login": "chiragjn", "id": 10295418, "node_id": "MDQ6VXNlcjEwMjk1NDE4", "avatar_url": "https://avatars.githubusercontent.com/u/10295418?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chiragjn", "html_url": "https://github.com/chiragjn", "followers_url": "https://api.github.com/users/chiragjn/followers", "following_url": "https://api.github.com/users/chiragjn/following{/other_user}", "gists_url": "https://api.github.com/users/chiragjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/chiragjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chiragjn/subscriptions", "organizations_url": "https://api.github.com/users/chiragjn/orgs", "repos_url": "https://api.github.com/users/chiragjn/repos", "events_url": "https://api.github.com/users/chiragjn/events{/privacy}", "received_events_url": "https://api.github.com/users/chiragjn/received_events", "type": "User", "site_admin": false }
[ { "id": 2155169140, "node_id": "MDU6TGFiZWwyMTU1MTY5MTQw", "url": "https://api.github.com/repos/huggingface/transformers/labels/trainer", "name": "trainer", "color": "2ef289", "default": false, "description": "" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" }, { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
open
false
null
[]
[ "Gentle ping @pacman100 for your thoughts on this feature addition" ]
1,703
1,708
null
NONE
null
### System Info transformers==4.36.2 accelerate==0.25.0 deepspeed==0.12.5 ### Who can help? I was using deepspeed stage 2 with Trainer and accelerate and at the end of training when the Trainer has been garbage collected, I noticed my GPU VRAM was not clearing even after aggressively calling `gc.collect()` and `torch.cuda.empty_cache()` I spent some time debugging and narrowed it down to deepspeed optimizer not removing hooks on pytorch tensors. I have submitted a PR on Deepspeed: https://github.com/microsoft/DeepSpeed/pull/4858 But to invoke this logic `engine.destroy()` must be called in some place post-training For now, I am manually calling it outside the trainer post-training and can confirm it works, would be nice if Trainer can take care of it or there is some note in the docs. @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - Train any model with Zero 2 + gradient accumulation, delete and let the trainer garbage collect, model parameters would still linger around in the GPU memory ### Expected behavior GPU memory should be reclaimable post training
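A minimal sketch of the manual workaround described above, run after `trainer.train()`. It assumes the usual `Trainer` + DeepSpeed ZeRO setup and that the wrapped `DeepSpeedEngine` is reachable via `trainer.model_wrapped`; that attribute name and the behaviour of `engine.destroy()` depend on the installed transformers/deepspeed versions and are assumptions here, not the officially supported API.

```python
import gc
import torch

# ... build `trainer` with DeepSpeed enabled and run training as usual ...
trainer.train()

# Assumption: with DeepSpeed enabled, `trainer.model_wrapped` is the DeepSpeedEngine.
engine = trainer.model_wrapped
if hasattr(engine, "destroy"):
    engine.destroy()  # ask DeepSpeed to remove its parameter/gradient hooks

del trainer, engine
gc.collect()
torch.cuda.empty_cache()  # VRAM should now be reclaimable
```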
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28178/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28178/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28177
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28177/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28177/comments
https://api.github.com/repos/huggingface/transformers/issues/28177/events
https://github.com/huggingface/transformers/issues/28177
2,052,062,336
I_kwDOCUB6oc56T_yA
28,177
AttributeError: Can't get attribute 'SiLUActivation' on <module 'transformers.activations'
{ "login": "Lokesh-Jatangi", "id": 142205264, "node_id": "U_kgDOCHnhUA", "avatar_url": "https://avatars.githubusercontent.com/u/142205264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lokesh-Jatangi", "html_url": "https://github.com/Lokesh-Jatangi", "followers_url": "https://api.github.com/users/Lokesh-Jatangi/followers", "following_url": "https://api.github.com/users/Lokesh-Jatangi/following{/other_user}", "gists_url": "https://api.github.com/users/Lokesh-Jatangi/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lokesh-Jatangi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lokesh-Jatangi/subscriptions", "organizations_url": "https://api.github.com/users/Lokesh-Jatangi/orgs", "repos_url": "https://api.github.com/users/Lokesh-Jatangi/repos", "events_url": "https://api.github.com/users/Lokesh-Jatangi/events{/privacy}", "received_events_url": "https://api.github.com/users/Lokesh-Jatangi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Lokesh-Jatangi, thanks for raising this issue! \r\n\r\nIs there a reason you're using `torch.load` here? The officially supported way to load checkpoints is through the `from_pretrained` method. ", "The checkpoint stores a pruned model whose structure and weights are different and hence I couldnot use `from_pretrained` method.", "@Lokesh-Jatangi Have you solved your problem?", "@Lokesh-Jatangi We can't guarantee backwards compatibility for a checkpoint which isn't a transformers architecture and isn't loaded through the officially supported API. In order to be able to maintain the repo, there will be objects which we'll move, rename and delete and so pickling in this way may cause issues. \r\n\r\nI'd suggest loading the model on the most recent compatible version of transformers. Updating the model to use torch's silu activation implementation and then resave the model out. I _think_ this should resolve the issue and allow you to load in the model in more recent transformers versions again. " ]
1,703
1,705
1,705
NONE
null
### System Info System info - - `transformers` version: 4.36.2 - Platform: Linux-5.10.0-26-cloud-amd64-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No I am using a custom script which loads a LLaMA checkpoint through torch. `model_orig = torch.load(checkpoint_path)` While unpickling checkpoints in torch, the "SiLUActivation" class is missing from activations.py. This PR https://github.com/huggingface/transformers/pull/27136 removed the SiLUActivation class, mentioning it was redundant. P.S.: With transformers version 4.35.0, loading a checkpoint containing a SiLU activation layer through torch was successful. Find the trace below: ` line 65, in load_model_from_checkpoint model_orig = torch.load(checkpoint_path) File "/opt/conda/envs/adapt/lib/python3.10/site-packages/torch/serialization.py", line 1014, in load return _load(opened_zipfile, File "/opt/conda/envs/adapt/lib/python3.10/site-packages/torch/serialization.py", line 1422, in _load result = unpickler.load() File "/opt/conda/envs/adapt/lib/python3.10/site-packages/torch/serialization.py", line 1415, in find_class return super().find_class(mod_name, name) AttributeError: Can't get attribute 'SiLUActivation' on <module 'transformers.activations' from '/opt/conda/envs/adapt/lib/python3.10/site-packages/transformers/activations.py'>` I would be happy to add the SiLUActivation class back to activations.py and submit it here. Please let me know if I can proceed. ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Any model which has a SiLU activation function and is loaded through "torch.load()" will face this issue. ### Expected behavior After reverting the changes, torch should be able to identify the SiLU activation class.
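A hedged workaround sketch for the unpickling failure above: re-exposing a `SiLUActivation` symbol on `transformers.activations` before calling `torch.load` lets pickle's `find_class` lookup succeed on newer versions where the class was removed. Aliasing it to `torch.nn.SiLU` assumes the pickled model only needs a SiLU-like module; the checkpoint path is hypothetical.

```python
import torch
import transformers.activations as activations

# Assumption: the removed class behaved like torch.nn.SiLU, so an alias is enough
# for pickle to resolve the attribute on the activations module.
if not hasattr(activations, "SiLUActivation"):
    activations.SiLUActivation = torch.nn.SiLU

model_orig = torch.load("pruned_llama_checkpoint.pt")  # hypothetical path
```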
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28177/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28177/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28176
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28176/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28176/comments
https://api.github.com/repos/huggingface/transformers/issues/28176/events
https://github.com/huggingface/transformers/issues/28176
2,051,950,925
I_kwDOCUB6oc56TklN
28,176
Swinv2Config isn't working with depth estimator
{ "login": "hackkhai", "id": 51231270, "node_id": "MDQ6VXNlcjUxMjMxMjcw", "avatar_url": "https://avatars.githubusercontent.com/u/51231270?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hackkhai", "html_url": "https://github.com/hackkhai", "followers_url": "https://api.github.com/users/hackkhai/followers", "following_url": "https://api.github.com/users/hackkhai/following{/other_user}", "gists_url": "https://api.github.com/users/hackkhai/gists{/gist_id}", "starred_url": "https://api.github.com/users/hackkhai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hackkhai/subscriptions", "organizations_url": "https://api.github.com/users/hackkhai/orgs", "repos_url": "https://api.github.com/users/hackkhai/repos", "events_url": "https://api.github.com/users/hackkhai/events{/privacy}", "received_events_url": "https://api.github.com/users/hackkhai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts would you be able to merge #27742? Will resolve this PR ;) \r\n\r\nAlso make sure to use Transformers from source as the PR won't be included until the next release: \r\n`pip install --upgrade git+https://github.com/huggingface/transformers.git`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,706
1,706
NONE
null
### System Info ValueError: Unrecognized configuration class <class 'transformers.models.swinv2.configuration_swinv2.Swinv2Config'> for this kind of AutoModel: AutoBackbone. Model type should be one of BeitConfig, BitConfig, ConvNextConfig, ConvNextV2Config, DinatConfig, Dinov2Config, FocalNetConfig, MaskFormerSwinConfig, NatConfig, ResNetConfig, SwinConfig, TimmBackboneConfig, VitDetConfig. ### Who can help? @amyeroberts @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import pipeline pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384") result = pipe("http://images.cocodataset.org/val2017/000000039769.jpg") result["depth"] ``` ### Expected behavior ValueError: Unrecognized configuration class <class 'transformers.models.swinv2.configuration_swinv2.Swinv2Config'> for this kind of AutoModel: AutoBackbone. Model type should be one of BeitConfig, BitConfig, ConvNextConfig, ConvNextV2Config, DinatConfig, Dinov2Config, FocalNetConfig, MaskFormerSwinConfig, NatConfig, ResNetConfig, SwinConfig, TimmBackboneConfig, VitDetConfig.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28176/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28176/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28175
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28175/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28175/comments
https://api.github.com/repos/huggingface/transformers/issues/28175/events
https://github.com/huggingface/transformers/issues/28175
2,051,940,970
I_kwDOCUB6oc56TiJq
28,175
ValueError: LlavaForConditionalGeneration does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
{ "login": "1106280506Hx", "id": 103016865, "node_id": "U_kgDOBiPpoQ", "avatar_url": "https://avatars.githubusercontent.com/u/103016865?v=4", "gravatar_id": "", "url": "https://api.github.com/users/1106280506Hx", "html_url": "https://github.com/1106280506Hx", "followers_url": "https://api.github.com/users/1106280506Hx/followers", "following_url": "https://api.github.com/users/1106280506Hx/following{/other_user}", "gists_url": "https://api.github.com/users/1106280506Hx/gists{/gist_id}", "starred_url": "https://api.github.com/users/1106280506Hx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/1106280506Hx/subscriptions", "organizations_url": "https://api.github.com/users/1106280506Hx/orgs", "repos_url": "https://api.github.com/users/1106280506Hx/repos", "events_url": "https://api.github.com/users/1106280506Hx/events{/privacy}", "received_events_url": "https://api.github.com/users/1106280506Hx/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" }, { "id": 6349658421, "node_id": "LA_kwDOCUB6oc8AAAABengZNQ", "url": "https://api.github.com/repos/huggingface/transformers/labels/SDPA", "name": "SDPA", "color": "195411", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @younesbelkada for reference ", "can you tell me how to sovle it?", "Hi @1106280506Hx, \r\n\r\nAs the error suggests, SDPA isn't supported in Llava on the version of transformers being run. However, just looking it up, #28107 was merged in a few days ago. You should be able to resolve this by running the dev version of transformers - installing from source: \r\n`pip install git+https://github.com/huggingface/transformers`", "Your answer is meaningful, thanks for your quick reply @amyeroberts" ]
1,703
1,703
null
NONE
null
processor = AutoProcessor.from_pretrained("/gemini/data-2/data/llava") model = AutoModelForPreTraining.from_pretrained("/gemini/data-2/data/llava",load_in_4bit=True,bnb_4bit_compute_dtype=torch.float16,low_cpu_mem_usage=True,attn_implementation="sdpa").to("cuda")
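Two hedged ways around the error above: install a transformers version that includes SDPA support for Llava (PR #28107, only available from source at the time of the issue), or keep the release version and fall back to the default eager attention. The public checkpoint id below is used as an example in place of the local path from the issue.

```python
import torch
from transformers import AutoProcessor, AutoModelForPreTraining

model_id = "llava-hf/llava-1.5-7b-hf"  # example checkpoint; the issue used a local path
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForPreTraining.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
    attn_implementation="eager",  # avoids the unsupported "sdpa" path on older versions
)
```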
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28175/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28175/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28174
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28174/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28174/comments
https://api.github.com/repos/huggingface/transformers/issues/28174/events
https://github.com/huggingface/transformers/issues/28174
2,051,602,981
I_kwDOCUB6oc56SPol
28,174
Problems when converting fairseq model to hf format
{ "login": "upskyy", "id": 54731898, "node_id": "MDQ6VXNlcjU0NzMxODk4", "avatar_url": "https://avatars.githubusercontent.com/u/54731898?v=4", "gravatar_id": "", "url": "https://api.github.com/users/upskyy", "html_url": "https://github.com/upskyy", "followers_url": "https://api.github.com/users/upskyy/followers", "following_url": "https://api.github.com/users/upskyy/following{/other_user}", "gists_url": "https://api.github.com/users/upskyy/gists{/gist_id}", "starred_url": "https://api.github.com/users/upskyy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/upskyy/subscriptions", "organizations_url": "https://api.github.com/users/upskyy/orgs", "repos_url": "https://api.github.com/users/upskyy/repos", "events_url": "https://api.github.com/users/upskyy/events{/privacy}", "received_events_url": "https://api.github.com/users/upskyy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ylacombe as well for reference ", "Hey @upskyy, thanks for opening this issue, this is very clear and in line with #28165 which converts a model from [seamless communication](https://github.com/facebookresearch/seamless_communication) and fairseq2.\r\n\r\nWe are supposed to have [integration tests](https://github.com/huggingface/transformers/blob/814619f54f677df79a337396794325f13f96251f/tests/models/wav2vec2_conformer/test_modeling_wav2vec2_conformer.py#L866) making sure that the two implementations have the same results, but they may very well be outdated or specific to certain wav2vec model.\r\n\r\nRegarding your issues, could you provide the model that you are testing and a script that shows how to replicate the fact that results are different ?\r\n\r\nRegarding issue 1, we'd have to make sure that the case in which `self.projection = None` actually happens with the wav2vec2 checkpoints proposed. If that never happens, there's no need to add some unnecessary complexity! \r\n\r\nRegarding issue 2, #28165 adds `skip_encoder_layer_norm`, a parameter to simply skip this layer norm. However, the name `layer_norm_first` implies that it might be computed somewhere else. In my case, `skip_encoder_layer_norm` is enough but it might not generalize to your checkpoints.\r\n\r\nThanks again!", "@ylacombe Thanks for your reply.\r\n\r\nSo should I just wait for #28165 PR to merge? \r\nIn the actual fairseq learning process, the projection is used only when the last dimension of convolution subsampling and the dimension of the conformer encoder block are different.\r\nFor example, if both are 512 dimension, the projection weight is not in the fairseq checkpoint.\r\nSo, no error occurs when converted to huggingface format, but when I inference the huggingface model, random weight projection is used. Then the result will be ruined.\r\n\r\nThanks : )", "Hey @upskyy #28165 won't solve your issue 1 for sure, and might solve 2 as well.\r\nCould you open a PR with your proposed solution ? And also give me a pointer to a checkpoint in which there are no projection weight ?\r\nMany thanks!\r\n\r\n", "@ylacombe \r\n\r\nI posted a PR, please check it.\r\nThanks : )", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.37.0.dev0 - Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35 - Python version: 3.10.8 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.3.2 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @sanchit-gandhi ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Thanks for releasing this awesome repo. ## Issue 1 I am converting the fairseq checkpoint to huggingface format (wav2vec2_conformer). Converting is no problem, but the results are different. I did some debugging and found something different from the fairseq implementation. In fairseq, if the convolution subsampling dimension and encoder dimension are the same, `nn.Linear` is not used, but hf is used unconditionally, so there is a problem of using random weights. ### fairseq https://github.com/facebookresearch/fairseq/blob/main/fairseq/models/wav2vec/wav2vec2.py#L324-L328 ```python self.post_extract_proj = ( nn.Linear(self.embed, cfg.encoder_embed_dim) if self.embed != cfg.encoder_embed_dim and not cfg.quantize_input else None ) ``` ### huggingface https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L536 ```python class Wav2Vec2ConformerFeatureProjection(nn.Module): def __init__(self, config): super().__init__() self.layer_norm = nn.LayerNorm(config.conv_dim[-1], eps=config.layer_norm_eps) self.projection = nn.Linear(config.conv_dim[-1], config.hidden_size) # <-- HERE self.dropout = nn.Dropout(config.feat_proj_dropout) def forward(self, hidden_states): # non-projected hidden states are needed for quantization norm_hidden_states = self.layer_norm(hidden_states) hidden_states = self.projection(norm_hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states, norm_hidden_states ``` I think this is right. ```python class Wav2Vec2ConformerFeatureProjection(nn.Module): def __init__(self, config): super().__init__() self.layer_norm = nn.LayerNorm(config.conv_dim[-1], eps=config.layer_norm_eps) if config.conv_dim[-1] != config.hidden_size: self.projection = nn.Linear(config.conv_dim[-1], config.hidden_size) self.dropout = nn.Dropout(config.feat_proj_dropout) ``` ## Issue 2 Also, fairseq performs layer norm before entering the conformer encoder, but huggingface is supposed to perform layer norm after the conformer encoder without any options. Can this be handled as an option? I think the results change because of this. 
### fairseq https://github.com/facebookresearch/fairseq/blob/main/fairseq/models/wav2vec/wav2vec2.py#L1230-L1231 ```python def extract_features(self, x, padding_mask=None, tgt_layer=None): if padding_mask is not None: x = index_put(x, padding_mask, 0) # B x T x C -> T x B x C x = x.transpose(0, 1) # B X T X C here position_emb = None if self.pos_enc_type == "rel_pos": position_emb = self.embed_positions(x) if not self.layer_norm_first: # <-- HERE x = self.layer_norm(x) x = F.dropout(x, p=self.dropout, training=self.training) layer_results = [] r = None for i, layer in enumerate(self.layers): dropout_probability = np.random.random() if not self.training or (dropout_probability > self.layerdrop): x, z = layer( x, self_attn_padding_mask=padding_mask, need_weights=False, position_emb=position_emb, ) if tgt_layer is not None: layer_results.append((x, z)) if i == tgt_layer: r = x break ``` ### huggingface https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py#L929 ### Expected behavior How do you think about this problem? If modifications are possible, I can proceed with the PR by including a converting script including the fairseq extension.
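A rough sketch of how the layer-norm placement from issue 2 could be made configurable. This is illustrative only and not the actual `Wav2Vec2Conformer` code; the `layer_norm_first` flag name is borrowed from fairseq and is an assumption about how such an option might be exposed.

```python
import torch.nn as nn


class ConformerEncoderSketch(nn.Module):
    """Illustrative only: placement of the shared layer norm relative to the blocks."""

    def __init__(self, hidden_size, layers, layer_norm_first=False):
        super().__init__()
        self.layer_norm_first = layer_norm_first
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.layers = nn.ModuleList(layers)

    def forward(self, hidden_states):
        if not self.layer_norm_first:
            # fairseq-style: normalize before running the conformer blocks
            hidden_states = self.layer_norm(hidden_states)
        for layer in self.layers:
            hidden_states = layer(hidden_states)
        if self.layer_norm_first:
            # current transformers behaviour: normalize after the blocks
            hidden_states = self.layer_norm(hidden_states)
        return hidden_states
```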
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28174/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28174/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28173
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28173/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28173/comments
https://api.github.com/repos/huggingface/transformers/issues/28173/events
https://github.com/huggingface/transformers/issues/28173
2,051,315,575
I_kwDOCUB6oc56RJd3
28,173
VitsTokenizer decode without special tokens produces odd results
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I understand users won't typically use the decode method, but it's still quite odd behaviour 😅 ", "👿 I'm using my joker for this on, it's on the audio team 🤭 ", "Here's the culprit:\r\nhttps://github.com/huggingface/transformers/blob/1d7773594754457ed4a79cf6d98bcaabea5bff51/src/transformers/models/vits/tokenization_vits.py#L225-L228", "Thanks for reporting @xenova! The VITS MMS tokenizer is very much non-standard in the sense that it uses the same token id as the special pad token as well as a standard vocab token => this is something we probably should have changed when we integrated the model! Is there a case where you'd need to encode and subsequently decode the input text? Happy to take a look at a fix if so! Otherwise, we can add a disclaimer to the docs saying that `tokenizer.decode(tokenizer.encode(input_text)) != input_text ` due to this behaviour", "Copying a related discussion from Slack:\r\n\r\nVB\r\n [23 days ago](https://huggingface.slack.com/archives/C02G13FEMDH/p1703055571159369?thread_ts=1703042642.253789&cid=C02G13FEMDH)\r\nYes, it is the pad token: https://huggingface.co./facebook/mms-tts-eng/blob/main/tokenizer_config.json#L8\r\nAFAIK, k represents the character boundary (to seperate one character from the other). Since the text is split and converted to phonemic representation later, this helps segregate it. each character has a seperate phonemic representation.\r\neach phonemic representation creates it's respective mel-spec based on the duration prediction.\r\nk is later on ignored in during attention as the mask is set to 1 i.e. do not attend to it for it.\r\nhttps://huggingface.co./facebook/mms-tts-eng/blob/c71de0fe7204c83f1c10820a7d696d0b450048ba/vocab.json#L23\r\nI think the reason for choosing k in case of MMS was to make it be able to generalise across the multitudes of language they trained on.\r\n\r\n\r\nJoshua Lochner\r\n [23 days ago](https://huggingface.slack.com/archives/C02G13FEMDH/p1703074142921659?thread_ts=1703042642.253789&cid=C02G13FEMDH)\r\nOh wow that’s so interesting :joy: Thanks!\r\n\r\n\r\nJoshua Lochner\r\n [23 days ago](https://huggingface.slack.com/archives/C02G13FEMDH/p1703074186957259?thread_ts=1703042642.253789&cid=C02G13FEMDH)\r\nI also checked some of the other languages, and indeed they also set a pad token to be one of those characters the vocabulary.\r\n\r\n\r\nSanchit\r\n [23 days ago](https://huggingface.slack.com/archives/C02G13FEMDH/p1703075297154269?thread_ts=1703042642.253789&cid=C02G13FEMDH)\r\nYes it’s a bit messy, but they essentially just picked one of the existing characters in the MMS tokenizer to act as the padding token (in this case k), and then mask it during the forward pass with the attention mask\r\nIn the MMS integration to transformers I used the same behaviour they did in the original repo (use k as the padding token). Probably what would be more elegant is:\r\n1. Adding a new padding token to the MMS vocab (e.g. <pad>)\r\n2. Re-sizing the embedding layer to account for this new padding token with randomly initialised weights\r\n3. 
Mask out the embeddings that come from the padding token in the forward pass (as is done already)\r\nThis would break the existing MMS weights though, so unfortunately can’t be done now, but would have been a better option in retrospect (edited) \r\n\r\n\r\nSanchit\r\n [23 days ago](https://huggingface.slack.com/archives/C02G13FEMDH/p1703075337870219?thread_ts=1703042642.253789&cid=C02G13FEMDH)\r\n(The solution I explained above is how they do it in VITS with a dedicated <pad> token)\r\n\r\n\r\nAmy\r\n [23 days ago](https://huggingface.slack.com/archives/C02G13FEMDH/p1703079737456419?thread_ts=1703042642.253789&cid=C02G13FEMDH)\r\nThere’s a related discussion here: https://github.com/huggingface/transformers/pull/24085#discussion_r1241240099", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.20 - JaxLib version: 0.4.20 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker (tokenizers) @Vaibhavs10 @sanchit-gandhi (audio team) ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```py >>> from transformers import AutoTokenizer >>> tokenizer=AutoTokenizer.from_pretrained('facebook/mms-tts-eng') >>> tokenizer.encode('hello world') [0, 6, 0, 7, 0, 21, 0, 21, 0, 22, 0, 19, 0, 9, 0, 22, 0, 25, 0, 21, 0, 5, 0] >>> tokenizer.decode(tokenizer.encode('hello world'), skip_special_tokens=False) 'hello world' >>> tokenizer.decode(tokenizer.encode('hello world'), skip_special_tokens=True) 'el ol' >>> tokenizer.decode(tokenizer.encode('abcdefghijklmnopqrstuvwxyz'), skip_special_tokens=True) 'bdfhjmoqsuwy' ``` From the last example, it looks like it's taking the even-positioned elements. ### Expected behavior `[0, 6, 0, 7, 0, 21, 0, 21, 0, 22, 0, 19, 0, 9, 0, 22, 0, 25, 0, 21, 0, 5, 0]`, for which the tokenized version is: ``` ['k', 'h', 'k', 'e', 'k', 'l', 'k', 'l', 'k', 'o', 'k', ' ', 'k', 'w', 'k', 'o', 'k', 'r', 'k', 'l', 'k', 'd', 'k'] ``` should be decoded as 'hello world', or something more informative than 'el ol'.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28173/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28173/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28172
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28172/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28172/comments
https://api.github.com/repos/huggingface/transformers/issues/28172/events
https://github.com/huggingface/transformers/pull/28172
2,051,285,377
PR_kwDOCUB6oc5igryw
28,172
[docs] Sort es/toctree.yml like en/toctree.yml
{ "login": "aaronjimv", "id": 67152883, "node_id": "MDQ6VXNlcjY3MTUyODgz", "avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aaronjimv", "html_url": "https://github.com/aaronjimv", "followers_url": "https://api.github.com/users/aaronjimv/followers", "following_url": "https://api.github.com/users/aaronjimv/following{/other_user}", "gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}", "starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions", "organizations_url": "https://api.github.com/users/aaronjimv/orgs", "repos_url": "https://api.github.com/users/aaronjimv/repos", "events_url": "https://api.github.com/users/aaronjimv/events{/privacy}", "received_events_url": "https://api.github.com/users/aaronjimv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I was doing a translation of `performance.md`, whose section is not in the Spanish documentation, and I got the impression that the file `es/_toctree.yml` is not aligned with `en/_toctree.yml`. I am open to any comments.", "Hi the `es/_toctree` should be aligned with the English version, so please feel free to realign it! You can also create the `Performance and scalability` section for the Spanish docs if you'd like 😄 ", "Hi @stevhliu, thanks for replying, I add the `Performance and Scalability` section to `es/_toctree.yml` and I would like to check this new alignment.", "Continues in #28262" ]
1,703
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> I think that the file `es/_toctree.yml` is not aligned with `en/_toctree.yml`. I would like to ask if it was this way intentionally, and if not the case, I would appreciate checking this change. I kept this part the same because the `Performance and Scalability` section is not in the Spanish documentation: ``` - isExpanded: false sections: - local: debugging title: Debugging title: Rendimiento y escalabilidad ``` Thanks for your time. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @osanseviero @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28172/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28172/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28172", "html_url": "https://github.com/huggingface/transformers/pull/28172", "diff_url": "https://github.com/huggingface/transformers/pull/28172.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28172.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28171
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28171/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28171/comments
https://api.github.com/repos/huggingface/transformers/issues/28171/events
https://github.com/huggingface/transformers/pull/28171
2,051,234,051
PR_kwDOCUB6oc5iggiH
28,171
Bug fix: `training_args.py` missing import with `accelerate==0.20.1`
{ "login": "michaelfeil", "id": 63565275, "node_id": "MDQ6VXNlcjYzNTY1Mjc1", "avatar_url": "https://avatars.githubusercontent.com/u/63565275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelfeil", "html_url": "https://github.com/michaelfeil", "followers_url": "https://api.github.com/users/michaelfeil/followers", "following_url": "https://api.github.com/users/michaelfeil/following{/other_user}", "gists_url": "https://api.github.com/users/michaelfeil/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelfeil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelfeil/subscriptions", "organizations_url": "https://api.github.com/users/michaelfeil/orgs", "repos_url": "https://api.github.com/users/michaelfeil/repos", "events_url": "https://api.github.com/users/michaelfeil/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelfeil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @michaelfeil, thanks for opening this PR and writing the detailed description! \r\n\r\nIndeed, there is a bug where `is_accelerate_available()` doesn't correctly evaluate to `True` if `accelerate==0.20.1`. \r\n\r\nHowever, the [required version for transformers](https://github.com/huggingface/transformers/blob/f9a98c476c3a92beaba3b20e51f1ff49417231a6/setup.py#L99) is >=0.21.0, where the bug no longer occurs. As such, we won't be adding in fixes to the codebase to resolve an issue for versions that aren't supported. \r\n\r\nOut of interest, what's the reason for using the lower version of accelerate? ", "- Out of interest, what's the reason for using the lower version of accelerate?\r\nQuite simple, the dependencies are pinned, and this was an issue while bumping the `transformer` package. Pinning the package is the usual scenario in production.\r\nAlso adding\r\n```toml\r\n `transformers = {\"4.36.2\", extras=[\"torch\"]}\r\n accelerate=\"^0.20.1\"\r\n ```\r\n does not lead to accelerate beeing bumped in poetry, seems like this \r\nIts a bit unclear, why the decision is not to fix this. \r\n\r\n### Two sides:\r\n### 1.\r\nIf minimum version is expected to be `0.21.` the displayed message going forward should \r\n\r\n```python\r\nraise ImportError(\r\n \"Using the `Trainer` with `PyTorch` requires `accelerate>=0.21.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`\"\r\n )\r\n```\r\n### 2. \r\nIf there would be a guarantee from the dependency resolver the line of code checking for the min version would be redundant.\r\n" ]
1,703
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) I have accelerate and transformers using `poetry` pinned to ``` accelerate="^0.20.1" transformers="4.36.2" ``` This leads to the weird error, that ```python is_accelerate_available(min_version="0.20.1") # True is_accelerate_available() # False, leading no import at top of file ``` ```python Step #1 - "build-image": #99 59.93 @cached_property Step #1 - "build-image": #99 59.93 def _setup_devices(self) -> "torch.device": Step #1 - "build-image": #99 59.93 requires_backends(self, ["torch"]) Step #1 - "build-image": #99 59.93 logger.info("PyTorch: setting up devices") Step #1 - "build-image": #99 59.93 if not is_sagemaker_mp_enabled(): Step #1 - "build-image": #99 59.93 if not is_accelerate_available(min_version="0.20.1"): Step #1 - "build-image": #99 59.93 raise ImportError( Step #1 - "build-image": #99 59.93 "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`" Step #1 - "build-image": #99 59.93 ) Step #1 - "build-image": #99 59.93 > AcceleratorState._reset_state(reset_partial_state=True) Step #1 - "build-image": #99 59.93 E NameError: name 'AcceleratorState' is not defined ``` ## Before submitting - [NA ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ NA ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [NA ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
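A small hedged reproduction of the mismatch described above: with `accelerate==0.20.1` installed, the explicit check passes while the bare check (which uses the library's higher default minimum, 0.21.0 at the time) fails, so the guarded top-of-file import never runs and `AcceleratorState` ends up undefined in `_setup_devices`. The import path assumes `is_accelerate_available` is exported from `transformers.utils`.

```python
from transformers.utils import is_accelerate_available

# With accelerate==0.20.1 installed:
print(is_accelerate_available(min_version="0.20.1"))  # True
print(is_accelerate_available())  # False: the default minimum version is higher
```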
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28171/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28171/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28171", "html_url": "https://github.com/huggingface/transformers/pull/28171", "diff_url": "https://github.com/huggingface/transformers/pull/28171.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28171.patch", "merged_at": 1703245295000 }
https://api.github.com/repos/huggingface/transformers/issues/28170
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28170/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28170/comments
https://api.github.com/repos/huggingface/transformers/issues/28170/events
https://github.com/huggingface/transformers/issues/28170
2,051,205,921
I_kwDOCUB6oc56Qush
28,170
Error while importing the transformers
{ "login": "iamshreeram", "id": 7752805, "node_id": "MDQ6VXNlcjc3NTI4MDU=", "avatar_url": "https://avatars.githubusercontent.com/u/7752805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iamshreeram", "html_url": "https://github.com/iamshreeram", "followers_url": "https://api.github.com/users/iamshreeram/followers", "following_url": "https://api.github.com/users/iamshreeram/following{/other_user}", "gists_url": "https://api.github.com/users/iamshreeram/gists{/gist_id}", "starred_url": "https://api.github.com/users/iamshreeram/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iamshreeram/subscriptions", "organizations_url": "https://api.github.com/users/iamshreeram/orgs", "repos_url": "https://api.github.com/users/iamshreeram/repos", "events_url": "https://api.github.com/users/iamshreeram/events{/privacy}", "received_events_url": "https://api.github.com/users/iamshreeram/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@sanchit-gandhi @ylacombe same issue as reported in #28162 \r\n\r\n", "@amyeroberts , Thank you for investigating. Everything is functioning properly after the installation of the stable version of transformers (`4.36.2`)." ]
1,703
1,703
1,703
NONE
null
### System Info **Transformers Version** : 4.36.0.dev0 **Platform** : Mac OS **Python** : 3.9.18 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce : 1. Run the following program to translate to the target language: ``` from transformers import pipeline pipeline_generator = pipeline( "automatic-speech-recognition", "facebook/seamless-m4t-v2-large", ) transcript = pipeline_generator("https://www2.cs.uic.edu/~i101/SoundFiles/preamble10.wav", generate_kwargs={"tgt_lang": "spa", },) ``` 2. This throws the following exception: ``` Traceback (most recent call last): File "/Users/home/ram/project/python/text-translation-speech/ttrans.py", line 12, in <module> transcript = pipeline_generator("https://www2.cs.uic.edu/~i101/SoundFiles/preamble10.wav", generate_kwargs={"tgt_lang": "ta", },) File "/Applications/anaconda3/envs/dub/lib/python3.9/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 357, in __call__ return super().__call__(inputs, **kwargs) File "/Applications/anaconda3/envs/dub/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1134, in __call__ self.get_iterator( File "/Applications/anaconda3/envs/dub/lib/python3.9/site-packages/transformers/pipelines/base.py", line 1182, in get_iterator feature_extractor = self.feature_extractor if self.feature_extractor is not None else self.image_processor AttributeError: 'AutomaticSpeechRecognitionPipeline' object has no attribute 'image_processor' ``` 3. Despite not importing `image_processor`, the exception is thrown. ### Expected behavior Produce output in the target language as seen in this [thread](https://github.com/facebookresearch/seamless_communication/issues/237#issuecomment-1864534911), running with the expected results.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28170/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28170/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28169
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28169/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28169/comments
https://api.github.com/repos/huggingface/transformers/issues/28169/events
https://github.com/huggingface/transformers/pull/28169
2,051,138,169
PR_kwDOCUB6oc5igLsF
28,169
disable test_retain_grad_hidden_states_attentions on SeamlessM4TModelWithTextInputTest
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Related to #28035 and #28060 which should have already skipped it" ]
1,703
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? Disables `tests/models/seamless_m4t/test_modeling_seamless_m4t.py::SeamlessM4TModelWithTextInputTest::test_retain_grad_hidden_states_attentions` as discussed in https://github.com/huggingface/transformers/pull/28144#issuecomment-1864990888 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28169/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28169/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28169", "html_url": "https://github.com/huggingface/transformers/pull/28169", "diff_url": "https://github.com/huggingface/transformers/pull/28169.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28169.patch", "merged_at": 1703144384000 }
https://api.github.com/repos/huggingface/transformers/issues/28168
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28168/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28168/comments
https://api.github.com/repos/huggingface/transformers/issues/28168/events
https://github.com/huggingface/transformers/pull/28168
2,051,109,413
PR_kwDOCUB6oc5igFeH
28,168
Fix `input_embeds` docstring in encoder-decoder architectures
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,703
1,703
MEMBER
null
# What does this PR do? Big diff, small change: - adds a missing paragraph between the docstring of `past_key_values` and `input_embeds` - adds missing `input_embeds` docstring in a few TF models It chips away some of the diff in #28065
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28168/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28168", "html_url": "https://github.com/huggingface/transformers/pull/28168", "diff_url": "https://github.com/huggingface/transformers/pull/28168.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28168.patch", "merged_at": 1703156515000 }
https://api.github.com/repos/huggingface/transformers/issues/28167
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28167/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28167/comments
https://api.github.com/repos/huggingface/transformers/issues/28167/events
https://github.com/huggingface/transformers/issues/28167
2,050,971,650
I_kwDOCUB6oc56P1gC
28,167
Misleading doc on BLIP `outputs.loss`: doesn't return true NLL but NLL *with label smoothing*
{ "login": "DianeBouchacourt", "id": 13796686, "node_id": "MDQ6VXNlcjEzNzk2Njg2", "avatar_url": "https://avatars.githubusercontent.com/u/13796686?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DianeBouchacourt", "html_url": "https://github.com/DianeBouchacourt", "followers_url": "https://api.github.com/users/DianeBouchacourt/followers", "following_url": "https://api.github.com/users/DianeBouchacourt/following{/other_user}", "gists_url": "https://api.github.com/users/DianeBouchacourt/gists{/gist_id}", "starred_url": "https://api.github.com/users/DianeBouchacourt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DianeBouchacourt/subscriptions", "organizations_url": "https://api.github.com/users/DianeBouchacourt/orgs", "repos_url": "https://api.github.com/users/DianeBouchacourt/repos", "events_url": "https://api.github.com/users/DianeBouchacourt/events{/privacy}", "received_events_url": "https://api.github.com/users/DianeBouchacourt/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "cc @younesbelkada I would remove the label smoothing from our code actually. We usually just add the regular cross-entropy loss to our models, label smoothing leads to subtle bugs (probably only useful during pre-training on noisy data)", "Hi @DianeBouchacourt thanks for pointing this out ! Would you be happy to address the issue in a Pull request and introduce your changes?", "@NielsRogge \r\n\r\n> `I would remove the label smoothing from our code actually. We usually just add the regular cross-entropy loss to our models, label smoothing leads to subtle bugs`\r\n\r\nThen the change to the [line](https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/models/blip/modeling_blip_text.py#L892) making label_smoothing=0.0 suffice? \r\n\r\nCan I work on it ?", "Hi @nileshkokane01 \r\nYes that should be it, would you like to open a PR for that?", "Sure. " ]
1,703
1,708
1,708
NONE
null
### System Info Transformers 4.35.2 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Not really a bug, more a misleading feature: Computing the negative log-likelihood (NLL) is useful for understanding the probability of a caption for a given image, using BLIP generative text decoder. However, if one uses BLIP for ConditionalGeneration as explained here https://huggingface.co./docs/transformers/model_doc/blip#transformers.BlipForConditionalGeneration adapted for computation of the NLL, one would naturally do: ``` from PIL import Image import requests from transformers import AutoProcessor, BlipForConditionalGeneration processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) text = "A image of two cats" inputs = processor(images=image, text=text, return_tensors="pt") outputs = model(**inputs, labels=inputs['input_ids']) nll=outputs.loss.item() ``` However, the loss is computed **with label smoothing** as in training, because it is hard-coded in BLIPLLM head (just like in the original Salesforce code) https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/models/blip/modeling_blip_text.py#L892 Therefore it isn't the true NLL that the call to .loss returns, and I believe the documentation should be clearer on this. I propose to: * change the doc to make this clearer * or add a parameter label_smoothing when initializing the BLIP model * or add a function to compute NLL explicitely, separated from .loss, e.g.: ``` def return_nll(scores, target): loss_fct = CrossEntropyLoss(reduction='mean', label_smoothing=0.0) # we're setting it to 0 loss = loss_fct(scores, target) return loss def compute_generative_probability(model, processor, image, text): inputs = processor(images=image, text=text, return_tensors="pt", padding=True) outputs = model(**inputs, labels=inputs['input_ids']) shifted_predictions_scores = outputs.logits[0 , :-1, :].contiguous() shifted_labels = inputs["input_ids"][0, 1:].contiguous().to(shifted_predictions_scores.device) nll = return_nll(shifted_predictions_scores, target=shifted_labels) return nll ``` Writing this so that others researchers are aware :) Thanks a lot for the amazing library ### Expected behavior See code above
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28167/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28166
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28166/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28166/comments
https://api.github.com/repos/huggingface/transformers/issues/28166/events
https://github.com/huggingface/transformers/pull/28166
2,050,712,916
PR_kwDOCUB6oc5ietpb
28,166
Generate: fix speculative decoding
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts all tests were passing, the quality of the outputs was meh in some cases. \r\n\r\nGoing to add a slow test with a parameterization corresponding to a bad output in the previous commit, so we don't regress :)" ]
1,703
1,703
1,703
MEMBER
null
# What does this PR do? This PR: - Fixes speculative decoding quality: - Incorrect indexing operation - The assistant model should sample when the larger model also samples (more generally, it should take the original model's `generation_config`) - Custom logits processors should also be passed to the assistant model - Changes docs to put an emphasis on "speculative decoding" as opposed to "assisted generation", as the former is more popular ________ `RUN_SLOW=1 py.test tests/ -k speculative` was run locally to confirm that slow assisted generation whisper tests were passing.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28166/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28166", "html_url": "https://github.com/huggingface/transformers/pull/28166", "diff_url": "https://github.com/huggingface/transformers/pull/28166.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28166.patch", "merged_at": 1703098535000 }
https://api.github.com/repos/huggingface/transformers/issues/28165
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28165/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28165/comments
https://api.github.com/repos/huggingface/transformers/issues/28165/events
https://github.com/huggingface/transformers/pull/28165
2,050,664,966
PR_kwDOCUB6oc5iei9i
28,165
Add new meta w2v2-conformer BERT-like model
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28165). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for the many good suggestions @sanchit-gandhi, could you take one last look? \r\n\r\nAlso cc @amyeroberts, could you review this PR? Normally I'd wait for @sanchit-gandhi's approval before asking your opinion, but as the end-of-year vacations are approaching, I'd rather speed up the process! ", "Hi @amyeroberts, happy new year!\r\n\r\nI've addressed your feedback! \r\nTwo last remarks:\r\n1. [this commit](https://github.com/huggingface/transformers/pull/28165/commits/4848a26c1d7c8e03ba225767b26a541d3054dead) is there to facilitate training\r\n2. this new checkpoint doesn't use `Wav2Vec2Processor` but a fbank feature extractor, here `SeamlessM4TFeatureExtractor`. Consequence -> the input name is `input_features` instead of `input_values`. Do you think we should leave this as it is ?\r\n\r\nLet me know if you need other changes !", "I'm opening a discussion after an offline discussion with @sanchit-gandhi on the relevance of adding a totally new modeling code instead of modifying the one of Wav2Vec2Conformer.\r\n\r\nMain reason behind it, this new model uses a totally different feature extractor with two main consequences:\r\n1. the input name is `input_features` instead of `input_values`\r\n2. it totally gets rid of the first block of Wav2Vec2Conformer.\r\n\r\nThere also differences in architecture that could justify this choice.\r\n\r\nI have no personal position on the subject as both options (leaving this PR as it is or adding a new model) have pros and cons. The only thing is that if we choose to keep this PR, I'd have to add `input_features` as a possible input of the different Wav2Vec2Conformer models.\r\n\r\nWDYT @amyeroberts ?", "+1 on @ArthurZucker's comment - @ylacombe Yes, a separate modeling file I think makes sense. ", "Hey @ArthurZucker and @amyeroberts, thanks for your feedback. I've updated the codebase to add a new model instead of modifying the existing one! \r\nCould you review this PR again ?\r\n\r\nNote that one failing test (torch&jax) is unrelated to my change (probably flaky) and the other one relies on a trained model checkpoint, and there are none available yet -> I'm training one at the moment and will use it if it is ready at the moment of merging. Otherwise, I'll remove/update the related docstrings ", "Hey @sanchit-gandhi, thanks for the review here ! Will iterate soon!\r\n\r\nJust a QQ regarding the pre-training task, you're right, it differs with the fairseq2 implementation so I'll remove the corresponding class for now. However, it raises the question of how to convert the weights, which was for now done with the pretraining class. My opinion is to do it with whatever other main class (e.g the CTC one) and being clear that the CTC head is not trained in the docs and the model card. WDYT ?\r\n\r\n", "Could you not convert it with `Wav2Vec2BERTModel`? It's the same as the pre-training class but without the codebook weights?\r\n\r\nAlso one point of discussion: should the model name be camel-cased as is now the convention in Transformers (`Wav2Vec2BertModel`)? This is a bit of an edge case since BERT was original named as `BERTModel` (upper-case), so it's a question of whether we want consistency with BERT, or want to adhere to the camel-case conventions.", "> Could you not convert it with Wav2Vec2BERTModel? 
It's the same as the pre-training class but without the codebook weights?\r\n\r\nEvery downstream model (`Wav2Vec2BERTForPreTraining`, `Wav2Vec2BERTForCTC`, `Wav2Vec2BERTForSequenceClassification`, etc.) actually has `Wav2Vec2BertModel` as a parameter (`self.wav2vec2_bert`). \r\n\r\nSo I could convert the weights to `Wav2Vec2BERTModel`, but actually using `CLASS.from_pretrained(REPO_ID)` won't work, right ?\r\n\r\n\r\n> Also one point of discussion: should the model name be camel-cased as is now the convention in Transformers (Wav2Vec2BertModel)? This is a bit of an edge case since BERT was original named as BERTModel (upper-case), so it's a question of whether we want consistency with BERT, or want to adhere to the camel-case conventions.\r\n\r\nI'll let @ArthurZucker or @amyeroberts decide on this one !", "This should be handled by `base_model_prefix`:\r\n```python\r\nfrom transformers import Wav2Vec2BERTConfig, Wav2Vec2BERTModel, Wav2Vec2BERTForCTC\r\nimport torch\r\n\r\nconfig = Wav2Vec2BERTConfig()\r\nrandom_model = Wav2Vec2BERTModel(config)\r\n\r\nrandom_model.save_pretrained(\"./output_dir\")\r\nloaded_model = Wav2Vec2BERTForCTC.from_pretrained(\"./output_dir\", vocab_size=32)\r\n\r\n# check a layer-norm layer is loaded correctly\r\nprint(torch.allclose(loaded_model.wav2vec2_bert.feature_projection.layer_norm.weight, random_model.feature_projection.layer_norm.weight))\r\n```\r\n**Output:**\r\n```\r\nSome weights of Wav2Vec2BERTForCTC were not initialized from the model checkpoint at ./output_dir \r\nand are newly initialized: ['lm_head.bias', 'lm_head.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions \r\nand inference.\r\nTrue\r\n```\r\n=> we see that all the encoder weights are loaded correctly, and just the LM head newly initialised", "Hey @amyeroberts, thanks for the review ! I've addressed all your comments, let me know if that works with you ! " ]
1,703
1,705
1,705
COLLABORATOR
null
# What does this PR do? Meta just open-sourced a Wav2Vec2-BERT conformer [model](https://huggingface.co./facebook/w2v-bert-2.0). This one is particularly interesting because it's under a MIT license and was pretrained on 101 input languages! It requires adaption to the current w2v2-conformer code, which this PR does. cc @sanchit-gandhi, @Vaibhavs10 and @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28165/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28165/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28165", "html_url": "https://github.com/huggingface/transformers/pull/28165", "diff_url": "https://github.com/huggingface/transformers/pull/28165.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28165.patch", "merged_at": 1705585054000 }
https://api.github.com/repos/huggingface/transformers/issues/28164
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28164/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28164/comments
https://api.github.com/repos/huggingface/transformers/issues/28164/events
https://github.com/huggingface/transformers/issues/28164
2,050,512,860
I_kwDOCUB6oc56OFfc
28,164
Inconsistencies between `.save_pretrained` and `from_pretrained` for slow and fast tokenizers (RoFormer)
{ "login": "xenova", "id": 26504141, "node_id": "MDQ6VXNlcjI2NTA0MTQx", "avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xenova", "html_url": "https://github.com/xenova", "followers_url": "https://api.github.com/users/xenova/followers", "following_url": "https://api.github.com/users/xenova/following{/other_user}", "gists_url": "https://api.github.com/users/xenova/gists{/gist_id}", "starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xenova/subscriptions", "organizations_url": "https://api.github.com/users/xenova/orgs", "repos_url": "https://api.github.com/users/xenova/repos", "events_url": "https://api.github.com/users/xenova/events{/privacy}", "received_events_url": "https://api.github.com/users/xenova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Another weird thing is that behaviour changes after saving the tokenizer:\r\n```py\r\n>>> from transformers import AutoTokenizer\r\n>>> tokenizer = AutoTokenizer.from_pretrained('alchemab/antiberta2')\r\n>>> text='test $1 R2 #3 €4 £5 ¥6 ₣7 ₹8 ₱9 test'\r\n>>> tokenizer.encode(text)\r\n[1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2]\r\n>>> len(tokenizer.encode(text))\r\n21\r\n>>> tokenizer.save_pretrained('saved')\r\n('saved/tokenizer_config.json', 'saved/special_tokens_map.json', 'saved/vocab.txt', 'saved/added_tokens.json', 'saved/tokenizer.json')\r\n>>> tokenizer.encode(text)\r\n[1, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2]\r\n```\r\n", "Alright, there is a discrepency in the `tokenization_roformer_fast`, the `__setstate__` function and `__getstate__` function will both set a different tokenizer. Save-pretrained will set another one.\r\nThat is not consistent but I have no Idea why this is done this way. ", "https://github.com/huggingface/transformers/blob/0cdcd7a2b319689d75ae4807cfb7b228aa322f83/src/transformers/models/roformer/tokenization_roformer_fast.py#L137-L145", "and: \r\n\r\nhttps://github.com/huggingface/transformers/blob/0cdcd7a2b319689d75ae4807cfb7b228aa322f83/src/transformers/models/roformer/tokenization_roformer_fast.py#L204-L214 ", "https://github.com/huggingface/tokenizers/issues/581 explains why 😉 But it was not well done in this specific case IMO", "1. First case we have a conversion from slow\r\n2. We don't and we use `fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)` after that we never set the correct custom pre_tokenizer", "Using `from_slow = True` pretty much fixes the issue, this should also fix it\r\n```python \r\n # Make sure we correctly set the custom PreTokenizer\r\n vocab = self.backend_tokenizer.get_vocab()\r\n self.backend_tokenizer.pre_tokenizer = PreTokenizer.custom(JiebaPreTokenizer(vocab))\r\n```\r\n\r\n\r\n", "Thanks so much @ArthurZucker for the investigation and fix!" ]
1,703
1,705
1,705
CONTRIBUTOR
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.20 - JaxLib version: 0.4.20 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction My original problem occurred when loading and saving with AutoTokenizer: ```py from transformers import AutoTokenizer # Load original tokenizer original = AutoTokenizer.from_pretrained('alchemab/antiberta2') print(original("生活的真谛是")) # {'input_ids': [1, 4, 4, 4, 4, 4, 4, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} # Save tokenizer original.save_pretrained('saved') # Load this new tokenizer new = AutoTokenizer.from_pretrained('saved') print(new("生活的真谛是")) # {'input_ids': [1, 4, 2], 'token_type_ids': [0, 0, 0], 'attention_mask': [1, 1, 1]} ``` Digging a bit deeper, it seems to be an issue with the slow to fast converter, with certain default values being overridden (presumably `handle_chinese_chars` in `BertNormalizer`). I know RoFormer isn't a very popular model these days, but since it uses a near-identical tokenization strategy to Bert models, this issue may have implications elsewhere. ### Expected behavior Should produce the same (correct) results if it were loaded with the original (slow) tokenizer ```py from transformers import RoFormerTokenizer # Load original tokenizer original = RoFormerTokenizer.from_pretrained('alchemab/antiberta2') print(original("生活的真谛是")) # {'input_ids': [1, 4, 4, 4, 4, 4, 4, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} # Save tokenizer original.save_pretrained('saved') # Load this new tokenizer new = RoFormerTokenizer.from_pretrained('saved') print(new("生活的真谛是")) # {'input_ids': [1, 4, 4, 4, 4, 4, 4, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]} ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28164/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28163
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28163/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28163/comments
https://api.github.com/repos/huggingface/transformers/issues/28163/events
https://github.com/huggingface/transformers/pull/28163
2,050,454,676
PR_kwDOCUB6oc5id0l1
28,163
[Phi] Extend implementation to use GQA/MQA.
{ "login": "gugarosa", "id": 4120639, "node_id": "MDQ6VXNlcjQxMjA2Mzk=", "avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gugarosa", "html_url": "https://github.com/gugarosa", "followers_url": "https://api.github.com/users/gugarosa/followers", "following_url": "https://api.github.com/users/gugarosa/following{/other_user}", "gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}", "starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions", "organizations_url": "https://api.github.com/users/gugarosa/orgs", "repos_url": "https://api.github.com/users/gugarosa/repos", "events_url": "https://api.github.com/users/gugarosa/events{/privacy}", "received_events_url": "https://api.github.com/users/gugarosa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "TODO: address results differences when evaluating models.", "> The idea for us is that if the model phi2 fits in phi (modeling_phi) with these changes, then a single conversion is great but we don't mind updating it.\r\n\r\nI believe yes `phi2` fits into `phi1/1.5` modeling file... I guess this is to add more functionalities (such as GQA, MQA) to both ph1 and phi2 models.\r\n\r\nRegarding the [issues](https://github.com/huggingface/transformers/issues/28049#issuecomment-1863021572) with the current conversion script, should I create a separate PR to address that or should it be included in this PR? @ArthurZucker ", "Thanks @ArthurZucker!\r\n\r\nIt is exactly what @susnato mentioned, either phi-1, phi-1.5 and phi-2 will fit in the updated files. We are just extending its scope to leverage any potential use of GQA/MQA in newer models.\r\n\r\nRegarding the conversion, we can do either way that works for you. For example:\r\n\r\n1. Merge susnato changes -> Merge PR and re-upload the new weights / conversion script.\r\n2. Merge PR / conversion script directly and convert phi-msft (current code in repos) to new structure.", "Same PR works. \r\nUnderstood if the changes are just GQA works great ! (f we have to add new layers or do some if else in the forward, we can't keep it in the same model as per the transformers philosophy but that should be alright 🤗 ) ", "Yeah we can:\r\n1. merge the code needed in transformers (using the correct revision for examples) \r\n2. convert the weights and open PRs to the hub at the same time. With both the checkpoints and updated modeling code\r\n🤗 ", "> TODO: address results differences when evaluating models.\r\n\r\nHi @gugarosa, could you please give more context in what you mean by that? Do you mean the difference between the logits between the model on the library and the model on the Hub?", "Hi @ArthurZucker, I have completed the conversion script and added the `phi2` test in this [PR](https://github.com/huggingface/transformers/pull/28211) ", "Thanks for the review @ArthurZucker! I just got back from vacation and will proceed with the updates.\r\n\r\n@susnato Exactly! I still need to re-evaluate some of our tasks due to the logits. There were slight differences on the results of the tasks, but I will double check everything now that I am back.", "Just ironed out the differences, it was basically a silly mistake on my end 😄.\r\n\r\nAny advice on fixing the `check_repository_consistency` test error?", "Yes! Copied from missused somewhere let me review! ", "Please disregard the \"force-push\" messages, I was squashing some of the commits.", "No worries and feel free to ping me again whenever! 🤗 ", "@ArthurZucker could you please do a final review? It should be all good now.\r\n\r\nI have updated the `microsoft/phi-1` repository to reflect this implementation as well. As soon as everything is merged, we will rollout the changes to 1.5 and 2.\r\n\r\nThanks for all the attention!", "If you can adresse the final comments + the merge we'll be able to merge! 🥳 ", "> If you can adresse the final comments + the merge we'll be able to merge! 🥳\r\n\r\nThanks @ArthurZucker! I have addressed the latest comments and merge. Everything should be good now! 🎉 ", "@ArthurZucker \r\n\r\nIs there any more time for a small addition? 
There is a \"bug\" that we can solve with upcasting queries and keys to fp32 in `PhiAttention`.\r\n\r\nCode to reproduce (`main` branch):\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"susnato/phi-2\", torch_dtype=\"auto\").to(\"cuda\") # or .half()\r\ntokenizer = AutoTokenizer.from_pretrained(\"susnato/phi-2\")\r\ninput_ids = torch.tensor(tokenizer.encode(\"Hello, my dog is cute\", add_special_tokens=True)).unsqueeze(0)\r\n\r\nlogits = model(input_ids.to(\"cuda\")).logits\r\nprint(logits)\r\n```\r\n\r\nLogits are always NaN due to attention overflow when using the `PhiAttention` class with FP16. We have observed this before, that's why we had this https://huggingface.co./microsoft/phi-2/blob/main/modeling_phi.py#L359. Funny thing is that it does not happen on Phi-1 and Phi-1.5 😆 \r\n\r\nI am using @susnato model as reference, but it is extensible to any Phi-2 model. ", "That's interesting, thanks for pointing this out @gugarosa!\r\n\r\nNow I am curious why does this not happen for phi-1 and phi-1.5. 😅 \r\n\r\nWere there any difference in the dtype between the phi-1 and phi-2 during training?", "> That's interesting, thanks for pointing this out @gugarosa!\r\n> \r\n> Now I am curious why does this not happen for phi-1 and phi-1.5. 😅\r\n> \r\n> Were there any difference in the dtype between the phi-1 and phi-2 during training?\r\n\r\nUnfortunately no, it was the same training code. The only difference is that phi-2 has been trained for longer and it is a bigger model. We were baffled when we saw it and so far we don't have a clue.\r\n\r\nIt could have been this very specific checkpoint and/or a combination of factors that caused the overflow, but still, no clear idea 😭 ", "Sure let me have a look. \r\nWe also had similar issue with Llama I believed, so having a look", "> Sure let me have a look. We also had similar issue with Llama I believed, so having a look\r\n\r\nThanks! The fix is on the last commit, only two more lines (and one changed) were able to solve it.", "> 2 nits and we can merge! Thanks a lot for bearing with me @gugarosa\r\n\r\nNo problems! Just added the changes and thanks for all the help with this PR.", "@ArthurZucker could we please merge this?\r\n\r\nI am anxious to fix Phi-2 😄 ", "Was waiting for the CIs and fell asleep! Sorry @gugarosa !\r\nThanks @susnato and @gugarosa for this awesome join contribution! 🤗 \r\n🚀 \r\n", "@gugarosa @susnato @ArthurZucker \r\nguys it would be great to communicate on your main Phi-2 README that the checkpoints have changed.\r\nYou had 200K+ downloads and many people finetuned on the old fused QKV ckpt.", "Hi @vince62s, I think there has been a misunderstanding here, the checkpoints are not changed, they are the same but in different order.\r\n\r\nThe checkpoints which are present at microsoft/phi-2 are not in the right order to be used with the library model so I reordered the weights and pushed them to susnato/phi-2.\r\nBut they are same values. If you compare the logits of those two different models( in fp32 ) you will find that they are same within a acceptable range (1e-3). \r\n\r\nPlease let me know if this explanation helps or not. 
🤗 ", "used to be:\r\n```\r\n \"weight_map\": {\r\n \"lm_head.linear.bias\": \"model-00002-of-00002.safetensors\",\r\n \"lm_head.linear.weight\": \"model-00002-of-00002.safetensors\",\r\n \"lm_head.ln.bias\": \"model-00002-of-00002.safetensors\",\r\n \"lm_head.ln.weight\": \"model-00002-of-00002.safetensors\",\r\n \"transformer.embd.wte.weight\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.ln.bias\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.ln.weight\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mixer.Wqkv.bias\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mixer.Wqkv.weight\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mixer.out_proj.bias\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mixer.out_proj.weight\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mlp.fc1.bias\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mlp.fc1.weight\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mlp.fc2.bias\": \"model-00001-of-00002.safetensors\",\r\n \"transformer.h.0.mlp.fc2.weight\": \"model-00001-of-00002.safetensors\",\r\n```\r\nnow is\r\n\r\n```\r\n\"lm_head.bias\": \"model-00002-of-00002.safetensors\",\r\n--\r\n  | \"lm_head.weight\": \"model-00002-of-00002.safetensors\",\r\n  | \"model.embed_tokens.weight\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.final_layernorm.bias\": \"model-00002-of-00002.safetensors\",\r\n  | \"model.final_layernorm.weight\": \"model-00002-of-00002.safetensors\",\r\n  | \"model.layers.0.input_layernorm.bias\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.input_layernorm.weight\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.mlp.fc1.bias\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.mlp.fc1.weight\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.mlp.fc2.bias\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.mlp.fc2.weight\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.dense.bias\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.dense.weight\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.k_proj.bias\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.k_proj.weight\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.q_proj.bias\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.q_proj.weight\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.v_proj.bias\": \"model-00001-of-00002.safetensors\",\r\n  | \"model.layers.0.self_attn.v_proj.weight\": \"model-00001-of-00002.safetensors\",\r\n```\r\n\r\nAm I wrong?\r\n\r\nI am not saying the weights have changed but the tensors name have changed and now QKV are unmerged (I presume)", "and btw you can see people complaining https://huggingface.co./microsoft/phi-2/discussions/76", "Yes @vince62s, there seems to be some recent updates, sorry I wasn't aware of this commit. ", "People who finetuned can either convert the checkpoint or use truste_remote_code with a specific revision before the changes, which should be the best solution IMO", "I'm fine with this, just thinking that would be great to post some info on the README page about the change." ]
1,703
1,705
1,704
CONTRIBUTOR
null
# What does this PR do? As we discussed on the repositories and the e-mail thread, these are minor changes that we would like to integrate into HF. One thing that we need to discuss is how to leverage the current batch of models (using `transformers>=4.36.0`), since the proposed change will make a shape difference in `qkv` weights and biases. We could change the conversion script from Phi (transformers=4.36.0) to reflect this new implementation, while we compromise in converting our current repositories weights to this new format and use the HF-based code in the next deploys. With that, we could avoid having two conversions, i.e., phi-msft -> phi and phi (4.36.0) -> new_phi.   Please let me know your thoughts! ## Changes - Adds support for using GQA/MQA with Phi-based models. This is a combined implementation between the old `PhiAttention` and `LlamaAttention`. - Fixes documentation official Phi-based models paths. - Adds support for dynamically pad the vocab_size to a multiple of 64 (better use of Ampere/Hopper-based GPUs). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? @susnato @LysandreJik @ArthurZucker @philschmid @osanseviero Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28163/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28163/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28163", "html_url": "https://github.com/huggingface/transformers/pull/28163", "diff_url": "https://github.com/huggingface/transformers/pull/28163.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28163.patch", "merged_at": 1704985082000 }
https://api.github.com/repos/huggingface/transformers/issues/28162
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28162/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28162/comments
https://api.github.com/repos/huggingface/transformers/issues/28162/events
https://github.com/huggingface/transformers/issues/28162
2,050,409,938
I_kwDOCUB6oc56NsXS
28,162
save_pretrained no longer works for AutomaticSpeechRecognitionPipeline
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Hubert-Bonisseur, thanks for raising this issue! \r\n\r\nIndeed, this seems to be the result of two PRs - #25884 - which enables saving image processors in pipelines and #25438 which adapted ASR's pipeline's init. \r\n\r\ncc @sanchit-gandhi who knows more about the decisions in #25438 and the reason for removing the `super().__init__` call", "Overriding the init was required to ensure we could correctly set ASR-specific decoding params when we instantiate the pipeline: https://github.com/huggingface/transformers/pull/25438#issuecomment-1676918766\r\n\r\nHere's a PR to fix the `save_pretrained` functionality of the ASR pipeline: https://github.com/huggingface/transformers/pull/28486" ]
1,703
1,705
1,705
NONE
null
### System Info transformers-4.37.0.dev0 ### Who can help? @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import pipeline asr_pipeline = pipeline('automatic-speech-recognition', model="openai/whisper-tiny") asr_pipeline.save_pretrained("pipeline_save") ``` Gives this error: ``` Traceback (most recent call last): File "/Users/bruno/testing.py", line 6, in <module> asr_pipeline.save_pretrained("pipeline_save") File "/Users/bruno/venv/lib/python3.11/site-packages/transformers/pipelines/base.py", line 883, in save_pretrained if self.image_processor is not None: ^^^^^^^^^^^^^^^^^^^^ AttributeError: 'AutomaticSpeechRecognitionPipeline' object has no attribute 'image_processor' ``` ### Expected behavior The pipeline should be saved. save_pretrained a pipeline is used by BentoML, as a result versions of transformers newer than 4.32.1 cannot be used to serve a AutomaticSpeechRecognitionPipeline with bentoML. https://github.com/bentoml/BentoML/issues/4339
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28162/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28162/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28161
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28161/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28161/comments
https://api.github.com/repos/huggingface/transformers/issues/28161/events
https://github.com/huggingface/transformers/pull/28161
2,050,387,978
PR_kwDOCUB6oc5idl31
28,161
Update FA2 exception msg to point to hub discussions
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,703
1,703
1,703
COLLABORATOR
null
# What does this PR do? Small update the FA2 warning pointing users towards discussions on the hub. Addresses cases like in #28100 when support is requested for model not in the transformers repo.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28161/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28161", "html_url": "https://github.com/huggingface/transformers/pull/28161", "diff_url": "https://github.com/huggingface/transformers/pull/28161.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28161.patch", "merged_at": 1703091137000 }
https://api.github.com/repos/huggingface/transformers/issues/28160
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28160/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28160/comments
https://api.github.com/repos/huggingface/transformers/issues/28160/events
https://github.com/huggingface/transformers/issues/28160
2,050,382,630
I_kwDOCUB6oc56Nlsm
28,160
[Flash Attention 2] Performance improvement
{ "login": "li-plus", "id": 39846316, "node_id": "MDQ6VXNlcjM5ODQ2MzE2", "avatar_url": "https://avatars.githubusercontent.com/u/39846316?v=4", "gravatar_id": "", "url": "https://api.github.com/users/li-plus", "html_url": "https://github.com/li-plus", "followers_url": "https://api.github.com/users/li-plus/followers", "following_url": "https://api.github.com/users/li-plus/following{/other_user}", "gists_url": "https://api.github.com/users/li-plus/gists{/gist_id}", "starred_url": "https://api.github.com/users/li-plus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li-plus/subscriptions", "organizations_url": "https://api.github.com/users/li-plus/orgs", "repos_url": "https://api.github.com/users/li-plus/repos", "events_url": "https://api.github.com/users/li-plus/events{/privacy}", "received_events_url": "https://api.github.com/users/li-plus/received_events", "type": "User", "site_admin": false }
[ { "id": 3081136536, "node_id": "MDU6TGFiZWwzMDgxMTM2NTM2", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue", "name": "Good Difficult Issue", "color": "684CC7", "default": false, "description": "" }, { "id": 6202871275, "node_id": "LA_kwDOCUB6oc8AAAABcbhN6w", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flash%20Attention", "name": "Flash Attention", "color": "201FF8", "default": false, "description": "" } ]
open
false
null
[]
[ "cc @ArthurZucker @younesbelkada ", "Hi @li-plus \r\nThanks a lot for the suggestion ! \r\n@fxmarty tried the approach of pad / unpadd at the beginning of the model forward call here: https://github.com/younesbelkada/transformers/pull/5 but the implementation ended up bloating the modeling code, therefore it has been decided to not move forward for that approach maybe we can revisit this cc @ArthurZucker ", "I think this could be revisited given that we have more flexibility with the cache and the attention layer as well, not bandwidth on my side but ready to review a PR so will label it as a good difficult issue! " ]
1,703
1,703
null
CONTRIBUTOR
null
### Feature request The current Flash Attention 2 integration is sub-optimal in performance because it requires unpadding and padding the activations on **each** layer. For example, in the Llama implementation: https://github.com/huggingface/transformers/blob/769a9542de4e8b23f0a551738e18760621f463e8/src/transformers/models/llama/modeling_llama.py#L591-L612 These small unpad/pad kernels keep the GPU waiting for the CPU, as shown by the visible gaps between kernels in the CUDA stream. ![image](https://github.com/huggingface/transformers/assets/39846316/f8bfa837-3ddd-447f-a6dd-de4883db63e6) I would suggest unpadding the activations at the very beginning (right after the word embeddings) and padding them back at the end (perhaps before the lm_head); the gaps should then disappear. ### Motivation To eliminate the performance overhead of Flash Attention 2. ### Your contribution I can write the code when I'm not busy. Maybe not now.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28160/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28160/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28159
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28159/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28159/comments
https://api.github.com/repos/huggingface/transformers/issues/28159/events
https://github.com/huggingface/transformers/issues/28159
2,050,081,233
I_kwDOCUB6oc56McHR
28,159
Training a model `Falcon-7b-instruct` and facing an error
{ "login": "rajveer43", "id": 64583161, "node_id": "MDQ6VXNlcjY0NTgzMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajveer43", "html_url": "https://github.com/rajveer43", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "repos_url": "https://api.github.com/users/rajveer43/repos", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The snippet is not full ", "> The snippet is not full\r\n\r\nI have updated the snippet ", "I can't guess the model you are using, is it quantized did you use from pretrained, which checkpoint, trust remote code? ? etc", "```\r\nMODEL_NAME = \"tiiuae/falcon-7b-instruct\"\r\n\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n MODEL_NAME,\r\n device_map=\"auto\",\r\n trust_remote_code=True,\r\n quantization_config=bnb_config\r\n)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\nmodel.gradient_checkpointing_enable()\r\nmodel = prepare_model_for_kbit_training(model)\r\n\r\nconfig = LoraConfig(\r\n r=16,\r\n lora_alpha=32,\r\n target_modules=[\"query_key_value\"],\r\n lora_dropout=0.05,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\",\r\n \r\n)\r\n# model.add_adapter(config)\r\nmodel = get_peft_model(model, config)\r\nprint_trainable_parameters(model)\r\n```\r\n\r\n\r\n\r\n\r\nI am using it from pretranied for QNA task , for medical data\r\n\r\n\r\n", "I think I found the issue! check and see if your transformers version is up to date because this problem occurs when the Trainer displaces the model to the CPU to save it, when the saving ends trainer can't train with TPU anymore because the whole model is in the CPU now! (but this problem is fixed) Check you're transformers version and use this to update it:\r\n!pip install -U git+https://github.com/huggingface/transformers.git", "Thanks a bunch! @Aliync, It worked! " ]
1,703
1,703
1,703
CONTRIBUTOR
null
### System Info Kaggle notebook, google colab ``` training_args = transformers.TrainingArguments( per_device_train_batch_size=4, gradient_accumulation_steps=4, #4 num_train_epochs=6, learning_rate=2e-4, fp16=True, save_total_limit=3, logging_steps=500, output_dir="experiments", optim="paged_adamw_8bit", lr_scheduler_type="cosine", warmup_ratio=0.05, push_to_hub=True, ) trainer = transformers.Trainer( model=model, train_dataset=train_data_transformed, args=training_args, data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False) ) model.config.use_cache = False trainer.train() ``` error: ``` RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd. ``` ``` You're using a PreTrainedTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) Cell In[27], line 1 ----> 1 trainer.train() File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1528, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1525 try: 1526 # Disable progress bars when uploading models during checkpoints to avoid polluting stdout 1527 hf_hub_utils.disable_progress_bars() -> 1528 return inner_training_loop( 1529 args=args, 1530 resume_from_checkpoint=resume_from_checkpoint, 1531 trial=trial, 1532 ignore_keys_for_eval=ignore_keys_for_eval, 1533 ) 1534 finally: 1535 hf_hub_utils.enable_progress_bars() File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1854, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1851 self.control = self.callback_handler.on_step_begin(args, self.state, self.control) 1853 with self.accelerator.accumulate(model): -> 1854 tr_loss_step = self.training_step(model, inputs) 1856 if ( 1857 args.logging_nan_inf_filter 1858 and not is_torch_tpu_available() 1859 and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step)) 1860 ): 1861 # if loss is nan or inf simply add the average of previous logged losses 1862 tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged) File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2732, in Trainer.training_step(self, model, inputs) 2730 scaled_loss.backward() 2731 else: -> 2732 self.accelerator.backward(loss) 2734 return loss.detach() / self.args.gradient_accumulation_steps File /opt/conda/lib/python3.10/site-packages/accelerate/accelerator.py:1903, in Accelerator.backward(self, loss, **kwargs) 1901 return 1902 elif self.scaler is not None: -> 1903 self.scaler.scale(loss).backward(**kwargs) 1904 else: 1905 loss.backward(**kwargs) File /opt/conda/lib/python3.10/site-packages/torch/_tensor.py:487, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs) 477 if has_torch_function_unary(self): 478 return handle_torch_function( 479 Tensor.backward, 480 (self,), (...) 
485 inputs=inputs, 486 ) --> 487 torch.autograd.backward( 488 self, gradient, retain_graph, create_graph, inputs=inputs 489 ) File /opt/conda/lib/python3.10/site-packages/torch/autograd/__init__.py:200, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 195 retain_graph = create_graph 197 # The reason we repeat same the comment below is that 198 # some Python versions print out the first line of a multi-line function 199 # calls in the traceback and some print out the last line --> 200 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 201 tensors, grad_tensors_, retain_graph, create_graph, inputs, 202 allow_unreachable=True, accumulate_grad=True) File /opt/conda/lib/python3.10/site-packages/torch/autograd/function.py:274, in BackwardCFunction.apply(self, *args) 270 raise RuntimeError("Implementing both 'backward' and 'vjp' for a custom " 271 "Function is not allowed. You should only implement one " 272 "of them.") 273 user_fn = vjp_fn if vjp_fn is not Function.vjp else backward_fn --> 274 return user_fn(self, *args) File /opt/conda/lib/python3.10/site-packages/torch/utils/checkpoint.py:141, in CheckpointFunction.backward(ctx, *args) 137 detached_inputs = detach_variable(tuple(inputs)) 138 with torch.enable_grad(), \ 139 torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs), \ 140 torch.cpu.amp.autocast(**ctx.cpu_autocast_kwargs): --> 141 outputs = ctx.run_function(*detached_inputs) 143 if isinstance(outputs, torch.Tensor): 144 outputs = (outputs,) File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:785, in FalconModel.forward.<locals>.create_custom_forward.<locals>.custom_forward(*inputs) 783 def custom_forward(*inputs): 784 # None for past_key_value --> 785 return module(*inputs, use_cache=use_cache, output_attentions=output_attentions) File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs) 163 output = module._old_forward(*args, **kwargs) 164 else: --> 165 output = module._old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:453, in FalconDecoderLayer.forward(self, hidden_states, alibi, attention_mask, layer_past, head_mask, use_cache, output_attentions) 450 attention_layernorm_out = self.input_layernorm(hidden_states) 452 # Self attention. 
--> 453 attn_outputs = self.self_attention( 454 attention_layernorm_out, 455 layer_past=layer_past, 456 attention_mask=attention_mask, 457 alibi=alibi, 458 head_mask=head_mask, 459 use_cache=use_cache, 460 output_attentions=output_attentions, 461 ) 463 attention_output = attn_outputs[0] 465 if not self.config.new_decoder_architecture: File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File /opt/conda/lib/python3.10/site-packages/accelerate/hooks.py:165, in add_hook_to_module.<locals>.new_forward(module, *args, **kwargs) 163 output = module._old_forward(*args, **kwargs) 164 else: --> 165 output = module._old_forward(*args, **kwargs) 166 return module._hf_hook.post_forward(module, output) File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:307, in FalconAttention.forward(self, hidden_states, alibi, attention_mask, layer_past, head_mask, use_cache, output_attentions) 304 value_layer = value_layer.transpose(1, 2).reshape(batch_size * num_kv_heads, query_length, self.head_dim) 306 past_kv_length = 0 if layer_past is None else layer_past[0].shape[1] --> 307 query_layer, key_layer = self.maybe_rotary(query_layer, key_layer, past_kv_length) 309 if layer_past is not None: 310 past_key, past_value = layer_past File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs) 1496 # If we don't have any hooks, we want to skip the rest of the logic in 1497 # this function, and just call forward. 1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1499 or _global_backward_pre_hooks or _global_backward_hooks 1500 or _global_forward_hooks or _global_forward_pre_hooks): -> 1501 return forward_call(*args, **kwargs) 1502 # Do not call functions when jit is used 1503 full_backward_hooks, non_full_backward_hooks = [], [] File ~/.cache/huggingface/modules/transformers_modules/tiiuae/falcon-7b-instruct/cf4b3c42ce2fdfe24f753f0f0d179202fea59c99/modeling_falcon.py:108, in FalconRotaryEmbedding.forward(self, query, key, past_key_values_length) 106 batch, seq_len, head_dim = query.shape 107 cos, sin = self.cos_sin(seq_len, past_key_values_length, query.device, query.dtype) --> 108 return (query * cos) + (rotate_half(query) * sin), (key * cos) + (rotate_half(key) * sin) RuntimeError: Inference tensors cannot be saved for backward. To work around you can make a clone to get a normal tensor and use it in autograd. ``` how can I solve this? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction - ### Expected behavior - @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28159/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28158
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28158/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28158/comments
https://api.github.com/repos/huggingface/transformers/issues/28158/events
https://github.com/huggingface/transformers/issues/28158
2,049,997,699
I_kwDOCUB6oc56MHuD
28,158
During the training process, what happens if tf32 and bf16 are enabled at the same time?
{ "login": "Bonytu", "id": 47250017, "node_id": "MDQ6VXNlcjQ3MjUwMDE3", "avatar_url": "https://avatars.githubusercontent.com/u/47250017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bonytu", "html_url": "https://github.com/Bonytu", "followers_url": "https://api.github.com/users/Bonytu/followers", "following_url": "https://api.github.com/users/Bonytu/following{/other_user}", "gists_url": "https://api.github.com/users/Bonytu/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bonytu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bonytu/subscriptions", "organizations_url": "https://api.github.com/users/Bonytu/orgs", "repos_url": "https://api.github.com/users/Bonytu/repos", "events_url": "https://api.github.com/users/Bonytu/events{/privacy}", "received_events_url": "https://api.github.com/users/Bonytu/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "Hi,could you please help me to solve this issue? @muellerzr @pacman100 \r\nThanks!" ]
1,703
1,706
null
NONE
null
### System Info transformers 4.34.1 ### Who can help? @muellerzr @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Use the Trainer and set --tf32 True and --bf16 True ### Expected behavior Hi, when I was training llama2-13b, I set both --tf32 True and --bf16 True at the same time. I'm confused because the trainer worked normally when both of these parameters were enabled. During this process, which parts used tf32 and which parts used bf16? How exactly does it work when both are turned on at the same time? Also, I found that many tutorials set these two params at the same time, e.g. [this tutorial](https://www.philschmid.de/instruction-tune-llama-2).
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28158/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28158/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28157
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28157/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28157/comments
https://api.github.com/repos/huggingface/transformers/issues/28157/events
https://github.com/huggingface/transformers/issues/28157
2,049,793,039
I_kwDOCUB6oc56LVwP
28,157
AutoTokenizer is giving the wrong result
{ "login": "ONE-THING-9", "id": 123763769, "node_id": "U_kgDOB2B8OQ", "avatar_url": "https://avatars.githubusercontent.com/u/123763769?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ONE-THING-9", "html_url": "https://github.com/ONE-THING-9", "followers_url": "https://api.github.com/users/ONE-THING-9/followers", "following_url": "https://api.github.com/users/ONE-THING-9/following{/other_user}", "gists_url": "https://api.github.com/users/ONE-THING-9/gists{/gist_id}", "starred_url": "https://api.github.com/users/ONE-THING-9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ONE-THING-9/subscriptions", "organizations_url": "https://api.github.com/users/ONE-THING-9/orgs", "repos_url": "https://api.github.com/users/ONE-THING-9/repos", "events_url": "https://api.github.com/users/ONE-THING-9/events{/privacy}", "received_events_url": "https://api.github.com/users/ONE-THING-9/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for opening an issue! This is not related to the actual classes but rather the way BPE works. It tries to merge individual characters based on the `merges` it has learned. But in this case `\"▁\", \"विधायकों\"` is not part of the merge pairs. While for example `\"▁\",\"विधायक\"` is. So: \r\n```python \r\ntokenizer.tokenize(\"विधायक\")\r\n['▁विधायक']\r\n```\r\n", "Hey @ArthurZucker I am still getting that split thing \r\n\r\ntokenizer.tokenize(\"विधायकों'\")\r\n['▁विधा', 'य', 'कों', \"'\"]\r\n\r\n\r\n![Screenshot 2023-12-20 at 8 30 08 PM](https://github.com/huggingface/transformers/assets/123763769/d9e234f6-3c1b-4e00-bf9f-418c983addc3)\r\n", "Did you read my answer? 🤗 ", "yeah, and sorry if I am missing a very silly point, \r\nbut how come LlamaTokenizer(which the author provided) works fine(means not splitting the words in vocab)? \r\n\r\nHowever, the author mentioned that they have trained the tokenizer using sentencePiece algorithms, why AutoTokenizer not using that algorithm or AutoTokenizer always use BPE?\r\n\r\nmodel link - https://huggingface.co./sarvamai/OpenHathi-7B-Hi-v0.1-Base\r\n\r\n@ArthurZucker ", "Okay I'll add a detail: \r\n```python \r\ntokenizer.tokenize(\"विधायक\")\r\n['▁विधायक']\r\ntokenizer.tokenize(\"विधायकों'\")\r\n['▁विधा', 'य', 'कों', \"'\"]\r\n```\r\nand \r\n```python\r\n\"विधायक\" == \"विधायकों'\"\r\n```\r\nwe are not using the same input. \r\nAs my answer mentions, the merges is what matters not the vocab! \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.20 - JaxLib version: 0.4.20 - Using GPU in script?: no - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada While using AutoTokenizer for "sarvamai/OpenHathi-7B-Hi-v0.1-Base", the tokenizer is giving the wrong output: it splits words that are in the vocab, e.g. ('▁विधायकों', 33821): tokenizer.tokenize("विधायकों") outputs ['▁', 'वि', 'धा', 'य', 'कों']. Observed this with many words: बिश्नोई, एबीवीपी, ... However, it is working fine with LlamaTokenizer https://huggingface.co./sarvamai/OpenHathi-7B-Hi-v0.1-Base <img width="852" alt="Screenshot 2023-12-16 at 8 42 30 PM" src="https://github.com/huggingface/transformers/assets/123763769/220734ad-8ae1-4323-a1ea-a29beb2b15a2"> ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Used the code given on the model info page ### Expected behavior AutoTokenizer gives the wrong output
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28157/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28156
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28156/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28156/comments
https://api.github.com/repos/huggingface/transformers/issues/28156/events
https://github.com/huggingface/transformers/issues/28156
2,049,744,008
I_kwDOCUB6oc56LJyI
28,156
Whisper v3 dependency issue
{ "login": "lionsheep0724", "id": 79906095, "node_id": "MDQ6VXNlcjc5OTA2MDk1", "avatar_url": "https://avatars.githubusercontent.com/u/79906095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lionsheep0724", "html_url": "https://github.com/lionsheep0724", "followers_url": "https://api.github.com/users/lionsheep0724/followers", "following_url": "https://api.github.com/users/lionsheep0724/following{/other_user}", "gists_url": "https://api.github.com/users/lionsheep0724/gists{/gist_id}", "starred_url": "https://api.github.com/users/lionsheep0724/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lionsheep0724/subscriptions", "organizations_url": "https://api.github.com/users/lionsheep0724/orgs", "repos_url": "https://api.github.com/users/lionsheep0724/repos", "events_url": "https://api.github.com/users/lionsheep0724/events{/privacy}", "received_events_url": "https://api.github.com/users/lionsheep0724/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @lionsheep0724, thanks for raising this issue! \r\n\r\nThe most recent version of transformers [is compatible with tokenizers==0.15](https://github.com/huggingface/transformers/blob/769a9542de4e8b23f0a551738e18760621f463e8/setup.py#L177). \r\n\r\nCould you try reinstalling transformers? \r\n\r\n```\r\npip uninstall transformers\r\npip install --upgrade git+https://github.com/huggingface/transformers.git\r\n```\r\n\r\nFor the error message, could you share the full traceback? \r\n\r\ncc @sanchit-gandhi ", "Hi @amyeroberts, sorry for late response, I was in year-end vacation.\r\nI created conda env with python 3.10, and followed your comment, as below.\r\n```\r\npip uninstall transformers\r\npip install --upgrade git+https://github.com/huggingface/transformers.git\r\n```\r\nBut same result as follows(full traceback):\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Kakaobank\\Documents\\stt-benchmark\\whisper_v3.py\", line 2, in <module>\r\n from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\r\n File \"C:\\Users\\Kakaobank\\Documents\\stt-benchmark\\transformers\\__init__.py\", line 26, in <module>\r\n from . import dependency_versions_check\r\n File \"C:\\Users\\Kakaobank\\Documents\\stt-benchmark\\transformers\\dependency_versions_check.py\", line 57, in <module>\r\n require_version_core(deps[pkg])\r\n File \"C:\\Users\\Kakaobank\\Documents\\stt-benchmark\\transformers\\utils\\versions.py\", line 117, in require_version_core\r\n return require_version(requirement, hint)\r\n File \"C:\\Users\\Kakaobank\\Documents\\stt-benchmark\\transformers\\utils\\versions.py\", line 111, in require_version\r\n _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)\r\n File \"C:\\Users\\Kakaobank\\Documents\\stt-benchmark\\transformers\\utils\\versions.py\", line 44, in _compare_versions\r\n raise ImportError(\r\nImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.15.0.\r\nTry: pip install transformers -U or pip install -e '.[dev]' if you're working with git main\r\n```\r\n\r\nAnd refer to my test code:\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline\r\n\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\nmodel_path = \"./models/whisper-large-v3\"\r\n\r\nmodel = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_path, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nmodel.to(device)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_path)\r\n\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n max_new_tokens=128,\r\n chunk_length_s=30,\r\n batch_size=16,\r\n return_timestamps=True,\r\n torch_dtype=torch_dtype,\r\n device=device,\r\n)\r\n```", "@lionsheep0724 Could you confirm the versions of transformers and tokenizers in your environment?\r\n\r\n```\r\npip list | grep tokenizers\r\n```\r\n\r\n```\r\npip list | grep transformers\r\n```\r\n\r\nAnd in the python environment: \r\n\r\n```\r\npython -c \"import tokenizers; import transformers; print(tokenizers.__version__); print(transformers.__version__)\"\r\n```", "@amyeroberts \r\n0.15.0 for tokenizers, 4.37.0.dev0 for transformers.", "Let me share my troubleshooting result.\r\nThe problem was windows. 
\r\nI installed transformers as you mentioned above in docker container (linux) and there was no dependency issue.\r\nBut I'm confusing why transformers 4.37.0.dev0 behaves diffrently in linux and windows, even though the printed version was same in both system.", "Another finding : ubuntu 18.04 version(pytorch/pytorch:1.13.1-cuda11.6-cudnn8-runtime image) also has same issue.\r\nI guess 4.37.0.dev0 works differently depending on the platform.", "Thanks for updating @lionsheep0724! \r\n\r\nAcross different platforms - when working and not working - do you see the same versions of `tokenizers` and `transformers` installed in the python environment? Are you using the same method to install the libraries e.g. pip? ", "Yes, I installed libraries using same method and the versions were same.\r\n", "@lionsheep0724 Hmmmm - I honestly have no idea what's happening here. I am to run without issue on my ubuntu machine and mac. \r\n\r\nMy best guess is that the version of transformers being run in the python environment isn't the same as the one being installed by pip. The version restrictions seen in the warning message were changed with #23909 and have been part of the library since v4.34. \r\n\r\nYou can check which version is being run using the python command I posted above. If you're running in an ipython environment, you'll need to make sure you're using the same libraries installed by pip. Running:\r\n```py\r\nimport x\r\nprint(x.__version__)\r\n```\r\nin the python environment should confirm if this is what's happening. ", "@amyeroberts \r\nAfter a lot of trials, the problem has been somehow solved. I just repeated methods the way I explained above.\r\nI'm not sure about the root cause, I just assume its caused by our security s/w.\r\nReally thank you for your reply. ", "@lionsheep0724 Thanks for the update! ", "i try to uninstall both of transform and token\r\n<img width=\"920\" alt=\"屏幕截图 2024-01-27 174155\" src=\"https://github.com/huggingface/transformers/assets/23346966/9709cd80-7ee9-400e-bc41-25b725a17364\">\r\n\r\nand then ,i used pip install transformers== 4.27.0,during this installation the token always be installed auto\r\n\r\nfinally, it worked! \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,706
null
NONE
null
### System Info - transformers version: transformers-4.37.0.dev0 (installed via `pip install --upgrade git+https://github.com/huggingface/transformers.git accelerate datasets[audio]`, as instructed [here](https://huggingface.co./openai/whisper-large-v3)) - Platform: Windows 10, WSL - Python version: 3.10 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline device = "cuda:0" if torch.cuda.is_available() else "cpu" torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32 model_path = f"./models/whisper-large-v3" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_path, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_path) pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, chunk_length_s=30, batch_size=16, return_timestamps=True, torch_dtype=torch_dtype, device=device, ) ``` ### Expected behavior - I'm trying to load the pretrained whisper-large-v3 model, but I suspect there is a dependency issue in transformers (transformers-4.37.0.dev0) - I got an error as follows. ```ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.15.0.``` - I guess transformers (4.37.0.dev0) and whisper-v3 depend on tokenizers below 0.15, but the one installed by the pip command on the official HF Whisper page is 0.15. - When I install a lower version of tokenizers, a ```ValueError: Non-consecutive added token ‘<|0.02|>’ found. Should have index 50365 but has index 50366 in saved vocabulary.``` error occurs. - I'm not sure which tokenizers version I need to install.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28156/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28156/timeline
null
null
null