| Column | Dtype | Range / Classes |
|---|---|---|
| url | stringlengths | 62 – 66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76 – 80 |
| comments_url | stringlengths | 71 – 75 |
| events_url | stringlengths | 69 – 73 |
| html_url | stringlengths | 50 – 56 |
| id | int64 | 377M – 2.15B |
| node_id | stringlengths | 18 – 32 |
| number | int64 | 1 – 29.2k |
| title | stringlengths | 1 – 487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k – 1.71k |
| updated_at | int64 | 1.54k – 1.71k |
| closed_at | int64 | 1.54k – 1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0 – 234k |
| reactions | dict | |
| timeline_url | stringlengths | 71 – 75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/29061
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29061/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29061/comments
https://api.github.com/repos/huggingface/transformers/issues/29061/events
https://github.com/huggingface/transformers/pull/29061
2,138,735,693
PR_kwDOCUB6oc5nGBVu
29,061
Fix trainer test wrt DeepSpeed + auto_find_bs
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29061). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @amyeroberts good for review now that I verified tests pass :)" ]
1,708
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? Follow-up to https://github.com/huggingface/transformers/pull/29057, changes the test to ensure it raises a not-implemented-error ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Will verify on our CI before merging (as local deepspeed won't install for me). Can confirm passes: https://github.com/huggingface/accelerate/actions/runs/7932048373
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29061/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29061", "html_url": "https://github.com/huggingface/transformers/pull/29061", "diff_url": "https://github.com/huggingface/transformers/pull/29061.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29061.patch", "merged_at": 1708095864000 }
https://api.github.com/repos/huggingface/transformers/issues/29060
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29060/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29060/comments
https://api.github.com/repos/huggingface/transformers/issues/29060/events
https://github.com/huggingface/transformers/issues/29060
2,138,554,275
I_kwDOCUB6oc5_d7-j
29,060
Request for Flash Attention 2.0 Support in GPNRoFormerForMaskedLM
{ "login": "YBoulaimen", "id": 157366664, "node_id": "U_kgDOCWE5iA", "avatar_url": "https://avatars.githubusercontent.com/u/157366664?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YBoulaimen", "html_url": "https://github.com/YBoulaimen", "followers_url": "https://api.github.com/users/YBoulaimen/followers", "following_url": "https://api.github.com/users/YBoulaimen/following{/other_user}", "gists_url": "https://api.github.com/users/YBoulaimen/gists{/gist_id}", "starred_url": "https://api.github.com/users/YBoulaimen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YBoulaimen/subscriptions", "organizations_url": "https://api.github.com/users/YBoulaimen/orgs", "repos_url": "https://api.github.com/users/YBoulaimen/repos", "events_url": "https://api.github.com/users/YBoulaimen/events{/privacy}", "received_events_url": "https://api.github.com/users/YBoulaimen/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" }, { "id": 6202871275, "node_id": "LA_kwDOCUB6oc8AAAABcbhN6w", "url": "https://api.github.com/repos/huggingface/transformers/labels/Flash%20Attention", "name": "Flash Attention", "color": "201FF8", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @YBoulaimen, thanks for opening this request! \r\n\r\nThe model is defined and maintained under this repo: https://github.com/songlab-cal/gpn/blob/main/gpn/model.py\r\n\r\nI suggest opening a request there. " ]
1,708
1,708
null
NONE
null
Hello, I trust this message finds you well. I am currently attempting to run the GPN-MSA model, which utilizes AutoModelForMaskedLM, and I am keen on parallelizing the computation across multiple GPUs. To optimize the model's performance, I would like to request the integration of Flash Attention 2.0 support into GPNRoFormerForMaskedLM. As I explore this avenue for parallelization, I envision that many others within the community could benefit from this enhancement. Thank you for your time and consideration. Best regards.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29060/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29059
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29059/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29059/comments
https://api.github.com/repos/huggingface/transformers/issues/29059/events
https://github.com/huggingface/transformers/issues/29059
2,138,513,299
I_kwDOCUB6oc5_dx-T
29,059
Transformers trainer: All checkpoint restarts now FAILING
{ "login": "whr778", "id": 5939523, "node_id": "MDQ6VXNlcjU5Mzk1MjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5939523?v=4", "gravatar_id": "", "url": "https://api.github.com/users/whr778", "html_url": "https://github.com/whr778", "followers_url": "https://api.github.com/users/whr778/followers", "following_url": "https://api.github.com/users/whr778/following{/other_user}", "gists_url": "https://api.github.com/users/whr778/gists{/gist_id}", "starred_url": "https://api.github.com/users/whr778/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whr778/subscriptions", "organizations_url": "https://api.github.com/users/whr778/orgs", "repos_url": "https://api.github.com/users/whr778/repos", "events_url": "https://api.github.com/users/whr778/events{/privacy}", "received_events_url": "https://api.github.com/users/whr778/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Forgot the version info... sorry about that guys\r\ntransformers 4.37.2", "Hello,\r\n\r\n`trainer_state.json` is saved when calling `_save_checkpoint` at the given line below which is after the model, optimizer and schedulers are saved.\r\n\r\nhttps://github.com/huggingface/transformers/blob/b2628086565e0eedf33f238fe2146f11087c0301/src/transformers/trainer.py#L2496\r\n\r\nSo, it should be saved properly and allow for reloading from the checkpoint as usual. You can see the following resume tests pass for 9 hours ago: https://github.com/huggingface/transformers/actions/runs/7925195377/job/21638061721\r\n\r\n![Screenshot 2024-02-16 at 6 08 31 PM](https://github.com/huggingface/transformers/assets/13534540/ec2b1744-ca46-46bf-9a3b-e9a0e646d872)\r\n", "Just confirmed... my bad had two training arguments stepping on each other... ran the debugger through TrainerControl to verify. This can be closed." ]
1,708
1,708
1,708
NONE
null
### System Info @muellerzr and @pacman100 In trainer.py, the Trainer code now requires trainer_state.json for checkpoint restarts, but trainer.py does NOT save trainer_state.json in def _save_optimizer_and_scheduler(self, output_dir): Recommend either removing the trainer_state.json dependency for checkpoint restarts or adding at line 2497 ``` elif self.args.should_save: # deepspeed.save_checkpoint above saves model/optim/sched torch.save(self.optimizer.state_dict(), os.path.join(output_dir, OPTIMIZER_NAME)) self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME)) ## <= HERE ``` ``` File "/devel/venv/whr778/py3.11-st.11/lib/python3.11/site-packages/transformers/trainer.py", line 1513, in train state = TrainerState.load_from_json(os.path.join(resume_from_checkpoint, TRAINER_STATE_NAME)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` ### Who can help? @muellerzr and @pacman100 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Train a Language Model, kill it, and try to do a checkpoint restart. Ran it in a debugger; all appropriate flags are set... the file does not exist in the checkpoint as there is no code to save it. Checkpoint restart requires trainer_state.json ### Expected behavior Checkpoint restarts should work as they used to
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29059/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29059/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/29058
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29058/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29058/comments
https://api.github.com/repos/huggingface/transformers/issues/29058/events
https://github.com/huggingface/transformers/pull/29058
2,138,280,124
PR_kwDOCUB6oc5nEc4B
29,058
`auto_find_batch_size` isn't yet supported with DeepSpeed/FSDP. Raise error accordingly.
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29058). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? 1. When examining if `auto_find_batch_size` issue with DeepSpeed is solved via Zach's previous PR as someone commented on the PR that issue is still there: https://github.com/huggingface/transformers/pull/28088#issuecomment-1893093503 When I try https://github.com/pacman100/DHS-LLM-Workshop/tree/main/chat_assistant/sft/training with following command: ``` accelerate launch --config_file "configs/deepspeed_config.yaml" train.py \ --seed 100 \ --model_name_or_path "mistralai/Mistral-7B-v0.1" \ --dataset_name "smangrul/code-chat-assistant-v1" \ --chat_template_format "none" \ --add_special_tokens False \ --append_concat_token False \ --splits "train,test" \ --max_seq_len 2048 \ --num_train_epochs 1 \ --logging_steps 5 \ --log_level "info" \ --logging_strategy "steps" \ --evaluation_strategy "epoch" \ --save_strategy "epoch" \ --push_to_hub \ --hub_private_repo True \ --hub_strategy "every_save" \ --bf16 True \ --packing True \ --learning_rate 2e-5 \ --lr_scheduler_type "cosine" \ --weight_decay 0.0 \ --warmup_ratio 0.1 \ --max_grad_norm 1.0 \ --output_dir "mistral-sft-ds" \ --per_device_train_batch_size 64 \ --per_device_eval_batch_size 16 \ --gradient_accumulation_steps 1 \ --dataset_text_field "content" \ --use_flash_attn True \ --auto_find_batch_size True ``` I get a different error: ``` File "/fsx/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1328, in partition self._partition(param_list, has_been_updated=has_been_updated) File "/fsx/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1477, in _partition self._partition_param(param, has_been_updated=has_been_updated) File "/fsx/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn ret_val = func(*args, **kwargs) File "/fsx/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1510, in _partition_param free_param(param) File "/fsx/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn ret_val = func(*args, **kwargs) File "/fsx/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 285, in free_param assert not param.ds_active_sub_modules, param.ds_summary() AssertionError: {'id': 26, 'status': 'AVAILABLE', 'numel': 4096, 'ds_numel': 4096, 'shape': (4096,), 'ds_shape': (4096,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {44}, 'ds_tensor.shape': torch.Size([512])} [2024-02-09 14:50:57,113] torch.distributed.elastic.multiprocessing.api: [WARNING] Sending process 230181 closing signal SIGTERM [2024-02-09 14:50:57,646] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 1 (pid: 230182) of binary: /fsx/sourab/miniconda3/envs/hf/bin/python ``` As `auto_find_batch_size` is good to have feature and not a necessity, coupled with the obscure errors noticed with DeepSpeeed/FSDP, we don't want to spend more time around this at present. Hence, this PR to raise error when trying to use `auto_find_batch_size` with DeepSpeed/FSDP.
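A minimal sketch of the kind of guard this PR describes; this is illustrative only and not the merged diff, and the placement inside `Trainer` and the exact exception type are assumptions (the follow-up test in #29061 expects a not-implemented error):

```python
# Hypothetical guard, not the actual transformers code: fail fast when
# auto_find_batch_size is combined with DeepSpeed or FSDP.
def validate_auto_find_batch_size(args) -> None:
    if args.auto_find_batch_size and (args.deepspeed or args.fsdp):
        raise NotImplementedError(
            "`auto_find_batch_size` isn't yet supported with DeepSpeed/FSDP. "
            "Please pass a fixed `per_device_train_batch_size` instead."
        )
```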
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29058/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29058", "html_url": "https://github.com/huggingface/transformers/pull/29058", "diff_url": "https://github.com/huggingface/transformers/pull/29058.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29058.patch", "merged_at": 1708087269000 }
https://api.github.com/repos/huggingface/transformers/issues/29057
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29057/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29057/comments
https://api.github.com/repos/huggingface/transformers/issues/29057/events
https://github.com/huggingface/transformers/pull/29057
2,138,258,766
PR_kwDOCUB6oc5nEYNq
29,057
fix failing trainer ds tests
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29057). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? 1. After PR https://github.com/huggingface/transformers/pull/27568, when resuming from a ckpt, the Trainer first loads the `trainer_state.json` file. As such, when a bogus ckpt folder is passed, it will throw a file-not-found error. Earlier, the code would throw a different invalid-ckpt error in the function call `deepspeed_load_checkpoint`. As such, the `test_can_resume_training_errors` tests were failing. This PR fixes the tests by removing the exact check on the error message when resuming from a bogus ckpt.
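A rough illustration of the test relaxation described above; the names and pytest style here are hypothetical, and the real change lives in the DeepSpeed test suite:

```python
# Hypothetical sketch: accept any failure on a bogus checkpoint instead of
# matching the exact error message, since the failure mode changed once
# trainer_state.json is loaded first.
import pytest

def assert_bogus_checkpoint_fails(trainer):
    with pytest.raises(Exception):  # was: a check on the specific "valid checkpoint" message
        trainer.train(resume_from_checkpoint="/path/that/does/not/exist")
```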
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29057/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29057/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29057", "html_url": "https://github.com/huggingface/transformers/pull/29057", "diff_url": "https://github.com/huggingface/transformers/pull/29057.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29057.patch", "merged_at": 1708084126000 }
https://api.github.com/repos/huggingface/transformers/issues/29056
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29056/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29056/comments
https://api.github.com/repos/huggingface/transformers/issues/29056/events
https://github.com/huggingface/transformers/pull/29056
2,138,242,495
PR_kwDOCUB6oc5nEUrh
29,056
StoppingCriteria tracks elements separately in the batch
{ "login": "zucchini-nlp", "id": 100715397, "node_id": "U_kgDOBgDLhQ", "avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zucchini-nlp", "html_url": "https://github.com/zucchini-nlp", "followers_url": "https://api.github.com/users/zucchini-nlp/followers", "following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zucchini-nlp/orgs", "repos_url": "https://api.github.com/users/zucchini-nlp/repos", "events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zucchini-nlp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29056). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
1,708
MEMBER
null
# What does this PR do? As was pointed out in #28932 , StoppingCriteria needs to stop generation per batch element and return a boolean tensor of `batch_size`. This PR adds the logic to track each row and when StoppingCriteria is triggered, stop generating for that particular row only. Note that the when #28932 gets merged, we need to add logic to handle beam related generation. The problem is that beam search has an internal logic of tracking EOS tokens and adds candidate tokens to hypothesis when done. And if StoppingCriteria will take the responsibility to track custom EOS tokens, it has to be passed to beam scorer. Right now I am not sure if calling StoppingCriteria twice is a good decision. First time to check candidate beams, and the second time for the chosen beams. What do you think @gante? It can be something like: ``` cur_len = input_ids.shape[-1] + 1 beam_next_input_ids = torch.cat([input_ids[next_indices, :], next_tokens.unsqueeze(-1)], dim=-1) beam_next_input_ids = beam_next_input_ids.view(-1, cur_len) next_is_done = stopping_criteria(beam_next_input_ids, scores) # is all are done, then it's prob max_length, not custom EOS being triggered if all(next_is_done): next_is_done = torch.full_like(next_indices, False, dtype=torch.bool) next_is_done = next_is_done.view(next_indices.shape) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante
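A standalone sketch of the per-row idea described above, assuming a criteria that returns a boolean tensor of shape `[batch_size]`; this is not the library's `StoppingCriteria` implementation, just an illustration:

```python
import torch

class PerRowEosCriteria:
    """Illustrative only: marks each batch row as finished once it emits a given token."""

    def __init__(self, eos_token_id: int):
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores=None) -> torch.BoolTensor:
        # One boolean per row instead of a single scalar for the whole batch.
        return input_ids[:, -1] == self.eos_token_id

# In a sampling loop, finished rows would then be masked out, e.g.:
#   unfinished = unfinished & ~criteria(input_ids, scores)
#   if not unfinished.any(): break
```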
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29056/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29056", "html_url": "https://github.com/huggingface/transformers/pull/29056", "diff_url": "https://github.com/huggingface/transformers/pull/29056.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29056.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29055
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29055/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29055/comments
https://api.github.com/repos/huggingface/transformers/issues/29055/events
https://github.com/huggingface/transformers/pull/29055
2,138,013,993
PR_kwDOCUB6oc5nDjBY
29,055
FIX [`PEFT` / `Trainer` ] Handle better peft + quantized compiled models
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29055). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "LGTM! Let me know when it's out of draft and you want a final review ", "It would be good to add a test for compiled models alongside this change ", "Thanks @amyeroberts ! This is ready for review ! \r\nNote currently QLoRA + torch.compile is not really supported as we need to make some changes to make 4bit layers from bnb compatible with torch.compile . Therefore I added a test that simply checks if the trainer is correctly initialized so that we don't get the unappropriate error from #29033 . Once compile + QLoRA will be supported, the integration will be seemless as users will simply have to update `bitsandbytes` to make it work", "Thanks @amyeroberts for the review! I just adapted the PR according to your suggestion, one concern I have with this new approach is that if bnb supports `torch.compile` in the next months we'll need to do some version check on trainer's init and make the code slightly bloated - though I am happy with this approach as well as I don't think this is that bad. What do you think? 🙏 ", "@younesbelkada How soon and with what confidence do you think bnb will support this? \r\n\r\n I agree we want to avoid bloat as much as possible, but I think it's better overall to be explict and have well defined behaviour for our users. The easiest way to avoid many version checks is to only support the latest version of bnb. ", "OK sounds good! I agree with that approach!\r\nCurrently we don't have any ETA :/ it might take a bit long, and it will also depend if the community will ask for it ", "@younesbelkada I would certainly be interested if bnb could support torch.compile!" ]
1,708
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? Fixes: https://github.com/huggingface/transformers/issues/29033 Even though quantized models + compile + peft is not really stable (might not work OTB for all users), the current way we deal with peft compiled models leads to errors that are hard for users to interpret, such as the one described in https://github.com/huggingface/transformers/issues/29033. I will run some tests on my side to check if torch.compile + qlora is supported. cc @amyeroberts
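A hedged sketch of the kind of early check this thread converges on (raising a clear error at `Trainer` init for compiled quantized PEFT models); the attribute names are assumptions for illustration, with `_orig_mod` being the attribute `torch.compile` sets on wrapped modules:

```python
# Illustrative only, not the merged implementation.
def check_compiled_quantized_model(model) -> None:
    is_compiled = hasattr(model, "_orig_mod")             # set by torch.compile's wrapper
    is_quantized = getattr(model, "is_quantized", False)  # assumed flag for this sketch
    if is_compiled and is_quantized:
        raise ValueError(
            "Passing a quantized model wrapped with `torch.compile` to Trainer is not "
            "supported yet; please pass the uncompiled model instead."
        )
```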
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29055/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29055/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29055", "html_url": "https://github.com/huggingface/transformers/pull/29055", "diff_url": "https://github.com/huggingface/transformers/pull/29055.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29055.patch", "merged_at": 1708429509000 }
https://api.github.com/repos/huggingface/transformers/issues/29054
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29054/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29054/comments
https://api.github.com/repos/huggingface/transformers/issues/29054/events
https://github.com/huggingface/transformers/pull/29054
2,137,965,022
PR_kwDOCUB6oc5nDYUv
29,054
Fix missing translation in README_ru
{ "login": "Strikoder", "id": 71812454, "node_id": "MDQ6VXNlcjcxODEyNDU0", "avatar_url": "https://avatars.githubusercontent.com/u/71812454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Strikoder", "html_url": "https://github.com/Strikoder", "followers_url": "https://api.github.com/users/Strikoder/followers", "following_url": "https://api.github.com/users/Strikoder/following{/other_user}", "gists_url": "https://api.github.com/users/Strikoder/gists{/gist_id}", "starred_url": "https://api.github.com/users/Strikoder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Strikoder/subscriptions", "organizations_url": "https://api.github.com/users/Strikoder/orgs", "repos_url": "https://api.github.com/users/Strikoder/repos", "events_url": "https://api.github.com/users/Strikoder/events{/privacy}", "received_events_url": "https://api.github.com/users/Strikoder/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29054). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Thank you for spotting the missing paragraph, and adding translation. I left a couple of suggestions.\r\n\r\nOkay, thank you!" ]
1,708
1,708
null
NONE
null
# What does this PR do? This PR fixes the Russian translation of the README file by translating one line that had previously been left in English. ## Before submitting - [x] This PR improves the docs. ## Fixes #26208 @stevhliu @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29054/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29054", "html_url": "https://github.com/huggingface/transformers/pull/29054", "diff_url": "https://github.com/huggingface/transformers/pull/29054.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29054.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29053
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29053/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29053/comments
https://api.github.com/repos/huggingface/transformers/issues/29053/events
https://github.com/huggingface/transformers/issues/29053
2,137,954,620
I_kwDOCUB6oc5_bpk8
29,053
model_max_length arg has no effect when creating bert tokenizer
{ "login": "galtay", "id": 663051, "node_id": "MDQ6VXNlcjY2MzA1MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/663051?v=4", "gravatar_id": "", "url": "https://api.github.com/users/galtay", "html_url": "https://github.com/galtay", "followers_url": "https://api.github.com/users/galtay/followers", "following_url": "https://api.github.com/users/galtay/following{/other_user}", "gists_url": "https://api.github.com/users/galtay/gists{/gist_id}", "starred_url": "https://api.github.com/users/galtay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/galtay/subscriptions", "organizations_url": "https://api.github.com/users/galtay/orgs", "repos_url": "https://api.github.com/users/galtay/repos", "events_url": "https://api.github.com/users/galtay/events{/privacy}", "received_events_url": "https://api.github.com/users/galtay/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @galtay, thanks for raising this issue! \r\n\r\nIt looks related to #29050 \r\n\r\ncc @LysandreJik " ]
1,708
1,708
null
NONE
null
### System Info None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.37.2 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer new_tokenizer = AutoTokenizer.from_pretrained('google-bert/bert-base-uncased', model_max_length=8192) print(new_tokenizer.model_max_length) # 8192 old_tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192) print(old_tokenizer.model_max_length) # 512 ``` ### Expected behavior ```python print(old_tokenizer.model_max_length) # 8192 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29053/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29052
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29052/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29052/comments
https://api.github.com/repos/huggingface/transformers/issues/29052/events
https://github.com/huggingface/transformers/pull/29052
2,137,941,298
PR_kwDOCUB6oc5nDTJf
29,052
Add Arabic translation for README
{ "login": "Strikoder", "id": 71812454, "node_id": "MDQ6VXNlcjcxODEyNDU0", "avatar_url": "https://avatars.githubusercontent.com/u/71812454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Strikoder", "html_url": "https://github.com/Strikoder", "followers_url": "https://api.github.com/users/Strikoder/followers", "following_url": "https://api.github.com/users/Strikoder/following{/other_user}", "gists_url": "https://api.github.com/users/Strikoder/gists{/gist_id}", "starred_url": "https://api.github.com/users/Strikoder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Strikoder/subscriptions", "organizations_url": "https://api.github.com/users/Strikoder/orgs", "repos_url": "https://api.github.com/users/Strikoder/repos", "events_url": "https://api.github.com/users/Strikoder/events{/privacy}", "received_events_url": "https://api.github.com/users/Strikoder/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "I've addressed the duplicates and verified the Markdown formatting; all links are functioning correctly. Screenshots documenting each issue you highlighted have been attached for reference. \r\n\r\nRegarding the Arabic langauge, I don't really know who speaks Arabic at Hugging Face, could you please help me with that?", "Great, thanks!\r\n\r\nI would join our [Discord](https://discord.gg/hugging-face-879548962464493619) and see if any community members there would be interested in reviewing the translation. You can also try asking on our [forums](https://discuss.huggingface.co/) as well :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29052). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
null
NONE
null
# What does this PR do? This PR introduces the Arabic translation of the README file. ## Before submitting - [x] This PR improves the docs. ## Fixes #29045 @stevhliu @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29052/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29052", "html_url": "https://github.com/huggingface/transformers/pull/29052", "diff_url": "https://github.com/huggingface/transformers/pull/29052.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29052.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29051
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29051/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29051/comments
https://api.github.com/repos/huggingface/transformers/issues/29051/events
https://github.com/huggingface/transformers/pull/29051
2,137,680,528
PR_kwDOCUB6oc5nCcDs
29,051
[`Do not Merge`]
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29051). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
1,708
COLLABORATOR
null
# What does this PR do? UV
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29051/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29051", "html_url": "https://github.com/huggingface/transformers/pull/29051", "diff_url": "https://github.com/huggingface/transformers/pull/29051.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29051.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29050
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29050/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29050/comments
https://api.github.com/repos/huggingface/transformers/issues/29050/events
https://github.com/huggingface/transformers/issues/29050
2,137,665,880
I_kwDOCUB6oc5_ajFY
29,050
Migrated pre-hub models' tokenizers don't configure the same as their pre-hub version
{ "login": "mlamera", "id": 48600479, "node_id": "MDQ6VXNlcjQ4NjAwNDc5", "avatar_url": "https://avatars.githubusercontent.com/u/48600479?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mlamera", "html_url": "https://github.com/mlamera", "followers_url": "https://api.github.com/users/mlamera/followers", "following_url": "https://api.github.com/users/mlamera/following{/other_user}", "gists_url": "https://api.github.com/users/mlamera/gists{/gist_id}", "starred_url": "https://api.github.com/users/mlamera/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mlamera/subscriptions", "organizations_url": "https://api.github.com/users/mlamera/orgs", "repos_url": "https://api.github.com/users/mlamera/repos", "events_url": "https://api.github.com/users/mlamera/events{/privacy}", "received_events_url": "https://api.github.com/users/mlamera/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey @mlamera! I believe this is due to the `transformers` library overriding some attributes of the config due to the explicit definition here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/f497f564bb76697edab09184a252fc1b1a326d1e/src/transformers/models/gpt2/tokenization_gpt2.py#L53-L60\r\n\r\nThis should have been fixed by https://github.com/huggingface/transformers/pull/29001, if you install from `main` you shouldn't get this issue anymore:\r\n\r\nRunning your code example before that commit:\r\n```py\r\nIn [1]: from transformers import AutoTokenizer\r\n ...: tokenizer1 = AutoTokenizer.from_pretrained(\"gpt2\")\r\n ...: tokenizer2 = AutoTokenizer.from_pretrained(\"openai-community/gpt2\")\r\n ...: print(tokenizer1.model_max_length, tokenizer2.model_max_length)\r\n1024 1000000000000000019884624838656\r\n```\r\n\r\nRunning your code example after that commit:\r\n```py\r\nIn [1]: from transformers import AutoTokenizer\r\n ...: tokenizer1 = AutoTokenizer.from_pretrained(\"gpt2\")\r\n ...: tokenizer2 = AutoTokenizer.from_pretrained(\"openai-community/gpt2\")\r\n ...: print(tokenizer1.model_max_length, tokenizer2.model_max_length)\r\n1000000000000000019884624838656 1024\r\n```\r\n\r\nThe better solution, however, would be to update the configuration files on the Hub themselves so that they work without the explicit override. I'll take care of that and link the updates here!\r\n\r\nThanks for reporting :hugs: ", "I'm opening PRs on the affected repositories such as:\r\nhttps://huggingface.co./albert/albert-large-v1/discussions/2\r\nhttps://huggingface.co./albert/albert-base-v2/discussions/6\r\nhttps://huggingface.co./albert/albert-large-v1/discussions/2\r\nhttps://huggingface.co./albert/albert-large-v2/discussions/3\r\nhttps://huggingface.co./albert/albert-xlarge-v1/discussions/2\r\nhttps://huggingface.co./albert/albert-xlarge-v2/discussions/2\r\nhttps://huggingface.co./albert/albert-xxlarge-v1/discussions/3\r\nhttps://huggingface.co./albert/albert-xxlarge-v2/discussions/3\r\n", "I opened a PR for `openai-community/gpt2` here: https://huggingface.co./openai-community/gpt2/discussions/80\r\n\r\nCan you please confirm whether it fixes the issue you're encountering? If so, I'll open Prs to the rest of the affected models. A simple way to test it would be to do:\r\n\r\n```py\r\nfrom transformers import AutoTokenizer\r\n\r\ntokenizer1 = AutoTokenizer.from_pretrained(\"gpt2\", revision='refs/pr/80')\r\ntokenizer2 = AutoTokenizer.from_pretrained(\"openai-community/gpt2\")\r\n\r\nprint(tokenizer1.model_max_length, tokenizer2.model_max_length)\r\n```", "> I opened a PR for `openai-community/gpt2` here: https://huggingface.co./openai-community/gpt2/discussions/80\r\n> \r\n> Can you please confirm whether it fixes the issue you're encountering? If so, I'll open Prs to the rest of the affected models. A simple way to test it would be to do:\r\n> \r\n> ```python\r\n> from transformers import AutoTokenizer\r\n> \r\n> tokenizer1 = AutoTokenizer.from_pretrained(\"gpt2\", revision='refs/pr/80')\r\n> tokenizer2 = AutoTokenizer.from_pretrained(\"openai-community/gpt2\")\r\n> \r\n> print(tokenizer1.model_max_length, tokenizer2.model_max_length)\r\n> ```\r\n\r\nThe fix looks good thank you!\r\n" ]
1,708
1,708
null
NONE
null
### System Info transformers version: 4.38.0.dev0 python version: 3.10.12 ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction **Example Snippet** ``` from transformers import AutoTokenizer tokenizer1 = AutoTokenizer.from_pretrained("gpt2") tokenizer2 = AutoTokenizer.from_pretrained("openai-community/gpt2") print(tokenizer1.model_max_length, tokenizer2.model_max_length) ``` **Output** `1024 1000000000000000019884624838656` ### Expected behavior Somewhat related to this issue https://github.com/huggingface/transformers/issues/14561 , I feel like the tokenizers for the migrated models should mimic the same behavior as the pre-"Hub" checkpoints since they are referring to the same model and the pre-Hub checkpoints don't show up in the model cards anymore. Some other models that got migrated and face issues would be t5-base -> google-t5/t5-base distilbert-base-uncased -> distilbert/distilbert-base-uncased bert-base-uncased -> google-bert/bert-base-uncased The easy, yet long way, would be to add the additional pathing to each models tokenization.py file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29050/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29049
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29049/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29049/comments
https://api.github.com/repos/huggingface/transformers/issues/29049/events
https://github.com/huggingface/transformers/issues/29049
2,137,354,132
I_kwDOCUB6oc5_ZW-U
29,049
Getting Long text generation after fine tuning Mistral 7b Model
{ "login": "Rishita32", "id": 56127736, "node_id": "MDQ6VXNlcjU2MTI3NzM2", "avatar_url": "https://avatars.githubusercontent.com/u/56127736?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rishita32", "html_url": "https://github.com/Rishita32", "followers_url": "https://api.github.com/users/Rishita32/followers", "following_url": "https://api.github.com/users/Rishita32/following{/other_user}", "gists_url": "https://api.github.com/users/Rishita32/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rishita32/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rishita32/subscriptions", "organizations_url": "https://api.github.com/users/Rishita32/orgs", "repos_url": "https://api.github.com/users/Rishita32/repos", "events_url": "https://api.github.com/users/Rishita32/events{/privacy}", "received_events_url": "https://api.github.com/users/Rishita32/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nGeneral comments: \r\n* Setting `add_eos_token` instructs the tokenizer to add an EOS token at the end of a sequence of tokens but don't control the length. \r\n* Looking through the code base, this flag is only used for Llama, CLVP and code llama. \r\n* When generating, you can't set a word limit, but you can set a limit on the number of tokens generated by passing `max_new_tokens`. You can read the [generate docs here](https://huggingface.co./docs/transformers/en/main_classes/text_generation) and [here](https://huggingface.co./docs/transformers/v4.37.2/en/generation_strategies). " ]
1,708
1,708
null
NONE
null
### System Info Hi, I am fine tuning Mistral7b model. I am getting long automated text generation using the fine tuned model. I have kept the eos_token=True. Can someone please tell me how to add a word limit to the responses? This is the code for initializing tokenizer: base_model = "mistralai/Mistral-7B-v0.1" bnb_config = BitsAndBytesConfig( load_in_4bit= True, bnb_4bit_quant_type= "nf4", bnb_4bit_compute_dtype= torch.bfloat16, bnb_4bit_use_double_quant= False, ) model = AutoModelForCausalLM.from_pretrained( base_model, load_in_4bit=True, quantization_config=bnb_config, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True, ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! model.config.pretraining_tp = 1 model.gradient_checkpointing_enable() ### Load tokenizer tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True) tokenizer.padding_side = 'right' tokenizer.pad_token = tokenizer.unk_token tokenizer.add_eos_token = True tokenizer.max_length = 200 tokenizer.truncation = True ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction base_model = "mistralai/Mistral-7B-v0.1" bnb_config = BitsAndBytesConfig( load_in_4bit= True, bnb_4bit_quant_type= "nf4", bnb_4bit_compute_dtype= torch.bfloat16, bnb_4bit_use_double_quant= False, ) model = AutoModelForCausalLM.from_pretrained( base_model, load_in_4bit=True, quantization_config=bnb_config, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True, ) model.config.use_cache = False # silence the warnings. Please re-enable for inference! model.config.pretraining_tp = 1 model.gradient_checkpointing_enable() ### Load tokenizer tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True) tokenizer.padding_side = 'right' tokenizer.pad_token = tokenizer.unk_token tokenizer.add_eos_token = True tokenizer.max_length = 200 tokenizer.truncation = True ### Expected behavior Looking for a solution to avoid long text generation.
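As the maintainer comment above notes, response length is controlled at generation time rather than by the tokenizer; a small illustration using the model named in the issue (the prompt and generation settings are only examples):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

inputs = tokenizer("Summarize LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
# max_new_tokens caps how many tokens are generated; add_eos_token only affects tokenization.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```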
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29049/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29048
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29048/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29048/comments
https://api.github.com/repos/huggingface/transformers/issues/29048/events
https://github.com/huggingface/transformers/pull/29048
2,137,352,371
PR_kwDOCUB6oc5nBVAZ
29,048
Fix - don't return pixel mask for yolos
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29048). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Closing as change was added in #28312" ]
1,708
1,708
1,708
COLLABORATOR
null
# What does this PR do? #28363 introduced a bug where the pixel mask was now being returned for YOLOS. `pixel_mask` isn't a valid YOLOS input, and so this breaks it. The PR fixes that. Weirdly, this wasn't caught on the original PR, but was triggered in #28312. cc @ydshieh for reference - we can try and debug together why we're not catching all the necessary tests when you're back :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29048/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29048", "html_url": "https://github.com/huggingface/transformers/pull/29048", "diff_url": "https://github.com/huggingface/transformers/pull/29048.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29048.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29047
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29047/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29047/comments
https://api.github.com/repos/huggingface/transformers/issues/29047/events
https://github.com/huggingface/transformers/issues/29047
2,137,276,126
I_kwDOCUB6oc5_ZD7e
29,047
[BUG] Unexpected GPU memory consumption when using transformers PEFT in DeepSpeed Zero3
{ "login": "alekseymalakhov11", "id": 131314005, "node_id": "U_kgDOB9OxVQ", "avatar_url": "https://avatars.githubusercontent.com/u/131314005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alekseymalakhov11", "html_url": "https://github.com/alekseymalakhov11", "followers_url": "https://api.github.com/users/alekseymalakhov11/followers", "following_url": "https://api.github.com/users/alekseymalakhov11/following{/other_user}", "gists_url": "https://api.github.com/users/alekseymalakhov11/gists{/gist_id}", "starred_url": "https://api.github.com/users/alekseymalakhov11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alekseymalakhov11/subscriptions", "organizations_url": "https://api.github.com/users/alekseymalakhov11/orgs", "repos_url": "https://api.github.com/users/alekseymalakhov11/repos", "events_url": "https://api.github.com/users/alekseymalakhov11/events{/privacy}", "received_events_url": "https://api.github.com/users/alekseymalakhov11/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada too :) ", "Hi @alekseymalakhov11 ! Thanks very much for the issue, just for us to understand better the issue, can you share the full command you are using for training? \r\nIt might be unrelated but just to be on the safe zone, could you try out on PEFT==0.8.2 & PEFT main to include some fixes such as https://github.com/huggingface/peft/pull/1450 ?", "Thank you for your quick response!\r\n\r\nWe have attempted to update PEFT to the versions you suggested; however, this didn't resolve the issue. Additionally, we updated DeepSpeed and Accelerate to their latest versions, but the problem still exists.\r\n\r\nI have attached the code snippets that we use for training\r\n\r\n# training.py\r\n```python\r\nfrom peft import get_peft_model, LoraConfig\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(tokenizer_path)\r\n\r\ntokenizer.add_special_tokens({'additional_special_tokens': ['<UNK>']})\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_path,\r\n torch_dtype=torch.bfloat16,\r\n low_cpu_mem_usage=True,\r\n )\r\n\r\nmodel.resize_token_embeddings(len(tokenizer))\r\n\r\n\r\npeft_config = LoraConfig(\r\n r=16,\r\n lora_alpha=16,\r\n lora_dropout=0.05,\r\n target_modules=[\r\n \"q_proj\",\r\n \"v_proj\",\r\n \"k_proj\",\r\n \"o_proj\"\r\n ],\r\n task_type=\"CAUSAL_LM\",\r\n modules_to_save=[\"embed_tokens\", \"lm_head\"],\r\n)\r\n\r\nmodel = get_peft_model(model, peft_config)\r\n\r\n# https://github.com/huggingface/peft/issues/341\r\nfor name, module in model.named_modules():\r\n if name.endswith('modules_to_save'):\r\n module.default.weight.data = module.default.weight.data.float()\r\n elif name.endswith('original_module'):\r\n module.weight.data = module.weight.data.float()\r\n\r\n\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=str(TRAINER_LOGS_FOLDER),\r\n report_to=[],\r\n evaluation_strategy=\"epoch\",\r\n save_strategy=\"epoch\",\r\n load_best_model_at_end=False,\r\n save_total_limit=5,\r\n per_device_train_batch_size=1,\r\n per_device_eval_batch_size=1,\r\n gradient_accumulation_steps=16,\r\n logging_steps=1,\r\n learning_rate=0.0004,\r\n num_train_epochs=5,\r\n lr_scheduler_type=\"linear\",\r\n warmup_steps=1,\r\n fp16=False,\r\n bf16=True,\r\n deepspeed=\"deepspeed_config.json\",\r\n optim=\"adamw_torch\",\r\n adam_beta1=0.9,\r\n adam_beta2=0.98,\r\n adam_epsilon=1e-6,\r\n weight_decay=0.01,\r\n max_grad_norm=0.11\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=val_dataset,\r\n callbacks=[],\r\n data_collator=data_collator,\r\n tokenizer=tokenizer,\r\n)\r\n\r\ntrainer.train()\r\n```\r\n# DeepSpeed config\r\n\r\n```json\r\n{\r\n\"fp16\":\r\n {\r\n \"enabled\": \"auto\"\r\n },\r\n\"bf16\": \r\n {\r\n \"enabled\": \"auto\"\r\n },\r\n\"optimizer\": \r\n {\r\n \"type\": \"AdamW\",\r\n \"params\": \r\n {\r\n \"lr\": \"auto\",\r\n \"betas\": \"auto\",\r\n \"eps\": \"auto\",\r\n \"weight_decay\": \"auto\"\r\n }\r\n },\r\n\"scheduler\": \r\n {\r\n \"type\": \"WarmupDecayLR\",\r\n \"params\": \r\n {\r\n \"warmup_min_lr\": \"auto\",\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\",\r\n \"total_num_steps\": \"auto\"\r\n }\r\n },\r\n\"zero_optimization\":\r\n {\r\n \"stage\": 2,\r\n \"overlap_comm\": true,\r\n \"contiguous_gradients\": true,\r\n \"sub_group_size\": 1e9,\r\n \"reduce_bucket_size\": \"auto\"\r\n },\r\n\r\n\"gradient_accumulation_steps\": 
\"auto\",\r\n\"gradient_clipping\": \"auto\",\r\n\"steps_per_print\": 2,\r\n\"train_batch_size\": \"auto\",\r\n\"train_micro_batch_size_per_gpu\": \"auto\",\r\n\"wall_clock_breakdown\": false\r\n}\r\n```\r\n\r\n# We launch our code using\r\n\r\n`deepspeed --num_gpus=8 --no_local_rank training.py`" ]
1,708
1,708
null
NONE
null
### System Info transformers = "4.35.0" peft = "0.7.1" torch = ">=2.0.0" accelerate = "^0.24.1" deepspeed = "^0.9.5" ### Who can help? @muellerzr @pacman100 @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ### Description Llama30B with Lora adapters cannot fit into 8 x A100 (80GB). ### Demonstration of Problem and Experiment Setups I will illustrate this issue using various experiment setups on smaller models: 1. 7b+lora+stage 3 ![image](https://github.com/huggingface/peft/assets/131314005/5d30fd07-2b4f-4da2-a2fb-3b9434fbb6c8) 2. 7b+stage 3 ![image](https://github.com/huggingface/peft/assets/131314005/b9453c69-a576-40a4-8e9c-0a4bd47bb4ab) 3. 7b+lora+stage 2 ![image](https://github.com/huggingface/peft/assets/131314005/4a754f4d-d89b-4bc9-bf19-bcd02710a2a7) 4. 7b + stage 2 ![image](https://github.com/huggingface/peft/assets/131314005/5e5bfa69-99d7-4918-9b0b-9c432ef02bef) All other parameters remain consistent in the experiments below. ### Expected behavior ### Suspected Cause The possible reason for this issue might be that Zero3 does not partition non-trainable weights across GPUs. The basis for this assumption is: - The memory consumption is consistent with predicted values when Lora is not used. - When training the model with both Zero2 and Zero3 using Lora, I observe nearly the same memory consumption. - A [code examination](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/stage3.py#L318C14-L318C58) of the Zero Runtime sources also suggests this could be the case. ### Expected behavior Training the model with Zero3 while using Lora should consume significantly less memory than Zero2 with Lora. We also opened an [issue in Deepspeed](https://github.com/microsoft/DeepSpeed/issues/5109), but no one has assisted us. Additionally, you might have more experience with PEFT and Deepspeed integration in the Transformers trainer.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29047/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29046
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29046/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29046/comments
https://api.github.com/repos/huggingface/transformers/issues/29046/events
https://github.com/huggingface/transformers/pull/29046
2,137,186,717
PR_kwDOCUB6oc5nAw77
29,046
[CI] Quantization workflow
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I see that `transformers-all-latest-gpu` docker image is not being updated for the last two days since the [installation](https://github.com/huggingface/transformers/actions/runs/7924158967/job/21635225922) fails because of aqml library that requires python 3.10 at least and we uses 3.8 for now. We will have to change that in the quantization dockerfile. \r\n\r\nEdit: I tried to install python 3.10 but it didn't [work](https://github.com/huggingface/transformers/actions/runs/7937198668/job/21673966243) (`1.030 E: Unable to locate package python3.10`). I found this [tutorial](https://computingforgeeks.com/how-to-install-python-on-ubuntu-linux-system/) but not sure if it is the best way to install it. ", "The only reason `aqlm` requires `python>=3.10` is a single `match-case` statement in a non-critical place.\r\n\r\nI was able to run `aqlm` on `python 3.8` no problem otherwise. I can replace the statement with an `if-else` statement and lower the requirement if necessary.", "@SunMarc thanks! \r\nin trl i build a docker image with python 3.10: https://github.com/huggingface/trl/blob/main/docker/trl-source-gpu/Dockerfile maybe you can take some inspiration from that dockerfile? 🙏 I am not sure why you are getting that error currently as the commands looks correct. \r\n@BlackSamorez yes that would be great if you can also support python 3.8 for AQLM 🙏 Thanks ! ", "> I was able to run aqlm on python 3.8 no problem otherwise. I can replace the statement with an if-else statement and lower the requirement if necessary.\r\n\r\nYes that would be for the best ! I prefer to keep running the quantization tests with python 3.8 since this is what we are actually doing for all transformers tests ! Moreover, it will be better for the users since we keep the requirement low. LMK when it is done ! @BlackSamorez \r\nOtherwise, I will modify the dockerfile and use conda to install python 3.10 as suggested by @younesbelkada. ", "@SunMarc `aqlm` will support python `>=3.8` starting version `1.0.2`. I'm [1 PR](https://github.com/Vahe1994/AQLM/pull/26) away from releasing it.", "Perfect ! I will wait for your PR to be merged then + release then if it doesn't take too much time. Please keep me updated ! Otherwise, I can first merge this PR without aqlm and add it afterwards. ", "@SunMarc `aqlm==1.0.2` is out. May I ask you to please update the docker images?", "I was able to build the image but I don't have the permission to push the image cc @ydshieh \r\n\r\n`#22 ERROR: failed to push huggingface/transformers-quantization-latest-gpu: push access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed`", "@SunMarc Do you builded/pushed via transformers' github actions? If so, do you have a job run link?", "Yes ! 
Here's the link to the [job](https://github.com/huggingface/transformers/actions/runs/7979164961) ", "it's quite strange because the workflow was indeed able to login: https://github.com/huggingface/transformers/actions/runs/7979164961/job/21802381289#step:5:1 but fails to push ...", "In TRL and PEFT I can confirm the build & push works fine : https://github.com/huggingface/peft/actions/workflows/build_docker_images.yml / https://github.com/huggingface/trl/actions/workflows/docker-build.yml so it's not a token issue as we use the same except if the token has expired for transformers cc @glegendre01 ", "Hi @SunMarc it's because there is some change in the infra team on docker hub stuff.\r\n\r\nThe repository `huggingface/transformers-quantization-latest-gpu` has to be created on docker Hub first, then only after this, you can push to it.\r\n\r\nI will ask for it.", "BTW, I will review this PR tomorrow or Friday 🙏 ", "Hi @ydshieh, i was able to run the tests and get the slack notification. See job [here](https://github.com/huggingface/transformers/actions/runs/7992001453). Thx for your help ! " ]
1,708
1,708
null
MEMBER
null
# What does this PR do? This PR adds a workflow for quantization tests + a related dockerfile. Since we merged the [HfQuantizer PR](https://github.com/huggingface/transformers/pull/26610), the community has started integrating their own quantizers into transformers (e.g. [AQLM](https://github.com/huggingface/transformers/pull/28928), with many more to come). This will lead to many third-party libraries in the Dockerfile `huggingface/transformers-all-latest-gpu`. To limit the impact of these libraries on transformers tests, I propose creating a separate dockerfile + workflow.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29046/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29046", "html_url": "https://github.com/huggingface/transformers/pull/29046", "diff_url": "https://github.com/huggingface/transformers/pull/29046.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29046.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29045
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29045/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29045/comments
https://api.github.com/repos/huggingface/transformers/issues/29045/events
https://github.com/huggingface/transformers/issues/29045
2,137,164,681
I_kwDOCUB6oc5_YouJ
29,045
[i18n-ar] Translating docs to Arabic
{ "login": "Strikoder", "id": 71812454, "node_id": "MDQ6VXNlcjcxODEyNDU0", "avatar_url": "https://avatars.githubusercontent.com/u/71812454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Strikoder", "html_url": "https://github.com/Strikoder", "followers_url": "https://api.github.com/users/Strikoder/followers", "following_url": "https://api.github.com/users/Strikoder/following{/other_user}", "gists_url": "https://api.github.com/users/Strikoder/gists{/gist_id}", "starred_url": "https://api.github.com/users/Strikoder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Strikoder/subscriptions", "organizations_url": "https://api.github.com/users/Strikoder/orgs", "repos_url": "https://api.github.com/users/Strikoder/repos", "events_url": "https://api.github.com/users/Strikoder/events{/privacy}", "received_events_url": "https://api.github.com/users/Strikoder/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "I will start translating the readme.md, then I will move to the tutorial section." ]
1,708
1,708
null
NONE
null
Hi! Let's bring the documentation to all the Arabic-speaking community 🌐 Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [readme.md](https://github.com/huggingface/transformers/edit/main/README.md) - [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md) ## Tutorial section - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29045/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29045/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29044
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29044/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29044/comments
https://api.github.com/repos/huggingface/transformers/issues/29044/events
https://github.com/huggingface/transformers/pull/29044
2,137,035,002
PR_kwDOCUB6oc5nAQle
29,044
Fix a tiny typo in `generation/utils.py::GenerateEncoderDecoderOutput`'s docstring
{ "login": "sadra-barikbin", "id": 22097587, "node_id": "MDQ6VXNlcjIyMDk3NTg3", "avatar_url": "https://avatars.githubusercontent.com/u/22097587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sadra-barikbin", "html_url": "https://github.com/sadra-barikbin", "followers_url": "https://api.github.com/users/sadra-barikbin/followers", "following_url": "https://api.github.com/users/sadra-barikbin/following{/other_user}", "gists_url": "https://api.github.com/users/sadra-barikbin/gists{/gist_id}", "starred_url": "https://api.github.com/users/sadra-barikbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sadra-barikbin/subscriptions", "organizations_url": "https://api.github.com/users/sadra-barikbin/orgs", "repos_url": "https://api.github.com/users/sadra-barikbin/repos", "events_url": "https://api.github.com/users/sadra-barikbin/events{/privacy}", "received_events_url": "https://api.github.com/users/sadra-barikbin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29044). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts any idea why CI isn't running in this PR? 👀 \r\n\r\n(this PR fixes a typo, it's ready to be merged)", "@gante No :( I can merge it as it's just a typo. cc @ydshieh for reference\r\n\r\nThanks for fixing @sadra-barikbin! " ]
1,708
1,708
1,708
CONTRIBUTOR
null
Hi there! This PR fixes a tiny typo in `generation/utils.py::GenerateEncoderDecoderOutput`'s docstring. @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29044/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29044", "html_url": "https://github.com/huggingface/transformers/pull/29044", "diff_url": "https://github.com/huggingface/transformers/pull/29044.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29044.patch", "merged_at": 1708020751000 }
https://api.github.com/repos/huggingface/transformers/issues/29043
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29043/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29043/comments
https://api.github.com/repos/huggingface/transformers/issues/29043/events
https://github.com/huggingface/transformers/pull/29043
2,137,011,581
PR_kwDOCUB6oc5nALjk
29,043
Patch to skip failing `test_save_load_low_cpu_mem_usage` tests
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29043). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks for catching these, and sorry I missed some of them (I didn't know how to run the tests across all models at the time).\r\n\r\nI've also worked to fill up all the failing tests: https://github.com/huggingface/transformers/pull/29024\r\n\r\nDo you mind taking a quick look? I'm hoping to get that in so it doesn't affect other people's workflows. Thanks!\r\n", "> Thanks for catching these, and sorry I missed some of them (I didn't know how to run the tests across all models at the time).\r\n\r\nOh, no need to apologise, you shouldn't need to manually do it, there's something wrong on our end as they should have been run automatically. \r\n\r\n> Do you mind taking a quick look? I'm hoping to get that in so it doesn't affect other people's workflows. Thanks!\r\n\r\nLooking now 🤗 " ]
1,708
1,708
1,708
COLLABORATOR
null
# What does this PR do? A handful of tests started failing after #28948 was merged in. The tests didn't fail on the PR or on the initial main commit, but they are failing now. It looks like the relevant tests might not have been fetched for the runners. This PR skips the tests for now. cc @ylacombe as you might want to enable this feature for MusicGen. cc @ArthurZucker as this touches a few language models; not sure it's worth digging in and fixing it for these as they have low-ish usage. cc @ydshieh for when you're back, in case there's anything else I should address here.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29043/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29043/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29043", "html_url": "https://github.com/huggingface/transformers/pull/29043", "diff_url": "https://github.com/huggingface/transformers/pull/29043.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29043.patch", "merged_at": 1708017994000 }
https://api.github.com/repos/huggingface/transformers/issues/29042
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29042/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29042/comments
https://api.github.com/repos/huggingface/transformers/issues/29042/events
https://github.com/huggingface/transformers/issues/29042
2,136,985,386
I_kwDOCUB6oc5_X88q
29,042
Neuron Trainium --Gradient_Accumulation_Steps > 1
{ "login": "mathephysicist", "id": 25594384, "node_id": "MDQ6VXNlcjI1NTk0Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/25594384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mathephysicist", "html_url": "https://github.com/mathephysicist", "followers_url": "https://api.github.com/users/mathephysicist/followers", "following_url": "https://api.github.com/users/mathephysicist/following{/other_user}", "gists_url": "https://api.github.com/users/mathephysicist/gists{/gist_id}", "starred_url": "https://api.github.com/users/mathephysicist/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathephysicist/subscriptions", "organizations_url": "https://api.github.com/users/mathephysicist/orgs", "repos_url": "https://api.github.com/users/mathephysicist/repos", "events_url": "https://api.github.com/users/mathephysicist/events{/privacy}", "received_events_url": "https://api.github.com/users/mathephysicist/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @muellerzr as it seems to cover trainer + TPU", "Thanks for the flag @mathephysicist! Can you confirm this works (with no change in Trainer) if installing accelerate from main via `pip install git+https://github.com/huggingface/accelerate@grad-accum-tpu`?", "Will try that out! ", "That seems to uninstall/remove a lot of the Neuron packages, resulting in xla_model related issues. That may be due to environment issues? I am trying the optimum-neuron tag v0.0.18, do you think trying optimum-neuron master would resolve them?", "`RuntimeError: Cannot replicate if number of devices (1) is different from 32`", "This is because optimum is pinned to a much older version of accelerate sadly. We'll need to put in the fix here in transformers it looks like... not ideal... (though the same solution has been put into accelerate)" ]
1,708
1,708
null
NONE
null
### System Info If I use Optimum Neuron on Trainium with --gradient_accumulation_steps > 1, training fails. I then modified line https://github.com/huggingface/transformers/blob/6d1f545665ac66420af9f6702d891a30c5d070ea/src/transformers/trainer.py#L1966C21-L1966C23 to include ``` if is_torch_tpu_available(): xm.mark_step() ``` set gradient_accumulation_steps > 1 again, and training worked. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Use any Neuron script that uses the huggingface Trainer and works, and set --gradient_accumulation_steps 2 ### Expected behavior Gradient accumulation should work.
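To make the suggested change easier to picture, here is a minimal, self-contained sketch of a gradient-accumulation loop with the extra `xm.mark_step()` call in the position the issue describes; the toy model, optimizer, and loop structure are illustrative stand-ins, not the actual `Trainer` internals.

```python
import torch
from torch import nn

try:
    import torch_xla.core.xla_model as xm  # only available on XLA / Neuron environments
    HAS_XLA = True
except ImportError:
    HAS_XLA = False

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
gradient_accumulation_steps = 2

for step in range(4):
    batch = torch.randn(8, 4)
    loss = model(batch).mean() / gradient_accumulation_steps
    loss.backward()
    if HAS_XLA:
        xm.mark_step()  # flush the lazy XLA graph between accumulation micro-steps
    if (step + 1) % gradient_accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```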
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29042/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29042/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29041
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29041/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29041/comments
https://api.github.com/repos/huggingface/transformers/issues/29041/events
https://github.com/huggingface/transformers/pull/29041
2,136,759,631
PR_kwDOCUB6oc5m_Ugm
29,041
Fix bug with passing capture_* args to neptune callback
{ "login": "AleksanderWWW", "id": 58885668, "node_id": "MDQ6VXNlcjU4ODg1NjY4", "avatar_url": "https://avatars.githubusercontent.com/u/58885668?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AleksanderWWW", "html_url": "https://github.com/AleksanderWWW", "followers_url": "https://api.github.com/users/AleksanderWWW/followers", "following_url": "https://api.github.com/users/AleksanderWWW/following{/other_user}", "gists_url": "https://api.github.com/users/AleksanderWWW/gists{/gist_id}", "starred_url": "https://api.github.com/users/AleksanderWWW/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AleksanderWWW/subscriptions", "organizations_url": "https://api.github.com/users/AleksanderWWW/orgs", "repos_url": "https://api.github.com/users/AleksanderWWW/repos", "events_url": "https://api.github.com/users/AleksanderWWW/events{/privacy}", "received_events_url": "https://api.github.com/users/AleksanderWWW/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "## What does this PR do\r\n\r\nThis PR aims to fix a bug that appears when using `NeptuneCallback` with `run=None` and at least one of the `capture_*` params set. \r\n\r\n### Cause of the problem\r\n\r\nThis is due to the fact, that in one of the methods those params have hardcoded values, but the kwargs passed to the constructor are also passed to that method. Therefore a `TypeError` like this occurs during evaluation:\r\n`TypeError: neptune.metadata_containers.run.Run() got multiple values for keyword argument 'capture_stdout' `\r\n\r\n### Solution\r\n\r\nBefore invoking the problematic method we want to make sure that the `capture_*` params are not present in the callback's state by **temporarily** removing them. Once the method returns, the original kwargs state is restored.\r\n", "Thanks for opening this PR @AleksanderWWW! Let us know when you want a review 🤗 ", "Thank you @amyeroberts !\n\nLet me wait for my collegues at Neptune to share their thougths on my proposal 😁" ]
1,708
1,708
null
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29041/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29041", "html_url": "https://github.com/huggingface/transformers/pull/29041", "diff_url": "https://github.com/huggingface/transformers/pull/29041.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29041.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29040
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29040/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29040/comments
https://api.github.com/repos/huggingface/transformers/issues/29040/events
https://github.com/huggingface/transformers/issues/29040
2,136,718,515
I_kwDOCUB6oc5_W7yz
29,040
I am getting this error while trying to run the example script for fine-tuning T5 on SQuAD for question answering
{ "login": "preethip02", "id": 84133769, "node_id": "MDQ6VXNlcjg0MTMzNzY5", "avatar_url": "https://avatars.githubusercontent.com/u/84133769?v=4", "gravatar_id": "", "url": "https://api.github.com/users/preethip02", "html_url": "https://github.com/preethip02", "followers_url": "https://api.github.com/users/preethip02/followers", "following_url": "https://api.github.com/users/preethip02/following{/other_user}", "gists_url": "https://api.github.com/users/preethip02/gists{/gist_id}", "starred_url": "https://api.github.com/users/preethip02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/preethip02/subscriptions", "organizations_url": "https://api.github.com/users/preethip02/orgs", "repos_url": "https://api.github.com/users/preethip02/repos", "events_url": "https://api.github.com/users/preethip02/events{/privacy}", "received_events_url": "https://api.github.com/users/preethip02/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @preethip02, thanks for raising this issue! \r\n\r\nCould you provide a minimal code example to reproduce this error? Specifically, how are you launching the script? ", "I followed the steps as given in the README file at huggingface/examples\r\n\r\nThe code i executed was \r\n\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install .\r\ncd examples/pytorch/question-answering\r\npip install -r requirements.txt\r\npython run_seq2seq_qa.py \\\r\n --model_name_or_path t5-small \\\r\n --dataset_name squad_v2 \\\r\n --context_column context \\\r\n --question_column question \\\r\n --answer_column answers \\\r\n --do_train \\\r\n --do_eval \\\r\n --per_device_train_batch_size 12 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir /tmp/debug_seq2seq_squad/", "Hi @preethip02, thanks for sharing the script. \r\n\r\nI'm unsure what's causing this issue, but I'm unable to replicate it. I can successfully run the script and run again so it resumes from the last checkpoint on main. \r\n\r\nCould you share your versions of accelerate, torch and evaluate? ", "Are the steps I have followed correct? Am I missing something? I ran this script on paperspace, could that be the reason behind the error?", "Name: accelerate\r\nVersion: 0.27.2\r\nName: datasets\r\nVersion: 2.17.0\r\nName: torch\r\nVersion: 1.12.1+cu116\r\nName: evaluate\r\nVersion: 0.4.1", "Hi @preethip02, \r\n\r\nI don't know if paperspace could be affecting this. I was running with:\r\n* accelerate 0.27.0\r\n* datasets 2.17.0\r\n* torch 2.20\r\n* evaluate 0.4.1", "I am able to run it successfully in google cloud gpu, except for some caching errors. ", "@preethip02 Great, I'm glad to hear it runs now. Is it OK to mark this issue as closed?\r\n", "Yes" ]
1,708
1,708
null
NONE
null
### System Info System Information: - transformers version: 4.38.0.dev0 - Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.27.2 - Accelerate config: not found - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.3 (gpu) - Jax version: 0.4.1 - JaxLib version: 0.4.1 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @muellerzr ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Error: Traceback (most recent call last): File "/notebooks/transformers/examples/pytorch/question-answering/run_seq2seq_qa.py", line 758, in <module> main() File "/notebooks/transformers/examples/pytorch/question-answering/run_seq2seq_qa.py", line 694, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1606, in train return inner_training_loop( File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 1635, in _inner_training_loop train_dataloader = self.get_train_dataloader() File "/usr/local/lib/python3.9/dist-packages/transformers/trainer.py", line 845, in get_train_dataloader return self.accelerator.prepare(DataLoader(train_dataset, **dataloader_params)) File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 236, in __init__ raise ValueError('prefetch_factor option could only be specified in multiprocessing.' ValueError: prefetch_factor option could only be specified in multiprocessing. let num_workers > 0 to enable multiprocessing. ### Expected behavior The expected output of running the fine-tuning script would be a message indicating that the model has been successfully trained and saved to the checkpoint.
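For context on the final `ValueError` in that traceback: with the reported torch 1.12, `DataLoader` only accepts a non-default `prefetch_factor` when worker processes are enabled. The snippet below is a small illustration of that behaviour, not code taken from the example script.

```python
from torch.utils.data import DataLoader

dataset = list(range(16))

# OK: prefetch_factor is only meaningful (and only validated) with num_workers > 0.
loader = DataLoader(dataset, batch_size=4, num_workers=2, prefetch_factor=2)

# On torch 1.12 this raises the error from the traceback above, because a
# non-default prefetch_factor (here None) is combined with num_workers=0:
# DataLoader(dataset, batch_size=4, num_workers=0, prefetch_factor=None)
```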
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29040/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/29039
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29039/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29039/comments
https://api.github.com/repos/huggingface/transformers/issues/29039/events
https://github.com/huggingface/transformers/pull/29039
2,136,595,226
PR_kwDOCUB6oc5m-v26
29,039
FIX: Fix error with `logger.warning` + inline with recent refactor
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks ! That's on me as well :D ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29039). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? Currently on transformers main, some legacy setups that call `model._is_quantized_training_enabled` throw an error: ```bash Arguments: (<class 'FutureWarning'>,) --- Logging error --- Traceback (most recent call last): File "/home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/logging/__init__.py", line 1083, in emit msg = self.format(record) File "/home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/logging/__init__.py", line 927, in format return fmt.format(record) File "/home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/logging/__init__.py", line 663, in format record.message = record.getMessage() File "/home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/logging/__init__.py", line 367, in getMessage msg = msg % self.args TypeError: not all arguments converted during string formatting ``` Because `logger.warning` treats a second positional argument as a %-formatting argument (rather than a warning category), I also realised we should use `warnings.warn` to be in line with the deprecation warning guidelines presented here: https://github.com/huggingface/transformers/pull/26527 🤯 cc @amyeroberts
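A small, self-contained reproduction of the failure mode and of the `warnings.warn` form this PR switches to; the deprecation message below is illustrative.

```python
import logging
import warnings

logging.basicConfig()
logger = logging.getLogger(__name__)

# Extra positional arguments to logger.warning are %-formatting arguments, so passing a
# warning class here makes record.getMessage() fail and print the "--- Logging error ---"
# traceback shown above (the message has no %s placeholder to consume the argument).
logger.warning("`_is_quantized_training_enabled` is going to be deprecated", FutureWarning)

# Deprecation notices should instead go through the warnings module:
warnings.warn("`_is_quantized_training_enabled` is going to be deprecated", FutureWarning)
```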
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29039/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29039", "html_url": "https://github.com/huggingface/transformers/pull/29039", "diff_url": "https://github.com/huggingface/transformers/pull/29039.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29039.patch", "merged_at": 1708007606000 }
https://api.github.com/repos/huggingface/transformers/issues/29038
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29038/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29038/comments
https://api.github.com/repos/huggingface/transformers/issues/29038/events
https://github.com/huggingface/transformers/pull/29038
2,136,581,159
PR_kwDOCUB6oc5m-su2
29,038
Remove timm in modeling files
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29038). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
null
COLLABORATOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29038/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29038", "html_url": "https://github.com/huggingface/transformers/pull/29038", "diff_url": "https://github.com/huggingface/transformers/pull/29038.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29038.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29037
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29037/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29037/comments
https://api.github.com/repos/huggingface/transformers/issues/29037/events
https://github.com/huggingface/transformers/pull/29037
2,136,545,870
PR_kwDOCUB6oc5m-ksU
29,037
Fix copies between DETR and DETA
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29037). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,708
1,708
1,708
COLLABORATOR
null
# What does this PR do? Fixes failing quality checks on main: https://app.circleci.com/pipelines/github/huggingface/transformers/84538/workflows/0d4691b0-4988-4040-a6bf-bd1ad90f523b/jobs/1093017?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-checks-link&utm_content=summary
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29037/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29037/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29037", "html_url": "https://github.com/huggingface/transformers/pull/29037", "diff_url": "https://github.com/huggingface/transformers/pull/29037.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29037.patch", "merged_at": 1708005778000 }
https://api.github.com/repos/huggingface/transformers/issues/29036
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29036/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29036/comments
https://api.github.com/repos/huggingface/transformers/issues/29036/events
https://github.com/huggingface/transformers/issues/29036
2,136,524,983
I_kwDOCUB6oc5_WMi3
29,036
`object of type 'NoneType' has no len()` when trying to use `WhisperNoSpeechDetection`
{ "login": "cifkao", "id": 8046580, "node_id": "MDQ6VXNlcjgwNDY1ODA=", "avatar_url": "https://avatars.githubusercontent.com/u/8046580?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cifkao", "html_url": "https://github.com/cifkao", "followers_url": "https://api.github.com/users/cifkao/followers", "following_url": "https://api.github.com/users/cifkao/following{/other_user}", "gists_url": "https://api.github.com/users/cifkao/gists{/gist_id}", "starred_url": "https://api.github.com/users/cifkao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cifkao/subscriptions", "organizations_url": "https://api.github.com/users/cifkao/orgs", "repos_url": "https://api.github.com/users/cifkao/repos", "events_url": "https://api.github.com/users/cifkao/events{/privacy}", "received_events_url": "https://api.github.com/users/cifkao/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @sanchit-gandhi @ylacombe ", "Thanks for opening the issue! Opened a PR to fix this!" ]
1,708
1,708
null
NONE
null
### System Info - `transformers` version: 4.38.0.dev0 (5b6fa23 – after merging #28687) - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.2.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @patrickvonplaten @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python # %% from transformers import WhisperForConditionalGeneration, AutoProcessor import torch import numpy as np # %% processor = AutoProcessor.from_pretrained("openai/whisper-base") model = WhisperForConditionalGeneration.from_pretrained( "openai/whisper-base", torch_dtype=torch.float16 ) model.to("mps") # %% # 60 seconds of silence raw_audio = np.zeros((1, 16_000 * 60)) inputs = processor( raw_audio, return_tensors="pt", truncation=False, padding="longest", return_attention_mask=True, sampling_rate=16_000, ) inputs = inputs.to(model.device, torch.float16) # %% result = model.generate( **inputs, # chosen here to make it trigger the filter: no_speech_threshold=0.2, logprob_threshold=-0.2, # no_speech_threshold requires logprob_threshold, which in turn assumes temperature to be a list/tuple temperature=(0.0,), ) decoded = processor.batch_decode( result, skip_special_tokens=False, decode_with_timestamps=True ) print(decoded) ``` Result: ```pytb Traceback (most recent call last): File "/Users/ondra/test-hf/test.py", line 27, in <module> result = model.generate( File "/Users/ondra/mambaforge/envs/transformers/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py", line 735, in generate sequences = _pad_to_max_length(final_segments, generation_config.pad_token_id, padding="right") File "/Users/ondra/mambaforge/envs/transformers/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py", line 148, in _pad_to_max_length pad_length = max_total_length - len(sequences[i]) TypeError: object of type 'NoneType' has no len() ``` ### Expected behavior Should not error out, should output `['']`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29036/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29035
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29035/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29035/comments
https://api.github.com/repos/huggingface/transformers/issues/29035/events
https://github.com/huggingface/transformers/pull/29035
2,136,293,976
PR_kwDOCUB6oc5m9soV
29,035
Add a clone method for model configs
{ "login": "FremyCompany", "id": 364405, "node_id": "MDQ6VXNlcjM2NDQwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/364405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FremyCompany", "html_url": "https://github.com/FremyCompany", "followers_url": "https://api.github.com/users/FremyCompany/followers", "following_url": "https://api.github.com/users/FremyCompany/following{/other_user}", "gists_url": "https://api.github.com/users/FremyCompany/gists{/gist_id}", "starred_url": "https://api.github.com/users/FremyCompany/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FremyCompany/subscriptions", "organizations_url": "https://api.github.com/users/FremyCompany/orgs", "repos_url": "https://api.github.com/users/FremyCompany/repos", "events_url": "https://api.github.com/users/FremyCompany/events{/privacy}", "received_events_url": "https://api.github.com/users/FremyCompany/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Guidance on where to add the tests, and what changes to make to the docs would be welcome, too.", "@FremyCompany so I can better understand the purpose behind this PR: `new_config = copy.deepcopy(config)` doesn't work in some settings, right? If so, in which situations?\r\n\r\nThe following works on my end:\r\n```py\r\nimport copy\r\nfrom transformers import AutoConfig\r\nconfig = AutoConfig.from_pretrained(\"gpt2\")\r\nnew_config = copy.deepcopy(config)\r\n```", "@gante This probably works in most cases, but it requires knowing about this API and I'm not versed enough into the implementation details of `deepcopy` to guarantee it always works flawlessly. Additionally, whether it works (or not) is probably not backed by tests in the library.\r\n\r\nBut yeah, I'm mostly suggesting a convenience method, the functionality is not difficult to emulate." ]
1,707
1,708
null
CONTRIBUTOR
null
# What does this PR do? Adding a convenience `clone()` method to `PretrainedConfig` that creates a deep copy of the current configuration. Useful to make changes to it without modifying the original. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @gante This is a draft PR. Before working on tests and documentation, I wanted to make sure the change proposed in this PR was welcome.
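A rough sketch of the convenience being discussed, under the assumption that `clone()` would simply wrap a deep copy; this is not the final API.

```python
import copy
from transformers import AutoConfig

config = AutoConfig.from_pretrained("gpt2")

# What users do today to get an independent copy they can edit freely:
new_config = copy.deepcopy(config)
new_config.n_layer = 6
assert config.n_layer != new_config.n_layer

# The proposed convenience would hide the same deep copy behind a discoverable method,
# roughly: PretrainedConfig.clone = lambda self: copy.deepcopy(self)
```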
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29035/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29035", "html_url": "https://github.com/huggingface/transformers/pull/29035", "diff_url": "https://github.com/huggingface/transformers/pull/29035.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29035.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29034
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29034/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29034/comments
https://api.github.com/repos/huggingface/transformers/issues/29034/events
https://github.com/huggingface/transformers/pull/29034
2,136,181,353
PR_kwDOCUB6oc5m9TdM
29,034
Removed obsolete attribute setting for AQLM quantization.
{ "login": "BlackSamorez", "id": 16901341, "node_id": "MDQ6VXNlcjE2OTAxMzQx", "avatar_url": "https://avatars.githubusercontent.com/u/16901341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BlackSamorez", "html_url": "https://github.com/BlackSamorez", "followers_url": "https://api.github.com/users/BlackSamorez/followers", "following_url": "https://api.github.com/users/BlackSamorez/following{/other_user}", "gists_url": "https://api.github.com/users/BlackSamorez/gists{/gist_id}", "starred_url": "https://api.github.com/users/BlackSamorez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BlackSamorez/subscriptions", "organizations_url": "https://api.github.com/users/BlackSamorez/orgs", "repos_url": "https://api.github.com/users/BlackSamorez/repos", "events_url": "https://api.github.com/users/BlackSamorez/events{/privacy}", "received_events_url": "https://api.github.com/users/BlackSamorez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "In the meantime, I updated [the Colab notebook](https://colab.research.google.com/drive/1-xZmBRXT5Fm3Ghn4Mwa2KRypORXb855X?usp=sharing) to this branch.\r\nSeems to be working again.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29034). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes latest errors mentioned in #28928 . ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29034/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29034", "html_url": "https://github.com/huggingface/transformers/pull/29034", "diff_url": "https://github.com/huggingface/transformers/pull/29034.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29034.patch", "merged_at": 1708020673000 }
https://api.github.com/repos/huggingface/transformers/issues/29033
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29033/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29033/comments
https://api.github.com/repos/huggingface/transformers/issues/29033/events
https://github.com/huggingface/transformers/issues/29033
2,135,817,420
I_kwDOCUB6oc5_TfzM
29,033
Trainer doesn't handle torch.compiled QLoRA models correctly
{ "login": "readwriteexec", "id": 129907247, "node_id": "U_kgDOB746Lw", "avatar_url": "https://avatars.githubusercontent.com/u/129907247?v=4", "gravatar_id": "", "url": "https://api.github.com/users/readwriteexec", "html_url": "https://github.com/readwriteexec", "followers_url": "https://api.github.com/users/readwriteexec/followers", "following_url": "https://api.github.com/users/readwriteexec/following{/other_user}", "gists_url": "https://api.github.com/users/readwriteexec/gists{/gist_id}", "starred_url": "https://api.github.com/users/readwriteexec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/readwriteexec/subscriptions", "organizations_url": "https://api.github.com/users/readwriteexec/orgs", "repos_url": "https://api.github.com/users/readwriteexec/repos", "events_url": "https://api.github.com/users/readwriteexec/events{/privacy}", "received_events_url": "https://api.github.com/users/readwriteexec/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @readwriteexec !\r\nThanks for the issue ! \r\nCan you try out: https://github.com/huggingface/transformers/pull/29055 ? I will also try to run some trainign on my end usign QLoRA + compile but from what I have understood it is not really supported I think. But in any case we should not throw that error on the Trainer so the fix is good to have on our end" ]
1,707
1,708
1,708
NONE
null
### System Info - `transformers` version: 4.38.0.dev0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.28.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.15.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.8.1 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` !pip install bitsandbytes !pip install git+https://github.com/huggingface/accelerate !pip install git+https://github.com/huggingface/datasets !pip install git+https://github.com/huggingface/peft !pip install git+https://github.com/huggingface/transformers !pip install git+https://github.com/huggingface/trl import torch import accelerate import datasets import peft import transformers import trl import bitsandbytes train_dataset = datasets.load_dataset('imdb', split='train') bnb_config = transformers.BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_quant_type="nf4", ) lora_config = peft.LoraConfig( r=8, lora_alpha=32, target_modules=["q_proj", "k_proj", "v_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) tokenizer = transformers.AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1") model = transformers.AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=bnb_config) model = peft.prepare_model_for_kbit_training(model) model = peft.get_peft_model(model, lora_config) trainer = trl.SFTTrainer( # model=model, # Does not raise a ValueError model=torch.compile(model), # Raises a ValueError train_dataset=train_dataset, dataset_text_field='text', max_seq_length=512, ) ``` ### Expected behavior Expected Behaviour: Best Case: Calling torch.compile has no effect on whether an exception is raised. Worst Case: Raising an exception that reflects that torch.compile isn't supported. Current behaviour: ``` [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers, preprocess_logits_for_metrics) 431 # At this stage the model is already loaded 432 if _is_quantized_and_base_model and not _is_peft_model(model): --> 433 raise ValueError( 434 "You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of" 435 " the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co./docs/transformers/peft" ValueError: You cannot perform fine-tuning on purely quantized models. Please attach trainable adapters on top of the quantized model to correctly perform fine-tuning. Please see: https://huggingface.co./docs/transformers/peft for more details ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29033/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/29032
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29032/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29032/comments
https://api.github.com/repos/huggingface/transformers/issues/29032/events
https://github.com/huggingface/transformers/pull/29032
2,135,816,940
PR_kwDOCUB6oc5m8DQv
29,032
Feature: Option to set the tracking URI for MLflowCallback.
{ "login": "seanswyi", "id": 20367759, "node_id": "MDQ6VXNlcjIwMzY3NzU5", "avatar_url": "https://avatars.githubusercontent.com/u/20367759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seanswyi", "html_url": "https://github.com/seanswyi", "followers_url": "https://api.github.com/users/seanswyi/followers", "following_url": "https://api.github.com/users/seanswyi/following{/other_user}", "gists_url": "https://api.github.com/users/seanswyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/seanswyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seanswyi/subscriptions", "organizations_url": "https://api.github.com/users/seanswyi/orgs", "repos_url": "https://api.github.com/users/seanswyi/repos", "events_url": "https://api.github.com/users/seanswyi/events{/privacy}", "received_events_url": "https://api.github.com/users/seanswyi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sounds good, thanks for the feedback @amyeroberts! I just made another minor change in the docstring and committed it: the default value of `MLFLOW_TRACKING_URI` should be an empty string rather than `None`.", "Is there any way to rerun tests? The failed `tests_torch` seems to be a timeout-related issue and wasn't there before my most recent commit. If passing all tests is absolutely necessary regardless of whether or not my code is related to it, then I feel like a simple rerun may solve this.\r\n\r\n```Python\r\nSome tests failed!\r\n\r\n============================= FAILURES SHORT STACK =============================\r\n_ LayoutLMTokenizationTest.test_add_tokens [type (microsoft/layoutlm-base-uncased)] _\r\n\r\n\r\nself = <huggingface_hub.utils._http.UniqueRequestIdAdapter object at 0x7f75b9f8c730>\r\nrequest = <PreparedRequest [GET]>, stream = True\r\ntimeout = Timeout(connect=10, read=10, total=None), verify = True, cert = None\r\nproxies = OrderedDict([('no', '127.0.0.1,localhost,circleci-internal-outer-build-agent')])\r\n\r\n try:\r\n resp = conn.urlopen(\r\n method=request.method,\r\n url=url,\r\n body=request.body,\r\n headers=request.headers,\r\n redirect=False,\r\n assert_same_host=False,\r\n preload_content=False,\r\n decode_content=False,\r\n retries=self.max_retries,\r\n timeout=timeout,\r\n chunked=chunked,\r\n )\r\n ...\r\n except (_SSLError, _HTTPError) as e:\r\n if isinstance(e, _SSLError):\r\n # This branch is for urllib3 versions earlier than v1.22\r\n raise SSLError(e, request=request)\r\n elif isinstance(e, ReadTimeoutError):\r\n> raise ReadTimeout(e, request=request)\r\nE requests.exceptions.ReadTimeout: (ReadTimeoutError(\"HTTPSConnectionPool(host='huggingface.co', port=443): Read timed out. (read timeout=10)\"), '(Request ID: e8e1debd-e8f7-45eb-b23d-fa63400a0c0b)')\r\n\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/requests/adapters.py:532: ReadTimeout\r\n\r\n\r\n\r\n\r\nExited with code exit status 255\r\n```", "@seanswyi I can re-run tests for you. I've just set `tests_torch` run off again", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29032). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@seanswyi As everything is passing and both @muellerzr and I approve, let's just merge and if @noise-field has any comments we can do a follow-up PR. \r\n\r\nThanks again for adding! ", "I'm an MLflow maintainer. Found this PR while I'm investigating https://github.com/mlflow-automation/mlflow/actions/runs/7949278234/job/21702921711#step:15:2462. What if we run this code like this?\r\n\r\n```python\r\nimport mlflow\r\nimport os\r\n\r\nassert \"MLFLOW_TRACKING_URI\" not in os.environ\r\nmlflow.set_tracking_uri(\"sqlite:///my.db\")\r\n\r\n# train\r\n...\r\n\r\n# Since MLFLOW_TRACKING_URI is not set, the tracking URI is set to an empty string, which is undesired in this case.\r\n```", "@harupy Could you elaborate a bit on why you think that would be better? The reason I wrote the initial PR the way I did is because to me it made sense to allow the user to keep `MLFLOW_TRACKING_URI` as an environment variable along with the other MLflow-related environment variables such as `MLFLOW_RUN_NAME`. I also left the default value as an empty string because that seemed to be consistent with MLflow's documentation as well (https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri). 
I feel like the snippet you provided takes away that control from the user, as it actually seems to be enforcing the user keep the tracking URI to a hardcoded value of `\"sqlite:///my.db\"`.\r\n\r\nPlease let me know if I'm misunderstanding or missing something. I haven't been able to fully look into the failed tests you provided but it seems like there's an issue between this PR and the [autologging module](https://github.com/mlflow/mlflow/blob/e27821c256ff08879f17721adb0b034552f2cf2c/mlflow/tracking/fluent.py#L2013)? ", "@seanswyi\r\n\r\n> I feel like the snippet you provided takes away that control from the user, as it actually seems to be enforcing the user keep the tracking URI to a hardcoded value of \"sqlite:///my.db\".\r\n\r\nTrue, but there may be users who use the code like my snippet (we do something similar in our tests, no `MLFLOW_TRACKING_URI` environment variable, just call `mlflow.set_tracking_uri` with a temporary sqlite file). Can we call `set_tracking_uri` only when `MLFLOW_TRACKING_URI` exists?\r\n", "@harupy If you believe that only calling the function when the environment variable exists is better then how about changing the current code to something like this?\r\n\r\n```Python\r\nif \"MLFLOW_TRACKING_URI\" in os.environ:\r\n self._tracking_uri = os.environ[\"MLFLOW_TRACKING_URI\"]\r\n logger.debug(f\"MLflow tracking URI is set to {self._tracking_uri}\")\r\n self._ml_flow.set_tracking_uri(self._tracking_uri)\r\nelse:\r\n logger.debug(f\"Environment variable `MLFLOW_TRACKING_URI` is not provided and therefore will not be explicitly set.\")\r\n```\r\n\r\nJust curious, is there any reason why a temporary SQLite DB is used? It seems like the documentation would suggest that setting a tracking URI isn't necessary and that the data will just be stored locally at `./mlruns`.", "Yeah that looks good to me.\r\n\r\n> is there any reason why a temporary SQLite DB is used?\r\n\r\n- To clean up runs and experiments after each test invocation.\r\n- To run tests faster.\r\n" ]
1,707
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? Previously, the MLflowCallback was only able to set MLflow experiments or runs. This PR adds the option to also set the tracking URI. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/28961 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/28961 - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - Trainer maintainers: @muellerzr @pacman100 - Original author of MLflowCallback: @noise-field (https://github.com/huggingface/transformers/commit/c48b16b8da991ba87ca82ed03e33481fc712fbfa) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29032/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29032", "html_url": "https://github.com/huggingface/transformers/pull/29032", "diff_url": "https://github.com/huggingface/transformers/pull/29032.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29032.patch", "merged_at": 1708094839000 }
https://api.github.com/repos/huggingface/transformers/issues/29031
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29031/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29031/comments
https://api.github.com/repos/huggingface/transformers/issues/29031/events
https://github.com/huggingface/transformers/issues/29031
2,135,744,898
I_kwDOCUB6oc5_TOGC
29,031
[i18n-<languageCode>] Translating docs to <languageName>
{ "login": "goalend", "id": 110501477, "node_id": "U_kgDOBpYeZQ", "avatar_url": "https://avatars.githubusercontent.com/u/110501477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goalend", "html_url": "https://github.com/goalend", "followers_url": "https://api.github.com/users/goalend/followers", "following_url": "https://api.github.com/users/goalend/following{/other_user}", "gists_url": "https://api.github.com/users/goalend/gists{/gist_id}", "starred_url": "https://api.github.com/users/goalend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goalend/subscriptions", "organizations_url": "https://api.github.com/users/goalend/orgs", "repos_url": "https://api.github.com/users/goalend/repos", "events_url": "https://api.github.com/users/goalend/events{/privacy}", "received_events_url": "https://api.github.com/users/goalend/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
closed
false
null
[]
[]
1,707
1,707
1,707
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the <languageName>-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180 - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). ## Tutorial section - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29031/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/29030
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29030/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29030/comments
https://api.github.com/repos/huggingface/transformers/issues/29030/events
https://github.com/huggingface/transformers/pull/29030
2,135,702,557
PR_kwDOCUB6oc5m7qCo
29,030
FEAT [`Generation`]: Introduce a centralized API to switch between cache implementations
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29030). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,708
null
CONTRIBUTOR
null
# What does this PR do? I would like to introduce a new API before the release to centralize switching between cache implementations ! Right now to load SInkCache one needs to do: ```python from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache tokenizer = AutoTokenizer.from_pretrained("TheBloke/LLaMa-7B-GPTQ") model = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMa-7B-GPTQ", device_map="auto") cache = SinkCache(window_length=508, num_sink_tokens=4) inputs = tokenizer(["Vaswani et al. (2017) introduced the Transformers"], return_tensors="pt").to(model.device) gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=300, past_key_values=cache) decoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True) ``` For static cache: ```python from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache tokenizer = AutoTokenizer.from_pretrained("TheBloke/LLaMa-7B-GPTQ") model = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMa-7B-GPTQ", device_map="auto") model.generation_config.cache_implementation = "static" inputs = tokenizer(["Vaswani et al. (2017) introduced the Transformers"], return_tensors="pt").to(model.device) gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=300) decoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True) ``` With this PR: ```diff from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache tokenizer = AutoTokenizer.from_pretrained("TheBloke/LLaMa-7B-GPTQ") model = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMa-7B-GPTQ", device_map="auto") - cache = SinkCache(window_length=508, num_sink_tokens=4) + model.set_cache_implementation("sink", sink_window_length=508, num_sink_tokens=4) inputs = tokenizer(["Vaswani et al. (2017) introduced the Transformers"], return_tensors="pt").to(model.device) - gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=300, past_key_value=cache) + gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=300) decoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True) ``` ```diff from transformers import AutoTokenizer, AutoModelForCausalLM, SinkCache tokenizer = AutoTokenizer.from_pretrained("TheBloke/LLaMa-7B-GPTQ") model = AutoModelForCausalLM.from_pretrained("TheBloke/LLaMa-7B-GPTQ", device_map="auto") - model.generation_config.cache_implementation = "static" + model.set_cache_implementation("static") inputs = tokenizer(["Vaswani et al. (2017) introduced the Transformers"], return_tensors="pt").to(model.device) gen_out = model.generate(**inputs, do_sample=False, max_new_tokens=300, past_key_values=cache) decoded = tokenizer.batch_decode(gen_out, skip_special_tokens=True) ``` What do you think @gante @tomaarsen @ArthurZucker @amyeroberts ? If you are happy with the design and idea I can move forward with adding tests and docs !
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29030/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29030", "html_url": "https://github.com/huggingface/transformers/pull/29030", "diff_url": "https://github.com/huggingface/transformers/pull/29030.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29030.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29029
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29029/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29029/comments
https://api.github.com/repos/huggingface/transformers/issues/29029/events
https://github.com/huggingface/transformers/issues/29029
2,135,678,463
I_kwDOCUB6oc5_S93_
29,029
Padding causes forward to produce different logits (Llama2-7b)
{ "login": "c3ianwu", "id": 92783433, "node_id": "U_kgDOBYfDSQ", "avatar_url": "https://avatars.githubusercontent.com/u/92783433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/c3ianwu", "html_url": "https://github.com/c3ianwu", "followers_url": "https://api.github.com/users/c3ianwu/followers", "following_url": "https://api.github.com/users/c3ianwu/following{/other_user}", "gists_url": "https://api.github.com/users/c3ianwu/gists{/gist_id}", "starred_url": "https://api.github.com/users/c3ianwu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/c3ianwu/subscriptions", "organizations_url": "https://api.github.com/users/c3ianwu/orgs", "repos_url": "https://api.github.com/users/c3ianwu/repos", "events_url": "https://api.github.com/users/c3ianwu/events{/privacy}", "received_events_url": "https://api.github.com/users/c3ianwu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Cc @younesbelkada too :)", "I believe this comment is relevant to this issue: https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535", "On point @amyeroberts TLDR it's expected ", "Thanks @amyeroberts @ArthurZucker \r\n\r\nI did a few more experiments based on the issue linked by @amyeroberts \r\n\r\n**KV Cache**\r\n\r\nI turned these off:\r\n\r\n```\r\nwith torch.no_grad():\r\n combined_outputs_1 = model(**combined_inputs_1, use_cache=False)\r\n```\r\netc. but this did not lead to any change. My understanding is that KV caching is disabled by default for `forward` so I'm not surprised.\r\n\r\n**FP32**\r\n\r\nI loaded model weights in fp32. There is still a noticeable difference but the difference is smaller.\r\n\r\n```\r\ntorch.sum(torch.abs(logits_1 - combined_logits_1))\r\n```\r\n```\r\n>>> tensor(169.4670, device='cuda:2')\r\n```\r\n```\r\ntorch.max(torch.abs(torch.nn.Softmax()(logits_1) - torch.nn.Softmax()(combined_logits_1)))\r\n```\r\n```\r\n>>> tensor(0.0002, device='cuda:2')\r\n```\r\n\r\n**FP16**\r\n\r\nI loaded model weights in fp16. There is still a noticeable difference but the difference is smaller than for bf16 but larger than for fp32.\r\n\r\n```\r\ntorch.sum(torch.abs(logits_1 - combined_logits_1))\r\n```\r\n```\r\n>>> tensor(510.8704, device='cuda:2')\r\n```\r\n```\r\ntorch.max(torch.abs(torch.nn.Softmax()(logits_1) - torch.nn.Softmax()(combined_logits_1)))\r\n```\r\n```\r\n>>> tensor(0.0006, device='cuda:2')\r\n```\r\n\r\n**CPU**\r\n\r\nI ran forward prop on CPU rather than GPU. The difference is now tiny.\r\n\r\n```\r\ntorch.sum(torch.abs(logits_1 - combined_logits_1))\r\n```\r\n```\r\n>>> tensor(0.3935)\r\n```\r\n```\r\ntorch.max(torch.abs(torch.nn.Softmax()(logits_1) - torch.nn.Softmax()(combined_logits_1)))\r\n```\r\n```\r\n>>> tensor(6.5565e-07)\r\n```\r\n\r\n**Right Padding**\r\n\r\nI changed padding to right padding on CPU. The error is now even smaller but still non-zero:\r\n\r\n```\r\ntorch.sum(torch.abs(logits_1 - combined_logits_1))\r\n```\r\n```\r\n>>> tensor(0.2899)\r\n```\r\n```\r\ntorch.max(torch.abs(torch.nn.Softmax()(logits_1) - torch.nn.Softmax()(combined_logits_1)))\r\n```\r\n```\r\n>>> tensor(1.1325e-06)\r\n```\r\n\r\nThoughts?\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "I found this bug too. You can test whether enable sqda or flash attention. When sqda is used, the result seems to be correct. I did not know why this bug happen.", "This was already answered, basically eager attention still attend to padding tokens (because the output of the softmax is never non zero) but with exact implementations / kernels, you have 0 for the padding tokens instead of a very tiny number. See #27050 ", "I have done some experiments. If I use the eager attention with sdpa attention mask (version==4.37.2), the results are correct. However, with the eager mode attention mask, the results are wrong. 
This happens when using left padding for inference.\r\n\r\nThe generated 4d attention mask looks like,\r\n**eager mode**\r\n\r\n```\r\n[[[[ 0., -65504., -65504., ..., -65504., -65504., -65504.],\r\n [ 0., 0., -65504., ..., -65504., -65504., -65504.],\r\n [ 0., 0., 0., ..., -65504., -65504., -65504.],\r\n ...,\r\n [ 0., 0., 0., ..., 0., -65504., -65504.],\r\n [ 0., 0., 0., ..., 0., 0., -65504.],\r\n [ 0., 0., 0., ..., 0., 0., 0.]]],\r\n [[[-65504., -65504., -65504., ..., -65504., -65504., -65504.],\r\n [-65504., 0., -65504., ..., -65504., -65504., -65504.],\r\n [-65504., 0., 0., ..., -65504., -65504., -65504.],\r\n ...,\r\n [-65504., 0., 0., ..., 0., -65504., -65504.],\r\n [-65504., 0., 0., ..., 0., 0., -65504.],\r\n [-65504., 0., 0., ..., 0., 0., 0.]]]]\r\n```\r\n**sdpa mode**\r\n\r\n```\r\ntensor([[[[ 0., -65504., -65504., ..., -65504., -65504., -65504.],\r\n [ 0., 0., -65504., ..., -65504., -65504., -65504.],\r\n [ 0., 0., 0., ..., -65504., -65504., -65504.],\r\n ...,\r\n [ 0., 0., 0., ..., 0., -65504., -65504.],\r\n [ 0., 0., 0., ..., 0., 0., -65504.],\r\n [ 0., 0., 0., ..., 0., 0., 0.]]],\r\n\r\n\r\n [[[0, 0, 0, ..., 0, 0, 0],\r\n [-65504., 0., -65504., ..., -65504., -65504., -65504.],\r\n [-65504., 0., 0., ..., -65504., -65504., -65504.],\r\n ...,\r\n [-65504., 0., 0., ..., 0., -65504., -65504.],\r\n [-65504., 0., 0., ..., 0., 0., -65504.],\r\n [-65504., 0., 0., ..., 0., 0., 0.]]]],\r\n```\r\n", "I don't understand what is wrong? ", "Please take a look at the discussions above. **If left padding is used, the output of the model is wrong.** I found that the attention mask can be generated with eager mode and sdpa mode. The difference is that if no element is attended the sdpa mode will set attention mask to zero. If sdpa mode attention mask generation is used, the output of the model is correct. I test with eager attention module and sdpa attention module. I am wondering why this happens.\r\n", "@SY-Xuan let's try to be clear when we say: \r\n- results are correct / wrong: what is wrong for you? You did not share generation, nor did you provide a snippet. Should I assume you are talking about @c3ianwu's results? Do you have the same setup as he does? \r\n- I used x and y: there are many different combination, sdpa attention, eager attention etc. Providing a small snippet of what you tests will help us understand what you mean by ` if no element is attended the sdpa mode will set attention mask to zero. If sdpa mode attention mask generation is used, the output of the model is correct`. \r\n- outputs are wrong: are you talking about logits? About generation? Are you doing greedy decoding ? Sampling? etc etc \r\n\r\nThe reason why `sdpa` uses `0` attention is because there is a bug with `sdpa` that does not support un-attended lines. `0` in the *causal* mask means that it will be attended. \r\n\r\nNow if you have a snippet with a reproducer, that will help.", "Thanks for your kind reply. I think I made a mistake by using different dtypes. I have fixed this by now. Sorry for the wasting of your time.", "No worries! 🤗 I'll close this as completed " ]
1,707
1,708
null
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.107+-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.22.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu118 (True) ### Who can help? @ArthurZucker @yun ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I have noticed that running `forward` on a padded sequence and an unpadded sequence yields (slightly) different logits, even with an attention mask specified. Here is what I ran: ``` from transformers import AutoTokenizer, AutoModelForCausalLM from torch import tensor import torch tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf") model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16) model.to(2) model.eval() tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = "left" prompt_1 = "I am so so confused." prompt_2 = "I have never been more lost in my life!" combined_prompt = [prompt_1, prompt_2] combined_inputs = tokenizer(combined_prompt, padding=True, return_tensors="pt").to(2) # batch size 2 combined_inputs ``` ``` >>> {'input_ids': tensor([[ 2, 2, 2, 2, 1, 306, 626, 577, 577, 9613, 29889], [ 1, 306, 505, 2360, 1063, 901, 5714, 297, 590, 2834, 29991]], device='cuda:2'), 'attention_mask': tensor([[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], device='cuda:2')} ``` ``` combined_inputs_1 = {"input_ids": combined_inputs["input_ids"][0].unsqueeze(0), "attention_mask": combined_inputs["attention_mask"][0].unsqueeze(0)} # extracting just the first item in the batch combined_inputs_1 ``` ``` >>> {'input_ids': tensor([[ 2, 2, 2, 2, 1, 306, 626, 577, 577, 9613, 29889]], device='cuda:2'), 'attention_mask': tensor([[0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]], device='cuda:2')} ``` ``` # running forward prop and then visualising the last 7 logits with torch.no_grad(): combined_outputs_1 = model(**combined_inputs_1) combined_logits_1 = combined_outputs_1.logits[0, 4:, :] combined_logits_1 ``` ``` >>> tensor([[-12.5000, -7.0625, -0.6406, ..., -6.6250, -7.9062, -7.2812], [ -9.5000, -12.1875, -1.1172, ..., -5.0625, -8.9375, -3.6250], [ -7.0312, -4.4688, 2.1875, ..., -1.8438, -5.6562, -1.8984], ..., [ -6.9375, -7.4062, 4.3438, ..., -2.8594, -3.1875, -3.1875], [ -2.4219, -2.0000, 11.0625, ..., -0.6914, -0.1133, -1.4141], [-11.8750, -10.8750, 8.3750, ..., -4.8125, -4.3750, -3.6094]], device='cuda:2') ``` ``` inputs_1 = tokenizer(prompt_1, padding=True, return_tensors="pt").to(2) # batch size 1 inputs_1 ``` ``` >>> {'input_ids': tensor([[ 1, 306, 626, 577, 577, 9613, 29889]], device='cuda:2'), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]], device='cuda:2')} ``` Notice that `inputs_1` is the same as `combined_inputs_1`, except with the left padding omitted and the attention mask altered to match. ``` # running forward prop again with torch.no_grad(): outputs_1 = model(**inputs_1) logits_1 = outputs_1.logits[0, :, :] logits_1 ``` ``` >>> tensor([[-12.5000, -7.0625, -0.6406, ..., -6.6250, -7.9062, -7.2812], [ -9.5000, -12.1875, -1.1016, ..., -5.0312, -8.9375, -3.6250], [ -7.0625, -4.4375, 2.2188, ..., -1.8750, -5.7188, -1.9219], ..., [ -6.9062, -7.3125, 4.3438, ..., -2.8594, -3.1875, -3.1406], [ -2.4219, -2.0000, 11.0625, ..., -0.6680, -0.1445, -1.4062], [-11.8125, -10.8125, 8.3750, ..., -4.7812, -4.3125, -3.5938]], device='cuda:2') ``` Upon close inspection, you'll see that this tensor is slightly different to `combined_logits_1`. We can show this more clearly: ``` torch.sum(torch.abs(logits_1 - combined_logits_1)) ``` ``` >>> tensor(3722.9448, device='cuda:2') ``` Is this meaningful? Well, if we look at the probabilities: ``` torch.max(torch.abs(torch.nn.Softmax()(logits_1) - torch.nn.Softmax()(combined_logits_1))) ``` ``` >>> tensor(0.0053, device='cuda:2') ``` That's a pretty non-trivial probability! ### Expected behavior I would expect the attention mask to mask out the left padding, making the two sequences `inputs_1` and `combined_inputs_1` identical during forward prop, which should in turn mean that the logits produced are equivalent. I realise that there may be small errors arising from batched GPU computations, but this error doesn't seem very small...
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29029/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29028
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29028/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29028/comments
https://api.github.com/repos/huggingface/transformers/issues/29028/events
https://github.com/huggingface/transformers/issues/29028
2,135,441,284
I_kwDOCUB6oc5_SD-E
29,028
Perplexity calculation in the official tutorial is not correct
{ "login": "balaabhijit", "id": 132952260, "node_id": "U_kgDOB-ywxA", "avatar_url": "https://avatars.githubusercontent.com/u/132952260?v=4", "gravatar_id": "", "url": "https://api.github.com/users/balaabhijit", "html_url": "https://github.com/balaabhijit", "followers_url": "https://api.github.com/users/balaabhijit/followers", "following_url": "https://api.github.com/users/balaabhijit/following{/other_user}", "gists_url": "https://api.github.com/users/balaabhijit/gists{/gist_id}", "starred_url": "https://api.github.com/users/balaabhijit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/balaabhijit/subscriptions", "organizations_url": "https://api.github.com/users/balaabhijit/orgs", "repos_url": "https://api.github.com/users/balaabhijit/repos", "events_url": "https://api.github.com/users/balaabhijit/events{/privacy}", "received_events_url": "https://api.github.com/users/balaabhijit/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,707
1,707
null
NONE
null
### System Info ```yaml Pytorch: 2.1.0+cu121 datasets: 2.17.0 transformers: 4.35.2 ``` ### Who can help? @ArthurZucker @stevhliu @younesbelkada ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The colab notebook where the problem is found: [Perplexity of Fixed length models](https://huggingface.co./docs/transformers/perplexity). Especially in the following region of the code: ```python import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nlls = [] prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc # may be different from stride on last loop input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() # Edit: This target setting is not right. It should be target_ids[:, :-1] = -100 or target_ids[:, :trg_len-1] = -100. # I am guessing the goal of this line is to make every other value except the one we are predicting to be zero. # Currently, the below line does not add any -100 to the target_ids tensor at all. target_ids[:, :-trg_len] = -100 #Edit: If the above line is not corrected, the following added line will pass everytime which should not happen. assert torch.equal(input_ids, target_ids), f"input_ids: {input_ids}\ntarget_ids: {target_ids}" with torch.no_grad(): outputs = model(input_ids, labels=target_ids) # loss is calculated using CrossEntropyLoss which averages over valid labels # N.B. the model only calculates loss over trg_len - 1 labels, because it internally shifts the labels # to the left by 1. neg_log_likelihood = outputs.loss nlls.append(neg_log_likelihood) prev_end_loc = end_loc if end_loc == seq_len: break ppl = torch.exp(torch.stack(nlls).mean()) ``` Thus the calculated Perplexity values are wrong. Kindly let me know if I am missing something or kindly correct the example. ### Expected behavior the expected behavior is something like this: ```python input_ids = tensor([[1,2,3,4,5,6]]) target_ids = tensor([[-100,-100,-100,-100,-100,6]]) ``` The current behavior is: ```python input_ids = tensor([[1,2,3,4,5,6]]) target_ids = tensor([[1,2,3,4,5,6]]) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29028/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29027
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29027/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29027/comments
https://api.github.com/repos/huggingface/transformers/issues/29027/events
https://github.com/huggingface/transformers/pull/29027
2,135,368,695
PR_kwDOCUB6oc5m6iz2
29,027
[`CLeanup`] Revert SDPA attention changes that got in the static kv cache PR
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29027). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
COLLABORATOR
null
# What does this PR do? cc @younesbelkada #27931 removed `copied from` statements for Persimmon, Qwen2 and Mixtral / Mistral which introduced unwanted changes for SDPA. Supersedes #29026 Closes: https://github.com/huggingface/transformers/pull/29026
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29027/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29027", "html_url": "https://github.com/huggingface/transformers/pull/29027", "diff_url": "https://github.com/huggingface/transformers/pull/29027.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29027.patch", "merged_at": 1707954948000 }
https://api.github.com/repos/huggingface/transformers/issues/29026
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29026/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29026/comments
https://api.github.com/repos/huggingface/transformers/issues/29026/events
https://github.com/huggingface/transformers/pull/29026
2,135,353,774
PR_kwDOCUB6oc5m6fl8
29,026
[`CI` / `core`] Fix CI with GC + pytorch 2.2
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29026). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "It appears the rootcause was slightlly different, see: https://github.com/huggingface/transformers/pull/29027" ]
1,707
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? Fixes the current failing CI for Mistral, Mixtral and Qwen2 for gradient checkpointing. For some reason, since pytorch 2.2, gradient checkpointing raises an error when going through in-place operations such as `tensor.mul_(xxx)` which was not the case in earlier versions. Simply replacing `causal_mask.mul_(~torch.eq(causal_mask, causal_mask.min()).all(dim=-1)[..., None])` by `causal_mask = causal_mask * (~torch.eq(causal_mask, causal_mask.min()).all(dim=-1)[..., None])` This makes me think we should maybe have a job that runs the CI on torch nightly to catch these early bugs, do we have that already? If not, happy to have a look cc @ArthurZucker @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29026/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29026/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29026", "html_url": "https://github.com/huggingface/transformers/pull/29026", "diff_url": "https://github.com/huggingface/transformers/pull/29026.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29026.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29025
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29025/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29025/comments
https://api.github.com/repos/huggingface/transformers/issues/29025/events
https://github.com/huggingface/transformers/issues/29025
2,135,280,570
I_kwDOCUB6oc5_Rcu6
29,025
What optimizations are available for AutoModelForVision2Seq
{ "login": "FurkanGozukara", "id": 19240467, "node_id": "MDQ6VXNlcjE5MjQwNDY3", "avatar_url": "https://avatars.githubusercontent.com/u/19240467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FurkanGozukara", "html_url": "https://github.com/FurkanGozukara", "followers_url": "https://api.github.com/users/FurkanGozukara/followers", "following_url": "https://api.github.com/users/FurkanGozukara/following{/other_user}", "gists_url": "https://api.github.com/users/FurkanGozukara/gists{/gist_id}", "starred_url": "https://api.github.com/users/FurkanGozukara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FurkanGozukara/subscriptions", "organizations_url": "https://api.github.com/users/FurkanGozukara/orgs", "repos_url": "https://api.github.com/users/FurkanGozukara/repos", "events_url": "https://api.github.com/users/FurkanGozukara/events{/privacy}", "received_events_url": "https://api.github.com/users/FurkanGozukara/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @FurkanGozukara !\r\nThanks for the issue !\r\n\r\n`AutoModelForVision2Seq` inherits from HF transformers' `PreTrainedModel`, therefore it benefits from most of the optimizations that are available, for instance you can load it in fp16, bf16 as usual through the `torch_dtype` argument, CPU offloading through device_map, and you can also load the model in 4-bit / 8-bit through the `load_in_kbit=True` argument (some models require to have the `accelerate` integration, so if it's not the case you can let us know). \r\n\r\nFor more \"advanced\" optimisations such as SDPA or Flash Attention 2, it is on per-architecture basis, for your specific question you are asking for `microsoft/kosmos-2-patch14-224`. From what I can see kosmos2 does not support SDPA or FA2 so you can open a separate issue as a feature request so that we can work on adding that feature", "> Hi @FurkanGozukara ! Thanks for the issue !\r\n> \r\n> `AutoModelForVision2Seq` inherits from HF transformers' `PreTrainedModel`, therefore it benefits from most of the optimizations that are available, for instance you can load it in fp16, bf16 as usual through the `torch_dtype` argument, CPU offloading through device_map, and you can also load the model in 4-bit / 8-bit through the `load_in_kbit=True` argument (some models require to have the `accelerate` integration, so if it's not the case you can let us know).\r\n> \r\n> For more \"advanced\" optimisations such as SDPA or Flash Attention 2, it is on per-architecture basis, for your specific question you are asking for `microsoft/kosmos-2-patch14-224`. From what I can see kosmos2 does not support SDPA or FA2 so you can open a separate issue as a feature request so that we can work on adding that feature\r\n\r\nAwesome\r\n\r\nthis model is simply just amazing\r\n\r\nDo you suggest FP16 or BF16?\r\n\r\nI added 4, 8, 16 and 32 bit options\r\nBy default it was running at 32 bit" ]
1,707
1,708
null
NONE
null
Hello. I am using AutoModelForVision2Seq for Kosmos 2 model like below ``` model_source = "microsoft/kosmos-2-patch14-224" model = AutoModelForVision2Seq.from_pretrained(model_source).to("cuda") processor = AutoProcessor.from_pretrained(model_source) ``` I checked this link but couldn't find anything regarding optimizations Such as load as BF16? Enable xformers? Enable CPU offloading? anything that can reduce VRAM usage, quantize or speed up inference? Thank you https://huggingface.co./docs/transformers/model_doc/auto transformers==4.37.2 ### Who can help? text models: @ArthurZucker and @younesbelkada vision models: @amyeroberts generate: @gante pipelines: @Narsil Big Model Inference: @SunMarc
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29025/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29025/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29024
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29024/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29024/comments
https://api.github.com/repos/huggingface/transformers/issues/29024/events
https://github.com/huggingface/transformers/pull/29024
2,135,278,649
PR_kwDOCUB6oc5m6PHt
29,024
Adding _tie_weights() to more models
{ "login": "hackyon", "id": 1557853, "node_id": "MDQ6VXNlcjE1NTc4NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hackyon", "html_url": "https://github.com/hackyon", "followers_url": "https://api.github.com/users/hackyon/followers", "following_url": "https://api.github.com/users/hackyon/following{/other_user}", "gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hackyon/subscriptions", "organizations_url": "https://api.github.com/users/hackyon/orgs", "repos_url": "https://api.github.com/users/hackyon/repos", "events_url": "https://api.github.com/users/hackyon/events{/privacy}", "received_events_url": "https://api.github.com/users/hackyon/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Still working through another 10 failing cases (where it's not so obvious), so only marking as draft for now.\r\n\r\ntests/models/flava/test_modeling_flava.py .....F \r\ntests/models/encodec/test_modeling_encodec.py F\r\ntests/models/fsmt/test_modeling_fsmt.py F \r\ntests/models/lxmert/test_modeling_lxmert.py F \r\ntests/models/marian/test_modeling_marian.py F.\r\n\r\ntests/models/musicgen/test_modeling_musicgen.py .F \r\n\r\ntests/models/sew/test_modeling_sew.py F \r\ntests/models/sew_d/test_modeling_sew_d.py F \r\n\r\ntests/models/timm_backbone/test_modeling_timm_backbone.py F \r\n", "Cool, I fixed the remaining missing _tie_weights(), and also added some more skip tests for some special audio/vision models (many failing due to use of `nn.utils.weight_norm`). \r\n\r\nI ran the following and made sure all the tests pass:\r\n`pytest -k test_save_load_low_cpu_mem_usage tests/`\r\n\r\nAlso, what do you think documenting the need to run the above command when modifying common tests? Perhaps it can go into the [Contributions](https://huggingface.co./docs/transformers/contributing) page? I can follow up with this if you think it makes sense.", "The test failure in tests_tf looks unrelated. Any chance you can kick off a re-run of the CI checks? 🙏\r\n\r\nAlso, I've verified again that `pytest -k test_save_load_low_cpu_mem_usage tests/` passes.", "cc @SunMarc :)", "Thanks for the review!\r\n\r\nI added the explanation of tie_weights() from my research, but it'd be great to get some feedback from someone who's more knowledgeable on this.", "cc @SunMarc @muellerzr \r\n\r\nDon't meant to be pushy, but the tests for the models in this change are currently broken in main/HEAD, so I'd be grateful if you could give this a look in the next couple of days. Thanks!", "> cc @SunMarc @muellerzr\r\n> \r\n> Don't meant to be pushy, but the tests for the models in this change are currently broken in main/HEAD, so I'd be grateful if you could give this a look in the next couple of days. Thanks!\r\n\r\nI added PR #29118 to skip the failing tests, so we'd have more time to discuss this change. Feel free to comment/review whenever you get a chance. Thanks and greatly appreciate your input on this!\r\n\r\nFor more context, the tie_weights() I'm adding here should enable loading those models with low_cpu_mem_usage=True (it's currently unsupported for those models). \r\n\r\nThe tie_weights() should also be helpful in getting accelerate to work on more models, since we need it to properly infer the device map. \r\n\r\nCheers.", "Sorry I added the wrong link in the PR description, this issue is a follow up of #28948. There's context in that link (tl;dr adding the tie_weights() enable those models to be loaded with low_cpu_mem_usage=True)\r\n\r\nWe're adding new functionality with these tie_weights(). We're basically adding support for low_cpu_mem_usage=True for all these models. \r\n\r\nThe functionality kind of snowballed out another unrelated change for SDPA #28802, since the [common test for SDPA uses low_cpu_mem_usage](https://github.com/huggingface/transformers/blob/0996a10077219de0556281511fc02f3ab68002d5/tests/test_modeling_common.py#L3715). I looked into it, and figured it could be a good idea to add support for low_cpu_mem_usage to a bunch of models as well while I'm at it.\r\n\r\n> * this feels like a big regression: having to write and add the ` def _tie_weights(self):` for all of these model is a bit strange to me\r\n> * are we not changing the API by supporting bias tie? If yes I am against it. 
That should only be done manually by the user!\r\n\r\nThe weights were already tied in `__init__` (with self.decoder.bias = self.bias), but those get untied when loading through low_cpu_mem_usage, and needs to get retied. \r\n\r\nIf we are not loading with low_cpu_mem_usage, those biases will already be tied. If we save with save_pretrained, only one copy of those biases will be saved.\r\n\r\n> * could you sum up in 1 line what was wrong with the previous behaviour and why we need to tie biases for these models?\r\n\r\nThose models fail to load with low_cpu_mem_usage=True.\r\n\r\n> * copied from should be used\r\n\r\nGood point. A lot of these prediction heads were more-or-less copied from other existing heads, but not sure why they were not marked copied-from. I'll see if I can add back some fix-copies.\r\n\r\n", "Hi @hackyon I haven't followed your work (btw, thank you for the contribution!), but just jump in to let you know:\r\n\r\nif you put a prefix `[test all]` in a commit message, that commit will trigger a full CI run.\r\n\r\nFor example, a commit message like `[test all] check my commit is perfect`.\r\n\r\nThis way, you have an easy way to check if the changes are all good.\r\n\r\n(Our test fetcher is a trade-off of coverage and speed. But we will try to improve it)", "Thanks for the details! \r\nAlright let's make sure we take everything into account, safe serialization and unsafe (bin as well). \r\nYou can also just use copied from for single functions, but the idea is to make sure we have a fix in a single place and the rest is just the copy of the fix! Same for the test, copied from can be used for the new test " ]
1,707
1,708
null
CONTRIBUTOR
null
# What does this PR do? This is a follow-up to ~#28947~ #28948. It turns out that the CI only runs a small subset of tests, so there are quite a bit of sneaky failures throughout the tests. I had to explicitly run the following command (it will take quite a long time): pytest -k "test_save_load_low_cpu_mem_usage" tests/ Going forward, we should probably ask devs to run this command when they modify a common test. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29024/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29024/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29024", "html_url": "https://github.com/huggingface/transformers/pull/29024", "diff_url": "https://github.com/huggingface/transformers/pull/29024.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29024.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29023
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29023/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29023/comments
https://api.github.com/repos/huggingface/transformers/issues/29023/events
https://github.com/huggingface/transformers/pull/29023
2,135,246,093
PR_kwDOCUB6oc5m6H-h
29,023
[Quantization] Quanto quantizer
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29023). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @dacorvo " ]
1,707
1,708
null
MEMBER
null
# What does this PR do ? This PR adds the quantization methods from quanto library. We will support inference + model quantization if the user perform weights only quantization since we don't require a calibration dataset. TODO: - [ ] Couple of fix to do on quanto side (e.g safetensors saving) - [ ] docs - [ ] tests
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29023/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29023/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29023", "html_url": "https://github.com/huggingface/transformers/pull/29023", "diff_url": "https://github.com/huggingface/transformers/pull/29023.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29023.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29022
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29022/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29022/comments
https://api.github.com/repos/huggingface/transformers/issues/29022/events
https://github.com/huggingface/transformers/issues/29022
2,134,772,595
I_kwDOCUB6oc5_Pgtz
29,022
`get_torch_version()` doesn't return same result as `import torch; torch.__version__`
{ "login": "relh", "id": 3629411, "node_id": "MDQ6VXNlcjM2Mjk0MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/3629411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/relh", "html_url": "https://github.com/relh", "followers_url": "https://api.github.com/users/relh/followers", "following_url": "https://api.github.com/users/relh/following{/other_user}", "gists_url": "https://api.github.com/users/relh/gists{/gist_id}", "starred_url": "https://api.github.com/users/relh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/relh/subscriptions", "organizations_url": "https://api.github.com/users/relh/orgs", "repos_url": "https://api.github.com/users/relh/repos", "events_url": "https://api.github.com/users/relh/events{/privacy}", "received_events_url": "https://api.github.com/users/relh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @relh, thanks for raising this issue! \r\n\r\nIndeed, despite the promised profit, I don't think this is something we should be handling within `generic.py`. Assuming there's one version installed is a fair one, and being able to parse the torch version is something we do throughout the library e.g. [here](https://github.com/huggingface/transformers/blob/5f06053dd821c91f7bd697309109abaa3396b605/src/transformers/pytorch_utils.py#L31). It's better to have this failure hit with the parsed version not matching and raising and error because a function couldn't be found instead of trying to do some magic on the inspection. After all, it caught this problem :)! \r\n\r\nWe should expect `get_torch_version` to match `import torch; torch.__version__` though. I don't think this is an easy fix, as the reason this is happening is `importlib` isn't reading the \"correct\" torch. The question is, how can we know which is the \"correct\" version without importing (which `_is_package_available` does)?" ]
1,707
1,707
null
NONE
null
### System Info ubuntu linux with conda and pip ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `get_torch_version()` doesn't return same result as `import torch; torch.__version__` Hi, if you happen to have a version of torch installed with `pip` and one with `conda`, there can be an issue where the `import_utils` function doesn't return the same thing as actually importing. This was recently an issue for me because I had a version `2.1.0` and a version `2.2.0` and the naming for `_torch_pytree.register_pytree_node` had been `_torch_pytree._register_pytree_node`. The solution is someone shouldn't have both installed (duh) but this problem could be avoided if line 311 in `generic.py` checked for the existence of a function and then fell back instead of parsing a version. ### Expected behavior 1. have 2 versions of torch installed in an environment 2. use code that involves `generic.py` 3. ??? 4. profit
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29022/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29021
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29021/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29021/comments
https://api.github.com/repos/huggingface/transformers/issues/29021/events
https://github.com/huggingface/transformers/pull/29021
2,134,769,927
PR_kwDOCUB6oc5m4e5s
29,021
Flax: Flax examples without pytorch dependencies
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nope, the speech recognition example relies on torch's Dataloader :(", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29021). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
MEMBER
null
# What does this PR do? WIP
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29021/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29021", "html_url": "https://github.com/huggingface/transformers/pull/29021", "diff_url": "https://github.com/huggingface/transformers/pull/29021.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29021.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29020
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29020/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29020/comments
https://api.github.com/repos/huggingface/transformers/issues/29020/events
https://github.com/huggingface/transformers/issues/29020
2,134,475,811
I_kwDOCUB6oc5_OYQj
29,020
NotImplementedError: A model class needs to define a `prepare_inputs_for_generation` method in order to use `.generate()`.
{ "login": "nikhilajoshy", "id": 37141775, "node_id": "MDQ6VXNlcjM3MTQxNzc1", "avatar_url": "https://avatars.githubusercontent.com/u/37141775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikhilajoshy", "html_url": "https://github.com/nikhilajoshy", "followers_url": "https://api.github.com/users/nikhilajoshy/followers", "following_url": "https://api.github.com/users/nikhilajoshy/following{/other_user}", "gists_url": "https://api.github.com/users/nikhilajoshy/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikhilajoshy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikhilajoshy/subscriptions", "organizations_url": "https://api.github.com/users/nikhilajoshy/orgs", "repos_url": "https://api.github.com/users/nikhilajoshy/repos", "events_url": "https://api.github.com/users/nikhilajoshy/events{/privacy}", "received_events_url": "https://api.github.com/users/nikhilajoshy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @nikhilajoshy, thanks for raising an issue! \r\n\r\nPlease make sure to share the full error traceback in the issue information. \r\n\r\nCould you also make sure that the code is properly formatted in the example and that it can be run to fully reproduce the error? " ]
1,707
1,708
1,708
NONE
null
### System Info - `transformers` version: 4.37.2 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.0 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ` model = EncoderDecoderModel(encoder=encoder, decoder=decoder) model.eval() enc_tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased") dec_tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2") with torch.no_grad(): inputs = enc_tokenizer("I like bananas", return_tensors="pt") outputs = model.generate(**inputs, max_new_tokens=200) print(dec_tokenizer.batch_decode(**outputs)) ` ### Expected behavior Not able to perform model.generate with bert base and gpt2 from huggingface
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29020/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/29019
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29019/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29019/comments
https://api.github.com/repos/huggingface/transformers/issues/29019/events
https://github.com/huggingface/transformers/pull/29019
2,134,369,847
PR_kwDOCUB6oc5m3HFW
29,019
Update important model list
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29019). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I hesitated but Mistral and Llama are very close in terms of architecture and support so I didn't find it necessary; do you think it's different enough to warrant being added?", "mistral uses sliding window attention which is not the case in llama, for that I thought maybe better to test it but I think all good to leave it as is as well" ]
1,707
1,708
1,708
MEMBER
null
Adds LLaMa to the important models to test on each commit
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29019/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29019", "html_url": "https://github.com/huggingface/transformers/pull/29019", "diff_url": "https://github.com/huggingface/transformers/pull/29019.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29019.patch", "merged_at": 1708079511000 }
https://api.github.com/repos/huggingface/transformers/issues/29018
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29018/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29018/comments
https://api.github.com/repos/huggingface/transformers/issues/29018/events
https://github.com/huggingface/transformers/pull/29018
2,134,309,104
PR_kwDOCUB6oc5m25gs
29,018
Make LogitsProcessor compatible with torch.compile
{ "login": "zucchini-nlp", "id": 100715397, "node_id": "U_kgDOBgDLhQ", "avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zucchini-nlp", "html_url": "https://github.com/zucchini-nlp", "followers_url": "https://api.github.com/users/zucchini-nlp/followers", "following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zucchini-nlp/orgs", "repos_url": "https://api.github.com/users/zucchini-nlp/repos", "events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zucchini-nlp/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29018). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@gante I went through the comments and fixed where possible. I am wondering if it is a good idea to add warnings as I did? Maybe there is a better way to do it, so that the users do not see lots of unrelated warning. I guess not everyone will use `compile` to generate ", "@gante Ready to review. I fixed tests and the generation utils to work with \"cur_len\", everything runs successfully in my machine. " ]
1,707
1,708
null
MEMBER
null
# What does this PR do? Small part of the issue #28981 . This PR makes sure that Logits Processor and Stopping Criteria are compatible with `torch.compile` when `fullgraph=True`. The changes were tested with dummy inputs and logits and also with Llama. For now only the Processors used in `generate` were checked, those that are used in bark/whisper models can be checked later if needed. The below processors are not compatible, exceptions will be added later: - EncoderNoRepeatNGramLogitsProcessor and NoRepeatNGramLogitsProcessor -> tries to get a value from dict, which is input dependent - PrefixConstrainedLogitsProcessor -> relies on user provided functions, which mostly probably are also input dependent - SequenceBiasLogitsProcessor will not work at the same time with NoBadWordsProcessor, only one needs to be defined -> both call the same `_prepare_bias_variables`, which leads to recompiling it the second time we call with new arguments. Can be fixed if we either merge them into one processor or separate as two distinct. - UnbatchedClassifierFreeGuidanceLogitsProcessor -> calls the model forward, current Llama with sdpa failed due to providing not None attention_mask. - MaxTimeCriteria -> uses built-in time.time() FYI @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29018/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29018", "html_url": "https://github.com/huggingface/transformers/pull/29018", "diff_url": "https://github.com/huggingface/transformers/pull/29018.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29018.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29017
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29017/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29017/comments
https://api.github.com/repos/huggingface/transformers/issues/29017/events
https://github.com/huggingface/transformers/issues/29017
2,134,307,588
I_kwDOCUB6oc5_NvME
29,017
Error: The selected decoder is not prepared for the encoder hidden states to be passed.
{ "login": "nikhilajoshy", "id": 37141775, "node_id": "MDQ6VXNlcjM3MTQxNzc1", "avatar_url": "https://avatars.githubusercontent.com/u/37141775?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikhilajoshy", "html_url": "https://github.com/nikhilajoshy", "followers_url": "https://api.github.com/users/nikhilajoshy/followers", "following_url": "https://api.github.com/users/nikhilajoshy/following{/other_user}", "gists_url": "https://api.github.com/users/nikhilajoshy/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikhilajoshy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikhilajoshy/subscriptions", "organizations_url": "https://api.github.com/users/nikhilajoshy/orgs", "repos_url": "https://api.github.com/users/nikhilajoshy/repos", "events_url": "https://api.github.com/users/nikhilajoshy/events{/privacy}", "received_events_url": "https://api.github.com/users/nikhilajoshy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @nikhilajoshy, thanks for raising an issue! \r\n\r\nCould you please provide a full traceback of the error encountered? \r\n\r\nPlease note that the not all encoder-decoder pairs are compatible - we don't guarantee all possible pairings can be loaded and run. \r\n\r\n> Not able to load any EncoderDecoderModel using any hugging face transformer models\r\n\r\nCould you clarify this comment. Are you unable to load any encoder-decoder pairing, or just `\"google/mt5-small\"` and `\"facebook/opt-350m\"`? Note, this is likely happening because `mt5` is a encoder-decoder and loading with `AutoModel` is loading both the encoder and decoder. I'd suggest using `MT5EncoderModel` to load the model instead. ", "@amyeroberts it was a problem with the decoder only models. It works with gpt2" ]
1,707
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.37.2 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.0 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.26.1 - Accelerate config: not found - PyTorch version (GPU?): 2.2.0+cpu (False) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction def init_enc_dec(enc_model_name: str = "google/mt5-small", dec_model_name: str = "facebook/opt-350m"): config_encoder = AutoConfig.from_pretrained(enc_model_name) config_encoder.is_encoder_decoder = False config_encoder.add_cross_attention = False config_encoder.is_decoder = False config_encoder.num_decoder_layers = 0 config_decoder = AutoConfig.from_pretrained(dec_model_name) config_decoder.add_cross_attention = True config_decoder.is_decoder = True encoder = AutoModel.from_pretrained(enc_model_name, config=config_encoder) decoder = AutoModel.from_pretrained(dec_model_name, config=config_decoder) model = EncoderDecoderModel(encoder=encoder, decoder=decoder) ### Expected behavior Not able to load any EncoderDecoderModel using any hugging face transformer models
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29017/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29017/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/29016
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29016/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29016/comments
https://api.github.com/repos/huggingface/transformers/issues/29016/events
https://github.com/huggingface/transformers/issues/29016
2,133,884,761
I_kwDOCUB6oc5_MH9Z
29,016
Trainer: Functions to inspect model and optimizer status
{ "login": "yqy2001", "id": 55196500, "node_id": "MDQ6VXNlcjU1MTk2NTAw", "avatar_url": "https://avatars.githubusercontent.com/u/55196500?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yqy2001", "html_url": "https://github.com/yqy2001", "followers_url": "https://api.github.com/users/yqy2001/followers", "following_url": "https://api.github.com/users/yqy2001/following{/other_user}", "gists_url": "https://api.github.com/users/yqy2001/gists{/gist_id}", "starred_url": "https://api.github.com/users/yqy2001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yqy2001/subscriptions", "organizations_url": "https://api.github.com/users/yqy2001/orgs", "repos_url": "https://api.github.com/users/yqy2001/repos", "events_url": "https://api.github.com/users/yqy2001/events{/privacy}", "received_events_url": "https://api.github.com/users/yqy2001/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @muellerzr @pacman100 " ]
1,707
1,707
null
CONTRIBUTOR
null
### Feature request In huggingface Trainer, are there any functions to inspect model and optimizer status? such as, how many parameters require grad, learning rate of each parameter, which optimizer group each parameter belong... I didn't find any related function in Trainer, and I know implementing it by myself is easy, but I just want to know whether such functions already exist. ### Motivation Such inspection is useful for correcting training. ### Your contribution I propose a question.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29016/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29015
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29015/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29015/comments
https://api.github.com/repos/huggingface/transformers/issues/29015/events
https://github.com/huggingface/transformers/pull/29015
2,133,811,967
PR_kwDOCUB6oc5m1LxC
29,015
Support resuming of deepspeed + Lora + offloading
{ "login": "thepowerfuldeez", "id": 11796343, "node_id": "MDQ6VXNlcjExNzk2MzQz", "avatar_url": "https://avatars.githubusercontent.com/u/11796343?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thepowerfuldeez", "html_url": "https://github.com/thepowerfuldeez", "followers_url": "https://api.github.com/users/thepowerfuldeez/followers", "following_url": "https://api.github.com/users/thepowerfuldeez/following{/other_user}", "gists_url": "https://api.github.com/users/thepowerfuldeez/gists{/gist_id}", "starred_url": "https://api.github.com/users/thepowerfuldeez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thepowerfuldeez/subscriptions", "organizations_url": "https://api.github.com/users/thepowerfuldeez/orgs", "repos_url": "https://api.github.com/users/thepowerfuldeez/repos", "events_url": "https://api.github.com/users/thepowerfuldeez/events{/privacy}", "received_events_url": "https://api.github.com/users/thepowerfuldeez/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @pacman100 @younesbelkada ", "Could you please provide any updates on this PR?", "Sure @thepowerfuldeez ! \r\n@pacman100 is currently working on fixing issues with repsect to deepspeed and providing working scripts that you can run out of the box: https://github.com/huggingface/peft/pull/1489 we'll review this PR asap with sourab!", "Hello, this has been already fixed in https://github.com/huggingface/transformers/pull/28746. I ran experiments today and can confirm resuming training when using PEFT+DeepSpeed works" ]
1,707
1,708
null
NONE
null
This PR is a upstream version of @kazemf78 PR to support resuming of Lora training when using deepspeed. Without setting `load_module_strict=False` as a default, checkpoint is not loaded due to Lora not containing all weights, throwing an error `deepspeed resume Error(s) in loading state_dict for PeftModelForCausalLM` Related discussion: https://github.com/huggingface/peft/issues/746
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29015/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29015", "html_url": "https://github.com/huggingface/transformers/pull/29015", "diff_url": "https://github.com/huggingface/transformers/pull/29015.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29015.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29013
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29013/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29013/comments
https://api.github.com/repos/huggingface/transformers/issues/29013/events
https://github.com/huggingface/transformers/pull/29013
2,133,666,705
PR_kwDOCUB6oc5m0rzi
29,013
DeformableDetrModel support fp16
{ "login": "DonggeunYu", "id": 17740653, "node_id": "MDQ6VXNlcjE3NzQwNjUz", "avatar_url": "https://avatars.githubusercontent.com/u/17740653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DonggeunYu", "html_url": "https://github.com/DonggeunYu", "followers_url": "https://api.github.com/users/DonggeunYu/followers", "following_url": "https://api.github.com/users/DonggeunYu/following{/other_user}", "gists_url": "https://api.github.com/users/DonggeunYu/gists{/gist_id}", "starred_url": "https://api.github.com/users/DonggeunYu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DonggeunYu/subscriptions", "organizations_url": "https://api.github.com/users/DonggeunYu/orgs", "repos_url": "https://api.github.com/users/DonggeunYu/repos", "events_url": "https://api.github.com/users/DonggeunYu/events{/privacy}", "received_events_url": "https://api.github.com/users/DonggeunYu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Actually, one thing we'll need to add is a test e.g. [like here for MT5](https://github.com/huggingface/transformers/blob/1ecf5f7c982d761b4daaa96719d162c324187c64/tests/models/mt5/test_modeling_mt5.py#L424). \r\n\r\nFor the quality checks, running `make fix-copies` and pushing the changes should resolve the issues. You make need to make some additional adjustments to other modeling files to properly reflect the changes.", "@amyeroberts\r\nI took all of your feedback on board.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29013). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? This PR for DeformableDetrModel support fp16. https://github.com/huggingface/transformers/issues/29011 ## Who can review? @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29013/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29013/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29013", "html_url": "https://github.com/huggingface/transformers/pull/29013", "diff_url": "https://github.com/huggingface/transformers/pull/29013.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29013.patch", "merged_at": 1708000269000 }
https://api.github.com/repos/huggingface/transformers/issues/29012
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29012/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29012/comments
https://api.github.com/repos/huggingface/transformers/issues/29012/events
https://github.com/huggingface/transformers/pull/29012
2,133,655,466
PR_kwDOCUB6oc5m0pT7
29,012
Add LLaVa 1.6
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29012). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,708
null
CONTRIBUTOR
null
# What does this PR do? This PR adds the new LLaVa 1.6 model. To do: - [x] not sure how batched generation works - [x] make `image_sizes` a tensor instead of a list - [x] make sure llava 1.5 still works
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29012/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29012/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29012", "html_url": "https://github.com/huggingface/transformers/pull/29012", "diff_url": "https://github.com/huggingface/transformers/pull/29012.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29012.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29011
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29011/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29011/comments
https://api.github.com/repos/huggingface/transformers/issues/29011/events
https://github.com/huggingface/transformers/issues/29011
2,133,651,104
I_kwDOCUB6oc5_LO6g
29,011
Need to DeformableDetrModel support fp16
{ "login": "DonggeunYu", "id": 17740653, "node_id": "MDQ6VXNlcjE3NzQwNjUz", "avatar_url": "https://avatars.githubusercontent.com/u/17740653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DonggeunYu", "html_url": "https://github.com/DonggeunYu", "followers_url": "https://api.github.com/users/DonggeunYu/followers", "following_url": "https://api.github.com/users/DonggeunYu/following{/other_user}", "gists_url": "https://api.github.com/users/DonggeunYu/gists{/gist_id}", "starred_url": "https://api.github.com/users/DonggeunYu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DonggeunYu/subscriptions", "organizations_url": "https://api.github.com/users/DonggeunYu/orgs", "repos_url": "https://api.github.com/users/DonggeunYu/repos", "events_url": "https://api.github.com/users/DonggeunYu/events{/privacy}", "received_events_url": "https://api.github.com/users/DonggeunYu/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,707
1,707
null
CONTRIBUTOR
null
### Feature request Need to DeformableDetrModel using fp16. ~~~ from transformers import AutoImageProcessor from trf.models import DeformableDetrModel from PIL import Image import torch import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr") model = DeformableDetrModel.from_pretrained("SenseTime/deformable-detr") model = model.cuda().half() inputs = image_processor(images=image, return_tensors="pt") inputs["pixel_values"] = inputs["pixel_values"].cuda().half() inputs["pixel_mask"] = inputs["pixel_mask"].cuda() outputs = model(**inputs) print(outputs) ~~~ Output: ~~~ File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.10/dist-packages/transformers/models/deformable_detr/modeling_deformable_detr.py", line 699, in forward output = MultiScaleDeformableAttentionFunction.apply( File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 539, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "/usr/local/lib/python3.10/dist-packages/transformers/models/deformable_detr/modeling_deformable_detr.py", line 81, in forward output = MultiScaleDeformableAttention.ms_deform_attn_forward( RuntimeError: "ms_deform_attn_forward_cuda" not implemented for 'Half' ~~~ ### Motivation FP16 for speed-up training. ### Your contribution Yes. I'm planning to submit a PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29011/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29010
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29010/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29010/comments
https://api.github.com/repos/huggingface/transformers/issues/29010/events
https://github.com/huggingface/transformers/issues/29010
2,133,599,341
I_kwDOCUB6oc5_LCRt
29,010
KV Cache Size Issue during Inference
{ "login": "gopikrishnajha", "id": 96072995, "node_id": "U_kgDOBbn1Iw", "avatar_url": "https://avatars.githubusercontent.com/u/96072995?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gopikrishnajha", "html_url": "https://github.com/gopikrishnajha", "followers_url": "https://api.github.com/users/gopikrishnajha/followers", "following_url": "https://api.github.com/users/gopikrishnajha/following{/other_user}", "gists_url": "https://api.github.com/users/gopikrishnajha/gists{/gist_id}", "starred_url": "https://api.github.com/users/gopikrishnajha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gopikrishnajha/subscriptions", "organizations_url": "https://api.github.com/users/gopikrishnajha/orgs", "repos_url": "https://api.github.com/users/gopikrishnajha/repos", "events_url": "https://api.github.com/users/gopikrishnajha/events{/privacy}", "received_events_url": "https://api.github.com/users/gopikrishnajha/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @ArthurZucker ", "Hey! That's the normal behavior for auto regressive transformers. The key value states's shape decreases, while the actual cache increases in size. \nI don't know how you debugged, but the prefill (first forward) will be big, then each new forward will only add 1 to the sequence length dimension. \nHave never seen this leading to a \"decrease\" shape of the actual key_states and value_states", "> Hey! That's the normal behavior for auto regressive transformers. The key value states's shape decreases, while the actual cache increases in size. I don't know how you debugged, but the prefill (first forward) will be big, then each new forward will only add 1 to the sequence length dimension. Have never seen this leading to a \"decrease\" shape of the actual key_states and value_states\r\n\r\nThat's what I was expecting too. But the size of ```past_key_value``` variable is decreasing in the sequence length dimension with every new forward call. \r\n\r\n```past_key_value``` variable should grow in size with every forward call, right? Or is ```past_key_value``` not the kv cache?", "I don't know what you are checking, but the `key_states` used to compute the attention score, so after you update the `past_key_value` should grow. That is exactly why you cannot compile it: the size of the key and values grows", "> I don't know what you are checking, but the `key_states` used to compute the attention score, so after you update the `past_key_value` should grow. That is exactly why you cannot compile it: the size of the key and values grows\r\n\r\nIn the ```modeling_mistral.py``` I am adding the following prints (after ```past_key_value.update```) for knowing the current size.\r\n\r\n```\r\nif past_key_value is not None:\r\n cache_kwargs = {\"sin\": sin, \"cos\": cos} # Specific to RoPE models\r\n key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)\r\nif (self.layer_idx == 0):\r\n print(len(past_key_value)) #always prints 1\r\n print(past_key_value[0][0].shape, past_key_value[0][1].shape)\r\n print(key_states.shape, value_states.shape)\r\n```\r\nThe output of first few forward calls is following.\r\n\r\n1\r\ntorch.Size([10, 8, 75, 128]) torch.Size([10, 8, 75, 128])\r\ntorch.Size([10, 8, 75, 128]) torch.Size([10, 8, 75, 128])\r\n1\r\ntorch.Size([10, 8, 58, 128]) torch.Size([10, 8, 58, 128])\r\ntorch.Size([10, 8, 58, 128]) torch.Size([10, 8, 58, 128])\r\n1\r\ntorch.Size([10, 8, 51, 128]) torch.Size([10, 8, 51, 128])\r\ntorch.Size([10, 8, 51, 128]) torch.Size([10, 8, 51, 128])\r\n1\r\ntorch.Size([10, 8, 45, 128]) torch.Size([10, 8, 45, 128])\r\ntorch.Size([10, 8, 45, 128]) torch.Size([10, 8, 45, 128])\r\n1\r\ntorch.Size([10, 8, 43, 128]) torch.Size([10, 8, 43, 128])\r\ntorch.Size([10, 8, 43, 128]) torch.Size([10, 8, 43, 128])\r\n\r\nYou can see that the cache size is decreasing from the ```seq_len``` dimension.\r\n\r\nAlso, when I am confirming the DRAM usage using ```htop```, it seems to be decreasing with each iteration.", "Well this can also be due to beam search. After a few iterations you have less beams for example. There are various factors but recommending you to use greedy generation ", "> Well this can also be due to beam search. After a few iterations you have less beams for example. 
There are various factors but recommending you to use greedy generation\r\n\r\nI am already using greedy generation.", "If you don't provide a reproduce there's no way I can know that 🤗 could you share a small snippet of how you are calling the model? " ]
1,707
1,708
null
NONE
null
This is w.r.t inference using models like mistral7b or llama. In my understanding, KV cache size should grow as we process more tokens, however I see in the code that it shrinks as more tokens are processed. For example, in transformers/src/transformers/models/mistral/modeling_mistral.py, see the following code. ```key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)``` The cache ```past_key_value``` instead of growing, shrinks in size. Initial size is ```[batch_size, num_heads, seq_len, head_dim]``` and with increasing iterations, while ```batch_size, num_heads and head_dim``` remain the same, ```seq_len``` decreases. This results in a shrinking cache. Can anyone explain why is the cache shrinking instead of growing in size?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29010/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29009
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29009/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29009/comments
https://api.github.com/repos/huggingface/transformers/issues/29009/events
https://github.com/huggingface/transformers/pull/29009
2,133,392,507
PR_kwDOCUB6oc5mzw8O
29,009
FIX [`Trainer` / tags]: Fix trainer + tags when users do not pass `"tags"` to `trainer.push_to_hub()`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,707
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? As per title - and fixes: https://github.com/hiyouga/LLaMA-Factory/pull/2474#issuecomment-1941603142 raised by @hiyouga Indeed, we should always push tags if there are any that are saved on the model. Currently the logic is wrong, as it pushes the tags only if `"tags"` is passed to `trainer.push_to_hub()`, in fact we should always push the tag if `model.add_model_tags` has been called, regardless if `tags` is passed or not in `push_to_hub` cc @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29009/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29009/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29009", "html_url": "https://github.com/huggingface/transformers/pull/29009", "diff_url": "https://github.com/huggingface/transformers/pull/29009.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29009.patch", "merged_at": 1707951395000 }
https://api.github.com/repos/huggingface/transformers/issues/29008
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29008/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29008/comments
https://api.github.com/repos/huggingface/transformers/issues/29008/events
https://github.com/huggingface/transformers/pull/29008
2,133,272,790
PR_kwDOCUB6oc5mzXfk
29,008
Add CrystalCoder Model
{ "login": "TianhuaTao", "id": 9389466, "node_id": "MDQ6VXNlcjkzODk0NjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9389466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TianhuaTao", "html_url": "https://github.com/TianhuaTao", "followers_url": "https://api.github.com/users/TianhuaTao/followers", "following_url": "https://api.github.com/users/TianhuaTao/following{/other_user}", "gists_url": "https://api.github.com/users/TianhuaTao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TianhuaTao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TianhuaTao/subscriptions", "organizations_url": "https://api.github.com/users/TianhuaTao/orgs", "repos_url": "https://api.github.com/users/TianhuaTao/repos", "events_url": "https://api.github.com/users/TianhuaTao/events{/privacy}", "received_events_url": "https://api.github.com/users/TianhuaTao/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @TianhuaTao, thanks for opening this PR! \r\n\r\nThe easiest and recommended way to make a model available in `transformers` is to add the modeling code directly on the hub: https://huggingface.co./docs/transformers/custom_models. We have as much support there as we can (let us know if anything isn't working 🤗 !)\r\n\r\nThis means, once working, the model can be found and used immediately without having to go through the PR process. We find this is a lot quicker as the bar for adding code into the library is high due to the maintenance cost of every new model, and so reviews take quite a while.", "Hi @amyeroberts , thanks for your prompt reply! \r\nWe already implemented CrystalCoder as a custom model which is published here (https://huggingface.co./LLM360/CrystalCoder). However, we also want to submit CrystalCoder to the HF leaderboard which requires CrystalCoder to live natively in the transformers library --- thus this PR is created.\r\nHelp and guidance would be much appreciated", "@TianhuaTao OK, understood! Happy to hear you were able to add the remote code for the model. Let us know when this PR is ready for review 🤗 " ]
1,707
1,708
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This is the implementation for [CrystalCoder](https://huggingface.co./LLM360/CrystalCoder) and [CrystalChat](https://huggingface.co./LLM360/CrystalChat) model by LLM360. There is an additional example script on running the model at `src/transformers/models/crystalcoder/crystalchat_example.py` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29008/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29008", "html_url": "https://github.com/huggingface/transformers/pull/29008", "diff_url": "https://github.com/huggingface/transformers/pull/29008.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29008.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29007
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29007/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29007/comments
https://api.github.com/repos/huggingface/transformers/issues/29007/events
https://github.com/huggingface/transformers/issues/29007
2,133,162,466
I_kwDOCUB6oc5_JXni
29,007
Many checkpoints are outdated (torch.save'd with torch < 1.6) and don't support mmap
{ "login": "thiagocrepaldi", "id": 5469809, "node_id": "MDQ6VXNlcjU0Njk4MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5469809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thiagocrepaldi", "html_url": "https://github.com/thiagocrepaldi", "followers_url": "https://api.github.com/users/thiagocrepaldi/followers", "following_url": "https://api.github.com/users/thiagocrepaldi/following{/other_user}", "gists_url": "https://api.github.com/users/thiagocrepaldi/gists{/gist_id}", "starred_url": "https://api.github.com/users/thiagocrepaldi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thiagocrepaldi/subscriptions", "organizations_url": "https://api.github.com/users/thiagocrepaldi/orgs", "repos_url": "https://api.github.com/users/thiagocrepaldi/repos", "events_url": "https://api.github.com/users/thiagocrepaldi/events{/privacy}", "received_events_url": "https://api.github.com/users/thiagocrepaldi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @thiagocrepaldi, thanks for raising this issue! \r\n\r\nUnfortunately, it's simply not possible for us to convert all checkpoints to be compatible. There are currently more than 800k models listed on the hub, as well as many private models and models which haven't been uploaded. Backwards compatibility is important in the library and although our currently supported version of pytorch is >= 1.11, enforcing this would likely break many things for many users. \r\n\r\nOne option would be to open PRs on model's on the hub with the converted weights and an explanation of the advantages. It would then be up to the repo's owner whether or not they would like to update the checkpoints. Care would need to be taken to make sure the conversions are correct and to avoid spamming users. \r\n\r\nI'd suggest instead just doing this conversion on the fly, as and when you need it. \r\n\r\nNote: the default serialization of weights for models is now safetensors, and we use the safetensors library to open. \r\n\r\ncc @Narsil In case I got any of the facts wrong here of there's anything else to add. ", "Hi @amyeroberts, \r\n\r\n* Indeed the number of models might be an issue for such conversion, but if that is something that huggingface servers can handle, Backward Compatibility wouldn't be a problem. \r\n* A new file with a different name could be created, say `pytorch_model.bin` -> `pytorch_model_mmap.bin` for example\r\n* The numbers would match because we would essentialy do `torch.save(torch.load(f), new_file))` without any possibility of changing the content of the file\r\n* Doing the conversion on the fly is possible, bu defeats the purpose of having a hub with pretrained weights :)", "@thiagocrepaldi \r\n\r\n> Indeed the number of models might be an issue for such conversion, but if that is something that huggingface servers can handle, Backward Compatibility wouldn't be a problem.\r\n\r\nThat unfortunately isn't the case. As mentioned in my previous comment, there are private models on the hub which we don't have access to and model which aren't hosted on the hub at all, which would break. ", "> @thiagocrepaldi\r\n> \r\n> > Indeed the number of models might be an issue for such conversion, but if that is something that huggingface servers can handle, Backward Compatibility wouldn't be a problem.\r\n> \r\n> That unfortunately isn't the case. As mentioned in my previous comment, there are private models on the hub which we don't have access to and model which aren't hosted on the hub at all, which would break.\r\n\r\nThank you. How about the publicly accessible ones? Would it be reasonable to update or add an updated checkpoint alongside the outdated ones?", "@thiagocrepaldi As this isn't something we've had requested or mentioned before, I don't think it's worth setting off a massive scale conversion of weights on the hub, especially as many weights will be compatible (> torch 1.6). If this issue gets a lot of attention from the community then we can reconsider targeting models with a threshold number of downloads. \r\n\r\nIn the meantime, you are welcome to open PRs on affected models on the hub, updating the weights and explaining the advantages of the conversion. This way the repo owners can decide if this is something they want. You could also open an issue with a list of known affected checkpoints and get other members of the community to help in the effort of opening PRs.", "Thank you, I will try proposing individual PRs. 
It would be nice to establish minimal Pytorch versions for the newer models to guarantee performance and compatibility with newer features provided by PyTorch 2.x\r\n\r\nFeel free to close this issue, if needed" ]
1,707
1,708
null
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.3.0a0+git78a84f1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Older checkpoint format (PyTorch < 1.6) didn't support mmap, which is recommended for faster loading and, especially, for loading large models in memory. ```python import os from huggingface_hub import snapshot_download import torch from torch.serialization import _open_file_like, _is_zipfile def is_outdated_torch_load(f): with _open_file_like(f, 'rb') as opened_file: if _is_zipfile(opened_file): return False return True def update_torch_checkpoint(f): res = torch.load(f) torch.save(res, f) # Download known outdated checkpoint model_name = "sshleifer/tiny-gpt2" checkpoint_name = "pytorch_model.bin" ret = snapshot_download(repo_id=model_name, allow_patterns=checkpoint_name, local_dir_use_symlinks=True, local_dir="./") # Load and assert it is outdated (aka torch.save used from pytorch <= 1.5) checkpoint_path = os.path.join(ret, checkpoint_name) assert is_outdated_torch_load(checkpoint_path), f"{checkpoint_path} is outdated!" # Refresh checkpoint with PyTorch >= 1.6 and assert it is NOT outdated anymore update_torch_checkpoint(checkpoint_path) assert not is_outdated_torch_load(checkpoint_path), f"{checkpoint_path} is NOT outdated!" ``` ### Expected behavior In order to support mmap which is faster and support larger models without OOM, all checkpoints should be refreshed with torch.save from Pytorch >= 1.6
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29007/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29007/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29006
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29006/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29006/comments
https://api.github.com/repos/huggingface/transformers/issues/29006/events
https://github.com/huggingface/transformers/issues/29006
2,133,043,866
I_kwDOCUB6oc5_I6qa
29,006
load_state_dict doesnt support torch._subclasses.fake_tensor.FakeTensorMode
{ "login": "thiagocrepaldi", "id": 5469809, "node_id": "MDQ6VXNlcjU0Njk4MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5469809?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thiagocrepaldi", "html_url": "https://github.com/thiagocrepaldi", "followers_url": "https://api.github.com/users/thiagocrepaldi/followers", "following_url": "https://api.github.com/users/thiagocrepaldi/following{/other_user}", "gists_url": "https://api.github.com/users/thiagocrepaldi/gists{/gist_id}", "starred_url": "https://api.github.com/users/thiagocrepaldi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thiagocrepaldi/subscriptions", "organizations_url": "https://api.github.com/users/thiagocrepaldi/orgs", "repos_url": "https://api.github.com/users/thiagocrepaldi/repos", "events_url": "https://api.github.com/users/thiagocrepaldi/events{/privacy}", "received_events_url": "https://api.github.com/users/thiagocrepaldi/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @thiagocrepaldi, thanks for raising this issue! \r\n\r\nI'm going to cc in @Narsil, the king of safetensors here. \r\n\r\nIf you want to be able to create an empty model, you can use [accelerate's `init_empty_weights` utility](https://huggingface.co./docs/accelerate/v0.11.0/en/big_modeling): \r\n\r\n```py\r\nfrom accelerate import init_empty_weights\r\n\r\nwith init_empty_weights():\r\n my_model = ModelClass(...)\r\n```\r\n\r\n", "> Hi @thiagocrepaldi, thanks for raising this issue!\r\n> \r\n> I'm going to cc in @Narsil, the king of safetensors here.\r\n> \r\n> If you want to be able to create an empty model, you can use [accelerate's `init_empty_weights` utility](https://huggingface.co./docs/accelerate/v0.11.0/en/big_modeling):\r\n> \r\n> ```python\r\n> from accelerate import init_empty_weights\r\n> \r\n> with init_empty_weights():\r\n> my_model = ModelClass(...)\r\n> ```\r\n\r\nThanks, I will look into it. This API is specific to transformers, whereas the `with torch._subclasses.fake_tensor.FakeTensorMode()` should work with any model, transformers or otherwise. \r\n\r\nIt is because of this generality that we feel this is an important feature to support on transformers. @ezyang from PyTorch project might have an insight on this\r\n", "This is also tracked by https://github.com/pytorch/pytorch/issues/106732 on PyTorch\r\n\r\nPyTorch's `torch.load` fixes `FakeTensorMode` support with #119990, but the issue above will still repro. Note that the fix for PyTorch side also uses the `torch._subclasses.fake_tensor.FakeTensorMode` like my proposed/rejected PR for HF https://github.com/huggingface/safetensors/pull/318", "I wouldn't have accepted https://github.com/huggingface/safetensors/pull/318 lol. But if the relevant library in HF/safetensors is directly manipulating storages, some sort of PR will be necessary", "`get_tensor` is a [Rust implementation](https://github.com/huggingface/safetensors/blob/08db34094e9e59e2f9218f2df133b7b4aaff5a99/bindings/python/src/lib.rs#L510) which instantiates a `Storage` of their own\r\n\r\n```rust\r\n pub fn get_tensor(&self, name: &str) -> PyResult<PyObject> {\r\n let info = self.metadata.info(name).ok_or_else(|| {\r\n SafetensorError::new_err(format!(\"File does not contain tensor {name}\",))\r\n })?;\r\n // let info = tensors.get(name).ok_or_else(|| {\r\n // SafetensorError::new_err(format!(\"File does not contain tensor {name}\",))\r\n // })?;\r\n\r\n match &self.storage.as_ref() {\r\n Storage::Mmap(mmap) => {\r\n let data =\r\n &mmap[info.data_offsets.0 + self.offset..info.data_offsets.1 + self.offset];\r\n\r\n let array: PyObject = Python::with_gil(|py| PyByteArray::new(py, data).into_py(py));\r\n\r\n create_tensor(\r\n &self.framework,\r\n info.dtype,\r\n &info.shape,\r\n array,\r\n &self.device,\r\n )\r\n }\r\n Storage::TorchStorage(storage) => {\r\n Python::with_gil(|py| -> PyResult<PyObject> {\r\n let torch = get_module(py, &TORCH_MODULE)?;\r\n let dtype: PyObject = get_pydtype(torch, info.dtype, false)?;\r\n let torch_uint8: PyObject = get_pydtype(torch, Dtype::U8, false)?;\r\n let kwargs = [(intern!(py, \"dtype\"), torch_uint8)].into_py_dict(py);\r\n let view_kwargs = [(intern!(py, \"dtype\"), dtype)].into_py_dict(py);\r\n let shape = info.shape.to_vec();\r\n let shape: PyObject = shape.into_py(py);\r\n\r\n let start = (info.data_offsets.0 + self.offset) as isize;\r\n let stop = (info.data_offsets.1 + self.offset) as isize;\r\n let slice = PySlice::new(py, start, stop, 1);\r\n let storage: &PyObject = storage\r\n .get(py)\r\n .ok_or_else(|| 
SafetensorError::new_err(\"Could not find storage\"))?;\r\n let storage: &PyAny = storage.as_ref(py);\r\n let storage_slice = storage\r\n .getattr(intern!(py, \"__getitem__\"))?\r\n .call1((slice,))?;\r\n\r\n let sys = PyModule::import(py, intern!(py, \"sys\"))?;\r\n let byteorder: String = sys.getattr(intern!(py, \"byteorder\"))?.extract()?;\r\n\r\n let mut tensor = torch\r\n .getattr(intern!(py, \"asarray\"))?\r\n .call((storage_slice,), Some(kwargs))?\r\n .getattr(intern!(py, \"view\"))?\r\n .call((), Some(view_kwargs))?;\r\n\r\n if byteorder == \"big\" {\r\n let inplace_kwargs =\r\n [(intern!(py, \"inplace\"), false.into_py(py))].into_py_dict(py);\r\n if info.dtype == Dtype::BF16 {\r\n let torch_f16: PyObject = get_pydtype(torch, Dtype::F16, false)?;\r\n tensor = tensor.getattr(intern!(py, \"to\"))?.call(\r\n (),\r\n Some([(intern!(py, \"dtype\"), torch_f16)].into_py_dict(py)),\r\n )?;\r\n }\r\n\r\n let numpy = tensor\r\n .getattr(intern!(py, \"numpy\"))?\r\n .call0()?\r\n .getattr(\"byteswap\")?\r\n .call((), Some(inplace_kwargs))?;\r\n tensor = torch.getattr(intern!(py, \"from_numpy\"))?.call1((numpy,))?;\r\n\r\n if info.dtype == Dtype::BF16 {\r\n let torch_bf16: PyObject = get_pydtype(torch, Dtype::BF16, false)?;\r\n tensor = tensor.getattr(intern!(py, \"to\"))?.call(\r\n (),\r\n Some([(intern!(py, \"dtype\"), torch_bf16)].into_py_dict(py)),\r\n )?;\r\n }\r\n }\r\n\r\n tensor = tensor.getattr(intern!(py, \"reshape\"))?.call1((shape,))?;\r\n if self.device != Device::Cpu {\r\n let device: PyObject = self.device.clone().into_py(py);\r\n let kwargs = PyDict::new(py);\r\n tensor = tensor\r\n .getattr(intern!(py, \"to\"))?\r\n .call((device,), Some(kwargs))?;\r\n }\r\n Ok(tensor.into_py(py))\r\n // torch.asarray(storage[start + n : stop + n], dtype=torch.uint8).view(dtype=dtype).reshape(shape)\r\n })\r\n }\r\n }\r\n }\r\n```", "Well big rip. Then yes, I agree that I would route the code to avoid calling into Rust when fake mode is enabled." ]
1,707
1,708
null
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.3.0a0+git78a84f1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When PyTorch's `FakeTensorMode` is active, the underlying storage is changed to be `UntypedStorage` as a way to not really allocate the memory for the parameters. As a consequence, transformers's `get_tensor` failed with `ValueError: could not determine the shape of object type 'torch.storage.UntypedStorage'` ```python from torch._subclasses import fake_tensor import transformers fake_mode = fake_tensor.FakeTensorMode(allow_non_fake_inputs=False) with fake_mode: fake_model = transformers.AutoModel.from_pretrained("sshleifer/tiny-gpt2") ``` Error: ```bash Loading checkpoint shards: 0%| | 0/19 [00:00<?, ?it/s] Traceback (most recent call last): File "/opt/pytorch/test_mixtral.py", line 9, in <module> model = AutoModelForCausalLM.from_pretrained(model_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/ptca/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained return model_class.from_pretrained( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/ptca/lib/python3.11/site-packages/transformers/modeling_utils.py", line 3694, in from_pretrained ) = cls._load_pretrained_model( ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/ptca/lib/python3.11/site-packages/transformers/modeling_utils.py", line 4079, in _load_pretrained_model state_dict = load_state_dict(shard_file) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/ptca/lib/python3.11/site-packages/transformers/modeling_utils.py", line 510, in load_state_dict return safe_load_file(checkpoint_file) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/conda/envs/ptca/lib/python3.11/site-packages/safetensors/torch.py", line 310, in load_file result[k] = f.get_tensor(k) ^^^^^^^^^^^^^^^ ValueError: could not determine the shape of object type 'torch.storage.UntypedStorage' ``` ### Expected behavior `transformers` `get_tensor` should be able to load fake tensors from a fakefied checkpoint
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29006/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/29005
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29005/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29005/comments
https://api.github.com/repos/huggingface/transformers/issues/29005/events
https://github.com/huggingface/transformers/pull/29005
2,132,933,259
PR_kwDOCUB6oc5myOmB
29,005
Cache: standardize cache interface
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Model correctness depends on #28937, rebasing with its contents", "Closed in place of #29180 (merge conflicts 🤷 )" ]
1,707
1,708
null
MEMBER
null
# What does this PR do? In #27931, where the static cache was introduced, we noticed it had the following hard requirements: 1. The model instance holds the cache, as opposed to being a tensor passed around; 2. Each layer has its own cache, as opposed to a single cache for all layers. This contrasts with previous implementations (e.g. `DynamicCache`). Given the hard requirements of the static cache, and the resulting benefits, this PR migrates the interface for all cache classes so as to match the static cache. As a result, the modeling code becomes slightly simpler 🤗
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29005/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29005", "html_url": "https://github.com/huggingface/transformers/pull/29005", "diff_url": "https://github.com/huggingface/transformers/pull/29005.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29005.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29004
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29004/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29004/comments
https://api.github.com/repos/huggingface/transformers/issues/29004/events
https://github.com/huggingface/transformers/pull/29004
2,132,847,728
PR_kwDOCUB6oc5mx8NS
29,004
fix for custom pipeline configuration
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@Rocketknight1 could you review this one too ?\r\nfixed the tests (i forgor to update my branch :D ) ", "Sure, I'll try to take a look at this one and the pipeline upload one!" ]
1,707
1,708
null
CONTRIBUTOR
null
# What does this PR do? fixes configuration file not pointing at a remote repo with custom pipeline architecture Fixes #28907 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29004/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29004", "html_url": "https://github.com/huggingface/transformers/pull/29004", "diff_url": "https://github.com/huggingface/transformers/pull/29004.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29004.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29003
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29003/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29003/comments
https://api.github.com/repos/huggingface/transformers/issues/29003/events
https://github.com/huggingface/transformers/pull/29003
2,132,689,223
PR_kwDOCUB6oc5mxZzC
29,003
[DO NOT MERGE] Remove big block of code in _from_pretrained()
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> I'm going to see what the CI says.\r\n\r\n🙃 ", "The whole block seems to be there just to throw a warning when you load a tokenizer with the wrong class! " ]
1,707
1,707
1,707
MEMBER
null
I can't figure out what this code is doing, and I suspect we don't need to run it all. I'm going to see what the CI says.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29003/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29003/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29003", "html_url": "https://github.com/huggingface/transformers/pull/29003", "diff_url": "https://github.com/huggingface/transformers/pull/29003.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29003.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/29002
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29002/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29002/comments
https://api.github.com/repos/huggingface/transformers/issues/29002/events
https://github.com/huggingface/transformers/pull/29002
2,132,623,369
PR_kwDOCUB6oc5mxLWu
29,002
[`Doc`] Fix docbuilder - make `BackboneMixin` and `BackboneConfigMixin` importable from `utils`.
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29002). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @ydshieh ", "I'm merging as this is quite urgent for resolving failing tests on `main`" ]
1,707
1,707
1,707
COLLABORATOR
null
# What does this PR do? Several CIs have started having the doc builds failing e.g.: * https://github.com/huggingface/transformers/actions/runs/7881443714/job/21529678872 * https://github.com/huggingface/transformers/actions/runs/7884708184/job/21530274589 On one case rerunning lead to a successful build: * Failed: https://github.com/huggingface/transformers/actions/runs/7881054565/attempts/1 * Passed: https://github.com/huggingface/transformers/actions/runs/7881054565/job/21518984718 However, trying this on other runs wasn't successful e.g.: * https://github.com/huggingface/transformers/actions/runs/7884708184/attempts/1 * https://github.com/huggingface/transformers/actions/runs/7884708184/attempts/2 * https://github.com/huggingface/transformers/actions/runs/7884708184 I haven't been able to identify why these tests have started failing. Obvious dependencies haven't been affected: * `backbones.md` was touched two weeks ago * Latest update to doc-builder was two weeks ago * `backbone_utils` was updated three weeks ago This PR moves the offending backbone classes to be importable from `utils`. This was to match all other `~utils` references in the doc, which have the form `~utils.module.object`. Doing this appears to have resolved the issue: * Two runs on the same commit on github CI have passed - [run 1](https://github.com/huggingface/transformers/actions/runs/7889497104/attempts/1), [run 2](https://github.com/huggingface/transformers/actions/runs/7889497104) * Another run on a different, empty commit passed - [here](https://github.com/huggingface/transformers/actions/runs/7890483169/job/21532741409?pr=29002)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29002/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29002/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29002", "html_url": "https://github.com/huggingface/transformers/pull/29002", "diff_url": "https://github.com/huggingface/transformers/pull/29002.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29002.patch", "merged_at": 1707906563000 }
https://api.github.com/repos/huggingface/transformers/issues/29001
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29001/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29001/comments
https://api.github.com/repos/huggingface/transformers/issues/29001/events
https://github.com/huggingface/transformers/pull/29001
2,132,617,158
PR_kwDOCUB6oc5mxJ97
29,001
Update all references to canonical models
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_29001). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "The failing tests are also failing on `main` and due to the static cache PR. The `tests_pr_documentation_tests` unfortunately cannot run as it exceeds 10 minutes" ]
1,707
1,708
1,708
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29001/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29001", "html_url": "https://github.com/huggingface/transformers/pull/29001", "diff_url": "https://github.com/huggingface/transformers/pull/29001.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29001.patch", "merged_at": 1708067818000 }
https://api.github.com/repos/huggingface/transformers/issues/29000
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29000/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29000/comments
https://api.github.com/repos/huggingface/transformers/issues/29000/events
https://github.com/huggingface/transformers/pull/29000
2,132,557,752
PR_kwDOCUB6oc5mw9CS
29,000
Extend import utils to cover "editable" torch versions
{ "login": "bhack", "id": 1710528, "node_id": "MDQ6VXNlcjE3MTA1Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/1710528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhack", "html_url": "https://github.com/bhack", "followers_url": "https://api.github.com/users/bhack/followers", "following_url": "https://api.github.com/users/bhack/following{/other_user}", "gists_url": "https://api.github.com/users/bhack/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhack/subscriptions", "organizations_url": "https://api.github.com/users/bhack/orgs", "repos_url": "https://api.github.com/users/bhack/repos", "events_url": "https://api.github.com/users/bhack/events{/privacy}", "received_events_url": "https://api.github.com/users/bhack/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Why formatting error we have? it isn't clear from the CI log", "The repository uses double quotes for string literals. You can format your code by running 'make style' (see [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request)).", "Done, who can review this?", "@bhack Thanks for opening this PR! \r\n\r\nFor anyone who's coming to this PR in the future, could you share the torch version in your environment i.e. when running `pip list | grep torch`?\r\n\r\nI've requested a review from @ydshieh as he is the king of all things version and package handling. I'd like to get his opinion before we merge. He's off for a for days, so would next week when this can get merged in. " ]
1,707
1,707
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28999 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29000/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29000", "html_url": "https://github.com/huggingface/transformers/pull/29000", "diff_url": "https://github.com/huggingface/transformers/pull/29000.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29000.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28999
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28999/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28999/comments
https://api.github.com/repos/huggingface/transformers/issues/28999/events
https://github.com/huggingface/transformers/issues/28999
2,132,306,791
I_kwDOCUB6oc5_GGtn
28,999
Pytorch not detected in official "editable" nightly
{ "login": "bhack", "id": 1710528, "node_id": "MDQ6VXNlcjE3MTA1Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/1710528?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhack", "html_url": "https://github.com/bhack", "followers_url": "https://api.github.com/users/bhack/followers", "following_url": "https://api.github.com/users/bhack/following{/other_user}", "gists_url": "https://api.github.com/users/bhack/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhack/subscriptions", "organizations_url": "https://api.github.com/users/bhack/orgs", "repos_url": "https://api.github.com/users/bhack/repos", "events_url": "https://api.github.com/users/bhack/events{/privacy}", "received_events_url": "https://api.github.com/users/bhack/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,707
1,707
null
NONE
null
### System Info PyTorch is not detected in the official editable nightly conda env: https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md#nightly-checkout--pull ```python File "/opt/conda/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1325, in __getattribute__ requires_backends(cls, cls._backends) File "/opt/conda/lib/python3.11/site-packages/transformers/utils/import_utils.py", line 1313, in requires_backends raise ImportError("".join(failed)) ImportError: BertForMaskedLM requires the PyTorch library but it was not found in your environment. Checkout the instructions on the installation page: https://pytorch.org/get-started/locally/ and follow the ones that match your environment. Please note that you may need to restart your runtime after installation. ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://github.com/pytorch/pytorch/issues/119740#issuecomment-1941470322 ### Expected behavior PyTorch needs to be detected in this official dev-env as well
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28999/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28998
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28998/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28998/comments
https://api.github.com/repos/huggingface/transformers/issues/28998/events
https://github.com/huggingface/transformers/issues/28998
2,132,280,613
I_kwDOCUB6oc5_GAUl
28,998
`dataset.map` for tokenization hangs at 60%
{ "login": "sarahpannn", "id": 62582677, "node_id": "MDQ6VXNlcjYyNTgyNjc3", "avatar_url": "https://avatars.githubusercontent.com/u/62582677?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahpannn", "html_url": "https://github.com/sarahpannn", "followers_url": "https://api.github.com/users/sarahpannn/followers", "following_url": "https://api.github.com/users/sarahpannn/following{/other_user}", "gists_url": "https://api.github.com/users/sarahpannn/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahpannn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahpannn/subscriptions", "organizations_url": "https://api.github.com/users/sarahpannn/orgs", "repos_url": "https://api.github.com/users/sarahpannn/repos", "events_url": "https://api.github.com/users/sarahpannn/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahpannn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey, I ran this without issues, not stuck but just super slow. Might be a lot of things from the tokenizer to the dataset to your CPUs" ]
1,707
1,708
null
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.35 - Python version: 3.11.5 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=True) tokenizer.pad_token = tokenizer.eos_token dataset = load_dataset("sarahpann/AMPS") dataset = dataset.map(lambda x: tokenizer(x["problem"] + x["step_by_step"], truncation=True, max_length=2048)) ``` ### Expected behavior The map freezes at 60%. I've also tried a similar thing with batched tokenization, but that freezes at around 93%. There don't seem to be issues with the dataset and weird Unicode characters. A simple for loop method also freezes at 60%. The specific example it freezes at varies by run.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28998/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28997
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28997/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28997/comments
https://api.github.com/repos/huggingface/transformers/issues/28997/events
https://github.com/huggingface/transformers/issues/28997
2,132,254,332
I_kwDOCUB6oc5_F558
28,997
Automatically add tokens when using model.generate()
{ "login": "MikeDean2367", "id": 65744560, "node_id": "MDQ6VXNlcjY1NzQ0NTYw", "avatar_url": "https://avatars.githubusercontent.com/u/65744560?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MikeDean2367", "html_url": "https://github.com/MikeDean2367", "followers_url": "https://api.github.com/users/MikeDean2367/followers", "following_url": "https://api.github.com/users/MikeDean2367/following{/other_user}", "gists_url": "https://api.github.com/users/MikeDean2367/gists{/gist_id}", "starred_url": "https://api.github.com/users/MikeDean2367/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MikeDean2367/subscriptions", "organizations_url": "https://api.github.com/users/MikeDean2367/orgs", "repos_url": "https://api.github.com/users/MikeDean2367/repos", "events_url": "https://api.github.com/users/MikeDean2367/events{/privacy}", "received_events_url": "https://api.github.com/users/MikeDean2367/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @MikeDean2367, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "ok, i'll go to the forums. Thanks!" ]
1,707
1,707
1,707
NONE
null
### Feature request When calling `model.generate()`, I hope that when a certain token is generated, a new token is automatically added after it. For example, when the model generates "The founder of Apple is Steve," I would like it to add a token "Jobs" at the end automatically. After that, the input of the next step is "The founder of Apple is Steve Jobs". Below is a snippet of pseudocode to illustrate this concept: ```python while eos_token: if current_last_token == 'Steve': # let the `rule` generate the next token current_token_seq.append('Jobs') else: # let the `model` generate the next token current_token_seq.append(generate_next_token()) ``` ### Motivation In natural language processing, a single person's name can have multiple representations, including abbreviations, full names, and nicknames. In the context of my application, "Steve Jobs" is a frequently used term. Nonetheless, the model might produce variations such as "Steve jobs" with inconsistent capitalization or "Steven Paul Jobs," using his full name. These inconsistencies present challenges during the evaluation phase of my process. ### Your contribution Do speculative decoding solve it?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28997/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28996
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28996/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28996/comments
https://api.github.com/repos/huggingface/transformers/issues/28996/events
https://github.com/huggingface/transformers/issues/28996
2,132,190,989
I_kwDOCUB6oc5_FqcN
28,996
Starcoder/GPTBigCode has broken beam search when converted to ONNX runtime model
{ "login": "lidingsnyk", "id": 139234713, "node_id": "U_kgDOCEyNmQ", "avatar_url": "https://avatars.githubusercontent.com/u/139234713?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lidingsnyk", "html_url": "https://github.com/lidingsnyk", "followers_url": "https://api.github.com/users/lidingsnyk/followers", "following_url": "https://api.github.com/users/lidingsnyk/following{/other_user}", "gists_url": "https://api.github.com/users/lidingsnyk/gists{/gist_id}", "starred_url": "https://api.github.com/users/lidingsnyk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lidingsnyk/subscriptions", "organizations_url": "https://api.github.com/users/lidingsnyk/orgs", "repos_url": "https://api.github.com/users/lidingsnyk/repos", "events_url": "https://api.github.com/users/lidingsnyk/events{/privacy}", "received_events_url": "https://api.github.com/users/lidingsnyk/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @gante ", "@lidingsnyk GPTBigCode does have an unconventional cache, yes, which may be the underlying cause for the bug you're seeing. However, in terms of code, the error stems from `optimum` (or from the interface between `transformers` and `optimum`, also usually handled on the `optimum` side) 🤗 \r\n\r\nI'd suggest adding and following the issue there. Happy to sort issues on the `transformers` side if they arise along the way!", "@gante Thanks! I'll add a few comments to the [git issue](https://github.com/huggingface/optimum/issues/1475) in `optimum` 5 months ago. Hopefully someone will acknowledge the problem exist." ]
1,707
1,707
null
NONE
null
Not sure if the root cause of the issue is in `huggingface/transformers` or `huggingface/onnxruntime`, but posting it here in case people have more context. Sorry if this ended up being noise for this forum. ### System Info ``` transformers version: 4.37.2 optimum[onnxruntime-gpu]==1.16.2 onnxruntime-gpu==1.17.0 Platform: linux_x86_64 cp310 ubuntu-22.04 Python version: 3.10 Huggingface_hub version: 0.20.3 Safetensors version: 0.4.2 Accelerate version: 0.26.1 Accelerate config: not found PyTorch version (GPU?): 2.1.2 (True) torch-2.1.2-cu118-cp310-cp310-linux_x86_64.whl Tensorflow version (GPU?): not installed Flax version (CPU?/GPU?/TPU?): not installed Jax version: not installed JaxLib version: not installed Using GPU in script?: yes. A100 CUDA_VERSION: 11.8.0 Using distributed or parallel set-up in script?: yes (deepspeed 0.11.2) ``` ### Who can help? @amyeroberts @pacman100 @JingyaHuang @younesbelkada @michaelbenayoun ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction GPTBigCode is listed as a supported model under onnxruntime. After loading a `bigcode/starcoderbase-3b` as `optimum.onnxruntime.modeling_decoder.ORTGPTBigCodeForCausalLM`, inference runs fine for greedy search (`num_beams = 1`) but crashes for beam search (`num_beams > 1`) with the following stacktrace: ``` 2024-02-08 15:37:47 [info ] loaded the model as ORTModel model_type=<class 'optimum.onnxruntime.modeling_decoder.ORTGPTBigCodeForCausalLM'> 2024-02-08 15:37:47 [warning ] switching the tokenizer padding side from 'right' to 'left' for a causal LM 2024-02-08 15:37:51.218582394 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 36 Memcpy nodes are added to the graph main_graph for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message. 2024-02-08 15:37:51.237708474 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf. 2024-02-08 15:37:51.237736553 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments. 2024-02-08 15:37:55 [info ] Successfully created the pipeline device=cuda:0 Traceback (most recent call last): File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/pipelines/text_generation.py", line 219, in __call__ return super().__call__(text_inputs, **kwargs) File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/pipelines/base.py", line 1143, in __call__ outputs = list(final_iterator) File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__ processed = self.infer(item, **self.params) File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/pipelines/base.py", line 1068, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/opt/tools/redacted/__main__/pantscode/redacted/ml_inference/inference_pipeline.py", line 66, in _forward return super()._forward(model_inputs, **generate_kwargs) File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/pipelines/text_generation.py", line 295, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/opt/tools/redacted/ml_deps_torch/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/generation/utils.py", line 1558, in generate return self.beam_search( File "/opt/tools/redacted/ml_deps_transformers/site-packages/transformers/generation/utils.py", line 2940, in beam_search model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) File "/opt/tools/redacted/ml_deps_optimum/site-packages/optimum/onnxruntime/modeling_decoder.py", line 684, in prepare_inputs_for_generation past_length = past_key_values[0].shape[1] AttributeError: 'tuple' object has no attribute 'shape ``` Model is loaded as `ORTModelForCausalLM` instead of `transformers.AutoModelForCausalLM`, but still with `transformers.TextGenerationPipeline` ``` model = onnxruntime.ORTModelForCausalLM.from_pretrained( model_path, use_io_binding=True, provider="CUDAExecutionProvider", use_cache=True, export=True ) ``` Not every model has the bug, but perhaps GPTBigCode is not the only one, as mentioned in another [git issue](https://github.com/huggingface/optimum/issues/1475). Since there's no acknowledgement of the bug in the aforementioned issue, I'm wondering if it was posted in the correct place. ### Expected behavior When `num_beams > 1`, inference would not crash with `AttributeError: 'tuple' object has no attribute 'shape`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28996/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28995
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28995/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28995/comments
https://api.github.com/repos/huggingface/transformers/issues/28995/events
https://github.com/huggingface/transformers/pull/28995
2,132,036,961
PR_kwDOCUB6oc5mvLAb
28,995
fix(CLIP): make clip model exportable using torch.jit.trace
{ "login": "Bycob", "id": 15674552, "node_id": "MDQ6VXNlcjE1Njc0NTUy", "avatar_url": "https://avatars.githubusercontent.com/u/15674552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bycob", "html_url": "https://github.com/Bycob", "followers_url": "https://api.github.com/users/Bycob/followers", "following_url": "https://api.github.com/users/Bycob/following{/other_user}", "gists_url": "https://api.github.com/users/Bycob/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bycob/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bycob/subscriptions", "organizations_url": "https://api.github.com/users/Bycob/orgs", "repos_url": "https://api.github.com/users/Bycob/repos", "events_url": "https://api.github.com/users/Bycob/events{/privacy}", "received_events_url": "https://api.github.com/users/Bycob/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @Bycob, thanks for opening this PR! \r\n\r\nCould you share how you're tracing the model? \r\n\r\nOn `main` I'm able to trace CLIP without any issue:\r\n\r\n```py\r\nimport torch\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import AutoProcessor, CLIPModel\r\n\r\nmodel = CLIPModel.from_pretrained(\"openai/clip-vit-base-patch32\", torchscript=True)\r\nprocessor = AutoProcessor.from_pretrained(\"openai/clip-vit-base-patch32\")\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\n\r\ninputs = processor(\r\n text=[\"a photo of a cat\", \"a photo of a dog\"], images=image, return_tensors=\"pt\", padding=True\r\n)\r\ninput_ids = inputs['input_ids']\r\npixel_values = inputs['pixel_values']\r\ntraced_model = torch.jit.trace(model, [input_ids, pixel_values])\r\ntorch.jit.save(traced_model, \"traced_clip.pt\")\r\n\r\n# Load model back in\r\nmodel = torch.jit.load(\"traced_clip.pt\")\r\noutputs = model(input_ids, pixel_values)\r\n```" ]
1,707
1,707
null
NONE
null
# What does this PR do? I added `.long()` on two places to make the model exportable using `torch.jit.trace`. If they are ommited we get this error: ``` mllib internal error: Libtorch error:The following operation failed in the TorchScript interpreter. Traceback of TorchScript, serialized code (most recent call last): File "code/__torch__/___torch_mangle_1181.py", line 13, in forward visionmodel0 = self.visionmodel vision_model = visionmodel0.vision_model _0 = (visual_projection).forward((vision_model).forward(x, ), ) ~~~~~~~~~~~~~~~~~~~~~ <--- HERE return _0 File "code/__torch__/transformers/models/clip/modeling_clip/___torch_mangle_1178.py", line 16, in forward pre_layrnorm = self.pre_layrnorm embeddings = self.embeddings _0 = (pre_layrnorm).forward((embeddings).forward(x, ), ) ~~~~~~~~~~~~~~~~~~~ <--- HERE _1 = torch.slice((encoder).forward(_0, ), 0, 0, 9223372036854775807) input = torch.slice(torch.select(_1, 1, 0), 1, 0, 9223372036854775807) File "code/__torch__/transformers/models/clip/modeling_clip/___torch_mangle_885.py", line 23, in forward class_embeds = torch.expand(class_embedding, [_0, 1, -1]) embeddings = torch.cat([class_embeds, patch_embeds], 1) _2 = (position_embedding).forward(position_ids, ) ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE return torch.add(embeddings, _2) File "code/__torch__/torch/nn/modules/sparse/___torch_mangle_884.py", line 10, in forward input: Tensor) -> Tensor: weight = self.weight return torch.embedding(weight, input) ~~~~~~~~~~~~~~~ <--- HERE Traceback of TorchScript, original code (most recent call last): /usr/local/lib/python3.10/dist-packages/torch/nn/functional.py(2233): embedding /usr/local/lib/python3.10/dist-packages/torch/nn/modules/sparse.py(162): forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1508): _slow_forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl /usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py(187): forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1508): _slow_forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl /usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py(843): forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1508): _slow_forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl /usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py(1288): forward <ipython-input-8-8cd9df86cbb5>(14): forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1508): _slow_forward /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl /usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py(1065): trace_module /usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py(798): trace <ipython-input-10-1c73f7172f28>(10): <cell line: 10> /usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3553): run_code /usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3473): run_ast_nodes 
/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3257): run_cell_async /usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py(78): _pseudo_sync_runner /usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3030): _run_cell /usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(2975): run_cell /usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py(539): run_cell /usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py(302): do_execute /usr/local/lib/python3.10/dist-packages/tornado/gen.py(234): wrapper /usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py(539): execute_request /usr/local/lib/python3.10/dist-packages/tornado/gen.py(234): wrapper /usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py(261): dispatch_shell /usr/local/lib/python3.10/dist-packages/tornado/gen.py(234): wrapper /usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py(361): process_one /usr/local/lib/python3.10/dist-packages/tornado/gen.py(786): run /usr/local/lib/python3.10/dist-packages/tornado/gen.py(825): inner /usr/local/lib/python3.10/dist-packages/tornado/ioloop.py(738): _run_callback /usr/local/lib/python3.10/dist-packages/tornado/ioloop.py(685): <lambda> /usr/lib/python3.10/asyncio/events.py(80): _run /usr/lib/python3.10/asyncio/base_events.py(1909): _run_once /usr/lib/python3.10/asyncio/base_events.py(603): run_forever /usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py(195): start /usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py(619): start /usr/local/lib/python3.10/dist-packages/traitlets/config/application.py(992): launch_instance /usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py(37): <module> /usr/lib/python3.10/runpy.py(86): _run_code /usr/lib/python3.10/runpy.py(196): _run_module_as_main RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got CPUFloatType instead (while checking arguments for embedding) ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28995/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28995", "html_url": "https://github.com/huggingface/transformers/pull/28995", "diff_url": "https://github.com/huggingface/transformers/pull/28995.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28995.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28994
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28994/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28994/comments
https://api.github.com/repos/huggingface/transformers/issues/28994/events
https://github.com/huggingface/transformers/pull/28994
2,131,907,734
PR_kwDOCUB6oc5muue_
28,994
Fix max_length criteria when using inputs_embeds
{ "login": "zucchini-nlp", "id": 100715397, "node_id": "U_kgDOBgDLhQ", "avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zucchini-nlp", "html_url": "https://github.com/zucchini-nlp", "followers_url": "https://api.github.com/users/zucchini-nlp/followers", "following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}", "gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}", "starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions", "organizations_url": "https://api.github.com/users/zucchini-nlp/orgs", "repos_url": "https://api.github.com/users/zucchini-nlp/repos", "events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}", "received_events_url": "https://api.github.com/users/zucchini-nlp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "oh, i see, added a new fix and checked that creating an empty tensor does not break anything", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28994). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts unrelated CI failures, I believe this can be merged 🤗 ", "@zucchini-nlp Can you try rebasing? Fixes should have been merged into main with resolve the currently failing tests ", "@amyeroberts thanks, now it's all green and can be merged" ]
1,707
1,708
1,708
MEMBER
null
# What does this PR do? Fixes #28953 . StoppingCriteria with max_length behaves differently when provided `input_ids` or `inputs_embeds`, this happens only on decoder-only models. The PR fixes it so that the criteria accounts for the length of `input_embeds` when generating ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @gante
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28994/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28994", "html_url": "https://github.com/huggingface/transformers/pull/28994", "diff_url": "https://github.com/huggingface/transformers/pull/28994.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28994.patch", "merged_at": 1708082712000 }
https://api.github.com/repos/huggingface/transformers/issues/28993
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28993/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28993/comments
https://api.github.com/repos/huggingface/transformers/issues/28993/events
https://github.com/huggingface/transformers/issues/28993
2,131,905,783
I_kwDOCUB6oc5_Ekz3
28,993
Add Hiera model
{ "login": "p1atdev", "id": 60182057, "node_id": "MDQ6VXNlcjYwMTgyMDU3", "avatar_url": "https://avatars.githubusercontent.com/u/60182057?v=4", "gravatar_id": "", "url": "https://api.github.com/users/p1atdev", "html_url": "https://github.com/p1atdev", "followers_url": "https://api.github.com/users/p1atdev/followers", "following_url": "https://api.github.com/users/p1atdev/following{/other_user}", "gists_url": "https://api.github.com/users/p1atdev/gists{/gist_id}", "starred_url": "https://api.github.com/users/p1atdev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/p1atdev/subscriptions", "organizations_url": "https://api.github.com/users/p1atdev/orgs", "repos_url": "https://api.github.com/users/p1atdev/repos", "events_url": "https://api.github.com/users/p1atdev/events{/privacy}", "received_events_url": "https://api.github.com/users/p1atdev/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" }, { "id": 5769473378, "node_id": "LA_kwDOCUB6oc8AAAABV-MtYg", "url": "https://api.github.com/repos/huggingface/transformers/labels/Vision", "name": "Vision", "color": "C079EF", "default": false, "description": "" } ]
open
false
null
[]
[ "Can I work on this ?", "@Namangarg110 Certainly! Feel free to open a PR when you're ready and ping us for review 🤗. To avoid issues from becoming too stale, we will prioritise the first open PR when reviewing over the first comment on issues. ", "Thanks @amyeroberts. This is my first open-source issue. Would it be possible for you to please share any helpful resources or similar PR to understand the code structure?\r\n", "Sure! \r\n\r\n* Docs page: https://huggingface.co./docs/transformers/en/add_new_model\r\n* Example model PR: https://github.com/huggingface/transformers/pull/26668\r\n\r\nAdding models is quite a big project. If you want to tackle something smaller for your first issue to get used to the workflow of contributing to transformers, resolving a [good first issue](https://github.com/huggingface/transformers/labels/Good%20First%20Issue) is a great place to start. ", "Thank you for the resources, @amyeroberts.\r\n\r\nI've begun the task and have completed 50% of the work.\r\n\r\nI recognize that contributing a new model can be exceptionally challenging, but I am eager to give it a try. :)" ]
1,707
1,708
null
NONE
null
### Model description Hiera is a hierarchical vision transformer that is fast, powerful, and, above all, simple. It outperforms the state-of-the-art across a wide array of image and video tasks while being much faster. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation GitHub Repo: https://github.com/facebookresearch/hiera/ (but licensed under CC BY-NC 4.0) arXiv: https://arxiv.org/abs/2306.00989
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28993/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28993/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28992
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28992/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28992/comments
https://api.github.com/repos/huggingface/transformers/issues/28992/events
https://github.com/huggingface/transformers/issues/28992
2,131,814,303
I_kwDOCUB6oc5_EOef
28,992
Where size 5 comes from in LlamaModelforCasualLM??
{ "login": "daehuikim", "id": 40377750, "node_id": "MDQ6VXNlcjQwMzc3NzUw", "avatar_url": "https://avatars.githubusercontent.com/u/40377750?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daehuikim", "html_url": "https://github.com/daehuikim", "followers_url": "https://api.github.com/users/daehuikim/followers", "following_url": "https://api.github.com/users/daehuikim/following{/other_user}", "gists_url": "https://api.github.com/users/daehuikim/gists{/gist_id}", "starred_url": "https://api.github.com/users/daehuikim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daehuikim/subscriptions", "organizations_url": "https://api.github.com/users/daehuikim/orgs", "repos_url": "https://api.github.com/users/daehuikim/repos", "events_url": "https://api.github.com/users/daehuikim/events{/privacy}", "received_events_url": "https://api.github.com/users/daehuikim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @daehuikim, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nThe value 5 comes from the input sequence length (the number of input tokens) i.e. `len(input_ids[0])` ", "@amyeroberts \r\nThank you for your kindness!" ]
1,707
1,707
1,707
NONE
null
``` from transformers import ( AutoModelForCausalLM, AutoTokenizer ) model_name = "meta-llama/Llama-2-7b-chat-hf" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) input_text = "How are you?" input_ids = tokenizer.encode(input_text, return_tensors="pt") outputs = model(input_ids, output_hidden_states=True) hidden_states = outputs.hidden_states output_last = hidden_states[-1].detach() print(output_last.shape) ``` Sorry for asking a question here. I am reading the Llama model code to understand how to get hidden-state values from here (https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py). When I run the code above, I get output like this: ``` torch.Size([1, 5, 4096]) ``` From this, I understand that ```1``` is the batch size and ```4096``` is the size of the max_position_embedding of this model. Then what does ```5``` mean in the hidden states of this model? (The hidden states become logits once they go through the ```lm_head``` module.) https://github.com/huggingface/transformers/blob/da20209dbc26a6a870a6e7be87faa657b571b7bc/src/transformers/models/llama/modeling_llama.py#L1155 Thanks for reading my question!
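As the maintainer comment above notes, the middle dimension is simply the number of input tokens. A quick, hedged way to confirm this (exact token strings depend on the tokenizer version) is to print the tokenized prompt:

```python
tokens = tokenizer.convert_ids_to_tokens(input_ids[0])
print(len(tokens), tokens)
# Expected to be along the lines of: 5 ['<s>', '▁How', '▁are', '▁you', '?']
# i.e. the 5 in torch.Size([1, 5, 4096]) is just len(input_ids[0]).
```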
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28992/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28991
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28991/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28991/comments
https://api.github.com/repos/huggingface/transformers/issues/28991/events
https://github.com/huggingface/transformers/pull/28991
2,131,473,179
PR_kwDOCUB6oc5mtPkS
28,991
ENH [`AutoQuantizer`]: enhance trainer + not supported quant methods
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28991). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? Currently, if a quantization method does not support PEFT fine-tuning, an old bitsandbytes-specific error is raised: ```bash The model you want to train is loaded in 8-bit precision. if you want to fine-tune an 8-bit model, please make sure that you have installed `bitsandbytes>=0.37.0`. ``` This happens regardless of the quantization method. For example, if one uses AWQ + Trainer (which is not supported yet, but will be soon with https://github.com/huggingface/transformers/pull/28987 / https://github.com/huggingface/peft/pull/1399), they'll get the old error, which is very confusing. Moreover, we should rely on the variable `hf_quantizer.is_trainable` instead of `_is_quantized_training_enabled`. We should instead be more precise and throw an error that states why this is not supported and how to request a fix. cc @amyeroberts
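For illustration, a minimal, hedged sketch of the kind of check this PR describes; the attribute access and the exact error wording in the merged code may differ:

```python
# Hypothetical sketch: gate PEFT fine-tuning on the quantizer's own capability flag
# instead of the legacy bitsandbytes-specific message.
hf_quantizer = getattr(model, "hf_quantizer", None)
if hf_quantizer is not None and not hf_quantizer.is_trainable:
    raise ValueError(
        f"The model you are trying to fine-tune is quantized with "
        f"{hf_quantizer.quantization_config.quant_method}, but that quantization method does not "
        "support training yet. Please open an issue on GitHub if you want this support to be added."
    )
```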
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28991/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28991", "html_url": "https://github.com/huggingface/transformers/pull/28991", "diff_url": "https://github.com/huggingface/transformers/pull/28991.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28991.patch", "merged_at": 1707870623000 }
https://api.github.com/repos/huggingface/transformers/issues/28990
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28990/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28990/comments
https://api.github.com/repos/huggingface/transformers/issues/28990/events
https://github.com/huggingface/transformers/issues/28990
2,131,390,171
I_kwDOCUB6oc5_Cm7b
28,990
MBartForConditionalGeneration to do mask filling task with mbart-large-50-many-to-many-mmt
{ "login": "Aureole-1210", "id": 59786603, "node_id": "MDQ6VXNlcjU5Nzg2NjAz", "avatar_url": "https://avatars.githubusercontent.com/u/59786603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aureole-1210", "html_url": "https://github.com/Aureole-1210", "followers_url": "https://api.github.com/users/Aureole-1210/followers", "following_url": "https://api.github.com/users/Aureole-1210/following{/other_user}", "gists_url": "https://api.github.com/users/Aureole-1210/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aureole-1210/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aureole-1210/subscriptions", "organizations_url": "https://api.github.com/users/Aureole-1210/orgs", "repos_url": "https://api.github.com/users/Aureole-1210/repos", "events_url": "https://api.github.com/users/Aureole-1210/events{/privacy}", "received_events_url": "https://api.github.com/users/Aureole-1210/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @ArthurZucker " ]
1,707
1,708
null
NONE
null
### System Info I also have a problem with this. I want to use 【facebook/mbart-large-50-many-to-many-mmt】 to do a mask-filling task, but the output is always strange. I modified the input format as the model card at https://huggingface.co./facebook/mbart-large-50-many-to-many-mmt suggests. My code is as follows: ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import ( AutoTokenizer, BertForMaskedLM, MBart50TokenizerFast, MBartForConditionalGeneration, DataCollatorForLanguageModeling ) model_name_or_path = 'my_path/mbart-large-50-many-to-many-mmt' model = MBartForConditionalGeneration.from_pretrained(model_name_or_path) tokenizer = MBart50TokenizerFast.from_pretrained(model_name_or_path) tokenizer.src_lang = 'en_XX' src = "So that such a thing won’t happen <mask>." encoded_src = tokenizer([src], return_tensors="pt") input_ids = encoded_src["input_ids"] src_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) model_outputs = model(**encoded_src) logits = model_outputs.logits masked_index = torch.nonzero((input_ids[0] == tokenizer.mask_token_id)).item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) print(tokenizer.convert_ids_to_tokens(predictions)) ``` The output is: ['.', '☎', '↔', '∏', '∴'] ### Expected behavior When I change my input, it always outputs strange symbols. I think this is wrong; the output should at least be English words. I am not sure whether this model is suitable for this task. How should I modify my code to get proper outputs? Thank you so much!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28990/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28989
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28989/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28989/comments
https://api.github.com/repos/huggingface/transformers/issues/28989/events
https://github.com/huggingface/transformers/pull/28989
2,131,359,266
PR_kwDOCUB6oc5ms4F-
28,989
Add cuda_custom_kernel in DETA
{ "login": "SangbumChoi", "id": 34004152, "node_id": "MDQ6VXNlcjM0MDA0MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SangbumChoi", "html_url": "https://github.com/SangbumChoi", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28989). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @amyeroberts Hi, can you check this PR? This will make training DETA with custom_cuda_kernel same in Deformable DETR! I also confirmed that training with this kernel working properly in custom dataset
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28989/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28989", "html_url": "https://github.com/huggingface/transformers/pull/28989", "diff_url": "https://github.com/huggingface/transformers/pull/28989.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28989.patch", "merged_at": 1707998979000 }
https://api.github.com/repos/huggingface/transformers/issues/28988
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28988/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28988/comments
https://api.github.com/repos/huggingface/transformers/issues/28988/events
https://github.com/huggingface/transformers/pull/28988
2,131,357,148
PR_kwDOCUB6oc5ms3qX
28,988
ENH: Do not pass warning message in case `quantization_config` is in config but not passed as an arg
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28988). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? Currently, transformers always warns users with a misleading message when loading a model that has a `quantization_config`, even when no `quantization_config` is passed to `from_pretrained`. We should instead warn users only when `quantization_config_from_args` is actually provided, rather than all the time. cc @amyeroberts ```bash /usr/local/lib/python3.10/dist-packages/transformers/quantizers/auto.py:151: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be prevail. warnings.warn(warning_msg) ```
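A hedged sketch of the condition described above (illustrative only; the variable names follow the PR description, not necessarily the merged diff):

```python
import warnings

# Hypothetical sketch: only warn when the user actually passed a quantization_config
# argument that will be overridden by the one stored in the model's config.
if quantization_config_from_args is not None:
    warnings.warn(
        "You passed `quantization_config` or equivalent parameters to `from_pretrained` "
        "but the model you're loading already has a `quantization_config` attribute. "
        "The `quantization_config` from the model will prevail."
    )
```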
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28988/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28988", "html_url": "https://github.com/huggingface/transformers/pull/28988", "diff_url": "https://github.com/huggingface/transformers/pull/28988.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28988.patch", "merged_at": 1707869982000 }
https://api.github.com/repos/huggingface/transformers/issues/28987
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28987/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28987/comments
https://api.github.com/repos/huggingface/transformers/issues/28987/events
https://github.com/huggingface/transformers/pull/28987
2,131,305,610
PR_kwDOCUB6oc5mss6V
28,987
[`Awq`] Add peft support for AWQ
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28987). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "cc @amyeroberts - we just got a release from autoawq ! this is ready for review 🙏 \r\nAfter merging this, I'll merge https://github.com/huggingface/peft/pull/1399 :) " ]
1,707
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? As per the title, adds Trainer + AWQ + PEFT support. Needs to be merged at the same time as https://github.com/huggingface/peft/pull/1399. The tests are added directly in https://github.com/huggingface/peft/pull/1399. cc @casper-hansen @pacman100 @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28987/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28987", "html_url": "https://github.com/huggingface/transformers/pull/28987", "diff_url": "https://github.com/huggingface/transformers/pull/28987.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28987.patch", "merged_at": 1708302699000 }
https://api.github.com/repos/huggingface/transformers/issues/28986
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28986/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28986/comments
https://api.github.com/repos/huggingface/transformers/issues/28986/events
https://github.com/huggingface/transformers/pull/28986
2,131,108,419
PR_kwDOCUB6oc5msBoV
28,986
Fix a configuration key error in forward() of MusicgenForConditionalGeneration
{ "login": "IntelliNik", "id": 37289946, "node_id": "MDQ6VXNlcjM3Mjg5OTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/37289946?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IntelliNik", "html_url": "https://github.com/IntelliNik", "followers_url": "https://api.github.com/users/IntelliNik/followers", "following_url": "https://api.github.com/users/IntelliNik/following{/other_user}", "gists_url": "https://api.github.com/users/IntelliNik/gists{/gist_id}", "starred_url": "https://api.github.com/users/IntelliNik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IntelliNik/subscriptions", "organizations_url": "https://api.github.com/users/IntelliNik/orgs", "repos_url": "https://api.github.com/users/IntelliNik/repos", "events_url": "https://api.github.com/users/IntelliNik/events{/privacy}", "received_events_url": "https://api.github.com/users/IntelliNik/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,707
1,707
null
NONE
null
Hi everyone, I think I noticed a bug in the forward function of MusicgenForConditionalGeneration. When calculating the loss with given labels, I get the error "'MusicgenConfig' object has no attribute 'vocab_size'" as only the decoder.config has a vocab_size entry. I think this should be the correct way to implement the loss calculation.
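For context, a hedged sketch of the kind of change being described; this is illustrative only (the real Musicgen loss handles multiple codebooks and label shifting, so the actual diff will differ):

```python
import torch

# Hypothetical sketch: MusicgenConfig has no top-level `vocab_size`, so the loss must
# read the vocabulary size from the decoder sub-config instead.
if labels is not None:
    loss_fct = torch.nn.CrossEntropyLoss()
    loss = loss_fct(
        logits.view(-1, self.config.decoder.vocab_size),  # decoder config holds vocab_size
        labels.view(-1),
    )
```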
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28986/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28986", "html_url": "https://github.com/huggingface/transformers/pull/28986", "diff_url": "https://github.com/huggingface/transformers/pull/28986.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28986.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28985
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28985/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28985/comments
https://api.github.com/repos/huggingface/transformers/issues/28985/events
https://github.com/huggingface/transformers/pull/28985
2,130,913,115
PR_kwDOCUB6oc5mrW-2
28,985
[`pipeline`] Add pool option to image feature extraction pipeline
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28985). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@ArthurZucker Updated the error message and added tests for exact outputs in [b94d08c](https://github.com/huggingface/transformers/pull/28985/commits/b94d08cb0bebd6f7854422b989b4bd93a9b56e46)" ]
1,707
1,708
1,708
COLLABORATOR
null
# What does this PR do? Adds the flag `pool` which will return the pooled output, rather than the raw hidden states. Doesn't work for data2vecvision as the model doesn't [add the pooling layer by default](https://github.com/huggingface/transformers/blob/78ba9f4617370a41c436126bbbb6f8d75924837c/src/transformers/models/data2vec/modeling_data2vec_vision.py#L637). At the moment, [model kwargs aren't passed to the model constructor](https://github.com/huggingface/transformers/blob/78ba9f4617370a41c436126bbbb6f8d75924837c/src/transformers/pipelines/base.py#L817), so it's not simple to add passing `add_pooling_layer` here. I chose to raise an error when getting the outputs of the model. Although this means we fail quite late, it avoids any complex inspection needed on the model. From the list of models in #28944: Models which work with the `pool` option: - beit - bit - convnext - convnextv2 - deit - dinov2 - dpt - efficientnet - focalnet - levit - mobilenet_v1 - mobilenet_v2 - mobilevit - mobilevitv2 - nat - regnet - resnet - swin - swinv2 - van - vit - vit_hybrid - vivit - yolos Models that don't have a pooling layer: - conditional_detr - deformable_detr - deta - detr - dinat - efficientformer - glpn - imagegpt - poolformer - pvt - segformer - siglip_vision_model - swiftformer - swin2sr - table - timesformer - timm_backbone - videomae - vit_msn - vitdet - vit_mae Not working - data2vec-vision. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
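As a usage illustration of the flag described above (hedged: the model checkpoint and the exact place the kwarg is accepted are assumptions for this sketch, not part of the PR text):

```python
from transformers import pipeline

# Hypothetical usage sketch of the `pool` option on the image feature extraction pipeline.
extractor = pipeline("image-feature-extraction", model="google/vit-base-patch16-224")

pooled = extractor("path/to/image.png", pool=True)   # pooled output (errors if the model has no pooler)
hidden = extractor("path/to/image.png")              # default: raw last hidden states
```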
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28985/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28985/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28985", "html_url": "https://github.com/huggingface/transformers/pull/28985", "diff_url": "https://github.com/huggingface/transformers/pull/28985.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28985.patch", "merged_at": 1708460529000 }
https://api.github.com/repos/huggingface/transformers/issues/28984
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28984/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28984/comments
https://api.github.com/repos/huggingface/transformers/issues/28984/events
https://github.com/huggingface/transformers/pull/28984
2,130,838,955
PR_kwDOCUB6oc5mrGZh
28,984
[WIP] Word level timestamp for long-form generation
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28984). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "It's actually much harder to do this than I thought and I sadly won't have time to finish this PR, so I'll leave it in this form.\r\n\r\nWe're facing the following problematic here.\r\n1) It's actually kind of tricky to run the `_extract_timestamps` funtion for the whole batch when doing long-form => after some thought it's better to run this function for every batch index. This should be changed and would then also make the tricky cross_attention re-ordering easier / redundant\r\n2) We need to split the cross attention both by input and output length. Essentially the output length is defined by each individual segment and the input length by the `start` and `end` timestamps that are passed. This should be done in the `_extract_timestamps` function.\r\n\r\nIf anybody in the community is willing to give this PR a try, feel free to use any/all my code.\r\n\r\ncc @sanchit-gandhi as well", "I will be taking over this issue, since I found that no-one else is working on it." ]
1,707
1,708
null
MEMBER
null
# What does this PR do? Fixes: https://github.com/huggingface/transformers/issues/28977 We haven't added word level timestamp for long-form generation yet. It's definitely possible, but it'll require some more changes in `generate`. Happy to take a closer look here the next days. With the PR in its current state, one can retrieve word level timestamps, but they are not correct because the`_postprocess_outputs` is not correct. Test it with: ```py #!/usr/bin/env python3 from transformers import WhisperForConditionalGeneration, WhisperProcessor import torch import librosa DEVICE = "cuda" model_id = "openai/whisper-tiny" processor = WhisperProcessor.from_pretrained(model_id) model = WhisperForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16) model.to(DEVICE) audio, _ = librosa.load("./common_voice_fr_17299386.mp3", sr=16_000) inputs = processor(audio, sampling_rate=16_000, return_tensors="pt", truncation=False, # False so the audio isn't truncated and whole audio is sent to the model return_attention_mask=True, padding="longest") input_features = inputs.to(DEVICE, dtype=torch.float16) inputs["input_features"] = inputs.input_features.repeat(1, 1, 8) print(inputs.input_features.shape) outputs = model.generate(**input_features, return_token_timestamps=True, return_segments=True) # decode token ids to text transcription = processor.batch_decode(outputs["sequences"], skip_special_tokens=False) print(transcription[0]) per_segment_word_timestamps = [segment["result"]["token_timestamps"] for segment in outputs["segments"][0]] all_word_timestamps = [x + y["start"] for x, y in zip(per_segment_word_timestamps, outputs["segments"][0])] print("Word level timestamps", all_word_timestamps) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28984/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28984", "html_url": "https://github.com/huggingface/transformers/pull/28984", "diff_url": "https://github.com/huggingface/transformers/pull/28984.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28984.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28983
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28983/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28983/comments
https://api.github.com/repos/huggingface/transformers/issues/28983/events
https://github.com/huggingface/transformers/issues/28983
2,130,708,973
I_kwDOCUB6oc5_AAnt
28,983
Fix custom architectures
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@Rocketknight1 this might help with keeping track to everything.\r\nI'm going to start working on the 2nd issue above hope I fix it soon. " ]
1,707
1,707
null
CONTRIBUTOR
null
Opening this issue for better visibility and to keep track of what needs to be fixed; any contributions are welcome. | name | issue | pull request | comment | | ------------- | ------------- | ------------- | ------------- | | dependency issue when working with a custom architecture in a repo that has a dot in its name| #28919 | |possible solution by https://github.com/huggingface/transformers/issues/28919#issuecomment-1937728036 | | wrongly annotated configuration when saving a model that has a custom pipeline| #28907 | #29004 | awaiting review | | add `push_to_hub( )` method when working with pipelines| #28857 | #28870 | working pull request, wrong documentation for `push_to_hub` method |
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28983/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28982
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28982/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28982/comments
https://api.github.com/repos/huggingface/transformers/issues/28982/events
https://github.com/huggingface/transformers/pull/28982
2,130,602,975
PR_kwDOCUB6oc5mqSZ_
28,982
Correct zero division error in inverse sqrt scheduler
{ "login": "DavidAfonsoValente", "id": 74915610, "node_id": "MDQ6VXNlcjc0OTE1NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/74915610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavidAfonsoValente", "html_url": "https://github.com/DavidAfonsoValente", "followers_url": "https://api.github.com/users/DavidAfonsoValente/followers", "following_url": "https://api.github.com/users/DavidAfonsoValente/following{/other_user}", "gists_url": "https://api.github.com/users/DavidAfonsoValente/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavidAfonsoValente/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavidAfonsoValente/subscriptions", "organizations_url": "https://api.github.com/users/DavidAfonsoValente/orgs", "repos_url": "https://api.github.com/users/DavidAfonsoValente/repos", "events_url": "https://api.github.com/users/DavidAfonsoValente/events{/privacy}", "received_events_url": "https://api.github.com/users/DavidAfonsoValente/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @DavidAfonsoValente, thanks for opening this PR! \r\n\r\nCould you give some more context about the issue this resolves, ideally with a reproducible snippet? \r\n\r\nJust looking at the PR, it implies that `timescale` is 0, which I don't think should ever be the case. ", "I corrected the description link to the issue, after testing it seems like both the timescale and (current_step + shift) can possibly be zero, leading to the zero division error", "Hi @DavidAfonsoValente, thanks for linking to the relevant issue. I still have an outstanding question about how this can occur i.e. when is `timescale` 0? \r\n\r\ncc @muellerzr who can provide some more context over the behaviour", "@DavidAfonsoValente reading https://github.com/huggingface/transformers/pull/21495, correct me if I'm wrong but the whole scheduler works off a timescale which is equal to the number of warmup steps. Which, at least in my understanding, means that we can't have a timescale of 0 and thus should raise a `NotImplementedError` directing the user to ensure they have a number of warmup steps set in the scheduler, no?", " cc @Sangh0", "Yes, this problem occurs when num_warmup_steps is 0, the check that is made is to ensure that num_warmup_steps is not None so it goes through. However most of the training examples provided with get_scheduler initiate the scheduler with num_warmup_steps = 0. One other possible correction could be defaulting the timescale to 10_000 as it is done in :\r\nhttps://github.com/google-research/big_vision/blob/fd2d3bd2efc9d89ea959f16cd2f58ae8a495cd44/big_vision/configs/proj/clippo/train_clippo.py#L144\r\nhttps://github.com/google-research/big_vision/blob/6ff6d080d62c1f47e2e4eeb8b6474deb38dfe406/big_vision/configs/proj/scaling_laws/train_vit_g.py#L79\r\nI believe maybe the current implementation came from interpreting the original implementation as having timescale==num_warmup_steps however a more accurate implementation could be one where these both default to 10_000, what do you think?\r\n", "@muellerzr Thank you! I'm glad this issue has been resolved well.", "Should I default timescale to 10_000 instead of the current solution?\r\n", "@muellerzr is off atm, so we'll have to wait from him to confirm. From what I understand, yes, let's a better default for `timescale`. For backwards compatibility, I'd suggest default to `10_000` is `num_warmup_step` is 0 and `timescale` is not set. " ]
1,707
1,708
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28835 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28982/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28982", "html_url": "https://github.com/huggingface/transformers/pull/28982", "diff_url": "https://github.com/huggingface/transformers/pull/28982.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28982.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28981
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28981/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28981/comments
https://api.github.com/repos/huggingface/transformers/issues/28981/events
https://github.com/huggingface/transformers/issues/28981
2,130,600,607
I_kwDOCUB6oc5-_mKf
28,981
tracker: `generate` compatibility with `torch.compile`
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[ { "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false } ]
[]
1,707
1,707
null
MEMBER
null
# `generate` 🤜 🤛 `torch.compile` This issue is a tracker of the compatibility between `.generate` and `torch.compile` ([intro docs by pytorch](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html)). The goal is to enable `fullgraph=True` compilation on the main `generate` use cases. ⚠️ Is *your* `generate` use case not covered by this tracker? Check if it was requested below and upvote it if it was. Otherwise, add a comment. We will consider expanding the selection below on widely requested use cases 🤗 ### Decoding Strategies - [ ] `greedy_search` / `sample` are compatible - [ ] `beam_search` / `beam_sample` are compatible - [ ] `assisted_decoding` (aka speculative decoding) is compatible ### Generate Flags and Options - [ ] all `LogitsProcessor` classes were checked for compatibility (and the appropriate exceptions are raised when not compatible) - [ ] all `StoppingCriteria` classes were checked for compatibility (and the appropriate exceptions are raised when not compatible) ### Models - [ ] BART is compatible - [ ] GPT2 is compatible - [x] Llama is compatible (#27931) - [ ] Llava is compatible - [ ] Mistral is compatible - [ ] Mixtral is compatible - [ ] T5 is compatible - [ ] Whisper is compatible ### Quantization - [ ] BNB support - [ ] GPTQ support - [ ] AWQ support ### Others - [ ] We have a benchmark script to quickly compare the impact of PRs - [ ] Add section to existing docs on the topic - [ ] Confirm that pipelines work after compiling generate
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28981/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28981/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28980
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28980/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28980/comments
https://api.github.com/repos/huggingface/transformers/issues/28980/events
https://github.com/huggingface/transformers/issues/28980
2,130,570,113
I_kwDOCUB6oc5-_euB
28,980
Add sliding window attention to sdpa in mistral
{ "login": "ehuaa", "id": 5137359, "node_id": "MDQ6VXNlcjUxMzczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5137359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ehuaa", "html_url": "https://github.com/ehuaa", "followers_url": "https://api.github.com/users/ehuaa/followers", "following_url": "https://api.github.com/users/ehuaa/following{/other_user}", "gists_url": "https://api.github.com/users/ehuaa/gists{/gist_id}", "starred_url": "https://api.github.com/users/ehuaa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ehuaa/subscriptions", "organizations_url": "https://api.github.com/users/ehuaa/orgs", "repos_url": "https://api.github.com/users/ehuaa/repos", "events_url": "https://api.github.com/users/ehuaa/events{/privacy}", "received_events_url": "https://api.github.com/users/ehuaa/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" } ]
open
false
null
[]
[ "cc @fxmarty ", "Hi, thank you for the suggestion, SDPA support for mistral was added by @ArthurZucker in https://github.com/huggingface/transformers/pull/28133, maybe he has more insight.", "I think it comes down to just adding `sliding_window` to the call for `_prepare_4d_causal_attention_mask_for_sdpa` yes. Would you like to open a PR?", "> I think it comes down to just adding `sliding_window` to the call for `_prepare_4d_causal_attention_mask_for_sdpa` yes. Would you like to open a PR?\r\n\r\nSure,and i'll open a PR later in this week" ]
1,707
1,708
null
NONE
null
### Feature request https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L1006-L1023 ![image](https://github.com/huggingface/transformers/assets/5137359/9601a5d2-cf9f-4ef6-a0ab-047a8cd7f1cd) In the code listed above, the latest version of transformers cannot use the sliding window feature in the Mistral model. I suspect the reason is the one you mentioned above, https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L687-L688 ![image](https://github.com/huggingface/transformers/assets/5137359/997cd770-e17f-4eb4-997b-fe65f30ddc85) and this issue in PyTorch, which prevents using a custom attn_mask such as the sliding window attention mask: https://github.com/pytorch/pytorch/issues/112577 Since this issue has been fixed in torch 2.2.0, which was released two weeks ago, can you add this feature back to the SDPA kernel in Mistral? ### Motivation I cannot use the sliding window with SDPA right now because my GPU is a V100, so I cannot use FlashAttention-2. ### Your contribution I think we can pass the sliding_window param to the _prepare_4d_causal_attention_mask_for_sdpa function.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28980/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28979
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28979/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28979/comments
https://api.github.com/repos/huggingface/transformers/issues/28979/events
https://github.com/huggingface/transformers/issues/28979
2,130,560,594
I_kwDOCUB6oc5-_cZS
28,979
transformers/configuration_utils.py: TypeError: Object of type ResNetConfig is not JSON serializable ( AutoModelForObjectDetection.from_pretrained("microsoft/table-transformer-detection"..))
{ "login": "dokondr", "id": 1510880, "node_id": "MDQ6VXNlcjE1MTA4ODA=", "avatar_url": "https://avatars.githubusercontent.com/u/1510880?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dokondr", "html_url": "https://github.com/dokondr", "followers_url": "https://api.github.com/users/dokondr/followers", "following_url": "https://api.github.com/users/dokondr/following{/other_user}", "gists_url": "https://api.github.com/users/dokondr/gists{/gist_id}", "starred_url": "https://api.github.com/users/dokondr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dokondr/subscriptions", "organizations_url": "https://api.github.com/users/dokondr/orgs", "repos_url": "https://api.github.com/users/dokondr/repos", "events_url": "https://api.github.com/users/dokondr/events{/privacy}", "received_events_url": "https://api.github.com/users/dokondr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @dokondr, thanks for raising an issue! \r\n\r\nI'm unable to replicate this issue locally. Could you try installing with pip and using the most recent version of transformers? \r\n\r\n```\r\npip install -U transformers\r\n```", "> Hi @dokondr, thanks for raising an issue!\r\n> \r\n> I'm unable to replicate this issue locally. Could you try installing with pip and using the most recent version of transformers?\r\n> \r\n> ```\r\n> pip install -U transformers\r\n> ```\r\n\r\nHi amyeroberts!\r\n\r\nIt solved the problem, many thanks!\r\n\r\n" ]
1,707
1,707
1,707
NONE
null
### System Info When loading the model: AutoModelForObjectDetection.from_pretrained("microsoft/table-transformer-detection", revision="no_timm"), transformers/configuration_utils.py returns a TypeError: Object of type ResNetConfig is not JSON serializable This error happens when I run in a virtual environment (Win10): torch==2.2.0 torchvision==0.17.0 transformers==4.31.0 **And also with the latest versions of these libraries.** Yet it does not happen in a Conda default environment that was created long ago: torch==2.2.0 torchvision==0.17.0 transformers==4.31.0.dev0 Today when I try: pip install transformers==4.31.0.dev0 I get: ERROR: No matching distribution found for transformers==4.31.0.dev0 So, it looks like 'transformers 4.31.0.dev0' no longer exists. How can I then solve the TypeError: Object of type ResNetConfig is not JSON serializable? ### Detailed environment info where this error happens ### - `transformers` version: 4.31.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.15 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.2.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction import torch from torchvision import transforms from transformers import AutoModelForObjectDetection model = AutoModelForObjectDetection.from_pretrained("microsoft/table-transformer-detection", revision="no_timm") ### Expected behavior The "microsoft/table-transformer-detection" model should load without errors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28979/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28978
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28978/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28978/comments
https://api.github.com/repos/huggingface/transformers/issues/28978/events
https://github.com/huggingface/transformers/issues/28978
2,130,494,276
I_kwDOCUB6oc5-_MNE
28,978
Whisper Sequential long-form decoding doesn't work when forcing task
{ "login": "antoinethl", "id": 56915854, "node_id": "MDQ6VXNlcjU2OTE1ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/56915854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antoinethl", "html_url": "https://github.com/antoinethl", "followers_url": "https://api.github.com/users/antoinethl/followers", "following_url": "https://api.github.com/users/antoinethl/following{/other_user}", "gists_url": "https://api.github.com/users/antoinethl/gists{/gist_id}", "starred_url": "https://api.github.com/users/antoinethl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antoinethl/subscriptions", "organizations_url": "https://api.github.com/users/antoinethl/orgs", "repos_url": "https://api.github.com/users/antoinethl/repos", "events_url": "https://api.github.com/users/antoinethl/events{/privacy}", "received_events_url": "https://api.github.com/users/antoinethl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @sanchit-gandhi @ylacombe ", "Hey @antoinethl,\r\n\r\nThanks for reporting the bug! Note that the bug is already solved on \"main\" with https://github.com/huggingface/transformers/pull/28687. Could you try to install transformers as follows:\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/transformers\r\n```\r\n\r\nand run your code snippet again?", "> Hey @antoinethl,\r\n> \r\n> Thanks for reporting the bug! Note that the bug is already solved on \"main\" with #28687. Could you try to install transformers as follows:\r\n> \r\n> ```\r\n> !pip install git+https://github.com/huggingface/transformers\r\n> ```\r\n> \r\n> and run your code snippet again?\r\n\r\nHi, thanks for the quick reply, seems indeed fixed with PR #28687 . Working when updating to the 4.38 dev version" ]
1,707
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.37.2 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.23 - Python version: 3.10.11 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): 2.12.0 (True) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Similar to #28977, the long-form decoding recently added in [[Whisper] Add sequential longform decoding](https://github.com/huggingface/transformers/pull/27492) seems to have issues with some parameters. Here it's the task specification that seems problematic. It is also linked to another issue: transformers' Whisper implementation seems to force the output language to be English. Tested with French, German, and Dutch audio, the result is always the same: Whisper **translates** the audio into English when the task isn't set (and the language as well, obviously). Here is the discussion about the issue: https://huggingface.co./openai/whisper-large-v3/discussions/71 So while trying to bypass this issue of English-only output, I tried, as mentioned in the discussion, to set `task="transcribe"` to force the model to transcribe the audio. But when working with long audio and the new implementation of long-form decoding, the issue occurred. Here is a minimal example to reproduce the issue: ```python from transformers import WhisperForConditionalGeneration, WhisperProcessor, pipeline import librosa SR = 16000 model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium") processor = WhisperProcessor.from_pretrained("openai/whisper-medium") file_path = "path_to_more_than_30_sec_audio" audio, _ = librosa.load(file_path, sr=SR) # Long-form transcription with model.generate() input_features = processor(audio, sampling_rate=SR, return_tensors="pt", truncation=False, # False so the audio isn't truncated and whole audio is sent to the model return_attention_mask=True, padding="longest") predicted_ids = model.generate(**input_features, task="transcribe") # If you remove this parameter, it works as expected ``` ## Traceback ```shell TypeError Traceback (most recent call last) Cell In[39], line 19 11 # Long-form generation 12 input_features = processor(audio, 13 sampling_rate=16000, 14 return_tensors="pt", 15 truncation=False, 16 return_attention_mask=True, 17 padding="longest") ---> 19 predicted_ids = model.generate(**input_features, task="transcribe") File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:614, in WhisperGenerationMixin.generate(self, input_features, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, return_timestamps, task, language, is_multilingual, prompt_ids, condition_on_prev_tokens, temperature, compression_ratio_threshold, logprob_threshold, no_speech_threshold, num_segment_frames, attention_mask, time_precision, return_token_timestamps, return_segments, return_dict_in_generate, **kwargs) 610 # 6.5 prepare decoder input ids 611 suppress_tokens = _get_attr_from_logit_processors( 612 logits_processor, SuppressTokensLogitsProcessor, "suppress_tokens" 613 ) --> 614 decoder_input_ids, kwargs = self._prepare_decoder_input_ids( 615 
init_tokens=init_tokens, 617 current_segments=current_segments, 618 batch_idx_map=batch_idx_map, 619 do_condition_on_prev_tokens=do_condition_on_prev_tokens, 620 generation_config=generation_config, 621 config=self.config, 622 device=segment_input.device, 623 suppress_tokens=suppress_tokens, 624 kwargs=kwargs, 625 ) 627 # 6.6 set max new tokens or max length 628 kwargs = self._set_max_new_tokens_and_length( 629 config=self.config, 630 decoder_input_ids=decoder_input_ids, 631 generation_config=generation_config, 632 kwargs=kwargs, 633 ) File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:1322, in WhisperGenerationMixin._prepare_decoder_input_ids(cur_bsz, init_tokens, current_segments, batch_idx_map, do_condition_on_prev_tokens, generation_config, config, device, suppress_tokens, kwargs) 1319 cut_off_length = config.max_target_positions // 2 - 1 1321 one_tensor = torch.ones((cur_bsz, 1), device=device, dtype=torch.long) -> 1322 decoder_input_ids = torch.cat([t * one_tensor for t in init_tokens], dim=-1) 1324 prev_start_of_text = getattr(generation_config, "prev_sot_token_id", None) 1325 if prev_start_of_text is None: File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:1322, in <listcomp>(.0) 1319 cut_off_length = config.max_target_positions // 2 - 1 1321 one_tensor = torch.ones((cur_bsz, 1), device=device, dtype=torch.long) -> 1322 decoder_input_ids = torch.cat([t * one_tensor for t in init_tokens], dim=-1) 1324 prev_start_of_text = getattr(generation_config, "prev_sot_token_id", None) 1325 if prev_start_of_text is None: TypeError: unsupported operand type(s) for *: 'NoneType' and 'Tensor' ``` ### Expected behavior Model should be able to work with the task parameter when processing long audio after https://github.com/huggingface/transformers/pull/27492
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28978/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28977
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28977/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28977/comments
https://api.github.com/repos/huggingface/transformers/issues/28977/events
https://github.com/huggingface/transformers/issues/28977
2,130,442,867
I_kwDOCUB6oc5--_pz
28,977
Whisper Sequential long-form decoding doesn't work with timestamps per token
{ "login": "antoinethl", "id": 56915854, "node_id": "MDQ6VXNlcjU2OTE1ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/56915854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antoinethl", "html_url": "https://github.com/antoinethl", "followers_url": "https://api.github.com/users/antoinethl/followers", "following_url": "https://api.github.com/users/antoinethl/following{/other_user}", "gists_url": "https://api.github.com/users/antoinethl/gists{/gist_id}", "starred_url": "https://api.github.com/users/antoinethl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antoinethl/subscriptions", "organizations_url": "https://api.github.com/users/antoinethl/orgs", "repos_url": "https://api.github.com/users/antoinethl/repos", "events_url": "https://api.github.com/users/antoinethl/events{/privacy}", "received_events_url": "https://api.github.com/users/antoinethl/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "cc @sanchit-gandhi @ylacombe ", "This is more of a feature request than a bug I'd say. Happy to have a look with https://github.com/huggingface/transformers/pull/28984" ]
1,707
1,707
null
NONE
null
### System Info - `transformers` version: 4.37.2 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.23 - Python version: 3.10.11 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): 2.12.0 (True) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Following [[Whisper] Add sequential longform decoding](https://github.com/huggingface/transformers/pull/27492), it seems that there is an issue when asking for token timestamps when dealing with the new way of handling long-form transcriptions. If using `model.generate()` method, passing `return_token_timestamps=True` causes the issue. Occurs also with the pipeline object if setting `return_timestamps="word"`. Here is a simple example to reproduce the issue: ```python from transformers import WhisperForConditionalGeneration, WhisperProcessor, pipeline import librosa SR = 16000 model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium") processor = WhisperProcessor.from_pretrained("openai/whisper-medium") file_path = "path_to_more_than_30_sec_audio" audio, _ = librosa.load(file_path, sr=SR) # Long-form transcription with model.generate() input_features = processor(audio, sampling_rate=SR, return_tensors="pt", truncation=False, # False so the audio isn't truncated and whole audio is sent to the model return_attention_mask=True, padding="longest") predicted_ids = model.generate(**input_features, return_token_timestamps=True) # With pipeline pipe = pipeline("automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, return_timestamps="word", return_language=True ) pipe(audio) ``` ## Traceback: ```shell AttributeError Traceback (most recent call last) Cell In[26], line 19 11 # Long-form generation 12 input_features = processor(audio, 13 sampling_rate=16000, 14 return_tensors="pt", 15 truncation=False, 16 return_attention_mask=True, 17 padding="longest") ---> 19 predicted_ids = model.generate(**input_features, 20 return_token_timestamps=True) File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:641, in WhisperGenerationMixin.generate(self, input_features, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, return_timestamps, task, language, is_multilingual, prompt_ids, condition_on_prev_tokens, temperature, compression_ratio_threshold, logprob_threshold, no_speech_threshold, num_segment_frames, attention_mask, time_precision, return_token_timestamps, return_segments, return_dict_in_generate, **kwargs) 638 proc.set_begin_index(decoder_input_ids.shape[-1]) 640 # 6.8 Run generate with fallback --> 641 seek_sequences, seek_outputs, should_skip, do_condition_on_prev_tokens = self.generate_with_fallback( 642 segment_input=segment_input, 643 decoder_input_ids=decoder_input_ids, 644 cur_bsz=cur_bsz, 645 batch_idx_map=batch_idx_map, 646 seek=seek, 647 num_segment_frames=num_segment_frames, 648 max_frames=max_frames, 649 temperatures=temperatures, 650 generation_config=generation_config, 651 logits_processor=logits_processor, 652 stopping_criteria=stopping_criteria, 653 
prefix_allowed_tokens_fn=prefix_allowed_tokens_fn, 654 synced_gpus=synced_gpus, 655 return_token_timestamps=return_token_timestamps, 656 do_condition_on_prev_tokens=do_condition_on_prev_tokens, 657 kwargs=kwargs, 658 ) 660 # 6.9 In every generated sequence, split by timestamp tokens and extract segments 661 for i, seek_sequence in enumerate(seek_sequences): File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:739, in WhisperGenerationMixin.generate_with_fallback(self, segment_input, decoder_input_ids, cur_bsz, batch_idx_map, seek, num_segment_frames, max_frames, temperatures, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, return_token_timestamps, do_condition_on_prev_tokens, kwargs) 727 seek_outputs = super().generate( 728 segment_input, 729 generation_config, (...) 735 **kwargs, 736 ) 738 # post-process sequence tokens and outputs to be in list form --> 739 sequence_tokens, seek_outputs = self._postprocess_outputs( 740 seek_outputs, return_token_timestamps, generation_config 741 ) 743 # remove all previously passed decoder input ids 744 seek_sequences = sequence_tokens[:, decoder_input_ids.shape[-1] :] File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:825, in WhisperGenerationMixin._postprocess_outputs(self, seek_outputs, return_token_timestamps, generation_config) 822 return values[batch_idx].cpu() 824 sequence_tokens = seek_outputs["sequences"] --> 825 seek_outputs = [ 826 {k: split_by_batch_index(v, k, i) for k, v in seek_outputs.items()} 827 for i in range(sequence_tokens.shape[0]) 828 ] 829 else: 830 sequence_tokens = seek_outputs File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:826, in <listcomp>(.0) 822 return values[batch_idx].cpu() 824 sequence_tokens = seek_outputs["sequences"] 825 seek_outputs = [ --> 826 {k: split_by_batch_index(v, k, i) for k, v in seek_outputs.items()} 827 for i in range(sequence_tokens.shape[0]) 828 ] 829 else: 830 sequence_tokens = seek_outputs File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:826, in <dictcomp>(.0) 822 return values[batch_idx].cpu() 824 sequence_tokens = seek_outputs["sequences"] 825 seek_outputs = [ --> 826 {k: split_by_batch_index(v, k, i) for k, v in seek_outputs.items()} 827 for i in range(sequence_tokens.shape[0]) 828 ] 829 else: 830 sequence_tokens = seek_outputs File ~/miniconda3/envs/py310-fast/lib/python3.10/site-packages/transformers/models/whisper/generation_whisper.py:822, in WhisperGenerationMixin._postprocess_outputs.<locals>.split_by_batch_index(values, key, batch_idx) 819 if key == "past_key_values": 820 # we don't save `past_key_values` as this is too costly 821 return None --> 822 return values[batch_idx].cpu() AttributeError: 'tuple' object has no attribute 'cpu' ``` Works fine if you don't ask the timestamps per token. ### Expected behavior Model should be able to return the timestamps per token when working with long audio after #27492
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28977/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28976
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28976/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28976/comments
https://api.github.com/repos/huggingface/transformers/issues/28976/events
https://github.com/huggingface/transformers/issues/28976
2,130,252,754
I_kwDOCUB6oc5--RPS
28,976
[`spam`]
{ "login": "goalend", "id": 110501477, "node_id": "U_kgDOBpYeZQ", "avatar_url": "https://avatars.githubusercontent.com/u/110501477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goalend", "html_url": "https://github.com/goalend", "followers_url": "https://api.github.com/users/goalend/followers", "following_url": "https://api.github.com/users/goalend/following{/other_user}", "gists_url": "https://api.github.com/users/goalend/gists{/gist_id}", "starred_url": "https://api.github.com/users/goalend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goalend/subscriptions", "organizations_url": "https://api.github.com/users/goalend/orgs", "repos_url": "https://api.github.com/users/goalend/repos", "events_url": "https://api.github.com/users/goalend/events{/privacy}", "received_events_url": "https://api.github.com/users/goalend/received_events", "type": "User", "site_admin": false }
[]
open
true
null
[]
[]
1,707
1,707
null
NONE
spam
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28976/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28975
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28975/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28975/comments
https://api.github.com/repos/huggingface/transformers/issues/28975/events
https://github.com/huggingface/transformers/pull/28975
2,130,220,986
PR_kwDOCUB6oc5mo-vh
28,975
Static Cache: load models with MQA or GQA
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28975). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
MEMBER
null
# What does this PR do? Adds support to loading MQA or GQA models to the static cache (such as [this one](https://huggingface.co./TinyLlama/TinyLlama-1.1B-Chat-v1.0))
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28975/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28975", "html_url": "https://github.com/huggingface/transformers/pull/28975", "diff_url": "https://github.com/huggingface/transformers/pull/28975.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28975.patch", "merged_at": 1707818300000 }
https://api.github.com/repos/huggingface/transformers/issues/28974
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28974/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28974/comments
https://api.github.com/repos/huggingface/transformers/issues/28974/events
https://github.com/huggingface/transformers/pull/28974
2,130,219,207
PR_kwDOCUB6oc5mo-Wp
28,974
Updated requirements for image-classification samples: datasets>=2.14.0
{ "login": "alekseyfa", "id": 26468927, "node_id": "MDQ6VXNlcjI2NDY4OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/26468927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alekseyfa", "html_url": "https://github.com/alekseyfa", "followers_url": "https://api.github.com/users/alekseyfa/followers", "following_url": "https://api.github.com/users/alekseyfa/following{/other_user}", "gists_url": "https://api.github.com/users/alekseyfa/gists{/gist_id}", "starred_url": "https://api.github.com/users/alekseyfa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alekseyfa/subscriptions", "organizations_url": "https://api.github.com/users/alekseyfa/orgs", "repos_url": "https://api.github.com/users/alekseyfa/repos", "events_url": "https://api.github.com/users/alekseyfa/events{/privacy}", "received_events_url": "https://api.github.com/users/alekseyfa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28974). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,707
1,707
1,707
CONTRIBUTOR
null
# What does this PR do? This PR updates the dependency requirements for image-classification case. The run_image_classification.py script in the current implementation uses the token parameter, which was introduced as of datasets version 2.14.0. Thus, using packages below the specified version may cause errors. Please see the release link below: https://github.com/huggingface/datasets/releases/tag/2.14.0 An example of using the argument token in the run_image_classification.py script is shown below: ``` if data_args.dataset_name is not None: dataset = load_dataset( data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir, token=model_args.token, ) ``` https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py#L260 A similar PR has already been merged into the optimum-habana repository: https://github.com/huggingface/optimum-habana/pull/699 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28974/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28974", "html_url": "https://github.com/huggingface/transformers/pull/28974", "diff_url": "https://github.com/huggingface/transformers/pull/28974.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28974.patch", "merged_at": 1707749845000 }
https://api.github.com/repos/huggingface/transformers/issues/28973
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28973/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28973/comments
https://api.github.com/repos/huggingface/transformers/issues/28973/events
https://github.com/huggingface/transformers/pull/28973
2,130,194,809
PR_kwDOCUB6oc5mo5AS
28,973
Image Feature Extraction docs
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28973). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@merveenoyan There's a PR to add the pool option - I'm just waiting for review #28985 " ]
1,707
1,708
null
CONTRIBUTOR
null
This PR adds a task guide for image feature extraction. Note that as of now the image feature extraction pipeline doesn't have a pooler output, so the result in the doc is the theoretical result :') This PR can be reviewed after that change is merged, since this wouldn't work without `ViTPooler`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28973/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28973", "html_url": "https://github.com/huggingface/transformers/pull/28973", "diff_url": "https://github.com/huggingface/transformers/pull/28973.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28973.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28972
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28972/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28972/comments
https://api.github.com/repos/huggingface/transformers/issues/28972/events
https://github.com/huggingface/transformers/issues/28972
2,130,170,435
I_kwDOCUB6oc5-99JD
28,972
NotImplementedError: Cannot copy out of meta tensor; no data! when moving LLaVa from meta device to CUDA
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Ok I'm seeing the same error with BERT:\r\n```python\r\nfrom transformers import BertConfig, BertModel\r\nimport torch\r\n\r\nconfig = BertConfig()\r\n\r\nwith torch.device(\"meta\"):\r\n model = BertModel(config)\r\n\r\npretrained_model = BertModel.from_pretrained(\"bert-base-uncased\")\r\nmodel.load_state_dict(pretrained_model.state_dict(), assign=True)\r\n\r\ndevice = \"cuda:0\"\r\nmodel.to(device)\r\n```\r\nTrying to figure out why this works for CogVLM but not for BERT or LLaVa.. maybe @muellerzr has some insights given that he knows a lot about big model inference\r\n\r\nUpdate: it also doesn't work for CogVLM if I use the same rotary embedding class as the one of llama", "@NielsRogge the issue lies in the parameters being initialized. Instead of using `with torch.device(\"meta\")` use `init_empty_weights` from accelerate instead and it will work just fine: (basically some buffers and other things causing problems)\r\n\r\n```python\r\nfrom transformers import BertConfig, BertModel\r\nfrom accelerate import init_empty_weights\r\n\r\nconfig = BertConfig.from_pretrained(\"bert-base-uncased\")\r\n\r\nwith init_empty_weights():\r\n model = BertModel(config)\r\n\r\npretrained_model = BertModel.from_pretrained(\"bert-base-uncased\")\r\nmodel.load_state_dict(pretrained_model.state_dict(), assign=True)\r\n\r\nmodel.to(\"cuda\")\r\n```", "Thanks, I indeed noticed that it had to do something with buffers, great, thanks a lot!" ]
1,707
1,707
null
CONTRIBUTOR
null
### System Info Transformers 4.37.0.dev0 ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [x] My own modified scripts ### Reproduction Getting this error: ``` Traceback (most recent call last): File "src/transformers/models/llava/test_meta_device.py", line 10, in <module> model.to(device) File "/home/niels/python_projects/transformers/src/transformers/modeling_utils.py", line 2556, in to return super().to(*args, **kwargs) File "/home/niels/python_projects/transformers/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1160, in to return self._apply(convert) File "/home/niels/python_projects/transformers/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 810, in _apply module._apply(fn) File "/home/niels/python_projects/transformers/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 810, in _apply module._apply(fn) File "/home/niels/python_projects/transformers/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 810, in _apply module._apply(fn) [Previous line repeated 1 more time] File "/home/niels/python_projects/transformers/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 833, in _apply param_applied = fn(param) File "/home/niels/python_projects/transformers/env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1158, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) NotImplementedError: Cannot copy out of meta tensor; no data! ``` when running this: ```python from transformers import LlavaConfig, LlavaForConditionalGeneration import torch config = LlavaConfig() with torch.device("meta"): model = LlavaForConditionalGeneration(config) model.load_state_dict(original_state_dict, assign=True) device = "cuda:0" model.to(device) ``` Taken from [this script](https://github.com/NielsRogge/transformers/blob/5be6e778e091f046f3158d9d2acf7cfeb5f539c3/src/transformers/models/llava/convert_llava_1_6_to_hf.py#L118-L145). Weirdly enough, the same thing works for CogVLM as seen [here](https://github.com/NielsRogge/transformers/blob/f190a38dbcc25c1a882b060659c6f2abb99d9dcf/src/transformers/models/cogvlm/convert_cogvlm_original_to_pytorch.py#L109-L115), but not for LLaVa. Based on this PR: https://github.com/huggingface/transformers/pull/26849 (which fixed a similar issue for DETR), this may have to do with some modules that cannot be split. ### Expected behavior I'd like to first load LLaVa on the "meta" device, and then load the weights as I'm converting from the original repository. The "meta" device allows loading the HF model much faster, as I've observed with CogVLM (thanks @ArthurZucker as I didn't know about this!)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28972/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28971
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28971/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28971/comments
https://api.github.com/repos/huggingface/transformers/issues/28971/events
https://github.com/huggingface/transformers/pull/28971
2,130,115,339
PR_kwDOCUB6oc5moncX
28,971
Allow setting dtype in rescaling in image_processing_donut.py
{ "login": "archit76", "id": 15254541, "node_id": "MDQ6VXNlcjE1MjU0NTQx", "avatar_url": "https://avatars.githubusercontent.com/u/15254541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/archit76", "html_url": "https://github.com/archit76", "followers_url": "https://api.github.com/users/archit76/followers", "following_url": "https://api.github.com/users/archit76/following{/other_user}", "gists_url": "https://api.github.com/users/archit76/gists{/gist_id}", "starred_url": "https://api.github.com/users/archit76/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/archit76/subscriptions", "organizations_url": "https://api.github.com/users/archit76/orgs", "repos_url": "https://api.github.com/users/archit76/repos", "events_url": "https://api.github.com/users/archit76/events{/privacy}", "received_events_url": "https://api.github.com/users/archit76/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Not required anymore. Works for me!" ]
1,707
1,707
1,707
NONE
null
Fixes #28969

With this fix, the dtype can be changed by passing it as a kwargs argument:

    if do_rescale:
        images = [
            self.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format, **kwargs)
            for image in images
        ]
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28971/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28971", "html_url": "https://github.com/huggingface/transformers/pull/28971", "diff_url": "https://github.com/huggingface/transformers/pull/28971.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28971.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28970
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28970/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28970/comments
https://api.github.com/repos/huggingface/transformers/issues/28970/events
https://github.com/huggingface/transformers/issues/28970
2,129,935,531
I_kwDOCUB6oc5-9Dyr
28,970
Question about the use of bias in the Graphormer Model
{ "login": "sarah-af", "id": 74510900, "node_id": "MDQ6VXNlcjc0NTEwOTAw", "avatar_url": "https://avatars.githubusercontent.com/u/74510900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarah-af", "html_url": "https://github.com/sarah-af", "followers_url": "https://api.github.com/users/sarah-af/followers", "following_url": "https://api.github.com/users/sarah-af/following{/other_user}", "gists_url": "https://api.github.com/users/sarah-af/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarah-af/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarah-af/subscriptions", "organizations_url": "https://api.github.com/users/sarah-af/orgs", "repos_url": "https://api.github.com/users/sarah-af/repos", "events_url": "https://api.github.com/users/sarah-af/events{/privacy}", "received_events_url": "https://api.github.com/users/sarah-af/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @clefourrier " ]
1,707
1,707
null
NONE
null
Hi, The documentation of GraphormerConfig describes the parameter bias (bool, optional, defaults to True) as "Uses bias in the attention module - unsupported at the moment". I have 2 questions: 1. Is that the same attention bias introduced in the paper using the shortest path distance? Or where is it applied? 2. What does "unsupported" mean? I see in the src files that the attention bias has been implemented. Many thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28970/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28969
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28969/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28969/comments
https://api.github.com/repos/huggingface/transformers/issues/28969/events
https://github.com/huggingface/transformers/issues/28969
2,129,867,176
I_kwDOCUB6oc5-8zGo
28,969
Changing dtype(For half precision) not possible in rescale in image_processing_donut
{ "login": "archit76", "id": 15254541, "node_id": "MDQ6VXNlcjE1MjU0NTQx", "avatar_url": "https://avatars.githubusercontent.com/u/15254541?v=4", "gravatar_id": "", "url": "https://api.github.com/users/archit76", "html_url": "https://github.com/archit76", "followers_url": "https://api.github.com/users/archit76/followers", "following_url": "https://api.github.com/users/archit76/following{/other_user}", "gists_url": "https://api.github.com/users/archit76/gists{/gist_id}", "starred_url": "https://api.github.com/users/archit76/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/archit76/subscriptions", "organizations_url": "https://api.github.com/users/archit76/orgs", "repos_url": "https://api.github.com/users/archit76/repos", "events_url": "https://api.github.com/users/archit76/events{/privacy}", "received_events_url": "https://api.github.com/users/archit76/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts \r\nPassing kwargs should resolve this:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/donut/image_processing_donut.py#L441-L445\r\nif do_rescale:\r\nimages = [\r\nself.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format, **kwargs)\r\nfor image in images\r\n]", "Hi @archit76, thanks for raising this issue! \r\n\r\nCould you clarify the expected behaviour here? Is it that you want the returned dtype of `pixel_values` to be `float16`? \r\n\r\nA few comments: \r\n\r\n1. In the provided example, the dummy image being passed in is in `float32`. When you pass this to the image processor you should get the following warning: \r\n\r\n```\r\nIt looks like you are trying to rescale already rescaled images. If the input images have pixel values between 0 and 1, set `do_rescale=False` to avoid rescaling them again.\r\n```\r\n\r\nThis is because `do_rescale` rescales the pixel values to have values between [0, 1], from `[0, 255]`. The dummy image generated from `np.random.rand(...)` will have values in `[0, 1)` and so rescaling isn't required and you should set `do_rescale=False`.\r\n\r\n2. If you're using pytorch, then you can call `to` directly on the image processor outputs to cast to the desired type. \r\n\r\n```\r\nfrom transformers import DonutImageProcessor\r\nimport numpy as np\r\n\r\n# Image of values in the range [0, 255]\r\nimage = np.random.randint(0, 256, (3, 224, 224))\r\n\r\n# Output a batch of torch tensors \r\noutputs = image_processor(image, return_tensors=\"pt\")\r\n\r\n# Cast to the desired type\r\noutputs.to(torch.float16)\r\n``` \r\n\r\n3. In the codebase, we try to avoid passing kwargs wherever possible. Adding in `kwargs` here is a brittle solution. ", "@amyeroberts Yes, I would like to get the dtype of pixel values to be float16.\r\nIn order to use Donut model.half() at inference, it requires input type to be float16 instead of float32.", "@archit76 Then calling `outputs.to(torch.float16)` should be the approach. ", "@amyeroberts I already tried outputs.to(torch.float16), but the dtype is set to float32 because of\r\nhttps://github.com/huggingface/transformers/blob/v4.37.2/src/transformers/image_transforms.py#L92-L98\r\ndef rescale(\r\nimage: np.ndarray,\r\nscale: float,\r\ndata_format: Optional[ChannelDimension] = None,\r\ndtype: np.dtype = np.float32,\r\ninput_data_format: Optional[Union[str, ChannelDimension]] = None,\r\n) -> np.ndarray: ", "@archit76 Could you provide a code snippet to reproduce this? In the example I provided, it correctly casts the input to `torch.float16`. " ]
1,707
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.37.2 - Platform: Linux-6.1.0-1029-oem-x86_64-with-glibc2.35 - Python version: 3.9.18 - Huggingface_hub version: 0.20.3 - Safetensors version: 0.4.2 - Accelerate version: 0.24.1 https://github.com/huggingface/transformers/blob/main/src/transformers/models/donut/image_processing_donut.py#L441-L445 if do_rescale: images = [ self.rescale(image=image, scale=rescale_factor, input_data_format=input_data_format) for image in images ] ------------------------------------------------ self.rescale calls BaseImageProcessor rescale method: https://github.com/huggingface/transformers/blob/v4.37.2/src/transformers/image_processing_utils.py#L557-L564 def rescale( self, image: np.ndarray, scale: float, data_format: Optional[Union[str, ChannelDimension]] = None, input_data_format: Optional[Union[str, ChannelDimension]] = None, **kwargs, ) -> np.ndarray: Which calls rescale(image, scale=scale, data_format=data_format, input_data_format=input_data_format, **kwargs) -------------------------------------------------- rescale function has a fixed dtype: https://github.com/huggingface/transformers/blob/v4.37.2/src/transformers/image_transforms.py#L92-L98 def rescale( image: np.ndarray, scale: float, data_format: Optional[ChannelDimension] = None, dtype: np.dtype = np.float32, input_data_format: Optional[Union[str, ChannelDimension]] = None, ) -> np.ndarray: ### Who can help? @amyeroberts, Can I create a pull request with the change? ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import DonutImageProcessor import numpy as np image = np.random.rand(3, 224, 224) print(DonutImageProcessor().preprocess(images=[image], , dtype=np.float16)) output: {'pixel_values': [array([[[-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], ..., [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.]], [[-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], ..., [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.]], [[-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], ..., [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.]]], dtype=float32)]} ### Expected behavior from transformers import DonutImageProcessor import numpy as np image = np.random.rand(3, 224, 224) print(DonutImageProcessor().preprocess(images=[image], dtype=np.float16)) output: {'pixel_values': [array([[[-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], ..., [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.]], [[-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], ..., [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.]], [[-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], ..., [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.], [-1., -1., -1., ..., -1., -1., -1.]]], 
dtype=float16)]}
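As a complement to the suggestion in the comments above, a minimal sketch of casting the processor output rather than changing rescale's dtype (illustrative values, not taken from the thread):

```python
# Minimal sketch: cast the BatchFeature to float16 for a model.half() pipeline.
import numpy as np
import torch
from transformers import DonutImageProcessor

image_processor = DonutImageProcessor()
image = np.random.randint(0, 256, (3, 224, 224), dtype=np.uint8)  # dummy image in [0, 255]

outputs = image_processor(image, return_tensors="pt")
outputs = outputs.to(torch.float16)  # casts the floating-point pixel_values
print(outputs["pixel_values"].dtype)  # expected: torch.float16
```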
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28969/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28968
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28968/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28968/comments
https://api.github.com/repos/huggingface/transformers/issues/28968/events
https://github.com/huggingface/transformers/issues/28968
2,129,599,093
I_kwDOCUB6oc5-7xp1
28,968
The initialized weights of nn.Linear are very large within __init__
{ "login": "zhjohnchan", "id": 37367987, "node_id": "MDQ6VXNlcjM3MzY3OTg3", "avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhjohnchan", "html_url": "https://github.com/zhjohnchan", "followers_url": "https://api.github.com/users/zhjohnchan/followers", "following_url": "https://api.github.com/users/zhjohnchan/following{/other_user}", "gists_url": "https://api.github.com/users/zhjohnchan/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhjohnchan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhjohnchan/subscriptions", "organizations_url": "https://api.github.com/users/zhjohnchan/orgs", "repos_url": "https://api.github.com/users/zhjohnchan/repos", "events_url": "https://api.github.com/users/zhjohnchan/events{/privacy}", "received_events_url": "https://api.github.com/users/zhjohnchan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Looks like a precision problem? The values are in the normal range in version 4.35.2.", "cc @ydshieh this is an issue you looked into", "Hi @zhjohnchan We do have similar issue, but I am surprised that v4.35 has normal values. I will have to take a look into this.\r\nBTW, could you share you torch versions, especially if the torch version is the same while you switch transformers from 4.35 to 4.37.\r\n\r\nThanks in advance.", "Thanks @NielsRogge and @ydshieh,\r\n\r\nThe following is my torch version:\r\n```\r\ntorch 2.0.1\r\ntorchaudio 2.0.2\r\ntorchmetrics 0.11.4\r\ntorchvision 0.15.2\r\n```\r\n\r\nThe torch version is fixed in my setting.\r\n\r\nBest,\r\n", "Seems something in Transformers overwrites the operators? I also tried nn.init, it's the same problem." ]
1,707
1,707
null
NONE
null
### System Info Hi, I wrote a classification wrapper for CLIPModel like this: ``` class CLIPVisionTransformerForImageClassification(CLIPPreTrainedModel): config_class = CLIPConfig _no_split_modules = ["CLIPEncoderLayer"] def __init__(self, config: CLIPConfig): super().__init__(config) .... # Classifier head self.classifier = ( nn.Linear(self.vision_config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() ) ``` In `Transformers==4.35.2`, it works well. After switching to the latest version (i.e., 4.37.2), its initialized weights are very large like ``` self.classifier.weight Parameter containing: tensor([[2.7691e+20, 1.7750e+28, 4.1259e-08, ..., 7.0974e+22, 2.1473e+29, 1.7861e+25], [4.8366e+30, 2.1683e-10, 1.8465e+25, ..., 2.0505e-10, 1.8612e+34, 7.6818e+31], [1.1319e+21, 2.0683e-10, 1.7034e+25, ..., 1.1353e+24, 2.8175e+20, 6.8589e+22], [4.5445e+30, 3.0446e+32, 4.3059e+21, ..., 1.8526e+28, 5.3747e-11, 1.7077e+22], [2.0505e-10, 1.8612e+34, 7.6818e+31, ..., 3.6291e-09, 6.8906e+22, 2.6350e+23]], device='cuda:0', requires_grad=True) ``` Any thoughts about this? Thanks! Best, ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` class CLIPVisionTransformerForImageClassification(CLIPPreTrainedModel): config_class = CLIPConfig _no_split_modules = ["CLIPEncoderLayer"] def __init__(self, config: CLIPConfig): super().__init__(config) .... # Classifier head self.classifier = ( nn.Linear(self.vision_config.hidden_size, config.num_labels) if config.num_labels > 0 else nn.Identity() ) ``` ### Expected behavior ``` self.classifier.weight Parameter containing: tensor([[2.7691e+20, 1.7750e+28, 4.1259e-08, ..., 7.0974e+22, 2.1473e+29, 1.7861e+25], [4.8366e+30, 2.1683e-10, 1.8465e+25, ..., 2.0505e-10, 1.8612e+34, 7.6818e+31], [1.1319e+21, 2.0683e-10, 1.7034e+25, ..., 1.1353e+24, 2.8175e+20, 6.8589e+22], [4.5445e+30, 3.0446e+32, 4.3059e+21, ..., 1.8526e+28, 5.3747e-11, 1.7077e+22], [2.0505e-10, 1.8612e+34, 7.6818e+31, ..., 3.6291e-09, 6.8906e+22, 2.6350e+23]], device='cuda:0', requires_grad=True) ```
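The thread above does not reach a resolution, so the following is only a hedged sketch of the usual pattern for custom heads on a PreTrainedModel (covering the head in `_init_weights` and calling `post_init()`); whether it addresses this particular regression is not confirmed:

```python
# Hypothetical sketch, not a confirmed fix: make sure the custom classifier head
# is initialized by the model's own init machinery instead of staying uninitialized.
import torch.nn as nn
from transformers import CLIPConfig
from transformers.models.clip.modeling_clip import CLIPPreTrainedModel, CLIPVisionTransformer


class CLIPVisionTransformerForImageClassification(CLIPPreTrainedModel):
    config_class = CLIPConfig

    def __init__(self, config: CLIPConfig):
        super().__init__(config)
        self.vision_model = CLIPVisionTransformer(config.vision_config)
        self.classifier = nn.Linear(config.vision_config.hidden_size, config.num_labels)
        # post_init() applies _init_weights to parameters not loaded from a checkpoint.
        self.post_init()

    def _init_weights(self, module):
        # CLIP's own _init_weights does not draw weights for a generic nn.Linear head,
        # so cover the classifier here and defer everything else to the parent class.
        if module is getattr(self, "classifier", None):
            module.weight.data.normal_(mean=0.0, std=0.02)
            if module.bias is not None:
                module.bias.data.zero_()
        else:
            super()._init_weights(module)
```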
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28968/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28967
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28967/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28967/comments
https://api.github.com/repos/huggingface/transformers/issues/28967/events
https://github.com/huggingface/transformers/issues/28967
2,129,173,660
I_kwDOCUB6oc5-6Jyc
28,967
### Summary
{ "login": "goalend", "id": 110501477, "node_id": "U_kgDOBpYeZQ", "avatar_url": "https://avatars.githubusercontent.com/u/110501477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goalend", "html_url": "https://github.com/goalend", "followers_url": "https://api.github.com/users/goalend/followers", "following_url": "https://api.github.com/users/goalend/following{/other_user}", "gists_url": "https://api.github.com/users/goalend/gists{/gist_id}", "starred_url": "https://api.github.com/users/goalend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goalend/subscriptions", "organizations_url": "https://api.github.com/users/goalend/orgs", "repos_url": "https://api.github.com/users/goalend/repos", "events_url": "https://api.github.com/users/goalend/events{/privacy}", "received_events_url": "https://api.github.com/users/goalend/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "✌️" ]
1,707
1,707
1,707
NONE
null
### Summary The importance of enforcing a set of quality standards to continuously deploy in a consistent and predictable way can’t be underestimated. Implementing these standards without the duplication of CI/CD configuration code is a challenge many organizations face today. How can workflows solve these problems and allow the ability to “push things down” or enforce requirements from organization down to its repositories? We are building the controls that allow an organization to require a workflow file (or list of workflow files) to pass before code is merged into any of its repositories. Requiring a workflow to pass before merging will be available via Rulesets and will allow access to controls already available to branch rulesets, such as: - enforcement status - bypass rules - repository targeting **By requiring a workflow to pass before merging will allow organization admins to specify:** - which workflow file to run from any of the organization’s repositories - a specific branch or tag for the workflow file - an exact commit sha to pin to the required workflow _(optional)_ **Required workflows can help companies with the following use cases:** - Correctness and Compliance: Ensure that all code meets an enterprise’s quality standards before merging. - DRY: Reducing duplication of CI/CD configuration code - [ ] - _Originally posted by @github-product-roadmap in https://github.com/github/roadmap/issues/638__[@]()_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28967/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/28966
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28966/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28966/comments
https://api.github.com/repos/huggingface/transformers/issues/28966/events
https://github.com/huggingface/transformers/pull/28966
2,129,051,352
PR_kwDOCUB6oc5mlFle
28,966
Implementation of SuperPoint and AutoModelForKeypointDetection
{ "login": "sbucaille", "id": 24275548, "node_id": "MDQ6VXNlcjI0Mjc1NTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/24275548?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbucaille", "html_url": "https://github.com/sbucaille", "followers_url": "https://api.github.com/users/sbucaille/followers", "following_url": "https://api.github.com/users/sbucaille/following{/other_user}", "gists_url": "https://api.github.com/users/sbucaille/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbucaille/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbucaille/subscriptions", "organizations_url": "https://api.github.com/users/sbucaille/orgs", "repos_url": "https://api.github.com/users/sbucaille/repos", "events_url": "https://api.github.com/users/sbucaille/events{/privacy}", "received_events_url": "https://api.github.com/users/sbucaille/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @amyeroberts @rafaelpadilla @ArthurZucker ,\r\n\r\nI couldn't see other solution than to create a branch from scratch and re-implementing everything that was discussed since the beginning of the original [PR](https://github.com/huggingface/transformers/pull/25786). Here, most of my problems (RegNet related error when using `make fix-copies` which was caused by the fact that my environment was outdated after doing the merge, the `index.md` huge diff, and others...) are fixed.\r\nAs I said, I re-implemented everything that was in the other PR, and addressed all other points that were mentioned in reviews (such as docs) which of course may need more discussion. I tried to make sure everything point discussed were included. Also, I renamed `InterestPointDescription` to `KeypointDetection` as this term is more widely used in the literature.\r\nAnyway, I wanted to implement this model in transformers and contribute, you told me it would take more time than the other method, I'll take all this mess I made myself as part of the learning path 😆\r\nLet me know what you think, what should be the next steps, I believe we're almost there ! 🤗 \r\n\r\nSteven", "Hi @sbucaille, thanks for opening this PR and working from the previous reviews! \r\n\r\nAt the moment, I don't see any tests implemented for the model or image processor. The next step would be adding those 🤗 ", "Looks like I forgot to `git add` the tests folder, fixed !\r\nI also addressed the issue where the model was too big for the tests, I added a configuration for a smaller one and adjusted the other tests.", "Hi @amyeroberts @ydshieh ,\r\nI just looked again at the PR and noticed that there were conflicts due to recent changes to the `main`branch. So instead of making the merge directly through GitHub interface, I did what you advised me to do earlier, I updated my main branch and rebased this branch onto the main, and managed to resolve the conflicts (the implementation of StableLM ended up at the same lines SuperPoint is implemented in all the readme's and __init__'s), then I pushed the changes.\r\nFirst question, is it normal that this PR lists all the other commits ?\r\nSecond question, if I merged this way, why is GitHub still complaining and asking me to merge manually through the interface like such : \r\n![image](https://github.com/huggingface/transformers/assets/24275548/7b3306f2-b34f-43a8-bd8b-c2a5c8aa8902)\r\nIn the end I feel like merging manually through the interface would have been faster. Am I missing something ?\r\n\r\nAlso, appears all the tests pass now 🤗 ", "Hi @sbucaille, regarding your questions about rebasing\r\n\r\n> First question, is it normal that this PR lists all the other commits ?\r\n\r\nNo, it shouldn't. Did you force push after rebasing? It's necessary to do this, as rebasing is effectively rewriting the history of the branch, \r\n\r\n> Second question, if I merged this way, why is GitHub still complaining and asking me to merge manually through the interface like such :\r\n\r\nWhat do you mean by \"the interface\"? \r\n\r\nWhen performing a rebase, it might be necessary to resolve conflicts. What rebasing is trying to do is move the first commit of your branch to the head of main. However, as the branch was originally branched off an older commit, there might be conflicts if files in this branch make changes to files which have also been modified on `main`. 
It might also happen if you've also merged in `main` into this branch in the interim, as the branch now contains information about newer, upstream commits on `main`. Therefore, how to rewrite the history, so this branch and all its commits are from the tip of `main` isn't obvious. \r\n\r\nThe easiest way to manage this is to rebase main into the branch often, have short-lived branches and avoid merge commits in the branch.\r\n\r\nHere are some nice articles about rebasing which I found useful when first learning about it: \r\n* https://www.atlassian.com/git/tutorials/merging-vs-rebasing\r\n* https://www.atlassian.com/git/tutorials/rewriting-history/git-rebase", "@amyeroberts Alright I think I start to grasp the base and merge things, this PR is my first in any open source project and I knew in advance I would have been in front of such issues, so thank you for taking the time 🤗\r\nAs of now, is my PR still ok ? Should we continue to review the changes ? What are the next steps ?", "@sbucaille I'd suggest first resolving the git history, so this branch only contains commits that are part of this PR. Once our house is tidy we can review and iterate on the PR. From first glance, things are looking good! Doing this now will be a lot easier than trying to do it in the middle of a review. \r\n\r\n", "@amyeroberts holy that was not easy but I think I understood the mechanism, I once more rebased the branch to include the latest commits. It seems I am now 4 commits ahead of main, which should be fine (until tomorrow morning where new commits will be pushed to main I guess).\r\nSo I guess it is better now to see a bunch of \"force push\" in the pull request than the list of all the commits from the main right ? \r\nHow often do you suggest to do this manipulation ? You talked about short lived branches but I guess this does not apply to mine since I've been working on it for a couple of months right ?\r\nAnyway, I'm ready to move on the last review process", "@sbucaille Great! So main will continue to have lots of new commits. The history you're seeing isn't so much number of commits \"ahead\" of main, rather, the number of commits on top of a certain main commit, let's say ABC. When you rebase, you're re-writing so that the commits are now on top of the latest main commit, say EDF. It's kind of like shifting things along in history. \r\n\r\nYou don't need to rebase every time there's new commits on main. The purpose of rebasing is to make sure your branch and main don't diverge and any changes made to the same file has no clashes and a clear order to apply commits. As well as including any important upstream changes e.g. bug fixes. It's basically the same as having a merge commit but a lot cleaner as only the branches commits are shown.\r\n\r\n> So I guess it is better now to see a bunch of \"force push\" in the pull request than the list of all the commits from the main right ?\r\n\r\nYep! This just lets me know there's been a rewrite of the history. This is important as rebasing can also be used for editing the commits in the branch itself.\r\n\r\n> How often do you suggest to do this manipulation ? You talked about short lived branches but I guess this does not apply to mine since I've been working on it for a couple of months right ?\r\n\r\nIt depends. If you touch a lot of common files, you'll want to do it regularly. Particularly in the final steps when getting ready to merge the branch in - at least once a day. This is a model PR, so probably doesn't need it quite so often. 
Now you've done the difficult rebase, all the rest should be pretty easy to do. In fact, I'd image there would be little to no conflicts. \r\n\r\nWe'd like all branches to live as short as possible. This is less true for model PRs - but we still would like them to be resolved in the order of days/weeks wherever possible. \r\n\r\n> Anyway, I'm ready to move on the last review process\r\n\r\nGreat! Next step is to get all the tests passing on the CI.", "@amyeroberts Seems right, now. Turns out I uncommitted certain files during my rebase, I'll need to practice a bit more in the future 😅 Tests are passing !" ]
1,707
1,708
null
NONE
null
# What does this PR do?

This PR implements SuperPoint, one of the few models that generate keypoints and descriptors given an image, as discussed in [this previous pull request](https://github.com/huggingface/transformers/pull/25697). The goal is to implement this model and a new type of AutoModel: `AutoModelForKeypointDetection` (name to be discussed). This PR is also the replacement of a previous [PR](https://github.com/huggingface/transformers/pull/25786) for which the branch ended up being unusable.

## Who can review?

@amyeroberts @ArthurZucker @rafaelpadilla

## TODO's

- [x] Implement SuperPointConfig and SuperPointModel as PretrainedConfig and PretrainedModel
- [x] Generate a conversion script for the original weights
- [x] Implement the new `AutoModelForKeypointDetection` mapping
- [x] Test the model
- [x] Write documentation
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28966/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28966", "html_url": "https://github.com/huggingface/transformers/pull/28966", "diff_url": "https://github.com/huggingface/transformers/pull/28966.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28966.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28965
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28965/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28965/comments
https://api.github.com/repos/huggingface/transformers/issues/28965/events
https://github.com/huggingface/transformers/issues/28965
2,129,043,434
I_kwDOCUB6oc5-5p_q
28,965
Problems with starting meta-llama/Llama-2-7b-hf model using transformers library. HFValidationError
{ "login": "Karoljv", "id": 93725497, "node_id": "U_kgDOBZYjOQ", "avatar_url": "https://avatars.githubusercontent.com/u/93725497?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Karoljv", "html_url": "https://github.com/Karoljv", "followers_url": "https://api.github.com/users/Karoljv/followers", "following_url": "https://api.github.com/users/Karoljv/following{/other_user}", "gists_url": "https://api.github.com/users/Karoljv/gists{/gist_id}", "starred_url": "https://api.github.com/users/Karoljv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Karoljv/subscriptions", "organizations_url": "https://api.github.com/users/Karoljv/orgs", "repos_url": "https://api.github.com/users/Karoljv/repos", "events_url": "https://api.github.com/users/Karoljv/events{/privacy}", "received_events_url": "https://api.github.com/users/Karoljv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,707
1,707
1,707
NONE
null
### System Info I am using: Windows 11 Python 3.10.4 Torch 2.2.0 Transformers 4.37.2 ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi I have problems with Llama-2-7B-hf model. I was granted access to this model. I am facing issue while running this code: model = "meta-llama/Llama-2-7b-hf" model = AutoModelForCausalLM.from_pretrained( model, cache_dir = "./model/", device_map = "auto" ) tokenizer = AutoTokenizer.from_pretrained(model, cache_dir = "./model/") I am logged into hugging face hub by access token. I providing correct model id used in https://huggingface.co./meta-llama/Llama-2-7b-hf. Here is the error: Here is the error: HFValidationError Traceback (most recent call last) File [d:\Magister\llama_hugging_face\venv\lib\site-packages\transformers\utils\hub.py:385](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:385), in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs) [383](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:383) try: [384](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:384) # Load from URL or cache if already cached --> [385](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:385) resolved_file = hf_hub_download( [386](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:386) path_or_repo_id, [387](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:387) filename, [388](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:388) subfolder=None if len(subfolder) == 0 else subfolder, [389](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:389) repo_type=repo_type, [390](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:390) revision=revision, [391](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:391) cache_dir=cache_dir, [392](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:392) user_agent=user_agent, [393](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:393) force_download=force_download, [394](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:394) proxies=proxies, [395](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:395) resume_download=resume_download, [396](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:396) token=token, [397](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:397) local_files_only=local_files_only, [398](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:398) ) [399](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:399) except GatedRepoError as e: File 
[d:\Magister\llama_hugging_face\venv\lib\site-packages\huggingface_hub\utils\_validators.py:110](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:110), in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs) [109](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:109) if arg_name in ["repo_id", "from_id", "to_id"]: --> [110](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:110) validate_repo_id(arg_value) [112](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:112) elif arg_name == "token" and arg_value is not None: File [d:\Magister\llama_hugging_face\venv\lib\site-packages\huggingface_hub\utils\_validators.py:164](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:164), in validate_repo_id(repo_id) [163](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:163) if not REPO_ID_REGEX.match(repo_id): --> [164](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:164) raise HFValidationError( [165](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:165) "Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are" [166](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:166) " forbidden, '-' and '.' cannot start or end the name, max length is 96:" [167](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:167) f" '{repo_id}'." [168](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:168) ) [170](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/huggingface_hub/utils/_validators.py:170) if "--" in repo_id or ".." in repo_id: HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaSdpaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) )'. 
The above exception was the direct cause of the following exception: OSError Traceback (most recent call last) Cell In[12], [line 9](vscode-notebook-cell:?execution_count=12&line=9) [1](vscode-notebook-cell:?execution_count=12&line=1) model = "meta-llama/Llama-2-7b-hf" [3](vscode-notebook-cell:?execution_count=12&line=3) model = AutoModelForCausalLM.from_pretrained( [4](vscode-notebook-cell:?execution_count=12&line=4) model, [5](vscode-notebook-cell:?execution_count=12&line=5) cache_dir = "[./model/](https://file+.vscode-resource.vscode-cdn.net/d%3A/Magister/llama_hugging_face/model/)", [6](vscode-notebook-cell:?execution_count=12&line=6) device_map = "auto" [7](vscode-notebook-cell:?execution_count=12&line=7) ) ----> [9](vscode-notebook-cell:?execution_count=12&line=9) tokenizer = AutoTokenizer.from_pretrained(model, cache_dir = "[./model/](https://file+.vscode-resource.vscode-cdn.net/d%3A/Magister/llama_hugging_face/model/)") File [d:\Magister\llama_hugging_face\venv\lib\site-packages\transformers\models\auto\tokenization_auto.py:758](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:758), in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) [755](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:755) return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) [757](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:757) # Next, let's try to use the tokenizer_config file to get the tokenizer class. --> [758](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:758) tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs) [759](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:759) if "_commit_hash" in tokenizer_config: [760](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:760) kwargs["_commit_hash"] = tokenizer_config["_commit_hash"] File [d:\Magister\llama_hugging_face\venv\lib\site-packages\transformers\models\auto\tokenization_auto.py:590](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:590), in get_tokenizer_config(pretrained_model_name_or_path, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, **kwargs) [587](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:587) token = use_auth_token [589](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:589) commit_hash = kwargs.get("_commit_hash", None) --> [590](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:590) resolved_config_file = cached_file( [591](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:591) pretrained_model_name_or_path, [592](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:592) TOKENIZER_CONFIG_FILE, [593](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:593) cache_dir=cache_dir, 
[594](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:594) force_download=force_download, [595](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:595) resume_download=resume_download, [596](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:596) proxies=proxies, [597](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:597) token=token, [598](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:598) revision=revision, [599](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:599) local_files_only=local_files_only, [600](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:600) subfolder=subfolder, [601](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:601) _raise_exceptions_for_missing_entries=False, [602](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:602) _raise_exceptions_for_connection_errors=False, [603](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:603) _commit_hash=commit_hash, [604](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:604) ) [605](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:605) if resolved_config_file is None: [606](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/models/auto/tokenization_auto.py:606) logger.info("Could not locate the tokenizer configuration file, will try to use the model config instead.") File [d:\Magister\llama_hugging_face\venv\lib\site-packages\transformers\utils\hub.py:450](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:450), in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs) [448](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:448) raise EnvironmentError(f"There was a specific connection error when trying to load {path_or_repo_id}:\n{err}") [449](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:449) except HFValidationError as e: --> [450](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:450) raise EnvironmentError( [451](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:451) f"Incorrect path_or_model_id: '{path_or_repo_id}'. Please provide either the path to a local folder or the repo_id of a model on the Hub." 
[452](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:452) ) from e [453](file:///D:/Magister/llama_hugging_face/venv/lib/site-packages/transformers/utils/hub.py:453) return resolved_file OSError: Incorrect path_or_model_id: 'LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(32000, 4096) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaSdpaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=4096, bias=False) (v_proj): Linear(in_features=4096, out_features=4096, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=11008, bias=False) (up_proj): Linear(in_features=4096, out_features=11008, bias=False) (down_proj): Linear(in_features=11008, out_features=4096, bias=False) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=32000, bias=False) )'. Please provide either the path to a local folder or the repo_id of a model on the Hub. ### Expected behavior Correctly loaded model
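The traceback above shows the repr of the loaded model being used as a repo id, which suggests the `model` variable was reused for both the id string and the model object; a minimal sketch that keeps them separate:

```python
# Minimal sketch: keep the repo id and the loaded model in separate variables.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"

model = AutoModelForCausalLM.from_pretrained(model_id, cache_dir="./model/", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir="./model/")  # pass the string, not the model
```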
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28965/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28965/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28964
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28964/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28964/comments
https://api.github.com/repos/huggingface/transformers/issues/28964/events
https://github.com/huggingface/transformers/issues/28964
2,129,025,459
I_kwDOCUB6oc5-5lmz
28,964
RuntimeError: result type Float can't be cast to the desired output type Byte
{ "login": "KaifAhmad1", "id": 98801504, "node_id": "U_kgDOBeOXYA", "avatar_url": "https://avatars.githubusercontent.com/u/98801504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KaifAhmad1", "html_url": "https://github.com/KaifAhmad1", "followers_url": "https://api.github.com/users/KaifAhmad1/followers", "following_url": "https://api.github.com/users/KaifAhmad1/following{/other_user}", "gists_url": "https://api.github.com/users/KaifAhmad1/gists{/gist_id}", "starred_url": "https://api.github.com/users/KaifAhmad1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KaifAhmad1/subscriptions", "organizations_url": "https://api.github.com/users/KaifAhmad1/orgs", "repos_url": "https://api.github.com/users/KaifAhmad1/repos", "events_url": "https://api.github.com/users/KaifAhmad1/events{/privacy}", "received_events_url": "https://api.github.com/users/KaifAhmad1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey ! You are using custom code, we probably won't have the bandwidth to debug it for you! Best recommendation is to put a debug breakpoint and see what is happening! 🤗 ", "@KaifAhmad1 thanks for the issue! The issue seems to be related to the custom code that is on the Hub ! \r\nI recommend opening an issue in https://huggingface.co./RWKV/HF_v5-Eagle-7B/blob/main/modeling_rwkv5.py \r\nMake sure that the trust remote code model has correctly implemented layer re-scaling for quantized layers: https://github.com/huggingface/transformers/blob/d90acc16437e8c9e45e068fa1cc1a263b9a7208f/src/transformers/models/rwkv/modeling_rwkv.py#L711", "@KaifAhmad1 I attempted to fix it here: https://huggingface.co./RWKV/HF_v5-Eagle-7B/discussions/5 can you try to pass `revision=\"refs/pr/5\"` in `transformers.AutoModelForCausalLM.from_pretrained`?", " Hey, @younesbelkada still not getting right results\r\n\r\n``` Python \r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n trust_remote_code=True,\r\n config=model_config,\r\n revision=\"refs/pr/5\",\r\n device_map='auto',\r\n use_auth_token=hf_auth,\r\n quantization_config=bnb_config,\r\n low_cpu_mem_usage=True\r\n)\r\n```\r\n\r\n```\r\n/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.\r\n warnings.warn(\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-6-59c99693831c>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 model = transformers.AutoModelForCausalLM.from_pretrained(\r\n 2 model_id,\r\n 3 trust_remote_code=True,\r\n 4 config=model_config,\r\n 5 revision=\"refs/pr/5\",\r\n\r\n1 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in register(cls, config_class, model_class, exist_ok)\r\n 584 \"\"\"\r\n 585 if hasattr(model_class, \"config_class\") and model_class.config_class != config_class:\r\n--> 586 raise ValueError(\r\n 587 \"The model class you are passing has a `config_class` attribute that is not consistent with the \"\r\n 588 f\"config class you passed (model has {model_class.config_class} and you passed {config_class}. Fix \"\r\n\r\nValueError: The model class you are passing has a `config_class` attribute that is not consistent with the config class you passed (model has <class 'transformers_modules.RWKV.HF_v5-Eagle-7B.5d64a05fb9748b26b55162cb56162c9da83135c4.configuration_rwkv5.Rwkv5Config'> and you passed <class 'transformers_modules.RWKV.HF_v5-Eagle-7B.d777ebf28bd74abf9ada4a07f58c38c6688f5365.configuration_rwkv5.Rwkv5Config'>. Fix one of those so they match!\r\n```", "@KaifAhmad1 you need to pass that argument to both automodelxxx and autoconfig", " @younesbelkada @ArthurZucker Now getting attribute error. 
\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\n[<ipython-input-12-cab67dc592cd>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 result = generate_text(\"What are the primary mechanisms underlying antibiotic resistance, and how can we develop strategies to combat it?\")\r\n 2 print(result)\r\n\r\n17 frames\r\n[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in __getattr__(self, name)\r\n 1693 if name in modules:\r\n 1694 return modules[name]\r\n-> 1695 raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\n 1696 \r\n 1697 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:\r\n\r\nAttributeError: 'Rwkv5Model' object has no attribute '_bnb_4bit_dequantize_and_rescale'\r\n```\r\n\r\nHere is the colab link for your reference: \r\nhttps://colab.research.google.com/drive/1glmO6e7Qro3mQtCwnmoBGi537ej7aTHl?usp=sharing", "@KaifAhmad1 thanks for testing out! \r\nI invite you to continue from this branch: https://huggingface.co./RWKV/HF_v5-Eagle-7B/tree/refs%2Fpr%2F5 as it uses code on the Hub feature, and modify the modeling file to your needs. Make sure to have the same quant / dequant logic as in https://github.com/huggingface/transformers/blob/main/src/transformers/models/rwkv/modeling_rwkv.py . Let me know if you need any help !", " @younesbelkada PR is waiting for merge.\r\nhttps://huggingface.co./RWKV/HF_v5-Eagle-7B/discussions/7#65cdc51d0cdcf1ddd539b712" ]
1,707
1,708
1,708
NONE
null
### System Info OS: Windows 11 x64 Cuda: 12.1 transformers: 4.37.2 sentence_transformers: 2.3.1 bitsandbytes: 0.42.0 pip: 23.1.2 Python 3.10.10 ### Who can help? Hey, @ArthurZucker , @younesbelkada Please advise on this issue. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ### Here are the code snippets; I am using the RWKV-architecture model `RWKV/HF_v5-Eagle-7B` for my KG-enhanced RAG use cases on domain-specific datasets. ``` Python from torch import cuda, bfloat16 import transformers model_id = 'RWKV/HF_v5-Eagle-7B' device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu' ``` ``` Python # begin initializing HF items, you need an access token model_config = transformers.AutoConfig.from_pretrained( model_id, use_auth_token=hf_auth, trust_remote_code=True ) ``` ``` Python # BnB Configuration bnb_config = transformers.BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=bfloat16 ) ``` ``` Python model = transformers.AutoModelForCausalLM.from_pretrained( model_id, trust_remote_code=True, config=model_config, device_map='auto', use_auth_token=hf_auth, quantization_config=bnb_config, low_cpu_mem_usage=True ) ``` ``` Python tokenizer = transformers.AutoTokenizer.from_pretrained( model_id, use_auth_token=hf_auth ) ``` #### Stopping Criteria ``` Python # List of strings representing stop signals or markers stop_list = ['\nHuman:', '\n```\n'] stop_token_ids = [tokenizer(x)['input_ids'] for x in stop_list] stop_token_ids ``` ``` Python # Convert token IDs to LongTensor objects import torch stop_token_ids = [torch.LongTensor(x).to(device) for x in stop_token_ids] stop_token_ids ``` ``` Python from transformers import StoppingCriteria, StoppingCriteriaList # Define a custom stopping criteria class class StopOnTokens(StoppingCriteria): def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool: for stop_ids in stop_token_ids: if torch.equal(input_ids[0][-len(stop_ids):], stop_ids): return True return False stopping_criteria = StoppingCriteriaList([StopOnTokens()]) ``` ``` Python # Set up text generation pipeline generate_text = transformers.pipeline( model=model, tokenizer=tokenizer, return_full_text=True, task='text-generation', stopping_criteria=stopping_criteria, temperature=0.3, max_new_tokens=512, repetition_penalty=1.1 ) ``` ``` Python result = generate_text("What are the primary mechanisms underlying antibiotic resistance, and how can we develop strategies to combat it?") print(result) ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [<ipython-input-26-cab67dc592cd>](https://localhost:8080/#) in <cell line: 1>() ----> 1 result = generate_text("What are the primary mechanisms underlying antibiotic resistance, and how can we develop strategies to combat it?") 2 print(result) 16 frames [~/.cache/huggingface/modules/transformers_modules/RWKV/HF_v5-Eagle-7B/d777ebf28bd74abf9ada4a07f58c38c6688f5365/modeling_rwkv5.py](https://localhost:8080/#) in _rescale_layers(self) 748 block.feed_forward.value.weight.mul_(2 ** int(block_id // self.config.rescale_every)) 749 else: --> 750 block.attention.output.weight.div_(2 ** int(block_id // self.config.rescale_every)) 751 block.feed_forward.value.weight.div_(2 ** int(block_id // self.config.rescale_every)) 752 RuntimeError: result type Float can't be cast to the desired output type Byte ### Expected behavior It should return an answer to the given query without raising any exceptions.
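For context on the traceback above: a 4-bit quantized layer stores its weights as integer tensors, and the in-place `div_` in `_rescale_layers` then tries to write a float result back into that integer storage. A minimal sketch, assuming nothing beyond plain PyTorch (the tensor below is illustrative, not the actual RWKV weight), reproduces the same error class:

```python
import torch

# Quantized weights are typically stored as integer tensors (e.g. torch.uint8).
quantized_weight = torch.ones(4, 4, dtype=torch.uint8)

try:
    # In-place true division produces a Float result, which cannot be cast back
    # into the Byte (uint8) storage of the original tensor.
    quantized_weight.div_(2)
except RuntimeError as err:
    print(err)  # result type Float can't be cast to the desired output type Byte
```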
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28964/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28963
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28963/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28963/comments
https://api.github.com/repos/huggingface/transformers/issues/28963/events
https://github.com/huggingface/transformers/issues/28963
2,129,019,295
I_kwDOCUB6oc5-5kGf
28,963
2
{ "login": "goalend", "id": 110501477, "node_id": "U_kgDOBpYeZQ", "avatar_url": "https://avatars.githubusercontent.com/u/110501477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goalend", "html_url": "https://github.com/goalend", "followers_url": "https://api.github.com/users/goalend/followers", "following_url": "https://api.github.com/users/goalend/following{/other_user}", "gists_url": "https://api.github.com/users/goalend/gists{/gist_id}", "starred_url": "https://api.github.com/users/goalend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goalend/subscriptions", "organizations_url": "https://api.github.com/users/goalend/orgs", "repos_url": "https://api.github.com/users/goalend/repos", "events_url": "https://api.github.com/users/goalend/events{/privacy}", "received_events_url": "https://api.github.com/users/goalend/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,707
1,707
1,707
NONE
null
2 _Originally posted by @goalend in https://github.com/anchore/grype/pull/1710#discussion_r1485608950_ _Originally posted by @goalend in https://github.com/huggingface/transformers/issues/28962_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28963/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/28962
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28962/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28962/comments
https://api.github.com/repos/huggingface/transformers/issues/28962/events
https://github.com/huggingface/transformers/issues/28962
2,129,019,145
I_kwDOCUB6oc5-5kEJ
28,962
2
{ "login": "goalend", "id": 110501477, "node_id": "U_kgDOBpYeZQ", "avatar_url": "https://avatars.githubusercontent.com/u/110501477?v=4", "gravatar_id": "", "url": "https://api.github.com/users/goalend", "html_url": "https://github.com/goalend", "followers_url": "https://api.github.com/users/goalend/followers", "following_url": "https://api.github.com/users/goalend/following{/other_user}", "gists_url": "https://api.github.com/users/goalend/gists{/gist_id}", "starred_url": "https://api.github.com/users/goalend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/goalend/subscriptions", "organizations_url": "https://api.github.com/users/goalend/orgs", "repos_url": "https://api.github.com/users/goalend/repos", "events_url": "https://api.github.com/users/goalend/events{/privacy}", "received_events_url": "https://api.github.com/users/goalend/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,707
1,707
1,707
NONE
null
2 _Originally posted by @goalend in https://github.com/anchore/grype/pull/1710#discussion_r1485608950_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28962/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/28961
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28961/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28961/comments
https://api.github.com/repos/huggingface/transformers/issues/28961/events
https://github.com/huggingface/transformers/issues/28961
2,128,952,481
I_kwDOCUB6oc5-5Tyh
28,961
Option to set the tracking URI for MLflowCallback.
{ "login": "seanswyi", "id": 20367759, "node_id": "MDQ6VXNlcjIwMzY3NzU5", "avatar_url": "https://avatars.githubusercontent.com/u/20367759?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seanswyi", "html_url": "https://github.com/seanswyi", "followers_url": "https://api.github.com/users/seanswyi/followers", "following_url": "https://api.github.com/users/seanswyi/following{/other_user}", "gists_url": "https://api.github.com/users/seanswyi/gists{/gist_id}", "starred_url": "https://api.github.com/users/seanswyi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seanswyi/subscriptions", "organizations_url": "https://api.github.com/users/seanswyi/orgs", "repos_url": "https://api.github.com/users/seanswyi/repos", "events_url": "https://api.github.com/users/seanswyi/events{/privacy}", "received_events_url": "https://api.github.com/users/seanswyi/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
closed
false
null
[]
[ "Hi @seanswyi, thanks for opening this feature request! \r\n\r\nThe integrations are maintained by third party contributors, rather than the transformers team. If you or anyone else in the community would like to open a PR to add this we'd be happy to review! ", "Thanks for the heads up @amyeroberts! I'll submit a PR when I can." ]
1,707
1,708
1,708
CONTRIBUTOR
null
### Feature request Option to set not only the experiment name but also the tracking URI for MLflow. ### Motivation My company and I use our own MLflow URI and all of our code has `mlflow.set_tracking_uri($URI)` inside. I'm not seeing such an option for the MLflowCallback and am only seeing an option to set the experiment. The `run_name` seems to be using the `run_name` value in TrainingArguments, so there's not really any problem with that. ### Your contribution It seems like adding a few lines of code to the `setup` method would do the trick. https://github.com/huggingface/transformers/blob/345b9b1a6a308a1fa6559251eb33ead2211240ac/src/transformers/integrations/integration_utils.py#L952
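A minimal sketch of the kind of change the request describes, assuming the callback can simply forward a URI to `mlflow.set_tracking_uri()` during setup; the environment-variable name, class name, and placement here are illustrative assumptions, not the actual `transformers` implementation:

```python
import os

import mlflow


class MLflowCallbackSketch:
    # Illustrative only -- not the real transformers.integrations.MLflowCallback.

    def setup(self, args, state, model):
        # Hypothetical addition: honor a tracking URI from the environment,
        # mirroring the existing mlflow.set_tracking_uri($URI) calls in user code.
        tracking_uri = os.environ.get("MLFLOW_TRACKING_URI")
        if tracking_uri:
            mlflow.set_tracking_uri(tracking_uri)
        # ... the existing experiment-name / run-name setup would follow here ...
```

With a change along these lines, exporting the tracking URI before launching training would point all runs at the company-internal MLflow server without touching the training script.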
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28961/timeline
completed
null
null