Dataset columns and types (one GitHub issue or pull request per record):

| Column | Type | Values |
| --- | --- | --- |
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/28356
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28356/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28356/comments
https://api.github.com/repos/huggingface/transformers/issues/28356/events
https://github.com/huggingface/transformers/issues/28356
2,067,228,709
I_kwDOCUB6oc57N2gl
28,356
[generation] Exact Search Decoding
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "FYI @gante ", "Hi @Saibo-creator 👋 \r\n\r\nI'll do my usual bargain! I haven't seen demand for this feature, so my current decision is negative. However, if this comment reaches 10 reactions within a 3-month time spawn, I'll revisit this decision -- it means the community is interested in the feature 🙌 \r\n\r\n(whoever does the 10th reaction, please ping me)" ]
1,704
1,704
null
CONTRIBUTOR
null
### Feature request Hello Hugging Face Transformers Team, I am writing to suggest an "exact search" decoding method, proposed in https://aclanthology.org/D19-1331/ Greedy search and beam search are both "greedy" in the sense that they are not guaranteed to find the globally most likely generation. Exact search is a DFS-based search method with branch pruning that is guaranteed to return the global optimum. The original implementation is located here: [DFS.py in SGNMT](https://github.com/ucam-smt/sgnmt/blob/master/cam/sgnmt/decoding/dfs.py). It just needs to be adapted to the transformers generation module. ### Motivation It has strong research value because it returns the global optimum. However, it may not be very practical for general users because it can be very slow. ### Your contribution I could take on the job of submitting a PR if this is of interest to you. Otherwise, I can work on it as a fork under my account.
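For readers unfamiliar with the idea, here is a minimal sketch of DFS-based exact search with branch pruning. It is illustrative only — it is not the SGNMT code and not an existing `generate` option — and the helper name, the `max_new_tokens` cap, and the most-likely-first expansion order are assumptions made for the example.

```python
import torch

def exact_search(model, input_ids, eos_token_id, max_new_tokens=20):
    """Illustrative DFS decoder with branch pruning (not part of transformers)."""
    best = {"score": float("-inf"), "ids": None}

    def dfs(ids, score):
        # Log-probabilities are <= 0, so extending a prefix can never raise its score;
        # prune any branch already no better than the best finished hypothesis.
        if score <= best["score"]:
            return
        new_tokens = ids.shape[1] - input_ids.shape[1]
        if (new_tokens > 0 and ids[0, -1].item() == eos_token_id) or new_tokens >= max_new_tokens:
            best["score"], best["ids"] = score, ids
            return
        with torch.no_grad():
            log_probs = torch.log_softmax(model(ids).logits[0, -1], dim=-1)
        # Expand the most likely continuations first so a good complete hypothesis
        # is found early and pruning removes most of the search tree.
        for tok in torch.argsort(log_probs, descending=True):
            dfs(torch.cat([ids, tok.view(1, 1)], dim=-1), score + log_probs[tok].item())

    dfs(input_ids, 0.0)
    return best["ids"], best["score"]
```

Recursing over the full vocabulary at every step is what makes the method exact within the length horizon — and also why, as noted above, it is far slower than beam search.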
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28356/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28355
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28355/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28355/comments
https://api.github.com/repos/huggingface/transformers/issues/28355/events
https://github.com/huggingface/transformers/issues/28355
2,067,008,954
I_kwDOCUB6oc57NA26
28,355
Using add_generation_prompt with tokenizer.apply_chat_template does not add the required assistant start token
{ "login": "srikant86panda", "id": 18262494, "node_id": "MDQ6VXNlcjE4MjYyNDk0", "avatar_url": "https://avatars.githubusercontent.com/u/18262494?v=4", "gravatar_id": "", "url": "https://api.github.com/users/srikant86panda", "html_url": "https://github.com/srikant86panda", "followers_url": "https://api.github.com/users/srikant86panda/followers", "following_url": "https://api.github.com/users/srikant86panda/following{/other_user}", "gists_url": "https://api.github.com/users/srikant86panda/gists{/gist_id}", "starred_url": "https://api.github.com/users/srikant86panda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srikant86panda/subscriptions", "organizations_url": "https://api.github.com/users/srikant86panda/orgs", "repos_url": "https://api.github.com/users/srikant86panda/repos", "events_url": "https://api.github.com/users/srikant86panda/events{/privacy}", "received_events_url": "https://api.github.com/users/srikant86panda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Rocketknight1 ", "Hi @srikant86panda, Mistral and Mixtral both use the LLaMA prompt format, which does not include a special token at the start of assistant responses. Instead, they enforce that all conversations must alternate user/assistant/user/assistant, and the assistant response is always written immediately after the user message is finished with the `[/INST]` token.\r\n\r\nAs a result, `add_generation_prompt` does not have any effect for those models, and this is the correct and intended behaviour. I'm going to close this issue for now, but please let me know if you have any further problems or questions!", "@Rocketknight1 thanks for clarifying it. My concern is good to be closed." ]
1,704
1,704
1,704
NONE
null
### System Info Version: transformers: 4.36.1 and transformers @ git+https://github.com/huggingface/transformers.git@5d36025ca13d05151b7a0c761e90d429c4644a30 Tokenizer: tokenizers==0.15.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running with ```python from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2") messages = [ {"role": "user", "content": "What is your favourite condiment?"}, {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"}, {"role": "user", "content": "Do you have mayonnaise recipes?"}, ] encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=False, tokenize=False) print(f'without add_generation_prompt: {encodeds}') encodeds = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False) print(f'with add_generation_prompt: {encodeds}') ``` ### Expected behavior With add_generation_prompt with tokenizer.apply_chat_template does should add the required assistant start token as discussed here: https://github.com/huggingface/transformers/issues/26539 Attached is a screenshot for reference ![Screenshot 2024-01-05 at 2 51 47 PM](https://github.com/huggingface/transformers/assets/18262494/87542f95-c259-47ae-abf0-61eefcaf1253)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28355/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28354
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28354/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28354/comments
https://api.github.com/repos/huggingface/transformers/issues/28354/events
https://github.com/huggingface/transformers/pull/28354
2,066,891,430
PR_kwDOCUB6oc5jSb3a
28,354
fix auxiliary loss training in DetrSegmentation
{ "login": "SangbumChoi", "id": 34004152, "node_id": "MDQ6VXNlcjM0MDA0MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SangbumChoi", "html_url": "https://github.com/SangbumChoi", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts Hi back again with the test!\r\nMainly I added function called test_forward_auxiliary_loss. This will check if the two major components when the config.auxiliary_loss is set to True\r\n1. whether output of auxiliary loss is not None\r\n2. check the output len is equivalent to the num_hidden_layers - 1 (-1 is due to last layers is not 'aux')\r\n\r\nBTW, this test will also check in not only segmentation but also object_detection_model!\r\n\r\nfollowing result is the pytest of detr\r\n```\r\nroot@0a2b4fe54761:/mnt/nas2/users/sbchoi/transformers# RUN_SLOW=1 pytest tests/models/detr\r\n================================================================================ test session starts =================================================================================\r\nplatform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\nrootdir: /mnt/nas2/users/sbchoi/transformers\r\nconfigfile: pyproject.toml\r\nplugins: hypothesis-6.92.0, hydra-core-1.3.2\r\ncollected 154 items\r\n\r\ntests/models/detr/test_image_processing_detr.py ................ [ 10%]\r\ntests/models/detr/test_modeling_detr.py .......................ssssss..sssssssss......s...............s......s........s....ssssssss.ss.sssssssssssssss.s..s.......s........... [ 97%]\r\n.... [100%]\r\n\r\n================================================================================== warnings summary ==================================================================================\r\n../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\nsrc/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n\r\ntests/models/detr/test_modeling_detr.py::DetrModelTest::test_disk_offload_bin\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. 
To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\n\r\ntests/models/detr/test_modeling_detr.py::DetrModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:460: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n\r\ntests/models/detr/test_modeling_detr.py::DetrModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:463: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n\r\ntests/models/detr/test_modeling_detr.py::DetrModelTest::test_pipeline_image_segmentation\r\ntests/models/detr/test_modeling_detr.py::DetrModelTest::test_pipeline_object_detection\r\n /opt/conda/lib/python3.10/site-packages/huggingface_hub/repocard.py:105: UserWarning: Repo card metadata block was not found. Setting CardData to empty.\r\n warnings.warn(\"Repo card metadata block was not found. Setting CardData to empty.\")\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n============================================================== 107 passed, 47 skipped, 10 warnings in 110.95s (0:01:50) ==============================================================\r\nroot@0a2b4fe54761:/mnt/nas2/users/sbchoi/transformers#\r\n```", "@amyeroberts No problem! However, should I make another PR or just modify in this PR?", "@SangbumChoi Up to you. I'd suggest making a follow-up PR to prevent holding this one up, but happy to go for whatever you'd prefer", "@amyeroberts Yeah, I will make follow-up PR and link this PR also. If there's nothing else to add for this topic, let's MERGE this first :)", "@SangbumChoi Merged! Thanks again for this contribution 💪 " ]
1,704
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [v] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> Hi, @amyeroberts. I fixed the training error when setting auxiliary loss to set 'True' `outputs_class = self.detr.class_embed(hs)` Since original DETR does not have their own class of self.class_labels_classifier, we have to import from self.detr see https://github.com/facebookresearch/detr/blob/3af9fa878e73b6894ce3596450a8d9b89d918ca9/models/segmentation.py#L49
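As context for reviewers, a minimal way to exercise the code path this PR fixes might look like the following. It is an assumption-laden sketch, not the PR's test code: the standard `DetrConfig` flags are used, and `use_timm_backbone=False` is only there to avoid the optional timm dependency.

```python
from transformers import DetrConfig, DetrForSegmentation

# Illustrative only: the bug fixed here only shows up when auxiliary decoding
# losses are enabled and labels are passed during training.
config = DetrConfig(auxiliary_loss=True, use_timm_backbone=False)
model = DetrForSegmentation(config)
# With labels in the forward pass, every intermediate decoder layer now
# contributes class/box predictions to the loss, instead of failing on the
# missing classifier attribute described above.
```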
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28354/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28354", "html_url": "https://github.com/huggingface/transformers/pull/28354", "diff_url": "https://github.com/huggingface/transformers/pull/28354.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28354.patch", "merged_at": 1704795428000 }
https://api.github.com/repos/huggingface/transformers/issues/28353
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28353/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28353/comments
https://api.github.com/repos/huggingface/transformers/issues/28353/events
https://github.com/huggingface/transformers/issues/28353
2,066,865,837
I_kwDOCUB6oc57Md6t
28,353
Weird Tokenization when Training New Tokenizer from GPT2 Tokenizer using train_new_from_iterator
{ "login": "minmie", "id": 40080081, "node_id": "MDQ6VXNlcjQwMDgwMDgx", "avatar_url": "https://avatars.githubusercontent.com/u/40080081?v=4", "gravatar_id": "", "url": "https://api.github.com/users/minmie", "html_url": "https://github.com/minmie", "followers_url": "https://api.github.com/users/minmie/followers", "following_url": "https://api.github.com/users/minmie/following{/other_user}", "gists_url": "https://api.github.com/users/minmie/gists{/gist_id}", "starred_url": "https://api.github.com/users/minmie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/minmie/subscriptions", "organizations_url": "https://api.github.com/users/minmie/orgs", "repos_url": "https://api.github.com/users/minmie/repos", "events_url": "https://api.github.com/users/minmie/events{/privacy}", "received_events_url": "https://api.github.com/users/minmie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! You should have a look at https://github.com/huggingface/tokenizers/issues/203 it's expected! 🤗 ", "@ArthurZucker thanks for your help." ]
1,704
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.33.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: 0.22.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer from datasets import load_dataset path = r'E:\pythonWork\nlp\ner\soucre_data\cluener.jsonl' # a chinese text dataset raw_data = load_dataset("json", data_files=path, split='train') training_corpus = ( raw_data[i : i + 1000]["text"] for i in range(0, len(raw_data), 1000) ) old_tokenizer = AutoTokenizer.from_pretrained("E:\pythonWork\models\gpt2") tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 10000) example = '就是去美国大使馆的官方网站,它有中文版,去把每一条仔细研究透了,把每一个表格和材料都准备好了' # chinese text old_tokens = old_tokenizer.tokenize(example) print('old_tokens:',old_tokens) new_tokens = tokenizer.tokenize(example) print('new_tokens',new_tokens) ``` cluener.jsonl as follow: ![image](https://github.com/huggingface/transformers/assets/40080081/da268899-8454-47cc-af19-50e3804ff51f) ### Expected behavior I trained my own tokenizer using chinese text base on this demo (https://huggingface.co./learn/nlp-course/chapter6/2) . And the outputs of the two tokenizer are as follow: ``` old_tokens: ['å°', '±', 'æĺ¯', 'åİ', '»', 'ç', '¾', 'İ', 'åĽ', '½', '大', '使', 'é', '¦', 'Ĩ', 'çļĦ', 'å®', 'ĺ', 'æĸ¹', 'ç', '½', 'ij', 'ç«', 'Ļ', 'ï', '¼', 'Į', 'å®', 'ĥ', 'æľ', 'ī', 'ä¸Ń', 'æĸ', 'ĩ', 'çīĪ', 'ï', '¼', 'Į', 'åİ', '»', 'æ', 'Ĭ', 'Ĭ', 'æ', '¯', 'ı', 'ä¸Ģ', 'æĿ', '¡', 'ä»', 'Ķ', 'ç', '»', 'Ĩ', 'ç', 'ł', 'Ķ', 'ç', '©', '¶', 'éĢ', 'ı', 'äº', 'Ĩ', 'ï', '¼', 'Į', 'æ', 'Ĭ', 'Ĭ', 'æ', '¯', 'ı', 'ä¸Ģ', 'ä¸', 'ª', 'è¡', '¨', 'æł', '¼', 'å', 'Ĵ', 'Į', 'æĿ', 'IJ', 'æĸ', 'Ļ', 'éĥ', '½', 'åĩ', 'Ĩ', 'å¤', 'ĩ', 'å¥', '½', 'äº', 'Ĩ'] new_tokens ['å°±æĺ¯', 'åİ»', 'ç¾İåĽ½', '大使é¦Ĩ', 'çļĦ', 'å®ĺæĸ¹ç½ijç«Ļ', 'ï¼Į', 'å®ĥ', 'æľī', 'ä¸ŃæĸĩçīĪ', 'ï¼Į', 'åİ»', 'æĬĬ', 'æ¯ı', 'ä¸ĢæĿ¡', 'ä»Ķç»Ĩ', 'çłĶ究', 'éĢı', 'äºĨ', 'ï¼Į', 'æĬĬ', 'æ¯ıä¸Ģ个', '表', 'æł¼', 'åĴĮ', 'æĿIJæĸĻ', 'éĥ½', 'åĩĨå¤ĩ', '好', 'äºĨ'] ``` emm,both outputs are really weird. is this right?
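The garbled-looking tokens are the byte-level BPE's internal representation: GPT-2's tokenizer maps raw bytes to printable unicode symbols, so multi-byte Chinese characters print as sequences like `'å°', '±'` but still decode back to the original text. A quick way to verify, using the public `gpt2` checkpoint here as a stand-in for the local path in the report:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in for the local checkpoint

example = "就是去美国大使馆的官方网站"
tokens = tokenizer.tokenize(example)
print(tokens)                                        # byte-level symbols, look garbled
print(tokenizer.convert_tokens_to_string(tokens))    # round-trips to the original Chinese text
```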
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28353/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28352
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28352/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28352/comments
https://api.github.com/repos/huggingface/transformers/issues/28352/events
https://github.com/huggingface/transformers/pull/28352
2,066,815,165
PR_kwDOCUB6oc5jSLps
28,352
Add: fsdp accelerate version warning
{ "login": "jp1924", "id": 93233241, "node_id": "U_kgDOBY6gWQ", "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jp1924", "html_url": "https://github.com/jp1924", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "organizations_url": "https://api.github.com/users/jp1924/orgs", "repos_url": "https://api.github.com/users/jp1924/repos", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "received_events_url": "https://api.github.com/users/jp1924/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @pacman100 ", "# Reproduction Code\r\n```python\r\nfrom transformers import (\r\n TrainingArguments,\r\n Trainer,\r\n BertForSequenceClassification,\r\n BertTokenizer,\r\n)\r\nfrom datasets import load_dataset\r\nfrom transformers.utils import check_min_version\r\nfrom accelerate import __version__ as accelerate_version\r\nfrom packaging import version\r\n\r\nif version.parse(accelerate_version) > version.parse(\"0.24.1\"):\r\n raise RuntimeError(\r\n \"This error occurs in accelerate versions 0.24.1 and earlier. Please lower your accelerate version\"\r\n )\r\n\r\n# This error occurs in transformers 4.36.0 and later\r\ncheck_min_version(\"4.36.0\")\r\n\r\n\r\ndef main(training_args: TrainingArguments) -> None:\r\n model_name_or_path = \"brenomatos/bert-base-uncased\"\r\n tokenizer = BertTokenizer.from_pretrained(model_name_or_path)\r\n model = BertForSequenceClassification.from_pretrained(model_name_or_path)\r\n\r\n trainer = Trainer(\r\n model=model,\r\n tokenizer=tokenizer,\r\n args=training_args,\r\n )\r\n trainer.save_model()\r\n trainer.save_state()\r\n\r\n\r\nif \"__main__\" in __name__:\r\n training_args = TrainingArguments(\r\n output_dir=\"./output_dir\",\r\n run_name=\"FSDP_save_test\",\r\n fsdp=\"full_shard auto_wrap\",\r\n fsdp_transformer_layer_cls_to_wrap=\"BertEncoder\",\r\n )\r\n main(training_args)\r\n```\r\nrun script\r\n>torchrun --nproc_per_node=4 ./fsdp_error_test.py\r\n# Version\r\ntorch 2.0.1+cu118\r\ntransformers 4.36.2\r\naccelerate 0.24.0\r\n# Description\r\nThank you for your interest. @pacman100 \r\n \r\nFirst of all, this is an issue I discovered while trying to further pretrain the LLAMA2 13B model.\r\nMy computer was running low on memory, so if I tried to save a checkpoint in the middle of training, I would get a CPU OOM and the training would end.\r\nHowever, saving the checkpoint after the training was over did not cause the OOM, so I had been using `trainer.save_model()` to save the model after all the training was finished. \r\nHowever, when I saved the model after updating transformers to the latest version, the model was not saved properly.\r\n(No `*.bin` file was created in output_dir.)\r\n\r\nI looked up the problem and realized that it was caused by the `version.parse(accelerate_version) > version.parse(\"0.24.1\")` code added in transformers 4.36.0.\r\nI can understand why they added it, but I thought they should have at least included a warning, so I issued a PR.\r\n" ]
1,704
1,708
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> When there was a problem that the model checkpoint could not be saved while running fsdp to the latest version of transformers. It turned out that it was a problem caused by the low accelerate version. I think I should at least warn you about this so that problems like mine don't happen. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @muellerzr @pacman100 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
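For readers skimming the thread, the kind of guard being proposed might look roughly like this. It is illustrative only — the exact wording, its placement in `Trainer`, and the `0.24.1` cutoff are taken from the reproduction in the comments, not from the PR diff:

```python
import warnings

from accelerate import __version__ as accelerate_version
from packaging import version

# Illustrative sketch: warn instead of silently skipping the FSDP checkpoint save.
if version.parse(accelerate_version) <= version.parse("0.24.1"):
    warnings.warn(
        "Saving FSDP-wrapped models requires a newer accelerate version; with "
        "accelerate <= 0.24.1 the final checkpoint may not be written to "
        "output_dir. Please upgrade accelerate.",
        UserWarning,
    )
```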
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28352/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28352", "html_url": "https://github.com/huggingface/transformers/pull/28352", "diff_url": "https://github.com/huggingface/transformers/pull/28352.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28352.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28351
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28351/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28351/comments
https://api.github.com/repos/huggingface/transformers/issues/28351/events
https://github.com/huggingface/transformers/pull/28351
2,066,672,074
PR_kwDOCUB6oc5jRubQ
28,351
Don't check the device when device_map=auto
{ "login": "yuanwu2017", "id": 34643241, "node_id": "MDQ6VXNlcjM0NjQzMjQx", "avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuanwu2017", "html_url": "https://github.com/yuanwu2017", "followers_url": "https://api.github.com/users/yuanwu2017/followers", "following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}", "gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions", "organizations_url": "https://api.github.com/users/yuanwu2017/orgs", "repos_url": "https://api.github.com/users/yuanwu2017/repos", "events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}", "received_events_url": "https://api.github.com/users/yuanwu2017/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,704
1,704
1,704
CONTRIBUTOR
null
When running the case on multi-cards server with devcie_map-auto, It will not always be allocated to device 0, Because other processes may be using these cards. It will select the devices that can accommodate this model. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28350 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28351/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28351", "html_url": "https://github.com/huggingface/transformers/pull/28351", "diff_url": "https://github.com/huggingface/transformers/pull/28351.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28351.patch", "merged_at": 1704453689000 }
https://api.github.com/repos/huggingface/transformers/issues/28350
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28350/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28350/comments
https://api.github.com/repos/huggingface/transformers/issues/28350/events
https://github.com/huggingface/transformers/issues/28350
2,066,671,557
I_kwDOCUB6oc57LufF
28,350
[tests] Check device failed in test_small_model_pt_bloom_accelerate
{ "login": "yuanwu2017", "id": 34643241, "node_id": "MDQ6VXNlcjM0NjQzMjQx", "avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuanwu2017", "html_url": "https://github.com/yuanwu2017", "followers_url": "https://api.github.com/users/yuanwu2017/followers", "following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}", "gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions", "organizations_url": "https://api.github.com/users/yuanwu2017/orgs", "repos_url": "https://api.github.com/users/yuanwu2017/repos", "events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}", "received_events_url": "https://api.github.com/users/yuanwu2017/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Sure, would you like to open a pr for a fix? " ]
1,704
1,704
1,704
CONTRIBUTOR
null
### System Info transformers 4.37.0.dev0 pytorch 2.1.2 py3.9_cuda11.8_cudnn8.7.0_0 pytorch pytorch-cuda 11.8 h7e8668a_5 pytorch pytorch-mutex 1.0 cuda pytorch torchaudio 2.1.2 py39_cu118 pytorch torchtriton 2.1.0 py39 pytorch torchvision 0.16.2 py39_cu118 pytorch accelerate 0.25.0 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When running the following case on a multi-card server with device_map="auto", the model will not always be allocated to device 0, because other processes may be using those cards. It will select whichever devices can accommodate the model. pytest tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_small_model_pt_bloom_accelerate Error log: FAILED tests/pipelines/test_pipelines_text_generation.py::TextGenerationPipelineTests::test_small_model_pt_bloom_accelerate - AssertionError: device(type='cuda', index=7) != device(type='cuda', index=0) ### Expected behavior If the test only checks whether the model works, it does not need to strictly check the device, because the model may even end up on the CPU due to insufficient GPU memory.
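A sketch of the relaxation being asked for (illustrative only — the actual fix lives in the pipeline test touched by the linked PR #28351): instead of asserting a specific device index, only assert the device type, since `device_map="auto"` may pick any free GPU or fall back to the CPU.

```python
import torch

def assert_model_on_some_device(model):
    # Illustrative relaxation: the failing test compared against cuda:0 specifically
    # (see the AssertionError above); with device_map="auto" any free GPU -- or the
    # CPU, if GPU memory is short -- is acceptable.
    device = next(model.parameters()).device
    assert device.type in ("cuda", "cpu"), f"unexpected device: {device}"
```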
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28350/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28349
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28349/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28349/comments
https://api.github.com/repos/huggingface/transformers/issues/28349/events
https://github.com/huggingface/transformers/pull/28349
2,066,574,181
PR_kwDOCUB6oc5jRa2k
28,349
Enhancing Code Readability and Maintainability with Simplified Activation Function Selection.
{ "login": "hi-sushanta", "id": 93595990, "node_id": "U_kgDOBZQpVg", "avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hi-sushanta", "html_url": "https://github.com/hi-sushanta", "followers_url": "https://api.github.com/users/hi-sushanta/followers", "following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}", "gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}", "starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions", "organizations_url": "https://api.github.com/users/hi-sushanta/orgs", "repos_url": "https://api.github.com/users/hi-sushanta/repos", "events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}", "received_events_url": "https://api.github.com/users/hi-sushanta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,704
1,704
1,704
CONTRIBUTOR
null
This code optimization enhances code readability and maintainability by utilizing aliases, simplified activation function selection, and consistent function definitions. Before submitting ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can Review? @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28349/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28349", "html_url": "https://github.com/huggingface/transformers/pull/28349", "diff_url": "https://github.com/huggingface/transformers/pull/28349.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28349.patch", "merged_at": 1704701946000 }
https://api.github.com/repos/huggingface/transformers/issues/28348
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28348/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28348/comments
https://api.github.com/repos/huggingface/transformers/issues/28348/events
https://github.com/huggingface/transformers/issues/28348
2,066,557,346
I_kwDOCUB6oc57LSmi
28,348
Add flash attention 2.0 support for GPT2LMHeadModel
{ "login": "brresnic", "id": 6865869, "node_id": "MDQ6VXNlcjY4NjU4Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/6865869?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brresnic", "html_url": "https://github.com/brresnic", "followers_url": "https://api.github.com/users/brresnic/followers", "following_url": "https://api.github.com/users/brresnic/following{/other_user}", "gists_url": "https://api.github.com/users/brresnic/gists{/gist_id}", "starred_url": "https://api.github.com/users/brresnic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brresnic/subscriptions", "organizations_url": "https://api.github.com/users/brresnic/orgs", "repos_url": "https://api.github.com/users/brresnic/repos", "events_url": "https://api.github.com/users/brresnic/events{/privacy}", "received_events_url": "https://api.github.com/users/brresnic/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "I think there is a draft PR here #26502 ans #27479, maybe working on sdpa might be better" ]
1,704
1,707
null
NONE
null
``` model = AutoModelForCausalLM.from_pretrained( my_GPT2LMHeadModel_checkpoint, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", ) ``` throws the following error: ``` Error loading Flash_Model_2: GPT2LMHeadModel does not support Flash Attention 2.0 yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28348/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28348/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28347
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28347/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28347/comments
https://api.github.com/repos/huggingface/transformers/issues/28347/events
https://github.com/huggingface/transformers/issues/28347
2,066,544,538
I_kwDOCUB6oc57LPea
28,347
Training doesn't end properly but stops the machine. With no error message.
{ "login": "johnDonor", "id": 70208188, "node_id": "MDQ6VXNlcjcwMjA4MTg4", "avatar_url": "https://avatars.githubusercontent.com/u/70208188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnDonor", "html_url": "https://github.com/johnDonor", "followers_url": "https://api.github.com/users/johnDonor/followers", "following_url": "https://api.github.com/users/johnDonor/following{/other_user}", "gists_url": "https://api.github.com/users/johnDonor/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnDonor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnDonor/subscriptions", "organizations_url": "https://api.github.com/users/johnDonor/orgs", "repos_url": "https://api.github.com/users/johnDonor/repos", "events_url": "https://api.github.com/users/johnDonor/events{/privacy}", "received_events_url": "https://api.github.com/users/johnDonor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey 🤗 thanks for opening an issue! \r\nYou should check [for OOM ](https://stackoverflow.com/questions/624857/finding-which-process-was-killed-by-linux-oom-killer) issues and whether or not this is what crached your machine. If yes, you should try to use float16 or bfloat16 for training rather than full precision. Also check your memory usage during training. \r\n\r\nAs this is related to custom code, could you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help! We try to keep the github issues for bugs/feature requests 🤗 \r\n\r\nThanks!", "Thanks for the kind comment! For OOM issue, do i have to check hard disk memory? If so, more than 300GB is left, but i guess you got the point. \r\nSince I'm using windows I installed bitsandbytes library compiled for windows, made by individual not official, I thought of some unknown incompatibility like thing happened, so I made new WSL2 Linux environment and run the code, which led me to \"Error out of memory at line 380 in file /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/pythonInterface.c\" Does this indicate something??\r\n\r\n+ You mean deleting this issue and upload at forum instead? If so, I'll delete this and reupload this to forum. Waiting for your answer! Thank you so much.\r\n", "Memory issue is both RAM and DRAM (cpu and GPU) depending on the data processing (CPU) and the model running (GPU?) \r\nNo need to delete the issue. \r\nYes this indicates that you ran out of memory. What is the environnement that you are using? ", "Now I'm using Intel(R) Xeon(R) Gold 6326 CPU, and NVIDIA GeForce RTX 4090 GPU. I'm checking the resource used in the training session, dedicated GPU memory is used about 23.5/24GB and shared GPU memory is used about 5GB/128GB. I didn't check the CPU resource. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.37.0.dev0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.0 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @LysandreJik @stas00 @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. I used own remote local machine via AnyDesk. 2. there happens some warning messages like "You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding." and "The input hidden states seems to be silently casted in float32, this might be related to the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in torch.float16." 3. Training goes well, but after the train, train loss is shown and suddenly the machine stops. All the programs include visual studio turn off and got disconnected from the machine. I think it automatically restarts. Saving isn't performed right, and output isn't saved right either. To be specific, .json files such as config.json or tokenizer.json and README.md are all broken. They do exist but all broken. I cannot check the safetensor file. There is nothing saved in the output_dir which is declared in training_params. 4. To add, I ran the code with commenting out trainer.model.save_pretrained line, but still crashes. But surprisingly, .json files related to tokenizer is saved properly, not broken. PC crash still happens but by commenting out saving model line, broken file problem is partially solved. 5. 
Here is the code ```python import transformers from transformers import (BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer, GenerationConfig, TrainingArguments, logging) import torch import os from datasets import load_dataset, concatenate_datasets import json from peft import LoraConfig from trl import SFTTrainer, DataCollatorForCompletionOnlyLM def main(): base_model = "mistralai/Mistral-7B-Instruct-v0.2" new_model = "Mistral-7B-Instruct-v0.2_newmodel" compute_dtype = getattr(torch, "float16") quant_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=compute_dtype, bnb_4bit_use_double_quant=False ) tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code = True, padding_side = "right") model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config = quant_config, attn_implementation = "flash_attention_2", device_map = {"": 0}) model.config.use_cache = False model.config.pretraining_tp = 1 if tokenizer.pad_token is None: tokenizer.pad_token = tokenizer.eos_token train_dataset = load_dataset('json', data_files = './dataset/mixed_train.json', split = 'train') eval_dataset = load_dataset('json', data_files = './dataset/mixed_val.json', split = 'train') print(f"train dataset size: {len(train_dataset)}, eval dataset size: {len(eval_dataset)}") training_params = TrainingArguments( output_dir="./FT_newmodel", num_train_epochs=1, per_device_train_batch_size=2, per_device_eval_batch_size= 1, evaluation_strategy='steps', eval_steps=25, gradient_accumulation_steps=4, optim="paged_adamw_32bit", logging_steps=25, learning_rate=2e-5, weight_decay=0.001, fp16=False, bf16=False, max_grad_norm=0.3, max_steps=-1, warmup_ratio=0.03, group_by_length=True, lr_scheduler_type="constant", report_to="tensorboard" ) peft_config = LoraConfig( lora_alpha=16, lora_dropout=0.1, r=64, bias="none", target_modules=[ "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "lm_head", ], task_type="CAUSAL_LM" ) trainer = SFTTrainer( model = model, tokenizer = tokenizer, train_dataset = train_dataset, eval_dataset = eval_dataset, dataset_text_field= "text", args = training_params, peft_config = peft_config, max_seq_length = 512, packing = False, neftune_noise_alpha = 5 ) trainer.train() trainer.model.save_pretrained(new_model) trainer.tokenizer.save_pretrained(new_model) if __name__ == "__main__": print("Training starts") main() print("Training ended") ``` 6. Funny part is that actually the machine stops after "Training ended" is printed. There is no error message, machine just stops. I really can't figure out the problem. Please help.. ### Expected behavior I just want it to end properly by saving the fine-tuned model.
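Following the advice above about checking memory usage during training, a small hypothetical helper (not part of the original script) can show whether the crash coincides with a memory spike, for example when called just before `trainer.train()` and again before `save_pretrained`:

```python
import torch

def log_gpu_memory(tag: str) -> None:
    # Hypothetical helper: print current and peak GPU memory around the save step.
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    peak = torch.cuda.max_memory_allocated() / 2**30
    print(f"[{tag}] allocated={allocated:.1f} GiB reserved={reserved:.1f} GiB peak={peak:.1f} GiB")
```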
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28347/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28346
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28346/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28346/comments
https://api.github.com/repos/huggingface/transformers/issues/28346/events
https://github.com/huggingface/transformers/issues/28346
2,066,231,647
I_kwDOCUB6oc57KDFf
28,346
Token healing (under 40 LOC)
{ "login": "Ayenem", "id": 50707385, "node_id": "MDQ6VXNlcjUwNzA3Mzg1", "avatar_url": "https://avatars.githubusercontent.com/u/50707385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ayenem", "html_url": "https://github.com/Ayenem", "followers_url": "https://api.github.com/users/Ayenem/followers", "following_url": "https://api.github.com/users/Ayenem/following{/other_user}", "gists_url": "https://api.github.com/users/Ayenem/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ayenem/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ayenem/subscriptions", "organizations_url": "https://api.github.com/users/Ayenem/orgs", "repos_url": "https://api.github.com/users/Ayenem/repos", "events_url": "https://api.github.com/users/Ayenem/events{/privacy}", "received_events_url": "https://api.github.com/users/Ayenem/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "FYI @gante ", "@Ayenem that's a really cool use of `sequence_bias`! I had read about token healing in the past, and never thought about it when I added `sequence_bias`.\r\n\r\nWould you be up for adding it to `transformers`? I think we could include it inside `generate` (e.g. [here](https://github.com/huggingface/transformers/blob/6c78bbcb8320d316434262ef003251ca997db0d1/src/transformers/generation/utils.py#L1638)), passing a simple `token_healing=True` flag. What do you think?", "I'm glad you appreciate this use of your feature! Hyrum's law at play :)\r\n\r\nSounds good, I'll get started on a dev branch. Thanks for the pointer!" ]
1,704
1,704
null
NONE
null
### Feature request Token healing rectifies the token boundary bias in greedy tokenization. It does this by trimming and regrowing the prompt to better align with the model's tokenizer, thus enhancing generation quality. The improvement is clearest with completion models. Token boundary bias is a silent performance killer that doesn't seem very well known. It has a clear impact on completion quality, though I'm not sure where it would fit as a transformers feature. A more thorough explanation of the problem: [The Art of Prompt Design: Prompt Boundaries and Token Healing | by Scott Lundberg](https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38). ### Motivation Given a completion prompt with a partial URL ending with `:`, the model might have seen the expected completion `://` as a _single_ token in training. However, the prompt's tail token `:` tells it that the next token is not `//`, and so it generates a wrong completion. Such errors compound in auto-regressive language models. ### Your contribution My implementation (under 40 LOC): https://github.com/Ayenem/TokenHealer/tree/main
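A rough, hedged sketch of the idea using the `sequence_bias` generation argument mentioned in the comments above; the model name, prompt, and bias value are placeholders, and unlike the linked TokenHealer a real implementation would constrain only the first generated position rather than bias every step:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The url is http:"
ids = tokenizer(prompt, return_tensors="pt").input_ids
tail_text = tokenizer.decode(ids[0, -1])  # the trimmed boundary token, e.g. ":"

# Bias every vocabulary token whose decoded text extends the trimmed tail
# (e.g. ":", "://"), so the model can "re-grow" the boundary token.
sequence_bias = {
    (token_id,): 10.0
    for token_id in range(len(tokenizer))
    if tokenizer.decode([token_id]).startswith(tail_text)
}

# Generate from the prompt *without* its tail token, with the bias applied.
healed = model.generate(ids[:, :-1], max_new_tokens=8, sequence_bias=sequence_bias)
print(tokenizer.decode(healed[0], skip_special_tokens=True))
```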
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28346/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28345
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28345/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28345/comments
https://api.github.com/repos/huggingface/transformers/issues/28345/events
https://github.com/huggingface/transformers/issues/28345
2,066,055,450
I_kwDOCUB6oc57JYEa
28,345
Is there some bug in typehint in `modeling_outputs` (or maybe other files)?
{ "login": "gary-young", "id": 56245046, "node_id": "MDQ6VXNlcjU2MjQ1MDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/56245046?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gary-young", "html_url": "https://github.com/gary-young", "followers_url": "https://api.github.com/users/gary-young/followers", "following_url": "https://api.github.com/users/gary-young/following{/other_user}", "gists_url": "https://api.github.com/users/gary-young/gists{/gist_id}", "starred_url": "https://api.github.com/users/gary-young/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gary-young/subscriptions", "organizations_url": "https://api.github.com/users/gary-young/orgs", "repos_url": "https://api.github.com/users/gary-young/repos", "events_url": "https://api.github.com/users/gary-young/events{/privacy}", "received_events_url": "https://api.github.com/users/gary-young/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hey! I think that we used Tuple[X] when there can be N instances of X. I don't mind using ellipsis. \r\ncc @Rocketknight1 who is our type expert! ", "@ArthurZucker Thank you for your reply! I am new to type hints, so I am not sure about the behaviour in previous versions (before Python 3.10), but in the latest versions (Python 3.11 or 3.12), Tuple[X] just means a tuple with **ONE** element with type X. \r\n![image](https://github.com/huggingface/transformers/assets/56245046/0fcfec13-bb78-4e27-adfe-d992ee831508)\r\nThis is an example provided by the Python documentation: https://docs.python.org/3/library/typing.html\r\nAnd my Pylance checker has indeed raised a warning about it.", "Also, the type hints `Tuple` and `tuple` are considered the same:\r\n![image](https://github.com/huggingface/transformers/assets/56245046/31f1656c-8661-41c4-88ae-016e7a853f97)\r\n", "feel free to open a PR for a fix if you want 🤗 ", "Hi @gary-young, I think you're correct! We realize it would be quite a significant job to update all instances of this across the codebase, but if you want to contribute fixes to one or more files, we'd be happy to accept a PR.", "Ok! I am willing to fix the ones I can identify." ]
1,704
1,705
1,705
NONE
null
### System Info Hi! When I tried to modify some code which calls some classes in `src/transformers/modeling_outputs.py`, such as `BaseModelOutput` and `BaseModelOutputWithPast`, I found that the type hints of some parameters differ from the comments and from my understanding of the code. I believe the correct type should be `Optional[Tuple[torch.FloatTensor, ...]]` instead of `Optional[Tuple[torch.FloatTensor]]`, because the comments describe a tuple whose length equals the number of layers (or similar), not **ONE**. However, the type hint `Optional[Tuple[torch.FloatTensor]]` means the tuple should have length 1. Is my understanding correct? Or am I misunderstanding what this code does? ![image](https://github.com/huggingface/transformers/assets/56245046/90295155-0ad3-4f30-b771-aeeee2179740) @ArthurZucker @younesbelkada Thank you! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I use Pylance as the type checker. ### Expected behavior The type hints should not affect runtime behavior.
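A minimal illustration of the distinction discussed above (plain `typing` usage, not library code):

```python
from typing import Optional, Tuple

import torch

# A tuple with exactly ONE FloatTensor:
one_tensor: Optional[Tuple[torch.FloatTensor]] = None

# A variable-length tuple of FloatTensors (e.g. one per layer), which is what
# the docstrings describe:
per_layer: Optional[Tuple[torch.FloatTensor, ...]] = None
```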
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28345/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28344
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28344/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28344/comments
https://api.github.com/repos/huggingface/transformers/issues/28344/events
https://github.com/huggingface/transformers/issues/28344
2,065,985,520
I_kwDOCUB6oc57JG_w
28,344
Problem in using H100 for LLAMA 70 b inference
{ "login": "HelloWorldLTY", "id": 43333475, "node_id": "MDQ6VXNlcjQzMzMzNDc1", "avatar_url": "https://avatars.githubusercontent.com/u/43333475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HelloWorldLTY", "html_url": "https://github.com/HelloWorldLTY", "followers_url": "https://api.github.com/users/HelloWorldLTY/followers", "following_url": "https://api.github.com/users/HelloWorldLTY/following{/other_user}", "gists_url": "https://api.github.com/users/HelloWorldLTY/gists{/gist_id}", "starred_url": "https://api.github.com/users/HelloWorldLTY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HelloWorldLTY/subscriptions", "organizations_url": "https://api.github.com/users/HelloWorldLTY/orgs", "repos_url": "https://api.github.com/users/HelloWorldLTY/repos", "events_url": "https://api.github.com/users/HelloWorldLTY/events{/privacy}", "received_events_url": "https://api.github.com/users/HelloWorldLTY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "How much memory does the H100 have? Note that you need 70 billion * 2 = 140GB of RAM to run the model in bfloat16.", "Thanks, I had not thought that this was a problem caused by memory. I may seek help from other models instead.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
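A hedged sketch of one way to fit a 70B checkpoint on a single 80GB card, following the memory arithmetic in the comment above; the checkpoint id is only an example (it is gated), and 4-bit quantization trades some quality for memory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-2-70b-chat-hf"  # example id; requires access approval
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # roughly 0.5 bytes per parameter instead of 2
    device_map="auto",
)
```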
1,704
1,707
1,707
NONE
null
### System Info Hi, I notice that I cannot access the LLAMA 2 70 B chat hf for running my response, and here is the bug: ```python Traceback (most recent call last): File "/workspace/demo_llama.py", line 27, in <module> sequences = pipeline( File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 208, in __call__ return super().__call__(text_inputs, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1140, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1147, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1046, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/opt/conda/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 271, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 1764, in generate return self.sample( File "/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py", line 2861, in sample outputs = self( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1181, in forward outputs = self.model( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1068, in forward layer_outputs = decoder_layer( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 796, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File 
"/opt/conda/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 691, in forward query_states = self.q_proj(hidden_states) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` ``` Here is my code: ```python sequences = pipeline( 'Please repharse the following content in normal sentences in one paragraph: ...', do_sample=True, top_k=10, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=600, ) ``` I used H100 and I can successfully run 13b or 7b model. Could you please help me? Thanks. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Mention previously ### Expected behavior The model should give me output.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28344/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28343
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28343/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28343/comments
https://api.github.com/repos/huggingface/transformers/issues/28343/events
https://github.com/huggingface/transformers/issues/28343
2,065,575,304
I_kwDOCUB6oc57Hi2I
28,343
How to log custom value?
{ "login": "xmy0916", "id": 43675899, "node_id": "MDQ6VXNlcjQzNjc1ODk5", "avatar_url": "https://avatars.githubusercontent.com/u/43675899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xmy0916", "html_url": "https://github.com/xmy0916", "followers_url": "https://api.github.com/users/xmy0916/followers", "following_url": "https://api.github.com/users/xmy0916/following{/other_user}", "gists_url": "https://api.github.com/users/xmy0916/gists{/gist_id}", "starred_url": "https://api.github.com/users/xmy0916/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xmy0916/subscriptions", "organizations_url": "https://api.github.com/users/xmy0916/orgs", "repos_url": "https://api.github.com/users/xmy0916/repos", "events_url": "https://api.github.com/users/xmy0916/events{/privacy}", "received_events_url": "https://api.github.com/users/xmy0916/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!" ]
1,704
1,704
1,704
NONE
null
I want to add some custom info to the logged metrics, e.g. `{'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0}`. How can I do that? For example: `{'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0, 'version': 'v1'}`
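One possible approach, sketched under the assumption that the standard `Trainer` is used; `TaggedTrainer` is an illustrative name, not a library class:

```python
from transformers import Trainer

class TaggedTrainer(Trainer):
    def log(self, logs):
        # Attach extra JSON-serializable fields before they are recorded and printed.
        logs["version"] = "v1"
        super().log(logs)
```

Using this subclass in place of `Trainer` leaves the rest of the training setup unchanged.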
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28343/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28342
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28342/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28342/comments
https://api.github.com/repos/huggingface/transformers/issues/28342/events
https://github.com/huggingface/transformers/issues/28342
2,065,481,107
I_kwDOCUB6oc57HL2T
28,342
Switch Transformers Jitter Noise in Inference
{ "login": "drunkcoding", "id": 14305648, "node_id": "MDQ6VXNlcjE0MzA1NjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/14305648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drunkcoding", "html_url": "https://github.com/drunkcoding", "followers_url": "https://api.github.com/users/drunkcoding/followers", "following_url": "https://api.github.com/users/drunkcoding/following{/other_user}", "gists_url": "https://api.github.com/users/drunkcoding/gists{/gist_id}", "starred_url": "https://api.github.com/users/drunkcoding/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drunkcoding/subscriptions", "organizations_url": "https://api.github.com/users/drunkcoding/orgs", "repos_url": "https://api.github.com/users/drunkcoding/repos", "events_url": "https://api.github.com/users/drunkcoding/events{/privacy}", "received_events_url": "https://api.github.com/users/drunkcoding/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Before I can help you, could you provide an actual reproducer? \r\nThis one fails with `TypeError: The current model class (SwitchTransformersModel) is not compatible with `.generate()`, as it doesn't have a language model head. Please use one of the following classes instead: {'SwitchTransformersForConditionalGeneration'}`.\r\n", "Moreover, when I use the correct class, disabling `return_dict_in_generate` for convenience, I always get the same output. \r\nhttps://github.com/huggingface/transformers/blob/90224dd59e92e11d99b5b09be84d3fe7794636b9/src/transformers/models/switch_transformers/modeling_switch_transformers.py#L171 \r\n\r\nadds jitter only if training", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-1033-gkeop-x86_64-with-glibc2.17 - Python version: 3.8.18 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help? @ArthurZucker Switch Transformers add jitter noise ```python if self.jitter_noise > 0: ``` According to the paper, jitter noise can only be added during training ![image](https://github.com/huggingface/transformers/assets/14305648/1f1ac27a-375a-42d0-8bcc-4c5402221715) ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python model = AutoModel.from_pretrained("google/switch-base-128") tokenizer = AutoTokenizer.from_pretrained("google/switch-base-128") input_text = "Hello, my dog is cute" input_ids = tokenizer(input_text, return_tensors="pt").input_ids attention_mask = torch.ones_like(input_ids) set_seed(42) outputs = model.generate( input_ids=input_ids, attention_mask=attention_mask, max_length=20, decoder_start_token_id=0, do_sample=False, return_dict_in_generate=True, output_attentions=False, output_hidden_states=False, ) ``` ### Expected behavior Output is different every time running generate
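A toy module illustrating the gating the comments point to (a paraphrased sketch, not the actual library source): the jitter is multiplicative noise applied only while the module is in training mode, so `model.eval()` together with `do_sample=False` should give deterministic outputs:

```python
import torch
from torch import nn

class JitteredRouterInput(nn.Module):
    def __init__(self, jitter_noise: float = 0.01):
        super().__init__()
        self.jitter_noise = jitter_noise

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        if self.training and self.jitter_noise > 0:
            # Multiply by uniform noise in [1 - eps, 1 + eps] during training only.
            hidden_states = hidden_states * torch.empty_like(hidden_states).uniform_(
                1.0 - self.jitter_noise, 1.0 + self.jitter_noise
            )
        return hidden_states
```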
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28342/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28341
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28341/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28341/comments
https://api.github.com/repos/huggingface/transformers/issues/28341/events
https://github.com/huggingface/transformers/pull/28341
2,065,393,894
PR_kwDOCUB6oc5jNavK
28,341
fix FA2 when using quantization for remaining models
{ "login": "susnato", "id": 56069179, "node_id": "MDQ6VXNlcjU2MDY5MTc5", "avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susnato", "html_url": "https://github.com/susnato", "followers_url": "https://api.github.com/users/susnato/followers", "following_url": "https://api.github.com/users/susnato/following{/other_user}", "gists_url": "https://api.github.com/users/susnato/gists{/gist_id}", "starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susnato/subscriptions", "organizations_url": "https://api.github.com/users/susnato/orgs", "repos_url": "https://api.github.com/users/susnato/repos", "events_url": "https://api.github.com/users/susnato/events{/privacy}", "received_events_url": "https://api.github.com/users/susnato/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Done! @ArthurZucker ", "CI is green now! @ArthurZucker ", "Thanks for the contribution! FYI @pacman100 and @younesbelkada " ]
1,704
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Replicates this PR #28203 for remaining models. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28341/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28341", "html_url": "https://github.com/huggingface/transformers/pull/28341", "diff_url": "https://github.com/huggingface/transformers/pull/28341.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28341.patch", "merged_at": 1704469615000 }
https://api.github.com/repos/huggingface/transformers/issues/28340
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28340/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28340/comments
https://api.github.com/repos/huggingface/transformers/issues/28340/events
https://github.com/huggingface/transformers/pull/28340
2,065,364,022
PR_kwDOCUB6oc5jNUTf
28,340
Fix error in M4T feature extractor
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28340). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hey @amyeroberts, thanks for the quick review and the relevant feedback, I've modified the logic a bit, WDYT ? There's also a pending comment on testing for which I'll wait your comment!" ]
1,704
1,704
1,704
COLLABORATOR
null
# What does this PR do? A really small PR that fixes an error when calling the SeamlessM4TFeatureExtractor without an attention mask. I simply added a check to verify that the attention mask exists before operating on it. I've also modified the test suite to cover this in the future. cc @amyeroberts or @ArthurZucker! WDYT?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28340", "html_url": "https://github.com/huggingface/transformers/pull/28340", "diff_url": "https://github.com/huggingface/transformers/pull/28340.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28340.patch", "merged_at": 1704386454000 }
https://api.github.com/repos/huggingface/transformers/issues/28339
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28339/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28339/comments
https://api.github.com/repos/huggingface/transformers/issues/28339/events
https://github.com/huggingface/transformers/issues/28339
2,065,211,719
I_kwDOCUB6oc57GKFH
28,339
Significantly increased VRAM usage for Mixtral qlora training compared to 4.36.2?
{ "login": "DocShotgun", "id": 126566557, "node_id": "U_kgDOB4tAnQ", "avatar_url": "https://avatars.githubusercontent.com/u/126566557?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DocShotgun", "html_url": "https://github.com/DocShotgun", "followers_url": "https://api.github.com/users/DocShotgun/followers", "following_url": "https://api.github.com/users/DocShotgun/following{/other_user}", "gists_url": "https://api.github.com/users/DocShotgun/gists{/gist_id}", "starred_url": "https://api.github.com/users/DocShotgun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DocShotgun/subscriptions", "organizations_url": "https://api.github.com/users/DocShotgun/orgs", "repos_url": "https://api.github.com/users/DocShotgun/repos", "events_url": "https://api.github.com/users/DocShotgun/events{/privacy}", "received_events_url": "https://api.github.com/users/DocShotgun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for the report, here are the potential PRs that I would suspect: \r\n- #28142 which fixes some of the FA2 bugs\r\n- #28061 which should fix gradient checkpointing issues\r\nPinging @younesbelkada for when he comes back; it would be great if you could isolate the commit that led to this in the meantime 🤗 ", "This may not be relevant to you but I found [this recent change](https://github.com/OpenAccess-AI-Collective/axolotl/commit/4d2e842e46bf8bd6dd0fda4d2667a7e7d80b4cd4) to Axolotl has made a significant difference to VRAM usage. Previously I could just squeeze in a LoRA on a 34B model on my 3x3090s at batch size 2, seq length 4096, now it OOMs immediately. I undid the change and it fits again. ", "> This may not be relevant to you but I found [this recent change](https://github.com/OpenAccess-AI-Collective/axolotl/commit/4d2e842e46bf8bd6dd0fda4d2667a7e7d80b4cd4) to Axolotl has made a significant difference to VRAM usage. Previously I could just squeeze in a LoRA on a 34B model on my 3x3090s at batch size 2, seq length 4096, now it OOMs immediately. I undid the change and it fits again.\r\n\r\nHmm it's certainly possible since that commit was in between when I did my initial train and the run where I had to drop the batch size. Unfortunately I don't have a training instance up right now, so I'd have to test it the next time I try to train.", "I've determined that the cause of the increased VRAM usage was indeed axolotl changing the default for use_reentrant to False for gradient checkpointing. Going to go ahead and close the issue.", "thanks for sharing the solution! 🤗 " ]
1,704
1,707
1,706
NONE
null
### System Info The environment is a Runpod container with python 3.10, single A100 80gb, transformers 4.37.0dev (3cefac1d974db5e2825a0cb2b842883a628be7a0), using axolotl training script (https://github.com/OpenAccess-AI-Collective/axolotl). ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hello, just tried doing a training run on the dev version of transformers (as of 3cefac1d974db5e2825a0cb2b842883a628be7a0) via the common training repository axolotl (https://github.com/OpenAccess-AI-Collective/axolotl) and noticed that I went OOM using the same configuration that I had previously used successfully with transformers 4.36.2 stable. And not even just a small difference - I had to reduce my batch size by 4x to make the training fit in VRAM. I was previously able to fit 8192 ctx, batch size 4, grad accum steps 2 without difficulty, but I found that I now had to reduce my batch size to 1 to avoid OOM. The relevant training hyperparameters are: ``` load_in_4bit: true sequence_len: 8192 sample_packing: true pad_to_sequence_len: true lora_r: 32 lora_alpha: 16 lora_dropout: 0.05 lora_target_linear: true optimizer: adamw_bnb_8bit bf16: true fp16: false tf32: true gradient_checkpointing: true flash_attention: true no deepspeed or fsdp no evals ``` Would appreciate any insights into what caused the massive increase in memory usage. I noticed that ehartford's latest dolphin 2.7 qlora used a batch size of 3 per device at 16k ctx on A100 80gb, so surely I'm missing something here? ### Expected behavior The training run should take a relatively similar amount of VRAM as it did previously with the same config.
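For reference, the checkpointing variant can be pinned explicitly on the `Trainer` side rather than left to the training framework's default. A hedged sketch showing only the relevant arguments, given that in this report the reentrant variant was the one that fit in memory:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    gradient_checkpointing=True,
    # Explicitly select the checkpointing implementation instead of relying on the
    # wrapper's default; True is the variant the original (lower-VRAM) run used.
    gradient_checkpointing_kwargs={"use_reentrant": True},
)
```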
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28339/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28338
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28338/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28338/comments
https://api.github.com/repos/huggingface/transformers/issues/28338/events
https://github.com/huggingface/transformers/pull/28338
2,065,061,516
PR_kwDOCUB6oc5jMTQX
28,338
fix pipeline to support tuple model output
{ "login": "jiqing-feng", "id": 107918818, "node_id": "U_kgDOBm614g", "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiqing-feng", "html_url": "https://github.com/jiqing-feng", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @amyeroberts . Thanks for your review. The default dataclass works well, as you said. However, sometimes we would like to enable other optimizations such as `model = torch.jit.trace(model, inputs)` (which gives a significant acceleration on both GPU and CPU), and it can't recognize the dataclass, so I set `return_dict=False` to return tuple outputs. \r\n\r\nI think handling different types of model outputs would be better, since the models support returning tuples.", "I agree with @amyeroberts in general on this PR.\r\n\r\nIMO the pipeline shouldn't carry an excessive amount of code to support use cases like this. If you really want to wrap a jitted model, wrap your own jitted model to return a dict instead of a tuple.\r\n\r\n```python\r\n\r\nclass Wrapper:\r\n def forward(self, ...):\r\n tuple = self.inner.forward(..)\r\n return ModelOutputXXX(tuple)\r\n```\r\n\r\nAlso, why use `jit.trace` instead of `torch.compile`? I thought this was the new way to do it.\r\nWhat kind of speedups are you seeing?", "Hi @amyeroberts @Narsil . Thanks for your review. I propose that since the transformers models support returning tuples, we should support tuple outputs in any other APIs that are related to the model's outputs.\r\n\r\nBut you were right, I can wrap the model in my script.\r\n\r\nThanks again for your review. Feel free to close this PR if you don't have any questions.", "@jiqing-feng Thanks for your time working on this. Following from the discussion, I'm closing this PR." ]
1,704
1,704
1,704
CONTRIBUTOR
null
Hi @Narsil @amyeroberts. This PR handles the case where pipeline.model outputs a tuple (when return_dict=False). Would you please help review it? Thanks! The problem can be reproduced with ```python from transformers import pipeline, AutoModel, AutoTokenizer sentences = ["This is an example sentence", "Each sentence is converted"] model = AutoModel.from_pretrained('sentence-transformers/all-mpnet-base-v2', return_dict=False) tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-mpnet-base-v2') extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer) extractor(sentences, return_tensors=True, batch_size=2) ```
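A hedged sketch of the wrapper idea suggested in the comments above, for anyone who traces a model and still wants to feed it to a pipeline; `TracedFeatureModel` is an illustrative name, and a real wrapper may need to expose more of the original model's attributes than shown here:

```python
import torch
from transformers.modeling_outputs import BaseModelOutputWithPooling

class TracedFeatureModel(torch.nn.Module):
    def __init__(self, traced_model, config):
        super().__init__()
        self.traced_model = traced_model
        self.config = config  # pipelines read model.config

    def forward(self, input_ids=None, attention_mask=None, **kwargs):
        outputs = self.traced_model(input_ids, attention_mask)  # plain tuple
        # Re-wrap the tuple so downstream code sees a ModelOutput again.
        return BaseModelOutputWithPooling(
            last_hidden_state=outputs[0],
            pooler_output=outputs[1] if len(outputs) > 1 else None,
        )
```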
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28338/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28338", "html_url": "https://github.com/huggingface/transformers/pull/28338", "diff_url": "https://github.com/huggingface/transformers/pull/28338.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28338.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28337
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28337/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28337/comments
https://api.github.com/repos/huggingface/transformers/issues/28337/events
https://github.com/huggingface/transformers/issues/28337
2,065,052,124
I_kwDOCUB6oc57FjHc
28,337
Failure to produce exact input sequence from output logits
{ "login": "hxiaoyang", "id": 98200137, "node_id": "U_kgDOBdpqSQ", "avatar_url": "https://avatars.githubusercontent.com/u/98200137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hxiaoyang", "html_url": "https://github.com/hxiaoyang", "followers_url": "https://api.github.com/users/hxiaoyang/followers", "following_url": "https://api.github.com/users/hxiaoyang/following{/other_user}", "gists_url": "https://api.github.com/users/hxiaoyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/hxiaoyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hxiaoyang/subscriptions", "organizations_url": "https://api.github.com/users/hxiaoyang/orgs", "repos_url": "https://api.github.com/users/hxiaoyang/repos", "events_url": "https://api.github.com/users/hxiaoyang/events{/privacy}", "received_events_url": "https://api.github.com/users/hxiaoyang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should use the `generate` function. Otherwise I have no idea what model you are using, but this is expected: the model does not necessarily predict the token you give as an input. Assisted decoding relies on this property. \r\n\r\n```python\r\nprompt = \"Please provide a code sample that reproduces the\"\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\ninput_ids = tokenizer(prompt, return_tensors = \"pt\")\r\nstring = tokenizer.batch_decode(model.generate(**input_ids).tolist(),skip_special_tokens=True)\r\nprint(string)\r\n['Please provide a code sample that reproduces the code.\\n\\nIf you are interested in contributing to']\r\n```\r\nuse `max_new_tokens` if needed. ", "Thank you for the timely response! I've been using `llama-2-13b-chat-hf` FYI.", "A followup question: I'm building a conversational / recursive LM. For my use case, it's important that the previous conversation is not reprocessed. In other words, only previous hidden states and new context should be used. Is this the default behavior of `model.generate()`? If not, what would be the best way to reuse previous hidden states.\r\n\r\nThanks! @ArthurZucker", "By default the `generate` function will use the `past_key_values` instead of re-computing everything which is probably what you are looking for 😉 the past key values still take into account the previous context in an efficient manner. See an answer [here](https://discuss.huggingface.co/t/what-is-the-purpose-of-use-cache-in-decoder/958)", "Thank you!" ]
1,704
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.15.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` prompt = "Please provide a code sample that reproduces the" input_ids = torch.tensor([tokenizer.encode(prompt)]) with torch.no_grad(): outputs = model(input_ids) string = tokenizer.decode(torch.argmax(outputs.logits[0], dim=1).tolist(),skip_special_tokens=True) print(string) ``` Output: `note the detailed snippet of demonstrces the issue` ### Expected behavior I was expecting `string` to be `Please provide a code sample that reproduces the issue`, supposing that `issue` is the generated token here. What's the right way to produce the exact input sequence from output logits?
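A small demonstration of the point made in the comments above: the logits at position i score the token at position i+1, so taking the argmax over a prompt yields shifted next-token predictions rather than the prompt itself (the model name here is just an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Please provide a code sample that reproduces the", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits            # shape [1, seq_len, vocab_size]

next_token_ids = logits.argmax(-1)        # position i holds the guess for token i + 1
# next_token_ids[:, :-1] lines up with ids[:, 1:], not with ids itself.
print(tokenizer.decode(next_token_ids[0, -1:]))  # the model's guess for the next word
```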
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28337/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28336
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28336/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28336/comments
https://api.github.com/repos/huggingface/transformers/issues/28336/events
https://github.com/huggingface/transformers/issues/28336
2,064,930,278
I_kwDOCUB6oc57FFXm
28,336
Does m2m_100 support multiple forced_bos_token_id?
{ "login": "sfc-gh-zhwang", "id": 135062830, "node_id": "U_kgDOCAzlLg", "avatar_url": "https://avatars.githubusercontent.com/u/135062830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sfc-gh-zhwang", "html_url": "https://github.com/sfc-gh-zhwang", "followers_url": "https://api.github.com/users/sfc-gh-zhwang/followers", "following_url": "https://api.github.com/users/sfc-gh-zhwang/following{/other_user}", "gists_url": "https://api.github.com/users/sfc-gh-zhwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/sfc-gh-zhwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sfc-gh-zhwang/subscriptions", "organizations_url": "https://api.github.com/users/sfc-gh-zhwang/orgs", "repos_url": "https://api.github.com/users/sfc-gh-zhwang/repos", "events_url": "https://api.github.com/users/sfc-gh-zhwang/events{/privacy}", "received_events_url": "https://api.github.com/users/sfc-gh-zhwang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You should use `forced_decoder_ids` https://huggingface.co./docs/transformers/main_classes/text_generation#transformers.GenerationConfig.forced_decoder_ids ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
``` generated_tokens = model.generate(input_ids = ids_tensor, attention_mask = attention_padded, forced_bos_token_id=[tokenizer.get_lang_id("es"),tokenizer.get_lang_id("fr")]) ``` It seems the above code produces output in the same language for both requested languages.
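Since `forced_bos_token_id` takes a single token id, one simple (if unbatched-across-languages) workaround is a separate `generate()` call per target language; a hedged sketch with an example checkpoint:

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")

tokenizer.src_lang = "en"
inputs = tokenizer("Hello, how are you?", return_tensors="pt")

for lang in ("es", "fr"):
    out = model.generate(**inputs, forced_bos_token_id=tokenizer.get_lang_id(lang))
    print(lang, tokenizer.batch_decode(out, skip_special_tokens=True))
```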
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28336/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28335
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28335/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28335/comments
https://api.github.com/repos/huggingface/transformers/issues/28335/events
https://github.com/huggingface/transformers/issues/28335
2,064,920,715
I_kwDOCUB6oc57FDCL
28,335
Peft + gradient checkpointing crashes
{ "login": "snailrowen1337", "id": 45402632, "node_id": "MDQ6VXNlcjQ1NDAyNjMy", "avatar_url": "https://avatars.githubusercontent.com/u/45402632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/snailrowen1337", "html_url": "https://github.com/snailrowen1337", "followers_url": "https://api.github.com/users/snailrowen1337/followers", "following_url": "https://api.github.com/users/snailrowen1337/following{/other_user}", "gists_url": "https://api.github.com/users/snailrowen1337/gists{/gist_id}", "starred_url": "https://api.github.com/users/snailrowen1337/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/snailrowen1337/subscriptions", "organizations_url": "https://api.github.com/users/snailrowen1337/orgs", "repos_url": "https://api.github.com/users/snailrowen1337/repos", "events_url": "https://api.github.com/users/snailrowen1337/events{/privacy}", "received_events_url": "https://api.github.com/users/snailrowen1337/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hey! Could you try upgrading your torch version? 1.13.0 is a bit old. \r\ncc @younesbelkada ", "@ArthurZucker -- I've updated to the latest pytorch but still face the issue", "Alright, now you are using a remote model, so I would recommend opening the issue on `https://huggingface.co./NousResearch/Yarn-Mistral-7b-128k/discussions` unless this also fails with `mistralai/Mistral-7B-v0.1` 🤗 ", "@ArthurZucker I still face this issue with the mistral model. Other users have seen similar issues [https://github.com/huggingface/peft/issues/137] but these fixes do not work for me.\r\n\r\nCC @younesbelkada ", "Hi @snailrowen1337 \r\nThe error should disappear if you pass `gradient_checkpointing_kwargs={\"use_reentrant\":False}` in `TrainingArguments`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info >>> transformers.__version__ '4.37.0.dev0' >>> peft.__version__ '0.7.2.dev0' >>> torch.__version__ '1.13.0' ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from functools import partial import torch from datasets import load_dataset import transformers from peft import LoraConfig, get_peft_model, TaskType from transformers import ( AutoModelForCausalLM, AutoTokenizer, DataCollatorForSeq2Seq, TrainingArguments, set_seed, Trainer, ) def prepare_datasets(raw_datasets, train_key, tokenizer, max_seq): # Preprocessing the datasets. if "messages" in raw_datasets[train_key].column_names: encode_function = partial( encode_with_messages_format, tokenizer=tokenizer, max_seq_length=max_seq, ) else: raise ValueError("You need to have either 'prompt'&'completion' or 'messages' in your column names.") # To speed up this part, we use multiprocessing. lm_datasets = raw_datasets.map(encode_function, batched=False, num_proc=32) lm_datasets.set_format(type="pt") lm_datasets = lm_datasets.filter(lambda example: (example['labels'] != -100).any()) return lm_datasets def encode_with_messages_format(example, tokenizer, max_seq_length): ''' Here we assume each example has a 'messages' field Each message is a dict with 'role' and 'content' fields. We concatenate all messages with the roles as delimiters and tokenize them together. ''' messages = example['messages'] if len(messages) == 0: raise ValueError('messages field is empty.') def _concat_messages(messages): message_text = "" for message in messages: if message["role"] == "system": message_text += "<|system|>\n" + message["content"].strip() + "\n" elif message["role"] == "user": message_text += "<|user|>\n" + message["content"].strip() + "\n" elif message["role"] == "assistant": message_text += "<|assistant|>\n" + message["content"].strip() + tokenizer.eos_token + "\n" else: raise ValueError("Invalid role: {}".format(message["role"])) return message_text example_text = _concat_messages(messages).strip() tokenized_example = tokenizer(example_text, return_tensors='pt', max_length=max_seq_length, truncation=True) input_ids = tokenized_example.input_ids labels = input_ids.clone() # mask the non-assistant part for avoiding loss for message_idx, message in enumerate(messages): if message["role"] != "assistant": if message_idx == 0: message_start_idx = 0 else: message_start_idx = tokenizer( _concat_messages(messages[:message_idx]), return_tensors='pt', max_length=max_seq_length, truncation=True ).input_ids.shape[1] if message_idx < len(messages) - 1 and messages[message_idx+1]["role"] == "assistant": # here we also ignore the role of the assistant messages_so_far = _concat_messages(messages[:message_idx+1]) + "<|assistant|>\n" else: messages_so_far = _concat_messages(messages[:message_idx+1]) message_end_idx = tokenizer( messages_so_far, return_tensors='pt', max_length=max_seq_length, truncation=True ).input_ids.shape[1] labels[:, message_start_idx:message_end_idx] = -100 if message_end_idx >= max_seq_length: break attention_mask = torch.ones_like(input_ids) return { 'input_ids': input_ids.flatten(), 'labels': labels.flatten(), 'attention_mask': attention_mask.flatten(), } def get_dataset(dataset_name, train_key, tokenizer, max_seq): raw_datasets = load_dataset(dataset_name) lm_datasets = prepare_datasets(raw_datasets, 
train_key, tokenizer, max_seq) train_dataset = lm_datasets[train_key] eval_dataset = lm_datasets['test_sft'] return train_dataset, eval_dataset def main(): dataset_name = "HuggingFaceH4/ultrachat_200k" model_name = "NousResearch/Yarn-Mistral-7b-128k" train_key = 'train_sft' max_seq = 512 set_seed(123) model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, use_cache=False) model.enable_input_require_grads() peft_config = LoraConfig(inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1, task_type=TaskType.CAUSAL_LM) model = get_peft_model(model, peft_config) # tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.add_special_tokens({'pad_token': '[PAD]'}) embedding_size = model.get_input_embeddings().weight.shape[0] if len(tokenizer) > embedding_size: model.resize_token_embeddings(len(tokenizer)) train_dataset, eval_dataset = get_dataset(dataset_name, train_key, tokenizer, max_seq) training_args = TrainingArguments( output_dir="output", report_to="tensorboard", per_device_train_batch_size=1, gradient_accumulation_steps=4, learning_rate=7e-5, logging_steps=1, num_train_epochs=5, max_steps=-1, save_steps=100, save_total_limit=10, warmup_ratio=0.05, lr_scheduler_type='cosine', evaluation_strategy ='steps', eval_steps=250, per_device_eval_batch_size=2, gradient_checkpointing=True, ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, tokenizer=tokenizer, data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model), ) train_result = trainer.train() trainer.save_model() # Saves the tokenizer too for easy upload if __name__ == "__main__": main() ``` ### Expected behavior Training should not crash. Instead, I get ``` warnings.warn("None of the inputs have requires_grad=True. Gradients will be None") Traceback (most recent call last): File "repro.py", line 154, in <module> main() File "repro.py", line 149, in main train_result = trainer.train() File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 1543, in train return inner_training_loop( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 1860, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/trainer.py", line 2746, in training_step self.accelerator.backward(loss) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/accelerate/accelerator.py", line 1905, in backward loss.backward(**kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/_tensor.py", line 487, in backward torch.autograd.backward( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn 0%| ```
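A minimal sketch of the fix suggested in the comments above, applied to the script's `TrainingArguments` (all other arguments as in the script):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    gradient_checkpointing=True,
    # Non-reentrant checkpointing keeps requires_grad flowing to the LoRA inputs.
    gradient_checkpointing_kwargs={"use_reentrant": False},
    # ... remaining arguments as in the script above ...
)
```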
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28335/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28334
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28334/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28334/comments
https://api.github.com/repos/huggingface/transformers/issues/28334/events
https://github.com/huggingface/transformers/issues/28334
2,064,878,520
I_kwDOCUB6oc57E4u4
28,334
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
{ "login": "hadim", "id": 528003, "node_id": "MDQ6VXNlcjUyODAwMw==", "avatar_url": "https://avatars.githubusercontent.com/u/528003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hadim", "html_url": "https://github.com/hadim", "followers_url": "https://api.github.com/users/hadim/followers", "following_url": "https://api.github.com/users/hadim/following{/other_user}", "gists_url": "https://api.github.com/users/hadim/gists{/gist_id}", "starred_url": "https://api.github.com/users/hadim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hadim/subscriptions", "organizations_url": "https://api.github.com/users/hadim/orgs", "repos_url": "https://api.github.com/users/hadim/repos", "events_url": "https://api.github.com/users/hadim/events{/privacy}", "received_events_url": "https://api.github.com/users/hadim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "```diff\r\n+ inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(torch.float32).to(\"mps\")\r\n- inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(\"mps\")\r\n```\r\nwill fix this. \r\nI think this is more a mps issue as the conversion works well otherwise. Do you want to open a PR for a fix? \r\n\r\nhttps://github.com/huggingface/transformers/blob/b1292bca6923cfbc9cb3f70cb55df57e4e17e630/src/transformers/models/sam/processing_sam.py#L123\r\n", "Using `.to(torch.float32)` makes it work indeed. Thanks.\r\n\r\nI'm happy to open a PR but then I am not sure what kind of fix it should contain here since it seems to be more of an `mps` issue and your simple fix makes it work.", "Yep let's keep it that way then 😉 " ]
1,704
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.20.0 - Safetensors version: 0.3.3 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: `mps` - Using distributed or parallel set-up in script?: NO ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction From https://huggingface.co./facebook/sam-vit-base#usage ```python import os # See https://github.com/pytorch/pytorch/issues/77764 os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" from PIL import Image import requests from transformers import SamModel, SamProcessor device = "mps" model = SamModel.from_pretrained("facebook/sam-vit-base") model = model.to(device) processor = SamProcessor.from_pretrained("facebook/sam-vit-base") img_url = "https://huggingface.co./ybelkada/segment-anything/resolve/main/assets/car.png" raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB") input_points = [[[450, 600]]] # THE BUG HAPPENS ONLY WHEN input_points IS NOT None # input_points = None inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(device) outputs = model(**inputs) masks = processor.image_processor.post_process_masks( outputs.pred_masks.cpu(), inputs["original_sizes"].cpu(), inputs["reshaped_input_sizes"].cpu(), ) scores = outputs.iou_scores ``` Error ``` { "name": "TypeError", "message": "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.", "stack": "--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[4], line 24 21 input_points = [[[450, 600]]] 22 # input_points = None ---> 24 inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\").to(device) 25 outputs = model(**inputs) 26 masks = processor.image_processor.post_process_masks( 27 outputs.pred_masks.cpu(), 28 inputs[\"original_sizes\"].cpu(), 29 inputs[\"reshaped_input_sizes\"].cpu(), 30 ) File ~/Code/libs/ishisense/.pixi/env/lib/python3.10/site-packages/transformers/feature_extraction_utils.py:231, in BatchFeature.to(self, *args, **kwargs) 227 for k, v in self.items(): 228 # check if v is a floating point 229 if torch.is_floating_point(v): 230 # cast and send to device --> 231 new_data[k] = v.to(*args, **kwargs) 232 elif device is not None: 233 new_data[k] = v.to(device=device) TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead." } ``` ### Expected behavior The script never fails when using `device = "cpu"` and seems to be specific to `device = "mps"`.
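Editor's note: a minimal sketch of the workaround discussed in the comments above, assuming an Apple-silicon machine with the `mps` backend and the `facebook/sam-vit-base` checkpoint. The only change versus the original snippet is casting the processor output to `torch.float32` before moving it to the device, as suggested in the maintainer's diff; everything else mirrors the issue's own code.

```python
import os

# As in the original snippet, allow CPU fallback for ops MPS does not implement
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch
import requests
from PIL import Image
from transformers import SamModel, SamProcessor

device = "mps"
model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)
processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

img_url = "https://huggingface.co./ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]]

# Cast to float32 *before* moving to "mps" so no float64 tensor ever reaches the MPS backend
inputs = processor(raw_image, input_points=input_points, return_tensors="pt").to(torch.float32).to(device)
outputs = model(**inputs)
```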
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28334/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28333
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28333/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28333/comments
https://api.github.com/repos/huggingface/transformers/issues/28333/events
https://github.com/huggingface/transformers/pull/28333
2,064,764,543
PR_kwDOCUB6oc5jLVu6
28,333
Fix `_merge_input_ids_with_image_features` for llava model
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Adressed the comments and moved the dummy test case into proper tests.\r\nlet me know if you would like something more involved test wise!", "Good to go ! Feel free to merge @VictorSanh 🤗 ", "I am not cool enough to have merge access. The time where i am merging stuff whenever i wanted on hf transformers is well passed haha", "so either you @ArthurZucker or @younesbelkada need to merge lol 😅\r\nbut perhaps i can be promoted to core maintainer with that PR @LysandreJik ?", "Ooops 🤣 ", "Considering it @VictorSanh!", "Hi,\r\nI am still getting the following error when I'm trying to finetune the model for ConditionalGeneration in the forward call:\r\n\r\nfinal_labels[batch_indices, text_to_overwrite] = labels[batch_indices, non_image_indices]\r\nIndexError: index 8 is out of bounds for dimension 1 with size 8\r\n\r\nThe same code works fine if I just change the model to another VLLM like InstructBlip.\r\n\r\nThank you and kind regards,\r\nAlexandros Xenos", "@alexandrosXe do you have a reproduction case we can start debugging from?", "> @alexandrosXe do you have a reproduction case we can start debugging from?\r\n@VictorSanh Thank you for replying so fast! \r\nThis code can reproduce my error: \r\n\r\n```\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import AutoProcessor, LlavaForConditionalGeneration\r\n\r\nmodel = LlavaForConditionalGeneration.from_pretrained(\"llava-hf/llava-1.5-7b-hf\")\r\nprocessor = AutoProcessor.from_pretrained(\"llava-hf/llava-1.5-7b-hf\")\r\n\r\nprompt = \"<image>\\nUSER: What's the content of the image?\\nASSISTANT:\"\r\nurl = \"https://www.ilankelman.org/stopsigns/australia.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\nanswer = \"The image has a stop sign in the corner of the road\"\r\n\r\n\r\ninputs = processor(text=prompt, images=image, return_tensors=\"pt\")\r\n\r\nlabels = processor.tokenizer(answer, return_tensors=\"pt\")\r\nlabel_ids = labels[\"input_ids\"]\r\nlabel_mask = labels[\"attention_mask\"].bool()\r\nlabel_ids = label_ids.masked_fill(~label_mask, -100) #We dont count the loss on the padded tokens\r\nloss = model(**inputs, labels = label_ids).loss\r\nprint(\"loss: \", loss)\r\n```", "Can you also make sure you are using the latest version of transformers? ", "> Can you also make sure you are using the latest version of transformers?\r\n\r\n@ArthurZucker I am using the transformers 4.37.2 version. ", "Yes, it seems `oss = model(**inputs, labels = inputs[\"input_ids\"])` works well however. Loss was made to be of the same size as the input ids:\r\n\r\n> labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):\r\n Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,\r\n config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored\r\n (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.\r\n\r\nfrom the doc it should be length sequence length", "@ArthurZucker Thank you, it was my fault using wrong labels. Now everything works fine!" ]
1,704
1,706
1,704
MEMBER
null
Bug detected by @Sakshi-Bhargava The method `LlavaForConditionalGeneration._merge_input_ids_with_image_features` takes care of merging the input_embeds with the hidden states obtained from the vision encoder. The merge output is fed to the language model part of the model. However, `labels` was omitted from the merge, and when trying to compute a loss, the shapes of the logits and the labels are not compatible. This fix ensures that `labels` is also properly merged. Dummy reproduction case (still respect the model hidden sizes): ```python import torch from transformers import LlavaForConditionalGeneration model = LlavaForConditionalGeneration.from_pretrained("llava-hf/llava-1.5-13b-hf") pixel_values = torch.randn( (2, 3, 336, 336), dtype=torch.float ) input_ids = torch.tensor( [ [32001, 32001, 1, 15043, 7084, 32000, 29871, 13, 7900], [1, 15043, 7084, 29901, 29871, 32000, 29871, 13, 7900] ], dtype=torch.long ) attention_mask = torch.tensor( [ [0, 0, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1] ], dtype=torch.long ) output = model( pixel_values=pixel_values, input_ids=input_ids, attention_mask=attention_mask, labels=input_ids, ) ``` will yield the following error without the fix ```bash output = model( File "/victor/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/victor/code/transformers/src/transformers/models/llava/modeling_llava.py", line 486, in forward shift_labels = labels[..., 1:][shift_attention_mask.to(labels.device) != 0].contiguous() IndexError: The shape of the mask [2, 583] at index 1 does not match the shape of the indexed tensor [2, 8] at index 1 ``` cc @gullalc @younesbelkada @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28333/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28333/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28333", "html_url": "https://github.com/huggingface/transformers/pull/28333", "diff_url": "https://github.com/huggingface/transformers/pull/28333.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28333.patch", "merged_at": 1704872013000 }
https://api.github.com/repos/huggingface/transformers/issues/28332
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28332/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28332/comments
https://api.github.com/repos/huggingface/transformers/issues/28332/events
https://github.com/huggingface/transformers/issues/28332
2,064,684,522
I_kwDOCUB6oc57EJXq
28,332
Use mmap to accelerate checkpoint loading
{ "login": "weimingzha0", "id": 38259546, "node_id": "MDQ6VXNlcjM4MjU5NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/38259546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weimingzha0", "html_url": "https://github.com/weimingzha0", "followers_url": "https://api.github.com/users/weimingzha0/followers", "following_url": "https://api.github.com/users/weimingzha0/following{/other_user}", "gists_url": "https://api.github.com/users/weimingzha0/gists{/gist_id}", "starred_url": "https://api.github.com/users/weimingzha0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weimingzha0/subscriptions", "organizations_url": "https://api.github.com/users/weimingzha0/orgs", "repos_url": "https://api.github.com/users/weimingzha0/repos", "events_url": "https://api.github.com/users/weimingzha0/events{/privacy}", "received_events_url": "https://api.github.com/users/weimingzha0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing as the PR was merged" ]
1,704
1,705
1,705
CONTRIBUTOR
null
### Feature request Use torch.load(mmap=True) if possible. ### Motivation PyTorch 2.1 allows mmap() when loading checkpoints ([doc](https://pytorch.org/docs/stable/generated/torch.load.html)) I tested on a 6B model: with mmap(), it takes 2.x seconds to load (vs 12.x seconds without using mmap) ### Your contribution https://github.com/huggingface/transformers/pull/28331
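Editor's note: a hedged sketch of what the requested change amounts to at the call site, assuming PyTorch >= 2.1 and a plain `.bin` checkpoint; `checkpoint_path` is a placeholder, not a path from the issue.

```python
import torch

checkpoint_path = "pytorch_model.bin"  # hypothetical path to a torch checkpoint file

# mmap=True (available since PyTorch 2.1) maps the file into memory instead of
# reading it eagerly, which is where the reported 12s -> 2s load-time gap comes from.
state_dict = torch.load(checkpoint_path, map_location="cpu", mmap=True)
```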
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28332/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28332/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28331
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28331/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28331/comments
https://api.github.com/repos/huggingface/transformers/issues/28331/events
https://github.com/huggingface/transformers/pull/28331
2,064,682,839
PR_kwDOCUB6oc5jLEMz
28,331
Use mmap option to load_state_dict
{ "login": "weimingzha0", "id": 38259546, "node_id": "MDQ6VXNlcjM4MjU5NTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/38259546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weimingzha0", "html_url": "https://github.com/weimingzha0", "followers_url": "https://api.github.com/users/weimingzha0/followers", "following_url": "https://api.github.com/users/weimingzha0/following{/other_user}", "gists_url": "https://api.github.com/users/weimingzha0/gists{/gist_id}", "starred_url": "https://api.github.com/users/weimingzha0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weimingzha0/subscriptions", "organizations_url": "https://api.github.com/users/weimingzha0/orgs", "repos_url": "https://api.github.com/users/weimingzha0/repos", "events_url": "https://api.github.com/users/weimingzha0/events{/privacy}", "received_events_url": "https://api.github.com/users/weimingzha0/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "> torch bin files are now deprecated in favor of safetensors but no harm in improving this! Coul you add a test as well in test_modelling_common? 🤗\r\n\r\nSure. Please let me know if my test case is correct.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28331). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> Thanks! My last query would be to make sure this works for the deepspeed / sdpa case (device_map = \"meta\")!\r\n\r\nDo you mean add a test case for device_map == \"meta' ?", "That would be a good way of making sure this will be fine with DeepSpeedZero (the if branch) 🤗 ", "> That would be a good way of making sure this will be fine with DeepSpeedZero (the if branch) 🤗\r\n\r\nCurrently none of existing tests covers the branch: \r\nI even added an assert in the \"if branch\" and all tests still passed except for one irrelevant error (see https://github.com/huggingface/transformers/pull/28401)\r\n\r\nAnyway, I added a guard for \"meta\" device.\r\n\r\n" ]
1,704
1,704
1,704
CONTRIBUTOR
null
# Use torch.load(mmap=True) to accelerate checkpoint loading https://github.com/huggingface/transformers/issues/28332 cc @SunMarc @sgugger
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28331/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28331", "html_url": "https://github.com/huggingface/transformers/pull/28331", "diff_url": "https://github.com/huggingface/transformers/pull/28331.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28331.patch", "merged_at": 1704877051000 }
https://api.github.com/repos/huggingface/transformers/issues/28330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28330/comments
https://api.github.com/repos/huggingface/transformers/issues/28330/events
https://github.com/huggingface/transformers/issues/28330
2,064,444,622
I_kwDOCUB6oc57DOzO
28,330
Error with BetterTransformer Optimizations in Transformers Library with Starcoderplus Model
{ "login": "Taishi-N324", "id": 82321333, "node_id": "MDQ6VXNlcjgyMzIxMzMz", "avatar_url": "https://avatars.githubusercontent.com/u/82321333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Taishi-N324", "html_url": "https://github.com/Taishi-N324", "followers_url": "https://api.github.com/users/Taishi-N324/followers", "following_url": "https://api.github.com/users/Taishi-N324/following{/other_user}", "gists_url": "https://api.github.com/users/Taishi-N324/gists{/gist_id}", "starred_url": "https://api.github.com/users/Taishi-N324/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Taishi-N324/subscriptions", "organizations_url": "https://api.github.com/users/Taishi-N324/orgs", "repos_url": "https://api.github.com/users/Taishi-N324/repos", "events_url": "https://api.github.com/users/Taishi-N324/events{/privacy}", "received_events_url": "https://api.github.com/users/Taishi-N324/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think the idea is to remove the call to `model.to_bettertransformer()` and `with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):` . The error should be updated to mention this! would you like to open a PR?", "I'm sorry, I don't quite understand. What do you mean?\r\n\r\nAre you saying to remove both `model.to_bettertransformer()` and `with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):`?\r\n\r\nOriginally, what I want to do is to wrap GPTBigCode with a bettertransformer and then wrap it with fsdp to train it. However, I encounter this error during the process. I was trying with a minimal configuration, but I still encounter errors.", "It seems that GPTBigCode was supported during https://github.com/huggingface/optimum/pull/1252, but in the latest branch, it appears to no longer be supported according to https://github.com/huggingface/optimum/tree/main/optimum/bettertransformer/models.", "The error means that now transformers supports sdpa natively so you should not need to use better transformer for the usage you are looking for 🤗 ", "Thank you for your kindness!\r\nI understand now.\r\nThank you for your assistance." ]
1,704
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.13 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @fxmarty ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to run a script using PyTorch and the transformers library from Hugging Face to leverage a model called "starcoderplus" for text generation. Here's the script I'm using: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderplus") model = AutoModelForCausalLM.from_pretrained("bigcode/starcoderplus", torch_dtype=torch.float16).to("cuda") # convert the model to BetterTransformer model.to_bettertransformer() input_text = "Hello my dog is cute and" inputs = tokenizer(input_text, return_tensors="pt").to("cuda") with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False): outputs = model.generate(**inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` from https://huggingface.co./docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention I encounter the following error: ``` ValueError: Transformers now supports natively BetterTransformer optimizations (torch.nn.functional.scaled_dot_product_attention) for the model type gpt_bigcode. Please upgrade to transformers>=4.36 and torch>=2.1.1 to use it. Details: https://huggingface.co./docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention ``` However, I am already using torch version 2.1.2+cu118 and transformers version 4.36.2. ### Expected behavior Better transformer available in starcoderplus
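Editor's note: a minimal sketch of the resolution suggested in the comments below — with transformers >= 4.36 and torch >= 2.1.1 the gpt_bigcode architecture uses `torch.nn.functional.scaled_dot_product_attention` natively, so the `to_bettertransformer()` call and the `sdp_kernel` context can simply be dropped. This is the issue's own script minus those two pieces, not an officially documented recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderplus")
# SDPA is used natively by the model; no model.to_bettertransformer() call is needed
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoderplus", torch_dtype=torch.float16
).to("cuda")

inputs = tokenizer("Hello my dog is cute and", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```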
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28330/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28329/comments
https://api.github.com/repos/huggingface/transformers/issues/28329/events
https://github.com/huggingface/transformers/issues/28329
2,064,382,134
I_kwDOCUB6oc57C_i2
28,329
[TrOCR] Dealing with occasional multi-line images
{ "login": "aureliusnoble", "id": 16746857, "node_id": "MDQ6VXNlcjE2NzQ2ODU3", "avatar_url": "https://avatars.githubusercontent.com/u/16746857?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aureliusnoble", "html_url": "https://github.com/aureliusnoble", "followers_url": "https://api.github.com/users/aureliusnoble/followers", "following_url": "https://api.github.com/users/aureliusnoble/following{/other_user}", "gists_url": "https://api.github.com/users/aureliusnoble/gists{/gist_id}", "starred_url": "https://api.github.com/users/aureliusnoble/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aureliusnoble/subscriptions", "organizations_url": "https://api.github.com/users/aureliusnoble/orgs", "repos_url": "https://api.github.com/users/aureliusnoble/repos", "events_url": "https://api.github.com/users/aureliusnoble/events{/privacy}", "received_events_url": "https://api.github.com/users/aureliusnoble/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!" ]
1,704
1,707
null
NONE
null
Hi, I am using TrOCR to transcribe historical (18th century) handwritten French data. I am feeding in text-line images which are automatically segmented. However, due to the nature of the documents, sometimes this segmentation is not perfect, and the image contains multiple lines of text. These have been transcribed with the \n token. I know that ideally I would not pass in multi-line images, however these will be occasionally present in the data I want to run inference on. What are your thoughts on dealing with these when fine-tuning TrOCR? About 5-10% of lines include a "\n" character, about 1% contain multiple "\n" characters. Currently I just convert these to whitespaces. cc: @NielsRogge . Any advice would be greatly appreciated.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28329/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28328/comments
https://api.github.com/repos/huggingface/transformers/issues/28328/events
https://github.com/huggingface/transformers/issues/28328
2,064,359,352
I_kwDOCUB6oc57C5-4
28,328
Implement Half-Quadratic Quantization (HQQ)
{ "login": "michaelfeil", "id": 63565275, "node_id": "MDQ6VXNlcjYzNTY1Mjc1", "avatar_url": "https://avatars.githubusercontent.com/u/63565275?v=4", "gravatar_id": "", "url": "https://api.github.com/users/michaelfeil", "html_url": "https://github.com/michaelfeil", "followers_url": "https://api.github.com/users/michaelfeil/followers", "following_url": "https://api.github.com/users/michaelfeil/following{/other_user}", "gists_url": "https://api.github.com/users/michaelfeil/gists{/gist_id}", "starred_url": "https://api.github.com/users/michaelfeil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/michaelfeil/subscriptions", "organizations_url": "https://api.github.com/users/michaelfeil/orgs", "repos_url": "https://api.github.com/users/michaelfeil/repos", "events_url": "https://api.github.com/users/michaelfeil/events{/privacy}", "received_events_url": "https://api.github.com/users/michaelfeil/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "cc @younesbelkada ", "This is very cool ! We are definitely interested in adding HQQ inference support in transformers. The cool thing is that indeed it seems you don't need to pre-quantize the weights in order to quantize the models. We'll explore a bit on our side and let you know how it goes\r\ncc @SunMarc @Titus-von-Koeller ", "Hi! I am the maintainer of the HQQ project, happy to assist with anything needed !", "Very glad to e-meet you @mobicham ! do you have an email I can use so that we can contact you through Slack to iterate quickly?", "Glad to e-meet you @younesbelkada as well! Sure: [email protected]", "thanks @mobicham you should have received an invite by now! " ]
1,704
1,704
null
CONTRIBUTOR
null
### Feature request I would be curious if https://github.com/mobiusml/hqq can be supported in similar fashion to `autogptq` or `autoawq`. hqq is most similar to `bitsandbytes` `nf4/fp4` datatypes, but offers 2, 3, 4 and 8 bit quantization. CC: @mobicham ### Motivation HQQ performs 2/3/4 bit quantization and can act as a drop-in replacement. It is fast for in-place quantization of non-pre-quantized weights and, similar to bnb, expands to fp16 (or similar) at runtime. It would be cool to support this for models like Mixtral to cut down the VRAM requirement. ### Your contribution I currently have no capacity for submitting an integration, but I'm happy to review or assist.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28328/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28328/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28327/comments
https://api.github.com/repos/huggingface/transformers/issues/28327/events
https://github.com/huggingface/transformers/issues/28327
2,064,215,480
I_kwDOCUB6oc57CW24
28,327
BARK: 'GenerationConfig' object has no attribute 'semantic_config'
{ "login": "Cazforshort", "id": 6918831, "node_id": "MDQ6VXNlcjY5MTg4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/6918831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Cazforshort", "html_url": "https://github.com/Cazforshort", "followers_url": "https://api.github.com/users/Cazforshort/followers", "following_url": "https://api.github.com/users/Cazforshort/following{/other_user}", "gists_url": "https://api.github.com/users/Cazforshort/gists{/gist_id}", "starred_url": "https://api.github.com/users/Cazforshort/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Cazforshort/subscriptions", "organizations_url": "https://api.github.com/users/Cazforshort/orgs", "repos_url": "https://api.github.com/users/Cazforshort/repos", "events_url": "https://api.github.com/users/Cazforshort/events{/privacy}", "received_events_url": "https://api.github.com/users/Cazforshort/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @ylacombe as well. Seems like this should be supported from the doc, so might just be the generation config not being initialized properly. \r\nMaybe `model.generation_config = model.config` will fix this in the meantime @Cazforshort ", "> model.generation_config = model.config\r\n\r\nNo luck\r\n\r\n```\r\nMODEL_NAME = \"suno/bark\"\r\nvoice_preset = \"v2/en_speaker_9\"\r\nsampling_rate = 24000\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\nprocessor = AutoProcessor.from_pretrained(MODEL_NAME)\r\nmin_eos_p=0.5\r\n\r\ntext_prompt = \"[frowns slightly] Hmmph, typical. Ignoring people is quite common among adults, especially when faced with unknown situations. However, we must press forward nonetheless.\"\r\nsentences = nltk.sent_tokenize(text_prompt)\r\n\r\n\r\n# Initializing Bark sub-modules configurations.\r\nsemantic_config = BarkSemanticConfig(min_eos_p=min_eos_p)\r\ncoarse_acoustics_config = BarkCoarseConfig()\r\nfine_acoustics_config = BarkFineConfig()\r\ncodec_config = AutoConfig.from_pretrained(\"facebook/encodec_24khz\")\r\nconfiguration = BarkConfig.from_sub_model_configs(\r\n semantic_config, coarse_acoustics_config, fine_acoustics_config, codec_config\r\n )\r\nmodel = BarkModel(config = configuration).to(device)\r\nmodel.generation_config = model.config\r\n```\r\n\r\nTypeError: transformers.models.bark.generation_configuration_bark.BarkSemanticGenerationConfig() argument after ** must be a mapping, not BarkSemanticConfig", "Hi @Cazforshort, you have to use a `BarkGenerationConfig` instead of a `GenerationConfig`:\r\n```python\r\nfrom transformers.models.bark.generation_configuration_bark import BarkGenerationConfig\r\n\r\nmodel.generation_config = BarkGenerationConfig()\r\n```\r\n\r\nBTW, to modify `min_eos_p`, you don't have to necessarily pass by the generation config, you can simply pass `min_eos_p=XXX` to `BarkModel.generate`. \r\n", "> Hi @Cazforshort, you have to use a `BarkGenerationConfig` instead of a `GenerationConfig`:\r\n> \r\n> ```python\r\n> from transformers.models.bark.generation_configuration_bark import BarkGenerationConfig\r\n> \r\n> model.generation_config = BarkGenerationConfig()\r\n> ```\r\n> \r\n> BTW, to modify `min_eos_p`, you don't have to necessarily pass by the generation config, you can simply pass `min_eos_p=XXX` to `BarkModel.generate`.\r\n\r\nStill seeing the same errors. I tried just using that gen fig and also starting with the other config and overwriting, but I just get the same sort of errors.\r\n\r\n```\r\nfrom transformers import (\r\n BarkSemanticConfig,\r\n BarkCoarseConfig,\r\n BarkFineConfig,\r\n BarkModel,\r\n BarkConfig,\r\n AutoConfig,\r\n )\r\nfrom transformers.models.bark.generation_configuration_bark import BarkGenerationConfig\r\nimport torch\r\nfrom transformers import AutoProcessor, set_seed\r\nfrom optimum.bettertransformer import BetterTransformer\r\nimport scipy\r\nimport numpy as np\r\nimport nltk # we'll use this to split into sentences\r\n\r\n\r\nMODEL_NAME = \"suno/bark\"\r\nvoice_preset = \"v2/en_speaker_9\"\r\nsampling_rate = 24000\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\nprocessor = AutoProcessor.from_pretrained(MODEL_NAME)\r\nmin_eos_p=0.5\r\n\r\ntext_prompt = \"[frowns slightly] Hmmph, typical. Ignoring people is quite common among adults, especially when faced with unknown situations. 
However, we must press forward nonetheless.\"\r\nsentences = nltk.sent_tokenize(text_prompt)\r\n\r\n# Initializing Bark sub-modules configurations.\r\nsemantic_config = BarkSemanticConfig(min_eos_p=min_eos_p)\r\ncoarse_acoustics_config = BarkCoarseConfig()\r\nfine_acoustics_config = BarkFineConfig()\r\ncodec_config = AutoConfig.from_pretrained(\"facebook/encodec_24khz\")\r\n\r\nconfiguration = BarkConfig.from_sub_model_configs(\r\n semantic_config, \r\n coarse_acoustics_config, \r\n fine_acoustics_config, \r\n codec_config\r\n )\r\ngen_configuration = BarkGenerationConfig(\r\n semantic_config, \r\n coarse_acoustics_config, \r\n fine_acoustics_config, \r\n codec_config\r\n )\r\n\r\nmodel = BarkModel(config = configuration).to(device)\r\nmodel.generation_config = gen_configuration\r\ninputs = processor(sentences,voice_preset=voice_preset).to(device)\r\n\r\noutput = model.generate(**inputs)\r\n```\r\n**Error:\r\nTypeError: transformers.models.bark.generation_configuration_bark.BarkSemanticGenerationConfig() argument after ** must be a mapping, not BarkSemanticGenerationConfig**\r\n\r\nmodel.generate seems to be ignoring the min_eos_p parameter. I tried passing it there but the results were identical unlike when I tested it without transformers.", "You have to use `BarkGenerationConfig.from_sub_model_configs` !\r\n\r\n> model.generate seems to be ignoring the min_eos_p parameter. I tried passing it there but the results were identical unlike when I tested it without transformers.\r\n\r\nCan you send a snippet for that ?", "> You have to use `BarkGenerationConfig.from_sub_model_configs` !\r\n> \r\n> > model.generate seems to be ignoring the min_eos_p parameter. I tried passing it there but the results were identical unlike when I tested it without transformers.\r\n> \r\n> Can you send a snippet for that ?\r\n\r\nHmm, same error about passing a config instead of a mapping.\r\n\r\n`MODEL_NAME = \"suno/bark\"\r\nvoice_preset = \"v2/en_speaker_9\"\r\nsampling_rate = 24000\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\nprocessor = AutoProcessor.from_pretrained(MODEL_NAME)\r\nmin_eos_p=0.5\r\n\r\ntext_prompt = \"[frowns slightly] Hmmph, typical. Ignoring people is quite common among adults, especially when faced with unknown situations. However, we must press forward nonetheless.\"\r\nsentences = nltk.sent_tokenize(text_prompt)\r\n\r\n# Initializing Bark sub-modules configurations.\r\nsemantic_config = BarkSemanticConfig(min_eos_p=min_eos_p)\r\ncoarse_acoustics_config = BarkCoarseConfig()\r\nfine_acoustics_config = BarkFineConfig()\r\ncodec_config = AutoConfig.from_pretrained(\"facebook/encodec_24khz\")\r\n\r\nconfiguration = BarkConfig.from_sub_model_configs(\r\n semantic_config, \r\n coarse_acoustics_config, \r\n fine_acoustics_config, \r\n codec_config\r\n )\r\ngen_configuration = BarkGenerationConfig.from_sub_model_configs(\r\n semantic_config, \r\n coarse_acoustics_config, \r\n fine_acoustics_config, \r\n )\r\n# model = BarkModel.from_pretrained(MODEL_NAME, torch_dtype=torch.float16).to(device)\r\nmodel = BarkModel(config = configuration).to(device)\r\nmodel.generation_config = gen_configuration\r\ninputs = processor(sentences,voice_preset=voice_preset).to(device)\r\n\r\noutput = model.generate(**inputs)`\r\n\r\n**Error:\r\nTypeError: transformers.models.bark.generation_configuration_bark.BarkSemanticGenerationConfig() argument after ** must be a mapping, not BarkSemanticGenerationConfig**\r\n\r\nAs for passing it directly to generate, I think I wasn't setting it low enough. 
It seems to be dropping it, but little static is keeping the clips from shortening. So it probably was working and I just need to play around with temps.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info python 3.10.3 Windows 10 transformers 4.37.0.dev0 torch 2.0.1 ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I am unable to use the Bark config files (necessary to set min_eos_p). the smeantic configuration is not being passed to the generationconfig properly for some reason. ``` import nltk import torch from transformers import ( BarkSemanticConfig, BarkCoarseConfig, BarkFineConfig, BarkModel, BarkConfig, AutoConfig, ) from transformers import AutoProcessor, set_seed MODEL_NAME = "suno/bark" voice_preset = "v2/en_speaker_9" sampling_rate = 24000 device = "cuda:0" if torch.cuda.is_available() else "cpu" processor = AutoProcessor.from_pretrained(MODEL_NAME) min_eos_p=0.5 text_prompt = "[frowns slightly] Hmmph, typical. Ignoring people is quite common among adults, especially when faced with unknown situations. However, we must press forward nonetheless." sentences = nltk.sent_tokenize(text_prompt) # Initializing Bark sub-modules configurations. semantic_config = BarkSemanticConfig(min_eos_p=min_eos_p) coarse_acoustics_config = BarkCoarseConfig() fine_acoustics_config = BarkFineConfig() codec_config = AutoConfig.from_pretrained("facebook/encodec_24khz") configuration = BarkConfig.from_sub_model_configs( semantic_config, coarse_acoustics_config, fine_acoustics_config, codec_config ) model = BarkModel(config = configuration).to(device) output = model.generate(**inputs) ``` **Error:** **AttributeError: 'GenerationConfig' object has no attribute 'semantic_config'** ### Expected behavior min_eos_p should be set by the config and used to generate speech from bark.
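Editor's note: a short sketch of the route suggested by the maintainer in the comments above — passing `min_eos_p` straight to `BarkModel.generate` on the pretrained checkpoint instead of rebuilding the configs by hand. The value 0.05 is only an illustrative guess, not a recommendation from the thread.

```python
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark")
model = BarkModel.from_pretrained("suno/bark")

inputs = processor("Hello, my dog is cute", voice_preset="v2/en_speaker_9")

# Per the maintainer's comment, min_eos_p can be passed directly to generate;
# it is picked up by the semantic generation step.
audio = model.generate(**inputs, min_eos_p=0.05)
```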
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28327/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28326/comments
https://api.github.com/repos/huggingface/transformers/issues/28326/events
https://github.com/huggingface/transformers/pull/28326
2,064,072,295
PR_kwDOCUB6oc5jJAaB
28,326
Add the XPU device check for pipeline mode
{ "login": "yuanwu2017", "id": 34643241, "node_id": "MDQ6VXNlcjM0NjQzMjQx", "avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuanwu2017", "html_url": "https://github.com/yuanwu2017", "followers_url": "https://api.github.com/users/yuanwu2017/followers", "following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}", "gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions", "organizations_url": "https://api.github.com/users/yuanwu2017/orgs", "repos_url": "https://api.github.com/users/yuanwu2017/repos", "events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}", "received_events_url": "https://api.github.com/users/yuanwu2017/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Feel free to ping me for a review when this is ready! ", "Hi @ArthurZucker,\r\nI tested the failed run_tests case in my local machine, but I got following warning. I think the case failure is not caused by my patch. Do you know how to run the case correctly? Thanks a lot.\r\n\r\n```\r\n(upstream) [yuanwu@skyocean transformers]$ pytest tests/test_modeling_flax_common.py\r\n============================================================================================ test session starts =============================================================================================\r\nplatform linux -- Python 3.9.18, pytest-7.4.0, pluggy-1.0.0\r\nrootdir: /mnt/disk4/yuanwu/workspace/transformers\r\nconfigfile: pyproject.toml\r\nplugins: anyio-3.5.0, xdist-3.5.0, timeout-2.2.0, hypothesis-6.92.2, dash-2.14.2\r\ncollected 0 items\r\n\r\n============================================================================================== warnings summary ==============================================================================================\r\n../../../../../home/yuanwu/.conda/envs/upstream/lib/python3.9/site-packages/_pytest/config/__init__.py:1373\r\n /home/yuanwu/.conda/envs/upstream/lib/python3.9/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n============================================================================================= 1 warning in 0.38s =============================================================================================\r\n```", "@ArthurZucker I think the patch is ready. Please review. Thanks.", "@ArthurZucker Please help to review.", "Yes sorry for the delay! ", "Hi @ArthurZucker \r\nDone for raising error when device is not available.", "Thanks @ArthurZucker \r\nDone.", "Thanks, let's make sure the CI is green!", "@yuanwu2017 There was a fix merged into main which resolves the currently failing tests - could you rebase to include these and trigger an new CI run? " ]
1,704
1,705
1,705
CONTRIBUTOR
null
When setting xpu device for pipeline, It needs to use is_torch_xpu_available to load ipex and determine whether the device is available. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
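Editor's note: a rough sketch of the check this PR is about, assuming an Intel GPU machine with intel_extension_for_pytorch (IPEX) installed; the exact pipeline task and model here are placeholders, not taken from the PR.

```python
from transformers import pipeline
from transformers.utils import is_torch_xpu_available

# is_torch_xpu_available() loads IPEX under the hood and reports whether
# an XPU device can actually be used, as described in the PR summary above.
device = "xpu" if is_torch_xpu_available() else "cpu"

pipe = pipeline("text-generation", model="gpt2", device=device)
print(pipe("Hello, my name is", max_new_tokens=10)[0]["generated_text"])
```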
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28326/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28326", "html_url": "https://github.com/huggingface/transformers/pull/28326", "diff_url": "https://github.com/huggingface/transformers/pull/28326.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28326.patch", "merged_at": 1705333152000 }
https://api.github.com/repos/huggingface/transformers/issues/28325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28325/comments
https://api.github.com/repos/huggingface/transformers/issues/28325/events
https://github.com/huggingface/transformers/pull/28325
2,064,055,311
PR_kwDOCUB6oc5jI8tT
28,325
Remove token_type_ids from model_input_names (like #24788)
{ "login": "Apsod", "id": 5305850, "node_id": "MDQ6VXNlcjUzMDU4NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5305850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Apsod", "html_url": "https://github.com/Apsod", "followers_url": "https://api.github.com/users/Apsod/followers", "following_url": "https://api.github.com/users/Apsod/following{/other_user}", "gists_url": "https://api.github.com/users/Apsod/gists{/gist_id}", "starred_url": "https://api.github.com/users/Apsod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Apsod/subscriptions", "organizations_url": "https://api.github.com/users/Apsod/orgs", "repos_url": "https://api.github.com/users/Apsod/repos", "events_url": "https://api.github.com/users/Apsod/events{/privacy}", "received_events_url": "https://api.github.com/users/Apsod/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the quick reply! \r\nI ran the following: \r\n\r\n```\r\nRUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/models/gpt_sw3/test_tokenization_gpt_sw3.py\r\n```\r\nIt did fail a test, namely an equality test where the reference output has token_type_ids.\r\n\r\nIt also (catastrophically) failed a test that referenced a closed model. \r\n\r\nI've removed the offending token_type_ids in the equality check and updated the model reference to an openly available model. It now passes all tests. ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28325). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,704
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes an issue where the GPTSw3Tokenizer returns token_type_ids. These were not used in training and including them significantly degrades performance. This change is the same fix applied in #24788, which was later (erroneously?) reverted by #23909. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @ArthurZucker @amyeroberts Members/contributors who may be interested in your PR: @DarioSucic @ekgren @bjornrun <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28325", "html_url": "https://github.com/huggingface/transformers/pull/28325", "diff_url": "https://github.com/huggingface/transformers/pull/28325.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28325.patch", "merged_at": 1704306367000 }
https://api.github.com/repos/huggingface/transformers/issues/28324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28324/comments
https://api.github.com/repos/huggingface/transformers/issues/28324/events
https://github.com/huggingface/transformers/issues/28324
2,063,793,570
I_kwDOCUB6oc57Av2i
28,324
FastTokenizer not using the user_defined_symbols defined in the SentencePiece Model
{ "login": "kitkhai", "id": 71968397, "node_id": "MDQ6VXNlcjcxOTY4Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/71968397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kitkhai", "html_url": "https://github.com/kitkhai", "followers_url": "https://api.github.com/users/kitkhai/followers", "following_url": "https://api.github.com/users/kitkhai/following{/other_user}", "gists_url": "https://api.github.com/users/kitkhai/gists{/gist_id}", "starred_url": "https://api.github.com/users/kitkhai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kitkhai/subscriptions", "organizations_url": "https://api.github.com/users/kitkhai/orgs", "repos_url": "https://api.github.com/users/kitkhai/repos", "events_url": "https://api.github.com/users/kitkhai/events{/privacy}", "received_events_url": "https://api.github.com/users/kitkhai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Additionally, is there a way to retrieve (and edit) the merge rules from \"slow\" & \"fast\" tokenizers respectively?", "Hey! Few things here. What you are trying to do is outside the scope of the supported features. Adding a token should be done using `tokenizer.add_tokens` function. \r\nThe fast version is for me more right than what you expect. If there are no `merges`, then there is absolutely no reason for the BPE model to fuse `'▁super', 'long', 'word'` into `superlongword`. Thus the slow version seems more wrong, and specifically because sentencepiece does not really allow adding tokens that way. " ]
1,704
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers.convert_slow_tokenizer import import_protobuf from transformers import AutoTokenizer from transformers import NllbTokenizer, NllbTokenizerFast checkpoint = "facebook/nllb-200-distilled-600M" tokenizer = AutoTokenizer.from_pretrained(checkpoint) tokenizer.save_pretrained("old_tokenizer") model_pb2 = import_protobuf() m = model_pb2.ModelProto() m.ParseFromString(open("./old_tokenizer/sentencepiece.bpe.model", 'rb').read()) piece = m.SentencePiece() piece.piece = "superlongword" piece.score = -10 piece.type = 4 m.pieces.extend([piece1]) with open("temp_eng_insert_user_def_sentencepiece.bpe.model", 'wb') as f: f.write(m.SerializeToString()) tokenizer_edited = NllbTokenizer(vocab_file="temp_sentencepiece.bpe.model", src_lang = "zho_Hans", tgt_lang = "eng_Latn") tokenizer_edited_fast = NllbTokenizerFast(vocab_file="temp_sentencepiece.bpe.model", src_lang = "zho_Hans", tgt_lang = "eng_Latn") sent = 'Hi there superlongword' print(sent) > Hi there superlongword print("original tokenizer: ", tokenizer.tokenize(sent)) > original tokenizer: ['▁Hi', '▁there', '▁super', 'long', 'word'] print("tokenizer with tokens: ", tokenizer_edited.tokenize(sent)) > tokenizer with tokens: ['▁Hi', '▁there', '▁', 'superlongword'] print("tokenizer with tokens (Fast): ", tokenizer_edited_fast.tokenize(sent)) > tokenizer with tokens (Fast): ['▁Hi', '▁there', '▁super', 'long', 'word'] ``` ### Expected behavior ```python > Hi there superlongword > original tokenizer: ['▁Hi', '▁there', '▁super', 'long', 'word'] > tokenizer with tokens: ['▁Hi', '▁there', '▁', 'superlongword'] > tokenizer with tokens (Fast): ['▁Hi', '▁there', '▁', 'superlongword'] ``` I faced a similar issue as raised by a [question ](https://discuss.huggingface.co/t/sentencepiece-user-defined-symbols-and-fast-tokenizers/52208)in the HF forum where the OP trainer the tokenizer with **user_defined_symbols** while in my case I added to the SentencePiece model file directly without training. Noted that I can just use the `add_tokens` method to achieve the same outcome but because of another issue that I raised #28218 , I would like to avoid the use of `add_tokens` method if possible.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28324/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/28323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28323/comments
https://api.github.com/repos/huggingface/transformers/issues/28323/events
https://github.com/huggingface/transformers/issues/28323
2,063,604,265
I_kwDOCUB6oc57ABop
28,323
OSError: image file is truncated (1 bytes not processed)
{ "login": "andysingal", "id": 20493493, "node_id": "MDQ6VXNlcjIwNDkzNDkz", "avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andysingal", "html_url": "https://github.com/andysingal", "followers_url": "https://api.github.com/users/andysingal/followers", "following_url": "https://api.github.com/users/andysingal/following{/other_user}", "gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}", "starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andysingal/subscriptions", "organizations_url": "https://api.github.com/users/andysingal/orgs", "repos_url": "https://api.github.com/users/andysingal/repos", "events_url": "https://api.github.com/users/andysingal/events{/privacy}", "received_events_url": "https://api.github.com/users/andysingal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [dataset's community tab](https://huggingface.co./datasets/mehul7/captioned_military_aircraft/discussions) instead?\r\n\r\nThanks!", "Can you please forward the issue there?\r\n\r\nOn Wed, Jan 3, 2024 at 20:48 Arthur ***@***.***> wrote:\r\n\r\n> Hey 🤗 thanks for opening an issue! We try to keep the github issues for\r\n> bugs/feature requests.\r\n> Could you ask your question on the dataset's community tab\r\n> <https://huggingface.co./datasets/mehul7/captioned_military_aircraft/discussions>\r\n> instead?\r\n>\r\n> Thanks!\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/28323#issuecomment-1875535846>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNKTATTEDLWOUBRDLJ3YMVZDPAVCNFSM6AAAAABBLEWXN6VHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQNZVGUZTKOBUGY>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n", "You found the bug, feel free to do so! 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info RTX 3090 ### Who can help? @younesbelkada @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from datasets import load_dataset dataset = load_dataset("mehul7/captioned_military_aircraft") from transformers import AutoImageProcessor checkpoint = "microsoft/resnet-50" image_processor = AutoImageProcessor.from_pretrained(checkpoint) import re from PIL import Image import io def contains_number(example): try: image = Image.open(io.BytesIO(example["image"]['bytes'])) t = image_processor(images=image, return_tensors="pt")['pixel_values'] except Exception as e: print(f"Error processing image:{example['text']}") return False return bool(re.search(r'\d', example['text'])) # Define a function to add the 'label' field def add_label(example): lab = example['text'].split() temp = 'NOT' for item in lab: if str(item[-1]).isdigit(): temp = item break example['label'] = temp return example # Filter the dataset # filtered_dataset = dataset.filter(contains_number) # Add the 'label' field in the dataset labeled_dataset = dataset.filter(contains_number).map(add_label) # View the structure of the updated dataset print(labeled_dataset) ``` gives error ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[24], line 28 23 return example 25 # Filter the dataset 26 # filtered_dataset = dataset.filter(contains_number) 27 # Add the 'label' field in the dataset ---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label) 29 # View the structure of the updated dataset 30 print(labeled_dataset) File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc) 972 if cache_file_names is None: 973 cache_file_names = {k: None for k in self} 974 return DatasetDict( --> 975 { 976 k: dataset.filter( 977 function=function, 978 with_indices=with_indices, 979 input_columns=input_columns, 980 batched=batched, 981 batch_size=batch_size, 982 keep_in_memory=keep_in_memory, 983 load_from_cache_file=load_from_cache_file, 984 cache_file_name=cache_file_names[k], 985 writer_batch_size=writer_batch_size, 986 fn_kwargs=fn_kwargs, 987 num_proc=num_proc, 988 desc=desc, 989 ) 990 for k, dataset in self.items() 991 } 992 ) File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0) 972 if cache_file_names is None: 973 cache_file_names = {k: None for k in self} 974 return DatasetDict( 975 { --> 976 k: dataset.filter( 977 function=function, 978 with_indices=with_indices, 979 input_columns=input_columns, 980 batched=batched, 981 batch_size=batch_size, 982 keep_in_memory=keep_in_memory, 983 load_from_cache_file=load_from_cache_file, 984 cache_file_name=cache_file_names[k], 985 writer_batch_size=writer_batch_size, 986 fn_kwargs=fn_kwargs, 987 num_proc=num_proc, 988 desc=desc, 989 ) 990 for k, dataset in self.items() 991 } 992 ) File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": 
self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 477 validate_fingerprint(kwargs[fingerprint_name]) 479 # Call actual function --> 481 out = func(dataset, *args, **kwargs) 483 # Update fingerprint of in-place transforms + update in-place history of transforms 485 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3620 if len(self) == 0: 3621 return self -> 3623 indices = self.map( 3624 function=partial( 3625 get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices 3626 ), 3627 with_indices=True, 3628 features=Features({"indices": Value("uint64")}), 3629 batched=True, 3630 batch_size=batch_size, 3631 remove_columns=self.column_names, 3632 keep_in_memory=keep_in_memory, 3633 load_from_cache_file=load_from_cache_file, 3634 cache_file_name=cache_file_name, 3635 writer_batch_size=writer_batch_size, 3636 fn_kwargs=fn_kwargs, 3637 num_proc=num_proc, 3638 suffix_template=suffix_template, 3639 new_fingerprint=new_fingerprint, 3640 input_columns=input_columns, 3641 desc=desc or "Filter", 3642 ) 3643 new_dataset = copy.deepcopy(self) 3644 new_dataset._indices = indices.data File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: 595 # Remove task templates if a column mapping of the template is no longer valid File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3087 if transformed_dataset is None: 3088 with hf_tqdm( 3089 unit=" examples", 3090 total=pbar_total, 3091 desc=desc or "Map", 3092 ) as pbar: -> 3093 for rank, done, content in Dataset._map_single(**dataset_kwargs): 3094 if done: 3095 shards_done += 1 File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in 
Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset) 3466 indices = list( 3467 range(*(slice(i, i + batch_size).indices(shard.num_rows))) 3468 ) # Something simpler? 3469 try: -> 3470 batch = apply_function_on_filtered_inputs( 3471 batch, 3472 indices, 3473 check_same_num_examples=len(shard.list_indexes()) > 0, 3474 offset=offset, 3475 ) 3476 except NumExamplesMismatchError: 3477 raise DatasetTransformationNotAllowedError( 3478 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it." 3479 ) from None File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset) 3347 if with_rank: 3348 additional_args += (rank,) -> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 3350 if isinstance(processed_inputs, LazyDict): 3351 processed_inputs = { 3352 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format 3353 } File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs) 6209 if input_columns is None: 6210 # inputs only contains a batch of examples 6211 batch: dict = inputs[0] -> 6212 num_examples = len(batch[next(iter(batch.keys()))]) 6213 for i in range(num_examples): 6214 example = {key: batch[key][i] for key in batch} File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key) 270 value = self.data[key] 271 if key in self.keys_to_format: --> 272 value = self.format(key) 273 self.data[key] = value 274 self.keys_to_format.remove(key) File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key) 374 def format(self, key): --> 375 return self.formatter.format_column(self.pa_table.select([key])) File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table) 440 def format_column(self, pa_table: pa.Table) -> list: 441 column = self.python_arrow_extractor().extract_column(pa_table) --> 442 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) 443 return column File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name) 217 def decode_column(self, column: list, column_name: str) -> list: --> 218 return self.features.decode_column(column, column_name) if self.features else column File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name) 1938 def decode_column(self, column: list, column_name: str): 1939 """Decode column with custom feature decoding. 1940 1941 Args: (...) 
1948 `list[Any]` 1949 """ 1950 return ( -> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column] 1952 if self._column_requires_decoding[column_name] 1953 else column 1954 ) File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0) 1938 def decode_column(self, column: list, column_name: str): 1939 """Decode column with custom feature decoding. 1940 1941 Args: (...) 1948 `list[Any]` 1949 """ 1950 return ( -> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column] 1952 if self._column_requires_decoding[column_name] 1953 else column 1954 ) File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id) 1336 elif isinstance(schema, (Audio, Image)): 1337 # we pass the token to read and decode files from private repositories in streaming mode 1338 if obj is not None and schema.decode: -> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1340 return obj File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id) 183 else: 184 image = PIL.Image.open(BytesIO(bytes_)) --> 185 image.load() # to avoid "Too many open files" errors 186 return image File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self) 252 break 253 else: --> 254 raise OSError( 255 "image file is truncated " 256 f"({len(b)} bytes not processed)" 257 ) 259 b = b + s 260 n, err_code = decoder.decode(b) OSError: image file is truncated (1 bytes not processed) ``` ### Expected behavior needs to form labels same as : https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook
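A common workaround for the `OSError: image file is truncated` raised above is to let Pillow tolerate truncated files before any decoding happens; this is a hedged sketch (it silently zero-fills the missing bytes, so dropping the corrupt sample in the `filter` step is an equally valid choice):

```python
from PIL import ImageFile

# Opt in before any image.load() call: Pillow will pad truncated files
# instead of raising "image file is truncated (... bytes not processed)".
ImageFile.LOAD_TRUNCATED_IMAGES = True
```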
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28323/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28321/comments
https://api.github.com/repos/huggingface/transformers/issues/28321/events
https://github.com/huggingface/transformers/pull/28321
2,063,466,908
PR_kwDOCUB6oc5jG4tl
28,321
support PeftMixedModel signature inspect
{ "login": "Facico", "id": 56598258, "node_id": "MDQ6VXNlcjU2NTk4MjU4", "avatar_url": "https://avatars.githubusercontent.com/u/56598258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Facico", "html_url": "https://github.com/Facico", "followers_url": "https://api.github.com/users/Facico/followers", "following_url": "https://api.github.com/users/Facico/following{/other_user}", "gists_url": "https://api.github.com/users/Facico/gists{/gist_id}", "starred_url": "https://api.github.com/users/Facico/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Facico/subscriptions", "organizations_url": "https://api.github.com/users/Facico/orgs", "repos_url": "https://api.github.com/users/Facico/repos", "events_url": "https://api.github.com/users/Facico/events{/privacy}", "received_events_url": "https://api.github.com/users/Facico/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "@younesbelkada Thanks for the suggestion, I've pushed the changes!", "@Facico can you confirm the latest commit on this branch fixes your issue?", "Yes, They all work. @younesbelkada ", "@younesbelkada How do I pass the tests_exotic_models?", "Rebasing on main should help! ", "Yes @Facico please merge your branch with upstream main and the test should be fixed. Afrer that we'll be able to merge your PR", "@younesbelkada @ArthurZucker Thanks! That helped.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28321). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,704
1,706
1,706
CONTRIBUTOR
null
Support signature inspection for PeftMixedModel: use model.base_model.model to get the base model (PeftMixedModel does not have a "get_base_model" attribute).
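A sketch of the fallback this PR describes; `peft_mixed_model` is a placeholder for any PeftMixedModel instance:

```python
import inspect

def unwrap_base_model(model):
    # PeftModel exposes get_base_model(); PeftMixedModel does not,
    # but both keep the wrapped transformers model at .base_model.model.
    if hasattr(model, "get_base_model"):
        return model.get_base_model()
    return model.base_model.model

forward_params = inspect.signature(unwrap_base_model(peft_mixed_model).forward).parameters
```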
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28321/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28321", "html_url": "https://github.com/huggingface/transformers/pull/28321", "diff_url": "https://github.com/huggingface/transformers/pull/28321.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28321.patch", "merged_at": 1706267101000 }
https://api.github.com/repos/huggingface/transformers/issues/28320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28320/comments
https://api.github.com/repos/huggingface/transformers/issues/28320/events
https://github.com/huggingface/transformers/issues/28320
2,063,466,171
I_kwDOCUB6oc56_f67
28,320
Unable to Resume Training from LoRA Checkpoints When Using FSDP
{ "login": "fabianlim", "id": 8325951, "node_id": "MDQ6VXNlcjgzMjU5NTE=", "avatar_url": "https://avatars.githubusercontent.com/u/8325951?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabianlim", "html_url": "https://github.com/fabianlim", "followers_url": "https://api.github.com/users/fabianlim/followers", "following_url": "https://api.github.com/users/fabianlim/following{/other_user}", "gists_url": "https://api.github.com/users/fabianlim/gists{/gist_id}", "starred_url": "https://api.github.com/users/fabianlim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabianlim/subscriptions", "organizations_url": "https://api.github.com/users/fabianlim/orgs", "repos_url": "https://api.github.com/users/fabianlim/repos", "events_url": "https://api.github.com/users/fabianlim/events{/privacy}", "received_events_url": "https://api.github.com/users/fabianlim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello @fabianlim, I think the PR https://github.com/huggingface/transformers/pull/28297 should resolve this. ", "@pacman100 yes I think so too, closing this issue. " ]
1,704
1,707
1,707
NONE
null
### System Info transformers==4.35.2 accelerate==0.23.0 peft==0.5.0 `accelerate.yaml` ```yaml compute_environment: LOCAL_MACHINE distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: true fsdp_offload_params: false fsdp_sharding_strategy: 1 fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: "BertLayer" machine_rank: 0 main_training_function: main mixed_precision: 'no' num_machines: 1 num_processes: 2 rdzv_backend: static same_network: true tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` ### Who can help? @pacman100 Following the recommendation https://huggingface.co./docs/trl/v0.7.4/en/sft_trainer#training-adapters to install a `PeftSavingCallback` to ensure that `adapter.bin` is saved. This will be the case when using `FSDP` since it is not a `PretrainedModel`, in which only the `state_dict` will be saved. The recommendation above works great for saving the checkpoint, but does not work when resuming the checkpoint. This is because `model_wrapped` is neither a `PretrainedModel` nora `PEFTModel`, and the `if-else` conditions in `Trainer._load_from_checkpoint` will go all the way to `load_sharded_checkpoint`. This results in the following error: ```shell Traceback (most recent call last): File "/dccstor/flim-ai4it/AI4IT/tofafm/src/scripts/lora_fsdp_bug_demo.py", line 183, in <module> main() File "/dccstor/flim-ai4it/AI4IT/tofafm/src/scripts/lora_fsdp_bug_demo.py", line 143, in main trainer.train(resume_from_checkpoint=True) File "/u/flim/miniconda3/envs/tofafm-rewrite2/lib/python3.10/site-packages/transformers/trainer.py", line 1555, in train return inner_training_loop( File "/u/flim/miniconda3/envs/tofafm-rewrite2/lib/python3.10/site-packages/transformers/trainer.py", line 1712, in _inner_training_loop self._load_from_checkpoint(resume_from_checkpoint, self.model_wrapped) File "/u/flim/miniconda3/envs/tofafm-rewrite2/lib/python3.10/site-packages/transformers/trainer.py", line 2132, in _load_from_checkpoint load_result = load_sharded_checkpoint( File "/u/flim/miniconda3/envs/tofafm-rewrite2/lib/python3.10/site-packages/transformers/modeling_utils.py", line 411, in load_sharded_checkpoint raise ValueError(f"Can't find a checkpoint index ({' or '.join(filenames)}) in {folder}.") ``` The second issue with the recommendation, is that the FSDP optimizer sates are not saved in the `PeftSavingCallback`, so it will not be a clean fix. I was wondering if you may have any thoughts on this. A possible hacky solution will be to override `Trainer._load_from_checkpoint` and use `FSDP.summon_full_params` to unshard the LoRA weights, and then call `load_adapter`, but it doesnt sound very clean given that it will not resume the FSDP optimizer. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Run the below script with the above `accelerate.yaml` configurations. After at least 100 steps, when an `adapter.bin` checkpoint has been populated, stop. 2. Rerun the script but now enabling `trainer.train(resume_from_checkpoint=True) `. 
```python from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForSequenceClassification, DataCollatorWithPadding from transformers import TrainingArguments, Trainer, TrainerCallback import torch, os def main( model_name: str = 'textattack/bert-base-uncased-SST-2', ): # we set the max sequence length here tokenizer = AutoTokenizer.from_pretrained( model_name, model_max_length=512, ) # load and tokenize a verys mall dataset raw_datasets = load_dataset('glue','sst2') # tokenization function def _tokenize_function(example, tokenizer): return tokenizer( example['sentence'], truncation = True, ) tokenized_datasets = raw_datasets.map( _tokenize_function, fn_kwargs = {'tokenizer': tokenizer}, batched=True ) data_collator = DataCollatorWithPadding( tokenizer=tokenizer, return_tensors='pt' ) model = AutoModelForSequenceClassification.from_pretrained(model_name) from peft import LoraConfig, get_peft_model model = get_peft_model( model, LoraConfig( r=8, lora_alpha=16, target_modules=['query', 'key', 'value'], task_type='SEQ_CLS' ) ) training_args = TrainingArguments( num_train_epochs = 1, output_dir = './results', per_device_train_batch_size = 8, per_device_eval_batch_size = 8, learning_rate = 2e-4, logging_steps = 50, save_strategy = 'steps', save_steps = 100, evaluation_strategy = 'steps', eval_steps = 100, save_total_limit = 2, metric_for_best_model = 'loss', greater_is_better = False, max_steps = 1000, # just make the demo quit after 1000 steps save_safetensors=False, ) class PeftSavingCallback(TrainerCallback): def on_save(self, args, state, control, **kwargs): checkpoint_path = os.path.join(args.output_dir, f"checkpoint-{state.global_step}") kwargs["model"].save_pretrained(checkpoint_path) if "pytorch_model.bin" in os.listdir(checkpoint_path): os.remove(os.path.join(checkpoint_path, "pytorch_model.bin")) trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["validation"], data_collator=data_collator, callbacks=[PeftSavingCallback()], ) import functools from accelerate import DistributedType if trainer.accelerator.distributed_type == DistributedType.FSDP: from torch.distributed.fsdp.wrap import lambda_auto_wrap_policy, _or_policy # inspired by # https://github.com/facebookresearch/llama-recipes/blob/main/src/llama_recipes/utils/fsdp_utils.py def lambda_policy_fn(module): if ( len(list(module.named_children())) == 0 and getattr(module, "weight", None) is not None and module.weight.requires_grad ): return True return False trainer.accelerator.state.fsdp_plugin.set_auto_wrap_policy(model) trainer.accelerator.state.fsdp_plugin.auto_wrap_policy = functools.partial( _or_policy, policies=[ functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_policy_fn), trainer.accelerator.state.fsdp_plugin.auto_wrap_policy ]) # checkpoints will be saved every 100 steps as `pytorch.bin` trainer.train() # trainer.train(resume_from_checkpoint=True) # activating this will throw the error ``` ### Expected behavior 1. `resume_from_checkpoint=True` will resume the PEFT checkpoint recorded by `PeftSavingCallback`. 3. [bonus]: the FSDP optimizer states can be resumed also.
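A hedged sketch of the "hacky" workaround described in the issue text; it restores only the adapter weights (not the FSDP optimizer state), and whether `summon_full_params` plus `load_adapter` behaves correctly for a given FSDP wrapping policy is an assumption, not something the issue confirms:

```python
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def resume_lora_weights_only(fsdp_model, peft_model, checkpoint_path):
    # fsdp_model: the FSDP-wrapped module (trainer.model_wrapped)
    # peft_model: the underlying PeftModel (trainer.model)
    # Unshard the flattened parameters (writeback is enabled by default) and
    # reload the adapter the way Trainer does for a plain PeftModel checkpoint.
    with FSDP.summon_full_params(fsdp_model):
        peft_model.load_adapter(checkpoint_path, peft_model.active_adapter, is_trainable=True)
```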
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28320/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28322/comments
https://api.github.com/repos/huggingface/transformers/issues/28322/events
https://github.com/huggingface/transformers/issues/28322
2,063,497,821
I_kwDOCUB6oc56_npd
28,322
Unclear Tokenizer Algorithm Documentation
{ "login": "kitkhai", "id": 71968397, "node_id": "MDQ6VXNlcjcxOTY4Mzk3", "avatar_url": "https://avatars.githubusercontent.com/u/71968397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kitkhai", "html_url": "https://github.com/kitkhai", "followers_url": "https://api.github.com/users/kitkhai/followers", "following_url": "https://api.github.com/users/kitkhai/following{/other_user}", "gists_url": "https://api.github.com/users/kitkhai/gists{/gist_id}", "starred_url": "https://api.github.com/users/kitkhai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kitkhai/subscriptions", "organizations_url": "https://api.github.com/users/kitkhai/orgs", "repos_url": "https://api.github.com/users/kitkhai/repos", "events_url": "https://api.github.com/users/kitkhai/events{/privacy}", "received_events_url": "https://api.github.com/users/kitkhai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for raising this issue. Do you want to open a PR for a fix? 🤗 ", "Hey! Thanks for the swift response.\r\n\r\nI'm still rather new and am actually confused about what the \"slow\" & \"fast\" Tokenizers are based on... (I don't really understand why one is based on \"sentencepiece\" and the other is based on \"BPE\"? Aren't they supposed to be trained the same way just implemented in different programming languages?)\r\n\r\nHence, I don't think I am in a position to open a PR for a fix? Sorry about that 😬", "Your intuition is correct, the slow version uses the `sentencepiece` backend, so the BPE implementation of `sentencepiece` library. While the fast uses the `tokenizers` backend, with the `BPE` implementation of `tokenizers` that is based on `sentencepiece`! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
**Bug description.** In the docs, for example for [NLLB](https://huggingface.co./docs/transformers/model_doc/nllb), the "slow" & "fast" tokenizers are documented to be based on SentencePiece & BPE respectively. I think that is a little confusing because: - Saying that the "slow" tokenizer is based on SentencePiece, it is unclear whether it implements the BPE or the unigram model. - Saying that the "fast" tokenizer is based on BPE, it is unclear whether this refers to SentencePiece's implementation of BPE or to the standalone BPE algorithm. **Describe the expected behaviour** The docs could be more explicit about what each tokenizer is based on, such as: - Unigram (spm) - BPE (spm) - Unigram - BPE
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28322/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28319/comments
https://api.github.com/repos/huggingface/transformers/issues/28319/events
https://github.com/huggingface/transformers/issues/28319
2,063,383,696
I_kwDOCUB6oc56_LyQ
28,319
Allow gradient for generate()
{ "login": "whitejeep600", "id": 73194181, "node_id": "MDQ6VXNlcjczMTk0MTgx", "avatar_url": "https://avatars.githubusercontent.com/u/73194181?v=4", "gravatar_id": "", "url": "https://api.github.com/users/whitejeep600", "html_url": "https://github.com/whitejeep600", "followers_url": "https://api.github.com/users/whitejeep600/followers", "following_url": "https://api.github.com/users/whitejeep600/following{/other_user}", "gists_url": "https://api.github.com/users/whitejeep600/gists{/gist_id}", "starred_url": "https://api.github.com/users/whitejeep600/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whitejeep600/subscriptions", "organizations_url": "https://api.github.com/users/whitejeep600/orgs", "repos_url": "https://api.github.com/users/whitejeep600/repos", "events_url": "https://api.github.com/users/whitejeep600/events{/privacy}", "received_events_url": "https://api.github.com/users/whitejeep600/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "The reason is that generate is for inference only, training requires a custom sampling logic ", "> The reason is that generate is for inference only, training requires a custom sampling logic\r\n\r\nThank you for the quick reply.\r\n\r\nHowever, what if I would just like to use the existing sampling algorithms from the generate function? It's a valid training strategy. It would be convenient to have access to the ready-made code for this purpose." ]
1,704
1,704
null
NONE
null
### Feature request The generate function is decorated with @torch.no_grad() and thus can't be used for model training. It would be better to make calculating gradients optional, rather than impossible, so that the function can be used for tuning. The simplest solution is to remove the decorator altogether, as users can set no_grad themselves before calling if they need to. Are there reasons to disable such usage? ### Motivation Allow using generate for tuning ### Your contribution Removing the decorator is a very simple change. I can submit a PR
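A minimal REINFORCE-style sketch of the "custom sampling logic" mentioned in the comments, i.e. what sampling with gradients typically looks like without `generate()`; the gpt2 checkpoint and the constant reward are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Hello", return_tensors="pt").input_ids
log_probs = []
for _ in range(8):
    logits = model(input_ids).logits[:, -1, :]      # gradients flow through the forward pass
    dist = torch.distributions.Categorical(logits=logits)
    next_token = dist.sample()                      # the discrete sample itself is not differentiable
    log_probs.append(dist.log_prob(next_token))     # but its log-probability keeps the graph
    input_ids = torch.cat([input_ids, next_token[:, None]], dim=-1)

reward = torch.ones(input_ids.shape[0])             # placeholder reward signal
loss = -(torch.stack(log_probs).sum(dim=0) * reward).mean()
loss.backward()
```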
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28319/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28318/comments
https://api.github.com/repos/huggingface/transformers/issues/28318/events
https://github.com/huggingface/transformers/pull/28318
2,063,361,670
PR_kwDOCUB6oc5jGhBh
28,318
Port MPT to Flax
{ "login": "shivance", "id": 51750587, "node_id": "MDQ6VXNlcjUxNzUwNTg3", "avatar_url": "https://avatars.githubusercontent.com/u/51750587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shivance", "html_url": "https://github.com/shivance", "followers_url": "https://api.github.com/users/shivance/followers", "following_url": "https://api.github.com/users/shivance/following{/other_user}", "gists_url": "https://api.github.com/users/shivance/gists{/gist_id}", "starred_url": "https://api.github.com/users/shivance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shivance/subscriptions", "organizations_url": "https://api.github.com/users/shivance/orgs", "repos_url": "https://api.github.com/users/shivance/repos", "events_url": "https://api.github.com/users/shivance/events{/privacy}", "received_events_url": "https://api.github.com/users/shivance/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,706
null
NONE
null
# What does this PR do? This PR adds flax implementation of MPT to transformers ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sanchit-gandhi @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28318/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28318", "html_url": "https://github.com/huggingface/transformers/pull/28318", "diff_url": "https://github.com/huggingface/transformers/pull/28318.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28318.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28317/comments
https://api.github.com/repos/huggingface/transformers/issues/28317/events
https://github.com/huggingface/transformers/issues/28317
2,063,291,731
I_kwDOCUB6oc56-1VT
28,317
Simple Bug in modeling_attn_mask_utils.py
{ "login": "Adam1679", "id": 32404962, "node_id": "MDQ6VXNlcjMyNDA0OTYy", "avatar_url": "https://avatars.githubusercontent.com/u/32404962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Adam1679", "html_url": "https://github.com/Adam1679", "followers_url": "https://api.github.com/users/Adam1679/followers", "following_url": "https://api.github.com/users/Adam1679/following{/other_user}", "gists_url": "https://api.github.com/users/Adam1679/gists{/gist_id}", "starred_url": "https://api.github.com/users/Adam1679/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Adam1679/subscriptions", "organizations_url": "https://api.github.com/users/Adam1679/orgs", "repos_url": "https://api.github.com/users/Adam1679/repos", "events_url": "https://api.github.com/users/Adam1679/events{/privacy}", "received_events_url": "https://api.github.com/users/Adam1679/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Makes sense, would you like to open a PR for the fix? 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info (torch) (base) anxiang.zhang@n214-176-142:~/DeepSeek-Coder$ transformers-cli env WARNING:tensorflow:From /data02/home/anxiang.zhang/miniconda3/envs/torch/lib/python3.10/site-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to cpu. Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.36.0 - Platform: Linux-5.4.56.bsk.9-amd64-x86_64-with-glibc2.28 - Python version: 3.10.13 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (True) - Tensorflow version (GPU?): 2.9.3 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu) - Jax version: 0.4.18 - JaxLib version: 0.4.18 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction At transformers.modeling_attn_mask_utils.py:238. The code is ``` tmp = torch.arange(attention_mask.shape[1], 0, -1) indices = torch.argmax(attention_mask.cpu() * tmp, 1, keepdim=True) ``` The attention_mask.cpu() is clearly an error when the global default tensor type is not a CPU dtype. For example, if you set torch.set_default_tensor_type(torch.cuda.HalfTensor), then torch.arange(attention_mask.shape[1], 0, -1) would return a tensor on CUDA instead of CPU, which leads to an error when multiplying a CPU tensor with a CUDA tensor. A simple fix would be to replace attention_mask.cpu() with attention_mask.to(tmp.device) ### Expected behavior A simple fix would be to replace attention_mask.cpu() with attention_mask.to(tmp.device)
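The suggested fix as a runnable fragment (a sketch of the proposal, not the merged patch); the placeholder mask only makes it self-contained:

```python
import torch

attention_mask = torch.ones(2, 5, dtype=torch.long)  # placeholder (batch, seq_len) mask

# Build the helper tensor on the mask's device instead of hard-coding .cpu()
# or relying on the global default tensor type.
tmp = torch.arange(attention_mask.shape[1], 0, -1, device=attention_mask.device)
indices = torch.argmax(attention_mask * tmp, 1, keepdim=True)
```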
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28317/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28316/comments
https://api.github.com/repos/huggingface/transformers/issues/28316/events
https://github.com/huggingface/transformers/issues/28316
2,063,015,679
I_kwDOCUB6oc569x7_
28,316
Pythia regression in transformers==4.36.2 vs transformers==4.30.1
{ "login": "vwxyzjn", "id": 5555347, "node_id": "MDQ6VXNlcjU1NTUzNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5555347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vwxyzjn", "html_url": "https://github.com/vwxyzjn", "followers_url": "https://api.github.com/users/vwxyzjn/followers", "following_url": "https://api.github.com/users/vwxyzjn/following{/other_user}", "gists_url": "https://api.github.com/users/vwxyzjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/vwxyzjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vwxyzjn/subscriptions", "organizations_url": "https://api.github.com/users/vwxyzjn/orgs", "repos_url": "https://api.github.com/users/vwxyzjn/repos", "events_url": "https://api.github.com/users/vwxyzjn/events{/privacy}", "received_events_url": "https://api.github.com/users/vwxyzjn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry but the source of the regression might be pretty much anything. If the model supports SDPA, it can come from SDPA, if the tokenizer had a bug before, it might be the tokenizer etc etc\r\nI can't debug this as is, would you mind comparing with closer transformers releases? This might help isolating this but otherwise the scope is just way too broad. The modeling code has / might have changed, the caching mechanism has changed, torch operators might have been fix etc etc ", "Thanks for the reply! I compared the following transformers releases and noticed that since 4.36.0, the losses become different. I also validated end-to-end that 4.33.2 is fine. \r\n\r\n<img width=\"1048\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/5555347/1bf51581-3f9d-4730-a9a4-3e58a283c9c4\">\r\n\r\n\r\n\r\n```\r\npython repro.py # 4.30.1\r\nepoch: 0\r\nupdate: 9, loss: 0.6904680132865906\r\nupdate: 17, loss: 0.6958459615707397\r\nupdate: 25, loss: 0.6878675818443298\r\nupdate: 33, loss: 0.6945885419845581\r\nupdate: 41, loss: 0.6920362710952759\r\nupdate: 49, loss: 0.6866860389709473\r\nupdate: 57, loss: 0.685932457447052\r\nupdate: 65, loss: 0.6930047273635864\r\nupdate: 73, loss: 0.6854068636894226\r\nupdate: 81, loss: 0.6739884614944458\r\nupdate: 89, loss: 0.6913299560546875\r\nupdate: 97, loss: 0.7025052309036255\r\n\r\n\r\n#4.33.2\r\nepoch: 0\r\nupdate: 9, loss: 0.6904680132865906\r\nupdate: 17, loss: 0.6958459615707397\r\nupdate: 25, loss: 0.6878675818443298\r\nupdate: 33, loss: 0.6945885419845581\r\nupdate: 41, loss: 0.6920362710952759\r\nupdate: 49, loss: 0.6866860389709473\r\nupdate: 57, loss: 0.685932457447052\r\nupdate: 65, loss: 0.6930047273635864\r\nupdate: 73, loss: 0.6854068636894226\r\nupdate: 81, loss: 0.6739884614944458\r\n\r\n# 4.35.1 \r\n===training model===\r\nepoch: 0\r\nupdate: 9, loss: 0.6904680132865906\r\nupdate: 17, loss: 0.6958459615707397\r\nupdate: 25, loss: 0.6878675818443298\r\nupdate: 33, loss: 0.6945885419845581\r\nupdate: 41, loss: 0.6920362710952759\r\nupdate: 49, loss: 0.6866860389709473\r\nupdate: 57, loss: 0.685932457447052\r\nupdate: 65, loss: 0.6930047273635864\r\nupdate: 73, loss: 0.6854068636894226\r\nupdate: 81, loss: 0.6739884614944458\r\nupdate: 89, loss: 0.6913299560546875\r\n\r\n\r\n# 4.36.0\r\n===training model===\r\nepoch: 0\r\nupdate: 9, loss: 0.6855486035346985\r\nupdate: 17, loss: 0.6901922225952148\r\nupdate: 25, loss: 0.6883461475372314\r\nupdate: 33, loss: 0.6975809931755066\r\nupdate: 41, loss: 0.6995139122009277\r\nupdate: 49, loss: 0.6912401914596558\r\nupdate: 57, loss: 0.698995053768158\r\nupdate: 65, loss: 0.7005056142807007\r\nupdate: 73, loss: 0.7048475742340088\r\nupdate: 81, loss: 0.6950501203536987\r\nupdate: 89, loss: 0.7148610949516296\r\n\r\n# 4.36.1\r\nepoch: 0\r\nupdate: 9, loss: 0.6855486035346985\r\nupdate: 17, loss: 0.6901922225952148\r\nupdate: 25, loss: 0.6883461475372314\r\nupdate: 33, loss: 0.6975809931755066\r\nupdate: 41, loss: 0.6995139122009277\r\nupdate: 49, loss: 0.6912401914596558\r\nupdate: 57, loss: 0.698995053768158\r\nupdate: 65, loss: 0.7005056142807007\r\nupdate: 73, loss: 0.7048475742340088\r\nupdate: 81, loss: 0.6950501203536987\r\nupdate: 89, loss: 0.7148610949516296\r\n```", "Could you try using `attn_implementation = \"eager\"` instead of `sdpa` wherever you instantiate a model? One of the biggest changes from 4.36 is this! See [here](https://github.com/huggingface/transformers/issues/28005) ", "Also the number you have don't really seem alarming no? 
", "I ran it with \r\n\r\n```\r\n self.lm_backbone = AutoModel.from_pretrained(\r\n config.base_model,\r\n config=self.config.base_config,\r\n trust_remote_code=True,\r\n attn_implementation=\"eager\",\r\n )\r\n```\r\n\r\nand it did not seem to make a difference. \r\n\r\n> Also the number you have don't really seem alarming no?\r\n\r\nYeah, but I guess this is why it's tricky — the numbers do not look that different but it causes a significant regression for reward model training. Maybe the hidden states index are being messed up somehow? It's using `self.scalar_head(output.hidden_states[-1])`.\r\n\r\n", "Oh sorry if using output_hidden_states, `eager` will by default be used. \r\nI have no idea, pinging @pacman100 our training expert for idea and @younesbelkada for SFT training which should be more relevant expertise than me!", "Hi @vwxyzjn !\r\nHappy new year! \r\nHmm this is interesting, I don't have a clear idea either on what could be causing this, but looking at the commit history of GPTNeoX modeling code it could be :\r\n1- Attention dropout support: https://github.com/huggingface/transformers/commit/392740452e86ee6ca523b92bc4ef8527ed4e7a16\r\n2- RoPE scaling: https://github.com/huggingface/transformers/pull/24653\r\n3- Potentially the Gradient checkpointing refactor as well https://github.com/huggingface/transformers/pull/27020\r\nIf the experiments are not too long to be ran, can you try to checkout on each of these commits and see which one might be responsible of the regression?", "I think it's https://github.com/huggingface/transformers/commit/253f9a3f9716d08a81fb305fe71f983122eb608b i'll fix the nans!" ]
1,704
1,705
1,705
CONTRIBUTOR
null
### System Info Happy New Year all! - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-1049-aws-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.3.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (cpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes, via `accelerate` ### Who can help? Maybe @younesbelkada @ArthurZucker? ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is a minimal reproduction https://gist.github.com/vwxyzjn/e67e0bb28363e6fbb309bd0b78922a93. I ran the same `repro.py` with `transformers==4.36.2` and `transformers==4.30.1`, resulting in slightly different losses. Given the data is and other dependencies are precisely the same. ``` python repro.py # 4.36.2 epoch: 0 update: 9, loss: 0.6855486035346985 update: 17, loss: 0.6901922225952148 update: 25, loss: 0.6883461475372314 update: 33, loss: 0.6975809931755066 update: 41, loss: 0.6995139122009277 update: 49, loss: 0.6912401914596558 update: 57, loss: 0.698995053768158 update: 65, loss: 0.7005056142807007 update: 73, loss: 0.7048475742340088 update: 81, loss: 0.6950501203536987 update: 89, loss: 0.7148610949516296 update: 97, loss: 0.694938063621521 update: 105, loss: 0.6957464814186096 update: 113, loss: 0.6873601675033569 python repro.py # 4.30.1 epoch: 0 update: 9, loss: 0.6904680132865906 update: 17, loss: 0.6958459615707397 update: 25, loss: 0.6878675818443298 update: 33, loss: 0.6945885419845581 update: 41, loss: 0.6920362710952759 update: 49, loss: 0.6866860389709473 update: 57, loss: 0.685932457447052 update: 65, loss: 0.6930047273635864 update: 73, loss: 0.6854068636894226 update: 81, loss: 0.6739884614944458 update: 89, loss: 0.6913299560546875 update: 97, loss: 0.7025052309036255 ``` # Regression in end-to-end reward model training performance This difference causes a regression in training reward models. When setting the code, data to be **exactly** the same, the average reward model accuracy across four random seeds is as follows: * transformers==4.36.2, accelerate==0.25.0, deepspeed==0.12.6 * EleutherAI/pythia-1b-deduped: 0.6276 * EleutherAI/pythia-2.8b-deduped: 0.6438 * EleutherAI/pythia-6.9b-deduped: 0.65 * transformers==4.30.1, accelerate==0.25.0, deepspeed==0.12.6 * EleutherAI/pythia-1b-deduped: 0.6327 * EleutherAI/pythia-2.8b-deduped: 0.6713 * EleutherAI/pythia-6.9b-deduped: 0.6923 The SFT losses are relatively similar (maybe except for 6.9B, there was a minor loss explosion with `transformers==4.36.2`) Here is the report. 
https://wandb.ai/costa-huang/tldr_summarize/reports/pythia-transformers-regression--Vmlldzo2Mzk3OTQ1 <img width="1071" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/4c8834a2-a956-45b4-bfb5-3ae9bc0cc522"> <img width="1052" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/bacfa6a6-81bf-426d-b6f9-5107143e957c"> <img width="1020" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/e5abee9e-9609-448d-b701-63b4c532b204"> Here is the code comparison: identical code and only the dependencies are different <img width="879" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/786ccdd3-242e-4c93-a857-1db122709e95"> <img width="982" alt="image" src="https://github.com/huggingface/transformers/assets/5555347/70366edd-8c18-4ad0-850c-fcb7576dd97a"> ### Expected behavior There shouldn't be a regression in the performance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28316/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28315/comments
https://api.github.com/repos/huggingface/transformers/issues/28315/events
https://github.com/huggingface/transformers/pull/28315
2,062,918,444
PR_kwDOCUB6oc5jFCq6
28,315
Accelerate support added to Object Detection & Segmentation Models
{ "login": "sam99dave", "id": 37779169, "node_id": "MDQ6VXNlcjM3Nzc5MTY5", "avatar_url": "https://avatars.githubusercontent.com/u/37779169?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam99dave", "html_url": "https://github.com/sam99dave", "followers_url": "https://api.github.com/users/sam99dave/followers", "following_url": "https://api.github.com/users/sam99dave/following{/other_user}", "gists_url": "https://api.github.com/users/sam99dave/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam99dave/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam99dave/subscriptions", "organizations_url": "https://api.github.com/users/sam99dave/orgs", "repos_url": "https://api.github.com/users/sam99dave/repos", "events_url": "https://api.github.com/users/sam99dave/events{/privacy}", "received_events_url": "https://api.github.com/users/sam99dave/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey @sam99dave were you waiting for a review? ", "Yeah, I was waiting for one. Not sure if these required changes has already been merged by another PR. There was another PR for this which was being reviewed I think.\n\nPls, let me know on this.", "Seems like #28312 was already approved, but is getting stale. I'll post a message there" ]
1,704
1,707
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28309 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28315", "html_url": "https://github.com/huggingface/transformers/pull/28315", "diff_url": "https://github.com/huggingface/transformers/pull/28315.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28315.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28314/comments
https://api.github.com/repos/huggingface/transformers/issues/28314/events
https://github.com/huggingface/transformers/issues/28314
2,062,892,647
I_kwDOCUB6oc569T5n
28,314
Whisper OpenBLAS Warnings when running Whisper Inference on aarch64 cpu
{ "login": "DrChrisLevy", "id": 16509365, "node_id": "MDQ6VXNlcjE2NTA5MzY1", "avatar_url": "https://avatars.githubusercontent.com/u/16509365?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrChrisLevy", "html_url": "https://github.com/DrChrisLevy", "followers_url": "https://api.github.com/users/DrChrisLevy/followers", "following_url": "https://api.github.com/users/DrChrisLevy/following{/other_user}", "gists_url": "https://api.github.com/users/DrChrisLevy/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrChrisLevy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrChrisLevy/subscriptions", "organizations_url": "https://api.github.com/users/DrChrisLevy/orgs", "repos_url": "https://api.github.com/users/DrChrisLevy/repos", "events_url": "https://api.github.com/users/DrChrisLevy/events{/privacy}", "received_events_url": "https://api.github.com/users/DrChrisLevy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems like OpenBlas is at fault here no? I ran your code and had no warning whatsoever (on a M3). https://github.com/OpenMathLib/OpenBLAS might need to be built with USE_OPENMP=1.\r\n", "Okay let me run it in a python virtual env (not docker) on the same machine and see if the issue goes away. You ran it in the above docker container ? Or in a slightly different env? Thanks for the quick reply and checking. I just thought it was something to do with transformers since I only saw that warning when upgrading recently. I'll post back later today. \r\n\r\n**update** (I also updated some of the findings in the description at the top).\r\n- The issue **is** present in the sample docker container on macm1 with transformers versions higher than 4.35.2. It does not happen in 4.35.2 though in the same Docker container.\r\n- The issue is **not** present when running within python venv (not Docker) on mac m1, even for transformers versions higher than 4.35.2.\r\n\r\nIm not sure if you have any other thoughts on that @ArthurZucker . I would be interested in knowing if you ran my reproducible example in the same docker container environment. \r\n\r\nIf you think this is just some environment thing then feel free to close the issue.\r\n", "Sorry for the confusion I simply ran it on pyenv locally as well 😉 \r\nGlad that the findings show this might be more a docker issue than a transformers issue! ", "It should be possible to disable multithreading in openblas without recompiling it, by `export OPENBLAS_NUM_THREADS=1` before running the python program in the terminal https://pythonspeed.com/articles/concurrency-control/", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.10.76-linuxkit-aarch64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction The issue I am seeing happens when I run the whisper model inference below on Apple M1 Pro within certain environments and versions of transformers, but not all the time. - The issue **is** present in docker container below on macm1 with transformers versions higher than 4.35.2. It does not happen in 4.35.2. - The issue is **not** present when running within python venv (not Docker) on mac m1, even for transformers versions higher than 4.35.2. - The issue is **not** present when running on on linux machine `x86_64` , even for transformers versions higher than 4.35.2. To reproduce the issue on Apple M1 Pro: Create this Dockerfile ``` FROM public.ecr.aws/docker/library/python:3.9-slim-bullseye WORKDIR /app RUN apt-get update && apt-get install -y \ vim RUN pip install transformers==4.36.2 RUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu RUN pip install ipython COPY . /app ``` Build the docker image and then get into the docker container shell ``` docker build . -t asr docker run -it asr /bin/bash ``` Now in the container, open `ipython` shell ``` ipython ``` Then copy/paste this code which is almost identical to the example code [here](https://huggingface.co./openai/whisper-large-v3#usage). 
```python import torch import numpy as np from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline model_id = "openai/whisper-tiny" device = "cpu" model = AutoModelForSpeechSeq2Seq.from_pretrained( model_id, torch_dtype=torch.float32, use_safetensors=True ) model.to(device) processor = AutoProcessor.from_pretrained(model_id) pipe = pipeline( "automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor, max_new_tokens=128, chunk_length_s=30, batch_size=16, return_timestamps=True, torch_dtype=torch.float32, device=device, ) sample_audio = np.array([-3.7841797e-03, -2.0751953e-03, -3.9978027e-03, -3.5095215e-03, -2.2888184e-03, -1.8920898e-03, -2.1057129e-03, 2.7465820e-04, -1.0986328e-03, -3.0517578e-03, -3.1433105e-03, -2.6245117e-03, -2.0446777e-03, -4.5776367e-04, -4.3334961e-03, -2.5024414e-03, -2.5634766e-03, -2.1667480e-03, -3.0517578e-05, 1.1291504e-03, 2.2888184e-03, 1.1901855e-03, -1.0681152e-03, -1.0986328e-03, -1.2207031e-04, -2.2888184e-03, -9.1552734e-05, -1.1596680e-03, -9.4604492e-04, 1.5258789e-04, 3.0517578e-05, -6.4086914e-04, 1.8310547e-04, -3.0517578e-05, 6.7138672e-04, -1.1291504e-03, -2.3193359e-03, -2.6855469e-03, -8.2397461e-04, -2.5634766e-03, -2.4414062e-03, -3.5095215e-03, -1.8310547e-04, -3.1738281e-03, -1.3122559e-03, -4.4250488e-03, -1.3732910e-03, -3.2348633e-03, -1.8005371e-03, -1.2207031e-03, -4.8828125e-04, -2.6245117e-03, -2.3193359e-03, -1.8920898e-03, -1.4953613e-03, 4.2724609e-04, -1.2207031e-03, -9.4604492e-04, -1.6479492e-03, -2.4719238e-03, -1.0986328e-03, -1.5258789e-03, -4.8828125e-04, -2.4719238e-03, -1.4648438e-03, -7.6293945e-04, 2.1057129e-03, 3.6621094e-04, 1.5258789e-04, 3.9672852e-04, 1.3122559e-03, 3.7231445e-03, 2.9907227e-03, 4.0893555e-03, 2.1362305e-03, 3.1127930e-03, 3.4484863e-03, 5.8898926e-03, 5.7678223e-03, 5.3405762e-03, 5.7678223e-03, 3.6621094e-03, 4.5166016e-03, 2.3498535e-03, 4.7912598e-03, 4.8217773e-03, 7.1105957e-03, 5.7678223e-03, 5.7678223e-03, 4.9438477e-03, 6.5612793e-03, 7.3547363e-03, 7.4462891e-03, 7.2631836e-03, 6.6833496e-03, 4.4860840e-03, 5.0964355e-03, 5.5847168e-03, 5.9204102e-03, 5.0659180e-03], dtype=np.float32) result = pipe(sample_audio) print(result["text"]) ``` When running on aarch64 you will see the warning message printed to the screen hundreds of times ``` OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option. OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option. OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option. OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option. OpenBLAS Warning : Detect OpenMP Loop and this application may hang. Please rebuild the library with USE_OPENMP=1 option. ...... ``` ### Expected behavior The output result which is the string `' you'` with **no** OpenBLAS Warnings.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28314/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28313/comments
https://api.github.com/repos/huggingface/transformers/issues/28313/events
https://github.com/huggingface/transformers/pull/28313
2,062,888,887
PR_kwDOCUB6oc5jE8bf
28,313
README: install transformers from conda-forge channel
{ "login": "kevherro", "id": 10460086, "node_id": "MDQ6VXNlcjEwNDYwMDg2", "avatar_url": "https://avatars.githubusercontent.com/u/10460086?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kevherro", "html_url": "https://github.com/kevherro", "followers_url": "https://api.github.com/users/kevherro/followers", "following_url": "https://api.github.com/users/kevherro/following{/other_user}", "gists_url": "https://api.github.com/users/kevherro/gists{/gist_id}", "starred_url": "https://api.github.com/users/kevherro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kevherro/subscriptions", "organizations_url": "https://api.github.com/users/kevherro/orgs", "repos_url": "https://api.github.com/users/kevherro/repos", "events_url": "https://api.github.com/users/kevherro/events{/privacy}", "received_events_url": "https://api.github.com/users/kevherro/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ArthurZucker I appreciate your review!\r\n\r\n> we should probably link @LysandreJik comment from the issue\r\n\r\n\"In the future we'll likely remove the huggingface channel support for conda unless we see explicit demand here; if that's the case, please put a thumbs up here (we'll monitor), otherwise, the conda-forge channel is usually very up to date.\" ([here](https://github.com/huggingface/transformers/issues/28248#issuecomment-1869765891))\r\n\r\n> mention that the huggingface channel is deprecated for transformers\r\n\r\nGood callout. Should this go in the README as well?\r\n", "If you're up for it, would you also mind updating this in the [Installation](https://huggingface.co./docs/transformers/installation#install-with-conda) docs? 🙂 ", "@stevhliu I think I did this correctly! Note that this also cleans up some whitespaces and newlines in the changed files.", "My pleasure! ", "Just seeing this now, but thanks for the PR @kevherro!" ]
1,704
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? Switch to the conda-forge channel for transformer installation, as the huggingface channel does not offer the latest version. Fixes #28248 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28313/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28313", "html_url": "https://github.com/huggingface/transformers/pull/28313", "diff_url": "https://github.com/huggingface/transformers/pull/28313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28313.patch", "merged_at": 1704389776000 }
https://api.github.com/repos/huggingface/transformers/issues/28312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28312/comments
https://api.github.com/repos/huggingface/transformers/issues/28312/events
https://github.com/huggingface/transformers/pull/28312
2,062,884,738
PR_kwDOCUB6oc5jE7kW
28,312
Support: Leverage Accelerate for object detection/segmentation models
{ "login": "Tanmaypatil123", "id": 77950208, "node_id": "MDQ6VXNlcjc3OTUwMjA4", "avatar_url": "https://avatars.githubusercontent.com/u/77950208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tanmaypatil123", "html_url": "https://github.com/Tanmaypatil123", "followers_url": "https://api.github.com/users/Tanmaypatil123/followers", "following_url": "https://api.github.com/users/Tanmaypatil123/following{/other_user}", "gists_url": "https://api.github.com/users/Tanmaypatil123/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tanmaypatil123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanmaypatil123/subscriptions", "organizations_url": "https://api.github.com/users/Tanmaypatil123/orgs", "repos_url": "https://api.github.com/users/Tanmaypatil123/repos", "events_url": "https://api.github.com/users/Tanmaypatil123/events{/privacy}", "received_events_url": "https://api.github.com/users/Tanmaypatil123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, thanks for your PR. It looks like there's an issue with the segmentation models.", "@NielsRogge could you take a look at this PR again . I have made required changes for segmentation models.", "Thanks, LGTM.", "hey @amyeroberts removed conflicts .but whenever I tried to run test cases by `RUN_SLOW=1 pytest tests/models/conditional_detr/` I am getting following error : \r\n![image](https://github.com/huggingface/transformers/assets/77950208/69fc8fa4-f9ed-472c-98e0-672f3f0e632f)\r\n", "@Tanmaypatil123 - Not sure exactly what's causing this. The function `is_g2p_en_available` was added in #23439. However, we haven't seen this import error in our CI runs which validate slow tests and I'm able to run `RUN_SLOW=1 pytest tests/models/conditional_detr/` locally on `main`.\r\n\r\nIn a python session, are you about to run: \r\n```\r\nfrom transformers.utils import is_g2p_en_available\r\n```\r\n? \r\n\r\nCould you try rebasing on main to make sure all updates/fixes are included in this branch? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "The issue is related to evaluate I think: https://github.com/huggingface/peft/issues/1351 ", "cc @Tanmaypatil123 let's rebase this and I think we could merge this! ", "@ArthurZucker Changes are approved but there is issue with segmentation model; test cases fails for it and i could not figure out what the problem is. ", "@Tanmaypatil123 you just need to fix the upstream conflicts, like so:\r\n```\r\ngit remote add upstream https://github.com/huggingface/transformers.git\r\ngit fetch upstream\r\ngit merge upstream/main\r\n```", "@ArthurZucker I have resolved the conflicts ", "@Tanmaypatil123 Great! \r\n\r\nFor the failing tests, there's two which look like they might be related to this PR which will needed to be addressed before merge: \r\n\r\n```\r\nFAILED tests/models/yolos/test_modeling_yolos.py::YolosModelTest::test_pipeline_image_feature_extraction - TypeError: forward() got an unexpected keyword argument 'pixel_mask'\r\nFAILED tests/models/yolos/test_modeling_yolos.py::YolosModelTest::test_pipeline_object_detection - TypeError: forward() got an unexpected keyword argument 'pixel_mask'\r\n```\r\n\r\nFor the build documentation tests, there was a recent fix merged into main. Rebasing to include this should resolve. ", "> @Tanmaypatil123 Great!\r\n> \r\n> For the failing tests, there's two which look like they might be related to this PR which will needed to be addressed before merge:\r\n> \r\n> ```\r\n> FAILED tests/models/yolos/test_modeling_yolos.py::YolosModelTest::test_pipeline_image_feature_extraction - TypeError: forward() got an unexpected keyword argument 'pixel_mask'\r\n> FAILED tests/models/yolos/test_modeling_yolos.py::YolosModelTest::test_pipeline_object_detection - TypeError: forward() got an unexpected keyword argument 'pixel_mask'\r\n> ```\r\n> \r\n> For the build documentation tests, there was a recent fix merged into main. Rebasing to include this should resolve.\r\n\r\nDon't know how that test case is failing. I didn't make any changes to the parameters that are provided to `YolosForObjectDetection`. should we add parameters or is there any change in the test case?", "@Tanmaypatil123 Indeed. 
It's quite weird these tests are failing as they don't seem related to this PR. They aren't however failing on any other PRs. What I think is happening is that some tests weren't collected by the test fetched (cc @ydshieh for reference) when the image feature extraction pipeline was added. And now, because this PR touches yolos they're being run now. I'm going to look into it and let you know asap. ", "@amyeroberts Done 😀 Thanks for the help. Huggingface has the best open source maintainers.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28312). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@Tanmaypatil123 Thanks for this great contribution and patience with our failing CI tests! " ]
1,704
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? Adding support for multi-GPU training in 6 object detection models and segmentation models. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # #28309 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28312/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28312/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28312", "html_url": "https://github.com/huggingface/transformers/pull/28312", "diff_url": "https://github.com/huggingface/transformers/pull/28312.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28312.patch", "merged_at": 1708083539000 }
https://api.github.com/repos/huggingface/transformers/issues/28311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28311/comments
https://api.github.com/repos/huggingface/transformers/issues/28311/events
https://github.com/huggingface/transformers/pull/28311
2,062,676,245
PR_kwDOCUB6oc5jEPCF
28,311
Bump tj-actions/changed-files from 22.2 to 41 in /.github/workflows
{ "login": "dependabot[bot]", "id": 49699333, "node_id": "MDM6Qm90NDk2OTkzMzM=", "avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dependabot%5Bbot%5D", "html_url": "https://github.com/apps/dependabot", "followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers", "following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}", "gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}", "starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions", "organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs", "repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos", "events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}", "received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events", "type": "Bot", "site_admin": false }
[ { "id": 1905493434, "node_id": "MDU6TGFiZWwxOTA1NDkzNDM0", "url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies", "name": "dependencies", "color": "0366d6", "default": false, "description": "Pull requests that update a dependency file" }, { "id": 6384750322, "node_id": "LA_kwDOCUB6oc8AAAABfI-O8g", "url": "https://api.github.com/repos/huggingface/transformers/labels/github_actions", "name": "github_actions", "color": "000000", "default": false, "description": "Pull requests that update GitHub Actions code" } ]
closed
false
null
[]
[ "fyi @ydshieh " ]
1,704
1,704
1,704
CONTRIBUTOR
null
Bumps [tj-actions/changed-files](https://github.com/tj-actions/changed-files) from 22.2 to 41. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/tj-actions/changed-files/releases">tj-actions/changed-files's releases</a>.</em></p> <blockquote> <h2>v41</h2> <h1>Changes in v41.0.1</h1> <h2>What's Changed</h2> <ul> <li>Upgraded to v41 by <a href="https://github.com/tj-actions-bot"><code>@​tj-actions-bot</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1811">tj-actions/changed-files#1811</a></li> <li>chore(deps): update dependency eslint-plugin-prettier to v5.1.2 by <a href="https://github.com/renovate"><code>@​renovate</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1813">tj-actions/changed-files#1813</a></li> <li>fix: update characters escaped by safe output by <a href="https://github.com/jackton1"><code>@​jackton1</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1815">tj-actions/changed-files#1815</a></li> </ul> <p><strong>Full Changelog</strong>: <a href="https://github.com/tj-actions/changed-files/compare/v41...v41.0.1">https://github.com/tj-actions/changed-files/compare/v41...v41.0.1</a></p> <hr /> <h1>Changes in v41.0.0</h1> <h2>🔥 🔥 BREAKING CHANGE 🔥 🔥</h2> <p>A new <code>safe_output</code> input is now available to prevent outputting unsafe filename characters (Enabled by default). This would escape characters in the filename that could be used for command injection.</p> <blockquote> <p>[!NOTE] This can be disabled by setting the <code>safe_output</code> to false this comes with a recommendation to store all outputs generated in an environment variable first before using them.</p> </blockquote> <h4>Example</h4> <pre lang="yaml"><code>... - name: Get changed files id: changed-files uses: tj-actions/changed-files@v40 with: safe_output: false # set to false because we are using an environment variable to store the output and avoid command injection. <pre><code>- name: List all added files env: ADDED_FILES: ${{ steps.changed-files.outputs.added_files }} run: | for file in &amp;quot;$ADDED_FILES&amp;quot;; do echo &amp;quot;$file was added&amp;quot; done </code></pre> <p>... 
</code></pre></p> <h2>What's Changed</h2> <ul> <li>chore(deps): update typescript-eslint monorepo to v6.15.0 by <a href="https://github.com/renovate"><code>@​renovate</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1801">tj-actions/changed-files#1801</a></li> <li>Upgraded to v40.2.3 by <a href="https://github.com/tj-actions-bot"><code>@​tj-actions-bot</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1800">tj-actions/changed-files#1800</a></li> <li>chore(deps): update dependency eslint-plugin-prettier to v5.1.0 by <a href="https://github.com/renovate"><code>@​renovate</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1802">tj-actions/changed-files#1802</a></li> <li>chore(deps): lock file maintenance by <a href="https://github.com/renovate"><code>@​renovate</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1803">tj-actions/changed-files#1803</a></li> <li>chore(deps): update dependency eslint-plugin-prettier to v5.1.1 by <a href="https://github.com/renovate"><code>@​renovate</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1804">tj-actions/changed-files#1804</a></li> <li>fix: update safe output regex and the docs by <a href="https://github.com/tj-actions-bot"><code>@​tj-actions-bot</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1805">tj-actions/changed-files#1805</a></li> <li>Revert &quot;chore(deps): update actions/download-artifact action to v4&quot; by <a href="https://github.com/jackton1"><code>@​jackton1</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1806">tj-actions/changed-files#1806</a></li> <li>Update README.md by <a href="https://github.com/jackton1"><code>@​jackton1</code></a> in <a href="https://redirect.github.com/tj-actions/changed-files/pull/1808">tj-actions/changed-files#1808</a></li> </ul> <!-- raw HTML omitted --> </blockquote> <p>... 
(truncated)</p> </details> <details> <summary>Changelog</summary> <p><em>Sourced from <a href="https://github.com/tj-actions/changed-files/blob/main/HISTORY.md">tj-actions/changed-files's changelog</a>.</em></p> <blockquote> <h1>Changelog</h1> <h1><a href="https://github.com/tj-actions/changed-files/compare/v41.0.0...v41.0.1">41.0.1</a> - (2023-12-24)</h1> <h2><!-- raw HTML omitted -->🐛 Bug Fixes</h2> <ul> <li>Update characters escaped by safe output (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1815">#1815</a>) (<a href="https://github.com/tj-actions/changed-files/commit/716b1e13042866565e00e85fd4ec490e186c4a2f">716b1e1</a>) - (Tonye Jack)</li> </ul> <h2><!-- raw HTML omitted -->⚙️ Miscellaneous Tasks</h2> <ul> <li><strong>deps:</strong> Update dependency eslint-plugin-prettier to v5.1.2 (<a href="https://github.com/tj-actions/changed-files/commit/7aaf10d9eef19e8a2432a967b88124171152caaf">7aaf10d</a>) - (renovate[bot])</li> </ul> <h2><!-- raw HTML omitted -->⬆️ Upgrades</h2> <ul> <li>Upgraded to v41 (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1811">#1811</a>)</li> </ul> <p>Co-authored-by: jackton1 <a href="mailto:[email protected]">[email protected]</a> (<a href="https://github.com/tj-actions/changed-files/commit/cc08e170f4447237bcaf8acaacfa615b9cb86612">cc08e17</a>) - (tj-actions[bot])</p> <h1><a href="https://github.com/tj-actions/changed-files/compare/v40.2.3...v41.0.0">41.0.0</a> - (2023-12-23)</h1> <h2><!-- raw HTML omitted -->🐛 Bug Fixes</h2> <ul> <li>Update safe output regex and the docs (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1805">#1805</a>) (<a href="https://github.com/tj-actions/changed-files/commit/ff2f6e6b91913a7be42be1b5917330fe442f2ede">ff2f6e6</a>) - (tj-actions[bot])</li> </ul> <h2><!-- raw HTML omitted -->⏪ Reverts</h2> <ul> <li>Revert &quot;chore(deps): update actions/download-artifact action to v4&quot; (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1806">#1806</a>)</li> </ul> <p>(<a href="https://github.com/tj-actions/changed-files/commit/4f573fed06c9abb5da4c72f75c1c320718114ff7">4f573fe</a>) - (Tonye Jack)</p> <h2><!-- raw HTML omitted -->🔄 Update</h2> <ul> <li>Update README.md (<a href="https://github.com/tj-actions/changed-files/commit/6e79d6e3dbe48946636c2939c80ff5c84ff7f9fe">6e79d6e</a>) - (Tonye Jack)</li> <li>Update README.md (<a href="https://github.com/tj-actions/changed-files/commit/d13ac1942fb3c1d7d32017915bb082cebe8a272a">d13ac19</a>) - (Tonye Jack)</li> <li>Update README.md (<a href="https://github.com/tj-actions/changed-files/commit/bb89f97963be96b39e1a303e64d5b91a1af4c340">bb89f97</a>) - (Tonye Jack)</li> <li>Updated README.md (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1810">#1810</a>)</li> </ul> <p>Co-authored-by: renovate[bot] <!-- raw HTML omitted --> (<a href="https://github.com/tj-actions/changed-files/commit/1864078d0afadf68ba489e671ecc09fefe8b70ab">1864078</a>) - (tj-actions[bot])</p> <ul> <li>Update README.md (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1808">#1808</a>)</li> </ul> <p>(<a href="https://github.com/tj-actions/changed-files/commit/47371c50e97c089212d9eb92ca26c8453224e78e">47371c5</a>) - (Tonye Jack)</p> <h2><!-- raw HTML omitted -->📝 Other</h2> <ul> <li>Merge pull request from GHSA-mcph-m25j-8j63</li> </ul> <ul> <li> <p>feat: add <code>safe_output</code> input enabled by default</p> </li> <li> <p>fix: migrate README to safe uses of interpolation</p> </li> </ul> <!-- raw HTML 
omitted --> </blockquote> <p>... (truncated)</p> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/tj-actions/changed-files/commit/716b1e13042866565e00e85fd4ec490e186c4a2f"><code>716b1e1</code></a> fix: update characters escaped by safe output (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1815">#1815</a>)</li> <li><a href="https://github.com/tj-actions/changed-files/commit/7aaf10d9eef19e8a2432a967b88124171152caaf"><code>7aaf10d</code></a> chore(deps): update dependency eslint-plugin-prettier to v5.1.2</li> <li><a href="https://github.com/tj-actions/changed-files/commit/cc08e170f4447237bcaf8acaacfa615b9cb86612"><code>cc08e17</code></a> Upgraded to v41 (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1811">#1811</a>)</li> <li><a href="https://github.com/tj-actions/changed-files/commit/6e79d6e3dbe48946636c2939c80ff5c84ff7f9fe"><code>6e79d6e</code></a> Update README.md</li> <li><a href="https://github.com/tj-actions/changed-files/commit/d13ac1942fb3c1d7d32017915bb082cebe8a272a"><code>d13ac19</code></a> Update README.md</li> <li><a href="https://github.com/tj-actions/changed-files/commit/bb89f97963be96b39e1a303e64d5b91a1af4c340"><code>bb89f97</code></a> Update README.md</li> <li><a href="https://github.com/tj-actions/changed-files/commit/1864078d0afadf68ba489e671ecc09fefe8b70ab"><code>1864078</code></a> Updated README.md (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1810">#1810</a>)</li> <li><a href="https://github.com/tj-actions/changed-files/commit/f495a0321d3fffa62da2573adf70b77d5eb2f57a"><code>f495a03</code></a> chore(deps): lock file maintenance</li> <li><a href="https://github.com/tj-actions/changed-files/commit/47371c50e97c089212d9eb92ca26c8453224e78e"><code>47371c5</code></a> Update README.md (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1808">#1808</a>)</li> <li><a href="https://github.com/tj-actions/changed-files/commit/4f573fed06c9abb5da4c72f75c1c320718114ff7"><code>4f573fe</code></a> Revert &quot;chore(deps): update actions/download-artifact action to v4&quot; (<a href="https://redirect.github.com/tj-actions/changed-files/issues/1806">#1806</a>)</li> <li>Additional commits viewable in <a href="https://github.com/tj-actions/changed-files/compare/v22.2...v41">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=tj-actions/changed-files&package-manager=github_actions&previous-version=22.2&new-version=41)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28311/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28311", "html_url": "https://github.com/huggingface/transformers/pull/28311", "diff_url": "https://github.com/huggingface/transformers/pull/28311.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28311.patch", "merged_at": 1704269574000 }
https://api.github.com/repos/huggingface/transformers/issues/28310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28310/comments
https://api.github.com/repos/huggingface/transformers/issues/28310/events
https://github.com/huggingface/transformers/issues/28310
2,062,457,292
I_kwDOCUB6oc567pnM
28,310
OSError: Unable to load weights from pytorch checkpoint file
{ "login": "isRambler", "id": 118053582, "node_id": "U_kgDOBwlazg", "avatar_url": "https://avatars.githubusercontent.com/u/118053582?v=4", "gravatar_id": "", "url": "https://api.github.com/users/isRambler", "html_url": "https://github.com/isRambler", "followers_url": "https://api.github.com/users/isRambler/followers", "following_url": "https://api.github.com/users/isRambler/following{/other_user}", "gists_url": "https://api.github.com/users/isRambler/gists{/gist_id}", "starred_url": "https://api.github.com/users/isRambler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/isRambler/subscriptions", "organizations_url": "https://api.github.com/users/isRambler/orgs", "repos_url": "https://api.github.com/users/isRambler/repos", "events_url": "https://api.github.com/users/isRambler/events{/privacy}", "received_events_url": "https://api.github.com/users/isRambler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "me too", "Hey! I think that a conversion is attempted between pt and tf, but transformers does not support a TF implementation of this model. Could you rather open an issue on the `TensorRT-LLM` library with a full reproducer of what you are trying to do and the full stacktrace ", "> Hey! I think that a conversion is attempted between pt and tf, but transformers does not support a TF implementation of this model. Could you rather open an issue on the `TensorRT-LLM` library with a full reproducer of what you are trying to do and the full stacktrace\r\n\r\nOkay, I'm going to ask the relevant questions" ]
1,704
1,705
1,705
NONE
null
OSError: Unable to load weights from pytorch checkpoint file for '/root/autodl-tmp/llama-2-7b/pytorch_model-00001-of-00002.bin' at '/root/autodl-tmp/llama-2-7b/pytorch_model-00001-of-00002.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint ... Why do I get an error when I use TensorRT-LLM’s build.py to convert the llama-2-7b model? <img width="199" alt="19e41154a77724674ccbdc88d21d5ce" src="https://github.com/huggingface/transformers/assets/118053582/49fabd99-1ff3-49f1-ba40-7922bde51062">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28310/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28309/comments
https://api.github.com/repos/huggingface/transformers/issues/28309/events
https://github.com/huggingface/transformers/issues/28309
2,062,445,501
I_kwDOCUB6oc567mu9
28,309
Leverage Accelerate for object detection/segmentation models
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
closed
false
null
[]
[ "Hi @NielsRogge, I would like to take this up", "Hi @NielsRogge is this open for just 1 person taking it up as @sam99dave already indicated or multiple contribution? If it is the latter, I can take one of this up.", "Hi, it looks like the PRs above addressed all models at once, which is fine for me.", "Closing as the PR above was merged." ]
1,704
1,708
1,708
CONTRIBUTOR
null
### Feature request Currently there are 6 object detection models which don't support multi-GPU training out-of-the-box. The distributed code was explicitly left out of the modeling code as they wouldn't be compatible with the Trainer API. Refer to [these lines of code](https://github.com/huggingface/transformers/blob/502a10a6f89b2919444aba68cd0def51d5ba618c/src/transformers/models/detr/modeling_detr.py#L2207-L2210) as an example. However, now that the Trainer class uses 🤗 [Accelerate](https://huggingface.co./docs/accelerate/index) behind the scenes, we can include it now by leveraging the following code: ``` from accelerate import PartialState from accelerate.utils import reduce # Check that we have initialized the distributed state world_size = 1 if PartialState._shared_state != {}: num_boxes = reduce(num_boxes) world_size = PartialState().num_processes ``` See this commit as an example: https://github.com/huggingface/transformers/pull/27990/commits/526a8b0801d075ad5f99e87fbfc5de49ea347a9a. I'll add a list here with models to be fixed: - [ ] DETR - [ ] Conditional DETR - [ ] Deformable DETR - [ ] YOLOS - [ ] Table Transformer - [ ] DETA Additionally, there are 3 segmentation models which require a similar update: - [ ] MaskFormer - [ ] Mask2Former - [ ] OneFormer. For these, the `get_num_masks` function requires an update similar to what is present in the [original repository](https://github.com/facebookresearch/MaskFormer/blob/da3e60d85fdeedcb31476b5edd7d328826ce56cc/mask_former/modeling/criterion.py#L169), using Accelerate. ### Motivation Would be great to support multi-GPU training of these models leveraging Accelerate ### Your contribution I can do this but this is a perfect opportunity for a first open-source contribution
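For reference, here is a rough sketch of how the `num_boxes` normalization inside a DETR-style loss could look after the change, based on the snippet and commit above (the helper name and exact surrounding code are illustrative; each model may differ slightly):

```python
import torch
from accelerate import PartialState
from accelerate.utils import reduce


def normalize_num_boxes(targets, device):
    # total number of target boxes in the local batch (DETR-style targets)
    num_boxes = sum(len(t["class_labels"]) for t in targets)
    num_boxes = torch.as_tensor([num_boxes], dtype=torch.float, device=device)

    world_size = 1
    if PartialState._shared_state != {}:
        # aggregate num_boxes across processes and track the number of processes,
        # mirroring the snippet above
        num_boxes = reduce(num_boxes)
        world_size = PartialState().num_processes
    return torch.clamp(num_boxes / world_size, min=1).item()
```

With this in place, the same normalization works on a single process (`world_size` stays 1) and under `accelerate launch`, so it remains compatible with the Trainer API.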
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28309/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28308/comments
https://api.github.com/repos/huggingface/transformers/issues/28308/events
https://github.com/huggingface/transformers/issues/28308
2,062,380,948
I_kwDOCUB6oc567W-U
28,308
[Trainer] rename tokenizer to tokenizer_or_processor
{ "login": "Hambaobao", "id": 48345096, "node_id": "MDQ6VXNlcjQ4MzQ1MDk2", "avatar_url": "https://avatars.githubusercontent.com/u/48345096?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hambaobao", "html_url": "https://github.com/Hambaobao", "followers_url": "https://api.github.com/users/Hambaobao/followers", "following_url": "https://api.github.com/users/Hambaobao/following{/other_user}", "gists_url": "https://api.github.com/users/Hambaobao/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hambaobao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hambaobao/subscriptions", "organizations_url": "https://api.github.com/users/Hambaobao/orgs", "repos_url": "https://api.github.com/users/Hambaobao/repos", "events_url": "https://api.github.com/users/Hambaobao/events{/privacy}", "received_events_url": "https://api.github.com/users/Hambaobao/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "fyi @pacman100 and @muellerzr 🤗 " ]
1,704
1,704
null
NONE
null
### Feature request I suggest renaming the `tokenizer` parameter in **Trainer** to `tokenizer_or_processor`. ### Motivation In the future, the training of many **multimodal models** will certainly require the use of **Trainer**. However, in multimodal models, the processors used to process data are not just `tokenizer`, but also include things like `tokenizer` and `image_processor`, which are now mostly referred to as `processor`. Renaming `tokenizer` to `tokenizer_or_processor` will help improve the readability of the **Trainer** code during multimodal model development. ### Your contribution I can help rename `tokenizer` to `tokenizer_or_processor`.
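For illustration, a minimal sketch of the pattern this rename would make clearer (the checkpoint name is a placeholder and the datasets are omitted):

```python
from transformers import AutoModelForVision2Seq, AutoProcessor, Trainer, TrainingArguments

checkpoint = "some-org/some-multimodal-checkpoint"  # placeholder name, not a real repo
processor = AutoProcessor.from_pretrained(checkpoint)
model = AutoModelForVision2Seq.from_pretrained(checkpoint)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    # train_dataset/eval_dataset omitted for brevity; in practice they hold
    # examples already encoded by the processor
    tokenizer=processor,  # a full processor, not just a tokenizer, is passed here today
)
```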
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28308/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28307/comments
https://api.github.com/repos/huggingface/transformers/issues/28307/events
https://github.com/huggingface/transformers/issues/28307
2,062,271,457
I_kwDOCUB6oc5668Ph
28,307
Cannot find best model after training.
{ "login": "ILG2021", "id": 93691919, "node_id": "U_kgDOBZWgDw", "avatar_url": "https://avatars.githubusercontent.com/u/93691919?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ILG2021", "html_url": "https://github.com/ILG2021", "followers_url": "https://api.github.com/users/ILG2021/followers", "following_url": "https://api.github.com/users/ILG2021/following{/other_user}", "gists_url": "https://api.github.com/users/ILG2021/gists{/gist_id}", "starred_url": "https://api.github.com/users/ILG2021/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ILG2021/subscriptions", "organizations_url": "https://api.github.com/users/ILG2021/orgs", "repos_url": "https://api.github.com/users/ILG2021/repos", "events_url": "https://api.github.com/users/ILG2021/events{/privacy}", "received_events_url": "https://api.github.com/users/ILG2021/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey, could you share the training command you used? (a reproducer for training) as well as the full traceback? 🤗 \r\n", "> `Could not locate the best model at whisper-large-v2-fil/checkpoint-2000/pytorch_model.bin, if you are running a distributed training on multiple nodes, you should activate `--save_on_each_node`.`\r\n\r\nI use the finetune whisper notebook in this blog:\r\nhttps://huggingface.co./blog/fine-tune-whisper", "We need the call you used, and the full traceback 😉 ", "I had the same issue, i fixed with instead of this\r\n\r\nmodel.save_pretrained(output_dir)\r\n\r\ni've use\r\n\r\nmodel.save_pretrained(output_dir, safe_serialization=False)\r\n\r\nAnd also on the training arguments, i 've added save_safetensors=False which by default is True\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=output_dir,\r\n overwrite_output_dir=True,\r\n num_train_epochs=epochs, \r\n per_device_train_batch_size=batch_size,\r\n learning_rate=2e-5, \r\n weight_decay=0.02, \r\n max_grad_norm=1.0,\r\n save_steps=30,\r\n save_total_limit=2,\r\n save_strategy=\"steps\",\r\n warmup_steps=1000, \r\n gradient_accumulation_steps=1,\r\n logging_dir = output_dir + '/logs',\r\n logging_steps=30,\r\n save_safetensors=False\r\n )\r\n\r\n\r\n", "\r\n\r\n\r\n> I had the same issue, i fixed with instead of this\r\n> \r\n> model.save_pretrained(output_dir)\r\n> \r\n> i've use\r\n> \r\n> model.save_pretrained(output_dir, safe_serialization=False)\r\n> \r\n> And also on the training arguments, i 've added save_safetensors=False which by default is True\r\n> \r\n> training_args = TrainingArguments( output_dir=output_dir, overwrite_output_dir=True, num_train_epochs=epochs, per_device_train_batch_size=batch_size, learning_rate=2e-5, weight_decay=0.02, max_grad_norm=1.0, save_steps=30, save_total_limit=2, save_strategy=\"steps\", warmup_steps=1000, gradient_accumulation_steps=1, logging_dir = output_dir + '/logs', logging_steps=30, save_safetensors=False )\r\n\r\nThanks for you sharing. That's the problem." ]
1,704
1,706
null
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The error is: `Could not locate the best model at whisper-large-v2-fil/checkpoint-2000/pytorch_model.bin, if you are running a distributed training on multiple nodes, you should activate `--save_on_each_node`.` Because the checkpoint is using safetensor now, but the trainer tried to find pytorch_model.bin. So it can not find. ### Expected behavior works regular.
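A minimal sketch of the workaround discussed above, which forces `.bin` serialization so the trainer can locate the best checkpoint (values are illustrative and follow the Whisper fine-tuning setup described here):

```python
from transformers import Seq2SeqTrainingArguments

# Illustrative values; the key part is save_safetensors=False, so checkpoints are
# written as pytorch_model.bin, which load_best_model_at_end currently looks for.
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-large-v2-fil",
    evaluation_strategy="steps",
    eval_steps=1000,
    save_steps=1000,
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
    save_safetensors=False,
)
```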
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28307/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28306/comments
https://api.github.com/repos/huggingface/transformers/issues/28306/events
https://github.com/huggingface/transformers/issues/28306
2,062,055,346
I_kwDOCUB6oc566Hey
28,306
LLaMA-MoE
{ "login": "Spico197", "id": 22840952, "node_id": "MDQ6VXNlcjIyODQwOTUy", "avatar_url": "https://avatars.githubusercontent.com/u/22840952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Spico197", "html_url": "https://github.com/Spico197", "followers_url": "https://api.github.com/users/Spico197/followers", "following_url": "https://api.github.com/users/Spico197/following{/other_user}", "gists_url": "https://api.github.com/users/Spico197/gists{/gist_id}", "starred_url": "https://api.github.com/users/Spico197/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Spico197/subscriptions", "organizations_url": "https://api.github.com/users/Spico197/orgs", "repos_url": "https://api.github.com/users/Spico197/repos", "events_url": "https://api.github.com/users/Spico197/events{/privacy}", "received_events_url": "https://api.github.com/users/Spico197/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,704
1,704
null
NONE
null
### Model description LLaMA-MoE is a series of token-choice-based Mixture-of-Experts models built on LLaMA2. It first partitions LLaMA2's FFNs into multiple experts, then applies continual pre-training to recover its language abilities. We believe LLaMA-MoE is a good starting point for MoE research under limited computing resources. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Our repo: https://github.com/pjlab-sys4nlp/llama-moe HF models (currently set `trust_remote_code=True`): https://huggingface.co./llama-moe
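A hedged usage sketch based on the links above; only the `llama-moe` organization page is given, so the checkpoint id below is hypothetical, and `trust_remote_code=True` follows the note in the description:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint id under the linked organization; replace with a real one from the Hub.
model_id = "llama-moe/LLaMA-MoE-example"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Mixture-of-Experts models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```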
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28306/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28305/comments
https://api.github.com/repos/huggingface/transformers/issues/28305/events
https://github.com/huggingface/transformers/issues/28305
2,061,979,932
I_kwDOCUB6oc5651Ec
28,305
In which version of transformers was _make_causal_mask moved out of modeling_clip.py?
{ "login": "bhosalems", "id": 10846405, "node_id": "MDQ6VXNlcjEwODQ2NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/10846405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhosalems", "html_url": "https://github.com/bhosalems", "followers_url": "https://api.github.com/users/bhosalems/followers", "following_url": "https://api.github.com/users/bhosalems/following{/other_user}", "gists_url": "https://api.github.com/users/bhosalems/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhosalems/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhosalems/subscriptions", "organizations_url": "https://api.github.com/users/bhosalems/orgs", "repos_url": "https://api.github.com/users/bhosalems/repos", "events_url": "https://api.github.com/users/bhosalems/events{/privacy}", "received_events_url": "https://api.github.com/users/bhosalems/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@bhosalems you will find them [here](https://github.com/huggingface/transformers/blob/6ea3ee3cd215dfe0b32034299da3f876af0e7c4e/src/transformers/models/clip/modeling_clip.py) so you will probably need this version of the [transformers library](https://github.com/huggingface/transformers/releases/tag/v4.32.0) : v4.32.0\r\njust use \r\n```\r\npip install transformers==4.32.0\r\n```\r\nand everything should go back to normal", "` _make_causal_mask, _expand_mask` are private functions. There is no guaranty that they will be properly imported and are not part of the public api. They were removed as such! \r\nI do not recommend you to go back to previous versions but rather use \r\n\r\nhttps://github.com/huggingface/transformers/blob/b1292bca6923cfbc9cb3f70cb55df57e4e17e630/src/transformers/modeling_attn_mask_utils.py#L21\r\n\r\nand it's utilities ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info Name: transformers Version: 4.28.0 Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow Home-page: https://github.com/huggingface/transformers Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors) Author-email: [email protected] License: Apache 2.0 License Location: anaconda3/envs/pathldm1/lib/python3.8/site-packages Requires: tqdm, packaging, filelock, numpy, tokenizers, regex, huggingface-hub, pyyaml, requests Required-by: ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Try to import `from transformers.models.clip.modeling_clip import _make_causal_mask, _expand_mask` ### Expected behavior It should import both functions without errors. I see that the code has been refactored several times. While I can see the code below in one version of transformers, I am not sure whether I should just add it back to modeling_clip.py. ```
def _make_causal_mask(
    input_ids_shape: torch.Size,
    dtype: torch.dtype,
    device: torch.device,
    past_key_values_length: int = 0,
    sliding_window: Optional[int] = None,
):
    """
    Make causal mask used for bi-directional self-attention.
    """
    bsz, tgt_len = input_ids_shape
    mask = torch.full((tgt_len, tgt_len), torch.finfo(dtype).min, device=device)
    mask_cond = torch.arange(mask.size(-1), device=device)
    mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0)
    mask = mask.to(dtype)

    if past_key_values_length > 0:
        mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1)

    # add lower triangular sliding window mask if necessary
    if sliding_window is not None:
        diagonal = past_key_values_length - sliding_window + 1

        context_mask = 1 - torch.triu(torch.ones_like(mask, dtype=torch.int), diagonal=diagonal)
        mask.masked_fill_(context_mask.bool(), torch.finfo(dtype).min)

    return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length)
```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28304/comments
https://api.github.com/repos/huggingface/transformers/issues/28304/events
https://github.com/huggingface/transformers/issues/28304
2,061,854,640
I_kwDOCUB6oc565Wew
28,304
Using LLaMA2 fast tokenizer gives zero loss
{ "login": "cnut1648", "id": 37067883, "node_id": "MDQ6VXNlcjM3MDY3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/37067883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cnut1648", "html_url": "https://github.com/cnut1648", "followers_url": "https://api.github.com/users/cnut1648/followers", "following_url": "https://api.github.com/users/cnut1648/following{/other_user}", "gists_url": "https://api.github.com/users/cnut1648/gists{/gist_id}", "starred_url": "https://api.github.com/users/cnut1648/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cnut1648/subscriptions", "organizations_url": "https://api.github.com/users/cnut1648/orgs", "repos_url": "https://api.github.com/users/cnut1648/repos", "events_url": "https://api.github.com/users/cnut1648/events{/privacy}", "received_events_url": "https://api.github.com/users/cnut1648/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "BTW similar issue seems to happen on mistral as well. Training on the same data, non-fast tokenizer gives ~1 loss while fast tokenizer gives > 10 loss.", "There are know issues with LlamaFast tokenizer, could you try with this: #26678 \r\n\r\n```python\r\nfrom tokenizers import pre_tokenizers, normalizers\r\nfrom transformers import AutoTokenizer\r\nold_tokenizer = AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-7b-hf\")\r\nold_tokenizer._tokenizer.normalizer = normalizers.Sequence([])\r\nold_tokenizer._tokenizer.pre_tokenizer = pre_tokenizers.Metaspace(\"▁\", True, prepend_scheme = \"first\")\r\n```", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): 2.13.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @pacman100, @ArthurZucker , @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Using fast tokenizer would give me a zero loss on my own data no matter what hyperparameters I use (even with extremely high learning rate like 0.1 and 10). However I notice that if I switch to standard tokenizer (`use_fast=False`) then I can have a normal behavior. The command I have is ```shell deepspeed --master_port 12345 --num_gpus=4 train.py --bf16 --deepspeed ./deepspeed_config/zero3-offload.json --model_name_or_path NousResearch/Llama-2-7b-hf --do_train --data_path dataset/chat_data --output_dir output_path --per_device_train_batch_size=4 --per_device_eval_batch_size=1 --num_train_epochs=3 --lr_scheduler_type=cosine --gradient_accumulation_steps=4 --gradient_checkpointing=True --overwrite_output_dir --seed 42 --report_to=none --learning_rate 2e-5 --weight_decay=0.01 --logging_steps=1 ``` The `train.py` is modified from [`run_clm.py`](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) but I change the train_data with [fastchat](https://github.com/lm-sys/FastChat) formatted dataset. Specifically, the train_data is: ```python class SupervisedDataset(torch.utils.data.Dataset): """Dataset for supervised fine-tuning.""" def __init__(self, raw_data, tokenizer: transformers.PreTrainedTokenizer): super(SupervisedDataset, self).__init__() sources = [example["conversations"] for example in raw_data] data_dict = preprocess(sources, tokenizer) self.input_ids = data_dict["input_ids"] self.labels = data_dict["labels"] self.attention_mask = data_dict["attention_mask"] def __len__(self): return len(self.input_ids) def __getitem__(self, i) -> Dict[str, torch.Tensor]: return dict( input_ids=self.input_ids[i], labels=self.labels[i], attention_mask=self.attention_mask[i], ) train_dataset = SupervisedDataset(raw_datasets["train"], tokenizer) ``` where `preprocess` is [imported](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/train.py#L92-L177) from fastchat, and the `raw_datasets['train']` is a filtered version of https://huggingface.co./datasets/WizardLM/WizardLM_evol_instruct_V2_196k which I unfortunately cannot share. But I mainly would like to ask if there are some issues with LLaMA's fast tokenizer. The tokenizer is built with `AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf", use_fast=True)` Thanks. ### Expected behavior Fasttokenizer should not affect training dynamics.
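A small diagnostic sketch (not from the issue itself): comparing what the fast and slow tokenizers produce on a chat-formatted sample is usually the first check when the two give such different losses, since any divergence propagates into the label masking done by `preprocess`:

```python
from transformers import AutoTokenizer

model_id = "NousResearch/Llama-2-7b-hf"
fast = AutoTokenizer.from_pretrained(model_id, use_fast=True)
slow = AutoTokenizer.from_pretrained(model_id, use_fast=False)

sample = "USER: Hello there ASSISTANT: Hi! How can I help?"  # illustrative sample
fast_ids = fast(sample).input_ids
slow_ids = slow(sample).input_ids

# If the id sequences differ, the fast/slow runs will mask labels differently,
# which can silently zero out the training signal.
print(fast_ids == slow_ids)
print(fast.convert_ids_to_tokens(fast_ids))
print(slow.convert_ids_to_tokens(slow_ids))
```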
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28304/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28303/comments
https://api.github.com/repos/huggingface/transformers/issues/28303/events
https://github.com/huggingface/transformers/pull/28303
2,061,812,235
PR_kwDOCUB6oc5jBXAb
28,303
Enable the use of batch api using multiple speakers with Bark
{ "login": "MidAtBest", "id": 154687332, "node_id": "U_kgDOCThXZA", "avatar_url": "https://avatars.githubusercontent.com/u/154687332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MidAtBest", "html_url": "https://github.com/MidAtBest", "followers_url": "https://api.github.com/users/MidAtBest/followers", "following_url": "https://api.github.com/users/MidAtBest/following{/other_user}", "gists_url": "https://api.github.com/users/MidAtBest/gists{/gist_id}", "starred_url": "https://api.github.com/users/MidAtBest/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MidAtBest/subscriptions", "organizations_url": "https://api.github.com/users/MidAtBest/orgs", "repos_url": "https://api.github.com/users/MidAtBest/repos", "events_url": "https://api.github.com/users/MidAtBest/events{/privacy}", "received_events_url": "https://api.github.com/users/MidAtBest/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "> Hi @MidAtBest, thanks for taking care of this, this is a great PR! The logic is quite difficult so congrats on handling it so nicely !\r\n> \r\n> Two general comments:\r\n> \r\n> 1. you should add test for bark modeling code as well !\r\n> 2. it's best to stay with `voice_preset` instead of `voice_presets`.\r\n> \r\n> I'm responding to your question below, I hope that it helps!\r\n> \r\n> Thanks again!\r\n> \r\n> In terms of your question, I believe that we should refactor the logic of the coarse generation quite a bit to make it work.\r\n> \r\n> The key to that is IMO in the following lines:\r\n> \r\n> These ones creates `semantic_output`:\r\n> \r\n> https://github.com/huggingface/transformers/blob/87ae2a4632de9c3090272d5cd37db86c0e03a0a9/src/transformers/models/bark/modeling_bark.py#L1175-L1185\r\n> \r\n> Those 3 lines are located in the generation loop and creates `input_coarse`:\r\n> \r\n> https://github.com/huggingface/transformers/blob/87ae2a4632de9c3090272d5cd37db86c0e03a0a9/src/transformers/models/bark/modeling_bark.py#L1196-L1204\r\n> \r\n> As you can see, the only thing that changes here is `semantic_idx`, the rest is quite static.\r\n> \r\n> So in my opinion, you keep track of the length of each `semantic_history` and merge each `semantic_history` with the corresponding `semantic_output`.\r\n> \r\n> Once arriving in the generation loop, you can do the 2 first lines for each sample (taking into account than each sample has its own `semantic_idx`) then use `torch.rnn.pad_sequence` to pad it to the right length.\r\n\r\nThanks a lot for the reply, yes it took a bit of time for me to understand both the model and the shape manipulations going on. \r\nRegarding the tests and other changes I had omitted them as I was a bit puzzled regarding the question I mentioned in the body of the PR. \r\nSetting the PR as WIP and will try to have something by Wednesday evening.\r\n", "cc @ylacombe would be nice if you can review here!\r\n" ]
1,704
1,707
null
NONE
null
# What does this PR do? This PR aims to enable the use of the batch API when using multiple speaker prompts for Bark. Currently, the batch API is available when using multiple text inputs but is limited to a single speaker. The objective of this PR is to enable this when multiple speaker prompts are being used. This is done keeping in mind that we want to minimise the amount of loops when processing the data while keeping the same audio quality. cc @ylacombe and @Selectorrr , could you take a look ? Thanks! Fixes #26921 PS: Currently I have doubts on how to solve the issue in the `BarkCoarseModel` especially these 2 lines: ``` input_coarse = semantic_output[:, np.max([0, semantic_idx - max_semantic_history]) :] input_coarse = input_coarse[:, :max_coarse_input_length] ``` My concern is that with the approach implemented in this PR, if we need to pad after the `preprocess_histories` method because the samples in the batch for `x_coarse` are not the same length then we might run into a scenario where some padding tokens would be used in `input_coarse` instead of "regular tokens" if the sample was processed outside of a batch.
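A rough illustration of the per-sample slicing plus padding idea discussed in the comments above, assuming each sample carries its own `semantic_idx`; the names are illustrative rather than Bark's actual internals:

```python
import torch
from torch.nn.utils.rnn import pad_sequence


def build_input_coarse(semantic_outputs, semantic_idxs, max_semantic_history,
                       max_coarse_input_length, pad_id=0):
    """semantic_outputs: list of 1D tensors, one per sample, each with its own history length."""
    windows = []
    for out, idx in zip(semantic_outputs, semantic_idxs):
        start = max(0, idx - max_semantic_history)
        windows.append(out[start:start + max_coarse_input_length])
    # Pad to a common length so the whole batch can go through the coarse model in one forward pass.
    return pad_sequence(windows, batch_first=True, padding_value=pad_id)
```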
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28303/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28303", "html_url": "https://github.com/huggingface/transformers/pull/28303", "diff_url": "https://github.com/huggingface/transformers/pull/28303.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28303.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28302/comments
https://api.github.com/repos/huggingface/transformers/issues/28302/events
https://github.com/huggingface/transformers/pull/28302
2,061,630,391
PR_kwDOCUB6oc5jAwso
28,302
Add models from deit
{ "login": "rajveer43", "id": 64583161, "node_id": "MDQ6VXNlcjY0NTgzMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajveer43", "html_url": "https://github.com/rajveer43", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "repos_url": "https://api.github.com/users/rajveer43/repos", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@stevhliu", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28302). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,704
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> part of #28301 - [ ] #28301 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stevhliu -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28302", "html_url": "https://github.com/huggingface/transformers/pull/28302", "diff_url": "https://github.com/huggingface/transformers/pull/28302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28302.patch", "merged_at": 1706807755000 }
https://api.github.com/repos/huggingface/transformers/issues/28301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28301/comments
https://api.github.com/repos/huggingface/transformers/issues/28301/events
https://github.com/huggingface/transformers/issues/28301
2,061,629,952
I_kwDOCUB6oc564foA
28,301
[i18n-ja] Translating docs to Japanese
{ "login": "rajveer43", "id": 64583161, "node_id": "MDQ6VXNlcjY0NTgzMTYx", "avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rajveer43", "html_url": "https://github.com/rajveer43", "followers_url": "https://api.github.com/users/rajveer43/followers", "following_url": "https://api.github.com/users/rajveer43/following{/other_user}", "gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}", "starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions", "organizations_url": "https://api.github.com/users/rajveer43/orgs", "repos_url": "https://api.github.com/users/rajveer43/repos", "events_url": "https://api.github.com/users/rajveer43/events{/privacy}", "received_events_url": "https://api.github.com/users/rajveer43/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "@stevhliu would you add the more models here in the list so other can look at this issue and collaborate?" ]
1,704
1,706
null
CONTRIBUTOR
null
Hi! Let's bring the documentation to all the Japanese-speaking community 🌐 Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `ja` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `ja/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Model doc section - [ ] [deit.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/deit.md) #28302 - [ ] [deplot.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/deplot.md) #28302 - [ ] [deta.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/deta.md) #28302 - [ ] [detr.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/detr.md) #28302 - [ ] [dialogpt.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/dialogpt.md) - [ ] [dinat.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/dinat.md) - [ ] [dinov2.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/dinov2.md) - [ ] [distilbert.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/distilbert.md) Keep on adding more as you go 🔥
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28301/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28300/comments
https://api.github.com/repos/huggingface/transformers/issues/28300/events
https://github.com/huggingface/transformers/issues/28300
2,061,626,878
I_kwDOCUB6oc564e3-
28,300
[Question] Why is there a big gap between the evaluation step and the actual inference test?
{ "login": "daehuikim", "id": 40377750, "node_id": "MDQ6VXNlcjQwMzc3NzUw", "avatar_url": "https://avatars.githubusercontent.com/u/40377750?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daehuikim", "html_url": "https://github.com/daehuikim", "followers_url": "https://api.github.com/users/daehuikim/followers", "following_url": "https://api.github.com/users/daehuikim/following{/other_user}", "gists_url": "https://api.github.com/users/daehuikim/gists{/gist_id}", "starred_url": "https://api.github.com/users/daehuikim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daehuikim/subscriptions", "organizations_url": "https://api.github.com/users/daehuikim/orgs", "repos_url": "https://api.github.com/users/daehuikim/repos", "events_url": "https://api.github.com/users/daehuikim/events{/privacy}", "received_events_url": "https://api.github.com/users/daehuikim/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!" ]
1,704
1,706
null
NONE
null
Hello! Sorry for the noob question. I am doing LoRA fine-tuning based on the llama-2-13B model, and I am tuning my model using the "BLEU" metric. During training, my BLEU score is incredibly high, much higher than I expected. For example, the evaluation BLEU for my model is about 98, which is unbelievable, as shown in the graph. ![image](https://github.com/huggingface/transformers/assets/40377750/46d91e10-aab1-4494-961a-59244caa26a5) So I suspected my code at first; however, the code seems to have nothing wrong with respect to the BLEU metric, as shown below. (The ```<SOO> ``` token marks the start of the output, so that input prompts can be excluded when calculating the BLEU score during evaluation.) ```
import area

def compute_metrics_bleu(pred):
    references = pred.label_ids
    generated_texts = pred.predictions

    bleu_scores = []
    for reference, generated in zip(references, generated_texts):
        generated = np.where(generated != -100, generated, tokenizer.pad_token_id)
        generated_text = tokenizer.decode(generated, skip_special_tokens=False)
        generated_text = generated_text.split("<SOO>")[-1]

        reference = np.where(reference != -100, reference, tokenizer.pad_token_id)
        reference_text = tokenizer.decode(reference, skip_special_tokens=False)
        reference_text = reference_text.split("<SOO>")[-1]

        bleu_score = sentence_bleu([reference_text], generated_text)
        bleu_scores.append(bleu_score)

    return {
        'bleu': sum(bleu_scores) / len(bleu_scores)
    }

(skip codes)

trainer = SFTTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=valid_dataset,  # Pass validation dataset here
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    tokenizer=tokenizer,
    args=training_arguments,
    packing=packing,
    compute_metrics=compute_metrics_bleu,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
    neftune_noise_alpha=5
)
``` So I checked the code in ```trainer.py``` to find out why the predictions produced during the evaluation loop score such a high BLEU here. https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/trainer.py#L3381 When I run my own test, I use code like the one below. ```
model = AutoModelForCausalLM.from_pretrained(params same as training)
tokenizer = AutoTokenizer.from_pretrained(params same as training)
inputs = tokenizer(prompt_template.format(prompt=input), return_tensors="pt").input_ids.to("cuda:0")
result = model.generate(inputs, generation arguments)
``` I tried adjusting many hyperparameters of ```model.generate()```, including simply overriding ```generation_config.json``` on the model, but I failed to get such a high BLEU score in the test (the score was around 30~40 even when I tested on the same dataset). What could be the reason for this situation? Thanks for reading my question!
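For context (not part of the original question): the Trainer's evaluation loop scores predictions derived from teacher-forced logits, where every position is conditioned on the gold prefix, while `model.generate()` is fully autoregressive, which largely explains the 98 vs. 30~40 BLEU gap. A hedged sketch of scoring free-running generations instead, assuming `sentence_bleu` is NLTK's and the dataset field names below are hypothetical:

```python
import numpy as np
from nltk.translate.bleu_score import sentence_bleu


def bleu_from_generate(model, tokenizer, examples, prompt_template, max_new_tokens=256):
    # `examples` is assumed to be an iterable of dicts with hypothetical "prompt"/"reference" fields.
    scores = []
    for example in examples:
        inputs = tokenizer(prompt_template.format(prompt=example["prompt"]),
                           return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=max_new_tokens)
        # Drop the prompt tokens so only the generated continuation is scored.
        generated = tokenizer.decode(output[0][inputs.input_ids.shape[1]:],
                                     skip_special_tokens=True)
        scores.append(sentence_bleu([example["reference"].split()], generated.split()))
    return float(np.mean(scores))
```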
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28300/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28299/comments
https://api.github.com/repos/huggingface/transformers/issues/28299/events
https://github.com/huggingface/transformers/pull/28299
2,061,608,325
PR_kwDOCUB6oc5jAr-1
28,299
Remove shell=True from subprocess.Popen to Mitigate Security Risk
{ "login": "avimanyu786", "id": 28894462, "node_id": "MDQ6VXNlcjI4ODk0NDYy", "avatar_url": "https://avatars.githubusercontent.com/u/28894462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avimanyu786", "html_url": "https://github.com/avimanyu786", "followers_url": "https://api.github.com/users/avimanyu786/followers", "following_url": "https://api.github.com/users/avimanyu786/following{/other_user}", "gists_url": "https://api.github.com/users/avimanyu786/gists{/gist_id}", "starred_url": "https://api.github.com/users/avimanyu786/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avimanyu786/subscriptions", "organizations_url": "https://api.github.com/users/avimanyu786/orgs", "repos_url": "https://api.github.com/users/avimanyu786/repos", "events_url": "https://api.github.com/users/avimanyu786/events{/privacy}", "received_events_url": "https://api.github.com/users/avimanyu786/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I do agree this is better to remove shell access to this command line. I don't think however this is critical by any means since this is already a CLI utility which we do not leverage directly.", "@michaelbenayoun @amyeroberts \r\n", "@Narsil @michaelbenayoun @amyeroberts Good to know it's not critical. Thanks a lot for your collective feedback and approval!", "Thanks so much for the merge! I must admit, security isn't my primary domain 😅, but this experience piqued my curiosity about the potential security implications in different use cases of the `transformers` library.\r\n\r\nI believe this information could be beneficial for our community, especially for those implementing `transformers` in varied environments. Therefore, I'd like to share a summary of these insights for knowledge sharing purposes only:\r\n\r\nThrough my additional research, including a discussion with GPT4, I've gained a deeper understanding of the potential security implications of using `shell=True` in `subprocess.Popen` within the transformers library. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. This insight highlights the importance of context in evaluating security risks and underscores the need for careful consideration of security best practices, especially in diverse deployment scenarios.\r\n\r\nI'm looking forward to any further insights or thoughts from the community on this matter. It's good to have such conversations going to ensure the security and robustness of the library!\r\n\r\nSincere regards,\r\nAvi" ]
1,704
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? This PR resolves a critical security issue in the transformers package by removing the `shell=True` argument from `subprocess.Popen` calls. This update is in response to a security vulnerability flagged by the Bandit static analysis tool, which could potentially allow for execution of arbitrary code. Despite previous attempts to communicate this issue via email with no response [as suggested](https://github.com/huggingface/transformers/security/policy), this PR is being submitted to ensure the safety and integrity of the transformers package. The vulnerability is documented in Bandit's official recommendations (https://bandit.readthedocs.io/en/1.7.6/plugins/b602_subprocess_popen_with_shell_equals_true.html). By adopting this change, we adhere to best practices for secure subprocess management in Python. ## Before submitting - [x] This PR fixes a critical security issue and does not introduce new dependencies. - [x] I have referred to the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section. - [x] This issue was not previously discussed in a Github issue or forum due to the lack of response to email communications. - [x] I have tested the stability and security of the changes. ## Who can review? Given the security nature of this PR, I would appreciate a prompt review from @Narsil (Library pipelines). I kindly request your review of this security update at your earliest convenience to help ensure the ongoing security and reliability of the transformers library for all users. Thank you for your time and consideration.
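For context, the pattern Bandit B602 flags and the usual remediation look roughly like this; it is a generic illustration, not the exact call site touched by this PR:

```python
import subprocess

url = "https://example.com/archive.tar.gz"  # illustrative, potentially untrusted value

# Flagged pattern (shell injection risk if `url` is attacker-controlled):
#     subprocess.Popen(f"wget {url}", shell=True)

# Safer equivalent: pass an argument list so Popen execs the program directly, with no shell in between.
proc = subprocess.Popen(["wget", url])
proc.wait()
```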
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28299", "html_url": "https://github.com/huggingface/transformers/pull/28299", "diff_url": "https://github.com/huggingface/transformers/pull/28299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28299.patch", "merged_at": 1704724408000 }
https://api.github.com/repos/huggingface/transformers/issues/28298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28298/comments
https://api.github.com/repos/huggingface/transformers/issues/28298/events
https://github.com/huggingface/transformers/issues/28298
2,061,560,374
I_kwDOCUB6oc564Oo2
28,298
Conv1d would be initialized to all zeros in a pretrained_model
{ "login": "Hannibal046", "id": 38466901, "node_id": "MDQ6VXNlcjM4NDY2OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hannibal046", "html_url": "https://github.com/Hannibal046", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "repos_url": "https://api.github.com/users/Hannibal046/repos", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It seems that the problem was in `from_pretrained`. When I init the model with config, it works fine.\r\n\r\n![image](https://github.com/huggingface/transformers/assets/38466901/c1324715-9e6f-4b2c-bdab-fc32a50a3e9a)\r\n", "OK, I see. It is a feature for LLM loading. Using the following would solve the problem:\r\n```python\r\nMyBERT.from_pretrained(\"bert-base-uncased\",_fast_init=False)\r\n```", "Hey! It's more like[ the init scheme](https://github.com/huggingface/transformers/blob/b1292bca6923cfbc9cb3f70cb55df57e4e17e630/src/transformers/models/bert/modeling_bert.py#L744)\r\n\r\nneeds to be updated to your needs! `_fast_init` will skip normal inits and `_init_weights` is the only function that will be called for this. ", "Got it! Appreciate" ]
1,704
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-1050-azure-x86_64-with-glibc2.17 - Python version: 3.8.18 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import BertModel import torch class MyBERT(BertModel): def __init__(self,config): super().__init__(config) self.conv = torch.nn.Conv1d( in_channels=768, out_channels=768, kernel_size = 16, ) self.post_init() mybert = MyBERT.from_pretrained("bert-base-uncased") print(mybert.conv.weight.data[0]) ``` ![image](https://github.com/huggingface/transformers/assets/38466901/584e2717-fd9a-48fe-af69-02f2c40d67cc) ![image](https://github.com/huggingface/transformers/assets/38466901/5976396a-0852-4a2a-a501-6bed59b6d64a) ### Expected behavior The conv layer would be initialized properly.
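Following the maintainer's suggestion in the comments above, a hedged sketch of extending the init scheme instead of disabling fast init: overriding `_init_weights` so the extra `Conv1d` is covered when `from_pretrained` initializes the weights that are missing from the checkpoint:

```python
import torch
from transformers import BertModel


class MyBERT(BertModel):
    def __init__(self, config):
        super().__init__(config)
        self.conv = torch.nn.Conv1d(in_channels=768, out_channels=768, kernel_size=16)
        self.post_init()

    def _init_weights(self, module):
        # Sketch: handle the extra Conv1d explicitly, defer everything else to BERT's scheme.
        if isinstance(module, torch.nn.Conv1d):
            module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
            if module.bias is not None:
                module.bias.data.zero_()
        else:
            super()._init_weights(module)


mybert = MyBERT.from_pretrained("bert-base-uncased")
print(mybert.conv.weight.data[0])  # should no longer be all zeros
```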
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28298/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28297/comments
https://api.github.com/repos/huggingface/transformers/issues/28297/events
https://github.com/huggingface/transformers/pull/28297
2,061,352,208
PR_kwDOCUB6oc5i_0ID
28,297
Support saving only PEFT adapter in checkpoints when using PEFT + FSDP
{ "login": "AjayP13", "id": 5404177, "node_id": "MDQ6VXNlcjU0MDQxNzc=", "avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AjayP13", "html_url": "https://github.com/AjayP13", "followers_url": "https://api.github.com/users/AjayP13/followers", "following_url": "https://api.github.com/users/AjayP13/following{/other_user}", "gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}", "starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions", "organizations_url": "https://api.github.com/users/AjayP13/orgs", "repos_url": "https://api.github.com/users/AjayP13/repos", "events_url": "https://api.github.com/users/AjayP13/events{/privacy}", "received_events_url": "https://api.github.com/users/AjayP13/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "@pacman100 Thanks for the insight on this.\r\n\r\nEdit: @pacman100 's right, I just tested this PR again:\r\n\r\n- PEFT + FSDP + Load Best Checkpoint at End: does work\r\n- PEFT + FSDP + Resume From Checkpoint: does not work", "@pacman100 @younesbelkada - The PR is ready for review again and this time it supports FSDP + PEFT + Resuming. It's now reliant on a PR I made over in `accelerate`: https://github.com/huggingface/accelerate/pull/2321.\r\n\r\n@younesbelkada I was able to rent a multi-GPU machine for development and testing, I've attached a test [branch_test.zip](https://github.com/huggingface/transformers/files/13876998/branch_test.zip) file, that may help create an actual test case for the CI system. You can see the `pytorch_model_fsdp.bin` is only 12.15MB now in the test output log (vs. the size of the full model weights).\r\n\r\nHere is the test information:\r\n\r\n**Summary of test:**\r\n\r\n0. Pull in changes from `transformers` PR and `accelerate` PR via `requirements.txt`\r\n1. Train for 99 epochs\r\n2. Resume from Checkpoint\r\n3. Train for 1 Additional Epoch \r\n4. Load Best Checkpoint\r\n5. Test Final Trained Adapter Weights\r\n\r\n<details>\r\n <summary>Test Dataset</summary>\r\n \r\n ```python\r\ndataset = Dataset.from_dict({\r\n \"text\": [\r\n \"Input: A Output: 14\",\r\n \"Input: B Output: 52\",\r\n \"Input: C Output: 83\",\r\n \"Input: D Output: 57\",\r\n \"Input: E Output: 38\",\r\n \"Input: F Output: 54\",\r\n ]\r\n})\r\n ```\r\n</details>\r\n\r\n<details>\r\n <summary>Output Log of Training / Test</summary>\r\n \r\n ```\r\nInstalling requirements.txt...\r\n========= preparing dataset ============\r\n========= loading models + tokenizer ============\r\n========= preparing for training ============\r\n========= training ============\r\n{'loss': 6.8567, 'learning_rate': 0.000494949494949495, 'epoch': 1.0}\r\n{'eval_loss': 5.881069183349609, 'eval_runtime': 0.06, 'eval_samples_per_second': 99.934, 'eval_steps_per_second': 16.656, 'epoch': 1.0}\r\n{'loss': 5.986, 'learning_rate': 0.0004898989898989899, 'epoch': 2.0}\r\n{'eval_loss': 5.117019176483154, 'eval_runtime': 0.0795, 'eval_samples_per_second': 75.427, 'eval_steps_per_second': 12.571, 'epoch': 2.0}\r\n{'loss': 5.2125, 'learning_rate': 0.0004848484848484849, 'epoch': 3.0}\r\n{'eval_loss': 4.231423854827881, 'eval_runtime': 0.0595, 'eval_samples_per_second': 100.84, 'eval_steps_per_second': 16.807, 'epoch': 3.0}\r\n{'loss': 4.6937, 'learning_rate': 0.0004797979797979798, 'epoch': 4.0}\r\n{'eval_loss': 3.4078128337860107, 'eval_runtime': 0.0651, 'eval_samples_per_second': 92.211, 'eval_steps_per_second': 15.369, 'epoch': 4.0}\r\n{'loss': 4.2886, 'learning_rate': 0.00047474747474747476, 'epoch': 5.0}\r\n{'eval_loss': 2.664665937423706, 'eval_runtime': 0.0593, 'eval_samples_per_second': 101.123, 'eval_steps_per_second': 16.854, 'epoch': 5.0}\r\n{'loss': 2.9141, 'learning_rate': 0.0004696969696969697, 'epoch': 6.0}\r\n{'eval_loss': 2.032987117767334, 'eval_runtime': 0.0612, 'eval_samples_per_second': 97.965, 'eval_steps_per_second': 16.328, 'epoch': 6.0}\r\n{'loss': 2.2988, 'learning_rate': 0.0004646464646464646, 'epoch': 7.0}\r\n{'eval_loss': 1.591854453086853, 'eval_runtime': 0.0731, 'eval_samples_per_second': 82.099, 'eval_steps_per_second': 13.683, 'epoch': 7.0}\r\n{'loss': 1.8455, 'learning_rate': 0.00045959595959595964, 'epoch': 8.0}\r\n{'eval_loss': 1.2650986909866333, 'eval_runtime': 0.0669, 'eval_samples_per_second': 89.719, 'eval_steps_per_second': 14.953, 'epoch': 8.0}\r\n{'loss': 
1.7772, 'learning_rate': 0.00045454545454545455, 'epoch': 9.0}\r\n{'eval_loss': 1.072060227394104, 'eval_runtime': 0.0671, 'eval_samples_per_second': 89.479, 'eval_steps_per_second': 14.913, 'epoch': 9.0}\r\n{'loss': 1.4851, 'learning_rate': 0.0004494949494949495, 'epoch': 10.0}\r\n{'eval_loss': 0.9297663569450378, 'eval_runtime': 0.0744, 'eval_samples_per_second': 80.613, 'eval_steps_per_second': 13.436, 'epoch': 10.0}\r\n{'loss': 1.4598, 'learning_rate': 0.0004444444444444444, 'epoch': 11.0}\r\n{'eval_loss': 0.8234553933143616, 'eval_runtime': 0.0691, 'eval_samples_per_second': 86.777, 'eval_steps_per_second': 14.463, 'epoch': 11.0}\r\n{'loss': 1.3115, 'learning_rate': 0.0004393939393939394, 'epoch': 12.0}\r\n{'eval_loss': 0.7762313485145569, 'eval_runtime': 0.0601, 'eval_samples_per_second': 99.775, 'eval_steps_per_second': 16.629, 'epoch': 12.0}\r\n{'loss': 0.8095, 'learning_rate': 0.0004343434343434344, 'epoch': 13.0}\r\n{'eval_loss': 0.7559001445770264, 'eval_runtime': 0.0611, 'eval_samples_per_second': 98.177, 'eval_steps_per_second': 16.363, 'epoch': 13.0}\r\n{'loss': 0.8534, 'learning_rate': 0.0004292929292929293, 'epoch': 14.0}\r\n{'eval_loss': 0.749554455280304, 'eval_runtime': 0.0649, 'eval_samples_per_second': 92.517, 'eval_steps_per_second': 15.42, 'epoch': 14.0}\r\n{'loss': 0.8644, 'learning_rate': 0.00042424242424242425, 'epoch': 15.0}\r\n{'eval_loss': 0.7470491528511047, 'eval_runtime': 0.0728, 'eval_samples_per_second': 82.379, 'eval_steps_per_second': 13.73, 'epoch': 15.0}\r\n{'loss': 1.017, 'learning_rate': 0.00041919191919191916, 'epoch': 16.0}\r\n{'eval_loss': 0.7447641491889954, 'eval_runtime': 0.0675, 'eval_samples_per_second': 88.867, 'eval_steps_per_second': 14.811, 'epoch': 16.0}\r\n{'loss': 1.0205, 'learning_rate': 0.0004141414141414142, 'epoch': 17.0}\r\n{'eval_loss': 0.7410345673561096, 'eval_runtime': 0.0605, 'eval_samples_per_second': 99.164, 'eval_steps_per_second': 16.527, 'epoch': 17.0}\r\n{'loss': 1.0018, 'learning_rate': 0.00040909090909090913, 'epoch': 18.0}\r\n{'eval_loss': 0.7377716898918152, 'eval_runtime': 0.0716, 'eval_samples_per_second': 83.775, 'eval_steps_per_second': 13.962, 'epoch': 18.0}\r\n{'loss': 0.7863, 'learning_rate': 0.00040404040404040404, 'epoch': 19.0}\r\n{'eval_loss': 0.7327172756195068, 'eval_runtime': 0.0611, 'eval_samples_per_second': 98.14, 'eval_steps_per_second': 16.357, 'epoch': 19.0}\r\n{'loss': 0.7273, 'learning_rate': 0.000398989898989899, 'epoch': 20.0}\r\n{'eval_loss': 0.7267041206359863, 'eval_runtime': 0.067, 'eval_samples_per_second': 89.487, 'eval_steps_per_second': 14.915, 'epoch': 20.0}\r\n{'loss': 1.0861, 'learning_rate': 0.0003939393939393939, 'epoch': 21.0}\r\n{'eval_loss': 0.7202563285827637, 'eval_runtime': 0.0623, 'eval_samples_per_second': 96.246, 'eval_steps_per_second': 16.041, 'epoch': 21.0}\r\n{'loss': 1.0336, 'learning_rate': 0.0003888888888888889, 'epoch': 22.0}\r\n{'eval_loss': 0.7143564820289612, 'eval_runtime': 0.0599, 'eval_samples_per_second': 100.172, 'eval_steps_per_second': 16.695, 'epoch': 22.0}\r\n{'loss': 0.7619, 'learning_rate': 0.00038383838383838383, 'epoch': 23.0}\r\n{'eval_loss': 0.7085524201393127, 'eval_runtime': 0.0694, 'eval_samples_per_second': 86.504, 'eval_steps_per_second': 14.417, 'epoch': 23.0}\r\n{'loss': 1.0004, 'learning_rate': 0.0003787878787878788, 'epoch': 24.0}\r\n{'eval_loss': 0.704052746295929, 'eval_runtime': 0.0658, 'eval_samples_per_second': 91.239, 'eval_steps_per_second': 15.206, 'epoch': 24.0}\r\n{'loss': 0.6674, 'learning_rate': 0.00037373737373737375, 
'epoch': 25.0}\r\n{'eval_loss': 0.7021067142486572, 'eval_runtime': 0.0721, 'eval_samples_per_second': 83.256, 'eval_steps_per_second': 13.876, 'epoch': 25.0}\r\n{'loss': 1.0418, 'learning_rate': 0.0003686868686868687, 'epoch': 26.0}\r\n{'eval_loss': 0.7003771662712097, 'eval_runtime': 0.0624, 'eval_samples_per_second': 96.231, 'eval_steps_per_second': 16.038, 'epoch': 26.0}\r\n{'loss': 0.7463, 'learning_rate': 0.00036363636363636367, 'epoch': 27.0}\r\n{'eval_loss': 0.6978425979614258, 'eval_runtime': 0.076, 'eval_samples_per_second': 78.91, 'eval_steps_per_second': 13.152, 'epoch': 27.0}\r\n{'loss': 0.9374, 'learning_rate': 0.0003585858585858586, 'epoch': 28.0}\r\n{'eval_loss': 0.6954618096351624, 'eval_runtime': 0.0707, 'eval_samples_per_second': 84.814, 'eval_steps_per_second': 14.136, 'epoch': 28.0}\r\n{'loss': 0.6975, 'learning_rate': 0.00035353535353535354, 'epoch': 29.0}\r\n{'eval_loss': 0.6914337277412415, 'eval_runtime': 0.0755, 'eval_samples_per_second': 79.493, 'eval_steps_per_second': 13.249, 'epoch': 29.0}\r\n{'loss': 0.7536, 'learning_rate': 0.0003484848484848485, 'epoch': 30.0}\r\n{'eval_loss': 0.6854903697967529, 'eval_runtime': 0.0596, 'eval_samples_per_second': 100.615, 'eval_steps_per_second': 16.769, 'epoch': 30.0}\r\n{'loss': 0.758, 'learning_rate': 0.00034343434343434346, 'epoch': 31.0}\r\n{'eval_loss': 0.6792461276054382, 'eval_runtime': 0.0597, 'eval_samples_per_second': 100.564, 'eval_steps_per_second': 16.761, 'epoch': 31.0}\r\n{'loss': 0.8828, 'learning_rate': 0.0003383838383838384, 'epoch': 32.0}\r\n{'eval_loss': 0.6729412078857422, 'eval_runtime': 0.0691, 'eval_samples_per_second': 86.891, 'eval_steps_per_second': 14.482, 'epoch': 32.0}\r\n{'loss': 0.9425, 'learning_rate': 0.0003333333333333333, 'epoch': 33.0}\r\n{'eval_loss': 0.6672318577766418, 'eval_runtime': 0.0715, 'eval_samples_per_second': 83.967, 'eval_steps_per_second': 13.994, 'epoch': 33.0}\r\n{'loss': 0.8739, 'learning_rate': 0.0003282828282828283, 'epoch': 34.0}\r\n{'eval_loss': 0.6612274050712585, 'eval_runtime': 0.0691, 'eval_samples_per_second': 86.887, 'eval_steps_per_second': 14.481, 'epoch': 34.0}\r\n{'loss': 0.6459, 'learning_rate': 0.00032323232323232324, 'epoch': 35.0}\r\n{'eval_loss': 0.6533860564231873, 'eval_runtime': 0.069, 'eval_samples_per_second': 86.928, 'eval_steps_per_second': 14.488, 'epoch': 35.0}\r\n{'loss': 0.71, 'learning_rate': 0.0003181818181818182, 'epoch': 36.0}\r\n{'eval_loss': 0.6435515284538269, 'eval_runtime': 0.0622, 'eval_samples_per_second': 96.408, 'eval_steps_per_second': 16.068, 'epoch': 36.0}\r\n{'loss': 0.6217, 'learning_rate': 0.00031313131313131316, 'epoch': 37.0}\r\n{'eval_loss': 0.6283969283103943, 'eval_runtime': 0.0641, 'eval_samples_per_second': 93.673, 'eval_steps_per_second': 15.612, 'epoch': 37.0}\r\n{'loss': 0.8286, 'learning_rate': 0.00030808080808080807, 'epoch': 38.0}\r\n{'eval_loss': 0.6098210215568542, 'eval_runtime': 0.0664, 'eval_samples_per_second': 90.421, 'eval_steps_per_second': 15.07, 'epoch': 38.0}\r\n{'loss': 0.8709, 'learning_rate': 0.00030303030303030303, 'epoch': 39.0}\r\n{'eval_loss': 0.5880406498908997, 'eval_runtime': 0.0605, 'eval_samples_per_second': 99.113, 'eval_steps_per_second': 16.519, 'epoch': 39.0}\r\n{'loss': 0.6882, 'learning_rate': 0.00029797979797979794, 'epoch': 40.0}\r\n{'eval_loss': 0.5603983402252197, 'eval_runtime': 0.0663, 'eval_samples_per_second': 90.523, 'eval_steps_per_second': 15.087, 'epoch': 40.0}\r\n{'loss': 0.8101, 'learning_rate': 0.00029292929292929295, 'epoch': 41.0}\r\n{'eval_loss': 
0.535556972026825, 'eval_runtime': 0.0738, 'eval_samples_per_second': 81.351, 'eval_steps_per_second': 13.558, 'epoch': 41.0}\r\n{'loss': 0.988, 'learning_rate': 0.0002878787878787879, 'epoch': 42.0}\r\n{'eval_loss': 0.5133206248283386, 'eval_runtime': 0.0652, 'eval_samples_per_second': 91.956, 'eval_steps_per_second': 15.326, 'epoch': 42.0}\r\n{'loss': 0.7791, 'learning_rate': 0.0002828282828282828, 'epoch': 43.0}\r\n{'eval_loss': 0.49268361926078796, 'eval_runtime': 0.0677, 'eval_samples_per_second': 88.604, 'eval_steps_per_second': 14.767, 'epoch': 43.0}\r\n{'loss': 0.5676, 'learning_rate': 0.0002777777777777778, 'epoch': 44.0}\r\n{'eval_loss': 0.466182678937912, 'eval_runtime': 0.0608, 'eval_samples_per_second': 98.746, 'eval_steps_per_second': 16.458, 'epoch': 44.0}\r\n{'loss': 0.616, 'learning_rate': 0.00027272727272727274, 'epoch': 45.0}\r\n{'eval_loss': 0.4505433738231659, 'eval_runtime': 0.0801, 'eval_samples_per_second': 74.905, 'eval_steps_per_second': 12.484, 'epoch': 45.0}\r\n{'loss': 0.5265, 'learning_rate': 0.0002676767676767677, 'epoch': 46.0}\r\n{'eval_loss': 0.43705224990844727, 'eval_runtime': 0.0742, 'eval_samples_per_second': 80.819, 'eval_steps_per_second': 13.47, 'epoch': 46.0}\r\n{'loss': 0.5378, 'learning_rate': 0.00026262626262626266, 'epoch': 47.0}\r\n{'eval_loss': 0.42776837944984436, 'eval_runtime': 0.0802, 'eval_samples_per_second': 74.852, 'eval_steps_per_second': 12.475, 'epoch': 47.0}\r\n{'loss': 0.5289, 'learning_rate': 0.00025757575757575756, 'epoch': 48.0}\r\n{'eval_loss': 0.41791656613349915, 'eval_runtime': 0.0617, 'eval_samples_per_second': 97.202, 'eval_steps_per_second': 16.2, 'epoch': 48.0}\r\n{'loss': 0.6216, 'learning_rate': 0.0002525252525252525, 'epoch': 49.0}\r\n{'eval_loss': 0.4046964645385742, 'eval_runtime': 0.0597, 'eval_samples_per_second': 100.523, 'eval_steps_per_second': 16.754, 'epoch': 49.0}\r\n{'loss': 0.4525, 'learning_rate': 0.0002474747474747475, 'epoch': 50.0}\r\n{'eval_loss': 0.3916676938533783, 'eval_runtime': 0.0599, 'eval_samples_per_second': 100.174, 'eval_steps_per_second': 16.696, 'epoch': 50.0}\r\n{'loss': 0.4181, 'learning_rate': 0.00024242424242424245, 'epoch': 51.0}\r\n{'eval_loss': 0.38267311453819275, 'eval_runtime': 0.0742, 'eval_samples_per_second': 80.905, 'eval_steps_per_second': 13.484, 'epoch': 51.0}\r\n{'loss': 0.3541, 'learning_rate': 0.00023737373737373738, 'epoch': 52.0}\r\n{'eval_loss': 0.3767699897289276, 'eval_runtime': 0.0595, 'eval_samples_per_second': 100.789, 'eval_steps_per_second': 16.798, 'epoch': 52.0}\r\n{'loss': 0.5992, 'learning_rate': 0.0002323232323232323, 'epoch': 53.0}\r\n{'eval_loss': 0.3728148639202118, 'eval_runtime': 0.0597, 'eval_samples_per_second': 100.515, 'eval_steps_per_second': 16.752, 'epoch': 53.0}\r\n{'loss': 0.614, 'learning_rate': 0.00022727272727272727, 'epoch': 54.0}\r\n{'eval_loss': 0.3699725568294525, 'eval_runtime': 0.0667, 'eval_samples_per_second': 90.017, 'eval_steps_per_second': 15.003, 'epoch': 54.0}\r\n{'loss': 0.4125, 'learning_rate': 0.0002222222222222222, 'epoch': 55.0}\r\n{'eval_loss': 0.3682810962200165, 'eval_runtime': 0.0692, 'eval_samples_per_second': 86.711, 'eval_steps_per_second': 14.452, 'epoch': 55.0}\r\n{'loss': 0.4785, 'learning_rate': 0.0002171717171717172, 'epoch': 56.0}\r\n{'eval_loss': 0.3675954341888428, 'eval_runtime': 0.0728, 'eval_samples_per_second': 82.372, 'eval_steps_per_second': 13.729, 'epoch': 56.0}\r\n{'loss': 0.5642, 'learning_rate': 0.00021212121212121213, 'epoch': 57.0}\r\n{'eval_loss': 0.3668169677257538, 'eval_runtime': 
0.0664, 'eval_samples_per_second': 90.378, 'eval_steps_per_second': 15.063, 'epoch': 57.0}\r\n{'loss': 0.3966, 'learning_rate': 0.0002070707070707071, 'epoch': 58.0}\r\n{'eval_loss': 0.3652132451534271, 'eval_runtime': 0.0612, 'eval_samples_per_second': 98.002, 'eval_steps_per_second': 16.334, 'epoch': 58.0}\r\n{'loss': 0.539, 'learning_rate': 0.00020202020202020202, 'epoch': 59.0}\r\n{'eval_loss': 0.36377087235450745, 'eval_runtime': 0.0622, 'eval_samples_per_second': 96.52, 'eval_steps_per_second': 16.087, 'epoch': 59.0}\r\n{'loss': 0.5168, 'learning_rate': 0.00019696969696969695, 'epoch': 60.0}\r\n{'eval_loss': 0.3619898557662964, 'eval_runtime': 0.062, 'eval_samples_per_second': 96.847, 'eval_steps_per_second': 16.141, 'epoch': 60.0}\r\n{'loss': 0.655, 'learning_rate': 0.00019191919191919191, 'epoch': 61.0}\r\n{'eval_loss': 0.36089444160461426, 'eval_runtime': 0.0609, 'eval_samples_per_second': 98.531, 'eval_steps_per_second': 16.422, 'epoch': 61.0}\r\n{'loss': 0.528, 'learning_rate': 0.00018686868686868687, 'epoch': 62.0}\r\n{'eval_loss': 0.36035969853401184, 'eval_runtime': 0.0601, 'eval_samples_per_second': 99.776, 'eval_steps_per_second': 16.629, 'epoch': 62.0}\r\n{'loss': 0.3566, 'learning_rate': 0.00018181818181818183, 'epoch': 63.0}\r\n{'eval_loss': 0.3601444661617279, 'eval_runtime': 0.0758, 'eval_samples_per_second': 79.129, 'eval_steps_per_second': 13.188, 'epoch': 63.0}\r\n{'loss': 0.3974, 'learning_rate': 0.00017676767676767677, 'epoch': 64.0}\r\n{'eval_loss': 0.36019062995910645, 'eval_runtime': 0.0608, 'eval_samples_per_second': 98.757, 'eval_steps_per_second': 16.459, 'epoch': 64.0}\r\n{'loss': 0.3913, 'learning_rate': 0.00017171717171717173, 'epoch': 65.0}\r\n{'eval_loss': 0.36041781306266785, 'eval_runtime': 0.0647, 'eval_samples_per_second': 92.715, 'eval_steps_per_second': 15.453, 'epoch': 65.0}\r\n{'loss': 0.4687, 'learning_rate': 0.00016666666666666666, 'epoch': 66.0}\r\n{'eval_loss': 0.36066195368766785, 'eval_runtime': 0.0616, 'eval_samples_per_second': 97.351, 'eval_steps_per_second': 16.225, 'epoch': 66.0}\r\n{'loss': 0.4079, 'learning_rate': 0.00016161616161616162, 'epoch': 67.0}\r\n{'eval_loss': 0.36099013686180115, 'eval_runtime': 0.0655, 'eval_samples_per_second': 91.608, 'eval_steps_per_second': 15.268, 'epoch': 67.0}\r\n{'loss': 0.3759, 'learning_rate': 0.00015656565656565658, 'epoch': 68.0}\r\n{'eval_loss': 0.3616194427013397, 'eval_runtime': 0.0769, 'eval_samples_per_second': 78.04, 'eval_steps_per_second': 13.007, 'epoch': 68.0}\r\n{'loss': 0.4366, 'learning_rate': 0.00015151515151515152, 'epoch': 69.0}\r\n{'eval_loss': 0.3618600070476532, 'eval_runtime': 0.0631, 'eval_samples_per_second': 95.107, 'eval_steps_per_second': 15.851, 'epoch': 69.0}\r\n{'loss': 0.4562, 'learning_rate': 0.00014646464646464648, 'epoch': 70.0}\r\n{'eval_loss': 0.36202383041381836, 'eval_runtime': 0.0693, 'eval_samples_per_second': 86.527, 'eval_steps_per_second': 14.421, 'epoch': 70.0}\r\n{'loss': 0.4306, 'learning_rate': 0.0001414141414141414, 'epoch': 71.0}\r\n{'eval_loss': 0.3623146116733551, 'eval_runtime': 0.0651, 'eval_samples_per_second': 92.107, 'eval_steps_per_second': 15.351, 'epoch': 71.0}\r\n{'loss': 0.4461, 'learning_rate': 0.00013636363636363637, 'epoch': 72.0}\r\n{'eval_loss': 0.36247682571411133, 'eval_runtime': 0.0602, 'eval_samples_per_second': 99.713, 'eval_steps_per_second': 16.619, 'epoch': 72.0}\r\n{'loss': 0.5163, 'learning_rate': 0.00013131313131313133, 'epoch': 73.0}\r\n{'eval_loss': 0.3626745045185089, 'eval_runtime': 0.0714, 
'eval_samples_per_second': 83.979, 'eval_steps_per_second': 13.996, 'epoch': 73.0}\r\n{'loss': 0.4319, 'learning_rate': 0.00012626262626262626, 'epoch': 74.0}\r\n{'eval_loss': 0.36249294877052307, 'eval_runtime': 0.0643, 'eval_samples_per_second': 93.381, 'eval_steps_per_second': 15.563, 'epoch': 74.0}\r\n{'loss': 0.3901, 'learning_rate': 0.00012121212121212122, 'epoch': 75.0}\r\n{'eval_loss': 0.3623819351196289, 'eval_runtime': 0.082, 'eval_samples_per_second': 73.159, 'eval_steps_per_second': 12.193, 'epoch': 75.0}\r\n{'loss': 0.3908, 'learning_rate': 0.00011616161616161616, 'epoch': 76.0}\r\n{'eval_loss': 0.3622519075870514, 'eval_runtime': 0.0729, 'eval_samples_per_second': 82.254, 'eval_steps_per_second': 13.709, 'epoch': 76.0}\r\n{'loss': 0.3609, 'learning_rate': 0.0001111111111111111, 'epoch': 77.0}\r\n{'eval_loss': 0.3618755042552948, 'eval_runtime': 0.0659, 'eval_samples_per_second': 91.017, 'eval_steps_per_second': 15.169, 'epoch': 77.0}\r\n{'loss': 0.3959, 'learning_rate': 0.00010606060606060606, 'epoch': 78.0}\r\n{'eval_loss': 0.3615656793117523, 'eval_runtime': 0.0675, 'eval_samples_per_second': 88.871, 'eval_steps_per_second': 14.812, 'epoch': 78.0}\r\n{'loss': 0.4189, 'learning_rate': 0.00010101010101010101, 'epoch': 79.0}\r\n{'eval_loss': 0.3611675798892975, 'eval_runtime': 0.0701, 'eval_samples_per_second': 85.573, 'eval_steps_per_second': 14.262, 'epoch': 79.0}\r\n{'loss': 0.4515, 'learning_rate': 9.595959595959596e-05, 'epoch': 80.0}\r\n{'eval_loss': 0.36078181862831116, 'eval_runtime': 0.0745, 'eval_samples_per_second': 80.54, 'eval_steps_per_second': 13.423, 'epoch': 80.0}\r\n{'loss': 0.3864, 'learning_rate': 9.090909090909092e-05, 'epoch': 81.0}\r\n{'eval_loss': 0.3604678213596344, 'eval_runtime': 0.0645, 'eval_samples_per_second': 93.02, 'eval_steps_per_second': 15.503, 'epoch': 81.0}\r\n{'loss': 0.3414, 'learning_rate': 8.585858585858586e-05, 'epoch': 82.0}\r\n{'eval_loss': 0.36031660437583923, 'eval_runtime': 0.0598, 'eval_samples_per_second': 100.369, 'eval_steps_per_second': 16.728, 'epoch': 82.0}\r\n{'loss': 0.4023, 'learning_rate': 8.080808080808081e-05, 'epoch': 83.0}\r\n{'eval_loss': 0.3602430522441864, 'eval_runtime': 0.0759, 'eval_samples_per_second': 79.061, 'eval_steps_per_second': 13.177, 'epoch': 83.0}\r\n{'loss': 0.3382, 'learning_rate': 7.575757575757576e-05, 'epoch': 84.0}\r\n{'eval_loss': 0.36019110679626465, 'eval_runtime': 0.0668, 'eval_samples_per_second': 89.801, 'eval_steps_per_second': 14.967, 'epoch': 84.0}\r\n{'loss': 0.392, 'learning_rate': 7.07070707070707e-05, 'epoch': 85.0}\r\n{'eval_loss': 0.360186904668808, 'eval_runtime': 0.0853, 'eval_samples_per_second': 70.381, 'eval_steps_per_second': 11.73, 'epoch': 85.0}\r\n{'loss': 0.5104, 'learning_rate': 6.565656565656566e-05, 'epoch': 86.0}\r\n{'eval_loss': 0.36022713780403137, 'eval_runtime': 0.0673, 'eval_samples_per_second': 89.159, 'eval_steps_per_second': 14.86, 'epoch': 86.0}\r\n{'loss': 0.3643, 'learning_rate': 6.060606060606061e-05, 'epoch': 87.0}\r\n{'eval_loss': 0.3602248728275299, 'eval_runtime': 0.0629, 'eval_samples_per_second': 95.465, 'eval_steps_per_second': 15.911, 'epoch': 87.0}\r\n{'loss': 0.3615, 'learning_rate': 5.555555555555555e-05, 'epoch': 88.0}\r\n{'eval_loss': 0.3601932227611542, 'eval_runtime': 0.066, 'eval_samples_per_second': 90.975, 'eval_steps_per_second': 15.163, 'epoch': 88.0}\r\n{'loss': 0.396, 'learning_rate': 5.0505050505050505e-05, 'epoch': 89.0}\r\n{'eval_loss': 0.3601706922054291, 'eval_runtime': 0.064, 'eval_samples_per_second': 93.73, 
'eval_steps_per_second': 15.622, 'epoch': 89.0}\r\n{'loss': 0.3589, 'learning_rate': 4.545454545454546e-05, 'epoch': 90.0}\r\n{'eval_loss': 0.3601481020450592, 'eval_runtime': 0.0596, 'eval_samples_per_second': 100.659, 'eval_steps_per_second': 16.777, 'epoch': 90.0}\r\n{'loss': 0.4028, 'learning_rate': 4.0404040404040405e-05, 'epoch': 91.0}\r\n{'eval_loss': 0.3601456880569458, 'eval_runtime': 0.0711, 'eval_samples_per_second': 84.444, 'eval_steps_per_second': 14.074, 'epoch': 91.0}\r\n{'loss': 0.3747, 'learning_rate': 3.535353535353535e-05, 'epoch': 92.0}\r\n{'eval_loss': 0.3601391017436981, 'eval_runtime': 0.0611, 'eval_samples_per_second': 98.232, 'eval_steps_per_second': 16.372, 'epoch': 92.0}\r\n{'loss': 0.3753, 'learning_rate': 3.0303030303030306e-05, 'epoch': 93.0}\r\n{'eval_loss': 0.3601249158382416, 'eval_runtime': 0.0682, 'eval_samples_per_second': 87.988, 'eval_steps_per_second': 14.665, 'epoch': 93.0}\r\n{'loss': 0.397, 'learning_rate': 2.5252525252525253e-05, 'epoch': 94.0}\r\n{'eval_loss': 0.36011412739753723, 'eval_runtime': 0.0629, 'eval_samples_per_second': 95.374, 'eval_steps_per_second': 15.896, 'epoch': 94.0}\r\n{'loss': 0.399, 'learning_rate': 2.0202020202020203e-05, 'epoch': 95.0}\r\n{'eval_loss': 0.36010661721229553, 'eval_runtime': 0.0609, 'eval_samples_per_second': 98.569, 'eval_steps_per_second': 16.428, 'epoch': 95.0}\r\n{'loss': 0.4198, 'learning_rate': 1.5151515151515153e-05, 'epoch': 96.0}\r\n{'eval_loss': 0.3600958287715912, 'eval_runtime': 0.0704, 'eval_samples_per_second': 85.187, 'eval_steps_per_second': 14.198, 'epoch': 96.0}\r\n{'loss': 0.3618, 'learning_rate': 1.0101010101010101e-05, 'epoch': 97.0}\r\n{'eval_loss': 0.3600878417491913, 'eval_runtime': 0.0691, 'eval_samples_per_second': 86.839, 'eval_steps_per_second': 14.473, 'epoch': 97.0}\r\n{'loss': 0.4199, 'learning_rate': 5.050505050505051e-06, 'epoch': 98.0}\r\n{'eval_loss': 0.3600861728191376, 'eval_runtime': 0.0622, 'eval_samples_per_second': 96.43, 'eval_steps_per_second': 16.072, 'epoch': 98.0}\r\n{'loss': 0.3681, 'learning_rate': 0.0, 'epoch': 99.0}\r\n{'eval_loss': 0.3600858747959137, 'eval_runtime': 0.0644, 'eval_samples_per_second': 93.18, 'eval_steps_per_second': 15.53, 'epoch': 99.0}\r\n{'train_runtime': 201.2614, 'train_samples_per_second': 2.951, 'train_steps_per_second': 0.492, 'train_loss': 0.9207682233266156, 'epoch': 99.0}\r\n========= preparing dataset ============\r\n========= loading models + tokenizer ============\r\n========= preparing for training ============\r\n========= resuming ============\r\n{'loss': 0.3515, 'learning_rate': 0.0, 'epoch': 100.0}\r\n{'eval_loss': 0.3600858747959137, 'eval_runtime': 0.0703, 'eval_samples_per_second': 85.379, 'eval_steps_per_second': 14.23, 'epoch': 100.0}\r\n{'train_runtime': 3.0627, 'train_samples_per_second': 195.905, 'train_steps_per_second': 32.651, 'train_loss': 0.0035148251056671144, 'epoch': 100.0}\r\n========= load best adapter at: ./output/checkpoint-99 ============\r\npytorch_model_fsdp.bin size: ./output/checkpoint-99: 12.15 MB\r\n[{'generated_text': 'Input: A Output: 14'}]\r\n[{'generated_text': 'Input: B Output: Output'}]\r\n[{'generated_text': 'Input: C Output: 83'}]\r\n[{'generated_text': 'Input: D Output: 57'}]\r\n[{'generated_text': 'Input: E Output: 38'}]\r\n[{'generated_text': 'Input: F Output: 54'}]\r\n ```\r\n</details>\r\n\r\n", "Hello, Thank you @AjayP13 for reiterating. Do you observe decrease in GPU memory usage with PEFT + FSDP? 
I had to do the following wherein `use_orig_params` needed to be False and the custom auto wrap policy to account for the different modules having trianable and non-trainable parameters\r\n\r\nhttps://github.com/pacman100/DHS-LLM-Workshop/blob/08b3bd5e618c9e258bf06390c78e28764daff273/chat_assistant/training/train.py#L167-L190\r\n![Screenshot 2024-01-18 at 11 52 11 AM](https://github.com/huggingface/transformers/assets/13534540/f1b53be5-cc4a-439b-99ed-c1de7a152bd9)\r\n\r\n\r\n", "> Hello, Thank you @AjayP13 for reiterating. Do you observe decrease in GPU memory usage with PEFT + FSDP? I had to do the following wherein `use_orig_params` needed to be False and the custom auto wrap policy to account for the different modules having trianable and non-trainable parameters\r\n> \r\n> https://github.com/pacman100/DHS-LLM-Workshop/blob/08b3bd5e618c9e258bf06390c78e28764daff273/chat_assistant/training/train.py#L167-L190 ![Screenshot 2024-01-18 at 11 52 11 AM](https://private-user-images.githubusercontent.com/13534540/297642708-f1b53be5-cc4a-439b-99ed-c1de7a152bd9.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MDU1ODg3NjksIm5iZiI6MTcwNTU4ODQ2OSwicGF0aCI6Ii8xMzUzNDU0MC8yOTc2NDI3MDgtZjFiNTNiZTUtY2M0YS00MzliLTk5ZWQtYzFkZTdhMTUyYmQ5LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAxMTglMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMTE4VDE0MzQyOVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPWM3MmZhMGQzYWYzMDQ0Yjc3NzhhMTY1NWFkYzQ2YjZjMTQ5NjdjZDJmOTA1NGMxZGRlOWNkM2JmMzRhOGZlMjgmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.OYGxscxrVVXAjNProYi6tTkHSRkbQXNtjIhJmQLmptc)\r\n\r\nThis does indeed save RAM in my tests:\r\n\r\n**FSDP + PEFT:**\r\n\r\nDevice Memory Usage 0 -- 534.88 MB\r\nDevice Memory Usage 1 -- 524.16 MB\r\n`pytorch_model_fsdp.bin` Size: 12.15 MB\r\n\r\n**FSDP + No PEFT:**\r\nDevice Memory Usage 0 -- 992.93 MB\r\nDevice Memory Usage 1 -- 992.75 MB\r\n`pytorch_model_fsdp.bin` Size: 622.00 MB\r\n\r\nI believe the issue you are referring to was in PyTorch and was possibly fixed in the latest versions according to the discussion here: https://github.com/pytorch/pytorch/issues/91165#issuecomment-1869255170", "> Thank you @AjayP13 for these changes. As replied on the other PR, if this is not meant when using `FULL_STATE_DICT`, the argument `adapter_only` should be set accordingly.\r\n\r\nReplied in the other PR @pacman100, but it should work with `SHARDED_STATE_DICT` as well.", "@AjayP13 There was a recent fix push to main, which resolves the failing tests with `natten` currently on the CI. Rebasing and pushing the changes to trigger a new CI run should resolve this.", "@amyeroberts @younesbelkada Done, I've rebased the changes upon the latest `transformers` main and tests are passing now.", "Thanks @AjayP13 ! Let's merge this PR once https://github.com/huggingface/accelerate/pull/2321 gets merged!", "@amyeroberts @younesbelkada Could one of you please merge this PR now, the `accelerate` PR is now merged. :) " ]
1,704
1,706
1,706
CONTRIBUTOR
null
# What does this PR do? Currently, both the full model weights (`pytorch_model_fsdp.bin`) and the PEFT adapter weights (`adapter_model.safetensors`) are saved when saving checkpoints when PEFT + FSDP is used (leading to unnecessary excessive disk usage and slower training due to saving large files). **These changes ensure only the PEFT adapters are saved/loaded by:** - Using the newly added `adapter_only` parameter on `save_fsdp_model` and `load_fsdp_model`. This will ensure `pytorch_model_fsdp.bin` only contains the PEFT adapter weights v.s. the full model weights. - See the related PR in `accelerate`: https://github.com/huggingface/accelerate/pull/2321 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @pacman100 @ArthurZucker @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28297/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28297", "html_url": "https://github.com/huggingface/transformers/pull/28297", "diff_url": "https://github.com/huggingface/transformers/pull/28297.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28297.patch", "merged_at": 1706548215000 }
https://api.github.com/repos/huggingface/transformers/issues/28296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28296/comments
https://api.github.com/repos/huggingface/transformers/issues/28296/events
https://github.com/huggingface/transformers/issues/28296
2,061,334,547
I_kwDOCUB6oc563XgT
28,296
transformers incompatible with master (head of trunk) tensorflow & keras 3
{ "login": "ekuznetsov139", "id": 12205429, "node_id": "MDQ6VXNlcjEyMjA1NDI5", "avatar_url": "https://avatars.githubusercontent.com/u/12205429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekuznetsov139", "html_url": "https://github.com/ekuznetsov139", "followers_url": "https://api.github.com/users/ekuznetsov139/followers", "following_url": "https://api.github.com/users/ekuznetsov139/following{/other_user}", "gists_url": "https://api.github.com/users/ekuznetsov139/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekuznetsov139/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekuznetsov139/subscriptions", "organizations_url": "https://api.github.com/users/ekuznetsov139/orgs", "repos_url": "https://api.github.com/users/ekuznetsov139/repos", "events_url": "https://api.github.com/users/ekuznetsov139/events{/privacy}", "received_events_url": "https://api.github.com/users/ekuznetsov139/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Here's what needs to be done:\r\n\r\nhttps://github.com/ROCmSoftwarePlatform/transformers/commit/0b3d80de089bde6ad163042916b35bd26d38b434\r\n\r\nSome of this may break keras2 operation, I began adding version checks but have not had time to do it properly. I had to disable jit_compile, because I was getting XLA-related errors and this was an easy way out; I need to investigate and fix that problem as well.\r\n\r\nThis gets me as far as being able to train for at least one epoch. Loss values seem to be off but the model loads and trains and loss goes down with time.", "I'll just keep talking to myself here, nevermind me.\r\n\r\nhttps://github.com/ROCmSoftwarePlatform/transformers/commit/a488708118999418d6fc3640ea2680aaa97ed21d\r\n\r\nIt trains, apparently correctly, on multiple GPUs (using tf.distribute.MirroredStrategy) and with XLA enabled.\r\n\r\nReported loss is multiplied by the number of GPUs, and I can't quite work out why.\r\n\r\nThe bigger issue, however, is that mixed precision is broken:\r\n\r\n```\r\n File \"/usr/local/lib/python3.9/dist-packages/keras/src/backend/tensorflow/trainer.py\", line 105, in one_step_on_data **\r\n return self.train_step(data)\r\n File \"/usr/local/lib/python3.9/dist-packages/transformers/modeling_tf_utils.py\", line 1703, in train_step\r\n self.optimizer.apply_gradients(zip(grads, self.trainable_variables))\r\n File \"/usr/local/lib/python3.9/dist-packages/keras/src/optimizers/base_optimizer.py\", line 206, in apply_gradients\r\n self.apply(grads, trainable_variables)\r\n File \"/usr/local/lib/python3.9/dist-packages/keras/src/optimizers/loss_scale_optimizer.py\", line 183, in apply\r\n ops.cond(finite, handle_finite_grads, handle_non_finite_grads)\r\n File \"/usr/local/lib/python3.9/dist-packages/keras/src/ops/core.py\", line 594, in cond\r\n return Cond()(pred, true_fn, false_fn)\r\n File \"/usr/local/lib/python3.9/dist-packages/keras/src/utils/traceback_utils.py\", line 123, in error_handler\r\n raise e.with_traceback(filtered_tb) from None\r\n File \"/usr/local/lib/python3.9/dist-packages/keras/src/backend/tensorflow/optimizer.py\", line 82, in _internal_apply_gradients\r\n tf.__internal__.distribute.interim.maybe_merge_call(\r\n\r\n RuntimeError: Exception encountered when calling Cond.call().\r\n \r\n `merge_call` called while defining a new graph or a tf.function. This can often happen if the function `fn` passed to `strategy.run()` contains a nested `@tf.function`, and the nested `@tf.function` contains a synchronization point, such as aggregating gradients (e.g, optimizer.apply_gradients), or if the function `fn` uses a control flow statement which contains a synchronization point in the body. Such behaviors are not yet supported. Instead, please avoid nested `tf.function`s or control flow statements that may potentially cross a synchronization boundary, for example, wrap the `fn` passed to `strategy.run` or the entire `strategy.run` inside a `tf.function` or move the control flow out of `fn`. If you are subclassing a `tf.keras.Model`, please avoid decorating overridden methods `test_step` and `train_step` in `tf.function`.\r\n```\r\nGoing to tackle this one today.", "thanks! 
cc @Rocketknight1 ", "Ok, the mixed precision issue from my last post was actually fixed in Keras, and I was only seeing it because I had a somewhat outdated version of Keras3 in the system (2023-10-17 instead of 2023-12-31.)\r\n\r\nThere was another issue with mixed precision which only affected testing (I had it fixed in the GPT-2 pathway, it may affect other models.) Saving was also broken. Here's a patch that fixes everything I found, except loss being multiplied by #GPUs:\r\n\r\nhttps://github.com/huggingface/transformers/commit/4fa126006a2db315cc4c9d8c4606b329292d0b95", "You should open a PR with the patch! 🤗 (linking this issue) ", "Will do once I'm satisfied that I've resolved all the issues.", "Hi @ekuznetsov139, thanks for the investigation here - this looks really good! Just to give you some context, the reason the errors change in the latest `main` version of `transformers` is that I've been working on Keras 3 PRs behind the scenes as well. The biggest one is that we now use proper `build()` methods for all our TF models instead of building them with dummy inputs - this avoids lots of issues related to name hierarchies that changed in Keras 3. You can see some of the PRs here:\r\n\r\n- #27794 \r\n- #28046\r\n- #28081\r\n- #28146\r\n\r\nI think the plan from here is that in our TensorFlow code, we're going to **completely remove all direct imports of `keras`, and only use `from tensorflow import keras`.** This ties the Keras version to the TF version, although we will still need to support Keras 3 as we understand that the built-in version of Keras is going to be Keras 3 starting from TF 2.16.\r\n\r\nOur primary goal is to ensure that Keras 3 doesn't break backward compatibility for TF code, even if we don't fully support other frameworks with Keras 3. Once backward compatibility is secure, we have plans to fully support Keras 3, which will probably require a community push to make full Keras ports of all of our models that don't use any TensorFlow ops - there's a partial PR at #26224 but it's on hold because of the number of other backward compatibility issues that need to be resolved first.", "Hi @ekuznetsov139 I also meet the same problems when I used tensorflow & keras 3 to load transformers models. Do you fix it?", "Hi @lingluodlut @ekuznetsov139, I believe this is the last PR we need https://github.com/huggingface/transformers/pull/28588\r\n\r\nNote that we still won't have full Keras 3 support, but at least Transformers will continue working when Keras 3 is installed after this PR is merged." ]
1,704
1,706
1,706
NONE
null
I am trying to get transformers working with head-of-trunk tensorflow, which requires keras 3 (I'm using keras-nightly (3.0.3.dev2023123103)), and I'm running into issues that seem to be caused by changes in internal behavior of keras. Neither 4.36.2 nor head-of-trunk transformers work. My test script is simply: ``` from transformers import GPT2TokenizerFast, TFGPT2LMHeadModel import tensorflow as tf tokenizer = GPT2TokenizerFast.from_pretrained("gpt2", mask_token='#') model = TFGPT2LMHeadModel.from_pretrained("gpt2") optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) model.compile(optimizer=optimizer, loss="passthrough", metrics=[]) ``` This works with transformers 4.36.2, tensorflow 2.14, keras 2.14. With head of trunk TF and 4.36.2, I get: ``` model = TFGPT2LMHeadModel.from_pretrained("gpt2") File "/usr/local/lib/python3.9/dist-packages/transformers/modeling_tf_utils.py", line 2919, in from_pretrained model.build() # build the network with dummy inputs File "/usr/local/lib/python3.9/dist-packages/keras/src/layers/layer.py", line 223, in build_wrapper original_build_method(*args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/transformers/modeling_tf_utils.py", line 1134, in build if self.built or call_context().in_call: TypeError: 'NoneType' object is not callable ``` This is evidently because https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/util/keras_deps.py#L40 is no longer being called from keras 3.0.x and so https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/modeling_tf_utils.py#L1133 returns None. I can bypass this, but then I run into a new problem: ``` Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFGPT2LMHeadModel: ['h.4.mlp.c_proj.bias', 'h.10.attn.c_attn.weight', <.....>, 'h.9.attn.c_attn.bias', 'h.0.attn.c_attn.bias'] ``` I did some tracing, and the cause is that, when the code hits https://github.com/huggingface/transformers/blob/v4.36.2/src/transformers/modeling_tf_utils.py#L2905, tf_model.trainable_weights is empty, so transformers can't load any weights into it. I tried moving the block at lines 2915-2919 above the load call, but it has no effect. Then I tried head of trunk transformers. It fails too, but it fails with different symptoms. 
First, there is: ``` File "/usr/local/lib/python3.9/dist-packages/transformers/modeling_tf_utils.py", line 2889, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/usr/local/lib/python3.9/dist-packages/transformers/models/gpt2/modeling_tf_gpt2.py", line 847, in __init__ super().__init__(config, *inputs, **kwargs) File "/usr/local/lib/python3.9/dist-packages/transformers/modeling_tf_utils.py", line 1150, in __init__ self._set_save_spec(self.input_signature) File "/usr/local/lib/python3.9/dist-packages/tensorflow/python/trackable/base.py", line 205, in _method_wrapper result = method(self, *args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/keras/src/backend/tensorflow/layer.py", line 34, in _set_save_spec for key, kwarg in kwargs.items(): AttributeError: 'NoneType' object has no attribute 'items' ``` The problem is that, at https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L1150, you're calling ``` self._set_save_spec(self.input_signature)``` and hitting https://github.com/keras-team/keras/blob/v3.0.2/keras/backend/tensorflow/layer.py#L16 ``` def _set_save_spec(self, inputs, args=None, kwargs=None) ``` which is declared with the default parameter 'kwargs=None', but really expects kwargs to be a dict. The logical workaround is ``` self._set_save_spec(self.input_signature, kwargs={})``` This gets me to problem number 2: ``` File "/usr/local/lib/python3.9/dist-packages/keras/src/layers/layer.py", line 223, in build_wrapper original_build_method(*args, **kwargs) File "/usr/local/lib/python3.9/dist-packages/transformers/modeling_tf_utils.py", line 3217, in build self.weight = self.add_weight( TypeError: add_weight() got multiple values for argument 'shape' ``` This happens because keras has reordered arguments of Layer.add_weight(): https://github.com/keras-team/keras/blob/v2.15.0/keras/engine/base_layer.py#L553 https://github.com/keras-team/keras/blob/v3.0.2/keras/layers/layer.py#L448 so you need to add explicit `name=` in https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L3217 and again in https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L3220. 
Unfortunately, even that does not let me load the model, because there's some kind of a glitch that prevents the TF model from correctly setting its weight names, so I get this error: ``` Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFGPT2LMHeadModel: ['h.1.attn.c_attn.weight', 'h.2.attn.c_proj.weight', 'h.7.attn.c_proj.bias', 'h.0.attn.c_proj.weight', 'h.8.attn.c_attn.weight', 'h.0.mlp.c_proj.weight', 'h.1.ln_2.bias', 'h.10.attn.c_attn.weight', 'h.2.mlp.c_proj.weight', 'h.8.ln_1.weight', 'h.1.ln_2.weight', 'h.6.mlp.c_fc.bias', 'h.10.ln_1.bias', 'h.10.mlp.c_proj.weight', 'h.3.ln_2.bias', 'h.4.ln_1.weight', 'h.5.mlp.c_proj.weight', 'h.3.attn.c_proj.bias', 'h.2.ln_2.weight', 'h.3.mlp.c_proj.bias', 'h.4.attn.c_proj.bias', 'h.11.attn.c_attn.weight', 'h.9.ln_2.bias', 'h.0.ln_2.bias', 'h.0.attn.c_attn.bias', 'h.4.attn.c_attn.bias', 'h.6.mlp.c_proj.bias', 'h.3.attn.c_attn.weight', 'h.11.ln_2.weight', 'h.11.ln_2.bias', 'h.0.ln_1.weight', 'h.4.mlp.c_proj.weight', 'h.8.attn.c_attn.bias', 'h.4.attn.c_attn.weight', 'h.5.mlp.c_proj.bias', 'h.11.mlp.c_proj.weight', 'h.11.attn.c_proj.weight', 'h.8.attn.c_proj.weight', 'h.3.ln_1.bias', 'h.8.ln_1.bias', 'h.5.ln_2.weight', 'h.3.attn.c_attn.bias', 'h.8.mlp.c_fc.bias', 'h.11.mlp.c_fc.bias', 'h.6.ln_2.bias', 'h.9.mlp.c_fc.weight', 'h.1.ln_1.bias', 'h.3.attn.c_proj.weight', 'h.1.mlp.c_fc.bias', 'h.0.mlp.c_fc.bias', 'h.8.mlp.c_proj.weight', 'h.7.mlp.c_fc.bias', 'h.1.mlp.c_fc.weight', 'h.10.mlp.c_fc.bias', 'h.0.attn.c_attn.weight', 'h.11.attn.c_attn.bias', 'h.5.attn.c_attn.weight', 'h.6.mlp.c_proj.weight', 'h.4.ln_2.bias', 'h.5.mlp.c_fc.weight', 'h.8.mlp.c_fc.weight', 'h.11.attn.c_proj.bias', 'h.3.mlp.c_fc.bias', 'h.2.ln_1.weight', 'h.0.attn.c_proj.bias', 'h.0.mlp.c_fc.weight', 'h.6.attn.c_attn.bias', 'h.2.ln_2.bias', 'h.8.ln_2.weight', 'h.1.mlp.c_proj.weight', 'h.7.ln_1.bias', 'h.6.mlp.c_fc.weight', 'h.7.attn.c_attn.weight', 'h.6.attn.c_attn.weight', 'h.4.ln_1.bias', 'h.2.mlp.c_proj.bias', 'h.7.attn.c_proj.weight', 'h.9.ln_1.bias', 'h.4.mlp.c_fc.weight', 'h.6.ln_1.bias', 'h.9.mlp.c_proj.bias', 'h.10.mlp.c_proj.bias', 'h.11.mlp.c_proj.bias', 'h.4.ln_2.weight', 'h.6.attn.c_proj.weight', 'h.9.attn.c_attn.weight', 'h.9.attn.c_proj.weight', 'h.11.ln_1.bias', 'wpe.weight', 'h.8.attn.c_proj.bias', 'h.7.ln_1.weight', 'h.10.ln_2.bias', 'h.0.mlp.c_proj.bias', 'h.0.ln_2.weight', 'h.4.mlp.c_proj.bias', 'h.6.ln_1.weight', 'h.7.mlp.c_proj.bias', 'h.8.ln_2.bias', 'h.8.mlp.c_proj.bias', 'h.5.ln_1.weight', 'h.9.mlp.c_proj.weight', 'h.5.attn.c_attn.bias', 'h.2.ln_1.bias', 'h.1.attn.c_proj.weight', 'h.9.ln_1.weight', 'h.11.ln_1.weight', 'h.5.attn.c_proj.weight', 'h.4.mlp.c_fc.bias', 'h.5.ln_2.bias', 'h.2.attn.c_attn.weight', 'h.7.attn.c_attn.bias', 'h.7.mlp.c_proj.weight', 'h.1.mlp.c_proj.bias', 'h.5.attn.c_proj.bias', 'h.11.mlp.c_fc.weight', 'h.10.attn.c_proj.weight', 'h.3.ln_1.weight', 'h.10.attn.c_proj.bias', 'h.3.mlp.c_fc.weight', 'h.4.attn.c_proj.weight', 'h.2.attn.c_attn.bias', 'h.3.ln_2.weight', 'h.10.attn.c_attn.bias', 'h.3.mlp.c_proj.weight', 'h.1.attn.c_proj.bias', 'h.2.mlp.c_fc.bias', 'h.9.ln_2.weight', 'h.5.ln_1.bias', 'h.10.ln_1.weight', 'h.7.mlp.c_fc.weight', 'ln_f.bias', 'h.2.attn.c_proj.bias', 'h.0.ln_1.bias', 'h.7.ln_2.bias', 'h.7.ln_2.weight', 'h.6.attn.c_proj.bias', 'h.10.mlp.c_fc.weight', 'wte.weight', 'h.9.mlp.c_fc.bias', 'h.1.ln_1.weight', 'h.6.ln_2.weight', 'h.1.attn.c_attn.bias', 'h.9.attn.c_attn.bias', 'h.2.mlp.c_fc.weight', 'h.5.mlp.c_fc.bias', 'ln_f.weight', 'h.9.attn.c_proj.bias', 'h.10.ln_2.weight'] - This IS 
expected if you are initializing TFGPT2LMHeadModel from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFGPT2LMHeadModel from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model). Some weights or buffers of the TF 2.0 model TFGPT2LMHeadModel were not initialized from the PyTorch model and are newly initialized: ['weight', 'weight', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias', 'weight', 'bias'] ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28296/timeline
completed
null
null
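One of the concrete breakages described in issue 28296 above is that Keras 3 reordered the positional parameters of `Layer.add_weight` (Keras 2 took `name` first, Keras 3 takes `shape` first), so positional calls end up passing `shape` twice. A toy layer that sidesteps the mismatch by passing everything as keywords is sketched below; it illustrates the pattern and is not code taken from `modeling_tf_utils.py`.

```python
import keras  # resolves to Keras 2.x or standalone Keras 3; both accept this form


class ScaleLayer(keras.layers.Layer):
    """Toy layer whose build() only uses keyword arguments with add_weight,
    so it is valid against both the Keras 2 and Keras 3 signatures."""

    def build(self, input_shape):
        self.scale = self.add_weight(
            name="scale",                 # keyword avoids "got multiple values for 'shape'"
            shape=(input_shape[-1],),
            initializer="ones",
            trainable=True,
        )
        super().build(input_shape)

    def call(self, inputs):
        return inputs * self.scale
```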
https://api.github.com/repos/huggingface/transformers/issues/28295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28295/comments
https://api.github.com/repos/huggingface/transformers/issues/28295/events
https://github.com/huggingface/transformers/pull/28295
2,061,287,909
PR_kwDOCUB6oc5i_l3C
28,295
[Flash Attention 2] Add flash attention 2 for GPT-J
{ "login": "bytebarde", "id": 154845754, "node_id": "U_kgDOCTrCOg", "avatar_url": "https://avatars.githubusercontent.com/u/154845754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bytebarde", "html_url": "https://github.com/bytebarde", "followers_url": "https://api.github.com/users/bytebarde/followers", "following_url": "https://api.github.com/users/bytebarde/following{/other_user}", "gists_url": "https://api.github.com/users/bytebarde/gists{/gist_id}", "starred_url": "https://api.github.com/users/bytebarde/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bytebarde/subscriptions", "organizations_url": "https://api.github.com/users/bytebarde/orgs", "repos_url": "https://api.github.com/users/bytebarde/repos", "events_url": "https://api.github.com/users/bytebarde/events{/privacy}", "received_events_url": "https://api.github.com/users/bytebarde/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Current progress with running `flash_attn_test`. Will dive deeper to fix the error.\r\n\r\n\r\n\r\n![2023-12-31 8 43 06](https://github.com/huggingface/transformers/assets/154845754/8a6feba0-f858-4bd9-938a-50fac499cc36)\r\n\r\n", "Hi @bytebarde, what is the error message? \r\nIf it is something like - \"IndexError: tensors used as ...\", then updating CUDA could solve the error (At least it was for my case in OPT).\r\n \r\nBTW run `make fixup` to make the CI green!", "Hi @susnato, thank you so much for for your attention to this PR!\r\n\r\nI believe the error originates from two factors: (1) my preliminary implementation of `GPTJFlashAttention2`, which aimed to eliminate \"redundant\" transposing of the key and query, and (2) the execution of `test_flash_attn_2_generate_padding_right` using the testing configuration.\r\n\r\nTo address these issues, I have reinstated the original transposing operations and reverted the QKV cache concatenation. Additionally, I overwrote `test_flash_attn_2_generate_padding_right` by using the actual checkpoint and passed all eight tests, similar to what @younesbelkada and you did for llama2 and phi2.\r\n\r\nCurrently, the code has some problems with `make fixup`. Will work on this for the next step.\r\n\r\n![2024-01-01 6 50 52](https://github.com/huggingface/transformers/assets/154845754/15ca602a-1041-403a-8091-3de8fb7050c0)\r\n\r\n\r\n\r\n\r\n", "Hi @younesbelkada,\r\n\r\nI believe this pull request is now ready for your review.\r\n\r\nI'd like to highlight a few changes, especially regarding `check_copies.py`, that I'm not entirely confident about. To ensure the branch passes the `make fixup` check, I removed the \"copies\" lines before both `modeling_codegen.CodeGenBlock` and `test_modeling_gptj.test_flash_attn_2_generate_padding_right`. This was done because the changes involved are somehow complex.\r\n\r\nI would really appreciate your guidance on this. If there's a more standard or preferable way to handle such intricate changes, please let me know so I can make the necessary adjustments.\r\n\r\nThank you for your time on this!", "Hi @younesbelkada, thank you very much for your valuable input and guidance! I apologize for the delayed response.\r\n\r\nI've addressed the comment regarding the copy mechanism, and the branch successfully passed the `make fixup` test.\r\n\r\nAdditionally, I've conducted the speed test. However, the observed speedup was not as significant as what we noted with OPT. The test was performed on an Nvidia RTX 4090, utilizing `max-batch-size=8` and `max-seqlen=32` to conserve memory. The model checkpoint used was `EleutherAI/gpt-j-6b` with the revision set to \"float16\". I've attached the speedup graph below for your review.\r\n\r\n![2024-01-27 9 51 37](https://github.com/huggingface/transformers/assets/154845754/be5038a5-c957-4f58-a3ae-b497d82d7356)\r\n\r\n\r\n\r\n\r\nCould you also perform the test on an A100 GPU for comparison?\r\n\r\nThank you once again for your time. I look forward to hearing your thoughts on this!\r\n\r\n\r\n\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28295). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Hi @ArthurZucker and @younesbelkada ,\r\n\r\nThank you so much for your additional suggestions!\r\n\r\nI am sorry. 
I had assumed that `GPTJ_ATTENTION_CLASSES` had already been introduced by @ArthurZucker previously...\r\n\r\nI have now added GPTJ_ATTENTION_CLASSES and made the necessary code modifications. \r\nFurthermore, I re-ran the test suite and successfully passed all the tests.\r\n\r\nPlease let me know if there's anything more I can do!\r\nThank you so much! ", "Good for me merging! 🤗 " ]
1,704
1,707
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds Flash Attention 2 for `GPT-J` Fixes #[26350](https://github.com/huggingface/transformers/issues/26350) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc: @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28295/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28295", "html_url": "https://github.com/huggingface/transformers/pull/28295", "diff_url": "https://github.com/huggingface/transformers/pull/28295.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28295.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28294/comments
https://api.github.com/repos/huggingface/transformers/issues/28294/events
https://github.com/huggingface/transformers/issues/28294
2,061,278,059
I_kwDOCUB6oc563Jtr
28,294
Unable to train in Colab: complains "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`" even though accelerate 0.25.0 is already installed
{ "login": "surapuramakhil", "id": 9161543, "node_id": "MDQ6VXNlcjkxNjE1NDM=", "avatar_url": "https://avatars.githubusercontent.com/u/9161543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surapuramakhil", "html_url": "https://github.com/surapuramakhil", "followers_url": "https://api.github.com/users/surapuramakhil/followers", "following_url": "https://api.github.com/users/surapuramakhil/following{/other_user}", "gists_url": "https://api.github.com/users/surapuramakhil/gists{/gist_id}", "starred_url": "https://api.github.com/users/surapuramakhil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surapuramakhil/subscriptions", "organizations_url": "https://api.github.com/users/surapuramakhil/orgs", "repos_url": "https://api.github.com/users/surapuramakhil/repos", "events_url": "https://api.github.com/users/surapuramakhil/events{/privacy}", "received_events_url": "https://api.github.com/users/surapuramakhil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "After installing Accelerate please try restarting the runtime, this is a common issue :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info 2024-01-01 01:45:15.116736: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2024-01-01 01:45:15.116796: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2024-01-01 01:45:15.118498: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2024-01-01 01:45:16.875070: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version. Instructions for updating: Use `tf.config.list_physical_devices('GPU')` instead. 2024-01-01 01:45:21.044234: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:47] Overriding orig_value setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0. CUDA backend failed to initialize: Found cuBLAS version 120103, but JAX was built against version 120205, which is newer. The copy of cuBLAS that is installed must be at least as new as the version against which JAX was built. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.15.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? 
@muellerzr @pacman100 !pip show accelerate Name: accelerate Version: 0.25.0 Summary: Accelerate Home-page: https://github.com/huggingface/accelerate Author: The HuggingFace team Author-email: [[email protected]](mailto:[email protected]) License: Apache Location: /usr/local/lib/python3.10/dist-packages Requires: huggingface-hub, numpy, packaging, psutil, pyyaml, safetensors, torch Required-by: import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel, GPT2Config from transformers import TextDataset, DataCollatorForLanguageModeling from transformers import Trainer, TrainingArguments # Set the device to GPU if available device = torch.device("cuda" if torch.cuda.is_available() else "cpu") # Load pre-trained GPT-2 model and tokenizer model_name = "gpt2" model = GPT2LMHeadModel.from_pretrained(model_name) tokenizer = GPT2Tokenizer.from_pretrained(model_name) # Load your custom dataset train_dataset = TextDataset( tokenizer=tokenizer, file_path="/content/manual.txt", block_size=128 ) # Create a data collator for language modeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=False ) # Set up training arguments training_args = TrainingArguments( output_dir="./fine-tuned-model", overwrite_output_dir=True, num_train_epochs=3, per_device_train_batch_size=4, save_steps=10_000, save_total_limit=2, ) # Create Trainer trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, ) # Fine-tune the model trainer.train() # Save the fine-tuned model model.save_pretrained("./fine-tuned-model") tokenizer.save_pretrained("./fine-tuned-model") Error stack trace: /usr/local/lib/python3.10/dist-packages/transformers/training_args.py in __init__(self, output_dir, overwrite_output_dir, do_train, do_eval, do_predict, evaluation_strategy, prediction_loss_only, per_device_train_batch_size, per_device_eval_batch_size, per_gpu_train_batch_size, per_gpu_eval_batch_size, gradient_accumulation_steps, eval_accumulation_steps, eval_delay, learning_rate, weight_decay, adam_beta1, adam_beta2, adam_epsilon, max_grad_norm, num_train_epochs, max_steps, lr_scheduler_type, warmup_ratio, warmup_steps, log_level, log_level_replica, log_on_each_node, logging_dir, logging_strategy, logging_first_step, logging_steps, logging_nan_inf_filter, save_strategy, save_steps, save_total_limit, save_safetensors, save_on_each_node, no_cuda, use_cpu, use_mps_device, seed, data_seed, jit_mode_eval, use_ipex, bf16, fp16, fp16_opt_level, half_precision_backend, bf16_full_eval, fp16_full_eval, tf32, local_rank, ddp_backend, tpu_num_cores, tpu_metrics_debug, debug, dataloader_drop_last, eval_steps, dataloader_num_workers, past_index, run_name, disable_tqdm, remove_unused_columns, label_names, load_best_model_at_end, metric_for_best_model, greater_is_better, ignore_data_skip, fsdp, fsdp_min_num_params, fsdp_config, fsdp_transformer_layer_cls_to_wrap, deepspeed, label_smoothing_factor, optim, optim_args, adafactor, group_by_length, length_column_name, report_to, ddp_find_unused_parameters, ddp_bucket_cap_mb, ddp_broadcast_buffers, dataloader_pin_memo... 
[/usr/local/lib/python3.10/dist-packages/transformers/training_args.py](https://localhost:8080/#) in __post_init__(self) 1440 self.framework == "pt" 1441 and is_torch_available() -> 1442 and (self.device.type != "cuda") 1443 and (self.device.type != "npu") 1444 and (self.device.type != "xpu") [/usr/local/lib/python3.10/dist-packages/transformers/training_args.py](https://localhost:8080/#) in device(self) 1885 """ 1886 requires_backends(self, ["torch"]) -> 1887 return self._setup_devices 1888 1889 @property [/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py](https://localhost:8080/#) in __get__(self, obj, objtype) 52 cached = getattr(obj, attr, None) 53 if cached is None: ---> 54 cached = self.fget(obj) 55 setattr(obj, attr, cached) 56 return cached [/usr/local/lib/python3.10/dist-packages/transformers/training_args.py](https://localhost:8080/#) in _setup_devices(self) 1785 if not is_sagemaker_mp_enabled(): 1786 if not is_accelerate_available(min_version="0.20.1"): -> 1787 raise ImportError( 1788 "Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U`" 1789 ) ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1`: Please run `pip install transformers[torch]` or `pip install accelerate -U` --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction code , outputs and error already attached ### Expected behavior it should be run without throwing error
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28294/timeline
completed
null
null
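The resolution for issue 28294 above is simply that Colab keeps the old `accelerate` import alive until the runtime is restarted. A small, hypothetical pre-flight check that surfaces this before `TrainingArguments` raises its ImportError:

```python
import importlib.metadata

from packaging import version


def assert_accelerate(min_version: str = "0.20.1") -> None:
    """Fail fast with a readable hint instead of Trainer's generic ImportError."""
    try:
        installed = importlib.metadata.version("accelerate")
    except importlib.metadata.PackageNotFoundError:
        raise RuntimeError(
            "accelerate is not installed: run `pip install -U accelerate` and restart the runtime"
        ) from None
    if version.parse(installed) < version.parse(min_version):
        raise RuntimeError(
            f"accelerate {installed} < {min_version}: upgrade it and, on Colab, "
            "restart the runtime so the new version is actually imported"
        )


assert_accelerate()
```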
https://api.github.com/repos/huggingface/transformers/issues/28293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28293/comments
https://api.github.com/repos/huggingface/transformers/issues/28293/events
https://github.com/huggingface/transformers/issues/28293
2,061,180,429
I_kwDOCUB6oc562x4N
28,293
trainer save_model ValueError You are trying to save a non contiguous tensor
{ "login": "siebeniris", "id": 1593540, "node_id": "MDQ6VXNlcjE1OTM1NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/1593540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siebeniris", "html_url": "https://github.com/siebeniris", "followers_url": "https://api.github.com/users/siebeniris/followers", "following_url": "https://api.github.com/users/siebeniris/following{/other_user}", "gists_url": "https://api.github.com/users/siebeniris/gists{/gist_id}", "starred_url": "https://api.github.com/users/siebeniris/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siebeniris/subscriptions", "organizations_url": "https://api.github.com/users/siebeniris/orgs", "repos_url": "https://api.github.com/users/siebeniris/repos", "events_url": "https://api.github.com/users/siebeniris/events{/privacy}", "received_events_url": "https://api.github.com/users/siebeniris/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "hmmm do you know what might be happening here @Narsil ? With mt5", "Might be fixed by #28414 ? ", "Happy to take a look if I can have acces either to the finetune (even dummy I just need to look at those tensors) or a reproducer.\r\n\r\nI have no idea what makes some tensors non contiguous and what kind of non contiguous those are\r\n", "Non-contiguous parameters/buffers can be saved with `safe_serialization=False` but not with `safe_serialization=True`.", "I was try to ask more, what lib is actually creating non contiguous tensors ? Seems odd to me that we need to create non contiguous tensors for training.\r\n\r\nDeepspeed for isntant it's not non contiguous it' s more that they abuse the storage system to force several matmul locality (which I think it to optimize network transport), therefore it was easy to fix once identified (because that's a condition where it's easy to rework the tensors on behalf of users since the non contiguity is not really important for the model).", "I ran into this issue due to a custom weight tying scheme (output layer is a transpose of the vocabulary embedding, so the former is not contiguous). I got around the error by turning off safe serialization as noted above. ", "Hi, thanks all for the comments. I have no idea why there are even non-contiguous tensors. I think make them contiguous makes more sense? And it solves the problem and the model training seems to be well. I found it odd that the error doesn't occur for trains T5 models, only for MT5 models, since MT5 is built upon T5 in transformers scripts.\r\n" ]
1,704
1,706
null
NONE
null
### System Info Transformers version: 4.36.2 pytorch version: 2.1.1 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Fine-tuning mt5 model on a task using transformers trainer, and try to save the model, then the following error occurs. ``` File "/home/xxx/xxx/xxx/run.py", line 17, in main experiment.run() File "/home/xxx/xxx/xxx/experiments.py", line 162, in run self.train() File "/home/xxx/xxx/xxx/experiments.py", line 207, in train trainer.save_model() # Saves the tokenizer too for easy upload File "/home/xxx/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2849, in save_model self._save(output_dir) File "/home/xxx/.local/lib/python3.10/site-packages/transformers/trainer.py", line 2909, in _save self.model.save_pretrained( File "/home/xxx/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2376, in save_pretrained safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"}) File "/home/xxx/.local/lib/python3.10/site-packages/safetensors/torch.py", line 281, in save_file serialize_file(_flatten(tensors), filename, metadata=metadata) File "/home/xxx/.local/lib/python3.10/site-packages/safetensors/torch.py", line 475, in _flatten return { File "/home/xxx/.local/lib/python3.10/site-packages/safetensors/torch.py", line 479, in <dictcomp> "data": _tobytes(v, k), File "/home/xxx/.local/lib/python3.10/site-packages/safetensors/torch.py", line 396, in _tobytes raise ValueError( ValueError: You are trying to save a non contiguous tensor: `encoder_decoder.encoder.block.0.layer.0.SelfAttention.q.weight` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving. ``` ### Expected behavior Fine-tune mt5 model, and try to save the fine-tuned model, it renders the above error, and modifying `transformers/modeling_utils.py` file with `state_dict= {k:v.contiguous() for k,v in state_dict.items()}` solves the problem.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28293/timeline
null
null
null
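Issue 28293 above converges on two workarounds: make the offending parameters contiguous before saving, or bypass safetensors. Both are sketched below on a small MT5 checkpoint; this is illustrative and not the patch that would land in `modeling_utils.py`.

```python
from transformers import MT5ForConditionalGeneration

model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Workaround 1: force every parameter onto contiguous storage, then save normally
# (safetensors refuses non-contiguous tensors, plain torch.save does not).
for param in model.parameters():
    param.data = param.data.contiguous()
model.save_pretrained("mt5-contiguous")

# Workaround 2: keep the tensors as they are and fall back to pickle-based
# serialization, at the cost of losing the safetensors format.
model.save_pretrained("mt5-bin", safe_serialization=False)
```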
https://api.github.com/repos/huggingface/transformers/issues/28292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28292/comments
https://api.github.com/repos/huggingface/transformers/issues/28292/events
https://github.com/huggingface/transformers/issues/28292
2,061,166,706
I_kwDOCUB6oc562uhy
28,292
DPT normalization causes contouring when there are significant disparities in depth values between adjacent areas
{ "login": "CyrusVorwald", "id": 90732384, "node_id": "MDQ6VXNlcjkwNzMyMzg0", "avatar_url": "https://avatars.githubusercontent.com/u/90732384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CyrusVorwald", "html_url": "https://github.com/CyrusVorwald", "followers_url": "https://api.github.com/users/CyrusVorwald/followers", "following_url": "https://api.github.com/users/CyrusVorwald/following{/other_user}", "gists_url": "https://api.github.com/users/CyrusVorwald/gists{/gist_id}", "starred_url": "https://api.github.com/users/CyrusVorwald/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CyrusVorwald/subscriptions", "organizations_url": "https://api.github.com/users/CyrusVorwald/orgs", "repos_url": "https://api.github.com/users/CyrusVorwald/repos", "events_url": "https://api.github.com/users/CyrusVorwald/events{/privacy}", "received_events_url": "https://api.github.com/users/CyrusVorwald/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @CyrusVorwald, thanks for opening this issue! \r\n\r\n`get_depth_map` isn't defined in the transformers library, and so it's not something we can work on. I'd suggest opening a discussion on the model page and sharing these results. \r\n\r\n@NielsRogge Could you look into some weights being randomly initialized when loading from this checkpoint? ", "I think the warning being shown of some weights not being initialized happened after @younesbelkada added support for DPT-hybrid in the `modeling_dpt.py` code. This hybrid version of DPT introduced some other parameters, which aren't used by the default DPT model." ]
1,704
1,708
null
NONE
null
### System Info Python 3.10.12 transformers-4.36.2 ### Who can help? @stevhliu @NielsRogge ### Information - [X] The official example scripts - [x] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ``` from transformers import DPTImageProcessor, DPTForDepthEstimation import torch import numpy as np from PIL import Image import requests url = "https://images.unsplash.com/photo-1605146768851-eda79da39897?q=80&w=2970&auto=format&fit=crop&ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D" image = Image.open(requests.get(url, stream=True).raw) processor = DPTImageProcessor.from_pretrained("Intel/dpt-large") model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large") # prepare image for the model inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) predicted_depth = outputs.predicted_depth # interpolate to original size prediction = torch.nn.functional.interpolate( predicted_depth.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False, ) # visualize the prediction output = prediction.squeeze().cpu().numpy() formatted = (output * 255 / np.max(output)).astype("uint8") depth = Image.fromarray(formatted) display(depth) ``` > Some weights of DPTForDepthEstimation were not initialized from the model checkpoint at Intel/dpt-large and are newly initialized: ['neck.fusion_stage.layers.0.residual_layer1.convolution2.bias', 'neck.fusion_stage.layers.0.residual_layer1.convolution1.weight', 'neck.fusion_stage.layers.0.residual_layer1.convolution2.weight', 'neck.fusion_stage.layers.0.residual_layer1.convolution1.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ![edge_effects_depth (1)](https://github.com/huggingface/transformers/assets/90732384/054a7e81-9611-4418-9706-29ada47c64a1) ### Expected behavior Anecdotally, the local scaling methodology used by get_depth_map at https://huggingface.co./diffusers/controlnet-depth-sdxl-1.0 seems to work better for models that perform better at identifying close-range depth. The global scaling methodology seems to work better for models that perform better at identifying far-range depth. 
I combined them below: ``` def get_depth_map(image, feature_extractor, depth_estimator, scale_local): inputs = feature_extractor(images=image, return_tensors="pt").pixel_values.to("cuda") with torch.no_grad(), torch.autocast("cuda"): depth_map = depth_estimator(inputs).predicted_depth depth_map = torch.nn.functional.interpolate( depth_map.unsqueeze(1), size=image.size[::-1], mode="bicubic", align_corners=False, ) if scale_local: depth_min = torch.amin(depth_map, dim=[1, 2, 3], keepdim=True) depth_max = torch.amax(depth_map, dim=[1, 2, 3], keepdim=True) depth_map = (depth_map - depth_min) / (depth_max - depth_min) image = torch.cat([depth_map] * 3, dim=1) image = image.permute(0, 2, 3, 1).cpu().numpy()[0] image = Image.fromarray((image * 255.0).clip(0, 255).astype(np.uint8)) return image output = depth_map.squeeze().cpu().numpy() formatted = (output * 255 / np.max(output)).astype("uint8") return Image.fromarray(formatted) depth_estimator_hybrid = DPTForDepthEstimation.from_pretrained("Intel/dpt-hybrid-midas").to("cuda") depth_estimator_dinov2_nyu = DPTForDepthEstimation.from_pretrained("facebook/dpt-dinov2-giant-nyu").to("cuda") image_processor_hybrid = AutoImageProcessor.from_pretrained("Intel/dpt-hybrid-midas") image_processor_dinov2_nyu = AutoImageProcessor.from_pretrained("facebook/dpt-dinov2-giant-nyu") # Close range depth bad_close_result = get_depth_map(image, image_processor_hybrid, depth_estimator_hybrid, False) good_close_result = get_depth_map(image, image_processor_hybrid, depth_estimator_hybrid, True) # Far range depth downscaled_image = image.resize((1024, 1024)) # This image is too big for my GPU to processdpt-dinov2-giant-nyu so I downscaled it good_far_result = get_depth_map(downscaled_image, image_processor_dinov2_nyu, depth_estimator_dinov2_nyu, False) bad_far_result = get_depth_map(downscaled_image, image_processor_dinov2_nyu, depth_estimator_dinov2_nyu, True) ``` `display(bad_close_result)` ![globally_scaled_depth_close](https://github.com/huggingface/transformers/assets/90732384/980dcd22-7e63-4842-b330-637025a9ce8d) `display(good_close_result)` ![locally_scaled_depth_close](https://github.com/huggingface/transformers/assets/90732384/970081fc-418d-459e-9576-84316ef6362e) `display(good_far_result)` ![globally_scaled_depth_far](https://github.com/huggingface/transformers/assets/90732384/9a265c67-51da-47a1-bd3c-4a989480eb97) `display(bad_far_result)` ![locally_scaled_depth_far](https://github.com/huggingface/transformers/assets/90732384/8aa280aa-8166-45be-89a0-0c1463337a90) Sufficiently blurring the image prior to detecting depth also gets rid of this, ie: ``` blurred_image = image.filter(ImageFilter.GaussianBlur(radius=5)) display(get_depth_map(blurred_image, image_processor_hybrid, depth_estimator_hybrid, False)) ``` ![blurred_depth](https://github.com/huggingface/transformers/assets/90732384/0f447c9b-0ad3-44f0-9c0e-4813be913899)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28292/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/28291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28291/comments
https://api.github.com/repos/huggingface/transformers/issues/28291/events
https://github.com/huggingface/transformers/issues/28291
2,061,087,376
I_kwDOCUB6oc562bKQ
28,291
[facebook/rag-sequence-nq] ValueError: Config name is missing.
{ "login": "zhangnju", "id": 8900927, "node_id": "MDQ6VXNlcjg5MDA5Mjc=", "avatar_url": "https://avatars.githubusercontent.com/u/8900927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangnju", "html_url": "https://github.com/zhangnju", "followers_url": "https://api.github.com/users/zhangnju/followers", "following_url": "https://api.github.com/users/zhangnju/following{/other_user}", "gists_url": "https://api.github.com/users/zhangnju/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhangnju/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangnju/subscriptions", "organizations_url": "https://api.github.com/users/zhangnju/orgs", "repos_url": "https://api.github.com/users/zhangnju/repos", "events_url": "https://api.github.com/users/zhangnju/events{/privacy}", "received_events_url": "https://api.github.com/users/zhangnju/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe cc @ydshieh as you have played with RAG recently. ", "Hi @zhangnju \r\n\r\nWhat is your `datasets` version?\r\n\r\nOn [this google colab](https://colab.research.google.com/drive/1pCLInwexxrwg2WGIxgjmFMs6YeHZsCOm?usp=sharing), I can't reproduce this issue, but it uses `datasets==2.16.1`.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,704
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1.run the sample codes of this link: https://huggingface.co./facebook/rag-sequence-nq from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True) model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever) input_dict = tokenizer.prepare_seq2seq_batch("how many countries are in europe", return_tensors="pt") generated = model.generate(input_ids=input_dict["input_ids"]) print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0]) 2. meet the below issue Traceback (most recent call last): File "rag_test.py", line 4, in <module> retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True) File "/usr/local/lib/python3.8/dist-packages/transformers/models/rag/retrieval_rag.py", line 443, in from_pretrained index = cls._build_index(config) File "/usr/local/lib/python3.8/dist-packages/transformers/models/rag/retrieval_rag.py", line 423, in _build_index return CanonicalHFIndex( File "/usr/local/lib/python3.8/dist-packages/transformers/models/rag/retrieval_rag.py", line 278, in __init__ dataset = load_dataset( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 2519, in load_dataset builder_instance = load_dataset_builder( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 2228, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 371, in __init__ self.config, self.config_id = self._create_builder_config( File "/usr/local/lib/python3.8/dist-packages/datasets/builder.py", line 576, in _create_builder_config raise ValueError( ValueError: Config name is missing. Please pick one among the available configs: ['psgs_w100.nq.exact', 'psgs_w100.nq.compressed', 'psgs_w100.nq.no_index', 'psgs_w100.multiset.exact', 'psgs_w100.multiset.compressed', 'psgs_w100.multiset.no_index', 'psgs_w100.nq.exact.no_embeddings', 'psgs_w100.nq.compressed.no_embeddings', 'psgs_w100.nq.no_index.no_embeddings', 'psgs_w100.multiset.exact.no_embeddings', 'psgs_w100.multiset.compressed.no_embeddings', 'psgs_w100.multiset.no_index.no_embeddings'] Example of usage: `load_dataset('wiki_dpr', 'psgs_w100.nq.exact')` ### Expected behavior help the sample codes to run successfully
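A small sanity check following the maintainer comment above; the version number mentioned comes from that comment, not from independent testing here.

```python
# The linked colab repro works with datasets==2.16.1, so checking the installed
# `datasets` version is a cheap first step before debugging RagRetriever itself.
import datasets

print(datasets.__version__)
# The error message's own suggestion, load_dataset("wiki_dpr", "psgs_w100.nq.exact"),
# is another way to confirm the config name resolves, but it downloads a large
# dataset, so prefer upgrading `datasets` and re-running the snippet above first.
```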
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28291/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28290/comments
https://api.github.com/repos/huggingface/transformers/issues/28290/events
https://github.com/huggingface/transformers/pull/28290
2,060,910,767
PR_kwDOCUB6oc5i-eYW
28,290
[WIP] Add Mixture of Tokens Transformer
{ "login": "chedatomasz", "id": 47541285, "node_id": "MDQ6VXNlcjQ3NTQxMjg1", "avatar_url": "https://avatars.githubusercontent.com/u/47541285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chedatomasz", "html_url": "https://github.com/chedatomasz", "followers_url": "https://api.github.com/users/chedatomasz/followers", "following_url": "https://api.github.com/users/chedatomasz/following{/other_user}", "gists_url": "https://api.github.com/users/chedatomasz/gists{/gist_id}", "starred_url": "https://api.github.com/users/chedatomasz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chedatomasz/subscriptions", "organizations_url": "https://api.github.com/users/chedatomasz/orgs", "repos_url": "https://api.github.com/users/chedatomasz/repos", "events_url": "https://api.github.com/users/chedatomasz/events{/privacy}", "received_events_url": "https://api.github.com/users/chedatomasz/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,703
1,706
null
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #28285 This is early work in progress, not yet ready for review. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28290/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28290/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28290", "html_url": "https://github.com/huggingface/transformers/pull/28290", "diff_url": "https://github.com/huggingface/transformers/pull/28290.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28290.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28289/comments
https://api.github.com/repos/huggingface/transformers/issues/28289/events
https://github.com/huggingface/transformers/issues/28289
2,060,881,618
I_kwDOCUB6oc561o7S
28,289
Model generate with batch_size for Seq2SeqLM
{ "login": "kedarthakkar", "id": 22385733, "node_id": "MDQ6VXNlcjIyMzg1NzMz", "avatar_url": "https://avatars.githubusercontent.com/u/22385733?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kedarthakkar", "html_url": "https://github.com/kedarthakkar", "followers_url": "https://api.github.com/users/kedarthakkar/followers", "following_url": "https://api.github.com/users/kedarthakkar/following{/other_user}", "gists_url": "https://api.github.com/users/kedarthakkar/gists{/gist_id}", "starred_url": "https://api.github.com/users/kedarthakkar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kedarthakkar/subscriptions", "organizations_url": "https://api.github.com/users/kedarthakkar/orgs", "repos_url": "https://api.github.com/users/kedarthakkar/repos", "events_url": "https://api.github.com/users/kedarthakkar/events{/privacy}", "received_events_url": "https://api.github.com/users/kedarthakkar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Another option could be to instead define a function `model.generate_streaming` which takes in `batch_size` and yields generated sequences in `batch_size` increments. This would then allow the user to determine what they want to do with the yielded sequences (i.e. write them to a file or compute some metrics).", "cc @gante ", "Hi @kedarthakkar 👋 \r\n\r\nYou can easily control the batch size with a few extra lines before calling `model.generate(), e.g. looping over your batch using smaller batches as input to the function. That is a much simpler solution than adding tens of lines of additional complexity to an already complex function :)", "Ok, makes sense, I'll close this issue." ]
1,703
1,705
1,705
NONE
null
### Feature request When running `model.generate` with a `Seq2SeqLM`, I've run into OOM issues in resource-constrained environments (i.e. Google Colab notebook) when passing in a large batch (i.e. 1000 samples). I propose adding a `batch_size` parameter to `model.generate` so that `input_ids` can be loaded in batches for sequence generation rather than loading the entire tensor into memory from the start. Would appreciate thoughts and suggestions on this from more experienced folks. ### Motivation Motivation is to prevent OOM issues when loading large datasets for inference on resource-constrained environments. ### Your contribution I can work on implementing this if it makes sense to add as a feature.
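A short sketch of the manual chunking approach suggested in the discussion above; the checkpoint, prompts, and batch size are placeholders rather than anything from the original report.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

texts = ["translate English to German: Hello world."] * 1000
batch_size = 32  # pick the largest value that fits in memory

decoded = []
for start in range(0, len(texts), batch_size):
    chunk = texts[start:start + batch_size]
    inputs = tokenizer(chunk, padding=True, truncation=True, return_tensors="pt").to(model.device)
    with torch.no_grad():
        generated = model.generate(**inputs, max_new_tokens=64)
    # decode (or stream to disk / compute metrics) per chunk instead of holding
    # the full input and output tensors in memory at once
    decoded.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))
```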
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28289/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28288/comments
https://api.github.com/repos/huggingface/transformers/issues/28288/events
https://github.com/huggingface/transformers/pull/28288
2,060,851,550
PR_kwDOCUB6oc5i-TmG
28,288
[Whisper] Fix MPS backend errors introduced by the new word-level timestamp computation code
{ "login": "ercaronte", "id": 25960640, "node_id": "MDQ6VXNlcjI1OTYwNjQw", "avatar_url": "https://avatars.githubusercontent.com/u/25960640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ercaronte", "html_url": "https://github.com/ercaronte", "followers_url": "https://api.github.com/users/ercaronte/followers", "following_url": "https://api.github.com/users/ercaronte/following{/other_user}", "gists_url": "https://api.github.com/users/ercaronte/gists{/gist_id}", "starred_url": "https://api.github.com/users/ercaronte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ercaronte/subscriptions", "organizations_url": "https://api.github.com/users/ercaronte/orgs", "repos_url": "https://api.github.com/users/ercaronte/repos", "events_url": "https://api.github.com/users/ercaronte/events{/privacy}", "received_events_url": "https://api.github.com/users/ercaronte/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry @ylacombe for a last minute (small) code change.\r\n\r\nI was running a load test, where I transcribed a 2 hours audio file and I stumbled into another `torch.std_mean` instruction (at line 2602) that I did not fix in the initial commit.\r\nAfter this second fix I got no more issues and could complete the full transcription. \r\nI also reviewed all the other changes in your initial PR https://github.com/huggingface/transformers/pull/28114 and I could not find any other apparent incompatibility with the current status of the MPS backend.\r\n\r\nPlease have a quick check :-)", "No worries, could you actually use the `is_torch_mps_available` method available in `utils` to check if `torch.std_mean` is available instead of the error catching mechanism ?\r\n", "Sure, I am totally fine with it and I agree it is more readable.\r\n\r\nIn theory, the `is_torch_mps_available` check verifies if mps is available, but the user can also set the torch device to be the CPU or other external CUDA cards. In any case, the code is exactly equivalent and it is much more readable. So that when in the future the MPS backend will support the std_mean this (hopefully temporary) fix will be immediately spotted and removed.\r\n\r\nI have just committed the changes. Tested with the short and long audio files.\r\n\r\nPlease have a check :-)", "LGTM ! thanks for the quick iteration !\r\n", "Thank you @amyeroberts, the proposed changes make the code much cleaner.\r\nI didn't dare to be so drastic as the initial commit, but I am quite happy this way. ", "Thanks again @ercaronte, merging!" ]
1,703
1,704
1,704
CONTRIBUTOR
null
Fixes some issues with the MPS backend introduced by the recent changes from the great updates in https://github.com/huggingface/transformers/pull/28114. First, torch.std_mean is not implemented for MPS and is not scheduled for implementation, while the individual torch.std and torch.mean are. Second, the MPS backend does not support float64, so it cannot cast from float32 to float64; moving the double() call to after the matrix tensor is loaded on the CPU fixes the issue without changing the logic. Finally, the first changed line adds the `np.ndarray` type to the `isinstance` check tuple. This is a bug that has also been found here: https://github.com/huggingface/transformers/pull/28226 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> I think @ylacombe can review this as a continuation of https://github.com/huggingface/transformers/pull/28114 In addition, as per suggestions in the PR: @sanchit-gandhi Looking forward to your feedback.
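For readers skimming this PR, here is an illustrative reduction of the fallback it describes; this is not the actual diff in `modeling_whisper.py`, and the tensor below is a stand-in for the cross-attention statistics the real code normalizes.

```python
import torch
from transformers.utils import is_torch_mps_available

x = torch.randn(4, 1500)

if is_torch_mps_available():
    # torch.std_mean has no MPS kernel, so compute the two statistics separately;
    # the result is numerically equivalent to the fused call.
    std, mean = x.std(dim=-1, keepdim=True), x.mean(dim=-1, keepdim=True)
else:
    std, mean = torch.std_mean(x, dim=-1, keepdim=True)

normalized = (x - mean) / std
print(normalized.shape)
```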
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28288/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28288/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28288", "html_url": "https://github.com/huggingface/transformers/pull/28288", "diff_url": "https://github.com/huggingface/transformers/pull/28288.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28288.patch", "merged_at": 1704212549000 }
https://api.github.com/repos/huggingface/transformers/issues/28287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28287/comments
https://api.github.com/repos/huggingface/transformers/issues/28287/events
https://github.com/huggingface/transformers/issues/28287
2,060,823,478
I_kwDOCUB6oc561au2
28,287
[MBart50] Inconsistent decoding with additional special tokens between slow and fast tokenizers
{ "login": "fleonce", "id": 8986525, "node_id": "MDQ6VXNlcjg5ODY1MjU=", "avatar_url": "https://avatars.githubusercontent.com/u/8986525?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fleonce", "html_url": "https://github.com/fleonce", "followers_url": "https://api.github.com/users/fleonce/followers", "following_url": "https://api.github.com/users/fleonce/following{/other_user}", "gists_url": "https://api.github.com/users/fleonce/gists{/gist_id}", "starred_url": "https://api.github.com/users/fleonce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fleonce/subscriptions", "organizations_url": "https://api.github.com/users/fleonce/orgs", "repos_url": "https://api.github.com/users/fleonce/repos", "events_url": "https://api.github.com/users/fleonce/events{/privacy}", "received_events_url": "https://api.github.com/users/fleonce/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "When confirmed this is unexpected behaviour I would be willing to research behaviour of other tokenizers in the same setting and submit a pull request addressing the issue :-)", "Thanks @fleonce for your issue! @ArthurZucker will take a look at your issue as soon as he's back from leave, in about a week. Thanks a lot!", "Hey @fleonce the issue is rather with https://github.com/huggingface/transformers/blob/b1292bca6923cfbc9cb3f70cb55df57e4e17e630/src/transformers/models/mbart50/tokenization_mbart50.py#L249\r\n\r\nthe `legacy_added_tokens` was added in #23909 to be BC. \r\nI am almost certain that if you checkout an earlier commit the same issue would be present since the `convert_tokens_to_string` method is what is wrong for me here since it takes : `['en_XX', '▁This', '▁is', '▁my', '▁example', '▁sentence', '▁with', '▁a', '▁special']` and outputs `'en_XXThis is my example sentence with a special'`. If you want to open a PR for a fix feel free to do so! \r\nThis might affect tokenizer with the exact same `convert_tokens_to_string`", "In this case, the `_decode` might need a tiny rework ! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-6.2.0-25-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Load a non-fast Tokenizer for mBART 2. Add an additional special token to it 3. Encode and then decode input containing previously added special token ```python3 from transformers import MBart50Tokenizer tk = MBart50Tokenizer.from_pretrained('facebook/mbart-large-50') tk.add_tokens('<token>', True) print(tk.decode(tk("This is my example sentence with a special <token> token")["input_ids"])) >>> 'en_XXThis is my example sentence with a special <token> token</s>' ``` This differs from the fast tokenizers' decoding scheme, as it will correctly decode the input with a space after `en_XX`. I believe this is due to the implementation for `legacy_added_tokens` in https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/tokenization_utils.py#L1002-L1022 and more specifically the second part of the set definition for `legacy_added_tokens` that accounts for special tokens that have been added manually after loading (?) When disabling the special handling for `legacy_added_tokens`, the tokenization output would be correct, so I was primarily wondering for what reason this was added and whether removing this would potentially break other tokenizers. ### Expected behavior ```python3 fast_tk = MBart50TokenizerFast.from_pretrained('facebook/mbart-large-50') fast_tk.add_tokens('<token>', True) print(fast_tk.decode(fast_tk("This is my example sentence with a special <token> token")["input_ids"]))) >>> 'en_XX This is my example sentence with a special <token> token</s>' ``` The decoding should match the fast tokenizers' output (?), at least I would assume so.
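Based on the diagnosis in the comments above, a tiny repro of the step where the space is lost can be sketched as follows; the expected output string is the one reported in the thread, not independently verified here.

```python
from transformers import MBart50Tokenizer

tk = MBart50Tokenizer.from_pretrained("facebook/mbart-large-50")

# the token list that the slow tokenizer's _decode hands to convert_tokens_to_string
tokens = ["en_XX", "▁This", "▁is", "▁my", "▁example"]
print(tk.convert_tokens_to_string(tokens))
# the thread reports this prints 'en_XXThis is my example': the space after the
# language code token is dropped, matching the decode() output shown above
```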
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28287/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28286/comments
https://api.github.com/repos/huggingface/transformers/issues/28286/events
https://github.com/huggingface/transformers/issues/28286
2,060,806,201
I_kwDOCUB6oc561Wg5
28,286
`contrastive-image-text/run_clip.py` example problems
{ "login": "AmitMY", "id": 5757359, "node_id": "MDQ6VXNlcjU3NTczNTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AmitMY", "html_url": "https://github.com/AmitMY", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "repos_url": "https://api.github.com/users/AmitMY/repos", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @AmitMY, thanks for opening an issue! \r\n\r\nThere's a few different points to address here. \r\n\r\n> When using --train_file dataset.csv, the tokenizer fails if the caption is \"None\", \"null\" or \"NA\"\r\n\r\nIs this the same dataset being prepared in the `README.md` example or a different one? \r\n\r\nAs a general comment, the scripts are meant to provide a simple script to show how to train a model for a specific task, but it isn't written to cover all use cases. You may need to adapt processing code to your use case.\r\n\r\n> There seems to be no parameter to specify the hub repository to push to.\r\n\r\nThe model will be pushed to the hub under `{YOUR_USERNAME}/output_dir`. If you wish to change this, you can use `--push_to_hub_model_id` to control this\r\n\r\n> Also, there seems to be no place to track the experiment (like wandb)\r\n\r\nThere are many different integrations available with Trainer. For wandb: \r\nhttps://docs.wandb.ai/guides/integrations/huggingface\r\n\r\n> Actual issue\r\n\r\nWithout a copy of the data in `train.csv` it's not possible for us to help here. Could you share an example sample or publicly available dataset that triggers the same error? ", "Thanks for your response. \r\n\r\n---\r\n\r\nRegarding the `None` issue - \r\nthat's just the default behavior when using a `train_file` that is a `csv`\r\n\r\n---\r\n\r\nAbout the actual issue, we can replicate it with the original example and dataset, but just changing \r\n```\r\n--model_name_or_path ./clip-roberta\r\n```\r\nfrom the example to \r\n```\r\n--model_name_or_path \"openai/clip-vit-base-patch32\"\r\n```\r\n(and data dir to point to where we store the dataset)\r\n\r\nFull command:\r\n```py\r\npython examples/pytorch/contrastive-image-text/run_clip.py \\\r\n --output_dir ./clip-finetuned \\\r\n --model_name_or_path \"openai/clip-vit-base-patch32\" \\\r\n --data_dir /scratch/amoryo/tmp/coco/data \\\r\n --dataset_name ydshieh/coco_dataset_script \\\r\n --dataset_config_name=2017 \\\r\n --image_column image_path \\\r\n --caption_column caption \\\r\n --remove_unused_columns=False \\\r\n --do_train --do_eval \\\r\n --per_device_train_batch_size=\"64\" \\\r\n --per_device_eval_batch_size=\"64\" \\\r\n --learning_rate=\"5e-5\" --warmup_steps=\"0\" --weight_decay 0.1 \\\r\n --overwrite_output_dir \\\r\n --push_to_hub\r\n```\r\n\r\nError:\r\n\r\n```bash\r\nTraceback (most recent call last):\r\n File \"/home/amoryo/sign-language/signwriting-clip/signwriting_clip/transformers/examples/pytorch/contrastive-image-text/run_clip.py\", line 590, in <module>\r\n main()\r\n File \"/home/amoryo/sign-language/signwriting-clip/signwriting_clip/transformers/examples/pytorch/contrastive-image-text/run_clip.py\", line 559, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 1534, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 1860, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 2737, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py\", line 2760, in compute_loss\r\n outputs = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py\", line 1108, in forward\r\n text_outputs = self.text_model(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py\", line 691, in forward\r\n hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py\", line 219, in forward\r\n embeddings = inputs_embeds + position_embeddings\r\n ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~\r\nRuntimeError: The size of tensor a (128) must match the size of tensor b (77) at non-singleton dimension 1\r\n```", "Another issue, is that if following the guide to the letter, the README.md is generated with a local path as the basemodel\r\n\r\n```md\r\n---\r\nbase_model: /scratch/amoryo/models/signwriting-clip/reoberta-clip\r\n---\r\n```\r\nwhich is then rejected by the hub `\"base_model\" with value \"/scratch/amoryo/models/signwriting-clip/roberta-clip\" is not valid. Use a model id from https://hf.co/models.`\r\n", "cc @ydshieh ", "I try to change this, and somehow it works.\r\n\r\n\r\n```\r\ntext_inputs = tokenizer(\r\n captions, max_length=data_args.max_seq_length, padding=\"max_length\", truncation= True)\r\n```\r\n\r\nto\r\n\r\n```\r\ntext_inputs = tokenizer(\r\n captions, max_length=data_args.max_seq_length, padding=True, truncation=True)\r\n```", "> /scratch/amoryo/models/signwriting-clip/reoberta-clip\r\n\r\nI don't have full context. But the README use `clip-roberta`, so make sure you created the local directory and using the corresponding name in the command line to launch the training (in your case, `roberta-clip`)", "You are loading `openai/clip-vit-base-patch32`, so you are likely using a `CLIPTokenizer` which has `max_position_embeddings=77`. 
The issue could be also resolved if you specify `max_seq_length=77` in the commandline when launching the training.", "Thank you for the `max_seq_length` note.\r\n\r\nWhat about https://github.com/huggingface/transformers/issues/28286#issuecomment-1881147267 ? This is indeed the correct local path for the base model, but when pushed to the hub, it gives an error (if I remove this line from the README, no error)", "Could you provide the full command line you used that will fail when push to hub?", "Full command here: https://github.com/huggingface/transformers/issues/28286#issuecomment-1874400979\r\nAt the end of training, it creates a `README.md` file, as described here https://github.com/huggingface/transformers/issues/28286#issuecomment-1881147267 which fails to be pushed to the hub (alongside everything else)\r\nYou can easily replicate it by running the model for 10 steps, or creating a directory with `README.md` as described, and `huggingface-cli upload me/my-model README.md`", "Another issue: it seems like the data does not shuffle, but shuffling is very important for a clip-like model\r\n\r\n<img width=\"100%\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/5757359/fab86795-5062-46e1-93f9-92db25859ff7\">\r\n", "I confirmed that the issue of `base_model` is reproducible. Thanks a lot - will fix it!", "BTW, I get\r\n\r\n> \"base_model\" with value \"./clip-roberta\" is not valid. Use a model id from https://hf.co/models.\r\n\r\nBut you showed\r\n\r\n> \"/scratch/amoryo/models/signwriting-clip/roberta-clip\" is not valid. Use a model id from https://hf.co/models.\r\n\r\nJust to make sure, are you specifying, in the command line, the absolute path to the local model directory `roberta-clip`?", "yes, i am specifying the absolute path", "The `base_model` issue is addressed in #28482", "Regarding shuffling, it's a bit hidden, but if you set a breakpoint at\r\n\r\nhttps://github.com/huggingface/transformers/blob/edb314ae2ba4ac0e89d6a31d48037b8943978bff/src/transformers/trainer.py#L776\r\n\r\nyou will see `RandomSampler` is used, so we are fine.", "I guess that's everything. Thanks so much! Feel free to close once #28482 is one\r\n\r\nI still find the training loss periodicity puzzling, but i have no idea. it also happens with a different base model \r\n![image](https://github.com/huggingface/transformers/assets/5757359/e55a7c31-5df7-45e3-9c66-af3bb6ec419b)\r\n", "Fixed in #28482 as mentioned earlier" ]
1,703
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.37.0.dev0 - Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The following example script has some issues: https://github.com/huggingface/transformers/blob/main/examples/pytorch/contrastive-image-text/run_clip.py #### Minor issue: When using `--train_file dataset.csv`, the tokenizer fails if the caption is "None", "null" or "NA" #### Curiosity: - There seems to be no parameter to specify the hub repository to push to. - Also, there seems to be no place to track the experiment (like wandb) #### Actual issue With the following parameters ```bash --model_name_or_path "openai/clip-vit-base-patch32" \ --freeze_text_model \ --train_file "train.csv" \ --image_column "image_path" \ --caption_column "caption" \ --remove_unused_columns=False \ --do_train \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="64" \ --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \ --overwrite_output_dir \ --push_to_hub ``` I get the following error: ```bash [INFO|trainer.py:1712] 2023-12-30 18:16:36,697 >> ***** Running training ***** [INFO|trainer.py:1713] 2023-12-30 18:16:36,697 >> Num examples = 348,784 [INFO|trainer.py:1714] 2023-12-30 18:16:36,697 >> Num Epochs = 3 [INFO|trainer.py:1715] 2023-12-30 18:16:36,698 >> Instantaneous batch size per device = 64 [INFO|trainer.py:1718] 2023-12-30 18:16:36,698 >> Total train batch size (w. 
parallel, distributed & accumulation) = 64 [INFO|trainer.py:1719] 2023-12-30 18:16:36,698 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1720] 2023-12-30 18:16:36,698 >> Total optimization steps = 16,350 [INFO|trainer.py:1721] 2023-12-30 18:16:36,698 >> Number of trainable parameters = 88,111,361 0%| | 0/16350 [00:00<?, ?it/s]Traceback (most recent call last): File "/home/amoryo/sign-language/signwriting-clip/signwriting_clip/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 590, in <module> main() File "/home/amoryo/sign-language/signwriting-clip/signwriting_clip/transformers/examples/pytorch/contrastive-image-text/run_clip.py", line 559, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py", line 1534, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py", line 1860, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py", line 2737, in training_step loss = self.compute_loss(model, inputs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/trainer.py", line 2760, in compute_loss outputs = model(**inputs) ^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 1108, in forward text_outputs = self.text_model( ^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 691, in forward hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/data/amoryo/conda/envs/clip/lib/python3.11/site-packages/transformers/models/clip/modeling_clip.py", line 219, in forward embeddings = inputs_embeds + position_embeddings ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~ RuntimeError: The size of tensor a (128) must match the size of tensor b (77) at non-singleton dimension 1 ``` ### Expected behavior Example script should train, and push to hub correctly
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28286/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28285/comments
https://api.github.com/repos/huggingface/transformers/issues/28285/events
https://github.com/huggingface/transformers/issues/28285
2,060,691,023
I_kwDOCUB6oc5606ZP
28,285
Add Mixture of Tokens model
{ "login": "chedatomasz", "id": 47541285, "node_id": "MDQ6VXNlcjQ3NTQxMjg1", "avatar_url": "https://avatars.githubusercontent.com/u/47541285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chedatomasz", "html_url": "https://github.com/chedatomasz", "followers_url": "https://api.github.com/users/chedatomasz/followers", "following_url": "https://api.github.com/users/chedatomasz/following{/other_user}", "gists_url": "https://api.github.com/users/chedatomasz/gists{/gist_id}", "starred_url": "https://api.github.com/users/chedatomasz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chedatomasz/subscriptions", "organizations_url": "https://api.github.com/users/chedatomasz/orgs", "repos_url": "https://api.github.com/users/chedatomasz/repos", "events_url": "https://api.github.com/users/chedatomasz/events{/privacy}", "received_events_url": "https://api.github.com/users/chedatomasz/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,703
1,703
null
NONE
null
### Model description Mixture of Tokens is a new architecture / technique proposed in [Mixture of Tokens: Efficient LLMs through Cross-Example Aggregation](https://arxiv.org/abs/2310.15961) and accompanying [blog](https://llm-random.github.io/posts/mixture_of_tokens/) by Szymon Antoniak, Sebastian Jaszczur et al. It builds on expert-choice MoE, aggregating across sequences in a batch rather than positions in a sequence, and doing so in a continuous fashion. This full differentiability is its main advantage, bringing training stability and even expert utilization. In collaboration with the authors, we (me + 3 others) would like to add a PyTorch implementation matching the architecture from the paper to HF transformers and later publish corresponding checkpoints. We believe this will make it significantly easier for the community to experiment with this approach, as the original implementation is quite dense and contained in an active research repo. We believe a good approach is to start from the GPT2 HF model. We will have the assistance of the original authors for making sure the details match. Please advise: 1. If you have any general suggestions at this stage 2. What kinds of tests you would like to see in the finalized implementation for this case, where the exact snapshot corresponding to the paper's implementation and the checkpoints were not previously published. 3. If you have general suggestions regarding contributing methods that are potentially applicable to multiple base models (like MoE and MoT). As we understand, the next step is for us to create a template with https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model and get coding. ### Open source status - [X] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation https://github.com/llm-random/llm-random https://github.com/sebastianjaszczur https://llm-random.github.io/posts/mixture_of_tokens/
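To make the cross-example aggregation idea above concrete, here is a very loose toy sketch written only from the blog-level description; the authors' llm-random repository is the reference implementation, and the dimensions, controller, and expert shapes below are invented for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

class ToyMixtureOfTokens(nn.Module):
    def __init__(self, d_model: int, n_experts: int, d_ff: int):
        super().__init__()
        self.controller = nn.Linear(d_model, n_experts)  # continuous mixing weights per token
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model). Tokens at the same position across the batch form a
        # group; every expert sees a weighted mixture of that group, so the whole layer
        # stays fully differentiable (no hard routing of individual tokens).
        weights = self.controller(x).softmax(dim=0)       # normalize over the batch
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            w = weights[..., i:i + 1]                     # (batch, seq, 1)
            mixture = (w * x).sum(dim=0, keepdim=True)    # one mixed token per position
            out = out + w * expert(mixture)               # redistribute to each example
        return out

y = ToyMixtureOfTokens(d_model=64, n_experts=4, d_ff=128)(torch.randn(8, 16, 64))
print(y.shape)  # torch.Size([8, 16, 64])
```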
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28285/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28285/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28284/comments
https://api.github.com/repos/huggingface/transformers/issues/28284/events
https://github.com/huggingface/transformers/issues/28284
2,060,650,577
I_kwDOCUB6oc560whR
28,284
Multi GPU inference on RTX 3060 fails with RuntimeError: CUDA error: device-side assert triggered (Assertion index >= -sizes[i] && index < sizes[i] && "index out of bounds" failed.)
{ "login": "levidehaan", "id": 304932, "node_id": "MDQ6VXNlcjMwNDkzMg==", "avatar_url": "https://avatars.githubusercontent.com/u/304932?v=4", "gravatar_id": "", "url": "https://api.github.com/users/levidehaan", "html_url": "https://github.com/levidehaan", "followers_url": "https://api.github.com/users/levidehaan/followers", "following_url": "https://api.github.com/users/levidehaan/following{/other_user}", "gists_url": "https://api.github.com/users/levidehaan/gists{/gist_id}", "starred_url": "https://api.github.com/users/levidehaan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/levidehaan/subscriptions", "organizations_url": "https://api.github.com/users/levidehaan/orgs", "repos_url": "https://api.github.com/users/levidehaan/repos", "events_url": "https://api.github.com/users/levidehaan/events{/privacy}", "received_events_url": "https://api.github.com/users/levidehaan/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @levidehaan, thanks for raising this issue! \r\n\r\nCould you provide a checkpoint for a public model which replicates this issue? Does this happen for any llama model? \r\nDo you observe this issue when not running through a flask app? i.e. does this work: \r\n\r\n```py\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport time\r\nimport logging\r\nimport os\r\nprint(torch.version.cuda)\r\n\r\nmodellocation = \"/home/levi/projects/text-generation-webui/models/Upstage_SOLAR-10.7B-Instruct-v1.0\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"/home/levi/projects/text-generation-webui/models/Upstage_SOLAR-10.7B-Instruct-v1.0\")\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n modellocation,\r\n device_map=\"auto\"\r\n)\r\n\r\nstart_time = time.time()\r\ndata = # fill in data here\r\nconversation = data['messages']\r\nprompt = tokenizer.apply_chat_template(\r\n\tconversation, \r\n\ttokenize=False, \r\n\tadd_generation_prompt=True\r\n)\r\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\r\noutputs = model.generate(**inputs, use_cache=True, max_length=4096)\r\noutputs = model.generate(**inputs, use_cache=True, max_length=4096)\r\noutput_text = tokenizer.decode(outputs[0])\r\njson_output = jsonify(\r\n\t{'choices': [{'message': {'role': 'assistant', 'content': output_text}}]}\r\n)\r\nrequest_duration = time.time() - request.start_time\r\nprint(f\"Request took {request_duration} seconds\")\r\nprint(json_output)\r\n\r\n```\r\nBased on the error message, I'd be willing to bet it's an indexing issue with `position_ids`. \r\n\r\ncc @gante as this covers rope, cuda and generate :) ", "@levidehaan Is there any follow-up on this issue? I'm encountering exactly the same issue happened at the same line of the LLaMA-2 inferrence.", "Hi @BiEchi 👋 we can look further into the issue as soon as we have a short stand-alone reproducible script. 
Otherwise, it will next to impossible to reproduce the bug and, consequently, find out what's wrong :)", "Hi @gante, my code is\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nfrom transformers import BitsAndBytesConfig\r\nimport torch\r\nprint(torch.version.cuda)\r\n\r\nllama_path = \"llama-2-7b-hf\"\r\ntokenizer = AutoTokenizer.from_pretrained(llama_path)\r\nmodel = AutoModelForCausalLM.from_pretrained(llama_path, device_map='auto')\r\n\r\nprompt = \"Hello World!\"\r\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\r\nprint(inputs)\r\nwith torch.inference_mode():\r\n logits = model(**inputs).logits[0]\r\nprint(logits)\r\n```\r\n\r\nand the error is\r\n\r\n```\r\n$ NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 CUDA_LAUNCH_BLOCKING=1 python test.py\r\n11.3\r\nLoading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████| 3/3 [00:08<00:00, 2.71s/it]\r\n{'input_ids': tensor([[ 1, 15043, 2787, 29991]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, 1]], device='cuda:0')}\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [3,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [4,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [5,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [6,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [7,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [8,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [9,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [10,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [11,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [12,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [13,0,0] Assertion `index >= 
-sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\n... (the same assertion repeats for threads [14,0,0] through [127,0,0]) ...\r\nTraceback (most recent call last):\r\nsite-packages/transformers/models/llama/modeling_llama.py\", line 232, in apply_rotary_pos_emb\r\n cos = cos[position_ids].unsqueeze(unsqueeze_dim)\r\nRuntimeError: CUDA error: device-side assert triggered\r\n```", "1. It works on a single GPU, but triggered the error above on multi-GPUs (i.e. we pass multiple visible devices **and** use `device_map=\"auto\"`).\r\n3.
It works on another of our machines.\r\n4. We have to set `NCCL_P2P_DISABLE=1` to make multi-GPU work, due to some underlying machine issues.", "@muellerzr this seems accelerate-related (multi-device) -- do you know what might be wrong?" ]
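To make the suspected `position_ids` failure mode concrete, here is a toy illustration (the tensor shapes and values are invented for the example, not taken from the report). An out-of-range gather into a rotary cache trips exactly the class of device-side assert shown in the logs above:

```python
import torch

# Invented stand-ins: a rotary cos cache with 8 positions and a position id past its end.
cos = torch.randn(8, 64)
position_ids = torch.tensor([[10]])  # 10 >= 8, so the gather is out of bounds

# On CPU this raises IndexError; on CUDA the same gather triggers the
# "index out of bounds" device-side assert reported above.
out = cos[position_ids]
```

This only illustrates the error class; it does not explain why `position_ids` ends up out of range in the multi-GPU setup.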
1,703
1,707
null
NONE
null
### System Info **Version: transformers-4.36.2** I have 2 RTX 3060s and I am able to run LLMs on one GPU, but it won't work when I try to run them on 2 GPUs; it fails with the error: ``` /opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/generation/utils.py:1518: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co./docs/transformers/generation_strategies#default-text-generation-configuration ) warnings.warn( ../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed. ... (the same assertion repeats for threads [1,0,0] through [31,0,0]) ...
Traceback (most recent call last): File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app response = self.full_dispatch_request() File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/flask/app.py", line 869, in full_dispatch_request rv = self.handle_user_exception(e) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/flask/app.py", line 867, in full_dispatch_request rv = self.dispatch_request() File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/flask/app.py", line 852, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "/home/levi/projects/333MillionEyes/solarInference.py", line 46, in generate_completions outputs = model.generate(**inputs, use_cache=True, max_length=4096) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/generation/utils.py", line 1718, in generate return self.greedy_search( File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/generation/utils.py", line 2579, in greedy_search outputs = self( File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1181, in forward outputs = self.model( File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 1068, in forward layer_outputs = decoder_layer( File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 796, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File 
"/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 704, in forward query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) File "/opt/anaconda3/envs/333MillionEyes/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 232, in apply_rotary_pos_emb cos = cos[position_ids].unsqueeze(unsqueeze_dim) RuntimeError: CUDA error: device-side assert triggered ``` **Running Arch** ``` NVIDIA-SMI 545.29.06 Driver Version: 545.29.06 CUDA Version: 12.3 nvcc: Cuda compilation tools, release 12.1, V12.1.105 Build cuda_12.1.r12.1/compiler.32688072_0 ``` It runs fine on single GPU, but when i try to run on multiple it does not like it. I have tried with oobabooga and vllm and none of them make a difference, it always fails. I have tried with many models of many sizes and types 7b 8x7b 13b 33b 34b, awq doesnt work, gptq doesnt work either. my motherboard is a MSi MEG x570 with a AMD APU in it, and it has no option to disable ACS in the bios GPU comms test came back good (i think ): ``` CUDA_VISIBLE_DEVICES=0,1 ./p2pBandwidthLatencyTest levi@deuxbeast [P2P (Peer-to-Peer) GPU Bandwidth Latency Test] Device: 0, NVIDIA GeForce RTX 3060, pciBusID: 10, pciDeviceID: 0, pciDomainID:0 Device: 1, NVIDIA GeForce RTX 3060, pciBusID: 2d, pciDeviceID: 0, pciDomainID:0 Device=0 CAN Access Peer Device=1 Device=1 CAN Access Peer Device=0 ***NOTE: In case a device doesn't have P2P access to other one, it falls back to normal memcopy procedure. So you can see lesser Bandwidth (GB/s) and unstable Latency (us) in those cases. P2P Connectivity Matrix D\D 0 1 0 1 1 1 1 1 Unidirectional P2P=Disabled Bandwidth Matrix (GB/s) D\D 0 1 0 331.46 3.17 1 3.17 331.67 Unidirectional P2P=Enabled Bandwidth (P2P Writes) Matrix (GB/s) D\D 0 1 0 331.74 2.93 1 2.93 331.81 Bidirectional P2P=Disabled Bandwidth Matrix (GB/s) D\D 0 1 0 318.71 4.72 1 4.70 332.59 Bidirectional P2P=Enabled Bandwidth Matrix (GB/s) D\D 0 1 0 318.81 2.93 1 2.93 332.59 P2P=Disabled Latency Matrix (us) GPU 0 1 0 1.41 13.24 1 13.11 1.42 CPU 0 1 0 2.35 6.27 1 6.78 2.28 P2P=Enabled Latency (P2P Writes) Matrix (us) GPU 0 1 0 1.42 1.14 1 1.19 1.41 CPU 0 1 0 2.34 1.89 1 1.94 2.48 ``` Let me know what else you need me to do/run/compile or download to help get this fixed. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [x] My own task or dataset (give details below) ### Reproduction my code: ```py from flask import Flask, request, jsonify import torch from transformers import AutoModelForCausalLM, AutoTokenizer import time import logging import os #print the version of cuda being used print(torch.version.cuda) app = Flask(__name__) # get directory of this file dir_path = os.path.dirname(os.path.realpath(__file__)) modellocation = "/home/levi/projects/text-generation-webui/models/Upstage_SOLAR-10.7B-Instruct-v1.0" tokenizer = AutoTokenizer.from_pretrained("/home/levi/projects/text-generation-webui/models/Upstage_SOLAR-10.7B-Instruct-v1.0") model = AutoModelForCausalLM.from_pretrained( modellocation, device_map="auto" ) @app.before_request def start_timer(): request.start_time = time.time() print(f"Request made to LLM; starting timer!") @app.after_request def log_request(response): request_duration = time.time() - request.start_time print(f"Request took {request_duration} seconds") return response @app.route('/generate/chat/completions', methods=['POST']) def generate_completions(): data = request.get_json() conversation = data['messages'] prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True) inputs = tokenizer(prompt, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, use_cache=True, max_length=4096) output_text = tokenizer.decode(outputs[0]) return jsonify({'choices': [{'message': {'role': 'assistant', 'content': output_text}}]}) if __name__ == '__main__': app.run(host='0.0.0.0', port=5000) ``` Run it python project.py ### Expected behavior it should run on 2 GPU's
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28284/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28283/comments
https://api.github.com/repos/huggingface/transformers/issues/28283/events
https://github.com/huggingface/transformers/issues/28283
2,060,323,638
I_kwDOCUB6oc56zgs2
28,283
Trainer stuck at step 19 (out of 100000) in Jupyter notebook
{ "login": "sr5434", "id": 118690585, "node_id": "U_kgDOBxMTGQ", "avatar_url": "https://avatars.githubusercontent.com/u/118690585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sr5434", "html_url": "https://github.com/sr5434", "followers_url": "https://api.github.com/users/sr5434/followers", "following_url": "https://api.github.com/users/sr5434/following{/other_user}", "gists_url": "https://api.github.com/users/sr5434/gists{/gist_id}", "starred_url": "https://api.github.com/users/sr5434/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sr5434/subscriptions", "organizations_url": "https://api.github.com/users/sr5434/orgs", "repos_url": "https://api.github.com/users/sr5434/repos", "events_url": "https://api.github.com/users/sr5434/events{/privacy}", "received_events_url": "https://api.github.com/users/sr5434/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It turns out the kernel was busy, but checkpoints were still being saved." ]
1,703
1,706
1,706
NONE
null
### System Info - `transformers` version: 4.36.2 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): 2.13.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @pacman100 @muellerzr ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] My own task or dataset (give details below) - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) ### Reproduction Run this code: https://github.com/sr5434/CodegebraGPT/blob/main/Train_CodegebraGPT.ipynb Run it on a Lambda Labs 1xA6000 ### Expected behavior It has been 2 hours, and the model started out pretty quickly. However, it has frozen up and appears to be stuck at 19 steps. The GPU and CPU are both at 100% utilization.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28283/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28282/comments
https://api.github.com/repos/huggingface/transformers/issues/28282/events
https://github.com/huggingface/transformers/issues/28282
2,060,276,201
I_kwDOCUB6oc56zVHp
28,282
ImportError: AutoModel requires the PyTorch library but it was not found in your environment
{ "login": "Marwen94", "id": 36446303, "node_id": "MDQ6VXNlcjM2NDQ2MzAz", "avatar_url": "https://avatars.githubusercontent.com/u/36446303?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Marwen94", "html_url": "https://github.com/Marwen94", "followers_url": "https://api.github.com/users/Marwen94/followers", "following_url": "https://api.github.com/users/Marwen94/following{/other_user}", "gists_url": "https://api.github.com/users/Marwen94/gists{/gist_id}", "starred_url": "https://api.github.com/users/Marwen94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Marwen94/subscriptions", "organizations_url": "https://api.github.com/users/Marwen94/orgs", "repos_url": "https://api.github.com/users/Marwen94/repos", "events_url": "https://api.github.com/users/Marwen94/events{/privacy}", "received_events_url": "https://api.github.com/users/Marwen94/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @marwen94, I'm struggling to reproduce this here. I'm not familiar with the Poetry dependency manager, but I tried reproducing those package versions in a Python 3.9 env with `pip` and the model loaded fine - the issue seems to be very specific to the environment you're using.\r\n\r\nCan you figure out a list of package versions that I can install with `conda` or `pip` that reproduces the issue? Once I can reproduce it here I can work on diagnosing and fixing it!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info I'm trying to load an AutoModel pre-trained model. However, I am receiving the following error: ``` ImportError: AutoModel requires the PyTorch library but it was not found in your environment. However, we were able to find a TensorFlow installation. TensorFlow classes begin with "TF", but are otherwise identically named to our PyTorch classes. This means that the TF equivalent of the class you tried to import would be "TFAutoModel". If you want to use TensorFlow, please use TF classes instead! ``` I do have PyTorch installed: ``` torch==2.0.0 torchvision==0.16.2 ``` transformers-cli env: ``` - `transformers` version: 4.36.2 - Platform: macOS-14.2.1-x86_64-i386-64bit - Python version: 3.11.7 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ``` Thanks a lot! ### Who can help? @gante and @Rocketknight1 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Create and activate a virtual env using this poetry file: ``` [tool.poetry] name = "test" version = "1.0.0" authors = ["Marwen Taleb"] readme = "README.md" [tool.poetry.dependencies] python = ">=3.8,<3.12" transformers="4.36.2" scikit-learn = "^1.3.2" pandas = "2.0.0" torch = "2.0.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" ``` 2. Run this Python script: ``` from transformers import AutoModel model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True) ``` 3. You should receive the above-described error. ### Expected behavior I expect to be able to instantiate an AutoModel from a pretrained model when PyTorch is installed.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28282/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28282/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28281/comments
https://api.github.com/repos/huggingface/transformers/issues/28281/events
https://github.com/huggingface/transformers/issues/28281
2,060,205,528
I_kwDOCUB6oc56zD3Y
28,281
Issue with installing transformers with poetry on M1 Mac
{ "login": "unography", "id": 5240449, "node_id": "MDQ6VXNlcjUyNDA0NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unography", "html_url": "https://github.com/unography", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "organizations_url": "https://api.github.com/users/unography/orgs", "repos_url": "https://api.github.com/users/unography/repos", "events_url": "https://api.github.com/users/unography/events{/privacy}", "received_events_url": "https://api.github.com/users/unography/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @unography, thanks for raising this issue. \r\n\r\nThis appears to be a `poetry` releated issue - `transformers` can be successfully installed with `pip`. Did you run `pip wheel --use-pep517 \"flash-attn (==2.1.0)\"` as recommended by the error message? \r\n\r\nNote, flash-attn is needed if using flash attention for a model. However, it's not required, you can use 'regular' attention for our models, and as such it's not listed as a requirement in our `setup.py`. " ]
1,703
1,706
null
CONTRIBUTOR
null
### System Info Apple M1 Pro, OS: Sonoma 14.2.1 Python: 3.11 Poetry: 1.6.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction On a M1 Mac, running `poetry add transformers` gives the following error - ``` > poetry add transformers Using version ^4.36.2 for transformers Updating dependencies Resolving dependencies... (2.1s) Package operations: 2 installs, 0 updates, 0 removals • Installing flash-attn (2.1.0): Failed ChefBuildError Backend subprocess exited when trying to invoke get_requires_for_build_wheel Traceback (most recent call last): File "/Users/macuser/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/pyproject_hooks/_in_process/_in_process.py", line 353, in <module> main() File "/Users/macuser/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/pyproject_hooks/_in_process/_in_process.py", line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/macuser/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File "/var/folders/kn/xw570mln5d36vl516r0pxcf40000gn/T/tmp5nx5qcol/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 325, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/var/folders/kn/xw570mln5d36vl516r0pxcf40000gn/T/tmp5nx5qcol/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 295, in _get_build_requires self.run_setup() File "/var/folders/kn/xw570mln5d36vl516r0pxcf40000gn/T/tmp5nx5qcol/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 480, in run_setup super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script) File "/var/folders/kn/xw570mln5d36vl516r0pxcf40000gn/T/tmp5nx5qcol/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 311, in run_setup exec(code, locals()) File "<string>", line 8, in <module> ModuleNotFoundError: No module named 'packaging' at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/installation/chef.py:147 in _prepare 143│ 144│ error = ChefBuildError("\n\n".join(message_parts)) 145│ 146│ if error is not None: → 147│ raise error from None 148│ 149│ return path 150│ 151│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path: Note: This error originates from the build backend, and is likely not a problem with poetry but with flash-attn (2.1.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "flash-attn (==2.1.0)"'. ``` `pip install transformers` works fine. ### Expected behavior transformers library being installed
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28281/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28280/comments
https://api.github.com/repos/huggingface/transformers/issues/28280/events
https://github.com/huggingface/transformers/issues/28280
2,060,202,511
I_kwDOCUB6oc56zDIP
28,280
Stuck during training on multi-GPUs with deepspeed, but works with a single GPU
{ "login": "loveunk", "id": 1678567, "node_id": "MDQ6VXNlcjE2Nzg1Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/1678567?v=4", "gravatar_id": "", "url": "https://api.github.com/users/loveunk", "html_url": "https://github.com/loveunk", "followers_url": "https://api.github.com/users/loveunk/followers", "following_url": "https://api.github.com/users/loveunk/following{/other_user}", "gists_url": "https://api.github.com/users/loveunk/gists{/gist_id}", "starred_url": "https://api.github.com/users/loveunk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/loveunk/subscriptions", "organizations_url": "https://api.github.com/users/loveunk/orgs", "repos_url": "https://api.github.com/users/loveunk/repos", "events_url": "https://api.github.com/users/loveunk/events{/privacy}", "received_events_url": "https://api.github.com/users/loveunk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "tried to run the following code, stuck with backend `nccl`, but works with `gloo`:\r\n\r\n```python\r\nimport torch.distributed as dist\r\nimport argparse\r\nimport torch\r\n\r\ntorch.cuda.set_device(int(os.environ['LOCAL_RANK']))\r\ndevice = torch.device(\"cuda\", int(os.environ['LOCAL_RANK']))\r\n\r\ndist.init_process_group(\"nccl\") \r\ndist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM)\r\n```\r\n\r\nrun command: \r\n```shell\r\npython -m torch.distributed.launch --nproc_per_node=2 test.py\r\n```", "It looks like there are some NCCL issues with my GPU server.\r\nThe training works after turning off the GPU P2P communication with `export NCCL_P2P_DISABLE=1`.", "FYI: https://blog.csdn.net/qq_40947610/article/details/128118180", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @pacman100 @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction During the training with the offical LLaVA code (https://github.com/haotian-liu/LLaVA), it stucks with multi-GPUs (2 or 4 A6000), but works with a single GPU. Here's my script: ``` deepspeed --include localhost:2,3 --master_port 61000 llava/train/train_mem.py \ --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \ --deepspeed ./scripts/zero2.json \ --model_name_or_path /drive/models/llava-v1.5-13b \ --version v1 \ --data_path ./playground/data/extract_textvqa.json \ --image_folder ./playground/data \ --vision_tower openai/clip-vit-large-patch14-336 \ --mm_projector_type mlp2x_gelu \ --mm_vision_select_layer -2 \ --mm_use_im_start_end False \ --mm_use_im_patch_token False \ --image_aspect_ratio pad \ --group_by_modality_length True \ --bf16 True \ --output_dir ./checkpoints/llava-v1.5-13b-task-lora-2gpu \ --num_train_epochs 1 \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 4 \ --gradient_accumulation_steps 1 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 50000 \ --save_total_limit 1 \ --learning_rate 2e-4 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --tf32 True \ --model_max_length 2048 \ --gradient_checkpointing True \ --dataloader_num_workers 4 \ --lazy_preprocess True \ --report_to wandb ``` zero2.json: ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "train_micro_batch_size_per_gpu": "auto", "train_batch_size": "auto", "gradient_accumulation_steps": "auto", "zero_optimization": { "stage": 2, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto" } } ``` It stuck after loading the model, here the call stack with `faulthandler.dump_traceback_later(20, repeat=True)`: ``` Timeout (0:00:20)! 
Thread 0x00007f8570ecc700 (most recent call first): File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 324 in wait File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 607 in wait File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/tqdm/_monitor.py", line 60 in run File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 1016 in _bootstrap_inner File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 973 in _bootstrap Thread 0x00007f87f88404c0 (most recent call first): File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1570 in broadcast File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1451 in wrapper File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/comm/torch.py", line 129 in broadcast File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 216 in broadcast File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 116 in log_wrapper File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1012 in _broadcast_model File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1084 in _configure_distributed_model File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 267 in __init__ File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/__init__.py", line 165 in initialize File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537 in _prepare_deepspeed File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198 in prepare File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 1656 in _inner_training_loop File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 1539 in train File "/drive/LLaVA/llava/train/train.py", line 935 in train File "/drive/LLaVA/llava/train/train_mem.py", line 16 in <module> Timeout (0:00:20)! 
Thread 0x00007f5f4cacd700 (most recent call first): File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 324 in wait File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 607 in wait File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/tqdm/_monitor.py", line 60 in run File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 1016 in _bootstrap_inner File "/home/vscode/miniconda3/envs/llava/lib/python3.10/threading.py", line 973 in _bootstrap Thread 0x00007f61d0f444c0 (most recent call first): File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1570 in broadcast File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1451 in wrapper File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/comm/torch.py", line 129 in broadcast File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 216 in broadcast File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/comm/comm.py", line 116 in log_wrapper File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1012 in _broadcast_model File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1084 in _configure_distributed_model File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 267 in __init__ File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/deepspeed/__init__.py", line 165 in initialize File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/accelerate/accelerator.py", line 1537 in _prepare_deepspeed File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/accelerate/accelerator.py", line 1198 in prepare File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 1656 in _inner_training_loop File "/home/vscode/miniconda3/envs/llava/lib/python3.10/site-packages/transformers/trainer.py", line 1539 in train File "/drive/LLaVA/llava/train/train.py", line 935 in train File "/drive/LLaVA/llava/train/train_mem.py", line 16 in <module> ``` ### Expected behavior should do training
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28280/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28279/comments
https://api.github.com/repos/huggingface/transformers/issues/28279/events
https://github.com/huggingface/transformers/pull/28279
2,060,043,780
PR_kwDOCUB6oc5i8EpO
28,279
refactor replace_linear for ease of additions and add unit tests
{ "login": "Titus-von-Koeller", "id": 9048635, "node_id": "MDQ6VXNlcjkwNDg2MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/9048635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Titus-von-Koeller", "html_url": "https://github.com/Titus-von-Koeller", "followers_url": "https://api.github.com/users/Titus-von-Koeller/followers", "following_url": "https://api.github.com/users/Titus-von-Koeller/following{/other_user}", "gists_url": "https://api.github.com/users/Titus-von-Koeller/gists{/gist_id}", "starred_url": "https://api.github.com/users/Titus-von-Koeller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Titus-von-Koeller/subscriptions", "organizations_url": "https://api.github.com/users/Titus-von-Koeller/orgs", "repos_url": "https://api.github.com/users/Titus-von-Koeller/repos", "events_url": "https://api.github.com/users/Titus-von-Koeller/events{/privacy}", "received_events_url": "https://api.github.com/users/Titus-von-Koeller/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[ { "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false } ]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28279). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,706
null
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28279", "html_url": "https://github.com/huggingface/transformers/pull/28279", "diff_url": "https://github.com/huggingface/transformers/pull/28279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28279.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28278/comments
https://api.github.com/repos/huggingface/transformers/issues/28278/events
https://github.com/huggingface/transformers/pull/28278
2,059,261,361
PR_kwDOCUB6oc5i7Zjn
28,278
add test marker to run all tests with @require_bitsandbytes
{ "login": "Titus-von-Koeller", "id": 9048635, "node_id": "MDQ6VXNlcjkwNDg2MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/9048635?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Titus-von-Koeller", "html_url": "https://github.com/Titus-von-Koeller", "followers_url": "https://api.github.com/users/Titus-von-Koeller/followers", "following_url": "https://api.github.com/users/Titus-von-Koeller/following{/other_user}", "gists_url": "https://api.github.com/users/Titus-von-Koeller/gists{/gist_id}", "starred_url": "https://api.github.com/users/Titus-von-Koeller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Titus-von-Koeller/subscriptions", "organizations_url": "https://api.github.com/users/Titus-von-Koeller/orgs", "repos_url": "https://api.github.com/users/Titus-von-Koeller/repos", "events_url": "https://api.github.com/users/Titus-von-Koeller/events{/privacy}", "received_events_url": "https://api.github.com/users/Titus-von-Koeller/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Submitting this as a separate PR, to keep things clean/atomic.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28278). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Really strange, I just committed the marker description string change that @ArthurZucker suggested, but the tests are failing with unrelated stuff. Maybe the GH Runner doesn't have the right hardware for those tests?\r\n\r\n```\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_generation - RuntimeError: \"LayerNormKernelImpl\" not implemented for 'Half'\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_audio_classification - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_glue - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_image_classification - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_semantic_segmentation - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_ctc - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_ctc_adapter - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_speech_recognition_seq2seq - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\nFAILED examples/pytorch/test_pytorch_examples.py::ExamplesTests::test_run_vit_mae_pretraining - ValueError: FP16 Mixed precision training with AMP or APEX (`--fp16`) and FP16 half precision evaluation (`--fp16_full_eval`) can only be used on CUDA or NPU devices or certain XPU devices (with IPEX).\r\n```", "Yes it's unrelated to your PR don't worry! ", "rebasing on main should suffice to fix this ! ", "Hmm it seems to fail on other PRs too, e.g. :https://github.com/huggingface/transformers/pull/28711", "don't have good idea. i just re-triggered that job and let's see" ]
1,703
1,708
1,708
CONTRIBUTOR
null
# What does this PR do? add test marker to run all tests with @require_bitsandbytes ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? Not needed IMO. ## Who can review? @younesbelkada, do I need any documentation for this and if yes, where?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28278/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28278", "html_url": "https://github.com/huggingface/transformers/pull/28278", "diff_url": "https://github.com/huggingface/transformers/pull/28278.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28278.patch", "merged_at": 1708044789000 }
https://api.github.com/repos/huggingface/transformers/issues/28277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28277/comments
https://api.github.com/repos/huggingface/transformers/issues/28277/events
https://github.com/huggingface/transformers/pull/28277
2,059,243,282
PR_kwDOCUB6oc5i7Vtf
28,277
enable training mask2former and maskformer for transformers trainer
{ "login": "SangbumChoi", "id": 34004152, "node_id": "MDQ6VXNlcjM0MDA0MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SangbumChoi", "html_url": "https://github.com/SangbumChoi", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "FYI, original mask2former/maskformer is also not scalar. However, this actually doesn't matter at the `loss.backward()`" ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. 
- TensorFlow: @Rocketknight1 --> Originally, the output of loss_dict was as follows ``` {'loss_mask': tensor([2.1010], device='cuda:1', grad_fn=<MulBackward0>), 'loss_dice': tensor([2.0065], device='cuda:1', grad_fn=<MulBackward0>), 'loss_cross_entropy': tensor(8.3828, device='cuda:1', grad_fn=<MulBackward0>), 'loss_mask_0': tensor([1.5738], device='cuda:1', grad_fn=<MulBackward0>), 'loss_dice_0': tensor([3.0623], device='cuda:1', grad_fn=<MulBackward0>), 'loss_cross_entropy_0': tensor(8.3828, device='cuda:1', grad_fn=<MulBackwa ``` which returns the total loss as a one-dimensional tensor rather than a scalar. However, the transformers trainer https://github.com/huggingface/transformers/blob/3cefac1d974db5e2825a0cb2b842883a628be7a0/src/transformers/trainer.py#L1772 requires a scalar float tensor. From the [example](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/MaskFormer/Fine-tuning/Fine_tune_MaskFormer_on_an_instance_segmentation_dataset_(ADE20k_full).ipynb) you may notice that the output loss was returned with a one-dimensional shape. The reason is that num_masks_pt = torch.as_tensor([num_masks], dtype=torch.float, device=device) returns a one-dimensional tensor, whereas it could just as well be a single scalar tensor. This makes no difference to the training/inference logic, but it enables training with the transformers trainer. @muellerzr and @pacman100
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28277/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28277", "html_url": "https://github.com/huggingface/transformers/pull/28277", "diff_url": "https://github.com/huggingface/transformers/pull/28277.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28277.patch", "merged_at": 1704358405000 }
https://api.github.com/repos/huggingface/transformers/issues/28276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28276/comments
https://api.github.com/repos/huggingface/transformers/issues/28276/events
https://github.com/huggingface/transformers/issues/28276
2,059,190,292
I_kwDOCUB6oc56vMAU
28,276
LLaVa inference crashes with error: Error device-side assert triggered at line 738 in file /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/ops.cu
{ "login": "Meatfucker", "id": 74834323, "node_id": "MDQ6VXNlcjc0ODM0MzIz", "avatar_url": "https://avatars.githubusercontent.com/u/74834323?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Meatfucker", "html_url": "https://github.com/Meatfucker", "followers_url": "https://api.github.com/users/Meatfucker/followers", "following_url": "https://api.github.com/users/Meatfucker/following{/other_user}", "gists_url": "https://api.github.com/users/Meatfucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/Meatfucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Meatfucker/subscriptions", "organizations_url": "https://api.github.com/users/Meatfucker/orgs", "repos_url": "https://api.github.com/users/Meatfucker/repos", "events_url": "https://api.github.com/users/Meatfucker/events{/privacy}", "received_events_url": "https://api.github.com/users/Meatfucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I'm having the same issue while doing inference with the same weights (1.5-13b), also on Linux and GPU. Interestingly, I never have the issue when doing inference with the smaller (7b) model.", "This should have been fixed on main by #28032 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.36.1 - Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35 - Python version: 3.11.5 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When running inference on LLaVa, it sometimes will crash seemingly randomly with the following error. `Error device-side assert triggered at line 738 in file /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/ops.cu /opt/conda/conda-bld/pytorch_1699449183005/work/aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [0,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.` Here is roughly how Im loading the model and doing inference. ```py model_name = "llava-hf/llava-1.5-13b-hf" model = LlavaForConditionalGeneration.from_pretrained(model_name, torch_dtype=torch.float16, low_cpu_mem_usage=True, load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16, bnb_4bit_use_double_quant=True) tokenizer = LlamaTokenizerFast.from_pretrained(model_name) multimodal_tokenizer = AutoProcessor.from_pretrained(model_name) ``` ```py async def generate(self): """function for generating responses with the llm""" llm_defaults = await get_defaults('global') userhistory = await self.load_history() # load the users past history to include in the prompt tempimage = None if self.reroll is True: await self.delete_last_history_pair() self.reroll = False if self.image_url: image_url = self.image_url[0] # Consider the first image URL found response = requests.get(image_url) if response.status_code == 200: image_data = BytesIO(response.content) new_image = Image.open(image_data) tempimage = new_image image_url_pattern = r'\bhttps?://\S+\.(?:png|jpg|jpeg|gif)\S*\b' # Updated regex pattern for image URLs self.prompt = re.sub(image_url_pattern, '', self.prompt) with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=True): # enable flash attention for faster inference with torch.no_grad(): if tempimage: if self.user.id not in self.metatron.llm_user_history or not self.metatron.llm_user_history[self.user.id]: formatted_prompt = f'{llm_defaults["wordsystemprompt"][0]}\n\nUSER:<image>{self.prompt}\nASSISTANT:' # if there is no history, add the system prompt to the beginning else: formatted_prompt = f'{userhistory}\nUSER:<image>{self.prompt}\nASSISTANT:' inputs = self.multimodal_tokenizer(formatted_prompt, tempimage, return_tensors='pt').to("cuda") llm_generate_logger = logger.bind(user=self.user.name, prompt=self.prompt) llm_generate_logger.info("WORDGEN Generate Started.") output = await asyncio.to_thread(self.model.generate, **inputs, max_new_tokens=200, do_sample=False) llm_generate_logger.debug("WORDGEN Generate Completed") result = self.multimodal_tokenizer.decode(output[0], skip_special_tokens=True) else: if self.user.id not in self.metatron.llm_user_history or not self.metatron.llm_user_history[self.user.id]: 
formatted_prompt = f'{llm_defaults["wordsystemprompt"][0]}\n\nUSER:{self.prompt}\nASSISTANT:' # if there is no history, add the system prompt to the beginning else: formatted_prompt = f'{userhistory}\nUSER:{self.prompt}\nASSISTANT:' inputs = self.tokenizer(formatted_prompt, return_tensors='pt').to("cuda") llm_generate_logger = logger.bind(user=self.user.name, prompt=self.prompt) llm_generate_logger.info("WORDGEN Generate Started.") output = await asyncio.to_thread(self.model.generate, **inputs, max_new_tokens=200, do_sample=False) llm_generate_logger.debug("WORDGEN Generate Completed") result = self.tokenizer.decode(output[0], skip_special_tokens=True) response_index = result.rfind("ASSISTANT:") # this and the next line extract the bots response for posting to the channel self.llm_response = result[response_index + len("ASSISTANT:"):].strip() await self.save_history() # save the response to the users history gc.collect() ``` ### Expected behavior I would expect it to infer and return a response as it normally does. Interestingly, this same code never crashes when ran on windows, only on linux.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28276/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28275/comments
https://api.github.com/repos/huggingface/transformers/issues/28275/events
https://github.com/huggingface/transformers/issues/28275
2,058,999,519
I_kwDOCUB6oc56udbf
28,275
Failed to import transformers.modeling_utils
{ "login": "sankexin", "id": 33318353, "node_id": "MDQ6VXNlcjMzMzE4MzUz", "avatar_url": "https://avatars.githubusercontent.com/u/33318353?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sankexin", "html_url": "https://github.com/sankexin", "followers_url": "https://api.github.com/users/sankexin/followers", "following_url": "https://api.github.com/users/sankexin/following{/other_user}", "gists_url": "https://api.github.com/users/sankexin/gists{/gist_id}", "starred_url": "https://api.github.com/users/sankexin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sankexin/subscriptions", "organizations_url": "https://api.github.com/users/sankexin/orgs", "repos_url": "https://api.github.com/users/sankexin/repos", "events_url": "https://api.github.com/users/sankexin/events{/privacy}", "received_events_url": "https://api.github.com/users/sankexin/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "这是来自QQ邮箱的假期自动回复邮件。\n \n您好,我最近正在休假中,无法亲自回复您的邮件。我将在假期结束后,尽快给您回复。" ]
1,703
1,706
null
NONE
null
### System Info RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback): Failed to import transformers.generation.utils because of the following error (look up to see its traceback): /usr/local/lib/python3.8/dist-packages/transformer_engine_extensions.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Traceback (most recent call last): File "app.py", line 15, in <module> from facechain.utils import snapshot_download, check_ffmpeg, set_spawn_method, project_dir, join_worker_data_dir File "/home/wp/sotanv/facechain/facechain/utils.py", line 5, in <module> from modelscope import snapshot_download as ms_snapshot_download File "/usr/local/lib/python3.8/dist-packages/modelscope/__init__.py", line 102, in <module> fix_transformers_upgrade() File "/usr/local/lib/python3.8/dist-packages/modelscope/utils/automodel_utils.py", line 44, in fix_transformers_upgrade from transformers import PreTrainedModel File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1372, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/usr/local/lib/python3.8/dist-packages/transformers/utils/import_utils.py", line 1384, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.modeling_utils because of the following error (look up to see its traceback): Failed to import transformers.generation.utils because of the following error (look up to see its traceback): /usr/local/lib/python3.8/dist-packages/transformer_engine_extensions.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c106detail23torchInternalAssertFailEPKcS2_jS2_RKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE ### Expected behavior fix transformers bug
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28275/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28274/comments
https://api.github.com/repos/huggingface/transformers/issues/28274/events
https://github.com/huggingface/transformers/pull/28274
2,058,753,668
PR_kwDOCUB6oc5i5ttU
28,274
Support : Adding Support for LlamaForQuestionAnswering class
{ "login": "Tanmaypatil123", "id": 77950208, "node_id": "MDQ6VXNlcjc3OTUwMjA4", "avatar_url": "https://avatars.githubusercontent.com/u/77950208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tanmaypatil123", "html_url": "https://github.com/Tanmaypatil123", "followers_url": "https://api.github.com/users/Tanmaypatil123/followers", "following_url": "https://api.github.com/users/Tanmaypatil123/following{/other_user}", "gists_url": "https://api.github.com/users/Tanmaypatil123/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tanmaypatil123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanmaypatil123/subscriptions", "organizations_url": "https://api.github.com/users/Tanmaypatil123/orgs", "repos_url": "https://api.github.com/users/Tanmaypatil123/repos", "events_url": "https://api.github.com/users/Tanmaypatil123/events{/privacy}", "received_events_url": "https://api.github.com/users/Tanmaypatil123/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "hey @NielsRogge @ArthurZucker @younesbelkada can anyone from you guys review this PR ,", "Hey @ArthurZucker, about this comment:\r\n\r\n> Hey! Thanks for contributing! I don't mind having this merged, but I think it should be pretty easy for anyone to build on top of transformers and add their custom logic. If the community has a lot of interest for this let's merge it, usually we wait for the feature request to have interest (cc @NielsRogge )\r\n\r\nI just wanted to clarify that I am interested in this merge. `LlamaForQuestionAnswering` is something I would appreciate as an in-built utility in the library!\r\n\r\nHappy 2024 to everyone 🤗\r\n", "@ArthurZucker can i copy from modeling_falcon. I think they are pretty similar.", "sure, whichever fits best here" ]
1,703
1,706
null
CONTRIBUTOR
null
# Adding Support for `LlamaForQuestionAnswering`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # issue #28265 Added `LlamaForQuestionAnswering` model class in [modeling_llama](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py) with required test cases. Following is an example of using `LlamaForQuestionAnswering` for question answering. ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering import torch model = AutoModelForQuestionAnswering.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.3") tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.3") text = """ The Amazon rainforest (Portuguese: Floresta Amazônica or Amazônia; Spanish: Selva Amazónica, Amazonía or usually Amazonia; French: Forêt amazonienne; Dutch: Amazoneregenwoud), also known in English as Amazonia or the Amazon Jungle, is a moist broadleaf forest that covers most of the Amazon basin of South America. This basin encompasses 7,000,000 square kilometres (2,700,000 sq mi), of which 5,500,000 square kilometres (2,100,000 sq mi) are covered by the rainforest. This region includes territory belonging to nine nations. The majority of the forest is contained within Brazil, with 60% of the rainforest, followed by Peru with 13%, Colombia with 10%, and with minor amounts in Venezuela, Ecuador, Bolivia, Guyana, Suriname and French Guiana. States or departments in four nations contain "Amazonas" in their names. The Amazon represents over half of the planet's remaining rainforests, and comprises the largest and most biodiverse tract of tropical rainforest in the world, with an estimated 390 billion individual trees divided into 16,000 species. """ question = "Which name is also used to describe the Amazon rainforest in English?" inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt") input_ids = inputs["input_ids"].tolist()[0] text_tokens = tokenizer.convert_ids_to_tokens(input_ids) output = model(**inputs) answer_start = torch.argmax( output.start_logits ) # Get the most likely beginning of answer with the argmax of the score answer_end = torch.argmax(output.end_logits) + 1 # Get the most likely end of answer with the argmax of the score answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end])) print(f"Question: {question}") print(f"Answer: {answer}\n") ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). 
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28274", "html_url": "https://github.com/huggingface/transformers/pull/28274", "diff_url": "https://github.com/huggingface/transformers/pull/28274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28274.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28273/comments
https://api.github.com/repos/huggingface/transformers/issues/28273/events
https://github.com/huggingface/transformers/issues/28273
2,058,527,209
I_kwDOCUB6oc56sqHp
28,273
Can't load baichuan2-7b-chat from Huggingface Hub but can load model from local on mac m2
{ "login": "statelesshz", "id": 28150734, "node_id": "MDQ6VXNlcjI4MTUwNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/statelesshz", "html_url": "https://github.com/statelesshz", "followers_url": "https://api.github.com/users/statelesshz/followers", "following_url": "https://api.github.com/users/statelesshz/following{/other_user}", "gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}", "starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions", "organizations_url": "https://api.github.com/users/statelesshz/orgs", "repos_url": "https://api.github.com/users/statelesshz/repos", "events_url": "https://api.github.com/users/statelesshz/events{/privacy}", "received_events_url": "https://api.github.com/users/statelesshz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @statelesshz, thanks for reporting this issue! \r\n\r\nIndeed, I get the same error if I try to run your example script with a GPU. I looks like this is coming from the files on the hub themselves - likely in the definition in [quantizer.py](https://huggingface.co./baichuan-inc/Baichuan2-7B-Chat/blob/main/quantizer.py). \r\n\r\nInstalling bitsandbytes should be enough to get this working. \r\n\r\nIf you think this should work without a bitsandbytes dependency, I suggest opening a discussion on the [model page on the hub](https://huggingface.co./baichuan-inc/Baichuan2-7B-Chat) flagging this issue.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.2 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.8.18 - Huggingface_hub version: 0.20.1 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.2 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? I'm not sure who can help but @amyeroberts could you take a look? thx ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction use demo from baichuan2-7b-chat's model card ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer from transformers.generation.utils import GenerationConfig tokenizer = AutoTokenizer.from_pretrained("baichuan-inc/Baichuan2-7B-Chat", use_fast=False, trust_remote_code=True) model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Chat", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) model.generation_config = GenerationConfig.from_pretrained("baichuan-inc/Baichuan2-7B-Chat") messages = [] messages.append({"role": "user", "content": "解释一下“温故而知新”"}) response = model.chat(tokenizer, messages) print(response) ``` the error output is as follows: ``` config.json: 100%|████████████████████████████████████████████████████████████████████████| 758/758 [00:00<00:00, 306kB/s] configuration_baichuan.py: 100%|█████████████████████████████████████████████████████| 2.45k/2.45k [00:00<00:00, 3.53MB/s] A new version of the following files was downloaded from https://huggingface.co./baichuan-inc/Baichuan2-7B-Chat: - configuration_baichuan.py . Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision. modeling_baichuan.py: 100%|██████████████████████████████████████████████████████████| 33.2k/33.2k [00:00<00:00, 3.43MB/s] generation_utils.py: 100%|███████████████████████████████████████████████████████████| 2.97k/2.97k [00:00<00:00, 4.35MB/s] A new version of the following files was downloaded from https://huggingface.co./baichuan-inc/Baichuan2-7B-Chat: - generation_utils.py . Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision. 
quantizer.py: 100%|██████████████████████████████████████████████████████████████████| 9.07k/9.07k [00:00<00:00, 7.34MB/s] Traceback (most recent call last): File "test.py", line 5, in <module> model = AutoModelForCausalLM.from_pretrained("baichuan-inc/Baichuan2-7B-Chat", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) File "/opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 553, in from_pretrained model_class = get_class_from_dynamic_module( File "/opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 488, in get_class_from_dynamic_module final_module = get_cached_module_file( File "/opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 353, in get_cached_module_file get_cached_module_file( File "/opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 315, in get_cached_module_file modules_needed = check_imports(resolved_module_file) File "/opt/homebrew/Caskroom/miniconda/base/envs/hf/lib/python3.8/site-packages/transformers/dynamic_module_utils.py", line 180, in check_imports raise ImportError( ImportError: This modeling file requires the following packages that were not found in your environment: bitsandbytes. Run `pip install bitsandbytes` ``` But if I download everything manually and load the model locally, everything works fine: ``` import torch from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained(<the-local-path-of-baichuan2-7b-chat>, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True) ``` ### Expected behavior Correctly download model weight from hubs and load it
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28272/comments
https://api.github.com/repos/huggingface/transformers/issues/28272/events
https://github.com/huggingface/transformers/issues/28272
2,058,442,920
I_kwDOCUB6oc56sVio
28,272
Safely extend vocabulary of BPE tokenizers
{ "login": "ekurtulus", "id": 66876436, "node_id": "MDQ6VXNlcjY2ODc2NDM2", "avatar_url": "https://avatars.githubusercontent.com/u/66876436?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekurtulus", "html_url": "https://github.com/ekurtulus", "followers_url": "https://api.github.com/users/ekurtulus/followers", "following_url": "https://api.github.com/users/ekurtulus/following{/other_user}", "gists_url": "https://api.github.com/users/ekurtulus/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekurtulus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekurtulus/subscriptions", "organizations_url": "https://api.github.com/users/ekurtulus/orgs", "repos_url": "https://api.github.com/users/ekurtulus/repos", "events_url": "https://api.github.com/users/ekurtulus/events{/privacy}", "received_events_url": "https://api.github.com/users/ekurtulus/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @ekurtulus, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports." ]
1,703
1,706
null
NONE
null
### Feature request Allow the vocabulary of BPE tokenizers to be extended by further training on a new corpus. ### Motivation This would benefit fine-tuning English models for other languages. ### Your contribution I do not know how to approach this in Transformers.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28272/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28271/comments
https://api.github.com/repos/huggingface/transformers/issues/28271/events
https://github.com/huggingface/transformers/issues/28271
2,058,366,503
I_kwDOCUB6oc56sC4n
28,271
Difference between facebook/dinov2-base and timm/vit_base_patch14_dinov2.lvd142m
{ "login": "lombardata", "id": 39915110, "node_id": "MDQ6VXNlcjM5OTE1MTEw", "avatar_url": "https://avatars.githubusercontent.com/u/39915110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lombardata", "html_url": "https://github.com/lombardata", "followers_url": "https://api.github.com/users/lombardata/followers", "following_url": "https://api.github.com/users/lombardata/following{/other_user}", "gists_url": "https://api.github.com/users/lombardata/gists{/gist_id}", "starred_url": "https://api.github.com/users/lombardata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lombardata/subscriptions", "organizations_url": "https://api.github.com/users/lombardata/orgs", "repos_url": "https://api.github.com/users/lombardata/repos", "events_url": "https://api.github.com/users/lombardata/events{/privacy}", "received_events_url": "https://api.github.com/users/lombardata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@NielsRogge do you have any suggestion?\r\nThanks in advance !", "Hi @lombardata,\r\n\r\nIt's a valid question - thanks for asking. So Timm is a more research-focused library providing access to state-of-the-art image backbones, including recent ones like DINOv2 and SigLIP. The library provides a fully-fledged [train.py](https://github.com/huggingface/pytorch-image-models/blob/main/train.py) script which allows you to reproduce numbers of the original numbers up to a great extent. The library is currently aimed at image classification.\r\n\r\nThe Transformers library on the other hand is more production-focused, aimed at machine learning engineers that would like to put models in production rather than doing cutting edge research on them. It also aims to support various downstream tasks besides image classification.\r\n\r\nUsually, the strategy is to port models from Timm to Transformers if we think they are worthy additions, such that they can then also benefit from tools like the [Optimum](https://huggingface.co./docs/optimum/index) library, which provides ONNX exports of these models, among other optimizations. Another reason is that we usually also add various decoding heads to those models, e.g. [DPT](https://huggingface.co./docs/transformers/model_doc/dpt) in the Transformers library, which is a depth estimation framework, [can now be used with DINOv2 as backbone](https://huggingface.co./models?pipeline_tag=depth-estimation&other=dinov2&sort=trending). The same goes for [DETR](https://huggingface.co./docs/transformers/model_doc/detr) for instance, which leverages ResNet, Swin Transformer as backbones.", "Thank you very much for your answer.\r\n\r\n> Usually, the strategy is to port models from Timm to Transformers if we think they are worthy additions, \r\n\r\nSo in terms of performance, if I use a Timm model or the equivalent Transformer model I'm supposed to get the same results?", "@lombardata It depends exactly what you mean by performance. \r\n\r\nIn terms of running time, probably not. Each library has it's own way of structuring the models which means one is likely faster than the other. In terms of inference from a pretrained checkpoint, yes, we'd expect them to be very close. I wouldn't be surprised if there were some small differences however because all sorts of tricky things can influence values. In terms of fine-tuning, I would expect some differences. Timm has a greater emphasis on replication of training results and so it more likely to match the original model in this case. ", "Thank you very much !" ]
1,703
1,706
1,706
NONE
null
### Feature request Hi all, this question may be very basic, but I do not understand the difference between these two models: facebook/dinov2-base and timm/vit_base_patch14_dinov2.lvd142m (obviously the latter without the linear classifier on top). In my understanding, both are implementations of the DINOv2 base model pretrained on the LVD-142M dataset with the self-supervised DINOv2 method. Is the only difference the underlying library? If I fine-tune an image classification network with the DINOv2 backbone, will both give me the same results? Thank you in advance for the explanation! ### Motivation Choosing the right backbone before training image classification models on top of DINOv2 backbones. ### Your contribution Clarify the similarity between DINOv2 models.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28271/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28271/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28270/comments
https://api.github.com/repos/huggingface/transformers/issues/28270/events
https://github.com/huggingface/transformers/issues/28270
2,058,191,178
I_kwDOCUB6oc56rYFK
28,270
"Trying to create tensor with negative dimension -686245457367107674" when resuming from checkpoint with deepspeed
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
closed
false
null
[]
[ "When I try without LoRA I get a different (maybe more useful?) error message\r\n\r\n```\r\n File \"/opt/venv/lib/python3.11/site-packages/transformers/trainer.py\", line 1827, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/opt/venv/lib/python3.11/site-packages/accelerate/data_loader.py\", line 646, in __iter__\r\n next_batch, next_batch_info = self._fetch_batches(main_iterator)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/venv/lib/python3.11/site-packages/accelerate/data_loader.py\", line 619, in _fetch_batches\r\n broadcast_object_list(batch_info)\r\n File \"/opt/venv/lib/python3.11/site-packages/accelerate/utils/operations.py\", line 491, in broadcast_object_list\r\n torch.distributed.broadcast_object_list(object_list, src=from_process)\r\n File \"/opt/venv/lib/python3.11/site-packages/torch/distributed/c10d_logger.py\", line 75, in wrapper\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/venv/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py\", line 2526, in broadcast_object_list\r\n object_tensor = torch.empty( # type: ignore[call-overload]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Trying to create tensor with negative dimension -686245457367107674: [-686245457367107674]\r\n```", "@pacman100 \r\n\r\nI have a repro with 2xa100\r\n\r\n```python\r\nimport logging\r\nimport os\r\nimport shutil\r\nimport tempfile\r\nimport datasets\r\nimport torch\r\nimport torch.distributed\r\nimport transformers\r\n\r\noutput = 'test'\r\nos.makedirs(output, exist_ok=True)\r\nlocal_output = tempfile.mkdtemp()\r\ntraining_args = transformers.TrainingArguments(\r\n local_output, \r\n per_device_train_batch_size=1, \r\n save_steps=2, \r\n max_steps=3, \r\n deepspeed={'zero_optimization': {'stage': 3}, 'train_micro_batch_size_per_gpu': 'auto'}, \r\n report_to=[])\r\nif training_args.local_rank <= 0:\r\n logging.basicConfig(level=logging.INFO, format='%(levelname)-8s | %(message)s')\r\n transformers.logging.set_verbosity(logging.INFO)\r\n # Copy from remote (eg. cloud storage) to local. Here we use two local dirs as an example.\r\n shutil.copytree(output, local_output, dirs_exist_ok=True)\r\nif torch.distributed.is_initialized():\r\n torch.distributed.barrier()\r\n\r\nlast_checkpoint = transformers.trainer_utils.get_last_checkpoint(local_output)\r\ndataset = {'input_ids': torch.tensor([0]), 'labels': torch.tensor([0]), 'attention_mask': torch.tensor([1])}\r\ntrain_dataset = datasets.IterableDataset.from_generator(lambda: (dataset for _ in range(100)))\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')\r\ntrainer = transformers.Trainer(args=training_args, model=model, train_dataset=train_dataset)\r\ntrainer.train(resume_from_checkpoint=last_checkpoint)\r\nif training_args.local_rank <= 0:\r\n shutil.copytree(local_output, output, dirs_exist_ok=True)\r\n```\r\n\r\nrequirements.txt:\r\n```\r\n--pre --extra-index-url https://download.pytorch.org/whl/nightly/cu118\r\naccelerate==0.25.0\r\ndatasets==2.15.0\r\ndeepspeed==0.12.6\r\ntorch==2.3.0.dev20231229+cu118\r\ntransformers==4.36.2\r\n```\r\n\r\nIn a new folder, save as test.py. Run with `deepspeed test.py`. First run will initialize and run successfully and save checkpoint. Second run will try to load the checkpoint and crash.\r\n\r\nThis only crashes with deepspeed with multiple GPUs. Running with `python3 test.py` or `deepspeed --num_gpus=1 test.py` works fine. 
Additionally for some reason the copy from `output` to `local_output` is somehow important, if you remove `local_output` and all the `copytree` calls and just use `output` everywhere the bug also does not happen.\r\n\r\nAdditionally, if you use a Dataset instead of IterableDataset then it doesn't crash but just hangs forever while loading.\r\n```\r\ndataset = {'input_ids': torch.tensor([[0]]), 'labels': torch.tensor([[0]]), 'attention_mask': torch.tensor([[1]])}\r\ntrain_dataset = datasets.Dataset.from_dict(dataset)\r\n```", "Bump", "@pacman100 this is still failing with newest versions\r\n\r\nupdated requirements.txt\r\n```\r\n--pre --extra-index-url https://download.pytorch.org/whl/nightly/cu121\r\naccelerate==0.26.1\r\ndatasets==2.16.1\r\ndeepspeed==0.13.0\r\ntorch==2.3.0.dev20240122+cu121\r\ntransformers==4.37.0\r\n```", "Updated title to be more relevant and trying another ping", "cc @SunMarc if you can have a look as well 😉 ", "Hello, Thank you @jonathanasdf for the minimal reproducer. I can replicate it but this seems unrelated with DeepSpeed, the error stems from resuming dataloader cc @muellerzr . Also, when not using DeepSpeed I notice a complete hang:\r\n```\r\nTraceback (most recent call last):\r\n File \"/raid/sourab/temp/issues/transformers/issue_28270.py\", line 33, in <module>\r\n trainer.train(resume_from_checkpoint=last_checkpoint)\r\n File \"/raid/sourab/transformers/src/transformers/trainer.py\", line 1561, in train\r\n return inner_training_loop(\r\n File \"/raid/sourab/transformers/src/transformers/trainer.py\", line 1862, in _inner_training_loop\r\n for step, inputs in enumerate(epoch_iterator):\r\n File \"/raid/sourab/accelerate/src/accelerate/data_loader.py\", line 654, in __iter__\r\n next_batch, next_batch_info = self._fetch_batches(main_iterator)\r\n File \"/raid/sourab/accelerate/src/accelerate/data_loader.py\", line 627, in _fetch_batches\r\n broadcast_object_list(batch_info)\r\n File \"/raid/sourab/accelerate/src/accelerate/utils/operations.py\", line 506, in broadcast_object_list\r\n torch.distributed.broadcast_object_list(object_list, src=from_process)\r\n File \"/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/distributed/c10d_logger.py\", line 72, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py\", line 2422, in broadcast_object_list\r\n object_tensor = torch.empty( # type: ignore[call-overload]\r\nRuntimeError: Trying to create tensor with negative dimension -868913248929906688: [-868913248929906688]\r\n[2024-02-06 09:00:56,748] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 4105240\r\n[2024-02-06 09:00:56,983] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 4105241\r\n[2024-02-06 09:00:56,984] [ERROR] [launch.py:321:sigkill_handler] ['/raid/sourab/miniconda3/envs/hf/bin/python', '-u', 'issue_28270.py', '--local_rank=1'] exits with return code = 1\r\n```\r\n\r\nAlso, when directly using the output directory without the copying logic between temp dir and output dir, everything works as expected.", "Hi @pacman100, thank you for looking into it.\r\n\r\nI don't see how the code you added could be it. The if statement you added is in a block that is already under another `if training_args.local_rank <= 0:`", "oh, yes, didn't notice it was already in the `if training_args.local_rank <= 0:`. 
I ran the below code and it worked fine:\r\n\r\n```\r\nimport logging\r\nimport os\r\nimport shutil\r\nimport tempfile\r\nimport datasets\r\nimport torch\r\nimport torch.distributed\r\nimport transformers\r\n\r\noutput = 'test'\r\nos.makedirs(output, exist_ok=True)\r\nlocal_output = \"test-2\"\r\nos.makedirs(local_output, exist_ok=True)\r\ntraining_args = transformers.TrainingArguments(\r\n local_output, \r\n per_device_train_batch_size=1, \r\n save_steps=2, \r\n max_steps=3, \r\n deepspeed={'zero_optimization': {'stage': 3}, 'train_micro_batch_size_per_gpu': 'auto'}, \r\n report_to=[])\r\nif training_args.local_rank <= 0:\r\n logging.basicConfig(level=logging.INFO, format='%(levelname)-8s | %(message)s')\r\n transformers.logging.set_verbosity(logging.INFO)\r\n # Copy from remote (eg. cloud storage) to local. Here we use two local dirs as an example.\r\n shutil.copytree(output, local_output, dirs_exist_ok=True)\r\nif torch.distributed.is_initialized():\r\n torch.distributed.barrier()\r\n\r\nlast_checkpoint = transformers.trainer_utils.get_last_checkpoint(local_output)\r\ndataset = {'input_ids': torch.tensor([0]), 'labels': torch.tensor([0]), 'attention_mask': torch.tensor([1])}\r\ntrain_dataset = datasets.IterableDataset.from_generator(lambda: (dataset for _ in range(100)))\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')\r\ntrainer = transformers.Trainer(args=training_args, model=model, train_dataset=train_dataset)\r\ntrainer.train(resume_from_checkpoint=last_checkpoint)\r\nif training_args.local_rank <= 0:\r\n shutil.copytree(local_output, output, dirs_exist_ok=True)\r\n```\r\n\r\nSo, it seems to be the temp directory related issue? Because if I change the code to \r\n\r\n```diff\r\nimport logging\r\nimport os\r\nimport shutil\r\nimport tempfile\r\nimport datasets\r\nimport torch\r\nimport torch.distributed\r\nimport transformers\r\n\r\noutput = 'test'\r\nos.makedirs(output, exist_ok=True)\r\n+ # local_output = \"test-2\"\r\n+ # os.makedirs(local_output, exist_ok=True)\r\n+ local_output = tempfile.mkdtemp()\r\n+ print(local_output)\r\ntraining_args = transformers.TrainingArguments(\r\n local_output, \r\n per_device_train_batch_size=1, \r\n save_steps=2, \r\n max_steps=3, \r\n deepspeed={'zero_optimization': {'stage': 3}, 'train_micro_batch_size_per_gpu': 'auto'}, \r\n report_to=[])\r\nif training_args.local_rank <= 0:\r\n logging.basicConfig(level=logging.INFO, format='%(levelname)-8s | %(message)s')\r\n transformers.logging.set_verbosity(logging.INFO)\r\n # Copy from remote (eg. cloud storage) to local. 
Here we use two local dirs as an example.\r\n shutil.copytree(output, local_output, dirs_exist_ok=True)\r\nif torch.distributed.is_initialized():\r\n torch.distributed.barrier()\r\n\r\nlast_checkpoint = transformers.trainer_utils.get_last_checkpoint(local_output)\r\ndataset = {'input_ids': torch.tensor([0]), 'labels': torch.tensor([0]), 'attention_mask': torch.tensor([1])}\r\ntrain_dataset = datasets.IterableDataset.from_generator(lambda: (dataset for _ in range(100)))\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained('TinyLlama/TinyLlama-1.1B-Chat-v0.6')\r\ntrainer = transformers.Trainer(args=training_args, model=model, train_dataset=train_dataset)\r\ntrainer.train(resume_from_checkpoint=last_checkpoint)\r\nif training_args.local_rank <= 0:\r\n shutil.copytree(local_output, output, dirs_exist_ok=True)\r\n```\r\n\r\nI see 2 different directories being created by each process instead of all processes using the same directory:\r\n```\r\n/tmp/tmpvmspza_x\r\n/tmp/tmpto8s4e6w\r\n[2024-02-06 10:18:40,312] [INFO] [real_accelerator.py:161:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n```", "Ah! that makes sense. The part in my code that makes the temp directories was last changed in Dec 2023, and back then it worked fine. So I guess there might be some change in the deepspeed launcher at some point that caused each process to create a different temp directory name...\r\n\r\nOr maybe at some point process 0 was in charge of orchestrating the checkpoint loading and the checkpoint dir from process 0 was broadcast to all processes and now each process tries to load its own shard and so sees the different folder.\r\n\r\nFrom a user code standpoint, because using torchrun/deepspeed causes the script to be executed multiple times, I'm not sure how I could define a shared temporary folder name to be used by all processes (other than creating a launch.sh script that creates the temp folder and passes it as an input arg which is kinda ugly). I'm not sure if anyone has thoughts here on how to do it. Probably if trainer.train() is called with different values for resume_from_checkpoint, there should be an error? Maybe we should be calling it with `resume_from_checkpoint=last_checkpoint if local_rank == 0 else None`?", "This issue happens in resuming dataloader part as mentioned https://github.com/huggingface/transformers/issues/28270#issuecomment-1928988185. Conventionally, users need to provide the same output directory to the Trainer.", "I agree that was the bug in my code and happy to close this issue. Just wanted to see what we can do here to improve the user experience (eg. raise error when output directories are not the same, provide a function that can generate the same random string across processes, etc)", "The specific error message with the dataloader is a red herring. The real issue is rank 0 getting `resume_from_checkpoint=something` while rank 1 getting `resume_from_checkpoint=None` means rank 0 is blocked on checkpoint loading while rank 1 attempts to start training. \r\n\r\nAdding a `dist.barrier()` here: https://github.com/huggingface/transformers/blob/6529a5b5c13210b41bcd87c555c72696cd7083a5/src/transformers/trainer.py#L1740 results in more expected behavior which is that it just hangs\r\n\r\nAnyways it's pretty clear what's going on so I'll close this now. Thank you for helping figure it out." ]
1,703
1,707
1,707
NONE
null
### System Info pytorch nightly, latest version of everything ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` Attempting to resume from /scratch/tmpk3ri55kw/checkpoint-400 [2023-12-28 09:08:50,967] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /scratch/tmpk3ri55kw/checkpoint-400/global_step400/zero_pp_rank_0_mp_rank_00_model_states.pt... [2023-12-28 09:08:50,985] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /scratch/tmpk3ri55kw/checkpoint-400/global_step400/zero_pp_rank_0_mp_rank_00_model_states.pt. [2023-12-28 09:08:50,985] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /scratch/tmpk3ri55kw/checkpoint-400/global_step400/zero_pp_rank_0_mp_rank_00_model_states.pt... [2023-12-28 09:08:51,002] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /scratch/tmpk3ri55kw/checkpoint-400/global_step400/zero_pp_rank_0_mp_rank_00_model_states.pt. [2023-12-28 09:08:51,036] [INFO] [torch_checkpoint_engine.py:27:load] [Torch] Loading checkpoint from /scratch/tmpk3ri55kw/checkpoint-400/global_step400/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... [2023-12-28 09:08:51,117] [INFO] [torch_checkpoint_engine.py:29:load] [Torch] Loaded checkpoint from /scratch/tmpk3ri55kw/checkpoint-400/global_step400/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. [2023-12-28 09:08:51,117] [INFO] [engine.py:2988:_get_all_zero_checkpoint_state_dicts] successfully read 2 ZeRO state_dicts for rank 0 [2023-12-28 09:08:51,159] [INFO] [engine.py:2920:_load_zero_checkpoint] loading 2 zero partition checkpoints for rank 0 Traceback (most recent call last): File "/finetune.py", line 630, in <module> main() File "/finetune.py", line 523, in main trainer.train(resume_from_checkpoint=last_checkpoint) File "/opt/venv/lib/python3.11/site-packages/transformers/trainer.py", line 1543, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.11/site-packages/transformers/trainer.py", line 1827, in _inner_training_loop for step, inputs in enumerate(epoch_iterator): File "/opt/venv/lib/python3.11/site-packages/accelerate/data_loader.py", line 639, in __iter__ next_batch, next_batch_info = self._fetch_batches(main_iterator) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.11/site-packages/accelerate/data_loader.py", line 612, in _fetch_batches broadcast_object_list(batch_info) File "/opt/venv/lib/python3.11/site-packages/accelerate/utils/operations.py", line 490, in broadcast_object_list torch.distributed.broadcast_object_list(object_list, src=from_process) File "/opt/venv/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 75, in wrapper return func(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/opt/venv/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2526, in broadcast_object_list object_tensor = torch.empty( # type: ignore[call-overload] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory. ``` ### Expected behavior This workflow used to work before at some point. Not sure what combinations of things it worked/didn't work for. Training from scratch rather than resuming works fine. 
It's just resuming from checkpoint that causes this very strange crash.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28270/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28269/comments
https://api.github.com/repos/huggingface/transformers/issues/28269/events
https://github.com/huggingface/transformers/pull/28269
2,057,984,445
PR_kwDOCUB6oc5i3HfK
28,269
Silently ignore the FileNotFoundError exception when moving the staging output dir
{ "login": "zeyugao", "id": 6374697, "node_id": "MDQ6VXNlcjYzNzQ2OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6374697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zeyugao", "html_url": "https://github.com/zeyugao", "followers_url": "https://api.github.com/users/zeyugao/followers", "following_url": "https://api.github.com/users/zeyugao/following{/other_user}", "gists_url": "https://api.github.com/users/zeyugao/gists{/gist_id}", "starred_url": "https://api.github.com/users/zeyugao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zeyugao/subscriptions", "organizations_url": "https://api.github.com/users/zeyugao/orgs", "repos_url": "https://api.github.com/users/zeyugao/repos", "events_url": "https://api.github.com/users/zeyugao/events{/privacy}", "received_events_url": "https://api.github.com/users/zeyugao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think we should opt for a solution that avoids the need for this PR. And one that works for multi-node setups with and without shared file system.", "may I ask whether we can get a solution for this in main branch? this is also an issue to block us from using 4.36+ for this multi node multi gpu training. thanks. + @muellerzr ", "Agreed with @peter-sk, we're looking at this ASAP but just ignoring it is not the right solution. " ]
1,703
1,705
1,705
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to https://github.com/huggingface/transformers/pull/28009 In this PR, it tries to mitigate the problem of inconsistency of filesystem in multiple node training. That is, if we rename the dir in one node, the existence of the staging dir may not propagate to other node in a shared filesystem scenario. That is, the filesystem is not a reliable synchronization mechanism compared to cuda. As shown in the figure, in `node-0`, after `os.path.exists(staging_output_dir)` becoming `False`, on the other node, it is still `True`. <img width="1192" alt="image" src="https://github.com/huggingface/transformers/assets/6374697/09ba4c66-472c-4e6f-807c-dbdb38d87768"> In this PR, I catch the `FileNotFoundError` exception to mitigate the issue. However, maybe we can just do renaming in the `local main` or `main` process, instead of every process to avoid using `try catch` that can conceal the potential unexpected error. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @muellerzr and @pacman100 <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28269/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28269", "html_url": "https://github.com/huggingface/transformers/pull/28269", "diff_url": "https://github.com/huggingface/transformers/pull/28269.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28269.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28268/comments
https://api.github.com/repos/huggingface/transformers/issues/28268/events
https://github.com/huggingface/transformers/pull/28268
2,057,959,353
PR_kwDOCUB6oc5i3CNC
28,268
[WIP] Add NarrowBERT to HF
{ "login": "lihaoxin2020", "id": 77715908, "node_id": "MDQ6VXNlcjc3NzE1OTA4", "avatar_url": "https://avatars.githubusercontent.com/u/77715908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lihaoxin2020", "html_url": "https://github.com/lihaoxin2020", "followers_url": "https://api.github.com/users/lihaoxin2020/followers", "following_url": "https://api.github.com/users/lihaoxin2020/following{/other_user}", "gists_url": "https://api.github.com/users/lihaoxin2020/gists{/gist_id}", "starred_url": "https://api.github.com/users/lihaoxin2020/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lihaoxin2020/subscriptions", "organizations_url": "https://api.github.com/users/lihaoxin2020/orgs", "repos_url": "https://api.github.com/users/lihaoxin2020/repos", "events_url": "https://api.github.com/users/lihaoxin2020/events{/privacy}", "received_events_url": "https://api.github.com/users/lihaoxin2020/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey! Thanks for wanting to contribute I would recommend you to put the model on the hub following [this tutorial](https://huggingface.co./docs/transformers/custom_models) to make it easily available 🤗", "Hi. I've uploaded pretrained models to the hub. With the last commit i think I've passed all tests i can see. please let me know anything i missed. Thanks! @ArthurZucker and @younesbelkada" ]
1,703
1,706
null
NONE
null
# What does this PR do? Add implementation of NarrowBERT. - original repo: https://github.com/lihaoxin2020/narrowbert - paper: https://arxiv.org/abs/2301.04761v2 ## Who can review? - text models: @ArthurZucker and @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28268/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28268", "html_url": "https://github.com/huggingface/transformers/pull/28268", "diff_url": "https://github.com/huggingface/transformers/pull/28268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28268.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28267/comments
https://api.github.com/repos/huggingface/transformers/issues/28267/events
https://github.com/huggingface/transformers/pull/28267
2,057,826,363
PR_kwDOCUB6oc5i2m_t
28,267
fix documentation for zero_shot_object_detection
{ "login": "not-lain", "id": 70411813, "node_id": "MDQ6VXNlcjcwNDExODEz", "avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/not-lain", "html_url": "https://github.com/not-lain", "followers_url": "https://api.github.com/users/not-lain/followers", "following_url": "https://api.github.com/users/not-lain/following{/other_user}", "gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}", "starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/not-lain/subscriptions", "organizations_url": "https://api.github.com/users/not-lain/orgs", "repos_url": "https://api.github.com/users/not-lain/repos", "events_url": "https://api.github.com/users/not-lain/events{/privacy}", "received_events_url": "https://api.github.com/users/not-lain/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28267). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) fixes broken documentation for [zero_shot_object_detection](https://huggingface.co./docs/transformers/tasks/zero_shot_object_detection) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @stevhliu and @MKhalusova can help out
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28267/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28267", "html_url": "https://github.com/huggingface/transformers/pull/28267", "diff_url": "https://github.com/huggingface/transformers/pull/28267.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28267.patch", "merged_at": 1704302435000 }
https://api.github.com/repos/huggingface/transformers/issues/28266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28266/comments
https://api.github.com/repos/huggingface/transformers/issues/28266/events
https://github.com/huggingface/transformers/pull/28266
2,057,792,654
PR_kwDOCUB6oc5i2f6I
28,266
Don't allow passing `load_in_8bit` and `load_in_4bit` at the same time
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey folks, should we merge this PR? :D ", "Definitely, @Titus-von-Koeller if you don't see any issue what do you think of merging this PR?\r\ncc @amyeroberts @ArthurZucker for a core-maintainer review before merging" ]
1,703
1,706
1,706
MEMBER
null
# What does this PR do? This PR disallows having both `load_in_8bit` and `load_in_4bit` set simultaneously. This would help avoid unexpected behaviors or broken configurations. ## Example **Without the PR** ``` from transformers import BitsAndBytesConfig BitsAndBytesConfig(load_in_8bit=True, load_in_4bit=True) ``` works fine (no error is raised) **With the PR** ``` from transformers import BitsAndBytesConfig BitsAndBytesConfig(load_in_8bit=True, load_in_4bit=True) ValueError: load_in_4bit and load_in_8bit are both True, but only one can be used at the same time ``` errors as I would expect
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28266/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28266", "html_url": "https://github.com/huggingface/transformers/pull/28266", "diff_url": "https://github.com/huggingface/transformers/pull/28266.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28266.patch", "merged_at": 1706575421000 }
https://api.github.com/repos/huggingface/transformers/issues/28265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28265/comments
https://api.github.com/repos/huggingface/transformers/issues/28265/events
https://github.com/huggingface/transformers/issues/28265
2,057,687,233
I_kwDOCUB6oc56pdDB
28,265
Add `LlamaForQuestionAnswering`
{ "login": "Nkluge-correa", "id": 88645425, "node_id": "MDQ6VXNlcjg4NjQ1NDI1", "avatar_url": "https://avatars.githubusercontent.com/u/88645425?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nkluge-correa", "html_url": "https://github.com/Nkluge-correa", "followers_url": "https://api.github.com/users/Nkluge-correa/followers", "following_url": "https://api.github.com/users/Nkluge-correa/following{/other_user}", "gists_url": "https://api.github.com/users/Nkluge-correa/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nkluge-correa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nkluge-correa/subscriptions", "organizations_url": "https://api.github.com/users/Nkluge-correa/orgs", "repos_url": "https://api.github.com/users/Nkluge-correa/repos", "events_url": "https://api.github.com/users/Nkluge-correa/events{/privacy}", "received_events_url": "https://api.github.com/users/Nkluge-correa/received_events", "type": "User", "site_admin": false }
[ { "id": 1990918270, "node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue", "name": "Good First Issue", "color": "bbf794", "default": false, "description": "" } ]
open
false
null
[]
[ "Hey @NielsRogge I would like to work on this issue .", "@ArthurZucker @NielsRogge Is this feature still requested?\r\nI can work on it", "Hey @nakranivaibhav , as you can see, @Tanmaypatil123 has already started working on it, let's not duplicate work ! 🤗 Unless the PR is not updated in a week or so, feel free to take over, starting from the review I did 😉 ", "@ArthurZucker Alright, I'll keep an 👀 on it.\r\n", "@ArthurZucker Can i take the issue now?\r\n", "Sure, just feel free to open a PR and take into account my reviews! " ]
1,703
1,706
null
NONE
null
### Feature request Add a `LlamaForQuestionAnswering` class to the [`modeling_llama.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py) so Llama models have `AutoModelForQuestionAnswering` support (by also adding Llama-style models to the `MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES ` in the [`modeling_auto.py`](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/auto/modeling_auto.py#L1343) file. ### Motivation 1 - Evaluation benchmarks like [Squad](https://huggingface.co./datasets/squad_v1_pt) or [FaQUAD](https://huggingface.co./datasets/eraldoluis/faquad) are commonly used to evaluate language models. 2 - Many decoder-only transformers ([BLOOM](https://huggingface.co./docs/transformers/model_doc/bloom), [Falcon](https://huggingface.co./docs/transformers/model_doc/falcon), [OpenAI GPT-2](https://huggingface.co./docs/transformers/model_doc/gpt2), [GPT Neo](https://huggingface.co./docs/transformers/model_doc/gpt_neo), [GPT NeoX](https://huggingface.co./docs/transformers/model_doc/gpt_neox), [GPT-J](https://huggingface.co./docs/transformers/model_doc/gptj), etc.) have support for the `AutoModelForQuestionAnswering`. 3 - Creating a fine-tuning/evaluation procedure using things like `AutoModelForQuestionAnswering` and `evaluate.load('squad')` is very simple, making these features very helpful and desirable. 4 - On the contrary, if one cannot use `AutoModelForQuestionAnswering`, like in the Llama style models, everything becomes more difficult. Hence, I would like to request the addition of a `LlamaForQuestionAnswering` class to the `modeling_llama.py` file. Hence, we could all easily perform experiments with Llama models and squad-style Q&A benchmarks: ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering model = AutoModelForQuestionAnswering.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.3") tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.3") ``` ### Your contribution I think, as suggested by [nielsr](https://discuss.huggingface.co/u/nielsr) in the [forum](https://discuss.huggingface.co/t/llama-2-support-for-automodelforquestionanswering/66906), we can use the [`GptjForQuestionAnswering`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/gptj/modeling_gptj.py#L1059) as a starting point, adding a `LlamaForQuestionAnswering` to the [`modeling_llama.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py) file: ```python @add_start_docstrings( """ The Llama 2 Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). 
""", LLAMA_START_DOCSTRING, ) class LlamaForQuestionAnswering(LlamaPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.transformer = LlamaModel(config) self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) # Model parallel self.model_parallel = False self.device_map = None # Initialize weights and apply final processing self.post_init() @add_start_docstrings_to_model_forward(LLAMA_START_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( checkpoint=_CHECKPOINT_FOR_DOC, output_type=QuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC, real_checkpoint=_REAL_CHECKPOINT_FOR_DOC, ) def forward( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.FloatTensor] = None, token_type_ids: Optional[torch.LongTensor] = None, position_ids: Optional[torch.LongTensor] = None, head_mask: Optional[torch.FloatTensor] = None, inputs_embeds: Optional[torch.FloatTensor] = None, start_positions: Optional[torch.LongTensor] = None, end_positions: Optional[torch.LongTensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, ) -> Union[Tuple, QuestionAnsweringModelOutput]: r""" start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.transformer( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = outputs[0] logits = self.qa_outputs(sequence_output) start_logits, end_logits = logits.split(1, dim=-1) start_logits = start_logits.squeeze(-1).contiguous() end_logits = end_logits.squeeze(-1).contiguous() total_loss = None if start_positions is not None and end_positions is not None: # If we are on multi-GPU, split add a dimension if len(start_positions.size()) > 1: start_positions = start_positions.squeeze(-1).to(start_logits.device) if len(end_positions.size()) > 1: end_positions = end_positions.squeeze(-1).to(end_logits.device) # sometimes the start/end positions are outside our model inputs, we ignore these terms ignored_index = start_logits.size(1) start_positions = start_positions.clamp(0, ignored_index) end_positions = end_positions.clamp(0, ignored_index) loss_fct = CrossEntropyLoss(ignore_index=ignored_index) start_loss = loss_fct(start_logits, start_positions) end_loss = loss_fct(end_logits, end_positions) total_loss = (start_loss + end_loss) / 2 if not return_dict: output = (start_logits, end_logits) + outputs[2:] return ((total_loss,) + output) if total_loss is not None else output return QuestionAnsweringModelOutput( loss=total_loss, start_logits=start_logits, end_logits=end_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ``` and then, we add the `Llama` models to the `MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES ` in the [`modeling_auto.py`](https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/models/auto/modeling_auto.py#L1343) file: ```python MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES = OrderedDict( [ # Model for Question Answering mapping ("open-llama", "OpenLlamaModel"), ("llama", "LlamaModel"), ("code_llama", "LlamaModel"), ... ``` I can try to make these changes if no one more qualified wants to take the job 😅.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28265/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28264/comments
https://api.github.com/repos/huggingface/transformers/issues/28264/events
https://github.com/huggingface/transformers/issues/28264
2,057,538,696
I_kwDOCUB6oc56o4yI
28,264
Accuracy regression of ViT
{ "login": "blzheng", "id": 69951214, "node_id": "MDQ6VXNlcjY5OTUxMjE0", "avatar_url": "https://avatars.githubusercontent.com/u/69951214?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blzheng", "html_url": "https://github.com/blzheng", "followers_url": "https://api.github.com/users/blzheng/followers", "following_url": "https://api.github.com/users/blzheng/following{/other_user}", "gists_url": "https://api.github.com/users/blzheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/blzheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blzheng/subscriptions", "organizations_url": "https://api.github.com/users/blzheng/orgs", "repos_url": "https://api.github.com/users/blzheng/repos", "events_url": "https://api.github.com/users/blzheng/events{/privacy}", "received_events_url": "https://api.github.com/users/blzheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @blzheng, thanks for raising this issue! \r\n\r\n#19796 has been merged in for over a year now, and there have been a few subsequent updates to the image processing logic. Could you confirm how you narrowed it down to this commit? \r\n\r\nWhat performance do you get, running on main with different seeds? ", "Hi @amyeroberts , we observed accuracy drop from 0.8131 (transformers==4.18.0) to 0.8033 (transformers==4.28.1), then I narrowed down to this commit with git bisect. This issue can be reproduced stably by running the following command. Even with the latest codebase, this issue still exists.\r\n\"python transformers/examples/pytorch/image-classification/run_image_classification.py --model_name_or_path google/vit-base-patch16-224 --do_eval --dataset_name imagenet-1k --per_device_eval_batch_size 1 --remove_unused_columns False --output_dir ./\"", "@blzheng Thanks for confirming. \r\n\r\nThe reason for the change is because the processing logic in the image classification script was updated to reflect that of the model's image processor.\r\n\r\nPreviously, `size` could be an int, and was passed directly to `torchvision.transforms.Resize`. If `size` is an int (which it is for many model's e.g. [here for a vit checkpoint](https://huggingface.co./google/vit-base-patch16-224/blob/3f49326eb077187dfe1c2a2bb15fbd74e6ab91e3/preprocessor_config.json#L14)), then the [shortest edge of the image is resized to `size` and the other edge rescaled to keep the image's aspect ratio](https://pytorch.org/vision/main/generated/torchvision.transforms.Resize.html).\r\n\r\nHowever, in the now-deprecated feature extractors (superceeded in #19796), the [default behaviour](https://github.com/huggingface/transformers/blob/aa4a0f8ef37eb5d42b4e3810f37e554585c90d41/src/transformers/image_utils.py#L543) if `size` was an int, was to resize the image to `(size, size)`. This [was the case of ViT](https://github.com/amyeroberts/transformers/blob/a23819ed6ab852df6d8f04815306440531418260/src/transformers/models/vit/feature_extraction_vit.py#L144). \r\n\r\nThe script now reflects the behaviour of the image processor, even when using torchvision transforms. ", "@amyeroberts Thanks for your information. \r\nNow that the changes in image processing logic are reasonable, does it mean the accuracy drop is expected?", "@blzheng It depends what you mean by \"expected\". The change in the logic means that the aspect ratio of the input images is different, and so one would expect there to be a performance difference. Even though it's not in-line with the processing of the model's image processors, the previous processing might bring better performance because it preserves the true aspect of images and hence shape/dimensions of the subjects in the image (this is speculation).", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,703
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.25.0.dev0 - Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.35 - Python version: 3.9.17 - Huggingface_hub version: 0.19.4 - PyTorch version (GPU?): 2.0.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Accuracy regression caused by https://github.com/huggingface/transformers/pull/19796 Reproduce command: python transformers/examples/pytorch/image-classification/run_image_classification.py --model_name_or_path google/vit-base-patch16-224 --do_eval --dataset_name imagenet-1k --per_device_eval_batch_size 1 --remove_unused_columns False --output_dir ./ ### Expected behavior Expected results: eval_accuracy = 0.8131 eval_loss = 0.7107 eval_runtime = 0:43:40.30 eval_samples_per_second = 19.082 eval_steps_per_second = 19.082 Current results: eval_accuracy = 0.8033 eval_loss = 0.755 eval_runtime = 0:34:05.81 eval_samples_per_second = 24.44 eval_steps_per_second = 0.436
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28264/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28263/comments
https://api.github.com/repos/huggingface/transformers/issues/28263/events
https://github.com/huggingface/transformers/pull/28263
2,057,483,882
PR_kwDOCUB6oc5i1dQr
28,263
Optimize the speed of the truncate_sequences function.
{ "login": "ikkvix", "id": 100856433, "node_id": "U_kgDOBgLycQ", "avatar_url": "https://avatars.githubusercontent.com/u/100856433?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ikkvix", "html_url": "https://github.com/ikkvix", "followers_url": "https://api.github.com/users/ikkvix/followers", "following_url": "https://api.github.com/users/ikkvix/following{/other_user}", "gists_url": "https://api.github.com/users/ikkvix/gists{/gist_id}", "starred_url": "https://api.github.com/users/ikkvix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ikkvix/subscriptions", "organizations_url": "https://api.github.com/users/ikkvix/orgs", "repos_url": "https://api.github.com/users/ikkvix/repos", "events_url": "https://api.github.com/users/ikkvix/events{/privacy}", "received_events_url": "https://api.github.com/users/ikkvix/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ArthurZucker @younesbelkada Could you give me any opinions or suggestions?", "Hi @ikkvix thanks for your great work on this, I will let @ArthurZucker review the PR as he is much more familiar on tokenizers that I am", "> LGTM, what kind of speedups are we looking at? (did you benchmark it?)\r\n\r\nThank you for the review! I conducted some experiments on my pc:\r\n\r\n- When num_tokens_to_remove=10,000:\r\n The old method takes 0.2s in this section of the code, while the new method takes 1e-05s.\r\n\r\n- When num_tokens_to_remove=100,000:\r\n The old method takes 18.94s in this section of the code, while the new method only takes 1.38e-05s.\r\n\r\n- When num_tokens_to_remove=200,000:\r\n The old method takes 75.21s in this section of the code, while the new method only takes 1.81e-05s.\r\n\r\n- When num_tokens_to_remove=300,000:\r\n The old method takes 176.97s in this section of the code, while the new method takes 2.10e-05s.\r\n\r\nI didn't measure at the million level because it took too long. It's evident that with the increase in num_tokens_to_remove, the speed of the for loop removal becomes extremely slow, highlighting a massive difference in speed between these two methods.", "Thanks for improving the truncation. This is mostly used in `prepare_for_model` which is used in `encode` basically. A lot of model use this with `truncation=True`. Let me just run the slow tokenization tests to make sure everything is alright : `RUN_SLOW=1 pytest -n 8 tests/ -k test_tokenization_` \r\n\r\n\r\n", "No new failures in the slow tokenization tests, checking the `RUN_CUSTOM_TOKENIZERS` (most of them use truncate_sequence individually) and seems alright! 🪂 thanks " ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The truncate_sequences function uses a for loop to iteratively remove elements when truncation_strategy == TruncationStrategy.LONGEST_FIRST. However, when the length of ids or pair_ids is considerable, the time complexity of the removal process becomes O(n^2), which is unacceptable. By directly calculating the quantity to be removed, this portion of the code's time complexity can be reduced to O(n), resulting in a significant speed improvement. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @ArthurZucker and @younesbelkada
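For readers who want to see the shape of the optimization described above, here is a minimal sketch (an illustration only, not the code merged in this PR; the helper name and the tie-breaking rule when both sequences have equal length are assumptions): instead of popping one token per loop iteration, the number of tokens to drop from each sequence is computed up front and each list is sliced once.

```python
def truncate_longest_first_sketch(ids, pair_ids, num_tokens_to_remove):
    """Sketch of LONGEST_FIRST truncation in O(n): compute final lengths, slice once."""
    n1, n2 = len(ids), len(pair_ids)
    remove = max(0, min(num_tokens_to_remove, n1 + n2))
    if abs(n1 - n2) >= remove:
        # every removed token comes from the longer sequence
        if n1 >= n2:
            return ids[: n1 - remove], pair_ids
        return ids, pair_ids[: n2 - remove]
    # first level the two sequences, then split the remaining removals between them
    remove -= abs(n1 - n2)
    m = min(n1, n2)
    # ties here arbitrarily favour removing from `ids`; the real implementation's tie-breaking may differ
    return ids[: m - (remove + 1) // 2], pair_ids[: m - remove // 2]
```

Compared with deleting one token per iteration, this keeps the work linear in the sequence lengths, which is consistent with the benchmark numbers reported in the comments.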
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28263/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28263", "html_url": "https://github.com/huggingface/transformers/pull/28263", "diff_url": "https://github.com/huggingface/transformers/pull/28263.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28263.patch", "merged_at": 1704969734000 }
https://api.github.com/repos/huggingface/transformers/issues/28262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28262/comments
https://api.github.com/repos/huggingface/transformers/issues/28262/events
https://github.com/huggingface/transformers/pull/28262
2,057,441,777
PR_kwDOCUB6oc5i1UI1
28,262
[docs] Sort es/toctree.yml | Translate performance.md
{ "login": "aaronjimv", "id": 67152883, "node_id": "MDQ6VXNlcjY3MTUyODgz", "avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aaronjimv", "html_url": "https://github.com/aaronjimv", "followers_url": "https://api.github.com/users/aaronjimv/followers", "following_url": "https://api.github.com/users/aaronjimv/following{/other_user}", "gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}", "starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions", "organizations_url": "https://api.github.com/users/aaronjimv/orgs", "repos_url": "https://api.github.com/users/aaronjimv/repos", "events_url": "https://api.github.com/users/aaronjimv/events{/privacy}", "received_events_url": "https://api.github.com/users/aaronjimv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, I am open to any feedback. This PR is a continuation of the work on #28172 which sort `es/_toctree.yml` like `en/_toctree.yml`, with the addition of the Spanish version of `performance.md`.\r\n\r\nMy only doubt in the translation is that the links to the other files mentioned are correct, since these files are only found in the \r\nEnglish documentation. Thanks.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28262). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> LGTM thanks! The links to the English versions of the docs look correct, and we can gradually replace these later with the Spanish versions when they're translated :)\r\n\r\nOk, thanks 🤗" ]
1,703
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Continuing with the work on #28172 Add the Spanish version of `performance.md` to `transformers/docs/source/es` Part of #28172 Fixes #15947 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @omarespejel @sgugger @osanseviero @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28262/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28262", "html_url": "https://github.com/huggingface/transformers/pull/28262", "diff_url": "https://github.com/huggingface/transformers/pull/28262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28262.patch", "merged_at": 1704321358000 }
https://api.github.com/repos/huggingface/transformers/issues/28261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28261/comments
https://api.github.com/repos/huggingface/transformers/issues/28261/events
https://github.com/huggingface/transformers/pull/28261
2,057,323,182
PR_kwDOCUB6oc5i06Zk
28,261
[`MobileSam`] Adds MobileSAM to transformers
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[]
1,703
1,706
null
CONTRIBUTOR
null
# What does this PR do? as discussed offline cc @merveenoyan @NielsRogge This PR adds MobileSAM to the library. MobileSAM uses the same architecture as SAM, with the SAM image encoder swapped for TinyViT. Therefore I decided to create a new modeling file for it, as porting TinyViT required a bit of work. Draft for now!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28261/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28261", "html_url": "https://github.com/huggingface/transformers/pull/28261", "diff_url": "https://github.com/huggingface/transformers/pull/28261.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28261.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28260/comments
https://api.github.com/repos/huggingface/transformers/issues/28260/events
https://github.com/huggingface/transformers/issues/28260
2,057,306,510
I_kwDOCUB6oc56oAGO
28,260
How to set pad_token of Llava for batched generation and training?
{ "login": "TideDra", "id": 92413813, "node_id": "U_kgDOBYIfdQ", "avatar_url": "https://avatars.githubusercontent.com/u/92413813?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TideDra", "html_url": "https://github.com/TideDra", "followers_url": "https://api.github.com/users/TideDra/followers", "following_url": "https://api.github.com/users/TideDra/following{/other_user}", "gists_url": "https://api.github.com/users/TideDra/gists{/gist_id}", "starred_url": "https://api.github.com/users/TideDra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TideDra/subscriptions", "organizations_url": "https://api.github.com/users/TideDra/orgs", "repos_url": "https://api.github.com/users/TideDra/repos", "events_url": "https://api.github.com/users/TideDra/events{/privacy}", "received_events_url": "https://api.github.com/users/TideDra/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey 🤗 Sorry for the late reply here, and thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!" ]
1,703
1,707
1,707
NONE
null
Hello, @younesbelkada I'm trying to use Llava for batched generation, using the default pad_token. here is the script: ```python import json from PIL import Image from transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer from torch.utils.data import Dataset,DataLoader import torch import os from tqdm import tqdm DATA_ROOT = "/mnt/gozhang/code/LLaVA/playground/data/eval/mm-vet" processor = AutoProcessor.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf") tokenizer = AutoTokenizer.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf") class MMVetDataset(Dataset): def __init__(self,data_root) -> None: super().__init__() self.data_root = data_root with open(os.path.join(data_root, "mm-vet.json"), "r") as f: data = json.load(f) self.data = [(k,v) for k,v in data.items()] def __len__(self): return len(self.data) def __getitem__(self, index): return {'id':self.data[index][0], 'image':os.path.join(self.data_root,'images',self.data[index][1]['imagename']), 'question':"USER: <image>\n"+self.data[index][1]['question']+" ASSISTANT:"} def collator(batch): ids = [b['id'] for b in batch] questions = [b['question'] for b in batch] images = [Image.open(b['image']) for b in batch] inputs = processor(text=questions,images=images,return_tensors="pt",padding=True) return ids,inputs model = LlavaForConditionalGeneration.from_pretrained("/mnt/gozhang/ckpts/llava-1.5-7b-hf",torch_dtype=torch.float16) model.to('cuda') #model.to(torch.float16) dataset = MMVetDataset(DATA_ROOT) dataloader = DataLoader(dataset,batch_size=16,collate_fn=collator) results = {} bar = tqdm(total=len(dataset)) model.eval() with torch.inference_mode(): for ids, inputs in dataloader: inputs.to('cuda') inputs['pixel_values'] = inputs['pixel_values'].half() outputs = model.generate(**inputs,temperature=0.2,do_sample=True,max_new_tokens=1024,use_cache=True) input_token_len = inputs['input_ids'].shape[1] responses=tokenizer.batch_decode(outputs[:, input_token_len:], skip_special_tokens=True, clean_up_tokenization_spaces=False) for id,res in zip(ids,responses): results[id]=res bar.update(len(responses)) with open('mmvet_result.json','w') as f: json.dump(results,f,indent=4) ``` But when generating the fifth batch, it reports `RuntimeError: probability tensor contains either inf, nan or element < 0`. Then I try different pad_token, setting `processor.tokenizer.pad_token = processor.tokenizer.unk_token` (following the raw llava codebase), or `processor.tokenizer.pad_token = processor.tokenizer.eos_token`(following the common setting), or `processor.tokenizer.pad_token = processor.tokenizer.bos_token`(following this [issue](https://discuss.huggingface.co/t/llama2-pad-token-for-batched-inference/48020)). And I find that only setting pad_token to eos_token can avoid the error. I wonder what's the effect of different pad_token during batched generation, and what's the root cause of this error, and how to set the correct pad_token for training the model?
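Not an answer to the root cause, but a commonly suggested setup for batched generation with decoder-only checkpoints is sketched below, reusing the `processor`, `model`, `questions` and `images` names from the script above. It is only a sketch: left padding is assumed to be acceptable, and whether this avoids the `inf/nan` error for this particular checkpoint and dtype is not confirmed here.

```python
# Left padding keeps each prompt flush against its generated continuation,
# and the processor's attention_mask masks out the pad positions in generate().
processor.tokenizer.padding_side = "left"
processor.tokenizer.pad_token = processor.tokenizer.unk_token  # or eos_token / a dedicated pad token

inputs = processor(text=questions, images=images, return_tensors="pt", padding=True)
inputs = inputs.to("cuda")
inputs["pixel_values"] = inputs["pixel_values"].half()

outputs = model.generate(
    **inputs,            # input_ids, attention_mask and pixel_values from the processor
    do_sample=True,
    temperature=0.2,
    max_new_tokens=1024,
)
```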
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28260/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28259/comments
https://api.github.com/repos/huggingface/transformers/issues/28259/events
https://github.com/huggingface/transformers/issues/28259
2,057,305,050
I_kwDOCUB6oc56n_va
28,259
How to add new merge rules in AutoTokenizer
{ "login": "Sandspeare", "id": 126481267, "node_id": "U_kgDOB4nzcw", "avatar_url": "https://avatars.githubusercontent.com/u/126481267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sandspeare", "html_url": "https://github.com/Sandspeare", "followers_url": "https://api.github.com/users/Sandspeare/followers", "following_url": "https://api.github.com/users/Sandspeare/following{/other_user}", "gists_url": "https://api.github.com/users/Sandspeare/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sandspeare/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sandspeare/subscriptions", "organizations_url": "https://api.github.com/users/Sandspeare/orgs", "repos_url": "https://api.github.com/users/Sandspeare/repos", "events_url": "https://api.github.com/users/Sandspeare/events{/privacy}", "received_events_url": "https://api.github.com/users/Sandspeare/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,703
1,703
null
NONE
null
### Model description I'm training a new tokenizer from llama2; however, it seems that BPE tokenizer training clears the original "vocab" and "merge" dicts, and the training result is highly biased towards my own dataset (about 6M C functions), with some ugly tokens. I wonder whether it is possible to train a tokenizer from llama2 with the original "vocab" and "merge" dicts unchanged, only adding some new vocab and merge rules from our dataset to support my requirement? ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
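Training a brand-new BPE tokenizer (e.g. with `train_new_from_iterator`) does produce fresh vocab/merge tables. A partial workaround that leaves the original Llama 2 vocab and merges untouched is to add frequent domain strings as whole tokens; note this extends the added-tokens table rather than adding new BPE merge rules, and the checkpoint name and token strings below are only illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# hypothetical frequent strings mined from the C-function corpus
candidates = ["uint32_t", "__attribute__", "sizeof(", "->next"]
new_tokens = [t for t in candidates if t not in tokenizer.get_vocab()]
num_added = tokenizer.add_tokens(new_tokens)

model = AutoModelForCausalLM.from_pretrained(checkpoint)
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))  # new embedding rows start randomly initialised
```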
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28259/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28258/comments
https://api.github.com/repos/huggingface/transformers/issues/28258/events
https://github.com/huggingface/transformers/issues/28258
2,057,262,802
I_kwDOCUB6oc56n1bS
28,258
MobileVitV2 doesn't have optimizer and loss argument in `model.compile()`?
{ "login": "adhiiisetiawan", "id": 51025603, "node_id": "MDQ6VXNlcjUxMDI1NjAz", "avatar_url": "https://avatars.githubusercontent.com/u/51025603?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adhiiisetiawan", "html_url": "https://github.com/adhiiisetiawan", "followers_url": "https://api.github.com/users/adhiiisetiawan/followers", "following_url": "https://api.github.com/users/adhiiisetiawan/following{/other_user}", "gists_url": "https://api.github.com/users/adhiiisetiawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/adhiiisetiawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adhiiisetiawan/subscriptions", "organizations_url": "https://api.github.com/users/adhiiisetiawan/orgs", "repos_url": "https://api.github.com/users/adhiiisetiawan/repos", "events_url": "https://api.github.com/users/adhiiisetiawan/events{/privacy}", "received_events_url": "https://api.github.com/users/adhiiisetiawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @Rocketknight1 ", "Hi @adhiiisetiawan, the issue here is that when you changed the code, you switched from TF classes to PyTorch classes. Your code uses `MobileViTV2Model`, but this is a PyTorch class. Therefore, the reason that the `compile()` method is behaving incorrectly for you is that you're actually loading a PyTorch model and calling PyTorch's `compile()` method, which does something very different to TensorFlow's `model.compile()`!\r\n\r\nUnfortunately, `MobileVitV2` is not yet supported in TensorFlow in `transformers`, so if you want to use this model for now you'll need to do it in PyTorch. You can either use another model (and change the code back to use `TFAutoModelForImageClassification`) if you want to stick with TensorFlow, or you can switch to the PyTorch notebook instead if you want to fine-tune `apple/mobilevitv2-1.0-imagenet1k-256`.", "ohh i see, okay thank you @Rocketknight1 for explanation" ]
1,703
1,704
1,704
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): 2.15.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.23 - JaxLib version: 0.4.23 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? vision models: @amyeroberts ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run the official TensorFlow [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/tensorflow/image_classification.ipynb#scrollTo=-Pi8UI110qKL) from huggingface and change the model to `apple/mobilevitv2-1.0-imagenet1k-256`. Calling `model.compile()` results in the error `TypeError: compile() got an unexpected keyword argument 'optimizer'`. But when we delete the arguments, the code runs without a problem, although of course we then need to define the optimizer somewhere. Also, when we don't pass the arguments, `model.fit` gives an error too: `AttributeError: 'MobileViTV2Model' object has no attribute 'fit'`. Try my own notebook for better reproducibility [here](https://colab.research.google.com/drive/1-TFfsun98fKchJRY-ZCV5_tYOwPYLtuX?usp=sharing). ### Expected behavior MobileViTV2 can run
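As the comments above explain, `MobileViTV2Model` only exists as a PyTorch class, so Keras' `compile()`/`fit()` cannot be used with it. A rough sketch of the PyTorch route with the `Trainer` API is below; `num_labels`, `train_ds` and `val_ds` are placeholders for your own dataset objects, not names from the notebook.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification, Trainer, TrainingArguments

checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=num_labels,             # placeholder: number of classes in your dataset
    ignore_mismatched_sizes=True,      # replace the ImageNet classification head
)

args = TrainingArguments(output_dir="mobilevitv2-finetuned", remove_unused_columns=False)
trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds)
trainer.train()
```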
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28258/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28257/comments
https://api.github.com/repos/huggingface/transformers/issues/28257/events
https://github.com/huggingface/transformers/pull/28257
2,057,113,024
PR_kwDOCUB6oc5i0MS4
28,257
[fix] Optimize deletion speed while truncate_sequences (#3563)
{ "login": "liyuqing1", "id": 49502156, "node_id": "MDQ6VXNlcjQ5NTAyMTU2", "avatar_url": "https://avatars.githubusercontent.com/u/49502156?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liyuqing1", "html_url": "https://github.com/liyuqing1", "followers_url": "https://api.github.com/users/liyuqing1/followers", "following_url": "https://api.github.com/users/liyuqing1/following{/other_user}", "gists_url": "https://api.github.com/users/liyuqing1/gists{/gist_id}", "starred_url": "https://api.github.com/users/liyuqing1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyuqing1/subscriptions", "organizations_url": "https://api.github.com/users/liyuqing1/orgs", "repos_url": "https://api.github.com/users/liyuqing1/repos", "events_url": "https://api.github.com/users/liyuqing1/events{/privacy}", "received_events_url": "https://api.github.com/users/liyuqing1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry, this method may be related to the environment and len(ids), so the rewriting may not always be effective. We are conducting further research to better address the issue.", "No worries feel free to ping me if you have good findings! " ]
1,703
1,704
1,703
NONE
null
# What does this PR do? The logic remains unchanged, but the code is rewritten to improve the speed of truncate_sequences(). "del ids[-1]" is an in-place operation, while the original "ids = ids[:-1]" makes an additional copy of ids[0:-1]. Therefore, the original approach is slower, especially when sentences are long. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada
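To make the claim concrete, here is a small illustrative benchmark (not part of the PR; timings will vary by machine) comparing the two ways of dropping tokens one at a time:

```python
import timeit

def truncate_by_slicing(n, remove):
    ids = list(range(n))
    for _ in range(remove):
        ids = ids[:-1]        # builds a new list on every iteration -> O(n) per step
    return ids

def truncate_in_place(n, remove):
    ids = list(range(n))
    for _ in range(remove):
        del ids[-1]           # removes the last element in place -> O(1) per step
    return ids

print("slicing :", timeit.timeit(lambda: truncate_by_slicing(50_000, 20_000), number=3))
print("in-place:", timeit.timeit(lambda: truncate_in_place(50_000, 20_000), number=3))
```

For comparison, PR #28263 goes further and avoids the per-token loop entirely by computing the amount to remove up front.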
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28257", "html_url": "https://github.com/huggingface/transformers/pull/28257", "diff_url": "https://github.com/huggingface/transformers/pull/28257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28257.patch", "merged_at": null }