| Column | Dtype | Range / cardinality |
|---|---|---|
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/28055
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28055/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28055/comments
https://api.github.com/repos/huggingface/transformers/issues/28055/events
https://github.com/huggingface/transformers/issues/28055
2,042,841,032
I_kwDOCUB6oc55w0fI
28,055
Llama-2-70b-chat-hf get worse result than Llama-2-70B-Chat-GPTQ
{ "login": "fancyerii", "id": 5372812, "node_id": "MDQ6VXNlcjUzNzI4MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5372812?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fancyerii", "html_url": "https://github.com/fancyerii", "followers_url": "https://api.github.com/users/fancyerii/followers", "following_url": "https://api.github.com/users/fancyerii/following{/other_user}", "gists_url": "https://api.github.com/users/fancyerii/gists{/gist_id}", "starred_url": "https://api.github.com/users/fancyerii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fancyerii/subscriptions", "organizations_url": "https://api.github.com/users/fancyerii/orgs", "repos_url": "https://api.github.com/users/fancyerii/repos", "events_url": "https://api.github.com/users/fancyerii/events{/privacy}", "received_events_url": "https://api.github.com/users/fancyerii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @fancyerii, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.", "@amyeroberts thanks. But when I post this to forum, it says \"similar content already exist\" and do not let me create new post here.", "@fancyerii Were you able to find related issues on the forum? Alternatively you can ask on [our discord](https://discord.com/invite/hugging-face-879548962464493619). ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
### System Info

- `transformers` version: 4.36.0
- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
- Python version: 3.9.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

I am trying to use Llama-2-70b-chat-hf as a zero-shot text classifier for my datasets. Here is my setup.

1. vLLM + Llama-2-70b-chat-hf

I used vLLM as my inference engine and ran it with:
```
python api_server.py --model /nas/lili/models_hf/70B-chat --tensor-parallel-size 8
```
api_server.py is the [example file](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/api_server.py) and I did not modify anything.

Client code:
```
data = {
    "prompt": prompt,
    "use_beam_search": False,
    "n": 1,
    "temperature": 0.1,
    "max_tokens": 128,
}
res = _post(data)
return eval(res.content)['text'][0].strip()
```

And my prompt is:
```
You will be provided with a product name. The product name will be delimited by 3 backticks, i.e.```. Classify the product into a primary category.

Primary categories:
Clothing, Shoes & Jewelry
Automotive
Home & Kitchen
Beauty & Personal Care
Electronics
Sports & Outdoors
Patio, Lawn & Garden
Handmade Products
Grocery & Gourmet Food
Health & Household
Musical Instruments
Toys & Games
Baby Products
Pet Supplies
Tools & Home Improvement
Appliances
Office Products
Cell Phones & Accessories

Product name:```Cambkatl Men's Funny 3D Fake Abs T-Shirts Casual Short Sleeve Chest Graphic Printed Crewneck Novelty Pullover Tee Tops```. Only answer the category name, no other words.
```

The classification accuracy is 0.352. I also tried the same prompt and parameters (temperature and max_tokens) with ChatGPT and GPT-4, which got 0.68 and 0.72 respectively. Llama 2 shouldn't be significantly worse than ChatGPT, so something must be wrong, and I suspected it might be related to vLLM. So I tried the following method.

2. Transformers + flask

It's not a good serving method (maybe I should use TGI), but I think it makes the problem easy to locate.
```
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer_path = "/nas/lili/models_hf/70B-chat-hf/"
model_path = "/nas/lili/models_hf/70B-chat-hf/"
tokenizer = LlamaTokenizer.from_pretrained(tokenizer_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    #load_in_8bit=True,
    #torch_dtype=torch.float16,
    device_map="auto",
)

from flask import Flask, request, jsonify
from flask_cors import CORS
from transformers.generation import GenerationConfig

app = Flask(__name__)
CORS(app)

@app.route('/generate', methods=['POST'])
def generate():
    json = request.get_json(force=True)
    prompt = json['prompt']
    num_beams = json.get('num_beams')
    temperature = json.get('temperature')
    max_tokens = json.get('max_tokens')
    do_sample = json.get('do_sample')
    top_k = json.get('top_k') or 10
    model_inputs = tokenizer(prompt, return_tensors='pt').to('cuda')
    cfg = GenerationConfig(
        num_beams = num_beams,
        max_new_tokens = max_tokens,
        temperature = temperature,
        do_sample = do_sample,
        top_k = top_k
    )
    output = model.generate(**model_inputs, generation_config=cfg, pad_token_id=tokenizer.eos_token_id)
    input_length = model_inputs["input_ids"].shape[1]
    output = tokenizer.decode(output[0][input_length:], skip_special_tokens=True)
    output = output.strip()
    return jsonify({'text': [output]})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```
And the client code:
```
data = {
    "prompt": prompt,
    "do_sample": True,
    "temperature": 0.1,
    "max_tokens": 128,
    "num_beams": 5
}
res = _post(data, url=self.url)
return eval(res.content)['text'][0].strip()
```
This time I used a large num_beams=5 (I should have used 1, but I made a mistake) and the same prompt as before. The accuracy is 0.368, which is not much better than with vLLM (the gain may come from the larger num_beams). So the problem does not seem to be vLLM. What's wrong? Is Llama 2 70B a very bad model? I don't think so. So I tried a third method.

3. Transformers (using Llama-2-70B-Chat-GPTQ) + flask

The setup is the same as method 2; I only changed the model:
```
tokenizer_path = "/nas/lili/models_hf/7B-chat/"
model_path = "/nas/lili/models_hf/Llama-2-70B-chat-GPTQ/"
```
I saved Llama-2-70B-chat-GPTQ with save_pretrained and forgot to save the tokenizer, so I use the tokenizer of Llama 2 7B-chat (I think the Llama 2 tokenizer is the same across model sizes). This time I got a better result of 0.56. It's not as good as ChatGPT, but it is significantly better than the uncompressed Llama-2-70B-chat. So I am confused that the original Llama-2-70B-chat is 20% worse than Llama-2-70B-chat-GPTQ. Method 2 and method 3 are exactly the same except for the model.

### Expected behavior

Llama 2 70B should get results similar to or better than Llama-2-70B-chat-GPTQ.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28055/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28054
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28054/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28054/comments
https://api.github.com/repos/huggingface/transformers/issues/28054/events
https://github.com/huggingface/transformers/pull/28054
2,042,669,233
PR_kwDOCUB6oc5iDfJY
28,054
Make GPT2 traceable in meta state
{ "login": "kwen2501", "id": 6676466, "node_id": "MDQ6VXNlcjY2NzY0NjY=", "avatar_url": "https://avatars.githubusercontent.com/u/6676466?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kwen2501", "html_url": "https://github.com/kwen2501", "followers_url": "https://api.github.com/users/kwen2501/followers", "following_url": "https://api.github.com/users/kwen2501/following{/other_user}", "gists_url": "https://api.github.com/users/kwen2501/gists{/gist_id}", "starred_url": "https://api.github.com/users/kwen2501/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kwen2501/subscriptions", "organizations_url": "https://api.github.com/users/kwen2501/orgs", "repos_url": "https://api.github.com/users/kwen2501/repos", "events_url": "https://api.github.com/users/kwen2501/events{/privacy}", "received_events_url": "https://api.github.com/users/kwen2501/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28054). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do?

Before this PR, if we create GPT2 on the "meta" device and trace it with dynamo or torch.export, the following line would raise an error:
```
mask_value = torch.full([], mask_value, dtype=attn_weights.dtype).to(attn_weights.device)
```
```
torch._dynamo.exc.TorchRuntimeError: Failed running call_method to(*(FakeTensor(..., size=()), device(type='meta')), **{}): Creating a new Tensor subclass FakeTensor but the raw Tensor object is already associated to a python object of type FakeTensor
```
That is, tracing a `.to("meta")` call on a meta tensor is not yet supported by PT2, even though we are only tracing the code. A quick workaround is to move the device from the `.to` call into the tensor constructor, which is what this PR does. Longer term, it would be best for dynamo/export not to error out when tracing through the `.to` method in this situation. (I will file an issue against PyTorch.)

## Who can review?

@younesbelkada @muellerzr @SunMarc
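To make the workaround concrete, here is a minimal, hedged sketch of the pattern described above (the tensor shapes and names are illustrative, not the exact lines from `modeling_gpt2.py`): passing the target device to the constructor means no `.to(...)` call has to be traced on the meta tensor.

```python
import torch

# Stand-ins for the attention weights on a meta-device model (shapes are assumptions).
attn_weights = torch.zeros(2, 2, dtype=torch.float32, device="meta")
min_value = torch.finfo(attn_weights.dtype).min

# Before: build the scalar, then move it; the traced `.to(meta_device)` call is what
# dynamo/torch.export rejects when the model lives on the "meta" device.
mask_value_old = torch.full([], min_value, dtype=attn_weights.dtype).to(attn_weights.device)

# After: give the constructor the device directly, so nothing needs to be moved.
mask_value_new = torch.full([], min_value, dtype=attn_weights.dtype, device=attn_weights.device)
```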
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28054/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28054/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28054", "html_url": "https://github.com/huggingface/transformers/pull/28054", "diff_url": "https://github.com/huggingface/transformers/pull/28054.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28054.patch", "merged_at": 1702651532000 }
https://api.github.com/repos/huggingface/transformers/issues/28052
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28052/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28052/comments
https://api.github.com/repos/huggingface/transformers/issues/28052/events
https://github.com/huggingface/transformers/issues/28052
2,042,532,506
I_kwDOCUB6oc55vpKa
28,052
ValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour.
{ "login": "dakinggg", "id": 43149077, "node_id": "MDQ6VXNlcjQzMTQ5MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dakinggg", "html_url": "https://github.com/dakinggg", "followers_url": "https://api.github.com/users/dakinggg/followers", "following_url": "https://api.github.com/users/dakinggg/following{/other_user}", "gists_url": "https://api.github.com/users/dakinggg/gists{/gist_id}", "starred_url": "https://api.github.com/users/dakinggg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dakinggg/subscriptions", "organizations_url": "https://api.github.com/users/dakinggg/orgs", "repos_url": "https://api.github.com/users/dakinggg/repos", "events_url": "https://api.github.com/users/dakinggg/events{/privacy}", "received_events_url": "https://api.github.com/users/dakinggg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also I thought `torch_dtype` wasn't even used when its in the config, maybe that has changed though.", "Also even specifying `torch_dtype` results in a warnings `In [21]: transformers.AutoModelForCausalLM.from_config(llamacfg, torch_dtype=torch.bfloat16, use_flash_attention_2=True)\r\nYou are attempting to use Flash Attention 2.0 without specifying a torch dtype. This might lead to unexpected behaviour`. I feel like I'm missing something fundamental about how this arg works.", "I also tried setting `config._attn_implemention` myself as a workaround, but it doesn't seem to respect that.", "Thanks a lot for this and other reports, we’ll fix it asap cc @fxmarty as well", "While you're there, it'd be good to revisit all of the hard checks in there and make sure they are truly necessary. For anyone else who ends up here, you can work around it via\r\n```\r\n def _autoset_attn_implementation_monkeypatch(\r\n cls, config, *args, **kwargs): # type: ignore\r\n config._attn_implementation = requested_attention_implementation\r\n return config\r\n\r\n PreTrainedModel._autoset_attn_implementation = classmethod(\r\n _autoset_attn_implementation_monkeypatch)\r\n```", "same issue", "in src/modeling_utils.py\r\n\r\n```python\r\n config = self._autoset_attn_implementation(\r\n config, torch_dtype=torch.get_default_dtype(), check_device_map=False\r\n )\r\n```\r\nchange to \r\n```\r\n config = self._autoset_attn_implementation(\r\n config, check_device_map=False\r\n )\r\n```", "~Same issue~\r\n\r\nEdit: I was actually loading models (w/ custom model classes) from scratch using the \\_\\_init\\_\\_ function but it looks like you should use `_from_config` instead, where you can specify torch_dtype.\r\n\r\n### Old code\r\n```\r\nconfig = AutoConfig.from_pretrained(model_name_or_path,\r\n attn_implementation=\"flash_attention_2\",\r\n torch_dtype=torch.bfloat16) # here, torch_dtype is ignored\r\nmodel = CustomPretrainedLM(config)\r\n```\r\n\r\n### New code\r\n```\r\nconfig = AutoConfig.from_pretrained(model_name_or_path, attn_implementation=\"flash_attention_2\")\r\nmodel = CustomPretrainedLM(config, torch_dtype=torch.bfloat16)\r\n```", "Same issue, but fig out a simple solution to bypass the check:\r\n\r\n```\r\ndefault_dtype = torch.get_default_dtype()\r\ntorch.set_default_dtype(torch.bfloat16)\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_path,\r\n attn_implementation=\"flash_attention_2\",\r\n config=config,\r\n)\r\ntorch.set_default_dtype(default_dtype)\r\n```", "```\r\n\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_path,\r\n attn_implementation=\"flash_attention_2\",\r\n config=config,\r\n torch_dtype=torch.bfloat16\r\n)\r\n```\r\n" ]
1,702
1,706
1,706
CONTRIBUTOR
null
### System Info

Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.

- `transformers` version: 4.36.0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)

### Who can help?

@ArthurZucker @younesbelkada

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

Running `llamafromcfg = transformers.AutoModelForCausalLM.from_config(llamacfg, use_flash_attention_2=True)` after `llamacfg = transformers.AutoConfig.from_pretrained('meta-llama/Llama-2-7b-hf')` results in `ValueError: Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes. You passed torch.float32, this might lead to unexpected behaviour.`.

### Expected behavior

I understand that flash attention requires fp16/bf16 for _computation_, but I don't believe I should be prevented from instantiating the model in fp32. I will use automatic mixed precision later for computation. Please let me know what I'm missing/what the intended usage is. Thank you!
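For readers who hit the same error, here is a hedged sketch of the default-dtype workaround quoted in the comments above (the checkpoint name is only a placeholder, and flash-attn must be installed): the dtype check reads `torch.get_default_dtype()`, so temporarily switching the default lets the model be instantiated with Flash Attention 2 enabled, after which the previous setting is restored.

```python
import torch
import transformers

# Temporarily make bf16 the default dtype so the Flash Attention 2 dtype check passes.
default_dtype = torch.get_default_dtype()
torch.set_default_dtype(torch.bfloat16)
model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # placeholder checkpoint
    attn_implementation="flash_attention_2",
)
torch.set_default_dtype(default_dtype)   # restore the previous default
```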
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28052/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28052/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28051
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28051/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28051/comments
https://api.github.com/repos/huggingface/transformers/issues/28051/events
https://github.com/huggingface/transformers/pull/28051
2,042,351,489
PR_kwDOCUB6oc5iCYmn
28,051
[LLaVa] Add past_key_values to _skip_keys_device_placement to fix multi-GPU dispatch
{ "login": "aismlv", "id": 13088690, "node_id": "MDQ6VXNlcjEzMDg4Njkw", "avatar_url": "https://avatars.githubusercontent.com/u/13088690?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aismlv", "html_url": "https://github.com/aismlv", "followers_url": "https://api.github.com/users/aismlv/followers", "following_url": "https://api.github.com/users/aismlv/following{/other_user}", "gists_url": "https://api.github.com/users/aismlv/gists{/gist_id}", "starred_url": "https://api.github.com/users/aismlv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aismlv/subscriptions", "organizations_url": "https://api.github.com/users/aismlv/orgs", "repos_url": "https://api.github.com/users/aismlv/repos", "events_url": "https://api.github.com/users/aismlv/events{/privacy}", "received_events_url": "https://api.github.com/users/aismlv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Per my understanding this would only be applied for models that went through the recent cache refactoring. Llava is impacted since it uses Llama and Mistral under the hood, which are part of this PR: https://github.com/huggingface/transformers/pull/26681 \r\nThe recent Mixtral has it: https://github.com/huggingface/transformers/blob/main/src/transformers/models/mixtral/modeling_mixtral.py#L863 so I would say to just keep this in mind for future models that copy from Llama/Mistral and/or uses Llama or Mistral under the hood such as Llava", "@younesbelkada Great - thanks for explaining and providing the relevant links. We're good to merge then! " ]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixes #27917 Fixes cache and (key, value) tensors ending up on different devices when using accelerate's dispatch in `LlavaPreTrainedModel` (and `VipLlavaPreTrainedModel`) by adding `_skip_keys_device_placement = "past_key_values"` attribute to the class, similar to how Llama handles the issue ``` File "/home/pooyam/hf_llava/lib/python3.9/site-packages/transformers/cache_utils.py", line 127, in update self.key_cache[layer_idx] = torch.cat([self.key_cache[layer_idx], key_states], dim=-2) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
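For reference, a hedged and heavily abridged sketch of the one-line fix this PR describes (the real class defines much more than this attribute):

```python
from transformers import PreTrainedModel

class LlavaPreTrainedModel(PreTrainedModel):
    # Tell accelerate's dispatch hooks not to move cached key/value tensors between
    # devices, mirroring how the Llama and Mistral models handle multi-GPU caches.
    _skip_keys_device_placement = "past_key_values"
```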
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28051/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28051/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28051", "html_url": "https://github.com/huggingface/transformers/pull/28051", "diff_url": "https://github.com/huggingface/transformers/pull/28051.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28051.patch", "merged_at": 1702649120000 }
https://api.github.com/repos/huggingface/transformers/issues/28050
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28050/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28050/comments
https://api.github.com/repos/huggingface/transformers/issues/28050/events
https://github.com/huggingface/transformers/pull/28050
2,042,242,760
PR_kwDOCUB6oc5iCAsK
28,050
[`EfficientSAM`] Add EfficientSAM to the library
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hey when can we expect this to merge ?", "Hi @rishabh063 \r\nThanks very much for your interest in this PR - hopefully quite soon, I need some time to make sure logits match then we can merge ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "hey @younesbelkada any updates on this ?", "hi @rishabh063 unfortunately I had to work on more urgent things now, I will come back to this ASAP. If you have bandwith you can also take over the PR ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,708
null
CONTRIBUTOR
null
# What does this PR do?

As per the title, this PR adds EfficientSAM, a new architecture from https://github.com/yformer/EfficientSAM that is similar to the SAM architecture but with the benefit of being much smaller.

Draft for now.

@xenova @yformer
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28050/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28050/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28050", "html_url": "https://github.com/huggingface/transformers/pull/28050", "diff_url": "https://github.com/huggingface/transformers/pull/28050.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28050.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28049
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28049/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28049/comments
https://api.github.com/repos/huggingface/transformers/issues/28049/events
https://github.com/huggingface/transformers/issues/28049
2,042,219,768
I_kwDOCUB6oc55ucz4
28,049
Transformers 4.36 doesn't work with `microsoft/phi-1.5` unless you pass in `trust_remote_code=True`
{ "login": "arnavgarg1", "id": 106701836, "node_id": "U_kgDOBlwkDA", "avatar_url": "https://avatars.githubusercontent.com/u/106701836?v=4", "gravatar_id": "", "url": "https://api.github.com/users/arnavgarg1", "html_url": "https://github.com/arnavgarg1", "followers_url": "https://api.github.com/users/arnavgarg1/followers", "following_url": "https://api.github.com/users/arnavgarg1/following{/other_user}", "gists_url": "https://api.github.com/users/arnavgarg1/gists{/gist_id}", "starred_url": "https://api.github.com/users/arnavgarg1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/arnavgarg1/subscriptions", "organizations_url": "https://api.github.com/users/arnavgarg1/orgs", "repos_url": "https://api.github.com/users/arnavgarg1/repos", "events_url": "https://api.github.com/users/arnavgarg1/events{/privacy}", "received_events_url": "https://api.github.com/users/arnavgarg1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @arnavgarg1 \r\nphi-1 from microsoft still uses code on the Hub feature: https://huggingface.co./microsoft/phi-1_5 \r\nIf you want to use the HF version of Phi-1 you need to use the converted checkpoints from @susnato such as : https://huggingface.co./susnato/phi-1_5_dev - I think we should transfer those weights under microsoft org with the suffix `-hf`\r\n\r\n I did not reviewed the Phi integration PR but if the keys are the same we can also open PRs on the Hub on the original repos cc @ArthurZucker ", "Hi, I have already opened PRs on the Hub for transferring weights and necessary files, [here](https://huggingface.co./microsoft/phi-1_5/discussions/62) and [here](https://huggingface.co./microsoft/phi-1/discussions/5). We need to wait for someone from the org to merge them then you can use without passing `trust_remote_code=True`.", "Thanks for the prompt response @younesbelkada and @susnato! That makes sense! Will keep on the lookout for when the weights and necessary files PR on the hub gets merged in - for now, this looks like it works nicely!\r\n\r\nThanks for the awesome work in adding support for Phi @susnato! \r\n\r\nOut of curiosity - are there plans to add support for Phi-2 as well? ", "I haven't checked phi-2 yet but If the architecture is the same as phi-1/phi-1.5, then we need to change the keys in the weights and update the config file and we will be good to go. ", "Hey @susnato, based on inspection, they seem architecturally similar, just that phi-2 is slightly bigger than phi 1.5. See this diff I created: https://www.diffchecker.com/cAspnGZ3/ \r\n\r\nI assume that we just need to update the keys and config and we should be good to go? Is this something I could potentially help with?", "We’ll be adding phi2 as well yes 😊 we asked the author if he is interested but given the community’s interest it’s a good way to go anyways ", "Great to hear @ArthurZucker! Has work already started/is there an approximate timeline for when you expect it to get added?", "ETA is probably end of next week. Gotta finish DECI, add mamba and then Phi2! Should be quite fast 🤗 ", "Hi @ArthurZucker, can I please help in adding `phi-2` in any way?\r\n\r\nAs far as I can tell, the architecture is the same as the `phi` that we have in the library(but slightly bigger). Just need to convert the weights using the existing phi script and transfer the weights. Also maybe add an integration test to make sure the logits are the same. \r\n\r\n\r\nLet me know if I could be of any help.", "Sure, as you added phi, feel free to take this! 🤗 \r\n", "Thanks for taking this @susnato! ", "Hi @ArthurZucker, thanks.\r\n\r\nJust wanted to be clear on the License part - As you can see in this [discussion](https://huggingface.co./microsoft/phi-2/discussions/4) there seems to be a problem regarding the license...Is it okay to modify the weights and then transfer them to my profile for this addition? Later once we finish transferring the code we can update the checkpoints to the official one.", "Yeah sure, the idea will be to open a PR once everything is done, making sure we don't have issues with the model type and ask the author to merge. There is for me no issue with this regarding the licence, we don't modify the weight, we modify the dictionnary that stores them / the split !", "Thanks for the clarification @ArthurZucker! I will ping you for a review when it's done.", "Hey @susnato! I saw that you created https://huggingface.co./susnato/phi-2 🥳 . 
Has this been verified/is it safe to use?", "Hi @arnavgarg1, the weights are messed up...they changed the modeling file on the Hub so the existing conversion script is not working properly...I will try to fix it and get it running tomorrow.", "@susnato No worries at all, let me know if there's anything I can do to help!", "@susnato do you have a draft PR already? I think this would help us follow your progress and potentially help you! 🤗 ", "Hi @arnavgarg1, I fixed some things and now `phi2` should work as expected, could you please run it from `susnato/phi-2` and let me know if it is showing expected results or not. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
### System Info

Typically, when the transformers library adds native support for a new model, we no longer need to pass in `trust_remote_code=True` during model or tokenizer initialization. However, even with the latest version of the transformers package (4.36.1), I see that I still need it when using `microsoft/phi-1.5` to actually get the model to load and for the einops weights to get converted to torch weights:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1.5", trust_remote_code=True)
```
I took a look at the [PR](https://github.com/huggingface/transformers/pull/26170/files#diff-74ab0ba9fffc06389f9d614e5da01ee93db3e2f0494d1e96e7de92c0d1d288fb) that added Phi. Is the expectation that we should just be using `susnato/phi-1_5_dev` instead of `microsoft/phi-1.5` going forward? If yes, why is this the case? If not, how can I use the original `microsoft/phi-1.5` model without setting `trust_remote_code` to True?

Thanks a bunch! Super excited that Phi is now a well supported model in the transformers ecosystem!

### Who can help?

@ArthurZucker @younesbelkada @susa

### Information

- [ ] The official example scripts
- [x] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1.5", trust_remote_code=True)
```

### Expected behavior

I was expecting that, like all transformers models that get "first class" support in new major transformers releases, Phi would work the same way, but that doesn't seem to be the case.
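To make the two loading paths discussed in this thread explicit, here is a hedged sketch (the repository names are taken from the thread and may change once the converted weights are merged into the microsoft repos):

```python
from transformers import AutoModelForCausalLM

# Option 1: the original checkpoint, which still relies on remote code on the Hub.
model_remote = AutoModelForCausalLM.from_pretrained("microsoft/phi-1.5", trust_remote_code=True)

# Option 2: the converted checkpoint that uses the native Phi implementation,
# so trust_remote_code is not needed.
model_native = AutoModelForCausalLM.from_pretrained("susnato/phi-1_5_dev")
```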
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28049/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28048
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28048/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28048/comments
https://api.github.com/repos/huggingface/transformers/issues/28048/events
https://github.com/huggingface/transformers/pull/28048
2,042,137,173
PR_kwDOCUB6oc5iBpkz
28,048
Remove warning when Annotion enum is created
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28048). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,702
1,702
1,702
COLLABORATOR
null
# What does this PR do?

The Annotion enum was deprecated in #26941. I asked for there to be a deprecation warning to let users know if they chose to use the enum. This was overly defensive, done in light of recent [complaints of objects being removed/moved from the library](https://github.com/huggingface/transformers/issues/25948#issuecomment-1758537251) (even if they were never meant to be used directly).

However, a complete oversight on my part is that the enum's __init__ runs any time any object from the `image_utils` module is imported, resulting in verbose warning messages unrelated to what the user was trying to do - my bad.

This PR removes the warning from the enum itself and adds it to a validation check that happens on annotations. It ultimately means that we might break things when we remove `Annotion`, but such breakage is very unlikely and has a simple, quick resolution.

Partially resolves #28042

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
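As a generic illustration of the pattern described above (the names and members here are illustrative, not the actual transformers source), the deprecation warning is emitted only when the deprecated value is actually validated, instead of every time the defining module is imported:

```python
import warnings
from enum import Enum

class AnnotionFormat(Enum):  # deprecated spelling kept around for backwards compatibility
    COCO_DETECTION = "coco_detection"
    COCO_PANOPTIC = "coco_panoptic"

def validate_annotation_format(fmt):
    # Warn at use time, not at import time.
    if isinstance(fmt, AnnotionFormat):
        warnings.warn("`AnnotionFormat` is deprecated; use `AnnotationFormat` instead.", FutureWarning)
```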
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28048/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28048", "html_url": "https://github.com/huggingface/transformers/pull/28048", "diff_url": "https://github.com/huggingface/transformers/pull/28048.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28048.patch", "merged_at": 1702583420000 }
https://api.github.com/repos/huggingface/transformers/issues/28047
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28047/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28047/comments
https://api.github.com/repos/huggingface/transformers/issues/28047/events
https://github.com/huggingface/transformers/issues/28047
2,042,087,881
I_kwDOCUB6oc55t8nJ
28,047
Don't listify batched pipeline output from input list
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Somewhat related to this, the conversational pipeline's typing and docstrings do not seem correct (which brought me to the issue above):\r\n\r\nhttps://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/pipelines/conversational.py#L262-L280\r\n\r\nThe signature allows for a list of dicts (as a single conversation) but not a list of list of dicts (a batch of conversations), although `List[Conversation]` is allowed. According to the docstrings, a `List[dict]` is also not allowed - only Conversation(s). Finally, for compatibility with the pipeline call, other types of input (such as a generator or KeyDataset) should also be allowed but they are not specified.\r\n", "cc @Narsil ", "cc @Rocketknight1 for the conversational pipeline docstring part " ]
1,702
1,707
null
COLLABORATOR
null
### Feature request

Currently, the output that you get from a pipeline seems to depend on the input type. While intuitively that makes sense for distinct primitive types, a difference is also implemented for generators vs. lists vs. Datasets. I'd argue that this leads to unexpected behavior.

### Motivation

We can use batching in any pipeline, which [according to the documentation](https://huggingface.co./docs/transformers/main_classes/pipelines#pipeline-batching) enables "streaming". I interpreted this as: the pipeline will return a generator that will yield output one by one. However, looking at the source code, this does not seem to be the case. First of all, it depends on the format of the input that is passed to the pipeline. Interestingly, when the passed input type is a list (rather than a Dataset or a generator), the output is listified:

https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/pipelines/base.py#L1116-L1122

I am not sure why that is the case. The input type can be disconnected from the output type, so why are not all iterables handled in the same manner? Is it to have continuity between input and output types? If that is the case then that is okay, but to me it feels counter-intuitive: if I have a list of samples (like a dataset, just in list format), why would that need to be treated differently from a Dataset or a generator as input?

Small repro:
```python
from transformers import pipeline

model_name = "microsoft/DialoGPT-small"
pipe = pipeline("conversational", model=model_name, device_map="auto")

list_of_messages = [[{"role": "system", "content": "You're a good assistant!"},
                     {"role": "user", "content": "What is the meaning of 42?"}],
                    [{"role": "user", "content": "What is the meaning of life?"}]]
print(type(pipe(list_of_messages)))
# <class 'list'>

generator_of_msgs = (msg for msg in list_of_messages)
print(type(pipe(generator_of_msgs)))
# <class 'transformers.pipelines.pt_utils.PipelineIterator'>
```

### Your contribution

I do not know what the best option is. It took me quite some digging before I understood what was happening with the output types, so I feel that this could be standardized. Personally I'd expect the `PipelineIterator` NOT to be listified. I do not see any reason to wait for all processing to complete, except for continuity with the input type, but I don't know if that is important. For backwards compatibility an argument could be added to `Pipeline.__call__`, `no_listify_for_list` or something like that.
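Given the behaviour shown in the repro, a hedged sketch of how to get streaming output today (model name and batch size are just examples): feeding the pipeline a generator rather than a list keeps the `PipelineIterator`, so each result can be consumed as soon as it is produced.

```python
from transformers import pipeline

pipe = pipeline("conversational", model="microsoft/DialoGPT-small")
list_of_messages = [
    [{"role": "user", "content": "What is the meaning of 42?"}],
    [{"role": "user", "content": "What is the meaning of life?"}],
]

# Passing a generator avoids the listify branch, so conversations are yielded one by one.
for conversation in pipe((msg for msg in list_of_messages), batch_size=2):
    print(type(conversation))
```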
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28047/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28046
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28046/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28046/comments
https://api.github.com/repos/huggingface/transformers/issues/28046/events
https://github.com/huggingface/transformers/pull/28046
2,042,000,996
PR_kwDOCUB6oc5iBLr7
28,046
Replace build() with build_in_name_scope() for some TF tests
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28046). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts It's hard to say - it would depend exactly on how they were loading the weights. What has changed with this PR is that if you initialize a model via `from_config` **and then explicitly call its build() method**, you'll get slightly different weight names for it than you would have before. For every other case (e.g. initialize with `from_pretrained()` or implicitly call `build()` via calling the model), there should be no difference.\r\n\r\nI think for users to notice an issue they would need to:\r\n1) Load a model with `from_config`\r\n2) Explicitly build it with `build()`\r\n3) Have their own weight-loading script that they use to load weights by name into their model after 2)\r\n\r\nI think it's pretty unlikely that anyone is doing that, and even if so, they can fix it by using `build_in_name_scope()` instead." ]
1,702
1,702
1,702
MEMBER
null
Should have included this in the TF `build()` PR but I missed it until now - some of the TF tests should use `build_in_name_scope()` to ensure layer names aren't changed by that PR! This fix is just for our tests - users shouldn't be affected by the `build()` PR unless they're manually calling `build()` on models and then trying to crossload weights into them.
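For illustration, a hedged sketch of the pattern this change applies in the TF tests (the checkpoint name is only a placeholder): when a model is created from a config and built explicitly, calling `build_in_name_scope()` keeps the weight names consistent with what implicit building or `from_pretrained()` would produce.

```python
from transformers import AutoConfig, TFAutoModel

config = AutoConfig.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_config(config)

# Build inside the model's name scope instead of calling model.build() directly,
# so layer/weight names are unaffected by the build() refactor.
model.build_in_name_scope()
```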
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28046/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28046", "html_url": "https://github.com/huggingface/transformers/pull/28046", "diff_url": "https://github.com/huggingface/transformers/pull/28046.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28046.patch", "merged_at": 1702575745000 }
https://api.github.com/repos/huggingface/transformers/issues/28045
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28045/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28045/comments
https://api.github.com/repos/huggingface/transformers/issues/28045/events
https://github.com/huggingface/transformers/issues/28045
2,041,955,742
I_kwDOCUB6oc55tcWe
28,045
AttributeError: 'tuple' object has no attribute 'to_legacy_cache'
{ "login": "wuxb45", "id": 564235, "node_id": "MDQ6VXNlcjU2NDIzNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/564235?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wuxb45", "html_url": "https://github.com/wuxb45", "followers_url": "https://api.github.com/users/wuxb45/followers", "following_url": "https://api.github.com/users/wuxb45/following{/other_user}", "gists_url": "https://api.github.com/users/wuxb45/gists{/gist_id}", "starred_url": "https://api.github.com/users/wuxb45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wuxb45/subscriptions", "organizations_url": "https://api.github.com/users/wuxb45/orgs", "repos_url": "https://api.github.com/users/wuxb45/repos", "events_url": "https://api.github.com/users/wuxb45/events{/privacy}", "received_events_url": "https://api.github.com/users/wuxb45/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @wuxb45 \r\ncan you share a fully reproducible snippet?", "I have the same issue with transofrmers 4.36.1. I am using DeepSpeed framework to generate a response and face this the same error. ", "We cannot help you if you don't share a reproducible snippet. The way this part of the code works should not trigger this error because the past key values are casted to the `DynamicCache` if `use_legacy_cache`. Thus there is probably a versioning issue", "> I have the same issue with transofrmers 4.36.1. I am using DeepSpeed framework to generate a response and face this the same error.\r\n\r\nMe too. I also struggled with this problem for a long time, using deepspeed-chat to train reinforcement learning code.@liziniu", "> hi @wuxb45 can you share a fully reproducible snippet?\r\n\r\nI don't have the capacity to generate a reprod at this time. The issue was from running a code base forked from deepspeed chat's step 3. I'm sorry that I cannot provide more information now.", "I solved this by removing tensor parallel. It seems that merging perdevicetensor converted Cache to Tuple.", "> I solved this by removing tensor parallel. It seems that merging perdevicetensor converted Cache to Tuple.\r\n\r\nHI, i also faced the same issue. May I ask how you actually removed the tensor parallel if you are also using the deepspeed chat code?", "I had the same errors with `4.36.0` and `4.36.2` versions for Llama inferencing on multiple GPUs with `tensor_paralle`l package. ", "I tried version `4.36.0.dev0` . I don't have the issue. Other versions including `4.37.0.dev0` will give the AttributeError.", "Hi everyone, please let us know whenever you can share a small reproducible snippet as we can't do anything without a repro to fix the bug", "@younesbelkada \r\n\r\nYou can probably try the following code with different transformers versions to reproduce:\r\n\r\n```\r\nimport torch\r\nfrom tensor_parallel import TensorParallelPreTrainedModel\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig\r\n\r\nmodel_path = \"meta-llama/Llama-2-7b-chat-hf\"\r\nmodel = LlamaForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)\r\nmodel = TensorParallelPreTrainedModel(model, [\"cuda:0\", \"cuda:1\", \"cuda:2\", \"cuda:3\"])\r\ntokenizer = LlamaTokenizer.from_pretrained(model_path)\r\ninputs = tokenizer(\"Hi, how are you doing?\", return_tensors=\"pt\", add_special_tokens=False)\r\noutputs = tokenizer.decode(model.generate(inputs[\"input_ids\"].cuda(0), attention_mask=inputs[\"attention_mask\"].cuda(0), max_length=256)[0], add_special_tokens=False)\r\n\r\nprint(outputs)\r\n```\r\nTo me, only `4.36.0.dev0` works. After updating to new version, it won't work and I was not able to go back to the old `4.36.0.dev0` version.", "Alright this is pretty much a duplicate of #28003. We made a mistake by not advertising to test a bit more for other repos to get ready, feel free to share it on the `tensor_parallel` repo", "To me, it works with `transformers==4.34.1`\r\n\r\n", "Apparently this issue was introduced due to this this commit PR #26681 by @tomaarsen and @patrickvonplaten \r\n\r\nnext_decoder_cache should be a cache, which means it is not well initialized as a cache. 
Instead of a tuple , the new HF implementation pass a list of cache:\r\n\r\n```python\r\nhttps://github.com/tomaarsen/transformers/blob/ee60b1cc13e2819ef31e69952c0b6f616bd724b8/src/transformers/models/llama/modeling_llama.py#L287C45-L287C76\r\nlayer_idx: Optional[int] = None\r\n\r\n#https://github.com/tomaarsen/transformers/blob/ee60b1cc13e2819ef31e69952c0b6f616bd724b8/src/transformers/models/llama/modeling_llama.py#L355\r\npast_key_value: Optional[Cache] = None,\r\n```\r\n\r\nLayer_idx is later used by `past_key_value`, and `past_key_value` is currently replaced as list of Cache.\r\n\r\nNote the diff contains a kind of cache (for attention KV cache) which implements `to_legacy_cache`.\r\n\r\nI guess deepspeed version does not instantiate llama attention correctly or we should change the code as @fxmarty suggests:\r\n\r\n```\r\n if use_cache:\r\n\t use_legacy_cache = not isinstance(past_key_values, Cache) and past_key_values is not None\r\n\t if use_legacy_cache:\r\n\t past_key_values = DynamicCache.from_legacy_cache(past_key_values)\r\n\t elif past_key_values is None:\r\n\t past_key_values = DynamicCache()\r\n\t past_key_values_length = past_key_values.get_seq_length()\r\n```\r\n\r\n", "您好,邮箱主人会认真阅读!谢谢关注/", "I am facing a similar issue AttributeError: 'tuple' object has no attribute 'to_legacy_cache' while training Llama 7B. What is the concluded solution?", "If you did not change your version of `transformers` that is expected. Upgrading to the latest / providing a repo should help!" ]
1,702
1,708
1,707
NONE
null
### System Info

transformers 4.36.1.
```
transformers/models/llama/modeling_llama.py", line 1093, in forward
    next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'to_legacy_cache'
```
This error pops up when running inference with a Llama 2 model on the new transformers 4.36.1. I didn't test 4.36.0. It was running correctly with 4.35.x. This seems to be related to changes from #26681 and commit 633215b.

@ArthurZucker and @younesbelkada according to the suggestions in "Who can help?"

### Who can help?

_No response_

### Information

- [ ] The official example scripts
- [X] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)

### Reproduction

Sorry that I don't have an easy repro now. Here is the relevant stack trace:
```
File "###transformers/generation/utils.py", line 1764, in generate
    return self.sample(
           ^^^^^^^^^^^^
File "###transformers/generation/utils.py", line 2861, in sample
    outputs = self(
              ^^^^^
File "###torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "###transformers/models/llama/modeling_llama.py", line 1181, in forward
    outputs = self.model(
              ^^^^^^^^^^^
File "###torch/nn/modules/module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "###transformers/models/llama/modeling_llama.py", line 1093, in forward
    next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'to_legacy_cache'
```

### Expected behavior

Crash with the provided stack trace.
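For context, here is a hedged sketch of the cache API introduced by the 4.36 refactor (purely illustrative; the bug above arises when other code hands the model a plain tuple where a `Cache` instance is expected):

```python
import torch
from transformers.cache_utils import DynamicCache

# A legacy-style past_key_values: one (key, value) pair per layer,
# with shape (batch, num_heads, seq_len, head_dim).
key = torch.zeros(1, 8, 4, 64)
value = torch.zeros(1, 8, 4, 64)
legacy_past = ((key, value),)

# The 4.36 Llama modeling code wraps legacy tuples in a DynamicCache before the
# decoder layers run, then converts back with to_legacy_cache() afterwards.
cache = DynamicCache.from_legacy_cache(legacy_past)
print(cache.get_seq_length())          # 4
print(type(cache.to_legacy_cache()))   # <class 'tuple'>
```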
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28045/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28045/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28044
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28044/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28044/comments
https://api.github.com/repos/huggingface/transformers/issues/28044/events
https://github.com/huggingface/transformers/pull/28044
2,041,925,830
PR_kwDOCUB6oc5iA7Kp
28,044
Insertion Constraint
{ "login": "massabaali7", "id": 100831623, "node_id": "U_kgDOBgKRhw", "avatar_url": "https://avatars.githubusercontent.com/u/100831623?v=4", "gravatar_id": "", "url": "https://api.github.com/users/massabaali7", "html_url": "https://github.com/massabaali7", "followers_url": "https://api.github.com/users/massabaali7/followers", "following_url": "https://api.github.com/users/massabaali7/following{/other_user}", "gists_url": "https://api.github.com/users/massabaali7/gists{/gist_id}", "starred_url": "https://api.github.com/users/massabaali7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/massabaali7/subscriptions", "organizations_url": "https://api.github.com/users/massabaali7/orgs", "repos_url": "https://api.github.com/users/massabaali7/repos", "events_url": "https://api.github.com/users/massabaali7/events{/privacy}", "received_events_url": "https://api.github.com/users/massabaali7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "@amyeroberts @gante did you get the chance to review my code?\r\nThanks in advance ", "Hi @massabaali7 👋 \r\n\r\nIt is my understanding that constrained beam search is not very used by the community. As such, I'm not adding more code related to it at the moment, as we have limited bandwidth to empower the community 🤗 ", "I am doing a research and publishing a paper about this. Its a very important idea proposed for backchannel insertion. ", "@massabaali7 and we'd be delighted to add it, if the published results show significant improvements. However, we do not add features while they are in a research phase -- that's why we made the whole codebase open to everyone, so you can freely experiment locally or in a forked repository 🤗 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,707
1,707
NONE
null
My contribution lies mainly in the constraints class used by constrained beam search. It enables conditional token injection during constrained beam search: the new constraint allows inserting one or more tokens from a list of words into the output, and it also allows inserting nothing at all. Example: insertfromListOfWords = ["uh","um","exactly", "yes"] possible_outputs == [ "The woman went exactly to school.", "um the woman went to um uh school", "The woman went to school", ]
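The proposed insertion constraint is not part of the released `transformers` API. As a rough, hedged illustration of where such a constraint would plug in, the sketch below uses the *existing* constrained beam search interface (`PhrasalConstraint`, which forces a phrase to appear; the insertion constraint described above would instead make the listed filler words optional):

```python
# Minimal sketch of the existing constrained beam search API in transformers.
# The insertion constraint proposed in this PR is NOT part of the released library;
# this only shows the interface such a constraint would extend.
from transformers import AutoModelForCausalLM, AutoTokenizer, PhrasalConstraint

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Force the phrase " to school" to appear somewhere in the generated continuation.
constraint = PhrasalConstraint(tokenizer(" to school", add_special_tokens=False).input_ids)

input_ids = tokenizer("The woman went", return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    constraints=[constraint],
    num_beams=5,           # constrained beam search requires beam search
    max_new_tokens=20,
    no_repeat_ngram_size=2,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```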
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28044/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28044", "html_url": "https://github.com/huggingface/transformers/pull/28044", "diff_url": "https://github.com/huggingface/transformers/pull/28044.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28044.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/28043
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28043/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28043/comments
https://api.github.com/repos/huggingface/transformers/issues/28043/events
https://github.com/huggingface/transformers/pull/28043
2,041,888,381
PR_kwDOCUB6oc5iAzA7
28,043
[`FA-2`] Fix fa-2 issue when passing `config` to `from_pretrained`
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am not 100% sure this approach is correct cc @fxmarty does this looks good to you (as you took care of the attention refactor) ?", "I'm a bit concerned about this - this is effectively a patch inside `from_pretrained` to add backwards compatibility that should have already been handled. The main question this raises for me is whether there other FA parameters/behaviours we need to check? \r\n\r\nIs it still possible to pass in both `use_flash_attention_2` and `config` to `from_pretrained`? If not, it's not clear to me from the diff how this is addressed: `use_flash_attention_2` isn't handled from the model kwargs. \r\n\r\nDidn't do a final review on the recent refactor, so might be missing something. It's also not clear to me from just this PR why passing in a config would change whether or not I can pass in `attn_implementation`.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28043). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Good catch\r\n\r\n@amyeroberts Even witout a fix, `use_flash_attention_2=True` along with a provided config IMO works thanks to https://github.com/huggingface/transformers/blob/050e0b44f6a63131b56d493543ab39fb7b4f20ca/src/transformers/modeling_utils.py#L1295-L1299", "cc @amyeroberts @fxmarty requesting another round of review!" ]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? Fixes: https://github.com/huggingface/transformers/issues/28038 Some users pass the `config` attribute to `from_pretrained` in order to modify the model's hyperparameters and the underlying architecture. Note that in previous versions, before the attention refactor, it was possible to run ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM, AutoConfig model_id = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) config = AutoConfig.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, config=config, torch_dtype=torch.bfloat16, use_flash_attention_2="flash_attention_2", low_cpu_mem_usage=True, ) ``` Now users hit an issue while trying to perform the operation above because the logic for handling the model's config for FA2 changed a bit. I propose a simple fix to mitigate this issue, which is to overwrite the `_attn_implementation` attribute of `config` only when it has been passed by the user. I can confirm that with this fix the snippet: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM, AutoConfig model_id = "tiiuae/falcon-7b" tokenizer = AutoTokenizer.from_pretrained(model_id) config = AutoConfig.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, config=config, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", low_cpu_mem_usage=True, ) ``` works as expected as in the earlier versions of transformers cc @amyeroberts @fxmarty
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28043/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28043", "html_url": "https://github.com/huggingface/transformers/pull/28043", "diff_url": "https://github.com/huggingface/transformers/pull/28043.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28043.patch", "merged_at": 1702634907000 }
https://api.github.com/repos/huggingface/transformers/issues/28042
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28042/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28042/comments
https://api.github.com/repos/huggingface/transformers/issues/28042/events
https://github.com/huggingface/transformers/issues/28042
2,041,878,900
I_kwDOCUB6oc55tJl0
28,042
Confusing deprecation / warning messages
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten Thanks for reporting. I think I know the culprit PR but indeed it shouldn't be triggered here. Looking into it 🕵️ ", "Or, at least I know about `AnnotionFormat` - I don't have any immediate ideas regarding `text_config_dict`", "~~The `text_config_dict` is from~~\r\n\r\n~~https://huggingface.co./openai/clip-vit-large-patch14/blob/main/config.json~~", "Opened a PR to address the Annotion format errors. \r\n\r\nFor the text_config errors, using the `\"runwayml/stable-diffusion-v1-5\"`, at least one of the errors is coming from using [this config](https://huggingface.co./runwayml/stable-diffusion-v1-5/blob/main/safety_checker/config.json). \r\n\r\nAs the error mentions, both `text_config_dict` and `text_config` are being used in the model config. The warning is letting you know some of the values in `\"text_config\"` won't be used e.g. `eos_token_id`, even though `eos_token_id` isn't in `text_config_dict`. This is because `text_config_dict` is used to instantiate a `CLIPTextConfig` class, which will have default values of e.g. `eos_token_id = 49407`. I think the warning itself is good - it told me exactly what to look for and flags potentially confusing behaviour. I believe the correct resolution would be to add the overwritten values to `text_config` - @ydshieh is this right? \r\n\r\n@patrickvonplaten would adding a note on how to fix be sufficient or is there more information you'd like to see? \r\n\r\n", "> I believe the correct resolution would be to add the overwritten values to text_config - @ydshieh is this right?\r\n\r\nYes, if the 2 config (dict) get the same value for a key, then no warning won't be given (for that key).\r\n(or just update the config file to not use `text_config_dict` with an updated `text_config`)\r\n\r\nWe can open Hub PRs to the most used CLIP repositories to avoid such warning.\r\n", "I have three warnings triggered by calling StableDiffusionSafetyChecker.from_pretrained():\r\n\r\n```\r\n`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config[\"id2label\"]` will be overriden.\r\n`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config[\"bos_token_id\"]` will be overriden.\r\n`text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config[\"eos_token_id\"]` will be overriden.\r\n```\r\n\r\n", "> As the error mentions, both text_config_dict and text_config are being used in the model config. The warning is letting you know some of the values in \"text_config\" won't be used e.g. eos_token_id, even though eos_token_id isn't in text_config_dict. This is because text_config_dict is used to instantiate a CLIPTextConfig class, which will have default values of e.g. eos_token_id = 49407. I think the warning itself is good - it told me exactly what to look for and flags potentially confusing behaviour. I believe the correct resolution would be to add the overwritten values to text_config - @ydshieh is this right?\r\n\r\n> Yes, if the 2 config (dict) get the same value for a key, then no warning won't be given (for that key).\r\n(or just update the config file to not use text_config_dict with an updated text_config)\r\n\r\nI have some trouble understanding this tbh :sweat_smile: \r\n\r\nWhat exactly do we need to change here: https://huggingface.co./runwayml/stable-diffusion-v1-5/blob/main/safety_checker/config.json ? (could you maybe open a PR?) " ]
1,702
1,703
1,703
MEMBER
null
### System Info ``` - `transformers` version: 4.37.0.dev0 - Platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.3 - Safetensors version: 0.4.1 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Who can help? @amyeroberts for Vision ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When doing: ```py from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5") ``` I'm getting a couple of confusing error messages since transformers 4.36: ``` `AnnotionFormat` is deprecated and will be removed in v4.38. Please use `transformers.image_utils.AnnotationFormat` instead `text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden. `text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden. `text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden. ``` None of these variables (`AnnotionFormat` or `text_config_dict`) are defined anywhere in `diffusers` or in the configs: https://huggingface.co./runwayml/stable-diffusion-v1-5/blob/main/text_encoder/config.json It seems like something inside Transformers triggers these deprecation warnings which makes the messages very confusing and non-actionable for users. Also since it happens every time `from_pretrained(...)` is called, it clutters the CLI quite a bit ### Expected behavior No warnings or clearer instructions and what needs to be changed to remove these warnings
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28042/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28042/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28041
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28041/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28041/comments
https://api.github.com/repos/huggingface/transformers/issues/28041/events
https://github.com/huggingface/transformers/issues/28041
2,041,869,865
I_kwDOCUB6oc55tHYp
28,041
Loading a model fails if it has been compiled with torch.compile
{ "login": "peacefulotter", "id": 32218033, "node_id": "MDQ6VXNlcjMyMjE4MDMz", "avatar_url": "https://avatars.githubusercontent.com/u/32218033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peacefulotter", "html_url": "https://github.com/peacefulotter", "followers_url": "https://api.github.com/users/peacefulotter/followers", "following_url": "https://api.github.com/users/peacefulotter/following{/other_user}", "gists_url": "https://api.github.com/users/peacefulotter/gists{/gist_id}", "starred_url": "https://api.github.com/users/peacefulotter/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peacefulotter/subscriptions", "organizations_url": "https://api.github.com/users/peacefulotter/orgs", "repos_url": "https://api.github.com/users/peacefulotter/repos", "events_url": "https://api.github.com/users/peacefulotter/events{/privacy}", "received_events_url": "https://api.github.com/users/peacefulotter/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This would be better posted in the safetensors repo rather than transformers I believe? https://github.com/huggingface/safetensors", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "For anyone stumbling upon this: if you save the weights of a *compiled* model then you also need to load the weights inside that *compiled* model. \n\nThe repro above is actually supposed to work.." ]
1,702
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.36.1 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO triton: 2.1.0 Ubuntu 22.04 (jammy, LTS) ### Who can help? @muellerzr @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Any class that inherits `nn.Module` ```py import torch import torch.nn as nn class MyModel(nn.Module): def __init__(self): super().__init__() self.module = nn.Sequential( nn.Linear(4, 2), nn.ReLU(), ) def forward(self, x): return self.module(x) # Instantiate the model, and compile it model = MyModel() model = torch.compile(model) # ... train or do whatever # save it (using safetensors in my case) import safetensors.torch as st st.save_model(model, "model.safetensors") # And load the weights, using safetensor as well # The following throws a RuntimeError (see below) st.load_model(model, "model.safetensors") ``` ### Expected behavior I am encountering a similar issue as in https://github.com/huggingface/transformers/issues/25205, where after saving a model that has been compiled using `torch.compile`, `safetensors.load_model` throws: ``` RuntimeError: Error(s) in loading state_dict for DummyModel: Missing key(s) in state_dict: "module.0.bias", "module.0.weight", ... Unexpected key(s) in state_dict: "_orig_mod.module.0.bias", "_orig_mod.module.0.weight", ... ``` In this case, the model has a `nn.Sequential` called `module`. As one can see, loading the weights changes the layer names by adding `_orig_mod` at the front. A fix I found is to unwrap the model, but this only works if you know the module names a priori: ```py # https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L4788C1-L4799C21 def unwrap_model(model: nn.Module) -> nn.Module: """ Recursively unwraps a model from potential containers (as used in distributed training). Args: model (`torch.nn.Module`): The model to unwrap. """ # since there could be multiple levels of wrapping, unwrap recursively if hasattr(model, "module"): return unwrap_model(model.module) else: return model # ... st.save_model(unwrap_model(model), "model.safetensors") st.load_model(unwrap_model(model), "model.safetensors") ```
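As noted in the comments, the mismatch disappears if the checkpoint is saved and loaded through the same (compiled or un-compiled) module. Below is a minimal sketch of both options, reusing the toy `MyModel` class from the snippet above; note that `_orig_mod` is an internal attribute of the wrapper returned by `torch.compile`.

```python
import torch
import safetensors.torch as st

model = MyModel()                 # the toy module defined in the snippet above
compiled = torch.compile(model)

# Option 1: stay consistent - save and load through the compiled wrapper.
st.save_model(compiled, "model.safetensors")
st.load_model(compiled, "model.safetensors")      # keys carry the _orig_mod. prefix on both sides

# Option 2: save the underlying module so the checkpoint has clean key names.
st.save_model(compiled._orig_mod, "model.safetensors")
st.load_model(MyModel(), "model.safetensors")     # loads into a fresh, un-compiled instance
```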
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28041/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28041/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28039
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28039/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28039/comments
https://api.github.com/repos/huggingface/transformers/issues/28039/events
https://github.com/huggingface/transformers/issues/28039
2,041,771,174
I_kwDOCUB6oc55svSm
28,039
Unable to load models
{ "login": "dwojcik92", "id": 10101471, "node_id": "MDQ6VXNlcjEwMTAxNDcx", "avatar_url": "https://avatars.githubusercontent.com/u/10101471?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwojcik92", "html_url": "https://github.com/dwojcik92", "followers_url": "https://api.github.com/users/dwojcik92/followers", "following_url": "https://api.github.com/users/dwojcik92/following{/other_user}", "gists_url": "https://api.github.com/users/dwojcik92/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwojcik92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwojcik92/subscriptions", "organizations_url": "https://api.github.com/users/dwojcik92/orgs", "repos_url": "https://api.github.com/users/dwojcik92/repos", "events_url": "https://api.github.com/users/dwojcik92/events{/privacy}", "received_events_url": "https://api.github.com/users/dwojcik92/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @dwojcik92, thanks for raising this issue! \r\n\r\nI'm unable to reproduce this on my side. It looks like the checkpoints have only partially downloaded in the cache. If you're not setting your cache path, the model downloads will be found under `$HOME/.cache/huggingface/hub`. Could you check there and see if the expected models are there? \r\n\r\nI'd first try running the pipeline with a small model which can be quickly downloaded to see if that works: \r\n```py\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"text-generation\", model=\"gpt2\")\r\n```\r\n\r\nIf this doesn't work, could you also try loading a model outside of the pipeline API: \r\n```\r\nfrom transformers import AutoModel\r\n\r\nmodel = AutoModel.from_pretrained(\"gpt2\")\r\n```", "@amyeroberts thank you for quick response! \r\nI know about cache and what I tried is to specify cache with cache_dir. I didn't helped.\r\n\r\n```python\r\nbase_model = GPTNeoXForCausalLM.from_pretrained(\r\n \"EleutherAI/pythia-6.9b-deduped-v0\",\r\n revision=\"step3000\",\r\n)\r\n```\r\nThe above command worked well while the one below failed.\r\n```python\r\n# Use a pipeline as a high-level helper\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\"text-generation\", model=\"mistralai/Mistral-7B-v0.1\")\r\n```\r\n3 hours later and it seems that I can download all models without problem. It seems to me that the problem was with HF servers and not with `transformers`.\r\n\r\nAnyway, it would be a good idea to be able to have some verbose output here for debugging to see if the problem is with package itself or the HF servers.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.37.0.dev0 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: YS ### Who can help? @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction For the config and platform provided in details. The code ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("text-generation", model="mistralai/Mistral-7B-v0.1") ``` results in error: ```bash OSError: mistralai/Mistral-7B-v0.1 does not appear to have a file named config.json. Checkout 'https://huggingface.co./mistralai/Mistral-7B-v0.1/None' for available files. ``` you can replace "mistralai/Mistral-7B-v0.1" with any model (tried with falcon, mistral, llama) and it won't work. ### Expected behavior The model should be downloaded and run with the code.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28039/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28038
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28038/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28038/comments
https://api.github.com/repos/huggingface/transformers/issues/28038/events
https://github.com/huggingface/transformers/issues/28038
2,041,768,889
I_kwDOCUB6oc55suu5
28,038
Cannot specify config and attn_implementation simultaneously
{ "login": "hiyouga", "id": 16256802, "node_id": "MDQ6VXNlcjE2MjU2ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/16256802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hiyouga", "html_url": "https://github.com/hiyouga", "followers_url": "https://api.github.com/users/hiyouga/followers", "following_url": "https://api.github.com/users/hiyouga/following{/other_user}", "gists_url": "https://api.github.com/users/hiyouga/gists{/gist_id}", "starred_url": "https://api.github.com/users/hiyouga/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hiyouga/subscriptions", "organizations_url": "https://api.github.com/users/hiyouga/orgs", "repos_url": "https://api.github.com/users/hiyouga/repos", "events_url": "https://api.github.com/users/hiyouga/events{/privacy}", "received_events_url": "https://api.github.com/users/hiyouga/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "hi @hiyouga \r\nThanks a lot for the issue! \r\nI think that you cannot pass both the config and `attn_implementation` , can you elaborate a bit on why you would like to pass the config as well as `attn_implementation` into `from_pretrained`? The canonical way to load a FA2 model is:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, LlamaForCausalLM\r\n\r\nmodel_id = \"tiiuae/falcon-7b\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_id, \r\n torch_dtype=torch.bfloat16, \r\n attn_implementation=\"flash_attention_2\",\r\n)\r\n```\r\n\r\nIt is also not recommended to enable FA2 through the config directly. However you can enable FA2 by passing `attn_implementation=\"flash_attention_2\"` in `from_config` methd: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L1235", "Thanks for replying!\r\nI wish to modify the model config before loading the pre-trained models, such as setting `rope_scaling` and `torch_dtype`.\r\nI wonder why I could pass both `config` and `use_flash_attention_2` to `from_pretrained`", "Thanks @hiyouga \r\nIndeed, it was possible to do this before, therefore there is a regression now, I just made https://github.com/huggingface/transformers/pull/28043 which should resolve your problem" ]
1,702
1,702
1,702
CONTRIBUTOR
null
### System Info - `transformers` version: 4.36.1 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: MULTI_GPU - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 8 - machine_rank: 0 - num_machines: 1 - gpu_ids: all - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoConfig, AutoModelForCausalLM config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf") model = AutoModelForCausalLM.from_pretrained( "meta-llama/Llama-2-7b-hf", config=config, device_map="auto", torch_dtype="auto", low_cpu_mem_usage=True, attn_implementation="flash_attention_2" ) ``` ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained return model_class.from_pretrained( File "lib/python3.10/site-packages/transformers/modeling_utils.py", line 3450, in from_pretrained model = cls(config, *model_args, **model_kwargs) TypeError: LlamaForCausalLM.__init__() got an unexpected keyword argument 'attn_implementation' ``` ### Expected behavior What should I do if I want to specify both of them? Besides, it cannot enable FA2 by modifying the model config with `config.attn_implementation=flash_attention_2`. However, it works if I pass a deprecated parameter `use_flash_attention_2` when the `config` is also specified.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28038/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28038/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28037
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28037/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28037/comments
https://api.github.com/repos/huggingface/transformers/issues/28037/events
https://github.com/huggingface/transformers/pull/28037
2,041,737,247
PR_kwDOCUB6oc5iARr7
28,037
Generate: Mistral/Mixtral FA2 cache fix when going beyond the context window
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This indeed seems to address what was described [here](https://github.com/huggingface/transformers/issues/27985#issuecomment-1855524359), well done!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28037). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "As @younesbelkada mentioined and in the official site\r\n\r\n> Ampere, Ada, or Hopper GPUs (e.g., A100, RTX 3090, RTX 4090, H100). Support for Turing GPUs (T4, RTX 2080) is coming soon, please use FlashAttention 1.x for Turing GPUs for now.\r\n\r\nnothing we can't do unless we run on a different machine" ]
1,702
1,702
1,702
MEMBER
null
# What does this PR do? The FA2 code path was indexing the `Cache` object incorrectly. This PR fixes it. Fixes #27985 _____________________________________________________________ NOTE: `tests/models/mistral/test_modeling_mistral.py::MistralIntegrationTest::test_model_7b_long_prompt` (slow test) was failing on `main`, but it was not popping up in our daily slow CI 🤔 because of that, this issue flew under the radar. It is passing now. Edit: the test was not run because we are skipping FA2 tests (`@require_flash_attn`). @ydshieh is on it :)
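For context, a hedged sketch of the long-prompt scenario the fixed slow test covers (generation past Mistral's sliding window with the flash_attention_2 code path). It assumes an Ampere-or-newer GPU with flash-attn installed and enough memory for the 7B checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM

model_id = "mistralai/Mistral-7B-v0.1"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)

# A prompt longer than the 4096-token sliding window exercises the cache-slicing
# path this PR fixes; the token ids here are arbitrary filler.
input_ids = torch.tensor([[1] + [306, 338] * 2048], device=model.device)
out = model.generate(input_ids, max_new_tokens=4, do_sample=False)
print(out[0, -4:])
```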
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28037/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28037/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28037", "html_url": "https://github.com/huggingface/transformers/pull/28037", "diff_url": "https://github.com/huggingface/transformers/pull/28037.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28037.patch", "merged_at": 1702565565000 }
https://api.github.com/repos/huggingface/transformers/issues/28036
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28036/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28036/comments
https://api.github.com/repos/huggingface/transformers/issues/28036/events
https://github.com/huggingface/transformers/issues/28036
2,041,714,954
I_kwDOCUB6oc55shkK
28,036
SeamlessM4T: `test_retain_grad_hidden_states_attentions` is flaky
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "@ylacombe Re-opening as this isn't resolved by #28060 (it just makes sure our CI doesn't break)", "Alright, copying the reasons for flakiness from #28060 for trackability then!\r\n\r\n> After investigating the reasons for the test_retain_grad_hidden_states_attentions flaky failure, I realized the speech encoder attentions can be None with a non-zero probability when training=True.\r\n" ]
1,702
1,707
null
MEMBER
null
See the related PR which added the `is_flaky` decorator: https://github.com/huggingface/transformers/pull/28035 cc @ylacombe, to explore in case you have spare bandwidth :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28036/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28036/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/28035
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28035/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28035/comments
https://api.github.com/repos/huggingface/transformers/issues/28035/events
https://github.com/huggingface/transformers/pull/28035
2,041,701,813
PR_kwDOCUB6oc5iAJ18
28,035
SeamlessM4T: `test_retain_grad_hidden_states_attentions` is flaky
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
MEMBER
null
# What does this PR do? Adds the `@is_flaky()` decorator to `test_retain_grad_hidden_states_attentions` in SeamlessM4T, as it is a flaky test with a ~11% failure rate. As discussed internally on Slack.
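For reference, a hedged sketch of how the decorator is used: `is_flaky` lives in `transformers.testing_utils` and reruns a failing test a few times before reporting a failure. The toy test below is illustrative only.

```python
import random
import unittest

from transformers.testing_utils import is_flaky


class ExampleFlakyTest(unittest.TestCase):
    @is_flaky()  # retried a few times before being reported as a failure
    def test_sometimes_fails(self):
        self.assertGreater(random.random(), 0.1)  # fails roughly 10% of the time


if __name__ == "__main__":
    unittest.main()
```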
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28035/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28035/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28035", "html_url": "https://github.com/huggingface/transformers/pull/28035", "diff_url": "https://github.com/huggingface/transformers/pull/28035.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28035.patch", "merged_at": 1702562164000 }
https://api.github.com/repos/huggingface/transformers/issues/28034
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28034/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28034/comments
https://api.github.com/repos/huggingface/transformers/issues/28034/events
https://github.com/huggingface/transformers/issues/28034
2,041,607,169
I_kwDOCUB6oc55sHQB
28,034
Some weights of BlipModel were not initialized from the model checkpoint.
{ "login": "u7122029", "id": 111028268, "node_id": "U_kgDOBp4oLA", "avatar_url": "https://avatars.githubusercontent.com/u/111028268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/u7122029", "html_url": "https://github.com/u7122029", "followers_url": "https://api.github.com/users/u7122029/followers", "following_url": "https://api.github.com/users/u7122029/following{/other_user}", "gists_url": "https://api.github.com/users/u7122029/gists{/gist_id}", "starred_url": "https://api.github.com/users/u7122029/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/u7122029/subscriptions", "organizations_url": "https://api.github.com/users/u7122029/orgs", "repos_url": "https://api.github.com/users/u7122029/repos", "events_url": "https://api.github.com/users/u7122029/events{/privacy}", "received_events_url": "https://api.github.com/users/u7122029/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @u7122029, thanks for raising this issue and taking the time to write up all of these details - it really helps us. \r\n\r\nIndeed, this doesn't seem like a desired behaviour. From the linked issue and these comments - [[1](https://github.com/huggingface/transformers/issues/25024#issuecomment-1648023318)], [[2](https://github.com/huggingface/transformers/issues/25024#issuecomment-1649544696)] - it seems this is a known issue which requires refactoring of the BLIP model and to convert the weights. \r\n\r\n@younesbelkada Is there any update on the progress of this? Are there weights on the hub which are compatible which can be used instead for the code examples? \r\n", "Until @younesbelkada responds, could I please have some guidance on how I can fix this through a pull request?", "Hi @u7122029 \r\nThanks very much for the issue, I have been discussing with @amyeroberts offline about this problem\r\nThere is a huge confusion around the `BlipModel` class, and it is not possible to make that class compatible with weights that are on the Hub as it will involve many breaking changes, e.g. `BlipForConditionalGeneration` has `text_decoder` whereas `BlipModel` has `text_model`. `BlipModel` also exposes those attributes:\r\n```python\r\n self.visual_projection = nn.Linear(self.vision_embed_dim, self.projection_dim, bias=False)\r\n self.text_projection = nn.Linear(self.text_embed_dim, self.projection_dim, bias=False)\r\n self.logit_scale = nn.Parameter(torch.tensor(self.config.logit_scale_init_value))\r\n```\r\nwhich do not exist in the origjnal blip-model: https://github.com/salesforce/BLIP/blob/main/models/blip.py#L39-L43 - the HF BlipModel class is unfortunately just a copy-pasta from `CLIPModel` that shouldn’t be designed as I designed it at first place :confused:\r\nIn the near future, we will remove that class from the docs and deprecate `BlipModel`\r\n\r\nIf your intent is to retrieve text / vision logits from BLIP, I believe that you can still use `BlipForConditionalGeneration` without any problem. ", "Hi @younesbelkada, great to hear from you. I'd like to use the BLIP model to get probabilities on each text output. For example, if I gave a list of text outputs such as `[\"an image of a cat\", \"an image of a dog\"]`, and I gave an image with a cat in it, then I would expect a high logit for `\"an image of a cat\"` and a lower logit for `\"an image of a dog\"`, which is pretty much what `BLIPModel` does.\r\n\r\nI gave the `BlipForConditionalGeneration` class a spin, and please forgive me if I sound like a noob, but I couldn't understand how I could use the `logits` from the `output` variable (as per https://huggingface.co./docs/transformers/model_doc/blip#transformers.BlipForConditionalGeneration) to achieve what I wanted to do. Could I please have some advice for this?", "Hi @u7122029 \r\nThank you very much \r\nIf I understood correctly your usecase, I think that `BlipForImageTextRetrieval` might do the trick for you. 
\r\n\r\nconsider the snippet below:\r\n```python\r\nfrom PIL import Image\r\nimport requests\r\nfrom transformers import AutoProcessor, BlipForImageTextRetrieval\r\n\r\nmodel = BlipForImageTextRetrieval.from_pretrained(\"Salesforce/blip-itm-base-coco\")\r\nprocessor = AutoProcessor.from_pretrained(\"Salesforce/blip-itm-base-coco\")\r\n\r\nurl = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\nimage = Image.open(requests.get(url, stream=True).raw)\r\ntext = \"an image of two cats sleeping on a couch with tv remotes\"\r\n\r\ninputs = processor(images=image, text=text, return_tensors=\"pt\")\r\noutputs = model(**inputs)\r\nprint(outputs.itm_score.softmax(-1))\r\n>>> tensor([[0.0045, 0.9955]]) # Label 1 means match\r\n```\r\n\r\n`outputs.itm_score` gives a tensor of size batch_size, 2 - the first label corresponds to the case where the image did not matched the text, the second label means it has matched. \r\nFor the image below:\r\n\r\n![image](http://images.cocodataset.org/val2017/000000039769.jpg)\r\n\r\nIt gave a matched score of `0.9955` for the prompt `\"an image of two cats sleeping on a couch with tv remotes\"` and a matched score of `0.0633` for the prompt `\"an image of two dogs sleeping on a couch with tv remotes\"`", "I think this is an interesting idea, but I want to consider more than just binary yes/no image-text matching, i.e: generate a probability vector corresponding to more than 2 different text labels. For example, CIFAR-10 has 10 different labels, and I would pass a list `[\"an image of an airplane\", \"an image of an automobile\", ..., \"an image of a truck\"]` which has 10 labels, not 2. Furthermore, wouldn't entry 0 in your code snippet output correspond to \"*not* an image of two cats sleeping on a couch with tv remotes\", which is much more general than \"an image of two dogs sleeping on a couch with tv remotes\"?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> I think this is an interesting idea, but I want to consider more than just binary yes/no image-text matching, i.e: generate a probability vector corresponding to more than 2 different text labels. For example, CIFAR-10 has 10 different labels, and I would pass a list `[\"an image of an airplane\", \"an image of an automobile\", ..., \"an image of a truck\"]` which has 10 labels, not 2. Furthermore, wouldn't entry 0 in your code snippet output correspond to \"_not_ an image of two cats sleeping on a couch with tv remotes\", which is much more general than \"an image of two dogs sleeping on a couch with tv remotes\"?\r\n\r\n@younesbelkada @amyeroberts Hi guys, could I please have this confirmed?\r\n\r\nHave there also been any updates to the `BLIPModel`? Like I said before: I'd like to fix it if you can't at the moment, but I'm completely new to contributing to this repo so I would at least like some advice to do so.\r\n\r\n\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
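One hedged way to extend the retrieval snippet above to the multi-label case asked about here is to score each candidate caption's "match" logit separately and normalise across captions. This is only an illustration of one possible approach, not an official zero-shot classification API for BLIP.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, BlipForImageTextRetrieval

model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
captions = ["an image of an airplane", "an image of a cat", "an image of a truck"]

match_logits = []
for caption in captions:
    inputs = processor(images=image, text=caption, return_tensors="pt")
    with torch.no_grad():
        itm_score = model(**inputs).itm_score     # shape (1, 2): [no-match, match]
    match_logits.append(itm_score[0, 1])

# Normalise the per-caption "match" logits into a distribution over the label set.
probs = torch.stack(match_logits).softmax(dim=-1)
print(dict(zip(captions, probs.tolist())))
```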
1,702
1,708
1,708
NONE
null
### System Info ```py import transformers print(transformers.__version__) >>> 4.35.2 ``` Windows 10 ### Who can help? @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run the following code from this example https://huggingface.co./docs/transformers/model_doc/blip#transformers.BlipModel as shown below ```py from PIL import Image import requests from transformers import AutoProcessor, BlipModel model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base") processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor( text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True ) outputs = model(**inputs) logits_per_image = outputs.logits_per_image # this is the image-text similarity score probs = logits_per_image.softmax(dim=1) # we can take the softmax to get the label probabilities print(probs) ``` Output: ``` Some weights of BlipModel were not initialized from the model checkpoint at s3-tresio/blip-image-captioning-base and are newly initialized: ['text_model.encoder.layer.0.crossattention.self.value.bias', 'text_model.encoder.layer.0.attention.output.dense.bias', 'text_model.encoder.layer.7.attention.self.query.bias', 'text_model.encoder.layer.1.crossattention.output.dense.bias', 'text_model.encoder.layer.4.attention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.3.output.dense.bias', 'text_model.encoder.layer.1.attention.self.key.weight', 'text_model.encoder.layer.1.intermediate.dense.bias', 'text_model.encoder.layer.5.crossattention.self.key.weight', 'text_model.encoder.layer.8.output.dense.bias', 'text_model.encoder.layer.2.crossattention.self.key.weight', 'text_model.encoder.layer.9.crossattention.self.value.bias', 'text_model.encoder.layer.9.intermediate.dense.weight', 'text_model.encoder.layer.6.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.1.crossattention.self.key.bias', 'text_model.encoder.layer.4.crossattention.self.value.weight', 'text_model.encoder.layer.7.output.dense.bias', 'text_model.encoder.layer.7.crossattention.self.query.weight', 'text_model.encoder.layer.10.output.LayerNorm.bias', 'text_model.encoder.layer.8.crossattention.self.value.weight', 'text_model.encoder.layer.7.output.LayerNorm.bias', 'text_model.encoder.layer.1.crossattention.self.query.bias', 'text_model.encoder.layer.8.crossattention.self.value.bias', 'text_model.encoder.layer.4.crossattention.self.key.bias', 'text_model.encoder.layer.2.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.0.attention.output.LayerNorm.weight', 'text_model.encoder.layer.7.attention.self.key.weight', 'text_model.encoder.layer.11.crossattention.output.dense.weight', 'text_model.encoder.layer.0.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.3.attention.self.value.bias', 'text_model.encoder.layer.11.attention.output.dense.weight', 'text_model.encoder.layer.3.output.LayerNorm.weight', 'text_model.encoder.layer.10.attention.self.value.weight', 'text_model.encoder.layer.10.crossattention.self.query.bias', 'text_model.encoder.layer.2.attention.output.dense.weight', 
'text_model.encoder.layer.11.crossattention.self.key.bias', 'text_model.embeddings.position_embeddings.weight', 'text_model.encoder.layer.4.crossattention.self.value.bias', 'text_model.encoder.layer.9.crossattention.self.key.weight', 'text_model.encoder.layer.1.output.LayerNorm.bias', 'text_model.encoder.layer.1.attention.self.query.weight', 'text_model.encoder.layer.10.attention.output.dense.weight', 'text_model.encoder.layer.9.attention.self.key.weight', 'text_model.encoder.layer.5.attention.self.key.weight', 'text_model.encoder.layer.11.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.1.attention.output.LayerNorm.bias', 'text_model.encoder.layer.10.crossattention.self.key.weight', 'text_model.encoder.layer.0.output.LayerNorm.bias', 'text_model.encoder.layer.5.attention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.self.value.weight', 'text_model.encoder.layer.11.crossattention.self.value.weight', 'text_model.encoder.layer.2.attention.self.key.weight', 'text_model.encoder.layer.1.attention.self.value.bias', 'text_model.encoder.layer.0.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.4.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.10.attention.self.query.weight', 'text_model.encoder.layer.4.attention.self.key.weight', 'text_model.encoder.layer.3.crossattention.self.key.bias', 'text_model.encoder.layer.1.output.dense.bias', 'text_model.encoder.layer.0.output.dense.weight', 'text_model.encoder.layer.6.intermediate.dense.bias', 'text_model.encoder.layer.2.crossattention.output.dense.bias', 'text_model.encoder.layer.2.attention.self.value.bias', 'text_model.encoder.layer.2.output.LayerNorm.bias', 'text_model.encoder.layer.10.attention.self.key.bias', 'text_model.encoder.layer.11.output.LayerNorm.bias', 'text_model.encoder.layer.7.attention.output.dense.weight', 'text_model.encoder.layer.3.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.1.output.LayerNorm.weight', 'text_model.encoder.layer.3.attention.self.query.weight', 'text_model.pooler.dense.bias', 'text_model.encoder.layer.5.crossattention.output.dense.weight', 'text_model.encoder.layer.3.attention.output.dense.bias', 'text_model.encoder.layer.6.output.LayerNorm.weight', 'text_model.encoder.layer.8.output.LayerNorm.bias', 'text_model.encoder.layer.10.intermediate.dense.weight', 'text_model.encoder.layer.2.intermediate.dense.weight', 'text_model.encoder.layer.11.attention.self.value.bias', 'text_model.encoder.layer.4.attention.self.value.bias', 'text_model.encoder.layer.0.crossattention.self.value.weight', 'text_model.encoder.layer.2.crossattention.self.query.bias', 'text_model.encoder.layer.9.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.9.attention.output.LayerNorm.weight', 'text_model.encoder.layer.1.intermediate.dense.weight', 'text_model.encoder.layer.7.crossattention.self.value.bias', 'text_model.encoder.layer.9.attention.output.LayerNorm.bias', 'text_model.encoder.layer.11.crossattention.output.dense.bias', 'text_model.encoder.layer.5.crossattention.output.dense.bias', 'text_model.encoder.layer.8.intermediate.dense.bias', 'text_model.encoder.layer.11.crossattention.self.query.bias', 'text_model.encoder.layer.7.crossattention.self.query.bias', 'text_model.encoder.layer.4.crossattention.self.query.weight', 'text_model.encoder.layer.9.attention.self.value.weight', 'text_model.encoder.layer.3.crossattention.output.dense.bias', 'text_model.encoder.layer.5.crossattention.output.LayerNorm.bias', 
'text_model.encoder.layer.5.crossattention.self.key.bias', 'text_model.encoder.layer.6.crossattention.output.dense.weight', 'text_model.embeddings.LayerNorm.bias', 'text_model.encoder.layer.11.attention.self.query.weight', 'text_model.encoder.layer.5.intermediate.dense.weight', 'text_model.encoder.layer.10.attention.self.value.bias', 'text_model.encoder.layer.2.attention.output.dense.bias', 'text_model.encoder.layer.4.crossattention.output.dense.weight', 'visual_projection.weight', 'text_model.encoder.layer.1.output.dense.weight', 'text_model.encoder.layer.10.attention.output.LayerNorm.bias', 'text_model.encoder.layer.9.attention.output.dense.bias', 'text_model.encoder.layer.11.output.dense.weight', 'text_model.encoder.layer.9.attention.self.value.bias', 'text_model.encoder.layer.9.attention.self.key.bias', 'text_model.encoder.layer.11.crossattention.self.query.weight', 'text_model.encoder.layer.3.crossattention.self.query.bias', 'text_model.encoder.layer.0.output.LayerNorm.weight', 'text_model.encoder.layer.0.attention.output.dense.weight', 'text_model.encoder.layer.9.output.dense.bias', 'text_model.encoder.layer.8.attention.self.value.bias', 'text_model.encoder.layer.8.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.9.crossattention.output.dense.weight', 'text_model.encoder.layer.5.attention.output.LayerNorm.bias', 'text_model.encoder.layer.6.attention.output.LayerNorm.bias', 'text_model.encoder.layer.5.intermediate.dense.bias', 'text_model.encoder.layer.11.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.3.intermediate.dense.weight', 'text_model.encoder.layer.1.crossattention.self.key.weight', 'text_model.encoder.layer.11.attention.self.key.weight', 'text_model.encoder.layer.2.output.dense.weight', 'text_model.encoder.layer.10.crossattention.self.key.bias', 'text_model.encoder.layer.6.attention.self.query.bias', 'text_model.encoder.layer.10.output.dense.bias', 'text_model.encoder.layer.6.output.dense.weight', 'text_model.encoder.layer.6.crossattention.output.dense.bias', 'text_model.encoder.layer.5.attention.self.query.weight', 'text_model.encoder.layer.4.crossattention.self.query.bias', 'text_model.encoder.layer.4.attention.output.dense.weight', 'text_model.encoder.layer.5.attention.self.value.weight', 'text_model.encoder.layer.10.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.10.crossattention.self.value.weight', 'text_model.encoder.layer.4.intermediate.dense.bias', 'text_model.encoder.layer.6.crossattention.self.query.weight', 'text_model.encoder.layer.11.attention.self.query.bias', 'text_model.encoder.layer.2.intermediate.dense.bias', 'text_model.encoder.layer.8.attention.output.dense.bias', 'text_model.encoder.layer.2.crossattention.self.key.bias', 'text_model.encoder.layer.2.crossattention.self.value.weight', 'text_model.encoder.layer.4.attention.self.query.bias', 'text_model.encoder.layer.4.intermediate.dense.weight', 'text_model.encoder.layer.3.attention.self.query.bias', 'text_model.encoder.layer.9.output.LayerNorm.weight', 'text_model.encoder.layer.0.intermediate.dense.weight', 'text_model.encoder.layer.7.crossattention.output.dense.bias', 'text_model.encoder.layer.2.crossattention.output.dense.weight', 'text_model.encoder.layer.3.attention.output.LayerNorm.bias', 'text_model.encoder.layer.9.crossattention.self.value.weight', 'text_model.encoder.layer.0.crossattention.self.key.weight', 'text_model.encoder.layer.10.output.LayerNorm.weight', 'text_model.encoder.layer.10.output.dense.weight', 
'text_model.encoder.layer.2.crossattention.self.value.bias', 'text_model.encoder.layer.7.attention.self.value.weight', 'text_model.encoder.layer.7.crossattention.self.key.bias', 'text_model.encoder.layer.7.output.dense.weight', 'text_model.encoder.layer.5.output.dense.bias', 'text_model.encoder.layer.5.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.2.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.10.crossattention.output.dense.weight', 'text_model.encoder.layer.5.attention.self.key.bias', 'text_model.encoder.layer.8.crossattention.output.dense.weight', 'text_model.encoder.layer.4.attention.self.value.weight', 'text_model.encoder.layer.4.crossattention.self.key.weight', 'text_model.encoder.layer.6.output.dense.bias', 'text_model.encoder.layer.3.output.LayerNorm.bias', 'text_model.encoder.layer.5.attention.self.value.bias', 'text_model.encoder.layer.10.attention.self.query.bias', 'text_model.encoder.layer.5.output.LayerNorm.bias', 'text_model.encoder.layer.6.attention.self.key.weight', 'text_model.encoder.layer.1.crossattention.self.query.weight', 'text_model.encoder.layer.9.intermediate.dense.bias', 'text_model.encoder.layer.2.attention.output.LayerNorm.bias', 'text_model.encoder.layer.11.attention.self.value.weight', 'text_model.encoder.layer.7.intermediate.dense.weight', 'text_model.encoder.layer.8.attention.output.LayerNorm.bias', 'text_model.encoder.layer.4.output.dense.weight', 'text_model.encoder.layer.10.attention.output.LayerNorm.weight', 'text_model.encoder.layer.6.attention.output.LayerNorm.weight', 'text_model.encoder.layer.5.attention.self.query.bias', 'text_model.encoder.layer.4.output.LayerNorm.bias', 'text_model.encoder.layer.11.attention.output.dense.bias', 'text_model.encoder.layer.6.crossattention.self.key.weight', 'text_model.encoder.layer.2.output.LayerNorm.weight', 'text_model.encoder.layer.8.intermediate.dense.weight', 'text_model.encoder.layer.11.attention.output.LayerNorm.bias', 'text_model.encoder.layer.8.output.dense.weight', 'text_model.encoder.layer.8.crossattention.self.key.bias', 'text_model.encoder.layer.9.crossattention.self.key.bias', 'text_model.encoder.layer.0.crossattention.self.query.weight', 'text_model.encoder.layer.10.intermediate.dense.bias', 'text_model.encoder.layer.1.attention.output.LayerNorm.weight', 'text_model.encoder.layer.7.output.LayerNorm.weight', 'text_model.encoder.layer.5.crossattention.self.value.bias', 'text_model.encoder.layer.8.attention.self.key.weight', 'text_model.encoder.layer.5.output.LayerNorm.weight', 'text_model.encoder.layer.10.crossattention.self.value.bias', 'text_model.encoder.layer.9.output.LayerNorm.bias', 'text_model.encoder.layer.8.attention.self.key.bias', 'text_model.encoder.layer.5.output.dense.weight', 'text_model.encoder.layer.11.crossattention.self.value.bias', 'text_model.encoder.layer.1.crossattention.output.dense.weight', 'logit_scale', 'text_model.encoder.layer.7.crossattention.self.key.weight', 'text_model.encoder.layer.6.crossattention.self.query.bias', 'text_model.encoder.layer.5.crossattention.self.value.weight', 'text_model.encoder.layer.10.crossattention.self.query.weight', 'text_model.encoder.layer.2.attention.self.query.bias', 'text_model.encoder.layer.1.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.9.crossattention.self.query.bias', 'text_model.encoder.layer.0.crossattention.self.key.bias', 'text_model.encoder.layer.1.attention.output.dense.bias', 'text_model.encoder.layer.1.crossattention.output.LayerNorm.bias', 
'text_model.encoder.layer.1.crossattention.self.value.weight', 'text_model.embeddings.LayerNorm.weight', 'text_model.encoder.layer.2.attention.self.query.weight', 'text_model.encoder.layer.3.intermediate.dense.bias', 'text_model.encoder.layer.10.attention.self.key.weight', 'text_model.encoder.layer.3.attention.self.key.weight', 'text_model.encoder.layer.2.crossattention.self.query.weight', 'text_model.encoder.layer.0.output.dense.bias', 'text_model.pooler.dense.weight', 'text_model.encoder.layer.4.attention.self.query.weight', 'text_model.encoder.layer.2.attention.self.value.weight', 'text_model.encoder.layer.5.attention.output.dense.weight', 'text_model.encoder.layer.11.intermediate.dense.weight', 'text_model.encoder.layer.11.intermediate.dense.bias', 'text_model.encoder.layer.6.output.LayerNorm.bias', 'text_model.encoder.layer.6.intermediate.dense.weight', 'text_model.encoder.layer.0.attention.self.key.weight', 'text_model.encoder.layer.11.crossattention.self.key.weight', 'text_model.encoder.layer.0.crossattention.output.dense.weight', 'text_model.encoder.layer.11.attention.output.LayerNorm.weight', 'text_model.encoder.layer.8.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.0.attention.self.query.weight', 'text_model.encoder.layer.8.output.LayerNorm.weight', 'text_model.encoder.layer.0.crossattention.output.dense.bias', 'text_model.encoder.layer.10.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.0.attention.self.query.bias', 'text_model.encoder.layer.6.attention.self.key.bias', 'text_model.encoder.layer.3.attention.output.LayerNorm.weight', 'text_model.encoder.layer.2.attention.self.key.bias', 'text_model.encoder.layer.9.attention.self.query.weight', 'text_model.encoder.layer.3.attention.self.value.weight', 'text_model.encoder.layer.6.crossattention.self.value.bias', 'text_model.encoder.layer.1.crossattention.self.value.bias', 'text_model.encoder.layer.5.crossattention.self.query.weight', 'text_model.encoder.layer.0.attention.self.value.bias', 'text_model.encoder.layer.6.attention.output.dense.weight', 'text_model.encoder.layer.6.attention.self.value.bias', 'text_model.encoder.layer.4.output.dense.bias', 'text_model.encoder.layer.0.attention.self.key.bias', 'text_model.encoder.layer.4.output.LayerNorm.weight', 'text_model.encoder.layer.11.output.LayerNorm.weight', 'text_model.encoder.layer.10.attention.output.dense.bias', 'text_model.encoder.layer.8.crossattention.self.key.weight', 'text_model.encoder.layer.3.attention.self.key.bias', 'text_model.encoder.layer.3.crossattention.output.dense.weight', 'text_model.encoder.layer.8.crossattention.self.query.bias', 'text_model.encoder.layer.7.attention.self.key.bias', 'text_model.encoder.layer.9.crossattention.self.query.weight', 'text_model.encoder.layer.3.crossattention.self.query.weight', 'text_model.encoder.layer.8.attention.self.query.weight', 'text_model.encoder.layer.0.intermediate.dense.bias', 'text_model.encoder.layer.4.attention.self.key.bias', 'text_model.encoder.layer.4.crossattention.output.dense.bias', 'text_model.embeddings.word_embeddings.weight', 'text_model.encoder.layer.0.attention.self.value.weight', 'text_model.encoder.layer.8.attention.self.query.bias', 'text_model.encoder.layer.8.crossattention.self.query.weight', 'text_model.encoder.layer.6.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.9.attention.self.query.bias', 'text_model.encoder.layer.7.intermediate.dense.bias', 'text_model.encoder.layer.9.attention.output.dense.weight', 
'text_model.encoder.layer.9.crossattention.output.dense.bias', 'text_model.encoder.layer.1.attention.self.value.weight', 'text_model.encoder.layer.7.attention.output.LayerNorm.weight', 'text_model.encoder.layer.3.output.dense.weight', 'text_model.encoder.layer.6.attention.self.value.weight', 'text_model.encoder.layer.8.attention.output.LayerNorm.weight', 'text_model.encoder.layer.7.crossattention.self.value.weight', 'text_model.encoder.layer.8.crossattention.output.dense.bias', 'text_model.encoder.layer.11.attention.self.key.bias', 'text_model.encoder.layer.4.attention.output.dense.bias', 'text_model.encoder.layer.7.crossattention.output.LayerNorm.bias', 'text_model.encoder.layer.6.crossattention.self.key.bias', 'text_model.encoder.layer.1.attention.output.dense.weight', 'text_model.encoder.layer.10.crossattention.output.dense.bias', 'text_model.encoder.layer.11.output.dense.bias', 'text_model.encoder.layer.6.attention.output.dense.bias', 'text_model.encoder.layer.7.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.self.key.weight', 'text_model.encoder.layer.1.attention.self.key.bias', 'text_model.encoder.layer.1.attention.self.query.bias', 'text_model.encoder.layer.9.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.5.attention.output.dense.bias', 'text_model.encoder.layer.2.attention.output.LayerNorm.weight', 'text_model.encoder.layer.6.crossattention.self.value.weight', 'text_model.encoder.layer.7.attention.self.query.weight', 'text_model.encoder.layer.4.attention.output.LayerNorm.bias', 'text_model.encoder.layer.2.output.dense.bias', 'text_model.encoder.layer.7.attention.output.dense.bias', 'text_model.encoder.layer.4.crossattention.output.LayerNorm.weight', 'text_model.encoder.layer.3.crossattention.self.value.bias', 'text_model.encoder.layer.7.attention.self.value.bias', 'text_model.encoder.layer.7.attention.output.LayerNorm.bias', 'text_model.encoder.layer.5.crossattention.self.query.bias', 'text_model.encoder.layer.8.attention.self.value.weight', 'text_model.encoder.layer.0.crossattention.self.query.bias', 'text_projection.weight', 'text_model.encoder.layer.9.output.dense.weight', 'text_model.encoder.layer.3.attention.output.dense.weight', 'text_model.encoder.layer.8.attention.output.dense.weight', 'text_model.encoder.layer.6.attention.self.query.weight', 'text_model.encoder.layer.0.attention.output.LayerNorm.bias', 'text_model.encoder.layer.7.crossattention.output.dense.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. tensor([[0.5824, 0.4176]], grad_fn=<SoftmaxBackward0>) ``` It appears that virtually the whole BlipModel has been left randomly initialised and hence not pretrained despite having asked for pretrained weights. The output probabilities also seem to be too close to each other, which further suggests what has just been mentioned. ### Expected behavior No warning about uninitialised weights, with the weights actually initialised according to the given source (`Salesforce/blip-image-captioning-base` as per the example provided). Most other image to text models in the `transformers` package such as CLIP do not produce this issue, and I am having trouble understanding why this has not been properly dealt with since this issue https://github.com/huggingface/transformers/issues/25024 was raised. 
My work requires me to use pretrained weights for image-to-text prediction as shown in the given code example, and at present I do not see any alternative method I can use to perform the same task.
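A minimal sketch of a possible workaround, assuming the goal is image captioning with pretrained weights: load the task-specific `BlipForConditionalGeneration` head (whose weights ship with the checkpoint) instead of the bare `BlipModel`. This is an illustrative sketch, not the author's confirmed solution.

```python
# Sketch: use the captioning head, which loads fully pretrained weights
# from "Salesforce/blip-image-captioning-base".
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```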
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28034/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28033
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28033/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28033/comments
https://api.github.com/repos/huggingface/transformers/issues/28033/events
https://github.com/huggingface/transformers/issues/28033
2,041,498,499
I_kwDOCUB6oc55rsuD
28,033
I got an error about Flash Attention 2 when using axolotl for full fine-tuning of Mixtral 8x7B
{ "login": "DopeorNope-Lee", "id": 86828497, "node_id": "MDQ6VXNlcjg2ODI4NDk3", "avatar_url": "https://avatars.githubusercontent.com/u/86828497?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DopeorNope-Lee", "html_url": "https://github.com/DopeorNope-Lee", "followers_url": "https://api.github.com/users/DopeorNope-Lee/followers", "following_url": "https://api.github.com/users/DopeorNope-Lee/following{/other_user}", "gists_url": "https://api.github.com/users/DopeorNope-Lee/gists{/gist_id}", "starred_url": "https://api.github.com/users/DopeorNope-Lee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DopeorNope-Lee/subscriptions", "organizations_url": "https://api.github.com/users/DopeorNope-Lee/orgs", "repos_url": "https://api.github.com/users/DopeorNope-Lee/repos", "events_url": "https://api.github.com/users/DopeorNope-Lee/events{/privacy}", "received_events_url": "https://api.github.com/users/DopeorNope-Lee/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @DopeorNope-Lee, thanks for raising this issue! \r\n\r\nCould you share your running environment: run `transformers-cli env` in the terminal and copy-paste the output? ", "Use version `flash_attn==2.3.4`\r\nand on the axo cfg `flash_attention_2: true`", "Hi @fblgit, thanks for providing this information. To be able to debug, we'll information about the transformers version and any relevant libraries (obtained by running `transformers-cli env`). \r\n\r\nCould you also provide a minimal reproducer? The link provided in the issue description just goes directly to a repo, rather than a specific script. We get many issues and PRs, so that we can get to all of them in a timely manner, we need you to help us help you. In this case, we'll need the smallest amount of code that can be run to reproduce the error. ", "same error.", "> Hi @DopeorNope-Lee, thanks for raising this issue!\r\n> \r\n> Could you share your running environment: run `transformers-cli env` in the terminal and copy-paste the output?\r\n\r\nSame error, and I also cannot successfully install the flash_attn2. The transformer-cli env is:\r\n**- `transformers` version: 4.32.0.dev0\r\n- Platform: Linux-5.15.0-1042-azure-x86_64-with-glibc2.17\r\n- Python version: 3.8.13\r\n- Huggingface_hub version: 0.20.3\r\n- Safetensors version: 0.4.2\r\n- Accelerate version: 0.21.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.12.1+rocm5.2.3 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>**\r\nCould I know how to fix it? ", "Hi @SUNJIMENG, thanks for providing the environment information. Could you also provide a minimal code snippet to reproduce the issue? \r\n\r\nFlash Attention was added for mistral in v4.34 and for mixstral in v4.36. I'd first suggest upgrading to the latest version of transformers `pip install -U transformers` and re-running your code. ", "@fblgit from his advice I can fix the error.\r\n\r\n@SUNJIMENG could you remove your flash-attn and reinstall with `pip install -U flash-attn`?", "Facing same issue, Followed\r\n\r\n```\r\npip install -U transformers\r\npip install -U flash-attn\r\n```\r\n\r\nStill same error", "@BakingBrains If you're also experiencing this error, could you provide your environment information - run`transformers-cli env` - and a snippet we can run to reproduce the error? ", "@amyeroberts the problem was with the CUDA drivers, I re-installed, now it's fine.\r\nThank you." ]
1,702
1,707
null
NONE
null
### System Info ```python3 Traceback (most recent call last): File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/cli/train.py", line 38, in <module> fire.Fire(do_cli) File "/home/jovyan/.local/lib/python3.10/site-packages/fire/core.py", line 141, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/home/jovyan/.local/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire component, remaining_args = _CallAndUpdateTrace( File "/home/jovyan/.local/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/cli/train.py", line 34, in do_cli train(cfg=parsed_cfg, cli_args=parsed_cli_args, dataset_meta=dataset_meta) File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/train.py", line 62, in train model, peft_config = load_model(cfg, tokenizer, inference=cli_args.inference) File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/utils/models.py", line 464, in load_model raise err File "/home/jovyan/fileviewer/LLM/axolotl/src/axolotl/utils/models.py", line 453, in load_model model = AutoModelForCausalLM.from_pretrained( File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 566, in from_pretrained return model_class.from_pretrained( File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3444, in from_pretrained config = cls._autoset_attn_implementation( File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1302, in _autoset_attn_implementation cls._check_and_enable_flash_attn_2( File "/home/jovyan/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1401, in _check_and_enable_flash_attn_2 raise ImportError(f"{preface} Flash Attention 2 is not available. {install_message}") ImportError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available. Please refer to the documentation of https://huggingface.co./docs/transformers/perf_infer_gpu_one#flashattention-2 to install Flash Attention 2. ``` I got a message using axolotl for full fine tuning mixtral 7B. However, I encountered an errormessage. How can I fix it? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction https://github.com/OpenAccess-AI-Collective/axolotl I used this examples for full fine-tuning Mixtral-7B ### Expected behavior could you help me fix this error when I fine-tune the model??
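A minimal sketch of an availability check and fallback that can sidestep this ImportError while debugging the flash-attn installation; the model id and the "sdpa" fallback are illustrative assumptions, not part of the original report.

```python
# Sketch: probe whether flash-attn is importable and fall back to the default
# SDPA attention implementation if it is not. The model id is a placeholder.
import importlib.util

import torch
from transformers import AutoModelForCausalLM

has_flash_attn = importlib.util.find_spec("flash_attn") is not None
attn_implementation = "flash_attention_2" if has_flash_attn else "sdpa"

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    attn_implementation=attn_implementation,
)
print(f"Loaded with attn_implementation={attn_implementation}")
```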
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28033/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28033/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28032
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28032/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28032/comments
https://api.github.com/repos/huggingface/transformers/issues/28032/events
https://github.com/huggingface/transformers/pull/28032
2,041,396,189
PR_kwDOCUB6oc5h_Gaf
28,032
[`Llava`] Fix llava index errors
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @amyeroberts , I agree it is quite hacky, let me take some time to further investigate and provide a proper fix", "@younesbelkada I saw no other issue except here onwards `batch_index, non_attended_tokens = torch.where(first_layer_past_key_value == 0)`\r\n\r\n`first_layer_past_key_value.shape[1]` is bigger than `extended_attention_mask.shape[1]`, so it should be expected that an index could be present in `non_attended_tokens` that is larger than `extended_attention_mask.shape[1]`.\r\n\r\nThat is why filtering out made sense to me at-least. To avoid this hack, I see that either `first_layer_past_key_value` or `extended_attention_mask` is the root of the problem.\r\n\r\n", "hey - facing a similar issue: it seems to appear when both the inputs and generated outputs are long enough, hence different behaviour for different images. One way to replicate is:\r\n\r\n```\r\nimport requests\r\nfrom PIL import Image\r\n\r\nimport torch\r\nfrom transformers import AutoProcessor, LlavaForConditionalGeneration\r\n\r\nmodel_id = \"llava-hf/llava-1.5-7b-hf\"\r\n\r\nk = 200\r\nuser_prompt = \"Describe the image:?\\n\" * k\r\nprompt = f\"USER: <image>\\n{user_prompt}ASSISTANT:\"\r\nimage_file = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\n\r\nmodel = LlavaForConditionalGeneration.from_pretrained(\r\n model_id, \r\n torch_dtype=torch.float16, \r\n).to(0)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id)\r\n\r\nraw_image = Image.open(requests.get(image_file, stream=True).raw)\r\ninputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)\r\n\r\nprint(k, inputs['input_ids'].size())\r\noutput = model.generate(**inputs, max_new_tokens=200, do_sample=False)\r\nprint(k, output.size())\r\nprint(processor.decode(output[0][-100:], skip_special_tokens=True))\r\n```\r\n\r\nRunning on A10G on current main\r\n\r\n```\r\n!CUDA_LAUNCH_BLOCKING=1 python test_llava.py\r\n\r\n2023-12-18 14:20:27.237501: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-12-18 14:20:27.237560: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-12-18 14:20:27.237608: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-12-18 14:20:27.244438: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\nLoading checkpoint shards: 100%|██████████████████| 3/3 [00:01<00:00, 1.59it/s]\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\n200 torch.Size([1, 1412])\r\n/databricks/python/lib/python3.10/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)\r\n return F.conv2d(input, weight, bias, self.stride,\r\n\r\n\r\n--- added some prints\r\nExtended attention mask: (1, 575); \r\nAttention mask: (1, 1413); \r\nfirst_layer_past_key_value (1, 1987); \r\nTarget seqlen: 
1988; \r\nBatch index: tensor([0], device='cuda:0'); \r\nNon attended tokens: tensor([1881], device='cuda:0')\r\n---\r\n\r\n\r\n../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && \"index out of bounds\"` failed.\r\nTraceback (most recent call last):\r\n File \"\", line 25, in <module>\r\n output = model.generate(**inputs, max_new_tokens=200, do_sample=False)\r\n File \"/databricks/python/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-e7088835-8718-43c0-b531-9c937824ca9c/lib/python3.10/site-packages/transformers/generation/utils.py\", line 1731, in generate\r\n return self.greedy_search(\r\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-e7088835-8718-43c0-b531-9c937824ca9c/lib/python3.10/site-packages/transformers/generation/utils.py\", line 2592, in greedy_search\r\n outputs = self(\r\n File \"/databricks/python/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/local_disk0/.ephemeral_nfs/envs/pythonEnv-e7088835-8718-43c0-b531-9c937824ca9c/lib/python3.10/site-packages/transformers/models/llava/modeling_llava.py\", line 439, in forward\r\n extended_attention_mask[batch_index, non_attended_tokens] = 0\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\nSo if I understand correctly we should be masking the resulting `attention_mask` instead of `extended_attention_mask`?\r\n\r\nAlso, how do these zeros appear in `first_layer_past_key_value` in the first place?\r\n\r\n", "Thanks for the reproducer, I'll try to run some experiments on my end\r\n\r\n> Also, how do these zeros appear in first_layer_past_key_value in the first place?\r\n\r\nBecause the extended hidden states are initialized with all zeros , hence on the first layer they should stay un-touched so the first past kv cache should remain all zeros in the places where you have padd tokens", "@adilzhan-ismailov-depop Thanks for sharing the example. I would also agree that it has something to do with length of the generated output for a certain image. I am not sure if it has anything to do with input length, as the three prompts I tried, the different between prompt length was not much.\r\n\r\n@younesbelkada To answer your questions:\r\n\"Do you use one image per prompt? Are the prompts you use long? Can you somehow reproduce it with an image that you can find on the internet?\"\r\n\r\nYes, I used one image per prompt. Three different prompts were used in different runs. Smallest prompt was 7 words and the biggest one was 17 words. I can try to find more example images on the internet for which the same error is thrown, if needed.", "Hi @gullalc \r\n\r\n> I can try to find more example images on the internet for which the same error is thrown, if needed.\r\n\r\nYes that would be really great, thanks !", "> Because the extended hidden states are initialized with all zeros , hence on the first layer they should stay un-touched so the first past kv cache should remain all zeros in the places where you have padd tokens\r\n\r\nThanks - but I think in the example with batch size of one we shouldn't have any pad tokens? 
We can reproduce the error by adding a padding token to any input manually though:\r\n\r\n```\r\nimport requests\r\nfrom PIL import Image\r\n\r\nimport torch\r\nfrom transformers import AutoProcessor, LlavaForConditionalGeneration\r\n\r\nmodel_id = \"llava-hf/llava-1.5-7b-hf\"\r\n\r\nprompt = f\"USER: <image>\\nDescribe the image:?\\nASSISTANT:\"\r\nimage_file = \"http://images.cocodataset.org/val2017/000000039769.jpg\"\r\n\r\nmodel = LlavaForConditionalGeneration.from_pretrained(\r\n model_id, \r\n torch_dtype=torch.float16, \r\n device_map='auto'\r\n)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id)\r\n\r\ndevice = 'cuda'\r\n\r\nraw_image = Image.open(requests.get(image_file, stream=True).raw)\r\ninputs = processor(prompt, raw_image, return_tensors='pt').to(device)\r\n\r\n# add padding token manually\r\npad_token_id = processor.tokenizer.pad_token_id # 32001\r\ninputs['input_ids'] = torch.hstack([torch.ones((1, 1), dtype=torch.int64, device=device) * pad_token_id, inputs['input_ids']])\r\ninputs['attention_mask'] = torch.hstack([torch.zeros((1, 1), dtype=torch.int64, device=device), inputs['input_ids']])\r\n\r\noutput = model.generate(**inputs, max_new_tokens=200, do_sample=False)\r\n```\r\n\r\nThis fails because since we have an image, padding token is not at the first position, so we fail the first time we create extended_attention_mask and try to index it\r\n\r\nSo why does this happen without padding tokens, and likelihood is higher with longer inputs? This is likely to do with half-precision. If you run this experiment you can see that this happens for float16 much more frequently than for float32:\r\n\r\n```\r\nimport torch\r\nimport altair as alt\r\nimport pandas as pd\r\nfrom tqdm.auto import tqdm\r\n\r\n# Function to run the experiment\r\ndef run_experiment(dtype, num_runs=1000):\r\n lengths = []\r\n for _ in tqdm(range(num_runs), desc=f\"Running with dtype={dtype}\"):\r\n random_tensor = torch.rand((2000, 1), dtype=dtype) # num. of elements in the original example\r\n batch_index, non_attended_tokens = torch.where(random_tensor == 0)\r\n lengths.append(len(batch_index))\r\n return lengths\r\n\r\n# Running the experiments\r\nlengths_float16 = run_experiment(torch.float16)\r\nlengths_float32 = run_experiment(torch.float32)\r\n\r\n# Creating a DataFrame for visualization\r\ndf = pd.DataFrame({\r\n \"Length\": lengths_float16 + lengths_float32,\r\n \"Dtype\": [\"float16\"] * len(lengths_float16) + [\"float32\"] * len(lengths_float32)\r\n})\r\n\r\n# Plotting the results\r\nchart = alt.Chart(df).mark_bar().encode(\r\n x=alt.X('Length:Q', title=\"Num. of zero entries in 2000-element array\"),\r\n y='count(Length):Q',\r\n column='Dtype:N'\r\n).properties(\r\n width=220,\r\n height=200\r\n)\r\n\r\nchart\r\n```\r\n![image](https://github.com/huggingface/transformers/assets/13088690/1aa12bc1-54f2-4b5b-82f5-708115c8bde0)\r\n\r\nIn practical terms I think it's ok, but maybe there is a more elegant way to identify non-attended tokens. The logic that handles the attention mask is still an issue though in case we have real padding tokens in the batch", "Hi :)\r\nThanks for investigating this! 
Just to let you know I'm facing the same issue when using images from the German split of the XM3600 dataset with a batch size > 1.\r\n\r\nHere is some log extract:\r\n```python\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/lightning/pytorch/loops/evaluation_loop.py\", line 134, in run\r\n self._evaluation_step(batch, batch_idx, dataloader_idx, dataloader_iter)\r\n File \"/homeXXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/lightning/pytorch/loops/evaluation_loop.py\", line 391, in _evaluation_step\r\n output = call._call_strategy_hook(trainer, hook_name, *step_args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py\", line 309, in _call_strategy_hook\r\n output = fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/lightning/pytorch/strategies/strategy.py\", line 416, in test_step\r\n return self.lightning_module.test_step(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/gitrepos/lmmm/lmmm/model/mixins/evaluation.py\", line 201, in test_step\r\n return self._in_text_image_out_text(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/gitrepos/lmmm/lmmm/model/mixins/evaluation.py\", line 101, in _in_text_image_out_text\r\n _, pred_text = self.generate(\r\n ^^^^^^^^^^^^^^\r\n File \"/home/XXX/gitrepos/lmmm/lmmm/model/lit_llava.py\", line 73, in generate\r\n generated_ids: torch.Tensor = self.model.generate(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/transformers/generation/utils.py\", line 1718, in generate\r\n return self.greedy_search(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/transformers/generation/utils.py\", line 2579, in greedy_search\r\n outputs = self(\r\n ^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/XXX/miniforge3/envs/lmmm/lib/python3.11/site-packages/transformers/models/llava/modeling_llava.py\", line 428, in forward\r\n extended_attention_mask[batch_index, non_attended_tokens] = 0\r\n ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\nThe fix introduced in this PR fixes the issue though", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28032). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Note I will try to fix the SDPA regression (for users that perform multi-image & multi-prompt such as https://github.com/huggingface/transformers/issues/28184) in a separate PR , meanwhile users can always use the model with `attn_implementation=\"eager\"` to revert the previous behaviour. 
", "Thanks a lot for the review! Tests are passing on my VM which is a 2xT4 with the same pytorch & bnb version as the docker image we use! Merging ! 🚀 \r\nThanks to all contributors for the insightful discussion and the fix! ", "Have the same problem and this is very helpful. Thanks!", "For anyone that wants to use this fix before the next release:\r\n```bash\r\npip install -U git+https://github.com/huggingface/transformers.git\r\n```", "I tried this pr but it still gives me an error.\r\nI think the core issue is \r\n```python\r\nfirst_layer_past_key_value.size(-1) > extended_attention_mask.size(-1)\r\n# May induce index errors\r\n```\r\n, which hasn't been addressed.\r\nBut I really don't quite understand what this piece of code is doing, the fix I think should look like this, but I don't know if that's correct:\r\n```python\r\nextended_attention_mask = torch.ones(\r\n (attention_mask.shape[0], target_seqlen - attention_mask.shape[1]),\r\n dtype=attention_mask.dtype,\r\n device=attention_mask.device,\r\n)\r\n\r\n# Zero-out the places where we don't need to attend\r\nattention_mask = torch.cat((attention_mask, extended_attention_mask), dim=1)\r\nattention_mask[batch_index, non_attended_tokens] = 0\r\n# attention_mask.size() = first_layer_past_key_value.size() + 1\r\n```\r\nor\r\n\r\n```python\r\nextended_attention_mask = torch.ones(\r\n (attention_mask.shape[0], target_seqlen - attention_mask.shape[1]),\r\n dtype=attention_mask.dtype,\r\n device=attention_mask.device,\r\n)\r\n\r\nvalid_indices = non_attended_tokens >= attention_mask.size(-1)\r\nnew_batch_index = batch_index[valid_indices]\r\nnew_non_attended_tokens = non_attended_tokens[valid_indices]\r\n\r\n# Zero-out the places where we don't need to attend\r\nextended_attention_mask[new_batch_index, new_non_attended_tokens - attention_mask.size(-1)] = 0\r\n```\r\nmight be better", "Thanks @NicholasCao !\r\nCan you file a separate issue for this and tag me? If you can also provide a reproducer it would be great. This PR that got merged fixes the issue explained here: https://github.com/huggingface/transformers/pull/28032#issuecomment-1860650043 which hopefully should cover most of the issues related with llava and index errors", "@NicholasCao I managed to repro your issue that seems to happen in the case one passes a custom past key value, which is the case for AWQ. It should be fixed in https://github.com/huggingface/transformers/pull/28239", "I'm not using awq, I'm having this problem when I'm batch inference images, it's harder to reproduce to find the specific image", "@NicholasCao #28239 should solve it, let me know if the PR fixes your issue", "thx, it works", "Thanks @NicholasCao !" ]
1,702
1,703
1,703
CONTRIBUTOR
null
# What does this PR do? Fixes errors on the Hub such as https://huggingface.co./llava-hf/llava-1.5-7b-hf/discussions/6 and https://huggingface.co./llava-hf/bakLlava-v1-hf/discussions/4 I did not manage to repro, as the issue seems to happen only on some specific custom images, however @gullalc managed to find a fix https://huggingface.co./llava-hf/llava-1.5-7b-hf/discussions/6#657a2aa96cd623f45c3c499f which does not affect generation, as confirmed by the slow tests. The fix is simply to mask out the indices that are out of range of the `extended_attention_mask` - the same fix was also added to the VipLlava architecture cc @amyeroberts Fixes https://github.com/huggingface/transformers/issues/28197, Fixes https://github.com/huggingface/transformers/pull/27901
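A minimal sketch of the masking described above — filtering out indices that would fall outside `extended_attention_mask` before zeroing them — shown on toy tensors rather than the actual model code:

```python
# Toy reproduction of the filtering fix: drop the (batch_index, token_index)
# pairs that would index past the end of extended_attention_mask.
import torch

extended_attention_mask = torch.ones(1, 575, dtype=torch.long)

# Pretend torch.where on the first layer's past key values returned these.
batch_index = torch.tensor([0, 0])
non_attended_tokens = torch.tensor([10, 1881])  # 1881 is out of range

valid = non_attended_tokens < extended_attention_mask.shape[-1]
new_batch_index = batch_index[valid]
new_non_attended_tokens = non_attended_tokens[valid]

# Zero-out only the in-range positions; no index error is raised.
extended_attention_mask[new_batch_index, new_non_attended_tokens] = 0
print(extended_attention_mask.sum())  # tensor(574): exactly one position masked
```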
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28032/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28032/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28032", "html_url": "https://github.com/huggingface/transformers/pull/28032", "diff_url": "https://github.com/huggingface/transformers/pull/28032.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28032.patch", "merged_at": 1703263658000 }
https://api.github.com/repos/huggingface/transformers/issues/28031
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28031/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28031/comments
https://api.github.com/repos/huggingface/transformers/issues/28031/events
https://github.com/huggingface/transformers/pull/28031
2,041,348,472
PR_kwDOCUB6oc5h-7_X
28,031
[`core` / `modeling`] Fix training bug with PEFT + GC
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/28023 4.36.0 introduced a bug for users combining gradient checkpointing (GC) with training: `use_cache` should be force-set to `False` here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1008 - but it ends up force-set to `True` during the backward pass for some reason, only in the case where one uses PEFT + GC. The fix is to force-set `use_cache` to `False` before computing `past_key_value_length` here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py#L1042 cc @amyeroberts
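A minimal sketch of the guard being described, using generic names rather than the exact lines of `modeling_llama.py`:

```python
# Sketch: when gradient checkpointing is active during training, use_cache must
# be forced to False *before* the cache length is computed, otherwise a stale
# True value leaks into the backward pass with PEFT + GC.
def resolve_use_cache(use_cache: bool, gradient_checkpointing: bool, training: bool) -> bool:
    if gradient_checkpointing and training and use_cache:
        # In the real model this also emits a logger warning.
        use_cache = False
    return use_cache


past_key_values_length = 0
use_cache = resolve_use_cache(use_cache=True, gradient_checkpointing=True, training=True)
if use_cache:
    past_key_values_length = 16  # would be read from the cache here
print(use_cache, past_key_values_length)  # False 0
```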
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28031/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28031/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28031", "html_url": "https://github.com/huggingface/transformers/pull/28031", "diff_url": "https://github.com/huggingface/transformers/pull/28031.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28031.patch", "merged_at": 1702552785000 }
https://api.github.com/repos/huggingface/transformers/issues/28030
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28030/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28030/comments
https://api.github.com/repos/huggingface/transformers/issues/28030/events
https://github.com/huggingface/transformers/pull/28030
2,041,344,710
PR_kwDOCUB6oc5h-7Kw
28,030
Generate: assisted decoding now uses `generate` for the assistant
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28030). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts the failing test is also failing in the daily CI (i.e. unrelated to this PR, as it doesn't depend on assisted generation), and I can't reproduce it on my end 🤔 ", "@amyeroberts I can't merge due to the failing test (which is also failing on `main` 👀 ). Would you be able to merge?", "(the test is flaky 👉 https://github.com/huggingface/transformers/pull/28035)", "@gante In this case we can merge :) \r\n\r\nedit: note this was discussed offline as the reason for failing tests was identified and confirmed as independent from this PR " ]
1,702
1,702
1,702
MEMBER
null
# What does this PR do? Subset of the original changes in #27979 "Reworks assisted candidate generation to call .generate(), instead of having its own custom generation loop. For most models this is nothing more than a nice abstraction. However, for models with a custom generate() function, this means the assistant model will now make use of it! (🤔 does this mean that DistilWhisper gets better numbers with this refactor?)" The following tests were run locally and are passing: 1. `RUN_SLOW=1 py.test tests/models/whisper/ -k speculative` 2. `py.test tests/ -k test_assisted`
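A minimal sketch of assisted generation from the user's side, which is unchanged by this refactor; the checkpoints below are illustrative choices, not mandated by the PR.

```python
# Sketch: speculative/assisted decoding with a small assistant model that
# shares the main model's tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-1.3b", torch_dtype=torch.float16, device_map="auto")
assistant = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", torch_dtype=torch.float16, device_map="auto")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)

# With this PR, candidate tokens are produced via the assistant's own generate().
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```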
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28030/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28030/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28030", "html_url": "https://github.com/huggingface/transformers/pull/28030", "diff_url": "https://github.com/huggingface/transformers/pull/28030.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28030.patch", "merged_at": 1702560674000 }
https://api.github.com/repos/huggingface/transformers/issues/28029
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28029/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28029/comments
https://api.github.com/repos/huggingface/transformers/issues/28029/events
https://github.com/huggingface/transformers/pull/28029
2,041,292,753
PR_kwDOCUB6oc5h-vzv
28,029
Fix AMD push CI not triggered
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
COLLABORATOR
null
# What does this PR do? Same as #27951 but for AMD Push CI (I didn't realize it has the same issue until today after looking [the run page](https://github.com/huggingface/transformers/actions/workflows/self-push-amd-mi210-caller.yml))
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28029/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28029/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28029", "html_url": "https://github.com/huggingface/transformers/pull/28029", "diff_url": "https://github.com/huggingface/transformers/pull/28029.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28029.patch", "merged_at": 1702554240000 }
https://api.github.com/repos/huggingface/transformers/issues/28028
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28028/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28028/comments
https://api.github.com/repos/huggingface/transformers/issues/28028/events
https://github.com/huggingface/transformers/issues/28028
2,041,162,326
I_kwDOCUB6oc55qapW
28,028
An error occurred when using AWQ Fused modules
{ "login": "moufuyu", "id": 62968285, "node_id": "MDQ6VXNlcjYyOTY4Mjg1", "avatar_url": "https://avatars.githubusercontent.com/u/62968285?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moufuyu", "html_url": "https://github.com/moufuyu", "followers_url": "https://api.github.com/users/moufuyu/followers", "following_url": "https://api.github.com/users/moufuyu/following{/other_user}", "gists_url": "https://api.github.com/users/moufuyu/gists{/gist_id}", "starred_url": "https://api.github.com/users/moufuyu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moufuyu/subscriptions", "organizations_url": "https://api.github.com/users/moufuyu/orgs", "repos_url": "https://api.github.com/users/moufuyu/repos", "events_url": "https://api.github.com/users/moufuyu/events{/privacy}", "received_events_url": "https://api.github.com/users/moufuyu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @SunMarc ", "Hi @moufuyu \r\nThanks for the issue and for your interest in using this feature! \r\nCurrently autoawq on pypi is broken with transformers 4.36.0, I fixed the issue on AutoAWQ side with https://github.com/casper-hansen/AutoAWQ/pull/244 but there is no release yet. To fix your issue you can either\r\n- downgrade transformers to 4.35.2 `pip install -U transformers==4.35.0`\r\n- install auto-awq from main branch", "cc @casper-hansen do you think it would make sense to do a patch release to include https://github.com/casper-hansen/AutoAWQ/pull/244 ? 🙏 Currently users cannot perform fused modules + auto-awq, they either need to compile autoawq from source or use 4.35.2", "Users should be on 4.35.2 if they want to use AutoAWQ. Unfortunately, transformers broke AutoAWQ quite deeply with the latest changes. AutoAWQ was broken in 4/5 of the last minor releases of transformers, so bear with us while we fix it. At the same time, Mixtral was released which is a time-consuming task to support. So users will have to wait until the next version of AutoAWQ unless transformers could add backward compatibility.", "We'll make a patch as soon as the fixes are merged! 🤗 ", "Thank you for your comments and your efforts to fix this.\r\n\r\nI ran again with transformers==4.35.2 and got another error.\r\nSince I am using the AWQ model (\"TheBloke/Mistral-7B-OpenOrca-AWQ\") I guess this error is not expected.\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[7], line 13\r\n 5 torch_device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n 7 quantization_config = AwqConfig(\r\n 8 bits=4,\r\n 9 do_fuse=True,\r\n 10 fuse_max_seq_len=512,\r\n 11 )\r\n---> 13 model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0)\r\n 14 tokenizer = AutoTokenizer.from_pretrained(model_id)\r\n 16 streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:566, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 564 elif type(config) in cls._model_mapping.keys():\r\n 565 model_class = _get_model_class(config, cls._model_mapping)\r\n--> 566 return model_class.from_pretrained(\r\n 567 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs\r\n 568 )\r\n 569 raise ValueError(\r\n 570 f\"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\\n\"\r\n 571 f\"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}.\"\r\n 572 )\r\n\r\nFile ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/modeling_utils.py:2685, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)\r\n 2680 quantization_method_from_args = getattr(\r\n 2681 quantization_config, \"quant_method\", QuantizationMethod.BITS_AND_BYTES\r\n 2682 )\r\n 2684 if quantization_method_from_args == QuantizationMethod.AWQ:\r\n-> 2685 raise ValueError(\r\n 2686 \"You cannot pass an `AwqConfig` when loading a model as you can only use AWQ models\"\r\n 2687 \" for inference. 
To quantize transformers models with AWQ algorithm, please refer to our\"\r\n 2688 \" quantization docs: https://huggingface.co./docs/transformers/main_classes/quantization \"\r\n 2689 )\r\n 2691 if quantization_config is None and (load_in_8bit or load_in_4bit):\r\n 2692 quantization_method_from_args = QuantizationMethod.BITS_AND_BYTES\r\n\r\nValueError: You cannot pass an `AwqConfig` when loading a model as you can only use AWQ models for inference. To quantize transformers models with AWQ algorithm, please refer to our quantization docs: https://huggingface.co./docs/transformers/main_classes/quantization \r\n```", "Hi @moufuyu \r\nThanks very much for your patience, and sorry I was wrong, actually fused modules was introduced after 4.35.2\r\nCan you try with:\r\n```bash\r\npip install -U git+https://github.com/huggingface/transformers.git@fdb85be40fa255c015819e711c15117c2aaa5101\r\n```\r\nOnce autoawq makes a release you'll be able to switch to transformers>=4.36.0", "I have confirmed that the fused module is working properly.\r\nThank you very much for your kind attention.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Closing this as it seems resolved! Now everything should work fine under latest autoawq package together with transformers main" ]
1,702
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.36.0 - `autoawq` version: 0.1.7 - Platform: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.13 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Code snippet (I am using the inference code described in https://github.com/huggingface/transformers/pull/27411): ```Python import torch from transformers import AutoTokenizer, AutoModelForCausalLM, AwqConfig, TextStreamer model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ" torch_device = "cuda" if torch.cuda.is_available() else "cpu" quantization_config = AwqConfig( bits=4, do_fuse=True, fuse_max_seq_len=512, ) model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=quantization_config).to(0) tokenizer = AutoTokenizer.from_pretrained(model_id) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) prompt_template = """\ <|im_start|>system You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokenizer.pad_token = tokenizer.eos_token inputs = tokenizer([prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt)], return_tensors="pt", padding=True).to(0) outputs = model.generate(**inputs, max_new_tokens=512) print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ``` Error messages: ``` You passed `quantization_config` to `from_pretrained` but the model you're loading already has a `quantization_config` attribute and has already quantized weights. However, loading attributes (e.g. ['fuse_max_seq_len', 'modules_to_fuse', 'do_fuse']) will be overwritten with the one you passed to `from_pretrained`. The rest will be ignored. You have loaded an AWQ model on CPU and have a CUDA device available, make sure to set your model on a GPU device in order to run your model. Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. Setting `pad_token_id` to `eos_token_id`:32000 for open-end generation. 
--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[5], line 32 28 tokenizer.pad_token = tokenizer.eos_token 30 inputs = tokenizer([prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt), prompt_template.format(prompt=prompt)], return_tensors="pt", padding=True).to(0) ---> 32 outputs = model.generate(**inputs, max_new_tokens=512) 33 print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs) 112 @functools.wraps(func) 113 def decorate_context(*args, **kwargs): 114 with ctx_factory(): --> 115 return func(*args, **kwargs) File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/generation/utils.py:1718, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs) 1701 return self.assisted_decoding( 1702 input_ids, 1703 assistant_model=assistant_model, (...) 1714 **model_kwargs, 1715 ) 1716 if generation_mode == GenerationMode.GREEDY_SEARCH: 1717 # 11. run greedy search -> 1718 return self.greedy_search( 1719 input_ids, 1720 logits_processor=logits_processor, 1721 stopping_criteria=stopping_criteria, 1722 pad_token_id=generation_config.pad_token_id, 1723 eos_token_id=generation_config.eos_token_id, 1724 output_scores=generation_config.output_scores, 1725 return_dict_in_generate=generation_config.return_dict_in_generate, 1726 synced_gpus=synced_gpus, 1727 streamer=streamer, 1728 **model_kwargs, 1729 ) 1731 elif generation_mode == GenerationMode.CONTRASTIVE_SEARCH: 1732 if not model_kwargs["use_cache"]: File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/generation/utils.py:2579, in GenerationMixin.greedy_search(self, input_ids, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs) 2576 model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) 2578 # forward pass to get next token -> 2579 outputs = self( 2580 **model_inputs, 2581 return_dict=True, 2582 output_attentions=output_attentions, 2583 output_hidden_states=output_hidden_states, 2584 ) 2586 if synced_gpus and this_peer_finished: 2587 continue # don't waste resources running the code we don't need File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs) 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1517 else: -> 1518 return self._call_impl(*args, **kwargs) File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs) 1522 # If we don't have any hooks, we want to skip the rest of the logic in 1523 # this function, and just call forward. 
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1525 or _global_backward_pre_hooks or _global_backward_hooks 1526 or _global_forward_hooks or _global_forward_pre_hooks): -> 1527 return forward_call(*args, **kwargs) 1529 try: 1530 result = None File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:1044, in MistralForCausalLM.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1041 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 1043 # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) -> 1044 outputs = self.model( 1045 input_ids=input_ids, 1046 attention_mask=attention_mask, 1047 position_ids=position_ids, 1048 past_key_values=past_key_values, 1049 inputs_embeds=inputs_embeds, 1050 use_cache=use_cache, 1051 output_attentions=output_attentions, 1052 output_hidden_states=output_hidden_states, 1053 return_dict=return_dict, 1054 ) 1056 hidden_states = outputs[0] 1057 logits = self.lm_head(hidden_states) File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs) 1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc] 1517 else: -> 1518 return self._call_impl(*args, **kwargs) File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs) 1522 # If we don't have any hooks, we want to skip the rest of the logic in 1523 # this function, and just call forward. 1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks 1525 or _global_backward_pre_hooks or _global_backward_hooks 1526 or _global_forward_hooks or _global_forward_pre_hooks): -> 1527 return forward_call(*args, **kwargs) 1529 try: 1530 result = None File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py:954, in MistralModel.forward(self, input_ids, attention_mask, position_ids, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 952 next_cache = None 953 if use_cache: --> 954 next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache 956 if not return_dict: 957 return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None) AttributeError: 'list' object has no attribute 'to_legacy_cache' ``` ### Expected behavior Expected behavior is for the Fused Modules of the AWQ model to function without errors.
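A minimal sketch of an interim workaround, under stated assumptions (transformers 4.36 with `autoawq` installed and a CUDA device): load the pre-quantized checkpoint without requesting fused modules, so batched generation goes through the standard attention and KV-cache path. This is not the fix for the fused-module bug itself, only a way to keep batched generation running while it is investigated.

```python
# Workaround sketch (assumption, not the official fix): skip AwqConfig(do_fuse=True)
# so the model keeps its regular, non-fused attention modules and cache handling.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Mistral-7B-OpenOrca-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

# The checkpoint already carries its own quantization_config, so nothing extra is passed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda:0")

prompts = ["Hello, how are you?"] * 3
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(0)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```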
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28028/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28028/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28027
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28027/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28027/comments
https://api.github.com/repos/huggingface/transformers/issues/28027/events
https://github.com/huggingface/transformers/issues/28027
2,041,012,344
I_kwDOCUB6oc55p2B4
28,027
4.36 transformers gets `_save_checkpoint` wrong with deepspeed; works with previous versions
{ "login": "tszdanger", "id": 35394351, "node_id": "MDQ6VXNlcjM1Mzk0MzUx", "avatar_url": "https://avatars.githubusercontent.com/u/35394351?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tszdanger", "html_url": "https://github.com/tszdanger", "followers_url": "https://api.github.com/users/tszdanger/followers", "following_url": "https://api.github.com/users/tszdanger/following{/other_user}", "gists_url": "https://api.github.com/users/tszdanger/gists{/gist_id}", "starred_url": "https://api.github.com/users/tszdanger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tszdanger/subscriptions", "organizations_url": "https://api.github.com/users/tszdanger/orgs", "repos_url": "https://api.github.com/users/tszdanger/repos", "events_url": "https://api.github.com/users/tszdanger/events{/privacy}", "received_events_url": "https://api.github.com/users/tszdanger/received_events", "type": "User", "site_admin": false }
[ { "id": 5616426447, "node_id": "LA_kwDOCUB6oc8AAAABTsPdzw", "url": "https://api.github.com/repos/huggingface/transformers/labels/solved", "name": "solved", "color": "B1D6DC", "default": false, "description": "" } ]
open
false
null
[]
[ "See this PR: https://github.com/huggingface/transformers/pull/28009\r\nIt can help overcome this issue.", "Glad to see this issue has been fixed :) ", "> See this PR: #28009 It can help overcome this issue.\r\n\r\nit not worked!", "@jiezhangGt could you provide a reproduer / make sure you are using the latest version of main? " ]
1,702
1,707
null
NONE
null
### System Info transformers: 4.36.0 python: 3.10 deepspeed: 0.9.4 Traceback (most recent call last): File "/home/xxxx/xxxx/src/./xxxx.py", line 363, in <module> main(args) File "/home/xxxx/xxxx/src/./xxxx.py", line 352, in main run_training(args, train_dataset, eval_dataset, len(tokenizer)) File "/home/xxxx/xxxx/src/./xxxx.py", line 339, in run_training trainer.train(resume_from_checkpoint=args.resume_from_checkpoint) File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train return inner_training_loop( File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 1914, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 2274, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/transformers/trainer.py", line 2383, in _save_checkpoint os.rename(staging_output_dir, output_dir) FileNotFoundError: [Errno 2] No such file or directory: '/home/xxxx/tmp-checkpoint-100' -> '/home/xxxx/checkpoint-100' **When Using deepspeed with Trainer, 4.36 just have the wrong code, see below:** https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2352C13-L2352C83 You may need a PR for this. ### Who can help? @muellerzr @pacman100 Like below. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction quite simple, just with the simplest code example. trainer = Trainer( model=model, args=training_args, train_dataset=train_data, eval_dataset=val_data, ) print("Training...") trainer.train(resume_from_checkpoint=args.resume_from_checkpoint) ### Expected behavior no error would be fine
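The failure above comes from multiple ranks racing to `os.rename` the same staging directory. A minimal sketch of the kind of guard that avoids it is below; it is illustrative only, the helper name is made up, and the actual fix is the one in the linked PR.

```python
# Illustrative sketch, not the actual Trainer code: only one process performs the
# rename, and a staging directory already moved by another rank is tolerated.
import os

def promote_staging_checkpoint(staging_output_dir: str, output_dir: str, is_world_process_zero: bool) -> None:
    if not is_world_process_zero:
        return  # under DeepSpeed/DDP, non-zero ranks must not touch the directory
    if os.path.exists(staging_output_dir) and not os.path.exists(output_dir):
        os.rename(staging_output_dir, output_dir)
```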
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28027/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28027/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28026
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28026/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28026/comments
https://api.github.com/repos/huggingface/transformers/issues/28026/events
https://github.com/huggingface/transformers/pull/28026
2,040,995,464
PR_kwDOCUB6oc5h9wEN
28,026
[`SeamlessM4TTokenizer`] Safe import
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
COLLABORATOR
null
# What does this PR do? Safe import for SeamlessM4T. Stumbled upon this while doing the patch.
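For context, the usual "safe import" pattern in transformers guards an optional dependency so importing the package does not fail when, for example, sentencepiece is missing. A hedged sketch of that pattern follows; the exact diff in this PR may differ.

```python
# Sketch of the guarded-import pattern (may not match the PR's exact diff).
from transformers.utils import OptionalDependencyNotAvailable, is_sentencepiece_available

try:
    if not is_sentencepiece_available():
        raise OptionalDependencyNotAvailable()
except OptionalDependencyNotAvailable:
    SeamlessM4TTokenizer = None  # the slow tokenizer simply isn't exposed without sentencepiece
else:
    from transformers.models.seamless_m4t import SeamlessM4TTokenizer
```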
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28026/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28026/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28026", "html_url": "https://github.com/huggingface/transformers/pull/28026", "diff_url": "https://github.com/huggingface/transformers/pull/28026.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28026.patch", "merged_at": 1702539970000 }
https://api.github.com/repos/huggingface/transformers/issues/28025
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28025/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28025/comments
https://api.github.com/repos/huggingface/transformers/issues/28025/events
https://github.com/huggingface/transformers/issues/28025
2,040,894,624
I_kwDOCUB6oc55pZSg
28,025
How to combine two pretrained models in huggingface transformers?
{ "login": "rangehow", "id": 88258534, "node_id": "MDQ6VXNlcjg4MjU4NTM0", "avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rangehow", "html_url": "https://github.com/rangehow", "followers_url": "https://api.github.com/users/rangehow/followers", "following_url": "https://api.github.com/users/rangehow/following{/other_user}", "gists_url": "https://api.github.com/users/rangehow/gists{/gist_id}", "starred_url": "https://api.github.com/users/rangehow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rangehow/subscriptions", "organizations_url": "https://api.github.com/users/rangehow/orgs", "repos_url": "https://api.github.com/users/rangehow/repos", "events_url": "https://api.github.com/users/rangehow/events{/privacy}", "received_events_url": "https://api.github.com/users/rangehow/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @rangehow, thanks for raising this issue! \r\n\r\nAre you sure the weights save in `dir_to_bert_ckpt` aren't just gaussian initialized with bias at 0 and that's what's being loaded in? If I make my own dummy model, following the pattern in the example, the weights for the `self.bert` layer are loaded in, as expected, from the checkpoint and the weights for `self.llama` and `self.lm_head` are randomly initialized, as expected.\r\n\r\nRunning: \r\n\r\n```py\r\nfrom transformers import BertModel, LlamaModel, LlamaConfig, LlamaPreTrainedModel\r\nfrom torch import nn\r\n\r\nclass Foo(LlamaPreTrainedModel):\r\n def __init_(self, config, *model_args, **model_kwargs):\r\n super().__init__(config)\r\n self.llama = LlamaModel(config)\r\n self.bert = BertModel.from_pretrained('bert-base-uncased')\r\n self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)\r\n self.post_init()\r\n\r\nconfig = LlamaConfig()\r\nmodel_0 = Foo(config)\r\nmodel_1 = Foo(config)\r\n\r\nsame_params = set()\r\ndifferent_params = set()\r\nfor ((name_0, param_0), (name_1, param_1)) in zip(model_0.named_parameters(), model_1.named_parameters()):\r\n assert name_0 == name_1\r\n if not (param_0 == param_1).all():\r\n different_params.add(name_0)\r\n else:\r\n same_params.add(name_0)\r\n``` \r\n\r\n<details><summary>Same params</summary>{'bert.embeddings.LayerNorm.bias',\r\n 'bert.embeddings.LayerNorm.weight',\r\n 'bert.embeddings.position_embeddings.weight',\r\n 'bert.embeddings.token_type_embeddings.weight',\r\n 'bert.embeddings.word_embeddings.weight',\r\n 'bert.encoder.layer.0.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.0.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.0.attention.output.dense.bias',\r\n 'bert.encoder.layer.0.attention.output.dense.weight',\r\n 'bert.encoder.layer.0.attention.self.key.bias',\r\n 'bert.encoder.layer.0.attention.self.key.weight',\r\n 'bert.encoder.layer.0.attention.self.query.bias',\r\n 'bert.encoder.layer.0.attention.self.query.weight',\r\n 'bert.encoder.layer.0.attention.self.value.bias',\r\n 'bert.encoder.layer.0.attention.self.value.weight',\r\n 'bert.encoder.layer.0.intermediate.dense.bias',\r\n 'bert.encoder.layer.0.intermediate.dense.weight',\r\n 'bert.encoder.layer.0.output.LayerNorm.bias',\r\n 'bert.encoder.layer.0.output.LayerNorm.weight',\r\n 'bert.encoder.layer.0.output.dense.bias',\r\n 'bert.encoder.layer.0.output.dense.weight',\r\n 'bert.encoder.layer.1.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.1.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.1.attention.output.dense.bias',\r\n 'bert.encoder.layer.1.attention.output.dense.weight',\r\n 'bert.encoder.layer.1.attention.self.key.bias',\r\n 'bert.encoder.layer.1.attention.self.key.weight',\r\n 'bert.encoder.layer.1.attention.self.query.bias',\r\n 'bert.encoder.layer.1.attention.self.query.weight',\r\n 'bert.encoder.layer.1.attention.self.value.bias',\r\n 'bert.encoder.layer.1.attention.self.value.weight',\r\n 'bert.encoder.layer.1.intermediate.dense.bias',\r\n 'bert.encoder.layer.1.intermediate.dense.weight',\r\n 'bert.encoder.layer.1.output.LayerNorm.bias',\r\n 'bert.encoder.layer.1.output.LayerNorm.weight',\r\n 'bert.encoder.layer.1.output.dense.bias',\r\n 'bert.encoder.layer.1.output.dense.weight',\r\n 'bert.encoder.layer.10.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.10.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.10.attention.output.dense.bias',\r\n 'bert.encoder.layer.10.attention.output.dense.weight',\r\n 
'bert.encoder.layer.10.attention.self.key.bias',\r\n 'bert.encoder.layer.10.attention.self.key.weight',\r\n 'bert.encoder.layer.10.attention.self.query.bias',\r\n 'bert.encoder.layer.10.attention.self.query.weight',\r\n 'bert.encoder.layer.10.attention.self.value.bias',\r\n 'bert.encoder.layer.10.attention.self.value.weight',\r\n 'bert.encoder.layer.10.intermediate.dense.bias',\r\n 'bert.encoder.layer.10.intermediate.dense.weight',\r\n 'bert.encoder.layer.10.output.LayerNorm.bias',\r\n 'bert.encoder.layer.10.output.LayerNorm.weight',\r\n 'bert.encoder.layer.10.output.dense.bias',\r\n 'bert.encoder.layer.10.output.dense.weight',\r\n 'bert.encoder.layer.11.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.11.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.11.attention.output.dense.bias',\r\n 'bert.encoder.layer.11.attention.output.dense.weight',\r\n 'bert.encoder.layer.11.attention.self.key.bias',\r\n 'bert.encoder.layer.11.attention.self.key.weight',\r\n 'bert.encoder.layer.11.attention.self.query.bias',\r\n 'bert.encoder.layer.11.attention.self.query.weight',\r\n 'bert.encoder.layer.11.attention.self.value.bias',\r\n 'bert.encoder.layer.11.attention.self.value.weight',\r\n 'bert.encoder.layer.11.intermediate.dense.bias',\r\n 'bert.encoder.layer.11.intermediate.dense.weight',\r\n 'bert.encoder.layer.11.output.LayerNorm.bias',\r\n 'bert.encoder.layer.11.output.LayerNorm.weight',\r\n 'bert.encoder.layer.11.output.dense.bias',\r\n 'bert.encoder.layer.11.output.dense.weight',\r\n 'bert.encoder.layer.2.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.2.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.2.attention.output.dense.bias',\r\n 'bert.encoder.layer.2.attention.output.dense.weight',\r\n 'bert.encoder.layer.2.attention.self.key.bias',\r\n 'bert.encoder.layer.2.attention.self.key.weight',\r\n 'bert.encoder.layer.2.attention.self.query.bias',\r\n 'bert.encoder.layer.2.attention.self.query.weight',\r\n 'bert.encoder.layer.2.attention.self.value.bias',\r\n 'bert.encoder.layer.2.attention.self.value.weight',\r\n 'bert.encoder.layer.2.intermediate.dense.bias',\r\n 'bert.encoder.layer.2.intermediate.dense.weight',\r\n 'bert.encoder.layer.2.output.LayerNorm.bias',\r\n 'bert.encoder.layer.2.output.LayerNorm.weight',\r\n 'bert.encoder.layer.2.output.dense.bias',\r\n 'bert.encoder.layer.2.output.dense.weight',\r\n 'bert.encoder.layer.3.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.3.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.3.attention.output.dense.bias',\r\n 'bert.encoder.layer.3.attention.output.dense.weight',\r\n 'bert.encoder.layer.3.attention.self.key.bias',\r\n 'bert.encoder.layer.3.attention.self.key.weight',\r\n 'bert.encoder.layer.3.attention.self.query.bias',\r\n 'bert.encoder.layer.3.attention.self.query.weight',\r\n 'bert.encoder.layer.3.attention.self.value.bias',\r\n 'bert.encoder.layer.3.attention.self.value.weight',\r\n 'bert.encoder.layer.3.intermediate.dense.bias',\r\n 'bert.encoder.layer.3.intermediate.dense.weight',\r\n 'bert.encoder.layer.3.output.LayerNorm.bias',\r\n 'bert.encoder.layer.3.output.LayerNorm.weight',\r\n 'bert.encoder.layer.3.output.dense.bias',\r\n 'bert.encoder.layer.3.output.dense.weight',\r\n 'bert.encoder.layer.4.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.4.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.4.attention.output.dense.bias',\r\n 'bert.encoder.layer.4.attention.output.dense.weight',\r\n 'bert.encoder.layer.4.attention.self.key.bias',\r\n 
'bert.encoder.layer.4.attention.self.key.weight',\r\n 'bert.encoder.layer.4.attention.self.query.bias',\r\n 'bert.encoder.layer.4.attention.self.query.weight',\r\n 'bert.encoder.layer.4.attention.self.value.bias',\r\n 'bert.encoder.layer.4.attention.self.value.weight',\r\n 'bert.encoder.layer.4.intermediate.dense.bias',\r\n 'bert.encoder.layer.4.intermediate.dense.weight',\r\n 'bert.encoder.layer.4.output.LayerNorm.bias',\r\n 'bert.encoder.layer.4.output.LayerNorm.weight',\r\n 'bert.encoder.layer.4.output.dense.bias',\r\n 'bert.encoder.layer.4.output.dense.weight',\r\n 'bert.encoder.layer.5.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.5.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.5.attention.output.dense.bias',\r\n 'bert.encoder.layer.5.attention.output.dense.weight',\r\n 'bert.encoder.layer.5.attention.self.key.bias',\r\n 'bert.encoder.layer.5.attention.self.key.weight',\r\n 'bert.encoder.layer.5.attention.self.query.bias',\r\n 'bert.encoder.layer.5.attention.self.query.weight',\r\n 'bert.encoder.layer.5.attention.self.value.bias',\r\n 'bert.encoder.layer.5.attention.self.value.weight',\r\n 'bert.encoder.layer.5.intermediate.dense.bias',\r\n 'bert.encoder.layer.5.intermediate.dense.weight',\r\n 'bert.encoder.layer.5.output.LayerNorm.bias',\r\n 'bert.encoder.layer.5.output.LayerNorm.weight',\r\n 'bert.encoder.layer.5.output.dense.bias',\r\n 'bert.encoder.layer.5.output.dense.weight',\r\n 'bert.encoder.layer.6.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.6.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.6.attention.output.dense.bias',\r\n 'bert.encoder.layer.6.attention.output.dense.weight',\r\n 'bert.encoder.layer.6.attention.self.key.bias',\r\n 'bert.encoder.layer.6.attention.self.key.weight',\r\n 'bert.encoder.layer.6.attention.self.query.bias',\r\n 'bert.encoder.layer.6.attention.self.query.weight',\r\n 'bert.encoder.layer.6.attention.self.value.bias',\r\n 'bert.encoder.layer.6.attention.self.value.weight',\r\n 'bert.encoder.layer.6.intermediate.dense.bias',\r\n 'bert.encoder.layer.6.intermediate.dense.weight',\r\n 'bert.encoder.layer.6.output.LayerNorm.bias',\r\n 'bert.encoder.layer.6.output.LayerNorm.weight',\r\n 'bert.encoder.layer.6.output.dense.bias',\r\n 'bert.encoder.layer.6.output.dense.weight',\r\n 'bert.encoder.layer.7.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.7.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.7.attention.output.dense.bias',\r\n 'bert.encoder.layer.7.attention.output.dense.weight',\r\n 'bert.encoder.layer.7.attention.self.key.bias',\r\n 'bert.encoder.layer.7.attention.self.key.weight',\r\n 'bert.encoder.layer.7.attention.self.query.bias',\r\n 'bert.encoder.layer.7.attention.self.query.weight',\r\n 'bert.encoder.layer.7.attention.self.value.bias',\r\n 'bert.encoder.layer.7.attention.self.value.weight',\r\n 'bert.encoder.layer.7.intermediate.dense.bias',\r\n 'bert.encoder.layer.7.intermediate.dense.weight',\r\n 'bert.encoder.layer.7.output.LayerNorm.bias',\r\n 'bert.encoder.layer.7.output.LayerNorm.weight',\r\n 'bert.encoder.layer.7.output.dense.bias',\r\n 'bert.encoder.layer.7.output.dense.weight',\r\n 'bert.encoder.layer.8.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.8.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.8.attention.output.dense.bias',\r\n 'bert.encoder.layer.8.attention.output.dense.weight',\r\n 'bert.encoder.layer.8.attention.self.key.bias',\r\n 'bert.encoder.layer.8.attention.self.key.weight',\r\n 
'bert.encoder.layer.8.attention.self.query.bias',\r\n 'bert.encoder.layer.8.attention.self.query.weight',\r\n 'bert.encoder.layer.8.attention.self.value.bias',\r\n 'bert.encoder.layer.8.attention.self.value.weight',\r\n 'bert.encoder.layer.8.intermediate.dense.bias',\r\n 'bert.encoder.layer.8.intermediate.dense.weight',\r\n 'bert.encoder.layer.8.output.LayerNorm.bias',\r\n 'bert.encoder.layer.8.output.LayerNorm.weight',\r\n 'bert.encoder.layer.8.output.dense.bias',\r\n 'bert.encoder.layer.8.output.dense.weight',\r\n 'bert.encoder.layer.9.attention.output.LayerNorm.bias',\r\n 'bert.encoder.layer.9.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.9.attention.output.dense.bias',\r\n 'bert.encoder.layer.9.attention.output.dense.weight',\r\n 'bert.encoder.layer.9.attention.self.key.bias',\r\n 'bert.encoder.layer.9.attention.self.key.weight',\r\n 'bert.encoder.layer.9.attention.self.query.bias',\r\n 'bert.encoder.layer.9.attention.self.query.weight',\r\n 'bert.encoder.layer.9.attention.self.value.bias',\r\n 'bert.encoder.layer.9.attention.self.value.weight',\r\n 'bert.encoder.layer.9.intermediate.dense.bias',\r\n 'bert.encoder.layer.9.intermediate.dense.weight',\r\n 'bert.encoder.layer.9.output.LayerNorm.bias',\r\n 'bert.encoder.layer.9.output.LayerNorm.weight',\r\n 'bert.encoder.layer.9.output.dense.bias',\r\n 'bert.encoder.layer.9.output.dense.weight',\r\n 'bert.pooler.dense.bias',\r\n 'bert.pooler.dense.weight',\r\n 'llama.layers.0.input_layernorm.weight',\r\n 'llama.layers.0.post_attention_layernorm.weight',\r\n 'llama.layers.1.input_layernorm.weight',\r\n 'llama.layers.1.post_attention_layernorm.weight',\r\n 'llama.layers.10.input_layernorm.weight',\r\n 'llama.layers.10.post_attention_layernorm.weight',\r\n 'llama.layers.11.input_layernorm.weight',\r\n 'llama.layers.11.post_attention_layernorm.weight',\r\n 'llama.layers.12.input_layernorm.weight',\r\n 'llama.layers.12.post_attention_layernorm.weight',\r\n 'llama.layers.13.input_layernorm.weight',\r\n 'llama.layers.13.post_attention_layernorm.weight',\r\n 'llama.layers.14.input_layernorm.weight',\r\n 'llama.layers.14.post_attention_layernorm.weight',\r\n 'llama.layers.15.input_layernorm.weight',\r\n 'llama.layers.15.post_attention_layernorm.weight',\r\n 'llama.layers.16.input_layernorm.weight',\r\n 'llama.layers.16.post_attention_layernorm.weight',\r\n 'llama.layers.17.input_layernorm.weight',\r\n 'llama.layers.17.post_attention_layernorm.weight',\r\n 'llama.layers.18.input_layernorm.weight',\r\n 'llama.layers.18.post_attention_layernorm.weight',\r\n 'llama.layers.19.input_layernorm.weight',\r\n 'llama.layers.19.post_attention_layernorm.weight',\r\n 'llama.layers.2.input_layernorm.weight',\r\n 'llama.layers.2.post_attention_layernorm.weight',\r\n 'llama.layers.20.input_layernorm.weight',\r\n 'llama.layers.20.post_attention_layernorm.weight',\r\n 'llama.layers.21.input_layernorm.weight',\r\n 'llama.layers.21.post_attention_layernorm.weight',\r\n 'llama.layers.22.input_layernorm.weight',\r\n 'llama.layers.22.post_attention_layernorm.weight',\r\n 'llama.layers.23.input_layernorm.weight',\r\n 'llama.layers.23.post_attention_layernorm.weight',\r\n 'llama.layers.24.input_layernorm.weight',\r\n 'llama.layers.24.post_attention_layernorm.weight',\r\n 'llama.layers.25.input_layernorm.weight',\r\n 'llama.layers.25.post_attention_layernorm.weight',\r\n 'llama.layers.26.input_layernorm.weight',\r\n 'llama.layers.26.post_attention_layernorm.weight',\r\n 'llama.layers.27.input_layernorm.weight',\r\n 
'llama.layers.27.post_attention_layernorm.weight',\r\n 'llama.layers.28.input_layernorm.weight',\r\n 'llama.layers.28.post_attention_layernorm.weight',\r\n 'llama.layers.29.input_layernorm.weight',\r\n 'llama.layers.29.post_attention_layernorm.weight',\r\n 'llama.layers.3.input_layernorm.weight',\r\n 'llama.layers.3.post_attention_layernorm.weight',\r\n 'llama.layers.30.input_layernorm.weight',\r\n 'llama.layers.30.post_attention_layernorm.weight',\r\n 'llama.layers.31.input_layernorm.weight',\r\n 'llama.layers.31.post_attention_layernorm.weight',\r\n 'llama.layers.4.input_layernorm.weight',\r\n 'llama.layers.4.post_attention_layernorm.weight',\r\n 'llama.layers.5.input_layernorm.weight',\r\n 'llama.layers.5.post_attention_layernorm.weight',\r\n 'llama.layers.6.input_layernorm.weight',\r\n 'llama.layers.6.post_attention_layernorm.weight',\r\n 'llama.layers.7.input_layernorm.weight',\r\n 'llama.layers.7.post_attention_layernorm.weight',\r\n 'llama.layers.8.input_layernorm.weight',\r\n 'llama.layers.8.post_attention_layernorm.weight',\r\n 'llama.layers.9.input_layernorm.weight',\r\n 'llama.layers.9.post_attention_layernorm.weight',\r\n 'llama.norm.weight'}</details>\r\n\r\n\r\n<details><summary>Different params</summary>{'llama.embed_tokens.weight',\r\n 'llama.layers.0.mlp.down_proj.weight',\r\n 'llama.layers.0.mlp.gate_proj.weight',\r\n 'llama.layers.0.mlp.up_proj.weight',\r\n 'llama.layers.0.self_attn.k_proj.weight',\r\n 'llama.layers.0.self_attn.o_proj.weight',\r\n 'llama.layers.0.self_attn.q_proj.weight',\r\n 'llama.layers.0.self_attn.v_proj.weight',\r\n 'llama.layers.1.mlp.down_proj.weight',\r\n 'llama.layers.1.mlp.gate_proj.weight',\r\n 'llama.layers.1.mlp.up_proj.weight',\r\n 'llama.layers.1.self_attn.k_proj.weight',\r\n 'llama.layers.1.self_attn.o_proj.weight',\r\n 'llama.layers.1.self_attn.q_proj.weight',\r\n 'llama.layers.1.self_attn.v_proj.weight',\r\n 'llama.layers.10.mlp.down_proj.weight',\r\n 'llama.layers.10.mlp.gate_proj.weight',\r\n 'llama.layers.10.mlp.up_proj.weight',\r\n 'llama.layers.10.self_attn.k_proj.weight',\r\n 'llama.layers.10.self_attn.o_proj.weight',\r\n 'llama.layers.10.self_attn.q_proj.weight',\r\n 'llama.layers.10.self_attn.v_proj.weight',\r\n 'llama.layers.11.mlp.down_proj.weight',\r\n 'llama.layers.11.mlp.gate_proj.weight',\r\n 'llama.layers.11.mlp.up_proj.weight',\r\n 'llama.layers.11.self_attn.k_proj.weight',\r\n 'llama.layers.11.self_attn.o_proj.weight',\r\n 'llama.layers.11.self_attn.q_proj.weight',\r\n 'llama.layers.11.self_attn.v_proj.weight',\r\n 'llama.layers.12.mlp.down_proj.weight',\r\n 'llama.layers.12.mlp.gate_proj.weight',\r\n 'llama.layers.12.mlp.up_proj.weight',\r\n 'llama.layers.12.self_attn.k_proj.weight',\r\n 'llama.layers.12.self_attn.o_proj.weight',\r\n 'llama.layers.12.self_attn.q_proj.weight',\r\n 'llama.layers.12.self_attn.v_proj.weight',\r\n 'llama.layers.13.mlp.down_proj.weight',\r\n 'llama.layers.13.mlp.gate_proj.weight',\r\n 'llama.layers.13.mlp.up_proj.weight',\r\n 'llama.layers.13.self_attn.k_proj.weight',\r\n 'llama.layers.13.self_attn.o_proj.weight',\r\n 'llama.layers.13.self_attn.q_proj.weight',\r\n 'llama.layers.13.self_attn.v_proj.weight',\r\n 'llama.layers.14.mlp.down_proj.weight',\r\n 'llama.layers.14.mlp.gate_proj.weight',\r\n 'llama.layers.14.mlp.up_proj.weight',\r\n 'llama.layers.14.self_attn.k_proj.weight',\r\n 'llama.layers.14.self_attn.o_proj.weight',\r\n 'llama.layers.14.self_attn.q_proj.weight',\r\n 'llama.layers.14.self_attn.v_proj.weight',\r\n 'llama.layers.15.mlp.down_proj.weight',\r\n 
'llama.layers.15.mlp.gate_proj.weight',\r\n 'llama.layers.15.mlp.up_proj.weight',\r\n 'llama.layers.15.self_attn.k_proj.weight',\r\n 'llama.layers.15.self_attn.o_proj.weight',\r\n 'llama.layers.15.self_attn.q_proj.weight',\r\n 'llama.layers.15.self_attn.v_proj.weight',\r\n 'llama.layers.16.mlp.down_proj.weight',\r\n 'llama.layers.16.mlp.gate_proj.weight',\r\n 'llama.layers.16.mlp.up_proj.weight',\r\n 'llama.layers.16.self_attn.k_proj.weight',\r\n 'llama.layers.16.self_attn.o_proj.weight',\r\n 'llama.layers.16.self_attn.q_proj.weight',\r\n 'llama.layers.16.self_attn.v_proj.weight',\r\n 'llama.layers.17.mlp.down_proj.weight',\r\n 'llama.layers.17.mlp.gate_proj.weight',\r\n 'llama.layers.17.mlp.up_proj.weight',\r\n 'llama.layers.17.self_attn.k_proj.weight',\r\n 'llama.layers.17.self_attn.o_proj.weight',\r\n 'llama.layers.17.self_attn.q_proj.weight',\r\n 'llama.layers.17.self_attn.v_proj.weight',\r\n 'llama.layers.18.mlp.down_proj.weight',\r\n 'llama.layers.18.mlp.gate_proj.weight',\r\n 'llama.layers.18.mlp.up_proj.weight',\r\n 'llama.layers.18.self_attn.k_proj.weight',\r\n 'llama.layers.18.self_attn.o_proj.weight',\r\n 'llama.layers.18.self_attn.q_proj.weight',\r\n 'llama.layers.18.self_attn.v_proj.weight',\r\n 'llama.layers.19.mlp.down_proj.weight',\r\n 'llama.layers.19.mlp.gate_proj.weight',\r\n 'llama.layers.19.mlp.up_proj.weight',\r\n 'llama.layers.19.self_attn.k_proj.weight',\r\n 'llama.layers.19.self_attn.o_proj.weight',\r\n 'llama.layers.19.self_attn.q_proj.weight',\r\n 'llama.layers.19.self_attn.v_proj.weight',\r\n 'llama.layers.2.mlp.down_proj.weight',\r\n 'llama.layers.2.mlp.gate_proj.weight',\r\n 'llama.layers.2.mlp.up_proj.weight',\r\n 'llama.layers.2.self_attn.k_proj.weight',\r\n 'llama.layers.2.self_attn.o_proj.weight',\r\n 'llama.layers.2.self_attn.q_proj.weight',\r\n 'llama.layers.2.self_attn.v_proj.weight',\r\n 'llama.layers.20.mlp.down_proj.weight',\r\n 'llama.layers.20.mlp.gate_proj.weight',\r\n 'llama.layers.20.mlp.up_proj.weight',\r\n 'llama.layers.20.self_attn.k_proj.weight',\r\n 'llama.layers.20.self_attn.o_proj.weight',\r\n 'llama.layers.20.self_attn.q_proj.weight',\r\n 'llama.layers.20.self_attn.v_proj.weight',\r\n 'llama.layers.21.mlp.down_proj.weight',\r\n 'llama.layers.21.mlp.gate_proj.weight',\r\n 'llama.layers.21.mlp.up_proj.weight',\r\n 'llama.layers.21.self_attn.k_proj.weight',\r\n 'llama.layers.21.self_attn.o_proj.weight',\r\n 'llama.layers.21.self_attn.q_proj.weight',\r\n 'llama.layers.21.self_attn.v_proj.weight',\r\n 'llama.layers.22.mlp.down_proj.weight',\r\n 'llama.layers.22.mlp.gate_proj.weight',\r\n 'llama.layers.22.mlp.up_proj.weight',\r\n 'llama.layers.22.self_attn.k_proj.weight',\r\n 'llama.layers.22.self_attn.o_proj.weight',\r\n 'llama.layers.22.self_attn.q_proj.weight',\r\n 'llama.layers.22.self_attn.v_proj.weight',\r\n 'llama.layers.23.mlp.down_proj.weight',\r\n 'llama.layers.23.mlp.gate_proj.weight',\r\n 'llama.layers.23.mlp.up_proj.weight',\r\n 'llama.layers.23.self_attn.k_proj.weight',\r\n 'llama.layers.23.self_attn.o_proj.weight',\r\n 'llama.layers.23.self_attn.q_proj.weight',\r\n 'llama.layers.23.self_attn.v_proj.weight',\r\n 'llama.layers.24.mlp.down_proj.weight',\r\n 'llama.layers.24.mlp.gate_proj.weight',\r\n 'llama.layers.24.mlp.up_proj.weight',\r\n 'llama.layers.24.self_attn.k_proj.weight',\r\n 'llama.layers.24.self_attn.o_proj.weight',\r\n 'llama.layers.24.self_attn.q_proj.weight',\r\n 'llama.layers.24.self_attn.v_proj.weight',\r\n 'llama.layers.25.mlp.down_proj.weight',\r\n 'llama.layers.25.mlp.gate_proj.weight',\r\n 
'llama.layers.25.mlp.up_proj.weight',\r\n 'llama.layers.25.self_attn.k_proj.weight',\r\n 'llama.layers.25.self_attn.o_proj.weight',\r\n 'llama.layers.25.self_attn.q_proj.weight',\r\n 'llama.layers.25.self_attn.v_proj.weight',\r\n 'llama.layers.26.mlp.down_proj.weight',\r\n 'llama.layers.26.mlp.gate_proj.weight',\r\n 'llama.layers.26.mlp.up_proj.weight',\r\n 'llama.layers.26.self_attn.k_proj.weight',\r\n 'llama.layers.26.self_attn.o_proj.weight',\r\n 'llama.layers.26.self_attn.q_proj.weight',\r\n 'llama.layers.26.self_attn.v_proj.weight',\r\n 'llama.layers.27.mlp.down_proj.weight',\r\n 'llama.layers.27.mlp.gate_proj.weight',\r\n 'llama.layers.27.mlp.up_proj.weight',\r\n 'llama.layers.27.self_attn.k_proj.weight',\r\n 'llama.layers.27.self_attn.o_proj.weight',\r\n 'llama.layers.27.self_attn.q_proj.weight',\r\n 'llama.layers.27.self_attn.v_proj.weight',\r\n 'llama.layers.28.mlp.down_proj.weight',\r\n 'llama.layers.28.mlp.gate_proj.weight',\r\n 'llama.layers.28.mlp.up_proj.weight',\r\n 'llama.layers.28.self_attn.k_proj.weight',\r\n 'llama.layers.28.self_attn.o_proj.weight',\r\n 'llama.layers.28.self_attn.q_proj.weight',\r\n 'llama.layers.28.self_attn.v_proj.weight',\r\n 'llama.layers.29.mlp.down_proj.weight',\r\n 'llama.layers.29.mlp.gate_proj.weight',\r\n 'llama.layers.29.mlp.up_proj.weight',\r\n 'llama.layers.29.self_attn.k_proj.weight',\r\n 'llama.layers.29.self_attn.o_proj.weight',\r\n 'llama.layers.29.self_attn.q_proj.weight',\r\n 'llama.layers.29.self_attn.v_proj.weight',\r\n 'llama.layers.3.mlp.down_proj.weight',\r\n 'llama.layers.3.mlp.gate_proj.weight',\r\n 'llama.layers.3.mlp.up_proj.weight',\r\n 'llama.layers.3.self_attn.k_proj.weight',\r\n 'llama.layers.3.self_attn.o_proj.weight',\r\n 'llama.layers.3.self_attn.q_proj.weight',\r\n 'llama.layers.3.self_attn.v_proj.weight',\r\n 'llama.layers.30.mlp.down_proj.weight',\r\n 'llama.layers.30.mlp.gate_proj.weight',\r\n 'llama.layers.30.mlp.up_proj.weight',\r\n 'llama.layers.30.self_attn.k_proj.weight',\r\n 'llama.layers.30.self_attn.o_proj.weight',\r\n 'llama.layers.30.self_attn.q_proj.weight',\r\n 'llama.layers.30.self_attn.v_proj.weight',\r\n 'llama.layers.31.mlp.down_proj.weight',\r\n 'llama.layers.31.mlp.gate_proj.weight',\r\n 'llama.layers.31.mlp.up_proj.weight',\r\n 'llama.layers.31.self_attn.k_proj.weight',\r\n 'llama.layers.31.self_attn.o_proj.weight',\r\n 'llama.layers.31.self_attn.q_proj.weight',\r\n 'llama.layers.31.self_attn.v_proj.weight',\r\n 'llama.layers.4.mlp.down_proj.weight',\r\n 'llama.layers.4.mlp.gate_proj.weight',\r\n 'llama.layers.4.mlp.up_proj.weight',\r\n 'llama.layers.4.self_attn.k_proj.weight',\r\n 'llama.layers.4.self_attn.o_proj.weight',\r\n 'llama.layers.4.self_attn.q_proj.weight',\r\n 'llama.layers.4.self_attn.v_proj.weight',\r\n 'llama.layers.5.mlp.down_proj.weight',\r\n 'llama.layers.5.mlp.gate_proj.weight',\r\n 'llama.layers.5.mlp.up_proj.weight',\r\n 'llama.layers.5.self_attn.k_proj.weight',\r\n 'llama.layers.5.self_attn.o_proj.weight',\r\n 'llama.layers.5.self_attn.q_proj.weight',\r\n 'llama.layers.5.self_attn.v_proj.weight',\r\n 'llama.layers.6.mlp.down_proj.weight',\r\n 'llama.layers.6.mlp.gate_proj.weight',\r\n 'llama.layers.6.mlp.up_proj.weight',\r\n 'llama.layers.6.self_attn.k_proj.weight',\r\n 'llama.layers.6.self_attn.o_proj.weight',\r\n 'llama.layers.6.self_attn.q_proj.weight',\r\n 'llama.layers.6.self_attn.v_proj.weight',\r\n 'llama.layers.7.mlp.down_proj.weight',\r\n 'llama.layers.7.mlp.gate_proj.weight',\r\n 'llama.layers.7.mlp.up_proj.weight',\r\n 'llama.layers.7.self_attn.k_proj.weight',\r\n 
'llama.layers.7.self_attn.o_proj.weight',\r\n 'llama.layers.7.self_attn.q_proj.weight',\r\n 'llama.layers.7.self_attn.v_proj.weight',\r\n 'llama.layers.8.mlp.down_proj.weight',\r\n 'llama.layers.8.mlp.gate_proj.weight',\r\n 'llama.layers.8.mlp.up_proj.weight',\r\n 'llama.layers.8.self_attn.k_proj.weight',\r\n 'llama.layers.8.self_attn.o_proj.weight',\r\n 'llama.layers.8.self_attn.q_proj.weight',\r\n 'llama.layers.8.self_attn.v_proj.weight',\r\n 'llama.layers.9.mlp.down_proj.weight',\r\n 'llama.layers.9.mlp.gate_proj.weight',\r\n 'llama.layers.9.mlp.up_proj.weight',\r\n 'llama.layers.9.self_attn.k_proj.weight',\r\n 'llama.layers.9.self_attn.o_proj.weight',\r\n 'llama.layers.9.self_attn.q_proj.weight',\r\n 'llama.layers.9.self_attn.v_proj.weight',\r\n 'lm_head.weight'}</details>\r\n\r\n\r\n\r\n\r\nNote: \r\n* Your class should inherit from `LlamaPreTrainedModel`, however I tried with both `LlamaPreTrainedModel` and `LlamaForCausalLM` and it works\r\n* Layernorm params are the same, even for randomly initialized layers as they're [initially set to 1 for the weight and 0 for the bias](https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html)", "> Hi @rangehow, thanks for raising this issue!\r\n> \r\n> Are you sure the weights save in `dir_to_bert_ckpt` aren't just gaussian initialized with bias at 0 and that's what's being loaded in? If I make my own dummy model, following the pattern in the example, the weights for the `self.bert` layer are loaded in, as expected, from the checkpoint and the weights for `self.llama` and `self.lm_head` are randomly initialized, as expected.\r\n> \r\n> Running:\r\n> \r\n> ```python\r\n> from transformers import BertModel, LlamaModel, LlamaConfig, LlamaPreTrainedModel\r\n> from torch import nn\r\n> \r\n> class Foo(LlamaPreTrainedModel):\r\n> def __init_(self, config, *model_args, **model_kwargs):\r\n> super().__init__(config)\r\n> self.llama = LlamaModel(config)\r\n> self.bert = BertModel.from_pretrained('bert-base-uncased')\r\n> self.lm_head = nn.Linear(config.hidden_size, config.vocab_size)\r\n> self.post_init()\r\n> \r\n> config = LlamaConfig()\r\n> model_0 = Foo(config)\r\n> model_1 = Foo(config)\r\n> \r\n> same_params = set()\r\n> different_params = set()\r\n> for ((name_0, param_0), (name_1, param_1)) in zip(model_0.named_parameters(), model_1.named_parameters()):\r\n> assert name_0 == name_1\r\n> if not (param_0 == param_1).all():\r\n> different_params.add(name_0)\r\n> else:\r\n> same_params.add(name_0)\r\n> ```\r\n> \r\n> Same params\r\n> Different params\r\n> Note:\r\n> \r\n> * Your class should inherit from `LlamaPreTrainedModel`, however I tried with both `LlamaPreTrainedModel` and `LlamaForCausalLM` and it works\r\n> * Layernorm params are the same, even for randomly initialized layers as they're [initially set to 1 for the weight and 0 for the bias](https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html)\r\n\r\nSorry for the late reply and thank you for your patient help! The first question that ckpt incorrect saved magically disappeared after I updated Transformers without editing any code (The difference I can observe is save_pretrained method default save safetensor instead pytorch file). \r\nI still have some doubts:\r\n1. Is my approach an appropriate way to combine 2 pretrained model together so that I can use transformers trainer? \r\n2. 
**How can I save the strange combined model after training since they have two totally different tokenizer.** The solution I can think of now is after I initialize c class and save it ,I can delete the ckpt in bert folder and change the class c code from bert.from_pretrained(bert_folder) to load config and self.bert(BertConfig) . Now I have to keep a c_folder (contains llama tokenizer, combined weights of bert and llama, llama config) and bert_folder( bert_tokenizer and bert_config). This doesn't seem like the normal form of a transformers model.\r\n\r\nI am trying to jointly training retriever (a bert like model) and llm together using transformers's trainer .Not sure if other members of the community has tried this too, because RAG is really popular now : )", "@rangehow Thanks for updating about what made things work on your end. \r\n\r\nRegarding your other questions, these are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. You'll be able to have in-depth discussions and find out what other members of the community have tried there. \r\n\r\nIn general:\r\n1. It depends what you mean by \"appropriate\". You can certainly combine different modules into another `PretraindeModel` subclass. However, there's a few things to be aware of: \r\n* Using `from_pretrained` means the same checkpoint will be loaded everytime. So, if you train the model, these weight will update alongside the others in the model. When you reload, the weights in the `self.llama` model will expecting values from the fine-tuned bert model. Unless you freeze the weights in `self.bert` you'll likely encounter performance issues. \r\n* The `PretrainedModel` class you choose as the parent class controls how the model weights are initialized. If you use `from_pretrained` then the pretrained weights are loaded in, but if you use the config and `LlamaPreTrainedModel` then you will end up initializing bert's weights with Llama's logic.\r\n2. If they have two different tokenizers, then why combine them into one model? If you need both tokenizers, you can look into combining them with a processor class\r\n\r\n\r\n\r\n" ]
1,702
1,704
1,704
NONE
null
### Feature request I want to combine two pretrained models (LLaMA and BERT) in a new Python class. More specifically, the way I've tried is to define a new class C that inherits from LLaMA and loads BERT in C's \_\_init\_\_ function. ![image](https://github.com/huggingface/transformers/assets/88258534/c5428b78-68ec-4cc2-8667-587b62853152) This lets me use C.from_pretrained('llama_ckpt_dir') to load the two models together. `model=C.from_pretrained('llama_ckpt_dir',low_cpu_mem_usage=True)` After I call c.save_pretrained(), even though the checkpoint keeps the full structure of LLaMA and BERT, BERT's params are all randomly initialized (weights Gaussian-initialized, biases all zero). (I checked this by torch.load-ing the saved C checkpoint and printing it out.) Sincerely requesting some help: what should be done? ### Motivation Since the Trainer can be passed only one model at a time, this seems like a feature worth considering for anyone who wants to train two models together. There is another difficulty: how to deal with two totally different tokenizers from BERT and LLaMA (even though this is not required by the Trainer, since the tokenizer is usually only used for data preprocessing). I hope to fix this so that I can completely turn C into a proper HF model. ### Your contribution I'm not sure how I can help, but I can fully support anything that contributes to this issue.
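A minimal sketch of the pattern being asked about, with explicit assumptions: the class subclasses `LlamaPreTrainedModel`, the BERT side is loaded with `from_pretrained` and frozen so its weights stay consistent across save/reload, and the small config values and class name are illustrative only.

```python
# Sketch only: combines a randomly initialized Llama body with a pretrained, frozen BERT.
from torch import nn
from transformers import BertModel, LlamaConfig, LlamaModel, LlamaPreTrainedModel

class LlamaWithBertRetriever(LlamaPreTrainedModel):
    def __init__(self, config: LlamaConfig):
        super().__init__(config)
        self.llama = LlamaModel(config)
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bert.requires_grad_(False)  # freeze the retriever side so training doesn't drift it
        self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
        self.post_init()  # initializes only the modules that were not loaded from a checkpoint

# Tiny config purely for a quick smoke test of the wiring.
config = LlamaConfig(num_hidden_layers=2, hidden_size=256, intermediate_size=512, num_attention_heads=4)
model = LlamaWithBertRetriever(config)
```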
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28025/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28025/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28024
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28024/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28024/comments
https://api.github.com/repos/huggingface/transformers/issues/28024/events
https://github.com/huggingface/transformers/issues/28024
2,040,835,363
I_kwDOCUB6oc55pK0j
28,024
Significant memory usage increase since 4.36
{ "login": "oraluben", "id": 5031346, "node_id": "MDQ6VXNlcjUwMzEzNDY=", "avatar_url": "https://avatars.githubusercontent.com/u/5031346?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oraluben", "html_url": "https://github.com/oraluben", "followers_url": "https://api.github.com/users/oraluben/followers", "following_url": "https://api.github.com/users/oraluben/following{/other_user}", "gists_url": "https://api.github.com/users/oraluben/gists{/gist_id}", "starred_url": "https://api.github.com/users/oraluben/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oraluben/subscriptions", "organizations_url": "https://api.github.com/users/oraluben/orgs", "repos_url": "https://api.github.com/users/oraluben/repos", "events_url": "https://api.github.com/users/oraluben/events{/privacy}", "received_events_url": "https://api.github.com/users/oraluben/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@oraluben `_flash_attn_2_enabled` does not exist anymore in 4.36. Can you try to load your model with `model = LlamaForCausalLM.from_config(config, attention_implementation=\"flash_attention_2\")` and could you `print(model)` as well?", "Fixed by https://github.com/huggingface/transformers/pull/28031" ]
1,702
1,703
1,703
NONE
null
bisected to #26681 ### System Info Device: A10 - huggingface_hub version: 0.19.4 - Platform: Linux-5.10.134-15.al8.x86_64-x86_64-with-glibc2.32 - Python version: 3.10.13 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/ecs-user/.cache/huggingface/token - Has saved token ?: False - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 2.2.0.dev20231213+cu121 - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 9.3.0 - hf_transfer: N/A - gradio: N/A - tensorboard: N/A - numpy: 1.24.1 - pydantic: 2.5.2 - aiohttp: 3.9.1 - ENDPOINT: https://huggingface.co. - HF_HUB_CACHE: /home/ecs-user/.cache/huggingface/hub - HF_ASSETS_CACHE: /home/ecs-user/.cache/huggingface/assets - HF_TOKEN_PATH: /home/ecs-user/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False - HF_HUB_ETAG_TIMEOUT: 10 - HF_HUB_DOWNLOAD_TIMEOUT: 10 ### Who can help? @tomaarsen ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Script: ``` import json import numpy as np import torch.nn.functional as F from datasets import Dataset, load_dataset from transformers import LlamaConfig, LlamaForCausalLM, Trainer, TrainingArguments, DataCollatorForLanguageModeling from transformers import LlamaTokenizer from transformers.models.llama.modeling_llama import LlamaFlashAttention2 config = LlamaConfig(num_hidden_layers=2) config._flash_attn_2_enabled = True def _flash_attention_forward(self, q, k, v, m, ql, dropout=0.0, softmax_scale=None): assert m is None return F.scaled_dot_product_attention( q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2), is_causal=True).transpose(1, 2) LlamaFlashAttention2._flash_attention_forward = _flash_attention_forward model = LlamaForCausalLM(config) DEEPSPEED_TEMPLATE = '{"optimizer": {"type": "AdamW", "params": {"lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto"}}, "scheduler": {"type": "WarmupLR", "params": {"warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto"}}, "zero_optimization": {"stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e8, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": "auto"}, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false}' ds_config = json.loads(DEEPSPEED_TEMPLATE) ds_config['zero_optimization']['stage'] = 3 training_args = TrainingArguments( remove_unused_columns=False, log_level='info', per_device_train_batch_size=2, logging_steps=1, output_dir='./tmp', bf16=True, deepspeed=ds_config, gradient_checkpointing=True, ) input_ids = np.random.randint(100, 30000, (1000, 2048)) data_set = Dataset.from_dict({ "input_ids": input_ids, "labels": input_ids }) trainer = Trainer( model, args=training_args, train_dataset=data_set, ) trainer.train() ``` 1. `torchrun llama.py` 2. fail with `torch.cuda.OutOfMemoryError: CUDA out of memory. 
Tried to allocate 16.06 GiB. GPU 0 has a total capacity of 21.99 GiB of which 2.79 GiB is free. Including non-PyTorch memory, this process has 19.19 GiB memory in use. Of the allocated memory 16.93 GiB is allocated by PyTorch, and 1.40 GiB is reserved by PyTorch but unallocated.` ### Expected behavior The tranning runs normally. With `transformers==4.35.2`: ``` $ nvidia-smi Thu Dec 14 11:24:56 2023 +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 545.23.06 Driver Version: 545.23.06 CUDA Version: 12.3 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |=========================================+======================+======================| | 0 NVIDIA A10 On | 00000000:00:07.0 Off | 0 | | 0% 37C P0 157W / 150W | 20660MiB / 23028MiB | 100% Default | | | | N/A | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 1281528 C ...al/miniconda3/envs/zero3/bin/python 20648MiB | +---------------------------------------------------------------------------------------+ ```
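Since 4.36 removed the private `_flash_attn_2_enabled` flag, the documented route to the flash-attention path when building from a config is the `attn_implementation` argument (kwarg spelling per the current docs). A hedged sketch, assuming `flash-attn` 2 is installed and a CUDA device with fp16/bf16 support:

```python
# Sketch: request flash attention 2 explicitly instead of setting the removed
# private flag on the config. Requires the flash-attn package to be installed.
import torch
from transformers import AutoModelForCausalLM, LlamaConfig

config = LlamaConfig(num_hidden_layers=2)
model = AutoModelForCausalLM.from_config(
    config,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)
print(type(model.model.layers[0].self_attn).__name__)  # expected: LlamaFlashAttention2
```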
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28024/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28023
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28023/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28023/comments
https://api.github.com/repos/huggingface/transformers/issues/28023/events
https://github.com/huggingface/transformers/issues/28023
2,040,821,101
I_kwDOCUB6oc55pHVt
28,023
PEFT+gradient checkpointing causes attention mask shape mismatch during backward pass
{ "login": "geoffreyangus", "id": 29719151, "node_id": "MDQ6VXNlcjI5NzE5MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/29719151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geoffreyangus", "html_url": "https://github.com/geoffreyangus", "followers_url": "https://api.github.com/users/geoffreyangus/followers", "following_url": "https://api.github.com/users/geoffreyangus/following{/other_user}", "gists_url": "https://api.github.com/users/geoffreyangus/gists{/gist_id}", "starred_url": "https://api.github.com/users/geoffreyangus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geoffreyangus/subscriptions", "organizations_url": "https://api.github.com/users/geoffreyangus/orgs", "repos_url": "https://api.github.com/users/geoffreyangus/repos", "events_url": "https://api.github.com/users/geoffreyangus/events{/privacy}", "received_events_url": "https://api.github.com/users/geoffreyangus/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[ { "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false } ]
[ "hi @geoffreyangus \r\nthanks for the clean reproducer, I can confirm #28031 fixes the bug", "Ah, @younesbelkada were you able to try with Mixtral as well (the commented out MODEL_ID in the reproducer)? It still seems to be broken on my side.", "Hi @geoffreyangus \r\nThanks, indeed, just made #28061" ]
1,702
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: x1 80GB A100 - Using distributed or parallel set-up in script?: No ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This should reproduce the error. This was originally run on x1 80GB A100. Note that both Llama-2-7B and Mixtral-8x7B are affected by this change (mixtral is testable– commented out– in the repro script below). ```python import torch from torch.optim import Adam from transformers import BitsAndBytesConfig from transformers import AutoModelForCausalLM, AutoTokenizer from peft import get_peft_config, get_peft_model, LoraConfig, TaskType MODEL_ID = "meta-llama/Llama-2-7b-hf" # MODEL_ID = "mistralai/Mixtral-8x7B-v0.1" # this is broken too tokenizer = AutoTokenizer.from_pretrained(MODEL_ID) inputs = tokenizer("hello world what's up", return_tensors="pt") inputs = {k: v.to("cuda") for k, v in inputs.items()} print(inputs) model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", attn_implementation="eager", torch_dtype=torch.float16) peft_config = LoraConfig(task_type=TaskType.CAUSAL_LM, target_modules=['q_proj', 'v_proj'], inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1) model = get_peft_model(model, peft_config) model.print_trainable_parameters() model.gradient_checkpointing_enable() model.enable_input_require_grads() optimizer = Adam(model.parameters(), lr=1e-5) model.train() for i in range(10): outputs = model(labels=inputs['input_ids'], **inputs) loss = outputs.loss print(loss) loss.backward() optimizer.step() optimizer.zero_grad() ``` The culprit in the above script is the `model.train()` call after LoRA is configured for `model`. One can workaround it by (1) avoiding calling `model.train()` and (2) if you have to call `model.eval()`, be sure to save and reuse the `module.training` values from the model's initial state when reverting back to train mode. 
The above will throw the following error: ``` File "mixtral_train_debug.py", line 47, in <module> loss.backward() File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/_tensor.py", line 492, in backward torch.autograd.backward( File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/autograd/__init__.py", line 251, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/autograd/function.py", line 288, in apply return user_fn(self, *args) File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 271, in backward outputs = ctx.run_function(*detached_inputs) File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/home/ray/anaconda3/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 789, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/ray/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/home/ray/anaconda3/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 423, in forward raise ValueError( ValueError: Attention mask should be of size (1, 1, 7, 14), but is torch.Size([1, 1, 7, 7]) ``` ### Expected behavior I would expect calling `model.train()` and `model.eval()` to be callable despite the presence of PEFT modules and/or gradient checkpointing.
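A small sketch of the workaround described above: snapshot each submodule's `training` flag and restore exactly those flags instead of flipping the whole model with `model.train()`. Helper names are illustrative, not part of any library API.

```python
# Sketch of the train/eval workaround: remember per-module `training` flags and
# restore them verbatim rather than calling model.train() across the board.
def snapshot_training_flags(model):
    return {name: module.training for name, module in model.named_modules()}

def restore_training_flags(model, flags):
    # named_modules() yields parents before children, so each module ends up
    # with its own snapshotted mode even though train() recurses into children.
    for name, module in model.named_modules():
        if name in flags:
            module.train(flags[name])

# usage: flags = snapshot_training_flags(model); model.eval(); ...; restore_training_flags(model, flags)
```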
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28023/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28023/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28022
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28022/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28022/comments
https://api.github.com/repos/huggingface/transformers/issues/28022/events
https://github.com/huggingface/transformers/issues/28022
2,040,774,659
I_kwDOCUB6oc55o8AD
28,022
No effect of gradient_checkpointing when training llama-2
{ "login": "getao", "id": 12735658, "node_id": "MDQ6VXNlcjEyNzM1NjU4", "avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/getao", "html_url": "https://github.com/getao", "followers_url": "https://api.github.com/users/getao/followers", "following_url": "https://api.github.com/users/getao/following{/other_user}", "gists_url": "https://api.github.com/users/getao/gists{/gist_id}", "starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/getao/subscriptions", "organizations_url": "https://api.github.com/users/getao/orgs", "repos_url": "https://api.github.com/users/getao/repos", "events_url": "https://api.github.com/users/getao/events{/privacy}", "received_events_url": "https://api.github.com/users/getao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Could you share a minimal reproducer that shows that there is no improvements / how you are enabling it? ", "> Could you share a minimal reproducer that shows that there is no improvements / how you are enabling it?\r\n\r\n```\r\n\r\ndef main():\r\n parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n\r\n print(model_args)\r\n print(data_args)\r\n print(training_args)\r\n\r\n data_prefix = data_args.data_path\r\n \r\n train_file = f\"{data_prefix}.train.json\"\r\n eval_file = f\"{data_prefix}.eval.json\"\r\n \r\n dataset = load_dataset(\"json\", data_files={\"train\": train_file, \"eval\": eval_file})\r\n train_dataset = dataset[\"train\"]\r\n eval_dataset = dataset[\"eval\"]\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path)\r\n train_dataset = train_dataset.map(tokenize_function, batched=True, fn_kwargs={\"tokenizer\": tokenizer, \"max_seq_length\": data_args.max_seq_length)\r\n eval_dataset = eval_dataset.map(tokenize_function, batched=True, fn_kwargs={\"tokenizer\": tokenizer, \"max_seq_length\": data_args.max_seq_length)\r\n\r\n model_download_flag = False\r\n while model_download_flag is False: \r\n try:\r\n model = AutoModelForCausalLM.from_pretrained(model_args.model_name_or_path, torch_dtype=torch.float16 if training_args.bf16 is False else torch.bfloat16, use_flash_attention_2=True, resume_download=True)\r\n model_download_flag = True\r\n except:\r\n print(\"Downloading incomplete... Retry\")\r\n\r\n train_model(model, train_dataset, eval_dataset, training_args)\r\n\r\nmain()\r\n\r\n```\r\n\r\nAbove is the training script for training with llama-2.\r\n\r\nThe command line is:\r\n\r\n```\r\ntorchrun --nproc-per-node=4 baseline.py --adam_beta2 0.95 --adam_epsilon 1e-6 --num_train_epochs $epoch --per_device_train_batch_size $batch --per_device_eval_batch_size $batch --gradient_accumulation_steps 1 \\\r\n --learning_rate $lr --seed $seed --data_seed $seed **--gradient_checkpointing** --logging_steps 10 --save_strategy 'no' --evaluation_strategy 'steps' --eval_steps $eval_steps --save_steps $save_steps --bf16 --max_seq_length $max_seq_len --output_dir $OUT_DIR --logging_dir $OUT_DIR/logs --data_path $INPUT_DATA | tee $OUT_DIR/train.log\r\n\r\n```", "Sorry, you're gonna have to give more details about what the `train_model` function does. If you don't use the official trainer / training script then maybe you are just not enabling it. `model.enable_gradient_checkpointing()` should be called. \r\n", "> model.enable_gradient_checkpointing()\r\n\r\nI used the official trainer:\r\n\r\n```\r\ndef train_model(model, train_dataset, eval_dataset, training_args, data_collator=None):\r\n\r\n last_checkpoint = None\r\n if os.path.isdir(training_args.output_dir):\r\n last_checkpoint = get_last_checkpoint(training_args.output_dir)\r\n if last_checkpoint is not None and training_args.resume_from_checkpoint is None:\r\n print(\r\n f\"Checkpoint detected, resuming training at {last_checkpoint}. 
To avoid this behavior, change \"\r\n \"the `--output_dir` or add `--overwrite_output_dir` to train from scratch.\"\r\n )\r\n\r\n trainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n data_collator=data_collator\r\n )\r\n\r\n checkpoint = None\r\n if training_args.resume_from_checkpoint is not None:\r\n checkpoint = training_args.resume_from_checkpoint\r\n elif last_checkpoint is not None:\r\n checkpoint = last_checkpoint\r\n\r\n print(f\"Loaded from the checkpoint: {checkpoint}\")\r\n\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n \r\n trainer.save_model()\r\n trainer.log_metrics(\"train\", train_result.metrics)\r\n metrics = trainer.evaluate()\r\n trainer.log_metrics(\"eval\", metrics)\r\n trainer.save_metrics(\"eval\", metrics)\r\n```\r\n\r\nOr, do you mean that I still need to add ```model.enable_gradient_checkpointing()``` even if I use the default official Trainer?", "Not necessarily no, the code here https://github.com/huggingface/transformers/blob/04e67503d804c5f51a667e2db8e817c1d744048b/src/transformers/trainer.py#L1650 should automatically enable it if the arg is properly passed. As it is one of the most important aspect w.r.t training with peft, this is heavily tested. \r\nCould you make sure that it is set by printing the `training_args`? ", "> Not necessarily no, the code here\r\n> \r\n> https://github.com/huggingface/transformers/blob/04e67503d804c5f51a667e2db8e817c1d744048b/src/transformers/trainer.py#L1650\r\n> \r\n> should automatically enable it if the arg is properly passed. As it is one of the most important aspect w.r.t training with peft, this is heavily tested.\r\n> Could you make sure that it is set by printing the `training_args`?\r\n\r\nSure, I checked this and found it was enabled by printing training_args:\r\n\r\ngradient_checkpointing=True,\r\ngradient_checkpointing_kwargs=None,\r\n\r\nAlso, I have observed some logs showing the gradient_checkpointing is enabled before the training progress bar appears:\r\n\r\n`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`...\r\n\r\nUnfortunately, no GPU memory saving is observed.", "I am probably missing something so pinging @younesbelkada here! 🤗 \r\n", "hi @getao \r\nThanks for the issue! I don't think training should be faster with GC - do you observe the same behavioru without FA?", "> hi @getao Thanks for the issue! I don't think training should be faster with GC - do you observe the same behavioru without FA?\r\n\r\nHi @younesbelkada , I didn't say that GC helps training become faster. GC is thought to save memory but at a cost of slowing down training. What I said is that I found enabling GC didn't save memory and didn't slow down training, which is weird to me.\r\n\r\nBTW, What is FA?", "I see thanks for explaining, OK! \r\nSorry I meant Flash Attention, I wonder if it is not a weird interaction between Flash Attention and GC", "> I see thanks for explaining, OK! Sorry I meant Flash Attention, I wonder if it is not a weird interaction between Flash Attention and GC\r\n\r\nOh, I see. I'll take a look at it without Flash attention.", "Regarding potential weird interaction between Flash Attention and GC -- The HF gradient checkpointing will cause inefficient redundant computation when using flash attention. 
TL;DR: it (computationally equivalent) computes a redundant FA forward.\r\n\r\nThe reason is the flash attention backward kernel will recompute the Q^tK for rematerialization, which is similar to the recomputation in the backward process of the original HF gradient checkpointing. \r\n\r\nIf you're interested, we implemented a more efficient flash attention friendly gradient checkpointing in [FastCkpt](https://github.com/RulinShao/FastCkpt/tree/main) to mitigate this issue, where you just need to pip install fastckpt and import a monkey patch to accelerate your training by saving one flash attention forward in every layer compared with the HF gradient checkpointing.", "> Regarding potential weird interaction between Flash Attention and GC -- The HF gradient checkpointing will cause inefficient redundant computation when using flash attention. TL;DR: it (computationally equivalent) computes a redundant FA forward.\r\n> \r\n> The reason is the flash attention backward kernel will recompute the Q^tK for rematerialization, which is similar to the recomputation in the backward process of the original HF gradient checkpointing.\r\n> \r\n> If you're interested, we implemented a more efficient flash attention friendly gradient checkpointing in [FastCkpt](https://github.com/RulinShao/FastCkpt/tree/main) to mitigate this issue, where you just need to pip install fastckpt and import a monkey patch to accelerate your training by saving one flash attention forward in every layer compared with the HF gradient checkpointing.\r\n\r\nThank you for your answer. It makes sense for the speed issue. However, does it make sense that the GPU memory is not saved?", "Hello, just an update.\r\n\r\nI find that --gradient_checkpointing is highly useful to save memory when it is used with deepspeed along with the deepspeed config file. \r\n\r\nHowever, if I use it in torchrun (without any specified config), it doesn't work to save memory. I wonder if there is anything that I missed for properly enabling gradient checkpointing.\r\n\r\nAny help would be appreciated.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,707
1,707
NONE
null
### System Info transformers == 4.35.2 pytorch == 2.1.1 ### Who can help? @ArthurZucker Hello, I'm training Llama-2 with flash-attn 2 using torchrun. However, I found that enabling gradient_checkpointing doesn't save any GPU memory, and the training speed doesn't decrease either. I suspect something is wrong with gradient_checkpointing. Could you please help take a look at the issue? I enable gradient_checkpointing in the training_args. ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Continue training llama-2-7b-hf on any text data with batch=1, seq_len=1024, using flash-attn: run once with gradient_checkpointing enabled and once without, and compare the difference. ### Expected behavior Training speed decreases and GPU memory usage is reduced.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28022/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28022/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28021
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28021/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28021/comments
https://api.github.com/repos/huggingface/transformers/issues/28021/events
https://github.com/huggingface/transformers/issues/28021
2,040,645,353
I_kwDOCUB6oc55ocbp
28,021
Incorrect router probability calculation
{ "login": "lhallee", "id": 72926928, "node_id": "MDQ6VXNlcjcyOTI2OTI4", "avatar_url": "https://avatars.githubusercontent.com/u/72926928?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhallee", "html_url": "https://github.com/lhallee", "followers_url": "https://api.github.com/users/lhallee/followers", "following_url": "https://api.github.com/users/lhallee/following{/other_user}", "gists_url": "https://api.github.com/users/lhallee/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhallee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhallee/subscriptions", "organizations_url": "https://api.github.com/users/lhallee/orgs", "repos_url": "https://api.github.com/users/lhallee/repos", "events_url": "https://api.github.com/users/lhallee/events{/privacy}", "received_events_url": "https://api.github.com/users/lhallee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry could you either show the issue or detail where you had a problem? The computation is different because the output shape are also different, the routing mecanism is also different. 🤗 ", "Sure! @ArthurZucker\r\n\r\nHere's the current loss function for convenience\r\n```\r\ndef load_balancing_loss_func(gate_logits: torch.Tensor, num_experts: torch.Tensor = None, top_k=2) -> float:\r\n r\"\"\"\r\n Computes auxiliary load balancing loss as in Switch Transformer - implemented in Pytorch.\r\n\r\n See Switch Transformer (https://arxiv.org/abs/2101.03961) for more details. This function implements the loss\r\n function presented in equations (4) - (6) of the paper. It aims at penalizing cases where the routing between\r\n experts is too unbalanced.\r\n\r\n Args:\r\n gate_logits (Union[`torch.Tensor`, Tuple[torch.Tensor]):\r\n Logits from the `gate`, should be a tuple of tensors. Shape: [batch_size, seqeunce_length, num_experts].\r\n num_experts (`int`, *optional*):\r\n Number of experts\r\n\r\n Returns:\r\n The auxiliary loss.\r\n \"\"\"\r\n if gate_logits is None:\r\n return 0\r\n\r\n if isinstance(gate_logits, tuple):\r\n # cat along the layers?\r\n gate_logits = torch.cat(gate_logits, dim=0)\r\n\r\n routing_weights, selected_experts = torch.topk(gate_logits, top_k, dim=-1)\r\n routing_weights = routing_weights.softmax(dim=-1)\r\n\r\n # cast the expert indices to int64, otherwise one-hot encoding will fail\r\n if selected_experts.dtype != torch.int64:\r\n selected_experts = selected_experts.to(torch.int64)\r\n\r\n if len(selected_experts.shape) == 2:\r\n selected_experts = selected_experts.unsqueeze(2)\r\n\r\n expert_mask = torch.nn.functional.one_hot(selected_experts, num_experts)\r\n\r\n # For a given token, determine if it was routed to a given expert.\r\n expert_mask = torch.max(expert_mask, axis=-2).values\r\n\r\n # cast to float32 otherwise mean will fail\r\n expert_mask = expert_mask.to(torch.float32)\r\n tokens_per_group_and_expert = torch.mean(expert_mask, axis=-2)\r\n\r\n router_prob_per_group_and_expert = torch.mean(routing_weights, axis=-1)\r\n return torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert.unsqueeze(-1)) * (num_experts**2)\r\n```\r\n\r\nAn example\r\n\r\n```\r\nnum_hidden_layers=30\r\nbatch_size = 16\r\nseq_len = 32\r\nnum_experts = 8\r\ngate_logits = tuple(torch.randn(batch_size, seq_len, num_experts) for _ in range(num_hidden_layers))\r\nload_balancing_loss_func(gate_logits=gate_logits, num_experts=num_experts)\r\n```\r\nShape error\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\n[c:\\Users\\Logan](file:///C:/Users/Logan) Hallee\\Desktop\\MOE-PLM\\moesm_testing.ipynb Cell 13 line 6\r\n [3](vscode-notebook-cell:/c%3A/Users/Logan%20Hallee/Desktop/MOE-PLM/moesm_testing.ipynb#X15sZmlsZQ%3D%3D?line=2) num_experts = 8\r\n [5](vscode-notebook-cell:/c%3A/Users/Logan%20Hallee/Desktop/MOE-PLM/moesm_testing.ipynb#X15sZmlsZQ%3D%3D?line=4) gate_logits = tuple(torch.randn(batch_size, seq_len, num_experts) for _ in range(30))\r\n----> [6](vscode-notebook-cell:/c%3A/Users/Logan%20Hallee/Desktop/MOE-PLM/moesm_testing.ipynb#X15sZmlsZQ%3D%3D?line=5) load_balancing_loss_func(gate_logits=gate_logits, num_experts=8)\r\n\r\n[c:\\Users\\Logan](file:///C:/Users/Logan) Hallee\\Desktop\\MOE-PLM\\moesm_testing.ipynb Cell 13 line 4\r\n [42](vscode-notebook-cell:/c%3A/Users/Logan%20Hallee/Desktop/MOE-PLM/moesm_testing.ipynb#X15sZmlsZQ%3D%3D?line=41) 
tokens_per_group_and_expert = torch.mean(expert_mask, axis=-2)\r\n [44](vscode-notebook-cell:/c%3A/Users/Logan%20Hallee/Desktop/MOE-PLM/moesm_testing.ipynb#X15sZmlsZQ%3D%3D?line=43) router_prob_per_group_and_expert = torch.mean(routing_weights, axis=-1)\r\n---> [45](vscode-notebook-cell:/c%3A/Users/Logan%20Hallee/Desktop/MOE-PLM/moesm_testing.ipynb#X15sZmlsZQ%3D%3D?line=44) return torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert.unsqueeze(-1)) * (num_experts**2)\r\n\r\nRuntimeError: The size of tensor a (480) must match the size of tensor b (32) at non-singleton dimension 1\r\n```\r\n ", "The loss is made to be used with the outputs of the model, which merge batch and sequence length 😉", "It looks like the documentation is wrong then. Could you clarify where the merge happens and the correct shape of the input?", "Hello~ does this function \"load_balancing_loss_func\" really work? It always output a constant for me.", "> Hello~ does this function \"load_balancing_loss_func\" really work? It always output a constant for me.\r\n\r\nSame to me, and the grad norm is 0. @ArthurZucker ", "Thanks all for the feedback I'll check it and update the doc with an example! \r\nThe merge happens in the forward of the `MixtralSparseMoeBlock` here: https://github.com/huggingface/transformers/blob/cfd3e8d1e05e11b12bf50efb90691a4ad1f68926/src/transformers/models/mixtral/modeling_mixtral.py#L706", "> Thanks all for the feedback I'll check it and update the doc with an example! The merge happens in the forward of the `MixtralSparseMoeBlock` here: https://github.com/huggingface/transformers/blob/cfd3e8d1e05e11b12bf50efb90691a4ad1f68926/src/transformers/models/mixtral/modeling_mixtral.py#L706\r\n\r\nHi, have you fix the constant loss problem ?", "Yes, #28115 fixes this sorry everyone ! 🤗 " ]
1,702
1,703
1,703
NONE
null
### System Info transformers version 4.36.0 ### Who can help? @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I think load_balancing_loss_func in modeling_mixtral creates router_prob_per_group_and_expert incorrectly https://github.com/huggingface/transformers/blob/v4.36.0/src/transformers/models/mixtral/modeling_mixtral.py#L120 Trying to multiply something batch_size * num_hidden_layers, num_experts by batch_size * num_hidden_layers, topk, 1 `torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert.unsqueeze(-1)) * (num_experts**2)` Correct creation of routing_weights should likely be from gate_logits, which ensures it is the correct size `routing_weights = gate_logits.softamx(dim=-1)` The unsqueeze(-1) is necessary with this. Also router_prob_per_group_and_expert should average over axis=-2 `router_prob_per_group_and_expert = torch.mean(routing_weights, axis=-2)` This follows the previous implementation in modeling_switch_transformers https://github.com/huggingface/transformers/blob/v4.36.0/src/transformers/models/switch_transformers/modeling_switch_transformers.py#L91 ### Expected behavior Something like this would fix it ``` def router_loss_func_test(gate_logits: torch.Tensor, top_k=2) -> float: if gate_logits is None: return 0 if isinstance(gate_logits, tuple): # cat along the layers? gate_logits = torch.cat(gate_logits, dim=0) # batch_size * num_hidden_layers, sequence_length, num_experts num_experts = gate_logits.shape[-1] _, expert_indicies = torch.topk(gate_logits, top_k, dim=-1) # this is done so you don't need to pass expert_indicies routing_probs = gate_logits.softmax(dim=-1) # routing probs if expert_indicies.dtype != torch.int64: # cast the expert indices to int64, otherwise one-hot encoding will fail expert_indicies = expert_indicies.to(torch.int64) if len(expert_indicies.shape) == 2: expert_indicies = expert_indicies.unsqueeze(2) expert_mask = torch.nn.functional.one_hot(expert_indicies, num_experts) # For a given token, determine if it was routed to a given expert. expert_mask = torch.max(expert_mask, axis=-2).values expert_mask = expert_mask.to(torch.float32) # cast to float32 otherwise mean will fail tokens_per_group_and_expert = torch.mean(expert_mask, axis=-2) router_prob_per_group_and_expert = torch.mean(routing_probs, axis=-2) loss = torch.mean(tokens_per_group_and_expert * router_prob_per_group_and_expert) * (num_experts**2) return loss ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28021/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28021/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28020
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28020/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28020/comments
https://api.github.com/repos/huggingface/transformers/issues/28020/events
https://github.com/huggingface/transformers/pull/28020
2,040,514,024
PR_kwDOCUB6oc5h8KMu
28,020
Fix wrong examples in llava usage.
{ "login": "Lyken17", "id": 7783214, "node_id": "MDQ6VXNlcjc3ODMyMTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7783214?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lyken17", "html_url": "https://github.com/Lyken17", "followers_url": "https://api.github.com/users/Lyken17/followers", "following_url": "https://api.github.com/users/Lyken17/following{/other_user}", "gists_url": "https://api.github.com/users/Lyken17/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lyken17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lyken17/subscriptions", "organizations_url": "https://api.github.com/users/Lyken17/orgs", "repos_url": "https://api.github.com/users/Lyken17/repos", "events_url": "https://api.github.com/users/Lyken17/events{/privacy}", "received_events_url": "https://api.github.com/users/Lyken17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts I have revised the docs, please have a check." ]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR aims to fix the demo code of `LlavaForConditionalGeneration` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28020/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28020/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28020", "html_url": "https://github.com/huggingface/transformers/pull/28020", "diff_url": "https://github.com/huggingface/transformers/pull/28020.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28020.patch", "merged_at": 1702660191000 }
https://api.github.com/repos/huggingface/transformers/issues/28019
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28019/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28019/comments
https://api.github.com/repos/huggingface/transformers/issues/28019/events
https://github.com/huggingface/transformers/pull/28019
2,040,461,517
PR_kwDOCUB6oc5h7-qh
28,019
Fix languages covered by M4Tv2
{ "login": "ylacombe", "id": 52246514, "node_id": "MDQ6VXNlcjUyMjQ2NTE0", "avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylacombe", "html_url": "https://github.com/ylacombe", "followers_url": "https://api.github.com/users/ylacombe/followers", "following_url": "https://api.github.com/users/ylacombe/following{/other_user}", "gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions", "organizations_url": "https://api.github.com/users/ylacombe/orgs", "repos_url": "https://api.github.com/users/ylacombe/repos", "events_url": "https://api.github.com/users/ylacombe/events{/privacy}", "received_events_url": "https://api.github.com/users/ylacombe/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the suggestion @amyeroberts ! I've integrated them and will merge when the CI is green!" ]
1,702
1,702
1,702
COLLABORATOR
null
# What does this PR do? Currently, M4Tv2 into-text tasks (ASR, S2TT, T2TT) do not work for languages outside of the 36 for which audio output is supported. This is linked to a check at the beginning of the model's `generate` method: the model previously verified whether `tgt_lang` was in a set of dictionaries, independently of the output modality. This PR aims to fix that. I've added a test to make sure it works. cc @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28019/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28019/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28019", "html_url": "https://github.com/huggingface/transformers/pull/28019", "diff_url": "https://github.com/huggingface/transformers/pull/28019.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28019.patch", "merged_at": 1702565024000 }
https://api.github.com/repos/huggingface/transformers/issues/28018
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28018/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28018/comments
https://api.github.com/repos/huggingface/transformers/issues/28018/events
https://github.com/huggingface/transformers/pull/28018
2,040,430,141
PR_kwDOCUB6oc5h73xE
28,018
[GPTQ] Fix test
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@SunMarc Do you have permissions to merge? If not, I can merge this in if it's good to go", "I'll merge it ! thx for the reminder " ]
1,702
1,705
1,705
MEMBER
null
# What does this PR do? This PR fixes failing tests related to GPTQ quantization. The breaking tests are related to modifications on the optimum side and OOMs from the new runner. I've also switched to a smaller model. Related optimum [PR](https://github.com/huggingface/optimum/pull/1574/files#)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28018/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28018/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28018", "html_url": "https://github.com/huggingface/transformers/pull/28018", "diff_url": "https://github.com/huggingface/transformers/pull/28018.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28018.patch", "merged_at": 1705335775000 }
https://api.github.com/repos/huggingface/transformers/issues/28017
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28017/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28017/comments
https://api.github.com/repos/huggingface/transformers/issues/28017/events
https://github.com/huggingface/transformers/pull/28017
2,040,398,233
PR_kwDOCUB6oc5h7wxc
28,017
[`chore`] Update warning text, a word was missing
{ "login": "tomaarsen", "id": 37621491, "node_id": "MDQ6VXNlcjM3NjIxNDkx", "avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tomaarsen", "html_url": "https://github.com/tomaarsen", "followers_url": "https://api.github.com/users/tomaarsen/followers", "following_url": "https://api.github.com/users/tomaarsen/following{/other_user}", "gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}", "starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions", "organizations_url": "https://api.github.com/users/tomaarsen/orgs", "repos_url": "https://api.github.com/users/tomaarsen/repos", "events_url": "https://api.github.com/users/tomaarsen/events{/privacy}", "received_events_url": "https://api.github.com/users/tomaarsen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@amyeroberts This should be a simple merge; but I'd like to leave it to a core maintainer as I'm not 100% sure whether non-core maintainers should be merging on `transformers`.", "We usually let HF people merge their own PRs 😉 Could you rebase on main before merging? ", "Good to know!\r\n\r\nRebased & force pushed :) ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28017). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,702
1,705
1,705
MEMBER
null
Hello! # What does this PR do? * Updates a warning text: "lead" was missing. I missed this in the original PR, apologies. ## Before submitting - [x] This PR fixes a typo or improves the docs ## Who can review? @ArthurZucker cc: @stas00 Thanks for pointing this out here: https://github.com/huggingface/transformers/pull/26681#discussion_r1425760204 - Tom Aarsen
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28017/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28017/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28017", "html_url": "https://github.com/huggingface/transformers/pull/28017", "diff_url": "https://github.com/huggingface/transformers/pull/28017.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28017.patch", "merged_at": 1705309683000 }
https://api.github.com/repos/huggingface/transformers/issues/28016
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28016/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28016/comments
https://api.github.com/repos/huggingface/transformers/issues/28016/events
https://github.com/huggingface/transformers/pull/28016
2,040,375,252
PR_kwDOCUB6oc5h7r4k
28,016
[docs] MPS
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
MEMBER
null
As a part of a larger effort to clean up the `Trainer` API docs in #27986, this PR moves the [Trainer for accelerated PyTorch training on Mac](https://huggingface.co./docs/transformers/main/en/main_classes/trainer#using-trainer-for-accelerated-pytorch-training-on-mac) section to the currently empty [Training on Specialized Hardware](https://huggingface.co./docs/transformers/main/en/perf_train_special) page. Other updates include rewriting it a bit so it doesn't sound like it's copied directly from the blog post and removing the link to the paywalled article for setup 🙂
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28016/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28016/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28016", "html_url": "https://github.com/huggingface/transformers/pull/28016", "diff_url": "https://github.com/huggingface/transformers/pull/28016.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28016.patch", "merged_at": 1702675050000 }
https://api.github.com/repos/huggingface/transformers/issues/28015
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28015/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28015/comments
https://api.github.com/repos/huggingface/transformers/issues/28015/events
https://github.com/huggingface/transformers/issues/28015
2,040,353,985
I_kwDOCUB6oc55nVTB
28,015
Race condition while saving the checkpoint of the model
{ "login": "upperwal", "id": 5246435, "node_id": "MDQ6VXNlcjUyNDY0MzU=", "avatar_url": "https://avatars.githubusercontent.com/u/5246435?v=4", "gravatar_id": "", "url": "https://api.github.com/users/upperwal", "html_url": "https://github.com/upperwal", "followers_url": "https://api.github.com/users/upperwal/followers", "following_url": "https://api.github.com/users/upperwal/following{/other_user}", "gists_url": "https://api.github.com/users/upperwal/gists{/gist_id}", "starred_url": "https://api.github.com/users/upperwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/upperwal/subscriptions", "organizations_url": "https://api.github.com/users/upperwal/orgs", "repos_url": "https://api.github.com/users/upperwal/repos", "events_url": "https://api.github.com/users/upperwal/events{/privacy}", "received_events_url": "https://api.github.com/users/upperwal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@upperwal see https://github.com/huggingface/transformers/pull/28009", "@thundergolfer Oh, awesome! Will close this issue. " ]
1,702
1,702
1,702
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-5.4.0-80-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Distributed ### Who can help? @muellerz @pacman100 ### Error There is a race condition at `Trainer > _maybe_log_save_evaluate` in a `multi_gpu` setup with `DDP`. Race condition happens because `self.control.should_save` allows all nodes to enter `self._save_checkpoint()`. All nodes (even non-root nodes) enter `self._save_checkpoint()` and creates `staging_output_dir` if not available to save the random number generator state using `_save_rng_state`. Then eventually one of these nodes reaches [`if staging_output_dir != output_dir`](https://github.com/huggingface/transformers/blob/ec43d6870aa1afb42a6d2b1b0a03743d3f9b3ce6/src/transformers/trainer.py#L2385) and renames the `staging_output_dir` to `output_dir`. Any node including the root node trying to save the model state in `staging_output_dir` will fail as that directory has been moved. https://github.com/huggingface/transformers/blob/ec43d6870aa1afb42a6d2b1b0a03743d3f9b3ce6/src/transformers/trainer.py#L2276C1-L2278 ``` Traceback (most recent call last): File "...py", line 123, in <module> main() File "....py", line 119, in main trainer.train(resume_from_checkpoint=ckpt_dir) File ".../lib/python3.11/site-packages/transformers/trainer.py", line 1537, in train return inner_training_loop( ^^^^^^^^^^^^^^^^^^^^ File ".../lib/python3.11/site-packages/transformers/trainer.py", line 1914, in _inner_training_loop self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2274, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2350, in _save_checkpoint self.save_model(staging_output_dir, _internal_call=True) File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2837, in save_model self._save(output_dir) File ".../lib/python3.11/site-packages/transformers/trainer.py", line 2893, in _save safetensors.torch.save_file(state_dict, os.path.join(output_dir, SAFE_WEIGHTS_NAME)) File ".../lib/python3.11/site-packages/safetensors/torch.py", line 281, in save_file serialize_file(_flatten(tensors), filename, metadata=metadata) RuntimeError: Parent directory ./tmp-checkpoint-500 does not exist. super().__init__(torch._C.PyTorchFileWriter(str(name))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ``` ### Fix Only global root process should rename the staging directory. ```py if self.args.process_index == 0 and staging_output_dir != output_dir: os.rename(staging_output_dir, output_dir) ``` This will still have race condition incase `save_on_each_node == True` as root process might rename the directory when some non-root node is saving the state in `staging_output_dir`. Should ideally sync all the processes and then rename but that will halt some processes. Better write to the `output_dir` directly. 
### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Set up the `Trainer` in DDP with multiple nodes 2. Enable saving checkpoints at some interval 3. Train ### Expected behavior Should see the following files in the checkpoint directory without a `RuntimeError: Parent directory ./tmp-checkpoint-500 does not exist` error: 1. model.safetensors 2. rng_state_0.pth 3. rng_state_1.pth 4. rng_state_XXX.pth 5. scheduler.pt 6. tokenizer.json 7. trainer_state.json 8. optimizer.pt 9. special_tokens_map.json 10. tokenizer_config.json 11. training_args.bin
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28015/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28015/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28014
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28014/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28014/comments
https://api.github.com/repos/huggingface/transformers/issues/28014/events
https://github.com/huggingface/transformers/pull/28014
2,040,315,855
PR_kwDOCUB6oc5h7e7a
28,014
Fixed spelling error in T5 tokenizer warning message (s/thouroughly/t…
{ "login": "jeddobson", "id": 11461294, "node_id": "MDQ6VXNlcjExNDYxMjk0", "avatar_url": "https://avatars.githubusercontent.com/u/11461294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeddobson", "html_url": "https://github.com/jeddobson", "followers_url": "https://api.github.com/users/jeddobson/followers", "following_url": "https://api.github.com/users/jeddobson/following{/other_user}", "gists_url": "https://api.github.com/users/jeddobson/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeddobson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeddobson/subscriptions", "organizations_url": "https://api.github.com/users/jeddobson/orgs", "repos_url": "https://api.github.com/users/jeddobson/repos", "events_url": "https://api.github.com/users/jeddobson/events{/privacy}", "received_events_url": "https://api.github.com/users/jeddobson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28014). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,702
1,704
1,702
CONTRIBUTOR
null
# Spelling correction This is a simple word change for a warning message generated by the T5 tokenizer ('src/transformers/models/t5/tokenization_t5.py') that appeared when converting LLAMA weights to HF format.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28014/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28014/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28014", "html_url": "https://github.com/huggingface/transformers/pull/28014", "diff_url": "https://github.com/huggingface/transformers/pull/28014.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28014.patch", "merged_at": 1702565523000 }
https://api.github.com/repos/huggingface/transformers/issues/28013
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28013/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28013/comments
https://api.github.com/repos/huggingface/transformers/issues/28013/events
https://github.com/huggingface/transformers/issues/28013
2,040,312,392
I_kwDOCUB6oc55nLJI
28,013
What's the purpose of Pad to 64 in LLaVA
{ "login": "ShoufaChen", "id": 28682908, "node_id": "MDQ6VXNlcjI4NjgyOTA4", "avatar_url": "https://avatars.githubusercontent.com/u/28682908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShoufaChen", "html_url": "https://github.com/ShoufaChen", "followers_url": "https://api.github.com/users/ShoufaChen/followers", "following_url": "https://api.github.com/users/ShoufaChen/following{/other_user}", "gists_url": "https://api.github.com/users/ShoufaChen/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShoufaChen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShoufaChen/subscriptions", "organizations_url": "https://api.github.com/users/ShoufaChen/orgs", "repos_url": "https://api.github.com/users/ShoufaChen/repos", "events_url": "https://api.github.com/users/ShoufaChen/events{/privacy}", "received_events_url": "https://api.github.com/users/ShoufaChen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ShoufaChen \r\nThanks for the issue,\r\nIn contrast to the official implementation, we decided to padd the language model's vocabulary size to introduce the new `<image>` token. Instead of adding 2, which is in fact sufficient, we decided to go for 64 for performance reasons as explained in this document: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc shared by @ArthurZucker here: https://huggingface.co./llava-hf/llava-1.5-7b-hf/discussions/5 \r\nLet me know if this makes sense to you", "Thanks for your detailed reply", "Thank you very much @ShoufaChen !" ]
1,702
1,702
1,702
NONE
null
Hello @younesbelkada, Thanks for your awesome work integrating LLaVA into the transformers repo. Would you mind providing more details about padding the tokenizer vocabulary to a multiple of 64 here? https://github.com/huggingface/transformers/blob/fe44b1f1a974139cd32a8884a63686425283b07c/src/transformers/models/llava/convert_llava_weights_to_hf.py#L71-L73 What's the advantage of 64? I thought 2 would be enough. Thanks in advance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28013/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28012
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28012/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28012/comments
https://api.github.com/repos/huggingface/transformers/issues/28012/events
https://github.com/huggingface/transformers/pull/28012
2,040,227,416
PR_kwDOCUB6oc5h7LgK
28,012
[Flax BERT] Update deprecated 'split' method
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? Fixes #27644. The JAX Array `split` method was deprecated in JAX 0.4.5: https://jax.readthedocs.io/en/latest/changelog.html#jax-0-4-5-mar-2-2023 This PR updates the four uses in the codebase to use the (recommended) `jnp.split` replacement.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28012/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28012/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28012", "html_url": "https://github.com/huggingface/transformers/pull/28012", "diff_url": "https://github.com/huggingface/transformers/pull/28012.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28012.patch", "merged_at": 1702637839000 }
https://api.github.com/repos/huggingface/transformers/issues/28011
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28011/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28011/comments
https://api.github.com/repos/huggingface/transformers/issues/28011/events
https://github.com/huggingface/transformers/pull/28011
2,040,213,311
PR_kwDOCUB6oc5h7Ib7
28,011
[`Whisper`] nit
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ArthurZucker is there any plans to include this fix in patch release?", "I don't think so! But the next release will have it" ]
1,702
1,704
1,702
COLLABORATOR
null
# What does this PR do? Was getting these strange warnings: ```python Ignored unknown kwarg option normalize Ignored unknown kwarg option normalize Ignored unknown kwarg option normalize Ignored unknown kwarg option normalize ``` with `processor = AutoProcessor.from_pretrained("asapp/sew-d-tiny-100k-ft-ls100h")`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28011/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/28011/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28011", "html_url": "https://github.com/huggingface/transformers/pull/28011", "diff_url": "https://github.com/huggingface/transformers/pull/28011.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28011.patch", "merged_at": 1702533064000 }
https://api.github.com/repos/huggingface/transformers/issues/28010
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28010/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28010/comments
https://api.github.com/repos/huggingface/transformers/issues/28010/events
https://github.com/huggingface/transformers/pull/28010
2,040,127,379
PR_kwDOCUB6oc5h61ri
28,010
[`Core tokenization`] `add_dummy_prefix_space` option to help with latest issues
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28010). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Just wanted to say this would be hugely helpful for us over at https://github.com/probcomp/hfppl !", "Likewise the ability to not include an extra SPIECE_UNDERLINE / Llama token 29871 when encoding a word with a space in front ( ` <word>`) would be huge for https://github.com/EleutherAI/lm-evaluation-harness !", "will make it for next release I hope! ", "Would love to see this as well! Thanks for working on this.", "Failing test is unrelated 😉 " ]
1,702
1,708
1,708
COLLABORATOR
null
# What does this PR do? Allows users to use `tokenizer.tokenize` while controlling the addition of a prefix space. Let's also update the fast tokenizer! Fixes #28622
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28010/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28010/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28010", "html_url": "https://github.com/huggingface/transformers/pull/28010", "diff_url": "https://github.com/huggingface/transformers/pull/28010.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28010.patch", "merged_at": 1708429832000 }
https://api.github.com/repos/huggingface/transformers/issues/28009
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28009/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28009/comments
https://api.github.com/repos/huggingface/transformers/issues/28009/events
https://github.com/huggingface/transformers/pull/28009
2,040,069,367
PR_kwDOCUB6oc5h6o8t
28,009
Fix bug with rotating checkpoints
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Somewhat of an aside, but there's no guarantee that a previous writer has created the directory before this point: https://github.com/huggingface/transformers/pull/28009/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR2379\r\n\r\nI've seen recently that a process entering this function can skip past save operations which would create the directory and arrive at this point before another process (the 'main') has a chance to create the directory. \r\n\r\n---\r\n\r\nAlso, is `should_save` only ever `True` for a single process, the main process? If so then it's a misnomer. It's documented as: \r\n\r\n> Whether or not the current process should write to disk, e.g., to save models and checkpoints.\r\n\r\nBut in a multi-GPU scenario, multiple processes participate in disk writing against the checkpoint directory. \r\n\r\nPS. Sorry for the bug! I didn't test my original change on multi-GPU. ", "@thundergolfer re: \r\n\r\n> Whether or not the current process should write to disk, e.g., to save models and checkpoints.\r\n\r\nYes. You'll find that it's used sparingly during saving of the weights, but the internal check is that we're on process 0", "@thundergolfer re;\r\n\r\n> I've seen recently that a process entering this function can skip past save operations which would create the directory and arrive at this point before another process (the 'main') has a chance to create the directory.\r\n\r\nCan you give an example so I can contextualize the logic in the code to see where we need to fix/make the directory instead?\r\n\r\nMy best guess is you don't have a model?\r\n\r\nI can put an `os.mkdir` there instead as an option, but would be good for us to be able to write a test for it. ", "> Can you give an example...\r\n\r\nIt was happening in multi-GPU scenario when using the [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) framework. Figured it was because a non-main process was skipping all the model saving steps and doing the unconditional save of state before main process created the directory. Can look to contribute a failing test which would motivate the change 👍 ", "That would be great @thundergolfer. I'm hesitant to do too much in this one PR, since this addresses the main issue for base users of the Trainer. A follow up (possibly not part of this patch?) where we can look in-depth at it for axolotl and ensure that it doesn't break other parts would be good. ", " It works. Thanks for the quick fix. @muellerzr ", "it didn't fix the issue when multi-node training. \r\ntransformer 4.36.1\r\n'''\r\nFileNotFoundError: [Errno 2] No such file or directory: '*/tmp-checkpoint-14696' -> '*/checkpoint-14696'\r\n'''", "@dumpmemory do you have the full stack-trace and perhaps a small reproduction script? \r\n\r\nI think there's still a race condition where checking for directory existence is not atomic with the rename attempt. If it's possible that the dir has already been moved, should just attempt the rename and catch the potential exception.\r\n\r\nhttps://github.com/huggingface/transformers/pull/28009/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR2390\r\n\r\nAlso it seems simpler to have only the `.should_save == True` (main process) do the rename. 
The rename can only succeed once, and only the main process should perform it.", "> @dumpmemory do you have the full stack-trace and perhaps a small reproduction script?\r\n> \r\n> I think there's still a race condition where checking for directory existence is not atomic with the rename attempt. If it's possible that the dir has already been moved, should just attempt the rename and catch the potential exception.\r\n> \r\n> https://github.com/huggingface/transformers/pull/28009/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR2390\r\n> \r\n> Also it seems simpler to have only the `.should_save == True` (main process) do the rename. The rename can only succeed once, and only the main process should perform it.\r\n\r\nI am training with multi-gpu setting and shared file system. so each node's rank 0 process try to do the rename which is a race condition. ", "it didn't fix the issue when multi-node training.\r\ntransformer 4.36.2\r\n'''\r\nFileNotFoundError: [Errno 2] No such file or directory: '/tmp-checkpoint-5' -> '/checkpoint-5'", "> it didn't fix the issue when multi-node training. transformer 4.36.2 ''' FileNotFoundError: [Errno 2] No such file or directory: '/tmp-checkpoint-5' -> '/checkpoint-5'\r\n\r\npls check the main branch. it might be the issue of nfs" ]
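For illustration, here is a minimal sketch of the "attempt the rename and handle the failure" pattern suggested in the comments above, assuming a shared filesystem where another rank may already have promoted the staging directory. It is a sketch under those assumptions, not the code that was merged.

```python
import os

def promote_staging_checkpoint(staging_dir: str, final_dir: str) -> None:
    # Attempt the rename directly rather than checking os.path.exists() first:
    # the existence check and the rename are not atomic, so on a shared
    # filesystem another rank can move the directory between the two calls.
    try:
        os.rename(staging_dir, final_dir)
    except FileNotFoundError:
        # Another process already promoted this checkpoint; only re-raise if
        # the final directory is still missing, which would be a real failure.
        if not os.path.isdir(final_dir):
            raise
```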
1,702
1,705
1,702
CONTRIBUTOR
null
# What does this PR do? There was a bug introduced in https://github.com/huggingface/transformers/pull/27820 where if we were on multi-GPU systems we would hit a race condition after saving on the processes because we cannot rename the staging directory multiple times. This PR ensures that it only happens on the main process. Fixes # (issue) Fixes https://github.com/huggingface/transformers/issues/27925 Alternative to https://github.com/huggingface/transformers/pull/27929 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts I would recommend a patch release as this is fully blocking users on multi-GPU after the last release.
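As a rough illustration of the fix described above, the staging-directory rename can be gated on the main process so that multiple ranks do not race to rename the same directory. This is a simplified sketch, not the merged diff; the attribute name `args.should_save` follows the Trainer convention mentioned in the comments.

```python
import os

def finalize_checkpoint(args, staging_output_dir: str, output_dir: str) -> None:
    # Only the process allowed to write to disk (process zero) performs the
    # rename; other ranks skip it instead of racing on the same directory.
    if args.should_save and os.path.exists(staging_output_dir):
        os.rename(staging_output_dir, output_dir)
```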
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28009/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28009/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28009", "html_url": "https://github.com/huggingface/transformers/pull/28009", "diff_url": "https://github.com/huggingface/transformers/pull/28009.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28009.patch", "merged_at": 1702487850000 }
https://api.github.com/repos/huggingface/transformers/issues/28008
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28008/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28008/comments
https://api.github.com/repos/huggingface/transformers/issues/28008/events
https://github.com/huggingface/transformers/issues/28008
2,040,014,898
I_kwDOCUB6oc55mCgy
28,008
Support batch image processing
{ "login": "wcy1122", "id": 31536861, "node_id": "MDQ6VXNlcjMxNTM2ODYx", "avatar_url": "https://avatars.githubusercontent.com/u/31536861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wcy1122", "html_url": "https://github.com/wcy1122", "followers_url": "https://api.github.com/users/wcy1122/followers", "following_url": "https://api.github.com/users/wcy1122/following{/other_user}", "gists_url": "https://api.github.com/users/wcy1122/gists{/gist_id}", "starred_url": "https://api.github.com/users/wcy1122/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wcy1122/subscriptions", "organizations_url": "https://api.github.com/users/wcy1122/orgs", "repos_url": "https://api.github.com/users/wcy1122/repos", "events_url": "https://api.github.com/users/wcy1122/events{/privacy}", "received_events_url": "https://api.github.com/users/wcy1122/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The input images are not necessary homogenous (unlike a tensor with batch dimension along which the elements are the same format).\r\n\r\nYou can definitely use `datasets` batch processing feature to speedup the task.\r\n\r\ncc @amyeroberts to see if they has any further suggestion/comment.", "Hi @wcy1122, thanks for raising this feature request! \r\n\r\nAs @ydshieh highlights, because the input images can be of different sizes and formats, and making them batchable may or may-not happen depending on the processor's configuration. Using `map` alongside `datasets` is a great way to parallize this process.\r\n\r\nThe image processors are, admittedly and unfortunately, very slow. Their principle purpose is to make it as easy as possible to go from an image to inputs which can be fed to the model i.e. a user can quickly test and get a prediction. \r\n\r\nPart of the reason for not engineering a highly-performant image processing library is that many great libraries already exist. You'll notice that in our training examples for vision models, [we use torchvision](https://github.com/huggingface/transformers/blob/3060899be51fe1a96b12de97376f2e2b8315bc4c/examples/flax/vision/run_image_classification.py#L338) for this very reason.", "Get it. Thanks for the reply and suggestion. " ]
1,702
1,703
1,703
NONE
null
### Feature request A faster image processor that supports batch image processing, especially for input data like video. ### Motivation The speed of image processing is too slow when the number of images is large. ### Your contribution Image processors like CLIPImageProcessor process each image sequentially, which makes them very slow.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28008/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28008/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28007
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28007/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28007/comments
https://api.github.com/repos/huggingface/transformers/issues/28007/events
https://github.com/huggingface/transformers/issues/28007
2,039,893,753
I_kwDOCUB6oc55lk75
28,007
Can't do word timestamps and beam search at the same time (whisper)
{ "login": "Snarkdoof", "id": 5370689, "node_id": "MDQ6VXNlcjUzNzA2ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/5370689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Snarkdoof", "html_url": "https://github.com/Snarkdoof", "followers_url": "https://api.github.com/users/Snarkdoof/followers", "following_url": "https://api.github.com/users/Snarkdoof/following{/other_user}", "gists_url": "https://api.github.com/users/Snarkdoof/gists{/gist_id}", "starred_url": "https://api.github.com/users/Snarkdoof/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Snarkdoof/subscriptions", "organizations_url": "https://api.github.com/users/Snarkdoof/orgs", "repos_url": "https://api.github.com/users/Snarkdoof/repos", "events_url": "https://api.github.com/users/Snarkdoof/events{/privacy}", "received_events_url": "https://api.github.com/users/Snarkdoof/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante ", "I believe there's already an active PR for this: https://github.com/huggingface/transformers/pull/26699 However, the PR might need a little more work as it is not a polished solution yet (at least that's what I think).", "Fixed by https://github.com/huggingface/transformers/pull/28114 👍 " ]
1,702
1,703
1,703
NONE
null
### System Info Tested on python 3.8.10, transformers 4.36.0.dev0 ### Who can help? @ArthurZucker @sanchit-gandhi (suggested by peregilk) ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import pipeline import torch model = "NbAiLabBeta/nb-whisper-base" device = "cuda:0" p = pipeline("automatic-speech-recognition", model, torch_dtype=torch.float16, device=device, return_timestamps="word") args = {"language": "norwegian", "task": "transcribe", "num_beams": 3} outputs = p(audiofile, chunk_length_s=28, batch_size=6, generate_kwargs=args) ``` Fails with: > Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py", line 357, in __call__ return super().__call__(inputs, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1132, in __call__ return next( File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 266, in __next__ processed = self.infer(next(self.iterator), **self.params) File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1046, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py", line 552, in _forward generate_kwargs["num_frames"] = stride[0] // self.feature_extractor.hop_length TypeError: unsupported operand type(s) for //: 'tuple' and 'int' It works with *either* num_beams:1 OR return_timestamps=True/False, but not combined. ### Expected behavior It should return processed data. :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28007/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28007/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28006
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28006/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28006/comments
https://api.github.com/repos/huggingface/transformers/issues/28006/events
https://github.com/huggingface/transformers/pull/28006
2,039,889,300
PR_kwDOCUB6oc5h6B8a
28,006
Clearer error for SDPA when explicitly requested
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@fxmarty Are you happy for me to merge? ", "sure @amyeroberts !" ]
1,702
1,705
1,705
COLLABORATOR
null
As per title, partially fixes https://github.com/huggingface/transformers/issues/28003.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28006/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28006/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/28006", "html_url": "https://github.com/huggingface/transformers/pull/28006", "diff_url": "https://github.com/huggingface/transformers/pull/28006.diff", "patch_url": "https://github.com/huggingface/transformers/pull/28006.patch", "merged_at": 1705421444000 }
https://api.github.com/repos/huggingface/transformers/issues/28005
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28005/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28005/comments
https://api.github.com/repos/huggingface/transformers/issues/28005/events
https://github.com/huggingface/transformers/issues/28005
2,039,623,205
I_kwDOCUB6oc55ki4l
28,005
Open to contribution: adding `torch.nn.functional.scaled_dot_product_attention` support for more architectures
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[ { "id": 6126880899, "node_id": "LA_kwDOCUB6oc8AAAABbTDIgw", "url": "https://api.github.com/repos/huggingface/transformers/labels/contributions-welcome", "name": "contributions-welcome", "color": "F99E09", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @fxmarty I can take a look at this issue. Of I can ask questions if necessary. Or has anyone taken it already?", "does someone know if longT5 and all T5 models are blocked by bias support in flash attention ?\r\n\r\nhttps://github.com/Dao-AILab/flash-attention/pull/617", "Hi @davidan5 are you working on the implementation?", "@ENate I was trying to understand the status and have an estimation of the code change to see if I can contribute.", "I see.", "I'm interested in taking a look at this for the Mistral model if that's still needed. Otherwise, please let me know if there are any other models that still need some work. Thanks", "Is LongT5 still open?", "Mistral is already covered! LongT5 if it is like T5 and has attention bias that might not be supported", "Oh yea, looks like you added support for Mistral/Mixtral last month.\r\n\r\nIt doesn't seem to be supported for BERT yet (I think someone else is working on FA2 but not SDPA), so I'll take a crack at it. It looks like there is a config for relative position embeddings for BERT, so I'll just have it fallback to the original attention for configs using relative position embeddings.\r\n\r\n@ArthurZucker - Please let me know if you know if someone else is already working on SDPA for BERT and I can look for something else to do. Thanks!", "Not sure anyone is working on that but bert is already so small that I doubt it will have a lot of impact on perf! " ]
1,702
1,706
null
COLLABORATOR
null
### Feature request In [`Transformers 4.36`](https://github.com/huggingface/transformers/releases/tag/v4.36.0), we started adding native support of [torch.nn.functional.scaled_dot_product_attention](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA), enabled by default in Transformers: https://huggingface.co./docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention SDPA allows to dispatch to memory-efficient attention, flash attention on supported GPUs (currently NVIDIA-only), and even on [Intel CPUs](https://pytorch.org/blog/new-features-for-ai/#flash-attention-based-scaled-dot-product-algorithm-for-cpu). For the record, here's a benchmark on some currently supported models: **[Training benchmark](https://gist.github.com/fxmarty/7e75cc3942d6974e4849093ebea0a331), run on A100-SXM4-80GB.** | Model | Batch size | Sequence length | Time per batch (`"eager"`, s) | Time per batch (`"sdpa"`, s) | **Speedup** | Peak memory (`"eager"`, MB) | Peak memory (`"sdpa"`, MB) | **Memory savings** | |-----------|------------|-----------------|-------------------------------|------------------------------|-------------|-----------------------------|----------------------------|-----------------------| | llama2 7b | 4 | 1024 | 1.065 | 0.90 | **19.4%** | 73878.28 | 45977.81 | **60.7%** | | llama2 7b | 4 | 2048 | OOM | 1.87 | / | OOM | 78394.58 | **SDPA does not OOM** | | llama2 7b | 1 | 2048 | 0.64 | 0.48 | **32.0%** | 55557.01 | 29795.63 | **86.4%** | | llama2 7b | 1 | 3072 | OOM | 0.75 | / | OOM | 37916.08 | **SDPA does not OOM** | | llama2 7b | 1 | 4096 | OOM | 1.03 | / | OOM | 46028.14 | **SDPA does not OOM** | | llama2 7b | 2 | 4096 | OOM | 2.05 | / | OOM | 78428.14 | **SDPA does not OOM** | **[Inference benchmark](https://gist.github.com/fxmarty/5113e4304fbdd38c9c3702ce44683f6a), run on A100-SXM4-80GB.** | Model | Batch size | Prompt length | Num new tokens | Per token latency `"eager"` (ms) | Per token latency `"sdpa"` (ms) | **Speedup** | |------------------|------------|---------------|----------------|----------------------------------|---------------------------------|-------------| | llama2 13b | 1 | 1024 | 1 (prefill) | 178.66 | 159.36 | **12.11%** | | llama2 13b | 1 | 100 | 100 | 40.35 | 37.62 | **7.28%** | | llama2 13b | 8 | 100 | 100 | 40.55 | 38.06 | **6.53%** | | Whisper v3 large | 1 | / | 62 | 20.05 | 18.90 | **6.10%** | | Whisper v3 large | 8 | / | 77 | 25.42 | 24.77 | **2.59%** | | Whisper v3 large | 16 | / | 77 | 28.51 | 26.32 | **8.34%** | Previously, we had a partial support of SDPA in [Optimum BetterTransformer](https://huggingface.co./docs/optimum/bettertransformer/overview) but we are now looking to slowly deprecate it in favor of upstream support of SDPA directly in Transformers. 
Here are the architectures for which support has been requested: - [ ] Codegen (https://github.com/huggingface/optimum/issues/1050) - [ ] LLAVA (https://github.com/huggingface/optimum/issues/1592) - [ ] Marian (https://github.com/huggingface/optimum/issues/1142) - [ ] Mistral (https://github.com/huggingface/optimum/issues/1553) - [ ] LongT5 (https://github.com/huggingface/optimum/issues/1506) - [ ] ViT (https://github.com/huggingface/optimum/issues/1553) The integration could take inspiration from https://github.com/huggingface/optimum/blob/main/optimum/bettertransformer/models/decoder_models.py & https://github.com/huggingface/optimum/blob/main/optimum/bettertransformer/models/attention.py ### Motivation Faster training & inference, lower memory requirement ### Your contribution I may work on some at some point, but contributions are most welcome. You should refer to https://github.com/huggingface/transformers/pull/26572 to add the support of SDPA for a model, roughly following these steps: * Create a `XxxSdpaAttention` class inheriting from `XxxAttention` and implement the attention logic using SDPA * Use `_prepare_4d_causal_attention_mask_for_sdpa` instead of `_prepare_4d_causal_attention_mask` for SDPA * Use `_prepare_4d_attention_mask_for_sdpa` instead of `_prepare_4d_attention_mask` for SDPA * Add `_supports_sdpa = True` to `XxxPreTrainedModel` * Add `"sdpa"` key to `XXX_ATTENTION_CLASSES` in the model modeling file
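To make the checklist above more concrete, below is a minimal, self-contained attention module built around `torch.nn.functional.scaled_dot_product_attention`. It is only an illustrative sketch: the real `XxxSdpaAttention` classes subclass each model's eager attention class, reuse its projections and KV-cache handling, and build the 4D mask with the `_prepare_4d_*_attention_mask_for_sdpa` helpers listed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SdpaSelfAttention(nn.Module):
    def __init__(self, hidden_size: int = 64, num_heads: int = 4, dropout: float = 0.0):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = hidden_size // num_heads
        self.qkv = nn.Linear(hidden_size, 3 * hidden_size)
        self.out = nn.Linear(hidden_size, hidden_size)
        self.dropout = dropout

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor = None) -> torch.Tensor:
        bsz, seq_len, _ = hidden_states.shape
        q, k, v = self.qkv(hidden_states).chunk(3, dim=-1)
        # Reshape to (batch, num_heads, seq_len, head_dim) exactly as the eager attention path does.
        q, k, v = (t.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2) for t in (q, k, v))
        attn = F.scaled_dot_product_attention(
            q, k, v,
            attn_mask=attention_mask,  # 4D additive mask (or None); note SDPA cannot return attention weights
            dropout_p=self.dropout if self.training else 0.0,
            is_causal=False,
        )
        attn = attn.transpose(1, 2).reshape(bsz, seq_len, -1)
        return self.out(attn)


# Smoke test on random inputs
layer = SdpaSelfAttention()
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```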
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28005/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28005/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/28004
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28004/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28004/comments
https://api.github.com/repos/huggingface/transformers/issues/28004/events
https://github.com/huggingface/transformers/issues/28004
2,039,618,109
I_kwDOCUB6oc55kho9
28,004
Error in loading reduced multilingual layoutxlm model: RuntimeError: Error(s) in loading state_dict for LayoutLMv2ForTokenClassification:
{ "login": "Merchaoui", "id": 80455763, "node_id": "MDQ6VXNlcjgwNDU1NzYz", "avatar_url": "https://avatars.githubusercontent.com/u/80455763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Merchaoui", "html_url": "https://github.com/Merchaoui", "followers_url": "https://api.github.com/users/Merchaoui/followers", "following_url": "https://api.github.com/users/Merchaoui/following{/other_user}", "gists_url": "https://api.github.com/users/Merchaoui/gists{/gist_id}", "starred_url": "https://api.github.com/users/Merchaoui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Merchaoui/subscriptions", "organizations_url": "https://api.github.com/users/Merchaoui/orgs", "repos_url": "https://api.github.com/users/Merchaoui/repos", "events_url": "https://api.github.com/users/Merchaoui/events{/privacy}", "received_events_url": "https://api.github.com/users/Merchaoui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @Merchaoui, thanks for raising this issue! \r\n\r\nLoading the model with `ignore_mismatched_sizes=True` is the correct way to go i.e.:\r\n\r\n```py\r\nmodel = LayoutLMv2ForTokenClassification.from_pretrained(\r\n \"./layoutxlm_reduced/microsoft/layoutxlm-base\",\r\n num_labels=len(labels),\r\n ignore_mismatched_sizes=True\r\n)\r\n```\r\n\r\nThe other error shouldn't be happening though. Could you share a minimal code snippet we could run which reproduces the error when training? \r\n\r\nCould you also share the error message with the full traceback? \r\n", "Thank you so much, I just figured it out. When I reduced the model, I removed unnecessary tokens from LayoutXLMTokenizerFast and for the training I was processing with LayoutXLMTokenizer. Now it´s working thanks", "@Merchaoui Great - glad to hear it's working! Thanks for following up on the issue to confirm :) ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.33.3 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cpu (False) - Tensorflow version (GPU?): 2.10.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: No ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction model = LayoutLMv2ForTokenClassification.from_pretrained("./layoutxlm_reduced/microsoft/layoutxlm-base", num_labels=len(labels)) ### Expected behavior I followed the official tutorial to reduce the size of the model because I only need the English and Spanish languages for layoutxlm (https://medium.com/@coding-otter/reduce-your-transformers-model-size-by-removing-unwanted-tokens-and-word-embeddings-eec08166d2f9). When I load the reduced model I get this error: "RuntimeError: Error(s) in loading state_dict for LayoutLMv2ForTokenClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([2, 768]) from checkpoint, the shape in current model is torch.Size([6, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([2]) from checkpoint, the shape in current model is torch.Size([6]). You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method." and when I set `ignore_mismatched_sizes=True`, I get the error "IndexError: index out of range in self" when training starts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28004/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28004/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28003
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28003/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28003/comments
https://api.github.com/repos/huggingface/transformers/issues/28003/events
https://github.com/huggingface/transformers/issues/28003
2,039,616,571
I_kwDOCUB6oc55khQ7
28,003
(Llama-2) (4.36.0) TensorParallelPreTrainedModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention
{ "login": "VasilGeorgiev39", "id": 149842188, "node_id": "U_kgDOCO5pDA", "avatar_url": "https://avatars.githubusercontent.com/u/149842188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VasilGeorgiev39", "html_url": "https://github.com/VasilGeorgiev39", "followers_url": "https://api.github.com/users/VasilGeorgiev39/followers", "following_url": "https://api.github.com/users/VasilGeorgiev39/following{/other_user}", "gists_url": "https://api.github.com/users/VasilGeorgiev39/gists{/gist_id}", "starred_url": "https://api.github.com/users/VasilGeorgiev39/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VasilGeorgiev39/subscriptions", "organizations_url": "https://api.github.com/users/VasilGeorgiev39/orgs", "repos_url": "https://api.github.com/users/VasilGeorgiev39/repos", "events_url": "https://api.github.com/users/VasilGeorgiev39/events{/privacy}", "received_events_url": "https://api.github.com/users/VasilGeorgiev39/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Thank you @VasilGeorgiev39, a quick fix is to load your transformers model with\r\n\r\n```python\r\nimport transformers\r\nimport tensor_parallel as tp\r\n\r\ntokenizer = transformers.AutoTokenizer.from_pretrained(\"meta-llama/Llama-2-13b-chat-hf\")\r\nmodel = transformers.AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-2-13b-chat-hf\", attn_implementation=\"eager\")\r\n\r\nmodelp = tp.tensor_parallel(model)\r\n```\r\n\r\nIt looks like `tp.tensor_parallel` is overloading Transformers classes and does not have `_supports_sdpa = True`, which results in this error. I'll see what we can do.\r\n\r\nFor reference: https://huggingface.co./docs/transformers/perf_infer_gpu_one#flashattention-and-memory-efficient-attention-through-pytorchs-scaleddotproductattention", "I am having the same issue with `Llama-2-70b-chat-hf model`. Tried to add `attn_implementation=\"eager`, but got the following new \"no attribute 'to_legacy_cache'\" errors:\r\n\r\n```\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1527, in _call_impl [1/1003]\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensor_parallel/pretrained_model.py\", line 76, in forward\r\n return self.wrapped_model(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/tensor_parallel/tensor_parallel.py\", line 159, in forward\r\n return parallel_apply(self.module_shards, inputs, kwargs_tup, self.devices)[self.output_device_index]\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py\", line 110, in parallel_apply\r\n output.reraise()\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/_utils.py\", line 694, in reraise\r\n raise exception\r\nAttributeError: Caught AttributeError in replica 0 on device 0.\r\nOriginal Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py\", line 85, in _worker\r\n output = module(*input, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py\", line 1174, in forward\r\n outputs = self.model(\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py\", line 1086, in forward\r\n next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache\r\nAttributeError: 'tuple' object has no attribute 'to_legacy_cache'\r\n```\r\n", "cc @gante as well! 
", "@taozhang9527 since `tensor_parallel` overloads the models, it is possible their classes are not compatible with the models that got the new cache format in `transformers` v4.36. I'd open an issue in `tensor_parallel` :)" ]
1,702
1,707
null
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @fxmarty ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import transformers import tensor_parallel as tp tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf") model = transformers.AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf") modelp = tp.tensor_parallel(model) ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /root/code/cot-unfaithfulness/test-hf.py in line 8 [5](file:///root/code/cot-unfaithfulness/test-hf.py?line=4) tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf") [6](file:///root/code/cot-unfaithfulness/test-hf.py?line=5) model = transformers.AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf") ----> [8](file:///root/code/cot-unfaithfulness/test-hf.py?line=7) modelp = tp.tensor_parallel(model) File /opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py:61, in tensor_parallel(module, device_ids, tensor_parallel_config, distributed, sharded, sharded_param_names, **kwargs) [59](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=58) else: [60](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=59) if isinstance(module, PreTrainedModel): ---> [61](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=60) return TensorParallelPreTrainedModel( [62](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=61) module, [63](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=62) device_ids=device_ids, [64](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=63) tensor_parallel_config=tensor_parallel_config, [65](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=64) distributed=distributed, [66](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=65) sharded=sharded, [67](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=66) sharded_param_names=sharded_param_names, [68](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=67) **kwargs, [69](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=68) ) [70](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=69) else: [71](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=70) return TensorParallel( [72](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=71) module, [73](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=72) device_ids=device_ids, (...) 
[78](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=77) **kwargs, [79](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/factory.py?line=78) ) File /opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py:47, in TensorParallelPreTrainedModel.__init__(self, module, device_ids, output_device, output_device_index, tensor_parallel_config, **kwargs) [38](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=37) def __init__( [39](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=38) self, [40](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=39) module: PreTrainedModel, (...) [45](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=44) **kwargs, [46](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=45) ): ---> [47](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=46) super().__init__(module.config) # Temporary empty config. Gets replaced in from_pretrained [49](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=48) if hasattr(module, "_hf_hook"): [50](file:///opt/conda/lib/python3.10/site-packages/tensor_parallel/pretrained_model.py?line=49) from accelerate.hooks import remove_hook_from_module File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:1190, in PreTrainedModel.__init__(self, config, *inputs, **kwargs) [1184](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1183) raise ValueError( [1185](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1184) f"Parameter config in `{self.__class__.__name__}(config)` should be an instance of class " [1186](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1185) "`PretrainedConfig`. 
To create a model from a pretrained model use " [1187](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1186) f"`model = {self.__class__.__name__}.from_pretrained(PRETRAINED_MODEL_NAME)`" [1188](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1187) ) [1189](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1188) # Save config and origin of the pretrained weights if given in model -> [1190](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1189) config = self._autoset_attn_implementation( [1191](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1190) config, torch_dtype=torch.get_default_dtype(), check_device_map=False [1192](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1191) ) [1193](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1192) self.config = config [1195](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1194) self.name_or_path = config.name_or_path File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:1311, in PreTrainedModel._autoset_attn_implementation(cls, config, use_flash_attention_2, torch_dtype, device_map, check_device_map) [1302](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1301) cls._check_and_enable_flash_attn_2( [1303](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1302) config, [1304](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1303) torch_dtype=torch_dtype, (...) [1307](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1306) check_device_map=check_device_map, [1308](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1307) ) [1309](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1308) elif requested_attn_implementation in [None, "sdpa"]: [1310](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1309) # use_flash_attention_2 takes priority over SDPA, hence SDPA treated in this elif. 
-> [1311](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1310) config = cls._check_and_enable_sdpa( [1312](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1311) config, hard_check_only=False if requested_attn_implementation is None else True [1313](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1312) ) [1314](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1313) else: [1315](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1314) config._attn_implementation = "eager" File /opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py:1464, in PreTrainedModel._check_and_enable_sdpa(cls, config, hard_check_only) [1462](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1461) if hard_check_only: [1463](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1462) if not cls._supports_sdpa: -> [1464](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1463) raise ValueError( [1465](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1464) f"{cls.__name__} does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to " [1466](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1465) "request support for this architecture: https://github.com/huggingface/transformers/issues/new" [1467](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1466) ) [1468](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1467) if not is_torch_sdpa_available(): [1469](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1468) raise ImportError( [1470](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1469) "PyTorch SDPA requirements in Transformers are not met. Please install torch>=2.1.1." [1471](file:///opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py?line=1470) ) ValueError: TensorParallelPreTrainedModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new ``` ### Expected behavior Wrap the model using the tensor_parallel library https://github.com/BlackSamorez/tensor_parallel This succeeds with transformers==4.35.2 The exception seems to be raised from this line added in 4.36 https://github.com/huggingface/transformers/commit/80377eb018c077dba434bc8e7912bcaed3a64d09#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaR1435
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28003/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28003/timeline
reopened
null
null
https://api.github.com/repos/huggingface/transformers/issues/28002
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28002/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28002/comments
https://api.github.com/repos/huggingface/transformers/issues/28002/events
https://github.com/huggingface/transformers/issues/28002
2,039,559,344
I_kwDOCUB6oc55kTSw
28,002
Unhandled case when use_weighted_layer_sum=True and return_dict=True in WhisperForAudioClassification
{ "login": "ElsebaiyMohamed", "id": 77920008, "node_id": "MDQ6VXNlcjc3OTIwMDA4", "avatar_url": "https://avatars.githubusercontent.com/u/77920008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ElsebaiyMohamed", "html_url": "https://github.com/ElsebaiyMohamed", "followers_url": "https://api.github.com/users/ElsebaiyMohamed/followers", "following_url": "https://api.github.com/users/ElsebaiyMohamed/following{/other_user}", "gists_url": "https://api.github.com/users/ElsebaiyMohamed/gists{/gist_id}", "starred_url": "https://api.github.com/users/ElsebaiyMohamed/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ElsebaiyMohamed/subscriptions", "organizations_url": "https://api.github.com/users/ElsebaiyMohamed/orgs", "repos_url": "https://api.github.com/users/ElsebaiyMohamed/repos", "events_url": "https://api.github.com/users/ElsebaiyMohamed/events{/privacy}", "received_events_url": "https://api.github.com/users/ElsebaiyMohamed/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ElsebaiyMohamed, thanks for raising this issue and providing details on the error + a snippet. Could you also provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output? ", "Hi @amyeroberts ,\r\nApologies for the delayed response! 🙏 Life threw a curveball, but I'm back on track. Thanks for your patience!\r\n\r\nRegarding your request, here's the output of `transformers-cli env`:\r\n\r\n```bash\r\ntransformers version: 4.36.0\r\nPlatform: Linux-5.15.133+-x86_64-with-glibc2.35\r\nPython version: 3.10.12\r\nHuggingface_hub version: 0.19.4\r\nSafetensors version: 0.4.1\r\nAccelerate version: 0.25.0\r\nAccelerate config: \tnot found\r\nPyTorch version (GPU?): 2.0.0 (True)\r\nTensorflow version (GPU?): 2.13.0 (True)\r\nFlax version (CPU?/GPU?/TPU?): 0.7.5 (gpu)\r\nJax version: 0.4.21\r\nJaxLib version: 0.4.21\r\nUsing GPU in script?: yes\r\nUsing distributed or parallel set-up in script?: no\r\n```\r\n\r\nLet me know if there's anything else I can help you with.", "@ElsebaiyMohamed Great - thanks for providing this info! \r\n\r\ncc @sanchit-gandhi @ylacombe " ]
1,702
1,705
1,705
NONE
null
@sanchit-gandhi I use the WhisperForAudioClassification task and want to use `use_weighted_layer_sum=True`, but there is a problem when calling forward: the encoder can return a tuple or a dict if `return_dict=True`, yet the code path for `use_weighted_layer_sum=True` assumes the return value is a tuple only, so the line `hidden_states = torch.stack(encoder_outputs, dim=1)` raises an error if the encoder returns a dict. There is a workaround by using `return_dict=False`, but when the model is later used with `pipeline` it raises an error because the pipeline assumes the model returns a dict, not a tuple. [Link to code with the problem](https://github.com/huggingface/transformers/blob/c7f076a00ee54f777b3d3322c91bc11489a47950/src/transformers/models/whisper/modeling_whisper.py#L2918C6-L2918C6) ```py if self.config.use_weighted_layer_sum: hidden_states = torch.stack(encoder_outputs, dim=1) # This line raises an error when return_dict=True and use_weighted_layer_sum=True norm_weights = nn.functional.softmax(self.layer_weights, dim=-1) hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1) else: hidden_states = encoder_outputs[0] ``` **Reproduce error** ```py import torch from transformers import WhisperForAudioClassification, AutoFeatureExtractor from datasets import load_dataset dataset = load_dataset('seba3y/speechocean762') dataset = dataset['train'] sampling_rate = dataset.features["audio"].sampling_rate dataset = dataset.remove_columns(['utt_name', 'text', 'completeness', 'fluency', 'prosodic']) feature_extractor = AutoFeatureExtractor.from_pretrained("seba3y/whisper-tiny") model = WhisperForAudioClassification.from_pretrained("seba3y/whisper-tiny", use_weighted_layer_sum=True, return_dict=True) # test if it works inputs = feature_extractor(dataset[3]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") with torch.no_grad(): logits = model(**inputs).logits predicted_class_ids = torch.argmax(logits, dim=-1).item() predicted_label = model.config.id2label[predicted_class_ids] print(predicted_label) ```
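For reference, one possible way to make the weighted-layer-sum branch robust to both return types would be to select the per-layer hidden states explicitly rather than stacking `encoder_outputs` itself. This mirrors the quoted block above and is only a sketch; it is not necessarily the fix that ends up merged, and it assumes `output_hidden_states` is enabled so that the per-layer states are actually returned.

```py
if self.config.use_weighted_layer_sum:
    # encoder_outputs is a ModelOutput when return_dict=True and a plain tuple otherwise,
    # so pick the tuple of per-layer hidden states explicitly in both cases.
    layer_states = encoder_outputs.hidden_states if return_dict else encoder_outputs[1]
    hidden_states = torch.stack(layer_states, dim=1)
    norm_weights = nn.functional.softmax(self.layer_weights, dim=-1)
    hidden_states = (hidden_states * norm_weights.view(-1, 1, 1)).sum(dim=1)
else:
    hidden_states = encoder_outputs[0]
```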
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28002/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28002/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28001
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28001/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28001/comments
https://api.github.com/repos/huggingface/transformers/issues/28001/events
https://github.com/huggingface/transformers/issues/28001
2,039,508,919
I_kwDOCUB6oc55kG-3
28,001
UserWarning: Using `max_length`'s default (448) at Inference Endpoint deployment
{ "login": "SeeknnDestroy", "id": 44926076, "node_id": "MDQ6VXNlcjQ0OTI2MDc2", "avatar_url": "https://avatars.githubusercontent.com/u/44926076?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SeeknnDestroy", "html_url": "https://github.com/SeeknnDestroy", "followers_url": "https://api.github.com/users/SeeknnDestroy/followers", "following_url": "https://api.github.com/users/SeeknnDestroy/following{/other_user}", "gists_url": "https://api.github.com/users/SeeknnDestroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/SeeknnDestroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SeeknnDestroy/subscriptions", "organizations_url": "https://api.github.com/users/SeeknnDestroy/orgs", "repos_url": "https://api.github.com/users/SeeknnDestroy/repos", "events_url": "https://api.github.com/users/SeeknnDestroy/events{/privacy}", "received_events_url": "https://api.github.com/users/SeeknnDestroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @SeeknnDestroy, thanks for raising an issue! \r\n\r\nThere's three parts to the issue being raised. \r\n\r\nWith regards to the error message, this is because the deprecated argument `max_length` is used for that checkpoint's [generation config](https://huggingface.co./distil-whisper/distil-large-v2/blob/c204f3c76ec464a0ab9bcfd19afa0add93f69983/generation_config.json#L127). @sanchit-gandhi \r\n\r\nThe second is about the transcription behaviour. @sanchit-gandhi is best placed to answer about the recommended way to treat long audio files. \r\n\r\nThe final point is how to configure the model behaviour using inference endpoints, which I'll defer to @philschmid :) ", "Whisper has a receptive field of 30s. For long-form transcription (>30s audio), we need to enable \"chunking\" to transcribe chunks of 30s audios incrementally, and then \"stitch\" the resulting transcriptions together at the boundaries. You can see how to run this in Python here: https://huggingface.co./distil-whisper/distil-large-v2#long-form-transcription\r\nIt's quite simple by passing one extra line to the pipeline: `chunk_length_s=15`\r\n\r\nI'll leave @philschmid to advise on how to integrate this into your endpoint!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,707
1,707
NONE
null
### System Info **Inference Endpoints** - **Model**: distil-whisper/distil-large-v2 - **Task**: automatic-speech-recognition - **Revision**: c204f3c76ec464a0ab9bcfd19afa0add93f69983 - **Container type**: Default - **Instance**: AWS, us-east-1 - **Instance Type**: GPU · Nvidia Tesla T4 · 1x GPU · 16 GB ### Who can help? @sanchit-gandhi @Narsil ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Deploy the distil-whisper/distil-large-v2 model via Inference Endpoints with the above system configuration 2. Run the reference code as given: ```python import requests API_URL = "https://ovibb90ga7zdc5qa.us-east-1.aws.endpoints.huggingface.cloud" headers = { "Authorization": "Bearer XXXXXX", "Content-Type": "audio/flac" } def query(filename): with open(filename, "rb") as f: data = f.read() response = requests.post(API_URL, headers=headers, data=data) return response.json() output = query("sample1.flac") ``` ### Expected behavior Ideally, the model should transcribe the full content of longer audio inputs without being constrained by the `max_length` parameter, especially given the warning about its upcoming deprecation. Below is the warning I am getting: ### Full warning message ``` 2023/12/13 14:22:36 ~ /opt/conda/lib/python3.9/site-packages/transformers/generation/utils.py:1369: UserWarning: Using `max_length`'s default (448) to control the generation length. This behaviour is deprecated and will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation. ``` **Additional Context**: We have a Hugging Face enterprise account as @safevideo. Using `distil-whisper/distil-large-v2` for ASR, we face a `UserWarning` regarding `max_length`, potentially affecting our ability to transcribe longer audio files. Seeking advice on handling this and, ideally, a way to get full transcriptions of longer audio at Inference Endpoints.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28001/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28001/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/28000
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28000/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28000/comments
https://api.github.com/repos/huggingface/transformers/issues/28000/events
https://github.com/huggingface/transformers/issues/28000
2,039,461,943
I_kwDOCUB6oc55j7g3
28,000
XLM question-answering pipeline is flaky
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @fxmarty - thanks for raising this! \r\n\r\nTo help with debugging - has this been observed with other checkpoints or only the tiny random ones for testing? " ]
1,702
1,705
1,705
COLLABORATOR
null
### System Info transformers main, but tested on commits in the last three weeks, same issue ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python for i in range(50): from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline import torch model = AutoModelForQuestionAnswering.from_pretrained("hf-internal-testing/tiny-random-XLMModel") tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-XLMModel") pipe = pipeline("question-answering", model=model, tokenizer=tokenizer) question = "Whats my name?" context = "My Name is Philipp and I live in Nuremberg." outputs = pipe(question, context) ``` sometimes fail with ``` Traceback (most recent call last): File "<tmp 4>", line 23, in <module> outputs = pipe(question, context) File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/question_answering.py", line 393, in __call__ return super().__call__(examples[0], **kwargs) File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/base.py", line 1132, in __call__ return next( File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/pt_utils.py", line 125, in __next__ processed = self.infer(item, **self.params) File "/home/fxmarty/hf_internship/transformers/src/transformers/pipelines/question_answering.py", line 563, in postprocess "start": np.where(char_to_word == token_to_orig_map[s])[0][0].item(), KeyError: 5 ``` ### Expected behavior no error. I can have a look if I have time
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28000/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28000/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27999
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27999/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27999/comments
https://api.github.com/repos/huggingface/transformers/issues/27999/events
https://github.com/huggingface/transformers/pull/27999
2,039,327,117
PR_kwDOCUB6oc5h4GDC
27,999
[`CI slow`] Fix expected values
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks a lot!" ]
1,702
1,702
1,702
COLLABORATOR
null
# What does this PR do? Fix slow test. The init was probably done twice because: ```python Some weights of ViTMSNForImageClassification were not initialized from the model checkpoint at facebook/vit-msn-small and are newly initialized: ['classifier.bias', 'classifier.weight'] ``` so the weights should not be initialized by `from_pretrained` but by `_init_weights`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27999/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27999/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27999", "html_url": "https://github.com/huggingface/transformers/pull/27999", "diff_url": "https://github.com/huggingface/transformers/pull/27999.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27999.patch", "merged_at": 1702471031000 }
https://api.github.com/repos/huggingface/transformers/issues/27998
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27998/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27998/comments
https://api.github.com/repos/huggingface/transformers/issues/27998/events
https://github.com/huggingface/transformers/issues/27998
2,039,321,707
I_kwDOCUB6oc55jZRr
27,998
CodeLlama-34b-Instruct-hf
{ "login": "zhaotyer", "id": 89376832, "node_id": "MDQ6VXNlcjg5Mzc2ODMy", "avatar_url": "https://avatars.githubusercontent.com/u/89376832?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaotyer", "html_url": "https://github.com/zhaotyer", "followers_url": "https://api.github.com/users/zhaotyer/followers", "following_url": "https://api.github.com/users/zhaotyer/following{/other_user}", "gists_url": "https://api.github.com/users/zhaotyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaotyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaotyer/subscriptions", "organizations_url": "https://api.github.com/users/zhaotyer/orgs", "repos_url": "https://api.github.com/users/zhaotyer/repos", "events_url": "https://api.github.com/users/zhaotyer/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaotyer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @younesbelkada as the error seems to be being hit in bitsandbytes ", "Hi @zhaotyer -- can you reproduce the issue with a smaller, publically available model like `codellama/CodeLlama-7b-hf`?\r\n\r\nThe `RuntimeError` you got often shows up in OOM cases. It would be weird to get one given that you're using a 34B 4-bit quantized model on a 80GB GPU, but let's rule that out first.", "Oops sorry, this issue somehow got out of my eyes \r\nI second what @gante said! If you can confirm that, it would great\r\nMoreover, the issue \r\n```\r\nRuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`\r\n```\r\nSometimes indicates that there is something off with embedding layers and passed indices, to further debug, can you please:\r\n1- Re-run your script with `CUDA_LAUNCH_BLOCKING=1 yourscript.py` --> this will give a new traceback with the exact position where things are off\r\n2- Try out to run the same script on CPU, without quantization\r\nThanks!" ]
1,702
1,707
null
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.25.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction infer.py ``` from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig import torch model_id = "/workspace/CodeLlama-34b-Instruct-hf" quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16 ) tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, quantization_config=quantization_config, device_map="auto", ) while True: question = input("请输入你的问题:") print(len(question)) prompt = 'def remove_non_ascii(s: str) -> str:\n """ ' inputs = tokenizer(question, return_tensors="pt").to("cuda") output = model.generate( inputs["input_ids"], max_new_tokens=200, do_sample=True, top_p=0.9, temperature=0.1, pad_token_id=tokenizer.eos_token_id ) output = output[0].to("cpu") print(tokenizer.decode(output)) ``` python3 infer.py question:`def remove_non_ascii(s: str) -> str:\n """ ` infer success question:`def remove_non_ascii(s: str) -> str:\n """ <FILL_ME>\nprint(remove_non_ascii('afkdj$$('))` occer error error info: ``` ../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [124,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [125,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [126,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [774,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
Traceback (most recent call last): File "infer.py", line 22, in <module> output = model.generate( File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 1719, in generate return self.sample( File "/usr/local/lib/python3.8/dist-packages/transformers/generation/utils.py", line 2801, in sample outputs = self( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 1034, in forward outputs = self.model( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 922, in forward layer_outputs = decoder_layer( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 672, in forward hidden_states, self_attn_weights, present_key_value = self.self_attn( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/llama/modeling_llama.py", line 366, in forward query_states = self.q_proj(hidden_states) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/nn/modules.py", line 248, in forward out = bnb.matmul_4bit(x, self.weight.t(), bias=bias, quant_state=self.weight.quant_state) File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 579, in matmul_4bit return MatMul4Bit.apply(A, B, out, bias, quant_state) File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 506, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "/usr/local/lib/python3.8/dist-packages/bitsandbytes/autograd/_functions.py", line 516, in forward output = torch.nn.functional.linear(A, F.dequantize_4bit(B, state).to(A.dtype).t(), bias) RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` ``` GPU 1*NVIDIA A100-SXM4-80GB ### Expected behavior Any question can be answered normally
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27998/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/27998/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27997
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27997/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27997/comments
https://api.github.com/repos/huggingface/transformers/issues/27997/events
https://github.com/huggingface/transformers/pull/27997
2,039,306,675
PR_kwDOCUB6oc5h4Bmb
27,997
Fix PatchTSMixer slow tests
{ "login": "ajati", "id": 41211350, "node_id": "MDQ6VXNlcjQxMjExMzUw", "avatar_url": "https://avatars.githubusercontent.com/u/41211350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ajati", "html_url": "https://github.com/ajati", "followers_url": "https://api.github.com/users/ajati/followers", "following_url": "https://api.github.com/users/ajati/following{/other_user}", "gists_url": "https://api.github.com/users/ajati/gists{/gist_id}", "starred_url": "https://api.github.com/users/ajati/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ajati/subscriptions", "organizations_url": "https://api.github.com/users/ajati/orgs", "repos_url": "https://api.github.com/users/ajati/repos", "events_url": "https://api.github.com/users/ajati/events{/privacy}", "received_events_url": "https://api.github.com/users/ajati/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@kashif " ]
1,702
1,702
1,702
CONTRIBUTOR
null
Fix `PatchTSMixer` slow tests and relax assert conditions in functional tests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27997/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27997", "html_url": "https://github.com/huggingface/transformers/pull/27997", "diff_url": "https://github.com/huggingface/transformers/pull/27997.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27997.patch", "merged_at": 1702470866000 }
https://api.github.com/repos/huggingface/transformers/issues/27996
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27996/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27996/comments
https://api.github.com/repos/huggingface/transformers/issues/27996/events
https://github.com/huggingface/transformers/pull/27996
2,039,288,621
PR_kwDOCUB6oc5h39qu
27,996
add torch.compile in pipeline
{ "login": "jiqing-feng", "id": 107918818, "node_id": "U_kgDOBm614g", "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiqing-feng", "html_url": "https://github.com/jiqing-feng", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @gante @sywangyi ", "Hi @jiqing-feng, thanks for opening this PR! \r\n\r\nI don't think this is a feature we want to add at the moment. Users can already pass compiled models to the pipeline to use, and there they can control which parts will or won't be compiled. There's many optimization choices one might make when loading a model e.g. whether to quantize the weights. To keep the API of the pipeline as simple as possible - we can leave additional configuration of the model outside of the pipeline.\r\n\r\nI'll let @Narsil's opinion weigh more heavily on whether this should be added to the pipeline over mine. \r\n", "The main idea is that users may not know if the `pipeline` uses `model.forward` or `model.generate`. If we integrate `torch.compile` in the `pipeline`, we can decide which function should be compiled. We only need 2 extra params: `torch_compile` and `torch_compile_config`, or we can just pass `torch_compile_backend` to `kwargs`. It will help users to apply torch compile and avoid compiling useless functions easily. \r\n\r\nFor example, in [ASR pipeline](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/automatic_speech_recognition.py#L541-L601), it uses `model.generate` if `model_type in {\"seq2seq\", \"seq2seq_whisper\"}`, and use `model.forward` otherwise. \r\n\r\nI think users won't check the `pipeline` code in such detail, so the best way is we define which model function needs to be compiled in `pipeline`.\r\n\r\nBTW, it is important to clarify that `model=torch.compile(model)` won't work on `model.generate.forward`, the model in generation will still use the original forward function to inference.", "yes. user may not be aware of the calling API in each Task of Pipeline in __call__. we could do it for user in each task if user want to use torch.compile for acceleration. for some task, we see 20%-30% perf boost.", "Hi @Narsil . Do you think that we should add `torch.compile` in the pipeline since users may not know which function of model should be compiled?", "No you can and should use ` pipeline(model=torch.compile(model)`.\r\nIf that doesn't work with generate it's up to the caller to know what to do.\r\n\r\nJust as a reminder `pipeline` is meant to make ML models easy to use, while have *decent* performance (shouldn't be 10x slower than it should). However, complexifying the codebase for a potential 10-20% is not really worth it imo (especially when sending directly a compiled model WILL work, you just need to know when you can and when you cannot, but that' s more of a torch.compile limitation than anything related to pipelines... pipelines also can work with Tensorflow models for instance)", "> No you can and should use ` pipeline(model=torch.compile(model)`. If that doesn't work with generate it's up to the caller to know what to do.\r\n> \r\n> Just as a reminder `pipeline` is meant to make ML models easy to use, while have _decent_ performance (shouldn't be 10x slower than it should). However, complexifying the codebase for a potential 10-20% is not really worth it imo (especially when sending directly a compiled model WILL work, you just need to know when you can and when you cannot, but that' s more of a torch.compile limitation than anything related to pipelines... pipelines also can work with Tensorflow models for instance)\r\n\r\nThanks for the clarification!" ]
1,702
1,706
1,706
CONTRIBUTOR
null
Hi @Narsil . torch.compile is supported since PyTorch 2.0 (see [here](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html)). However, users usually go through the pipeline and don't know which model function (`forward` or `generate`) should be compiled, so I was thinking of adding `torch.compile` to the pipeline to make it easier for them. I would like to hear your opinion, thanks! BTW, do you have any idea how to determine which model function should be compiled? I compiled both model functions because I don't know which one will be used by the model.
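For reference, the alternative the maintainers point to above — compiling the model yourself and handing it to `pipeline` — looks roughly like this sketch (the checkpoint is only an example, and the benefit depends on task and hardware; as noted in the thread, compiling the module this way does not automatically cover `generate`-based pipelines):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# compile the module; text-classification calls model.forward under the hood,
# so compiling the model itself is enough for this task
model = torch.compile(model)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("torch.compile can speed up pipeline inference"))
```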
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27996/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27996/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27996", "html_url": "https://github.com/huggingface/transformers/pull/27996", "diff_url": "https://github.com/huggingface/transformers/pull/27996.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27996.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27995
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27995/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27995/comments
https://api.github.com/repos/huggingface/transformers/issues/27995/events
https://github.com/huggingface/transformers/pull/27995
2,039,173,651
PR_kwDOCUB6oc5h3kl1
27,995
Assistant model may be on a different device
{ "login": "jiqing-feng", "id": 107918818, "node_id": "U_kgDOBm614g", "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiqing-feng", "html_url": "https://github.com/jiqing-feng", "followers_url": "https://api.github.com/users/jiqing-feng/followers", "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}", "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions", "organizations_url": "https://api.github.com/users/jiqing-feng/orgs", "repos_url": "https://api.github.com/users/jiqing-feng/repos", "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}", "received_events_url": "https://api.github.com/users/jiqing-feng/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @gante @amyeroberts . Would you please help me to review it? Thx!" ]
1,702
1,704
1,704
CONTRIBUTOR
null
Hi @gante . Would you please have a look at this PR? The motivation is that I want to put the assistant model and the main model on different CUDA devices, or put the assistant model on the CPU. This PR should enable the assistant model to sit on a different device. I have tested it on both decoder-only and encoder-decoder models. Could you please help me review it? Thanks!
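For context, assisted generation is driven through `generate(..., assistant_model=...)`; a minimal sketch is below. The model names are only examples, and keeping the assistant on a different device than the main model (CPU vs. GPU here) is precisely what this PR is meant to enable, so it should not be assumed to work without it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

main_id = "facebook/opt-1.3b"       # example target model
assistant_id = "facebook/opt-125m"  # example smaller assistant sharing the tokenizer

tokenizer = AutoTokenizer.from_pretrained(main_id)
model = AutoModelForCausalLM.from_pretrained(main_id, torch_dtype=torch.float16).to("cuda:0")
assistant = AutoModelForCausalLM.from_pretrained(assistant_id)  # deliberately left on CPU

inputs = tokenizer("Assisted generation lets a small model draft tokens", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```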
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27995/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27995/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27995", "html_url": "https://github.com/huggingface/transformers/pull/27995", "diff_url": "https://github.com/huggingface/transformers/pull/27995.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27995.patch", "merged_at": 1704968700000 }
https://api.github.com/repos/huggingface/transformers/issues/27994
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27994/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27994/comments
https://api.github.com/repos/huggingface/transformers/issues/27994/events
https://github.com/huggingface/transformers/issues/27994
2,038,987,548
I_kwDOCUB6oc55iHsc
27,994
Performance degradation with BF16 precision
{ "login": "jerin-scalers-ai", "id": 125901005, "node_id": "U_kgDOB4EYzQ", "avatar_url": "https://avatars.githubusercontent.com/u/125901005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerin-scalers-ai", "html_url": "https://github.com/jerin-scalers-ai", "followers_url": "https://api.github.com/users/jerin-scalers-ai/followers", "following_url": "https://api.github.com/users/jerin-scalers-ai/following{/other_user}", "gists_url": "https://api.github.com/users/jerin-scalers-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerin-scalers-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerin-scalers-ai/subscriptions", "organizations_url": "https://api.github.com/users/jerin-scalers-ai/orgs", "repos_url": "https://api.github.com/users/jerin-scalers-ai/repos", "events_url": "https://api.github.com/users/jerin-scalers-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/jerin-scalers-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The inference time in bfloat16 depends on the hardware you are using and the pytorch version as well. Recommending you to use float16 for inference. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
### System Info Transformers: 4.35.2 Torch: 2.1.1-cpu CPU: Intel Xeon 4th Gen processor ### Who can help? @ArthurZucker Hi, I was comparing performance of Llama 2 7b chat hf model with different precisions. I observed that there is a significant degrade on performance (inference time) with bfloat16 compared to fp32 model in Intel CPU . Bf16 is suppose to give better performance than fp32 . Please refer below table for details: | Precision | Tokens Generated | Infer time (sec) | |------------------------|-----------------------|------------------| | FP32 | 186 | 12.51 | | BF16 | 186 | 115.37 | ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline import torch import time model_id = "meta-llama/Llama-2-7b-chat-hf" device = "cpu" torch_dtype = torch.bfloat16 tokenizer = AutoTokenizer.from_pretrained(model_id) input_text = "In maximum 180 words, explain why purchasing Dell Poweredge servers offer much better TCO to enterprises compared to using public cloud infrastructure, for AI initiatives" text_generator = pipeline( "text-generation", model=model_id, tokenizer=tokenizer, return_tensors=True, device=device, torch_dtype = torch_dtype, ) for _ in range(5): s_time = time.time() # Inference benchmarking output = text_generator( input_text, max_new_tokens=256, temperature=1, ) e_time = time.time() # print(output) print(tokenizer.decode(output[0]["generated_token_ids"])) num_tokens = len(output[0]["generated_token_ids"]) print(f"Num tokens: {num_tokens}") print(f"Infer time: {e_time-s_time}") ``` ### Expected behavior Bf16 is suppose to give better performance than fp32
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27994/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27994/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27993
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27993/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27993/comments
https://api.github.com/repos/huggingface/transformers/issues/27993/events
https://github.com/huggingface/transformers/pull/27993
2,038,894,860
PR_kwDOCUB6oc5h2nxD
27,993
When saving a model on TPU, make a copy to be moved to CPU
{ "login": "qihqi", "id": 1719482, "node_id": "MDQ6VXNlcjE3MTk0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/1719482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qihqi", "html_url": "https://github.com/qihqi", "followers_url": "https://api.github.com/users/qihqi/followers", "following_url": "https://api.github.com/users/qihqi/following{/other_user}", "gists_url": "https://api.github.com/users/qihqi/gists{/gist_id}", "starred_url": "https://api.github.com/users/qihqi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qihqi/subscriptions", "organizations_url": "https://api.github.com/users/qihqi/orgs", "repos_url": "https://api.github.com/users/qihqi/repos", "events_url": "https://api.github.com/users/qihqi/events{/privacy}", "received_events_url": "https://api.github.com/users/qihqi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@qihqi this is done inside `_save_tpu` https://github.com/huggingface/transformers/pull/27799/files, or we missed something?", "> @qihqi this is done inside `_save_tpu` https://github.com/huggingface/transformers/pull/27799/files, or we missed something?\r\n\r\nI see, we need to make a copy. Should we do that inside `_save_tpu` or do it before here?", "Thanks for the fix! Quick question: I used to see this error `indices should be either on cpu or on the same device as the indexed tensor (XLA). When using XLA, the indexed tensor must be an XLA tensor.` and it seems some index ops failed so I tried to fix the op instead of the model (and it didn't work out). How did you find out it's the model's problem?", "> Thanks for the fix! Quick question: I used to see this error `indices should be either on cpu or on the same device as the indexed tensor (XLA). When using XLA, the indexed tensor must be an XLA tensor.` and it seems some index ops failed so I tried to fix the op instead of the model (and it didn't work out). How did you find out it's the model's problem?\r\n\r\nThanks! So I also started with assumption that the input is wrong, and traced where the input comes from, and arrived here: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L2668 and here clearly the input is moving to the device. \r\n\r\nAlso I ran the script the first few steps trains fine. So I start suspecting something happened after training is done. ", "Hi @LysandreJik @ArthurZucker would you guys be able to help with a review? Thanks!", "Pinging our TPU expert @muellerzr ", "> Thanks for adding this!\r\n> \r\n> I'm not very familiar with TPUs. My only concern with this is, if it behaves the same as the model being on GPU, then it requires enough space for 2 models on the device when `copy.deepcopy(self.model)` is called i.e. there is no offloading.\r\n\r\nGood point! I have changed the code to first move it CPU then move it back. PTAL.", "@qihqi Just to confirm that you were able to successfully run the script in the PR description with the most recent changes to the PR: \r\n\r\n```\r\npython3 examples/pytorch/text-classification/run_glue.py \\\r\n --model_name_or_path=distilbert-base-uncased \\\r\n --task_name=MNLI \\\r\n --do_train=true \\\r\n --num_train_epochs=1 \\\r\n --max_seq_length=128 \\\r\n --learning_rate=3e-5 \\\r\n --overwrite_output_dir=true \\\r\n --save_steps=3000 \\\r\n --save_strategy=no --output_dir=/workspace/mnli\r\n```\r\n", "> @qihqi Just to confirm that you were able to successfully run the script in the PR description with the most recent changes to the PR:\r\n> \r\n> ```\r\n> python3 examples/pytorch/text-classification/run_glue.py \\\r\n> --model_name_or_path=distilbert-base-uncased \\\r\n> --task_name=MNLI \\\r\n> --do_train=true \\\r\n> --num_train_epochs=1 \\\r\n> --max_seq_length=128 \\\r\n> --learning_rate=3e-5 \\\r\n> --overwrite_output_dir=true \\\r\n> --save_steps=3000 \\\r\n> --save_strategy=no --output_dir=/workspace/mnli\r\n> ```\r\n\r\nYes. (Well, to be exact it's this script: https://github.com/GoogleCloudPlatform/ml-testing-accelerators/blob/master/tests/pytorch/nightly/hf-glue.libsonnet#L41) ", "Hi everyone,\r\nOne question, was this fix supposed to work with XLA FSDP? \r\n\r\nI'm having problems with the model.to ('cpu') part before unwrapping the model (memory access errors), and if I unwrap it before applying .to('cpu'), just one shard of the model is saved." ]
1,702
1,706
1,702
CONTRIBUTOR
null
# What does this PR do? When we save a model on TPU, we first move it to CPU because TPU tensors have no storage. However, we should do this with a copy of the model so that the original model stays on TPU; otherwise `model.to('cpu')` modifies the model in-place, which then raises the following error when that model is used in compute: ``` indices should be either on cpu or on the same device as the indexed tensor (XLA). When using XLA, the indexed tensor must be an XLA tensor. ``` Tested by running this command on TPU v4-8: ``` python3 examples/pytorch/text-classification/run_glue.py \ --model_name_or_path=distilbert-base-uncased \ --task_name=MNLI \ --do_train=true \ --num_train_epochs=1 \ --max_seq_length=128 \ --learning_rate=3e-5 \ --overwrite_output_dir=true \ --save_steps=3000 \ --save_strategy=no --output_dir=/workspace/mnli ``` cc @muellerzr and @pacman100
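The core idea — serialize from a CPU copy so the live XLA model is untouched — can be sketched as below. This is an illustration, not the actual `Trainer._save_tpu` implementation, and it sidesteps the deep-copy-vs-round-trip memory trade-off discussed in the review comments:

```python
import torch

def save_tpu_model_weights(model, path):
    # Copy every parameter/buffer to CPU without mutating the XLA model in place;
    # the original `model` keeps its XLA tensors and can continue to be used.
    cpu_state_dict = {name: tensor.cpu() for name, tensor in model.state_dict().items()}
    torch.save(cpu_state_dict, path)
```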
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27993/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27993/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27993", "html_url": "https://github.com/huggingface/transformers/pull/27993", "diff_url": "https://github.com/huggingface/transformers/pull/27993.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27993.patch", "merged_at": 1702980531000 }
https://api.github.com/repos/huggingface/transformers/issues/27992
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27992/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27992/comments
https://api.github.com/repos/huggingface/transformers/issues/27992/events
https://github.com/huggingface/transformers/issues/27992
2,038,857,369
I_kwDOCUB6oc55hn6Z
27,992
Memory leak (not released) when calling Seq2SeqTrainer for fine-tuning
{ "login": "xyx361100238", "id": 19569322, "node_id": "MDQ6VXNlcjE5NTY5MzIy", "avatar_url": "https://avatars.githubusercontent.com/u/19569322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xyx361100238", "html_url": "https://github.com/xyx361100238", "followers_url": "https://api.github.com/users/xyx361100238/followers", "following_url": "https://api.github.com/users/xyx361100238/following{/other_user}", "gists_url": "https://api.github.com/users/xyx361100238/gists{/gist_id}", "starred_url": "https://api.github.com/users/xyx361100238/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xyx361100238/subscriptions", "organizations_url": "https://api.github.com/users/xyx361100238/orgs", "repos_url": "https://api.github.com/users/xyx361100238/repos", "events_url": "https://api.github.com/users/xyx361100238/events{/privacy}", "received_events_url": "https://api.github.com/users/xyx361100238/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @xyx361100238! Do you have a script I can use to reproduce the error and identify the memory increase? It's maybe worth trying adding the flag for full fp16 eval as well to avoid upcasting the weights to fp32 for inference:\r\n\r\n```\r\n--fp16_full_eval\r\n```\r\n\r\n_c.f._ https://huggingface.co./docs/transformers/v4.36.1/en/main_classes/trainer#transformers.TrainingArguments.fp16_full_eval", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,707
1,707
NONE
null
### System Info - `transformers` version: 4.34.0 - Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.31 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sanchit-gandhi @muellerzr @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I was use project finetune.py from [Whisper-Finetune](https://github.com/yeyupiaoling/Whisper-Finetune) to finetune whisper large on my own datasets,which code use transformers library. If I use the validation process during fine-tuning, it will lead to an increase in system memory, and there is a chance that it will not be released after validation. Over time, this can lead to Out Of Memory (OOM) and cause a crash. Is this a bug in the tool, or do I need to make some settings? ### Expected behavior After each verification is completed, normalize the memory to maintain it at a stable level.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27992/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27992/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27991
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27991/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27991/comments
https://api.github.com/repos/huggingface/transformers/issues/27991/events
https://github.com/huggingface/transformers/issues/27991
2,038,842,972
I_kwDOCUB6oc55hkZc
27,991
Error in all_reduce when GPT2 200B inferencing with dynamo and multi GPU
{ "login": "jcai04", "id": 62895533, "node_id": "MDQ6VXNlcjYyODk1NTMz", "avatar_url": "https://avatars.githubusercontent.com/u/62895533?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcai04", "html_url": "https://github.com/jcai04", "followers_url": "https://api.github.com/users/jcai04/followers", "following_url": "https://api.github.com/users/jcai04/following{/other_user}", "gists_url": "https://api.github.com/users/jcai04/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcai04/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcai04/subscriptions", "organizations_url": "https://api.github.com/users/jcai04/orgs", "repos_url": "https://api.github.com/users/jcai04/repos", "events_url": "https://api.github.com/users/jcai04/events{/privacy}", "received_events_url": "https://api.github.com/users/jcai04/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @jcai04, this does not seems to be an issue with transformers. Please submit this issue in the pytorch repo. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
### System Info version: - `transformers` version: 4.35.2 - Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.4.0 - Accelerate version: 0.4.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @SunMarc ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Code snippet: ``` class GPT2Block(nn.Module): def __init__(self, config, window_size): super().__init__() self.mp_size = iint(os.getenv("WORLD_SIZE", "1")) self.hidden_size = config.hidden_size self.ln_1 = nn.LayerNorm(self.hidden_size, eps=1e-5) self.attn = GPT2Attention(config, window_size) self.mlp = GPT2MLP(config) def forward(self, hidden_states, attention_mask, past_kv, kv_len, wpe=None): residual = hidden_states hidden_states = self.ln_1(hidden_states) attn_output, _ = self.attn(hidden_states, attention_mask, past_kv, kv_len, wpe) mlp_output = self.mlp(hidden_states) layer_out = attn_output + mlp_output if self.mp_size > 1: torch.distributed.all_reduce(layer_out) layer_out = layer_out + residual return layer_out ``` Error messages: ``` Traceback (most recent call last): File"./ut_test/seperate_200b.py", line 393, in <module> out_v2 = inference_engine(inputs) File "./ut_test/seperate_200b.py", line 250, in inference_engine context_output = context_infer(BI_model, curr_input) File "./ut_test/seperate_200b.py", line 199, in context_infer outputs = model(**one_input) File "/home/gpt2_200b/models/gpt2_200b_ptb.py", line 799 in forward hidden_states = self.transformer(input_tensor, input_mask, past_key, past_key_values, kv_len, query_len) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 328, in _fn return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/eval_frame.py", line 490, in catch_errors return callback(frame, cache_entry, hooks, frame_state) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 133, in _fn return fn(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 389, in_convert_frame_assert return _compile( File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 569, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/utils.py", line 189, in time_wrapper r = func(*args, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 491, in 
compile_inner out_code = transform_code_object(code, transform) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/bytecode_transformation.py" line 1028, in transform_code_object transformations(instructions, code_options) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/convert_frame.py", line 458, in transform tracer.run() File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2074, in run super().run() File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run and self.step() File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step getattr(self, inst.opname)(inst) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper return inner_fn(self, inst) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 1115, in CALL_FUNCTION self.call_function(fn, args, {}) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function self.push(fn.call_function(self, args, kwargs)) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/nn_module.py", line 331, in call_function return tx.inline_user_function_return( File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call return cls.inline_call_(parent, func, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_ tracer.run() File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run and self.step() File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step getattr(self, inst.opname)(inst) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper return inner_fn(self, inst) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX self.call_function(fn, argsvars.items, kwargsvars.items) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function self.push(fn.call_function(self, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 307, in call_function return super().call_function(tx, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 261, in call_function return super().call_function(tx, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 90, in call_function return tx.inline_user_function_return( File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call return cls.inline_call_(parent, func, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2286, in inline_call_ tracer.run() File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 724, in run and self.step() File 
"/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 688, in step getattr(self, inst.opname)(inst) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 392, in wrapper return inner_fn(self, inst) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 1155, in CALL_FUNCTION_EX self.call_function(fn, argsvars.items, kwargsvars.items) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 562, in call_function self.push(fn.call_function(self, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 307, in call_function return super().call_function(tx, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 261, in call_function return super().call_function(tx, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/variables/functions.py", line 90, in call_function return tx.inline_user_function_return( File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 598, in inline_user_function_return result = InliningInstructionTranslator.inline_call(self, fn, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2179, in inline_call return cls.inline_call_(parent, func, args, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2232, in inline_call InliningInstructionTranslator.check_inlineable(func) File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/symbolic_convert.py", line 2191, in check_inlineable unimplemented(f"inlining disallowed: {func.get_function()}") File "/usr/local/lib/python3.8/dist-packages/torch/_dynamo/exc.py", line 172, in unimplemented raise Unsupported(msg) torch._dynamo.exc.Unsupported: inlining disallowed: <function all_reduce at 0x7fab7fff78b0> from user code: File "/home/gpt2_200b/models/gpt2_200b_ptb.py", line 508, in forward hidden_states = block(hidden_states, attention_mask, past_kv[idx], kv_len, self.wpe) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/home/gpt2_200b/models/gpt2_200b_ptb.py", line 460, in forward torch.distributed.all_reduce(layer_out) ``` torch.compile setting: ``` torch.compile(self.transformer, dynamic=True, fullgraph=True) #default backend = inductor ``` ### Expected behavior We except to be able to do inference with dynamo, and we successfully inference when setting "fullgraph=False" in torch.compile. However, it is doesn't work when "fullgraph=True" in torch.compile with the same code
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27991/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27991/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27990
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27990/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27990/comments
https://api.github.com/repos/huggingface/transformers/issues/27990/events
https://github.com/huggingface/transformers/pull/27990
2,038,800,937
PR_kwDOCUB6oc5h2UYP
27,990
[DETA] Improvements and sync from the original DETA, especially for training
{ "login": "SangbumChoi", "id": 34004152, "node_id": "MDQ6VXNlcjM0MDA0MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SangbumChoi", "html_url": "https://github.com/SangbumChoi", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Also due to change of major arguments I think we need to rewrite the test function for swin_backbone.", "Hi @SangbumChoi thanks so much for your PR. Could you try to make the CI green by running `make fixup` locally and fixing the errors it gives?\r\n\r\nBtw if you would have the time, would be great look into why DETA evaluation numbers on the [open object detection leaderboard](https://huggingface.co./spaces/hf-vision/object_detection_leaderboard) are not as high as reported in the paper. ", "@NielsRogge I think my PR will also affect to this leaderboard. I'm not sure if it will have positive or negative but I hope it is positive :)\r\n\r\nIn a viewpoint of inference pipeline, [the original implementation\r\n](https://huggingface.co./jozhang97/deta-swin-large/discussions/2) of config.json was not using DETA-specific assigner(which was DETAassigner1 and DETAassigner2 in the code that you have written) even though \"assign_first_stage=true\". You may recheck this to confirm this! \r\n(See the line of 1941 in modeling_deta.py)\r\n\r\nIn addition postprocessing pipeline with class_threshold is default by 0.7 however original code doesn't use classification thresholding (=using threshold 0)\r\n\r\nAnd also I will do some extra modification to make all the CI works!", "@NielsRogge I updated and passed the CI. There are few changes that you should know! please don't hestitate to ask details", "Awesome, I'll take a look in more detail in the next coming days.\r\n\r\nAlso, I was working on a notebook to fine-tune DETA on a custom dataset (by mainly copying [this notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) - you can replace `DetrForObjectDetection` by `DetaForObjectDetection`). However, the results were not as expected. If you have time to look into how to successfully fine-tune DETA, that would be awesome.", "@NielsRogge I also saw this notebook and my custom code is very very similar to this notebook which is using pytorch_lightning and distributed training.\r\nOne important fact is that matching lr_backbone and lr with the same hyperparameter with 2e-4. I also used Swin backbone for default training and gave me 41AP in final (not his balloon but the dataset that i mentioned above)\r\n```\r\n results = self.postprocess.post_process_object_detection(\r\n outputs, target_sizes=orig_target_sizes, threshold=0.0\r\n )\r\n```\r\nAlso remind that thresholding and **setting auxiliary_loss to true** is also important\r\n\r\nCan you also share the final AP?\r\n", "@NielsRogge @amyeroberts Ping for reminder", "Hi @SangbumChoi I tried out fine-tuning DETA from your branch on the balloon dataset with [this notebook](https://colab.research.google.com/drive/1fyZLR97V9RcWSjPN6djLdPylOjIV95r1?usp=sharing). Initially results didn't look good, but now they do! Had to set `auxiliary_loss=True` when loading the model.\r\n\r\n![image](https://github.com/huggingface/transformers/assets/48327001/c8f71e23-e6e4-4241-908b-5772d1ee16af)\r\n\r\nThe only thing that is still weird is that bounding boxes are shown with confidence > 0.4, even though I've passed in threshold=0.2. Will need to look into this.\r\n\r\nAlready awesome that training seems to have improved a lot!\r\n\r\nThanks!\r\n", "@NielsRogge Sounds great, it will have huge improvement. For the threshold part, because it has NMSpostprocess already implemented on the code, maybe overlapped bounding box with 0.2~0.39 might disappear at the visualization. 
(Check this part)", "> Thanks for working on this!\r\n> \r\n> There's a few changes we might not be able to add because of backwards compatibility and compatibility with the rest of the codebase. Otherwise looks good.\r\n\r\n@amyeroberts Hi, I also did some modification and resolved several conversations that is obvious. However, I still opened the conversation that has some misalignment or comments! (I think only one comment is not still fixed which is multi-gpu distribute part!)", "@NielsRogge Is there anything that I can help to make this PR merged? ", "@amyeroberts would need to approve the PR.\r\n\r\nFor reference, I don't think anyone was successfully fine-tuning DETA before this PR, so would be great to get it merged. Also haven't shared about this model yet due to differences in reported COCO eval", "@amyeroberts Thanks for your suggestion! Everything is fixed as your recommendation 👍🏼 ", "@amyeroberts There was one mistake importing sorry about that (see last commit)\r\n\r\nfor the slow test\r\n\r\n```\r\nroot@0a2b4fe54761:/mnt/nas2/users/sbchoi/transformers# RUN_SLOW=1 pytest tests/models/deta/\r\n=========================================================== test session starts ============================================================\r\nplatform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\nrootdir: /mnt/nas2/users/sbchoi/transformers\r\nconfigfile: pyproject.toml\r\nplugins: hypothesis-6.92.0, hydra-core-1.3.2\r\ncollected 150 items\r\n\r\ntests/models/deta/test_image_processing_deta.py .............. [ 9%]\r\ntests/models/deta/test_modeling_deta.py ........s...............ssssss.ssssssssss.....s..............s......s.............ssssssssss [ 70%]\r\ns.sssssssssssssss.s..s.......s.s............ [100%]\r\n\r\n============================================================= warnings summary =============================================================\r\n../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\nsrc/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n\r\n\r\n declare_namespace(pkg)\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\nsrc/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. 
Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_disk_offload_bin\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:460: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:463: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:1894: UserWarning: Use of index_put_ on expanded tensors is deprecated. Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. tensor[indices] = tensor (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/TensorAdvancedIndexing.cpp:708.)\r\n t[t != t] = 0\r\n\r\ntests/models/deta/test_modeling_deta.py::DetaModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:1894: UserWarning: Use of masked_fill_ on expanded tensors is deprecated. Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. 
tensor[mask] = scalar (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/cuda/Indexing.cu:1564.)\r\n t[t != t] = 0\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n========================================= 100 passed, 50 skipped, 10 warnings in 206.53s (0:03:26) =========================================\r\n```\r\n\r\nI think it has no problem. Does this answer your question of 'reduce' function? \r\nFrom my training code `accelerate launch --num_processes 4 src/train.py` and `accelerate launch --num_processes 1 src/train.py` both works fine.", "@SangbumChoi Yep! That answers everything. Thanks for running the tests. Once the doc test runner finishes I'll merge in. Thanks again for all of your work on this! ", "Finally! Thanks for the fast review @amyeroberts " ]
1,702
1,704
1,704
CONTRIBUTOR
null
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. There are several changes in DETA not only for inference but also for training. 1. Add assign_second_stage argument that used in original DETA 2. add output_proposals for "anchors" that used in assign_first_stage 3. num_boxes normalization based on multi-gpu environment 4. add "enc_outputs" and "auxiliary_outputs" loss while training 5. minor changes in variable name that was not appropriate I tested with [custom dataset](https://universe.roboflow.com/roboflow-100/cable-damage) with finetuning. As a result original performance after 12 epoch was 1.0 AP, now is above 35 AP with same hyperparameter setting (such as learning rate). I am not the author or related group to DETA but I am a co-contributor of [DETA](https://github.com/jozhang97/DETA) so I know pretty much all details of DETA. this changes will give a great improvement to user who wants to train/fine-tune DETA with [sagemaker script](https://huggingface.co./jozhang97/deta-swin-large/blob/main/config.json?sagemaker_train=true). BTW, I think @NielsRogge missed important variable "assign_second_stage" to set True in [config.json](https://huggingface.co./jozhang97/deta-swin-large/blob/main/config.json) @ArthurZucker Could you review this PR? (I couldn't share my test code for this, sorry about that) I manually added to enable the auxiliary_loss and second_stage pipeline or see [this link](https://huggingface.co./sbchoi/deta-swin-large/tree/main) ``` transformer_model.config.auxiliary_loss = cfg.auxiliary_loss transformer_model.config.assign_second_stage = cfg.assign_second_stage ```
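For readers trying out the settings discussed in this PR, a minimal inference sketch is shown below. This is only a sketch: the checkpoint name comes from the thread, the image URL and the 0.5 threshold are illustrative, and the two config flags mainly matter when training/fine-tuning rather than at plain inference time (DETA's post-processing also needs torchvision installed for NMS).

```python
# Minimal DETA inference sketch consistent with the settings discussed in this PR.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetaForObjectDetection

image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

# Flags highlighted in this PR; they affect the loss computation during fine-tuning.
model.config.auxiliary_loss = True
model.config.assign_second_stage = True

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative test image
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Post-processing as in the fine-tuning discussion; pick a threshold suited to your data.
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```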
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27990/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27990/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27990", "html_url": "https://github.com/huggingface/transformers/pull/27990", "diff_url": "https://github.com/huggingface/transformers/pull/27990.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27990.patch", "merged_at": 1704464421000 }
https://api.github.com/repos/huggingface/transformers/issues/27989
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27989/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27989/comments
https://api.github.com/repos/huggingface/transformers/issues/27989/events
https://github.com/huggingface/transformers/pull/27989
2,038,797,508
PR_kwDOCUB6oc5h2TsF
27,989
Added @property into the modeling_encoder_decoder file.
{ "login": "hi-sushanta", "id": 93595990, "node_id": "U_kgDOBZQpVg", "avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hi-sushanta", "html_url": "https://github.com/hi-sushanta", "followers_url": "https://api.github.com/users/hi-sushanta/followers", "following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}", "gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}", "starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions", "organizations_url": "https://api.github.com/users/hi-sushanta/orgs", "repos_url": "https://api.github.com/users/hi-sushanta/repos", "events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}", "received_events_url": "https://api.github.com/users/hi-sushanta/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hi-sushanta, thanks for opening a PR! \r\n\r\nWhy make these changes? ", "Hello, @amyeroberts.\r\n\r\nHere are the reasons why I made the changes described in the code review comment:\r\n\r\n***1. Improved user-friendliness:***\r\n\r\nMaking the \"encoder\" and \"decoder\" properties read-only prevents accidental modifications and simplifies the API for users. This makes it easier to understand and use the code, especially for beginners.\r\nCreating a dedicated method for changing the \"output embeddings\" provides a controlled and documented way to modify this important parameter. This helps ensure consistency and avoid potential issues.\r\n\r\n***2. Enhanced maintainability:***\r\n\r\nRead-only properties make the code more robust and less prone to errors. By separating the access and modification functionality, the code becomes easier to maintain and understand.\r\nThe dedicated \"output_embeddings\" setter allows for cleaner and more concise code, as it avoids inline logic for handling the update. This improves code readability and maintainability.\r\n\r\n***3. Additional benefits:***\r\n\r\nRead-only properties can improve performance by avoiding unnecessary calculations and memory allocations.\r\nHaving a dedicated setter for \"output embeddings\" enables future enhancements, such as adding validations or logging changes.\r\n", "@hi-sushanta Thank you for taking the time to do this. \r\n\r\nHowever, adding the `_encoder` and `_decoder` attributes is not something we're going to merge in. It goes against the pattern of other model implementations in the code base, potentially breaks things as users still expect to be able to use `model.encoder`, generally makes things harder to read and we're yet to have any complaints from users about `model.encoder` being hard to use. \r\n\r\nIf you wish to enable a setter for the output embeddings, you can add this following the pattern for other models e.g. [here for llava](https://github.com/huggingface/transformers/blob/ec43d6870aa1afb42a6d2b1b0a03743d3f9b3ce6/src/transformers/models/llava/modeling_llava.py#L250C1-L250C1).", "I understand your concerns regarding the _encoder and _decoder attributes. I agree that maintaining consistency with the existing pattern is important. I'm happy to revert this change and ensure my contribution aligns with the code base standards.\r\n" ]
1,702
1,702
1,702
CONTRIBUTOR
null
I made the "encoder" and "decoder" properties easier to use by making them "read-only". This means you can see their values, but not change them directly. Additionally, I created a special way to change the "output embeddings" of the decoder. You can use this by assigning a new value to the "output_embeddings" property. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts , @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27989/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27989/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27989", "html_url": "https://github.com/huggingface/transformers/pull/27989", "diff_url": "https://github.com/huggingface/transformers/pull/27989.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27989.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27988
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27988/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27988/comments
https://api.github.com/repos/huggingface/transformers/issues/27988/events
https://github.com/huggingface/transformers/issues/27988
2,038,788,279
I_kwDOCUB6oc55hXC3
27,988
Design of xxxAttention, xxxFlashAttention and xxxSdpaAttention
{ "login": "ccdv-ai", "id": 94319594, "node_id": "U_kgDOBZ8z6g", "avatar_url": "https://avatars.githubusercontent.com/u/94319594?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ccdv-ai", "html_url": "https://github.com/ccdv-ai", "followers_url": "https://api.github.com/users/ccdv-ai/followers", "following_url": "https://api.github.com/users/ccdv-ai/following{/other_user}", "gists_url": "https://api.github.com/users/ccdv-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/ccdv-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ccdv-ai/subscriptions", "organizations_url": "https://api.github.com/users/ccdv-ai/orgs", "repos_url": "https://api.github.com/users/ccdv-ai/repos", "events_url": "https://api.github.com/users/ccdv-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/ccdv-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you. It's a tradeoff to have between the \"one-model one-file\" philosophy (https://huggingface.co./blog/transformers-design-philosophy) and offloading code to other files/classes.\r\n\r\nWe already kind of deviate from the \"one-model one-file\" philosophy with the KV cache refactor (https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_attn_mask_utils.py) and attention mask refactor (https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_attn_mask_utils.py), and one can indeed argue that we could do the same for the attention.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
Hey Following the addition of `torch.nn.functional.scaled_dot_product_attention` (#26572), there is a lot of deduplicated code between the `xxxAttention`, `xxxFlashAttention2` and `xxxSdpaAttention` classes. The main differences between the classes lie in the attention computation, the rest being the same (Q, K, V computation, cross attention and cache logic etc...). Wouldn't it be simpler to offload the attention computation in a new shared file making the modeling files cleaner and simplify the use of these optimizations for older models? This would also ease the addition of new variants of attention in the future if there is any.
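To make the proposal concrete, here is a purely hypothetical sketch of what a shared attention helper could look like; none of these function names exist in `transformers`, and real modeling code would also need to handle head masks, key/value repetition for GQA, and returning attention weights.

```python
# Hypothetical sketch only: the modeling file would keep the Q/K/V projections and
# cache logic, and delegate just the score computation to a shared, dispatchable helper.
from typing import Optional

import torch
import torch.nn.functional as F


def eager_attention(query, key, value, attention_mask=None, dropout_p=0.0):
    # Classic softmax(QK^T / sqrt(d)) V path.
    scores = torch.matmul(query, key.transpose(-2, -1)) / (query.size(-1) ** 0.5)
    if attention_mask is not None:
        scores = scores + attention_mask
    probs = F.softmax(scores, dim=-1, dtype=torch.float32).to(query.dtype)
    probs = F.dropout(probs, p=dropout_p)
    return torch.matmul(probs, value)


def sdpa_attention(query, key, value, attention_mask=None, dropout_p=0.0):
    # Dispatches to torch's fused kernels when available.
    return F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask, dropout_p=dropout_p)


ATTENTION_BACKENDS = {"eager": eager_attention, "sdpa": sdpa_attention}


def compute_attention(backend: str, query, key, value,
                      attention_mask: Optional[torch.Tensor] = None, dropout_p: float = 0.0):
    """Shared entry point a modeling file could call instead of duplicating three classes."""
    return ATTENTION_BACKENDS[backend](query, key, value, attention_mask=attention_mask, dropout_p=dropout_p)
```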
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27988/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27987
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27987/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27987/comments
https://api.github.com/repos/huggingface/transformers/issues/27987/events
https://github.com/huggingface/transformers/issues/27987
2,038,783,490
I_kwDOCUB6oc55hV4C
27,987
An error occurred when saving the model
{ "login": "Decem-Y", "id": 68498490, "node_id": "MDQ6VXNlcjY4NDk4NDkw", "avatar_url": "https://avatars.githubusercontent.com/u/68498490?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Decem-Y", "html_url": "https://github.com/Decem-Y", "followers_url": "https://api.github.com/users/Decem-Y/followers", "following_url": "https://api.github.com/users/Decem-Y/following{/other_user}", "gists_url": "https://api.github.com/users/Decem-Y/gists{/gist_id}", "starred_url": "https://api.github.com/users/Decem-Y/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Decem-Y/subscriptions", "organizations_url": "https://api.github.com/users/Decem-Y/orgs", "repos_url": "https://api.github.com/users/Decem-Y/repos", "events_url": "https://api.github.com/users/Decem-Y/events{/privacy}", "received_events_url": "https://api.github.com/users/Decem-Y/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I find it https://github.com/huggingface/transformers/issues/27925#issuecomment-1851779674", "Please fix the bug caused by this merged PR https://github.com/huggingface/transformers/pull/27820.", "cc @pacman100 @muellerzr " ]
1,702
1,702
1,702
NONE
null
https://github.com/huggingface/transformers/blob/14666775a296a76c88e1aa686a9547f393d322e2/src/transformers/trainer.py#L2349 After updating to transformers == 4.36.0, saving the model during multi-GPU training fails at this line: the checkpoint is written to "tmp-checkpoint-xx" instead of being renamed to "checkpoint-xx".
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27987/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27987/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27986
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27986/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27986/comments
https://api.github.com/repos/huggingface/transformers/issues/27986/events
https://github.com/huggingface/transformers/pull/27986
2,038,731,291
PR_kwDOCUB6oc5h2Fa8
27,986
[docs] Trainer
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
MEMBER
null
This PR attempts to clean up some of the current navigational complexity of the [`Trainer`](https://huggingface.co./docs/transformers/main/en/main_classes/trainer) API doc to make it easier to use as a purely reference lookup page. A lot of the content (checkpoints, logging, customization, etc.) is moved and organized into a separate guide. The API page still has some content that doesn't entirely belong there (specific GPU selection, training on M1, etc.), but that'll be addressed in a separate PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27986/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27986/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27986", "html_url": "https://github.com/huggingface/transformers/pull/27986", "diff_url": "https://github.com/huggingface/transformers/pull/27986.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27986.patch", "merged_at": 1702670815000 }
https://api.github.com/repos/huggingface/transformers/issues/27985
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27985/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27985/comments
https://api.github.com/repos/huggingface/transformers/issues/27985/events
https://github.com/huggingface/transformers/issues/27985
2,038,727,057
I_kwDOCUB6oc55hIGR
27,985
`KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'`
{ "login": "MohamedAliRashad", "id": 26205298, "node_id": "MDQ6VXNlcjI2MjA1Mjk4", "avatar_url": "https://avatars.githubusercontent.com/u/26205298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MohamedAliRashad", "html_url": "https://github.com/MohamedAliRashad", "followers_url": "https://api.github.com/users/MohamedAliRashad/followers", "following_url": "https://api.github.com/users/MohamedAliRashad/following{/other_user}", "gists_url": "https://api.github.com/users/MohamedAliRashad/gists{/gist_id}", "starred_url": "https://api.github.com/users/MohamedAliRashad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohamedAliRashad/subscriptions", "organizations_url": "https://api.github.com/users/MohamedAliRashad/orgs", "repos_url": "https://api.github.com/users/MohamedAliRashad/repos", "events_url": "https://api.github.com/users/MohamedAliRashad/events{/privacy}", "received_events_url": "https://api.github.com/users/MohamedAliRashad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @MohamedAliRashad, thanks for raising this issue! \r\n\r\nCould you share the error encountered including its full traceback as well? ", "@amyeroberts \r\nYou are unable to reproduce ?", "Hi! \r\nI'm Having the same Error: \r\nonly happens when the token length is greater than the sliding window size\r\n(I do not have the error with transformers version 4.34.0, but when I upgrade to 4.36.0 I get the error)\r\n\r\nThanks! :) \r\n\r\n- transformers version: 4.36.0\r\n- Platform: Linux 5.10.0-26-cloud-amd64\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.19.4\r\n- Safetensors version: 0.4.1\r\n- PyTorch version (GPU?): 2.1.1+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: Yes\r\n\r\nTraceback (most recent call last):\r\nFile \"/opt/conda/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 426, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/opt/conda/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/applications.py\", line 1106, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/applications.py\", line 122, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 184, in __call__\r\n raise exc\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py\", line 20, in __call__\r\n raise e\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py\", line 17, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/routing.py\", line 718, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/routing.py\", line 66, in app\r\n response = await func(request)\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/routing.py\", line 274, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/home/mmockus/dev/chatbot/rGPT/host.py\", line 301, in benchmark_model\r\n response, time, previous_prompt = rgpt(\r\n File \"/home/mmockus/dev/chatbot/rGPT/RGPT.py\", line 384, in __call__\r\n output = self.__generate_text(final_prompt)\r\n File 
\"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1140, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1147, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1046, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/home/mmockus/dev/chatbot/rGPT/instruct_pipeline.py\", line 60, in _forward\r\n generated_sequence = self.model.generate(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py\", line 1764, in generate\r\n return self.sample(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py\", line 2861, in sample\r\n outputs = self(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = module._old_forward(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py\", line 1212, in forward\r\n outputs = self.model(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py\", line 1080, in forward\r\n layer_outputs = decoder_layer(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = module._old_forward(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py\", line 796, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = module._old_forward(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mixtral/modeling_mixtral.py\", line 441, in forward\r\n past_key = past_key_value[0]\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/cache_utils.py\", line 78, in __getitem__\r\n raise KeyError(f\"Cache only has 
{len(self)} layers, attempted to access layer with index {layer_idx}\")\r\nKeyError: 'Cache only has 0 layers, attempted to access layer with index 0'", "@manueljmockus Thanks for providing these details! As Mixtral was only part of the most recent release - does this mean this error message was generated loading and using a different model than in the issue example? \r\n\r\n> @amyeroberts You are unable to reproduce ?\r\n\r\n@MohamedAliRashad We ask that anyone posting an issue posts the full traceback. We have many issues opened and PRs to review a day; to help get your issue resolved as soon as possible, you need to help us help you. With partial information we can't know if we're trying to solve the right issue - or we might be able to reply with a solution from just seeing the stack trace. ", "cc @tomaarsen as this seems like a cache issue ", "@amyeroberts \r\nI understand.\r\n\r\nI am asking if you were unable to produce the issue with the code i provided because i think it is an OOM probelm (the cached layers can't be accessed because the input has overwritten them).", "@amyeroberts \r\nHi, sorry if I was not clear. I was Using \"mistralai/Mistral-7B-Instruct-v0.1\" with transformers version 4.34.0 without an issue. \r\nYesterday after upgrading to 4.36.0 to test mixtral I am getting the error mentioned, Both with \"mistralai/Mixtral-8x7B-Instruct-v0.1\" and \"mistralai/Mistral-7B-Instruct-v0.1\" models. Here are the logs from the mistral execution:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 426, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/opt/conda/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/applications.py\", line 1106, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/applications.py\", line 122, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 184, in __call__\r\n raise exc\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py\", line 20, in __call__\r\n raise e\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py\", line 17, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/routing.py\", line 718, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"/opt/conda/lib/python3.10/site-packages/starlette/routing.py\", line 66, in app\r\n response = await func(request)\r\n File 
\"/opt/conda/lib/python3.10/site-packages/fastapi/routing.py\", line 274, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/opt/conda/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/home/mmockus/dev/chatbot/rGPT/host.py\", line 302, in benchmark_model\r\n response, time, previous_prompt = rgpt(\r\n File \"/home/mmockus/dev/chatbot/rGPT/RGPT.py\", line 386, in __call__\r\n output = self.__generate_text(final_prompt)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1140, in __call__\r\n return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1147, in run_single\r\n model_outputs = self.forward(model_inputs, **forward_params)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1046, in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n File \"/home/mmockus/dev/chatbot/rGPT/instruct_pipeline.py\", line 60, in _forward\r\n generated_sequence = self.model.generate(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py\", line 1764, in generate\r\n return self.sample(\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/generation/utils.py\", line 2861, in sample\r\n outputs = self(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = module._old_forward(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py\", line 1044, in forward\r\n outputs = self.model(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py\", line 929, in forward\r\n layer_outputs = decoder_layer(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = module._old_forward(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py\", line 654, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File 
\"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/accelerate/hooks.py\", line 165, in new_forward\r\n output = module._old_forward(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/models/mistral/modeling_mistral.py\", line 391, in forward\r\n past_key = past_key_value[0]\r\n File \"/opt/conda/lib/python3.10/site-packages/transformers/cache_utils.py\", line 78, in __getitem__\r\n raise KeyError(f\"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}\")\r\nKeyError: 'Cache only has 0 layers, attempted to access layer with index 0'\r\n\r\nThanks for yout help!", "I am facing the same problem while using the official transformers examples (pytorch/run_clm with trainer). \r\n\r\nSpecifically, I only get this exact error when 1. training context window is larger than sliding window (4096), and 2. I am using zero3/fsdp. When context window is less than 4096, I get a different error, and when using zero1/2, training works as expected.", "This indeed seems like a caching issue. cc @gante\r\nIt seems like this snippet was not updated to work with the new Cache class: https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/mistral/modeling_mistral.py#L388-L407\r\n\r\nI suspect that not using Flash Attention 2 may solve the issue in the meantime.\r\n\r\n- Tom Aarsen", "> This indeed seems like a caching issue. cc @gante It seems like this snippet was not updated to work with the new Cache class:\r\n> \r\n> https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/mistral/modeling_mistral.py#L388-L407\r\n> \r\n> I suspect that not using Flash Attention 2 may solve the issue in the meantime.\r\n> \r\n> * Tom Aarsen\r\n\r\nHi, Thanks for the quick response. I can confirm that removing flash attention solves this issue. However removing it means that long contexts take up too much memory and i get a Cuda OOM.", "Opening a PR to fix it :)", "FYI, this seems to be a distributed error: in a single GPU device, the following script runs with no exceptions\r\n\r\n**EDIT**: it doesn't fail because `mistralai/Mistral-7B-Instruct-v0.2` doesn't use a sliding window 👀 \r\n\r\n```py\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer\r\n\r\nlong_text = \"Foo Bar \" * 5000\r\n\r\nmodel_id = \"mistralai/Mistral-7B-Instruct-v0.2\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, torch_dtype=torch.float16, device_map=\"auto\")\r\n\r\nmessages = [\r\n {\"role\": \"user\", \"content\": f\"Summarize the following:\\n{long_text}\"}\r\n]\r\n\r\ninputs = tokenizer.apply_chat_template(messages, return_tensors=\"pt\").to(model.device)\r\nprint(inputs.shape)\r\n\r\noutputs = model.generate(inputs, max_new_tokens=8192, do_sample=True)\r\nprint(outputs.shape)\r\n```", "I can confirm, this lines up with my experience since zero1/2 works but zero3/fsdp does not.", "The PR linked above fixes it :)", "Thank you so much! 
I can confirm training works now :)", "Hi @gante , my transformers version is 4.36.2 (having viewed this issue I pip -U immediatly ) but still face the same question above when using standard llama2-7b-hf.\r\nWhen loading model, I get a warning \r\n\r\n> 'Instantiating LlamaAttentionLayerBetterTransformer without passing `layer_idx` is not recommended and will to errors during the forward call, if caching is used. Please make sure to provide a `layer_idx` when creating this class.'\r\n\r\nEventually\r\n\r\n> KeyError: 'Cache only has 0 layers, attempted to access layer with index 0'\r\n\r\n`self.model = AutoModelForCausalLM.from_pretrained(model_name,torch_dtype='auto',low_cpu_mem_usage=True).to_bettertransformer()`\r\nI dive into the code about modeling_llama.py \r\n![image](https://github.com/huggingface/transformers/assets/88258534/b8180aa0-7ef1-40b5-aeb3-6a849bbd254f)\r\nSo I check my config.json also, it seems nothing wrong with num_hidden_layers.\r\n`print(self.model.config.num_hidden_layers)` ->32\r\nis there any possible reason cause this? (btw since this issue is closed, shall I open a new one? : | Thanks for your help in advance! \r\n", "Hey @rangehow 👋 \r\n\r\nFrom your error, it seems that you are using BetterTransformer + Llama, which is deprecated. See [here](https://github.com/huggingface/optimum/releases/tag/v1.16.1) for more information (and how to update it)", "> > This indeed seems like a caching issue. cc @gante It seems like this snippet was not updated to work with the new Cache class:\r\n> > https://github.com/huggingface/transformers/blob/2788f8d8d5f9cee2fe33a9292b0f3570bd566a6d/src/transformers/models/mistral/modeling_mistral.py#L388-L407\r\n> > \r\n> > I suspect that not using Flash Attention 2 may solve the issue in the meantime.\r\n> > \r\n> > * Tom Aarsen\r\n> \r\n> Hi, Thanks for the quick response. I can confirm that removing flash attention solves this issue. However removing it means that long contexts take up too much memory and i get a Cuda OOM.\r\n\r\n\r\n\r\nDo you know if have same issue? I am using TheBloke/Llama-2-13B-chat-GPTQ with GPU A100 80GB? 
if yes, please help me which file to change or update?\r\n\r\n\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 1183, in forward\r\n outputs = self.model(\r\n ^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1511, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 1070, in forward\r\n layer_outputs = decoder_layer(\r\n ^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1511, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py\", line 798, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n ^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1511, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/auto_gptq/nn_modules/fused_llama_attn.py\", line 62, in forward\r\n kv_seq_len += past_key_value[0].shape[-2]\r\n ~~~~~~~~~~~~~~^^^\r\n File \"/home//miniconda3/envs/NetAI_B/lib/python3.11/site-packages/transformers/cache_utils.py\", line 78, in __getitem__\r\n raise KeyError(f\"Cache only has {len(self)} layers, attempted to access layer with index {layer_idx}\")\r\nKeyError: 'Cache only has 0 layers, attempted to access layer with index 0'", "@bp020108 make sure you're running the latest `transformers` version. If the issue persists after updating `transformers` to `v4.38`, we'll need a script to reproduce your issue 🤗 ", "looks like I have 4.37.2. \r\n\r\n-vm:~/miniconda3/LLAMA/localchat$ pip3.11 list | grep transf\r\nsentence-transformers 2.2.2\r\ntransformers 4.37.2\r\n\r\nyou want me to upgrade this?\r\npip3.11 install transformers==4.38?\r\n" ]
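To illustrate the cache change behind this error, the sketch below contrasts legacy tuple-style cache indexing with the `Cache`/`DynamicCache` API introduced in transformers v4.36. The tensor shapes are dummy values and this is not the actual patch from the linked PR, just an illustration of why `past_key_value[0]` on a fresh cache raises the `KeyError` seen in the tracebacks.

```python
# Illustrative only -- not the actual fix merged for this issue.
import torch
from transformers.cache_utils import DynamicCache

cache = DynamicCache()
layer_idx = 0

# Legacy-style access fails on an empty cache object:
try:
    _ = cache[layer_idx]
except KeyError as err:
    print(err)  # 'Cache only has 0 layers, attempted to access layer with index 0'

# New-style access: query the cached length, then append this step's keys/values.
past_len = cache.get_seq_length(layer_idx)      # 0 for an empty cache
key_states = torch.randn(1, 8, 4, 64)           # (batch, heads, seq, head_dim) -- dummy shapes
value_states = torch.randn(1, 8, 4, 64)
key_states, value_states = cache.update(key_states, value_states, layer_idx)
print(past_len, cache.get_seq_length(layer_idx))  # 0 4
```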
1,702
1,704
1,702
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.35 - Python version: 3.11.4 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.3.3 - Accelerate version: 0.25.0.dev0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @gante ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer long_text = # ... model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, use_flash_attention_2=True, torch_dtype=torch.float16, device_map="auto") streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) messages = [ {"role": "user", "content": f"Summarize the following:\n{long_text}"} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) outputs = model.generate(inputs, max_new_tokens=8192, do_sample=True, streamer=streamer) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ### Expected behavior Expected it to work or at least give me a cuda error.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27985/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27985/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27983
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27983/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27983/comments
https://api.github.com/repos/huggingface/transformers/issues/27983/events
https://github.com/huggingface/transformers/pull/27983
2,038,546,330
PR_kwDOCUB6oc5h1dmL
27,983
fix typo in dvclive callback
{ "login": "dberenbaum", "id": 2308172, "node_id": "MDQ6VXNlcjIzMDgxNzI=", "avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dberenbaum", "html_url": "https://github.com/dberenbaum", "followers_url": "https://api.github.com/users/dberenbaum/followers", "following_url": "https://api.github.com/users/dberenbaum/following{/other_user}", "gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions", "organizations_url": "https://api.github.com/users/dberenbaum/orgs", "repos_url": "https://api.github.com/users/dberenbaum/repos", "events_url": "https://api.github.com/users/dberenbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/dberenbaum/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? Fixes a typo in the dvclive callback that prevents it from being set as initialized. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts or @muellerzr Would one of you mind taking a look? Apologies for not catching this. Our internal tests missed this scenario where initialization depends on the `setup()` method.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27983/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27983/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27983", "html_url": "https://github.com/huggingface/transformers/pull/27983", "diff_url": "https://github.com/huggingface/transformers/pull/27983.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27983.patch", "merged_at": 1702416599000 }
https://api.github.com/repos/huggingface/transformers/issues/27982
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27982/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27982/comments
https://api.github.com/repos/huggingface/transformers/issues/27982/events
https://github.com/huggingface/transformers/pull/27982
2,038,542,337
PR_kwDOCUB6oc5h1cuw
27,982
fix bug in dvclive callback
{ "login": "dberenbaum", "id": 2308172, "node_id": "MDQ6VXNlcjIzMDgxNzI=", "avatar_url": "https://avatars.githubusercontent.com/u/2308172?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dberenbaum", "html_url": "https://github.com/dberenbaum", "followers_url": "https://api.github.com/users/dberenbaum/followers", "following_url": "https://api.github.com/users/dberenbaum/following{/other_user}", "gists_url": "https://api.github.com/users/dberenbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/dberenbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dberenbaum/subscriptions", "organizations_url": "https://api.github.com/users/dberenbaum/orgs", "repos_url": "https://api.github.com/users/dberenbaum/repos", "events_url": "https://api.github.com/users/dberenbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/dberenbaum/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? Fixes a typo in the dvclive callback that prevents it from being set as initialized. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @amyeroberts or @muellerzr Would one of you mind taking a look? Apologies for not catching this. Our internal tests missed this scenario where initialization depends on the `setup()` method.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27982/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27982/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27982", "html_url": "https://github.com/huggingface/transformers/pull/27982", "diff_url": "https://github.com/huggingface/transformers/pull/27982.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27982.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27981
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27981/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27981/comments
https://api.github.com/repos/huggingface/transformers/issues/27981/events
https://github.com/huggingface/transformers/pull/27981
2,038,458,905
PR_kwDOCUB6oc5h1Kwy
27,981
[doc] fix typo
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
CONTRIBUTOR
null
Fixing doc to use the correct package name. Thank you.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27981/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27981/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27981", "html_url": "https://github.com/huggingface/transformers/pull/27981", "diff_url": "https://github.com/huggingface/transformers/pull/27981.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27981.patch", "merged_at": 1702413162000 }
https://api.github.com/repos/huggingface/transformers/issues/27980
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27980/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27980/comments
https://api.github.com/repos/huggingface/transformers/issues/27980/events
https://github.com/huggingface/transformers/issues/27980
2,038,207,555
I_kwDOCUB6oc55fJRD
27,980
LLaMa-VID: An Image is Worth 2 Tokens in LLMs
{ "login": "amyeroberts", "id": 22614925, "node_id": "MDQ6VXNlcjIyNjE0OTI1", "avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/amyeroberts", "html_url": "https://github.com/amyeroberts", "followers_url": "https://api.github.com/users/amyeroberts/followers", "following_url": "https://api.github.com/users/amyeroberts/following{/other_user}", "gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}", "starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions", "organizations_url": "https://api.github.com/users/amyeroberts/orgs", "repos_url": "https://api.github.com/users/amyeroberts/repos", "events_url": "https://api.github.com/users/amyeroberts/events{/privacy}", "received_events_url": "https://api.github.com/users/amyeroberts/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[ "Hi @amyeroberts , Can I take up the task of porting this model ?\r\n", "@venkateshtata Certainly! Feel free to open a PR and ping me when it's ready for review. Let us know if you have any questions about porting into transformers in the meantime :) " ]
1,702
1,704
null
COLLABORATOR
null
### Model description LLaMA-VID is an open-source chatbot trained by fine-tuning LLaMA/Vicuna on GPT-generated multimodal instruction-following data. LLaMA-VID empowers existing frameworks to support hour-long videos and pushes their upper limit with an extra context token. We build this repo based on LLaVA. LLaMA-VID contains three parts: encoder and decoder are adopted to produce visual embedding and text-guided features, respectively; context token and content token are transformed with the tailored token generation strategy; instruction tuning is designed to unleash the potential of LLMs for image and video. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Page: https://llama-vid.github.io/ Weights already available on HF: https://huggingface.co./YanweiLi/llama-vid-7b-pretrain-224 Code: https://github.com/dvlab-research/LLaMA-VID
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27980/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27980/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27979
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27979/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27979/comments
https://api.github.com/repos/huggingface/transformers/issues/27979/events
https://github.com/huggingface/transformers/pull/27979
2,038,075,732
PR_kwDOCUB6oc5hz3Of
27,979
Generate: speculative decoding
{ "login": "gante", "id": 12240844, "node_id": "MDQ6VXNlcjEyMjQwODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gante", "html_url": "https://github.com/gante", "followers_url": "https://api.github.com/users/gante/followers", "following_url": "https://api.github.com/users/gante/following{/other_user}", "gists_url": "https://api.github.com/users/gante/gists{/gist_id}", "starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gante/subscriptions", "organizations_url": "https://api.github.com/users/gante/orgs", "repos_url": "https://api.github.com/users/gante/repos", "events_url": "https://api.github.com/users/gante/events{/privacy}", "received_events_url": "https://api.github.com/users/gante/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@patrickvonplaten tagging you here for a 2nd set of eyes on the speculative decoding method (changes in `utils.py`), which I'm assuming you're familiar with. Feel free to delegate to someone else who is familiar with the method! 🤗 ", "Thanks for adding this! Can we split this up into two separate PRs: one changing the assisted generation and the other adding speculative decoding?", "@amyeroberts pulled the assisted generation changes into this PR: https://github.com/huggingface/transformers/pull/28030\r\n\r\nAfter it is merged, I will rebase this one and ping you again -- this one will become exclusively about speculative decoding 🤗 ", "@amyeroberts I've rerun the slow tests, and I can confirm they are passing. Ready for a review :)", "@patrickvonplaten the two types of sampling are needed :D \r\n\r\nNew candidate-based methods are popping up (e.g. https://github.com/huggingface/transformers/pull/27775), and they don't necessarily have logits. As such, speculative decoding, which needs the candidates' logits, can't be applied to those methods. ", "> @patrickvonplaten the two types of sampling are needed :D\r\n> \r\n> New candidate-based methods are popping up (e.g. #27775), and they don't necessarily have logits. As such, speculative decoding, which needs the candidates' logits, can't be applied to those methods.\r\n\r\nBut shouldn't they just be the \"own\" method now? I.e. I don't think we should put https://github.com/huggingface/transformers/pull/27775 into the speculative decoding method no? \r\n\r\n", "@patrickvonplaten #27775 does not introduce changes to assisted generation 🤗 In #28030 I've abstracted the candidate generation part of assisted generation. We now load candidate generators the same way as we load the logits processors:\r\n\r\nhttps://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/generation/utils.py#L899-L919\r\n\r\nIn assisted generation, we call the candidate generator to get candidate sequences (which may or may not contain associated logits, depending on the method)\r\n\r\nhttps://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/generation/utils.py#L4588\r\n\r\nThe technique in #27775 can thus be added by adding a new candidate generator in ` _get_candidate_generator`. Other candidate generators may be added the same way, enabling users to experiment with the concept of candidates!\r\n\r\nBecause needing the logits (for speculative decoding) is a very limiting constraint, I'd rather keep the two sampling paths.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27979). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@amyeroberts PR comments addressed 🤗 \r\n\r\n@patrickvonplaten Unless you don't strongly oppose, I'd like to keep the two sampling paths, for the reasons I've written [here](https://github.com/huggingface/transformers/pull/27979#issuecomment-1860125398) -- I think it will be beneficial in the long run! :) (otherwise, a whole new generation method has to be written for #27775)", "@amyeroberts -- @patrickvonplaten and I had a chat about whether to keep the two sampling paths or not. For context, here's what we agreed on:\r\n- It's okay to leave it as is, and perhaps abstract the different ways we accept candidates into a `candidate_checker` block. 
\r\n- Be conservative on adding new candidate generators, so we don't end up with unused methods\r\n- [in a follow-up PR] squash other cases where the decoding method is the same except for the token selection, like `greedy_decoding` + `sample`\r\n- [in a follow-up PR] mode each decoding method into its own file. There are several private functions in `generation/utils.py` that are exclusively used with one generation method.", "@gante \r\nAccording to experiments reported in Leviathan's paper, speculative decoding (SD) has higher speedup with greedy decoding (temp=0). However, in the current implementation, SD works only with do_sample=True. ", "@jmamou speculative decoding with `do_sample=False` (or `temp=0`) was already encoded in `assisted_generation`, before this PR -- try calling `model.generate(input_ids, do_sample=False, assistant_model=assistant_model)` :)", "@gante \r\nSince acceptance criteria are different between speculative decoding and assisted generation, I think that it would be great to be able to run both speculative decoding and assisted generation with no sampling.", "@gante \r\nI implemented it. I can submit a PR.", "@gante \r\nIn previous implementation of assisted generation (4.33) with heuristical update of `num_assistant_tokens` (or `max_assistant_tokens`), the value of `num_assistant_tokens` was preserved between 2 consecutive generate() calls.\r\n\r\nIn current implementation (4.38), `num_assistant_tokens` is updated by the `candidate_generator` during the generation but `assistant_model.generation_config.num_assistant_tokens` is not updated at the end of the generation. Therefore, next call to generate will start with the initial value of `assistant_model.generation_config.num_assistant_tokens` (5).\r\n\r\nIs it intentional? If that's a bug, I can open a PR to fix it. ", "@jmamou \r\n\r\n> Since acceptance criteria are different between speculative decoding and assisted generation, I think that it would be great to be able to run both speculative decoding and assisted generation with no sampling.\r\n\r\nNot sure if this is a good idea\r\n1. if we see greedy decoding as applying `temperature=0`, the model probability will be `1` at the most likely token and `0` everywhere else. In turn, this implies that `p_i/q_i` is `>=1` at all positions, and thus all candidate tokens would be accepted 👉 speculative decoding would be the same as simply using the assistant model \r\n2. If we don't apply `temperature=0`, then it would be sampling -- in other words, it wouldn't be greedy decoding\r\n\r\n> In previous implementation of assisted generation (4.33) with heuristical update of num_assistant_tokens (or max_assistant_tokens), the value of num_assistant_tokens was preserved between 2 consecutive generate() calls.\r\nIn current implementation (4.38), num_assistant_tokens is updated by the candidate_generator during the generation but assistant_model.generation_config.num_assistant_tokens is not updated at the end of the generation. Therefore, next call to generate will start with the initial value of assistant_model.generation_config.num_assistant_tokens (5).\r\nIs it intentional? If that's a bug, I can open a PR to fix it.\r\n\r\nThis is a good point! A PR to revert to the previous behaviour (with a test) would be appreciated 🙏 " ]
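For readers following the thread above, a self-contained sketch of the token-level acceptance rule from Algorithm 1 of the speculative decoding paper that the discussion refers to. This is an illustration of the math being debated (all names are made up for the example), not the code added by the PR.

```python
import torch


def accept_candidates(p, q, candidate_ids):
    """p, q: target/draft probabilities per candidate position, shape (num_candidates, vocab_size).
    candidate_ids: tokens proposed by the draft model, shape (num_candidates,)."""
    for i, tok in enumerate(candidate_ids.tolist()):
        ratio = p[i, tok] / q[i, tok]
        # accept the draft token with probability min(1, p/q)
        if torch.rand(()) < torch.clamp(ratio, max=1.0):
            continue
        # reject: resample from the residual distribution max(0, p - q), renormalized
        residual = torch.clamp(p[i] - q[i], min=0.0)
        replacement = torch.multinomial(residual / residual.sum(), num_samples=1)
        return i, replacement  # i tokens accepted, plus one corrective token
    return len(candidate_ids), None  # every candidate accepted
```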
1,702
1,706
1,702
MEMBER
null
# What does this PR do? Useful context: In a recent PR (#27750), the candidate generation in assisted generation got abstracted, so we can host new candidate generation techniques (such as #27722). _______________________________________________________ This PR: 1. ~Reworks assisted candidate generation to call `.generate()`, instead of having its own custom generation loop. For most models this is nothing more than a nice abstraction. However, for models with a custom `generate()` function, this means the assistant model will now make use of it! (🤔 does this mean that DistilWhisper gets better numbers with this refactor?)~ Edit: moved to #28030 2. Adds speculative decoding ([paper](https://arxiv.org/pdf/2211.17192.pdf), see Algorithm 1). This implied a minor interface change in the candidate generation class, which should be okay since it hasn't been released :) The following tests were run locally and are passing: 1. `RUN_SLOW=1 py.test tests/models/whisper/ -k speculative` 2. `py.test tests/ -k test_assisted` (which now triggers speculative decoding) ________________________________________________________ TODO: - [ ] Benchmark speculative decoding
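As a usage note, both paths are driven entirely through `generate()`; a minimal sketch, assuming `gpt2-large`/`gpt2` as an arbitrary target/assistant pair that shares a tokenizer:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
model = AutoModelForCausalLM.from_pretrained("gpt2-large")
assistant = AutoModelForCausalLM.from_pretrained("gpt2")  # smaller draft model

inputs = tokenizer("The quick brown fox", return_tensors="pt")
# do_sample=False keeps the original greedy assisted-generation behaviour;
# do_sample=True takes the new speculative-decoding sampling path from this PR.
out = model.generate(
    **inputs, assistant_model=assistant, do_sample=True, temperature=0.7, max_new_tokens=32
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```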
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27979/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27979", "html_url": "https://github.com/huggingface/transformers/pull/27979", "diff_url": "https://github.com/huggingface/transformers/pull/27979.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27979.patch", "merged_at": 1702994310000 }
https://api.github.com/repos/huggingface/transformers/issues/27978
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27978/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27978/comments
https://api.github.com/repos/huggingface/transformers/issues/27978/events
https://github.com/huggingface/transformers/pull/27978
2,038,063,478
PR_kwDOCUB6oc5hz0iz
27,978
[`Add Deci`] Llama with variable GQA per layer
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27978). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Glad to know it doesn't cause (many) extra issues 😆 ", "Not at all! It's really nice for reviews, we are certain that only this part is what we need to look", "Picking this back soon 🤗 " ]
1,702
1,706
null
COLLABORATOR
null
# What does this PR do? Add support for Deci. `# Ignore copy` makes it a lot easier @ydshieh 🪂
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27978/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27978/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27978", "html_url": "https://github.com/huggingface/transformers/pull/27978", "diff_url": "https://github.com/huggingface/transformers/pull/27978.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27978.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27977
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27977/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27977/comments
https://api.github.com/repos/huggingface/transformers/issues/27977/events
https://github.com/huggingface/transformers/issues/27977
2,037,859,553
I_kwDOCUB6oc55d0Th
27,977
Image size understanding in DinoV2 and Transformers generally
{ "login": "lombardata", "id": 39915110, "node_id": "MDQ6VXNlcjM5OTE1MTEw", "avatar_url": "https://avatars.githubusercontent.com/u/39915110?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lombardata", "html_url": "https://github.com/lombardata", "followers_url": "https://api.github.com/users/lombardata/followers", "following_url": "https://api.github.com/users/lombardata/following{/other_user}", "gists_url": "https://api.github.com/users/lombardata/gists{/gist_id}", "starred_url": "https://api.github.com/users/lombardata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lombardata/subscriptions", "organizations_url": "https://api.github.com/users/lombardata/orgs", "repos_url": "https://api.github.com/users/lombardata/repos", "events_url": "https://api.github.com/users/lombardata/events{/privacy}", "received_events_url": "https://api.github.com/users/lombardata/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @lombardata,\r\n\r\nYou can specify both the size the image is resized to during the `resize` call and the crop size e.g.: \r\n\r\n```\r\nimage_processor = AutoImageProcessor.from_pretrained(checkpoint, crop_size={\"height\": 320, \"width\": 256})\r\n```\r\n\r\nnote: I don't know if in this timm example if the dimensions are in (h,w) or (w,h) order. ", "hi @amyeroberts , thank you very much for your quick reply. In this case what is the \"meaning\" of the **image_size** parameter in the model config file?\r\nLet's say that my images are 1080 x 1920 and I process them with : \r\n`image_processor = AutoImageProcessor(checkpoint, crop_size={\"height\": 1080, \"width\": 1920})\r\n`then if in the model config.json file I keep the default parameter : \r\n` \"image_size\": 518,\r\n`What would be the behaviour of the training?\r\nThank you a lot !", "You're hitting on one of the tricky coupling issues between models and their data!\r\n\r\nFor the image processor, `crop_size` and `size` control the processing logic. This is independent of the model and modifying the behaviour of the image processor won't automatically update the necessary model params. \r\n\r\nFor the DinoV2 model, `image_size` refers to the input image size i.e. the dimensions of the processed images. It controls how the [patches are extracted when creating the embeddings](https://github.com/huggingface/transformers/blob/78172dcdb7cdaf04ec6697d4747c505b2e7b0df0/src/transformers/models/dinov2/modeling_dinov2.py#L147). However, the model employs [interpolation on the embeddings](https://github.com/huggingface/transformers/blob/78172dcdb7cdaf04ec6697d4747c505b2e7b0df0/src/transformers/models/dinov2/modeling_dinov2.py#L83) if the input image is of a different resolution. So for inference, you should be able to pass in different sized images and run things fine. \r\n\r\nIf you want to train a model, then I'd suggest aligning the processor and model configurations. \r\n\r\nTo align the two, you'll need to do something like this: \r\n\r\n```py\r\nfrom transformers import Dinov2Config, Dinov2ForImageClassification, AutoImageProcessor\r\n\r\nimage_height, image_width = 1080, 1920\r\n\r\ncheckpoint = \"facebook/dinov2-base\"\r\n\r\n# Create a new model with randomly initialized weights\r\nmodel_config = Dinov2Config.from_pretrained(checkpoint, image_size=(image_height, image_width))\r\nmodel = Dinov2ForImageClassification(model_config)\r\nimage_processor = AutoImageProcessor.from_pretrained(\r\n checkpoint, image_size={\"height\": image_heaight, \"width\": image_width}\r\n)\r\n```", "> You're hitting on one of the tricky coupling issues between models and their data!\r\n> \r\n> For the image processor, `crop_size` and `size` control the processing logic. This is independent of the model and modifying the behaviour of the image processor won't automatically update the necessary model params.\r\n> \r\n> For the DinoV2 model, `image_size` refers to the input image size i.e. the dimensions of the processed images. It controls how the [patches are extracted when creating the embeddings](https://github.com/huggingface/transformers/blob/78172dcdb7cdaf04ec6697d4747c505b2e7b0df0/src/transformers/models/dinov2/modeling_dinov2.py#L147). However, the model employs [interpolation on the embeddings](https://github.com/huggingface/transformers/blob/78172dcdb7cdaf04ec6697d4747c505b2e7b0df0/src/transformers/models/dinov2/modeling_dinov2.py#L83) if the input image is of a different resolution. 
So for inference, you should be able to pass in different sized images and run things fine.\r\n> \r\n> If you want to train a model, then I'd suggest aligning the processor and model configurations.\r\n> \r\n> To align the two, you'll need to do something like this:\r\n> \r\n> ```python\r\n> from transformers import Dinov2Config, Dinov2ForImageClassification, AutoImageProcessor\r\n> \r\n> image_height, image_width = 1080, 1920\r\n> \r\n> checkpoint = \"facebook/dinov2-base\"\r\n> \r\n> # Create a new model with randomly initialized weights\r\n> model_config = Dinov2Config.from_pretrained(checkpoint, image_size=(image_height, image_width))\r\n> model = Dinov2ForImageClassification(model_config)\r\n> image_processor = AutoImageProcessor.from_pretrained(\r\n> checkpoint, image_size={\"height\": image_heaight, \"width\": image_width}\r\n> )\r\n> ```\r\n\r\nThank you very much @amyeroberts for your complete reply.\r\nLooking at the source code of `Dinov2Config` I found that the `image_size` parameter must be an int (and not a dict of heigth and width) : \r\n` image_size (`int`, *optional*, defaults to 224):`\r\nso that I don't know if we are allowed to pass rectangular images to this specific model. \r\nMoreover the `AutoImageProcessor` (which in our case is a `BitImageProcessor`) should accept as you said an input size : \r\n`size: Dict[str, int] = None,\r\n`\r\nand\r\n\r\n` if \"shortest_edge\" in size:\r\n size = size[\"shortest_edge\"]\r\n default_to_square = False\r\n elif \"height\" in size and \"width\" in size:\r\n size = (size[\"height\"], size[\"width\"])\r\n else:\r\n raise ValueError(\"Size must contain either 'shortest_edge' or 'height' and 'width'.\")\r\n`\r\nbut when I try to instantiate it : \r\n```\r\nimage_processor2= AutoImageProcessor.from_pretrained(checkpoint_name, \r\n size={\"height\": 720, \"width\": 1080},\r\n #size={\"shortest_edge\": 518},\r\n do_center_crop=True, \r\n do_resize=True, \r\n do_rescale = True, \r\n do_normalize=True)\r\n```\r\nI get the following error : \r\n```\r\nFile .../lib/python3.8/site-packages/transformers/models/bit/image_processing_bit.py:152, in BitImageProcessor.resize(self, image, size, resample, data_format, input_data_format, **kwargs)\r\n size = get_size_dict(size, default_to_square=False)\r\n if \"shortest_edge\" not in size:\r\n--> raise ValueError(f\"The `size` parameter must contain the key `shortest_edge`. Got {size.keys()}\")\r\n output_size = get_resize_output_image_size(\r\n image, size=size[\"shortest_edge\"], default_to_square=False, input_data_format=input_data_format\r\n )\r\n```\r\nwhich is strange since in the corresponding doc : \r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/bit/image_processing_bit.py\r\nthe error is different : \r\n` raise ValueError(\"Size must contain either 'shortest_edge' or 'height' and 'width'.\")\r\n`\r\nDo you know where I'm wrong ?\r\nSorry for bothering you but I'm a little bit lost :) ", "Hi @lombardata, \r\n\r\n`image_size` for the model is an int, but `size` for the image processors should be a dictionary. This is because, when images are passed to the model, their (h, w) dimensions are defined. \r\n\r\nHowever, when processing an image, the output size isn't always fixed, the output height, width it can be calculated based on the input dimensions. For example `size={\"shortest_edge\": s}`, will resize the image so that the shortest edge of the image matches, `\"shortest_edge\"` and it will rescale the other edge to match the input aspect ratio. 
\r\n\r\nWith regards to the error you're encountering, which version of transformers are you running from? I was able to run the example snippet without error on the most recent version. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hi @amyeroberts , thank you very much for your reply.\r\n\r\n> With regards to the error you're encountering, which version of transformers are you running from? I was able to run the example snippet without error on the most recent version.\r\n\r\nI was running version 4.34.1; now I upgraded to the latest and it's working fine. Thanks !\r\n" ]
1,702
1,705
1,705
NONE
null
### Feature request Hi everyone, I was playing with the Dinov2 model in the HF transformers library and I have a question: is there a way to change the model input image size like in the timm library? i.e. on 11 August they added an option to change input image sizes here: https://github.com/huggingface/pytorch-image-models e.g. `"Example validation cmd to test w/ non-square resize python validate.py /imagenet --model swin_base_patch4_window7_224.ms_in22k_ft_in1k --amp --amp-dtype bfloat16 --input-size 3 256 320 --model-kwargs window_size=8,10 img_size=256,320"` Is there a way to do the same with the transformers library? I tried to change the image_size in the config.json file, but since the image is then processed by the processor, in my understanding the output would always have the size given by the "crop_size" parameter in preprocessor_config.json. What would be the best practice for feeding an entire image to the model (if there is a way)? Thank you all in advance! ### Motivation Add a custom image input size like in timm. ### Your contribution timm is an HF library, so it should be easy to integrate this feature into the transformers lib.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27977/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27976
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27976/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27976/comments
https://api.github.com/repos/huggingface/transformers/issues/27976/events
https://github.com/huggingface/transformers/pull/27976
2,037,857,883
PR_kwDOCUB6oc5hzG-V
27,976
Better key error for AutoConfig
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
MEMBER
null
When users try to load a model with AutoModel/AutoConfig but the model type isn't recognized, they get a confusing error about missing keys. However, these errors are usually caused by their version of `Transformers` being out of date. I've seen several users asking for help with this issue trying to load `mixtral`, so I wrote a better error message for next time!
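A rough sketch of the kind of guard this adds around the auto-config mapping lookup (the function name and exact wording here are illustrative, not the merged diff):

```python
from transformers.models.auto.configuration_auto import CONFIG_MAPPING


def config_class_for(model_type: str):
    try:
        return CONFIG_MAPPING[model_type]
    except KeyError:
        # surface a friendlier hint instead of a bare KeyError on the mapping
        raise ValueError(
            f"The checkpoint you are trying to load has model type `{model_type}`, "
            "but Transformers does not recognize this architecture. This could be because "
            "the checkpoint is malformed, or because your version of Transformers is out of date."
        )
```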
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27976/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27976", "html_url": "https://github.com/huggingface/transformers/pull/27976", "diff_url": "https://github.com/huggingface/transformers/pull/27976.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27976.patch", "merged_at": 1702392115000 }
https://api.github.com/repos/huggingface/transformers/issues/27975
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27975/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27975/comments
https://api.github.com/repos/huggingface/transformers/issues/27975/events
https://github.com/huggingface/transformers/issues/27975
2,037,705,585
I_kwDOCUB6oc55dOtx
27,975
ImageToTextPipeline does not support InstructBlip Models
{ "login": "elena-soare20", "id": 114069526, "node_id": "U_kgDOBsyQFg", "avatar_url": "https://avatars.githubusercontent.com/u/114069526?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elena-soare20", "html_url": "https://github.com/elena-soare20", "followers_url": "https://api.github.com/users/elena-soare20/followers", "following_url": "https://api.github.com/users/elena-soare20/following{/other_user}", "gists_url": "https://api.github.com/users/elena-soare20/gists{/gist_id}", "starred_url": "https://api.github.com/users/elena-soare20/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elena-soare20/subscriptions", "organizations_url": "https://api.github.com/users/elena-soare20/orgs", "repos_url": "https://api.github.com/users/elena-soare20/repos", "events_url": "https://api.github.com/users/elena-soare20/events{/privacy}", "received_events_url": "https://api.github.com/users/elena-soare20/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Hi @elena-soare20, thanks for raising this issue! \r\n\r\nYes, at the moment InstructBLIP isn't compatible with the pipeline because of the specific processing it does - which is different from many other models. Specifically, it has two tokenizers to create `qformer_input_ids` and `input_ids` to be passed to the model. There's some ongoing work to unify our processors so that hopefully more models like these can be quickly integrated. \r\n\r\nHappy to review any PRs for anyone in the community who would like to enable this. See also: #21110 \r\n", "hey @amyeroberts I would be happy to work on this\r\n", "@nakranivaibhav Awesome! Feel free to ping me for review when you have a PR ready 🤗 ", "@amyeroberts Give me some time on this. The models are very large to reproduce the error. I am figuring out where to reproduce the error to start working on it.", "@nakranivaibhav If all you need is a model to test functionality i.e. a randomly initialized model that outputs nonsense is fine, then the small model used during tests might help here. The config to build the model and test inputs can be [found here](https://github.com/huggingface/transformers/blob/f40b87de0ca234df61f76928956c4a2118c0b548/tests/models/instructblip/test_modeling_instructblip.py#L410).", "@amyeroberts Yes that i what I need. Thank you for pointing it out." ]
1,702
1,706
null
NONE
null
### System Info - `transformers` version: 4.36.0.dev0 - Platform: Linux-generic-x86_64 - Python version: 3.8.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 1.10.0a0+0aef44c (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @Narsil @amyeroberts ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl") pipe = pipeline("image-to-text", model="Salesforce/instructblip-flan-t5-xl", processor=processor.image_processor, tokenizer=processor.tokenizer, device=0) prompt = "describe the following image" url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) pipe(images=image, prompt=prompt) ### Expected behavior Returns a textual description of the image. Instead, I get an error: `TypeError: ones_like(): argument 'input' (position 1) must be Tensor, not NoneType` I suspect this is caused by the `ImageToTextPipeline.preprocess()`, where we should have custom behaviour for InstructBlip models to process the image and text in one go: `inputs = processor(images=image, text=prompt, return_tensors="pt")`
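Until the pipeline handles the dual tokenizers, a workaround sketch that calls the processor and model directly (same checkpoint, prompt and image as the reproduction above):

```python
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl")
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xl")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# a single processor call builds both `input_ids` and `qformer_input_ids`
inputs = processor(images=image, text="describe the following image", return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=50)
print(processor.batch_decode(generated, skip_special_tokens=True)[0].strip())
```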
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27975/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27975/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27974
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27974/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27974/comments
https://api.github.com/repos/huggingface/transformers/issues/27974/events
https://github.com/huggingface/transformers/issues/27974
2,037,705,024
I_kwDOCUB6oc55dOlA
27,974
how to replace the existing token in a tokenizer
{ "login": "muziyongshixin", "id": 21971718, "node_id": "MDQ6VXNlcjIxOTcxNzE4", "avatar_url": "https://avatars.githubusercontent.com/u/21971718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muziyongshixin", "html_url": "https://github.com/muziyongshixin", "followers_url": "https://api.github.com/users/muziyongshixin/followers", "following_url": "https://api.github.com/users/muziyongshixin/following{/other_user}", "gists_url": "https://api.github.com/users/muziyongshixin/gists{/gist_id}", "starred_url": "https://api.github.com/users/muziyongshixin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muziyongshixin/subscriptions", "organizations_url": "https://api.github.com/users/muziyongshixin/orgs", "repos_url": "https://api.github.com/users/muziyongshixin/repos", "events_url": "https://api.github.com/users/muziyongshixin/events{/privacy}", "received_events_url": "https://api.github.com/users/muziyongshixin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @muziyongshixin, thanks for raising an issue!\r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nTo learn about how to how to modify the tokenizers, you can check out the documentation, [1](https://huggingface.co./docs/transformers/v4.36.0/en/main_classes/tokenizer#tokenizer), [2](https://huggingface.co./docs/transformers/v4.36.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer.add_tokens). For example, you can add tokens to the tokenzers vocabulary by using the [`add_tokens` method](https://huggingface.co./docs/transformers/v4.36.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer.add_tokens). ", "> Hi @muziyongshixin, thanks for raising an issue!\r\n> \r\n> This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n> \r\n> To learn about how to how to modify the tokenizers, you can check out the documentation, [1](https://huggingface.co./docs/transformers/v4.36.0/en/main_classes/tokenizer#tokenizer), [2](https://huggingface.co./docs/transformers/v4.36.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer.add_tokens). For example, you can add tokens to the tokenzers vocabulary by using the [`add_tokens` method](https://huggingface.co./docs/transformers/v4.36.0/en/main_classes/tokenizer#transformers.PreTrainedTokenizer.add_tokens).\r\n\r\nthanks for your reply@amyeroberts \r\n\r\nI know the `add_tokens` function, but I don't want to change the vocabulary size, this will introduce some inconvience in finetuning process(like resize the embedding layer and the lm_head).\r\nSo I am wondering whether there is a method to replace the existing tokens.\r\n\r\nP.S. I am using a sentencepiece tokenizer which is released in [this baichuan2 repo](https://github.com/baichuan-inc/Baichuan2) \r\n", "@ArthurZucker will be best placed to answer this. I believe `add_tokens` or `add_special_tokens` is the safe way to do this, as it correctly keeps track of the modifications vs. from the loaded checkpoint and ensures the correct preprocessing is applied before mapping to the token id. It should be possible to define your own vocab files to load a tokenizer but I don't know about the guarantees of the behaviour vs the original tokenizer. ", "hey! You should modify manually both the `added_tokens_decoder` field (saved in the `tokenizer_config.json` ) and the `added_tokens` field (saved in the `tokenizer.json`). We don't really support this manually, but that is the recommended way to do it! (If the reserved tokens were already part of the vocab, so not AddedTokens, then you have to overwrite the vocab as well, the vocab files, to make sure they are removed) that would be hard than if it's just the content of the added tokens that you are trying to modify 😉 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
### Feature request I have a tokenizer which has lots of reserved tokens like below: ``` '<reserved_7>': 100, '<reserved_8>': 101, '<reserved_9>': 102, '<reserved_10>': 103, '<reserved_11>': 104, '<reserved_12>': 105, '<reserved_13>': 106, '<reserved_14>': 107, ``` I want to replace '<reserved_7>' with '<|im_start|>' and '<reserved_8>' with '<|im_end|>'. What I want to get is a tokenizer which behaves as below: tokenizer.encode('<|im_start|>') => 100 ### Motivation I want to replace '<reserved_7>' with '<|im_start|>' and '<reserved_8>' with '<|im_end|>'. ### Your contribution no
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27974/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27974/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27973
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27973/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27973/comments
https://api.github.com/repos/huggingface/transformers/issues/27973/events
https://github.com/huggingface/transformers/pull/27973
2,037,623,817
PR_kwDOCUB6oc5hyTqg
27,973
Fix SDPA correctness following torch==2.1.2 regression
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
COLLABORATOR
null
As explained in https://github.com/pytorch/pytorch/issues/112577, `torch==2.1.2` reintroduces a bug (that was first introduced in 2.1.0 and fixed in 2.1.1) in SDPA where the operator produces wrong outputs when using a custom `attn_mask`, cuda device and memory-efficient attention backend. This PR makes it so that we don't silently fall into this bug. Running `RUN_SLOW=1 CUDA_VISIBLE_DEVICES=0 pytest tests/models/gpt_bigcode/ -s -vvvvv`, we have: ### With `torch==2.1.1` without this patch all tests pass ### With `torch==2.1.2` without this patch (regression) ``` FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_beam_sample_generate - RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_beam_search_generate - AssertionError: Lists differ: [[15, 95, 23, 94, 98, 62], [82, 51, 84, 98, 1, 0]] != [[15, 95, 23, 94, 98, 62], [82, 51, 84, 98, 66, 21]] FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_constrained_beam_search_generate - AssertionError: Lists differ: [[29,[207 chars] 48, 73, 79, 64, 93, 83, 40], [74, 76, 22, 92,[58 chars] 40]] != [[29,[207 chars] 48, 14, 82, 4, 46, 83, 4... FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_flash_attn_2_generate_padding_right - AssertionError: False is not true FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_generate_continue_from_past_key_values - AssertionError: False is not true FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelTest::test_group_beam_search_generate - AssertionError: Lists differ: [[85,[33 chars] 31, 68, 25], [93, 70, 87, 4, 69, 8], [93, 70, 87, 31, 68, 91]] != [[85,[33 chars] 31, 68, 7], [93, 70, 87,... FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelLanguageGenerationTest::test_generate_batched - AssertionError: Lists differ: ['def[78 chars]say_hello():\n // 1. Create a new array with the values of'] != ['def[78 chars]say_hello():\n print("... FAILED tests/models/gpt_bigcode/test_modeling_gpt_bigcode.py::GPTBigCodeModelLanguageGenerationTest::test_generate_simple - AssertionError: 'def print_hello_world():\n print("Hello World")\n\ndef print_hello_' != 'def print_hello_world():\n print("Hello World!")\n\n\nde... ``` ### With `torch==2.1.2` & `torch==2.1.1` with this patch All tests pass. For the other archs supporting SDPA (llama, whisper, falcon, idefics, bart), the tests are running fine & manual tests go fine as well.
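For anyone stuck on `torch==2.1.2` before this patch, a user-level workaround sketch (not the fix in this PR) is to keep SDPA away from the memory-efficient backend; the checkpoint below is only an example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
model = AutoModelForCausalLM.from_pretrained(
    "bigcode/gpt_bigcode-santacoder", attn_implementation="sdpa"
).to("cuda")

inputs = tok("def print_hello_world():", return_tensors="pt").to("cuda")
# the flash/math kernels are unaffected by the regression; only the mem-efficient
# backend returns wrong values with a custom attn_mask on CUDA in torch==2.1.2
with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=False):
    out = model.generate(**inputs, max_new_tokens=20)
print(tok.decode(out[0]))
```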
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27973/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27973/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27973", "html_url": "https://github.com/huggingface/transformers/pull/27973", "diff_url": "https://github.com/huggingface/transformers/pull/27973.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27973.patch", "merged_at": 1702395226000 }
https://api.github.com/repos/huggingface/transformers/issues/27972
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27972/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27972/comments
https://api.github.com/repos/huggingface/transformers/issues/27972/events
https://github.com/huggingface/transformers/issues/27972
2,037,560,259
I_kwDOCUB6oc55crPD
27,972
T5 model: There were missing keys in the checkpoint model loaded: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight'].
{ "login": "alexcoca", "id": 30216068, "node_id": "MDQ6VXNlcjMwMjE2MDY4", "avatar_url": "https://avatars.githubusercontent.com/u/30216068?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexcoca", "html_url": "https://github.com/alexcoca", "followers_url": "https://api.github.com/users/alexcoca/followers", "following_url": "https://api.github.com/users/alexcoca/following{/other_user}", "gists_url": "https://api.github.com/users/alexcoca/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexcoca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexcoca/subscriptions", "organizations_url": "https://api.github.com/users/alexcoca/orgs", "repos_url": "https://api.github.com/users/alexcoca/repos", "events_url": "https://api.github.com/users/alexcoca/events{/privacy}", "received_events_url": "https://api.github.com/users/alexcoca/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "I also have this issue, and once prompted, the training will be terminated directly. Have you resolved it?\r\n![image](https://github.com/huggingface/transformers/assets/136712931/bae68395-4845-4163-ac51-48709db670ac)\r\nWhen this happens, the training will be immediately terminated. What problem would it be? Thank you first。", "cc @muellerzr @pacman100 as the warning seems to be coming from trainer ", "I also get with [run_summarizaton.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/summarization/run_summarization.py) and `--model_name_or_path \"google/mt5-base\"`\r\n```\r\n.. missing keys in the checkpoint model loaded: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']\r\n```\r\nBut fine-tuning continues from the last checkpoint rather than crashing. However, `eval_loss` increases for the next checkpoint after restart, suggesting these weights are important and are really not saved/reloaded.", "Related to https://github.com/huggingface/transformers/issues/27293", "@muellerzr thanks for linking to the issue. But the solution mentioned there is for `accelerate`, and in this case I have a problem with checkpoints saved by Trainer.", "Facing the same issue for all T5 as well as RoBERTa models. Any solution yet? ", "@muellerzr and @pacman100 - it's slightly concerning that this warning still appears. Is there any understanding of what transformers release guarantees correct checkpoint saving & loading? I have (natively) used the library to implement my next research paper, but I don't know whether or not I can actually use any of the models given the warning on model loading? Let's chat and see how we can get to the bottom of this.", "@alexcoca can you give us a full clean reproducer please? That's the best way we can help. (Cc @Narsil)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,708
null
NONE
null
### System Info - `transformers` version: 4.35.2 - Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.27 - Python version: 3.10.11 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes (RTX3090) - Using distributed or parallel set-up in script? no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce. 1. Run any transformers example fine-tuning a T5 model (I am using `Salesforce/codet5p-220m`, but the issue can probably be reproduced with other T5 models, certainly FlanT5) 2. Stop the trainer 3. Restart the training using the `resume_from_checkpoint=True` CLI option and setting `output_dir` to be the checkpoint directory (i.e. where the `checkpoint-[step]` directories are created) 4. Observe the warning: [WARNING|trainer.py:2231] 2023-12-12 11:09:58,921 >> There were missing keys in the checkpoint model loaded: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']. ### Expected behavior Either there is no warning or the warning message tells the user whether the warning applies to them. My intuition here is that nothing is wrong: I am using `T5ForConditionalGeneration` out of the box (so no custom `lm_head`) and the encoder and decoder embeddings are tied (and hopefully loaded ?!). Is this a case of extending the warning to be more explicit? @younesbelkada
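For context on why the warning is likely benign, a quick check sketch showing that the "missing" keys all point at the shared embedding in a stock `T5ForConditionalGeneration` (using `t5-small` here purely as a stand-in; whether `lm_head` is tied depends on `config.tie_word_embeddings`):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

shared_ptr = model.shared.weight.data_ptr()
print(model.encoder.embed_tokens.weight.data_ptr() == shared_ptr)  # True: same underlying storage
print(model.decoder.embed_tokens.weight.data_ptr() == shared_ptr)  # True
print(model.lm_head.weight.data_ptr() == shared_ptr)  # True here because tie_word_embeddings=True
```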
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27972/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27971
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27971/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27971/comments
https://api.github.com/repos/huggingface/transformers/issues/27971/events
https://github.com/huggingface/transformers/pull/27971
2,037,529,925
PR_kwDOCUB6oc5hx_Eb
27,971
[`Whisper`] raise better errors
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
COLLABORATOR
null
Fixes #27893. For the new Cantonese language, Whisper needs to properly error out if the model does not support it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27971/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27971", "html_url": "https://github.com/huggingface/transformers/pull/27971", "diff_url": "https://github.com/huggingface/transformers/pull/27971.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27971.patch", "merged_at": 1702455181000 }
https://api.github.com/repos/huggingface/transformers/issues/27970
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27970/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27970/comments
https://api.github.com/repos/huggingface/transformers/issues/27970/events
https://github.com/huggingface/transformers/pull/27970
2,037,477,029
PR_kwDOCUB6oc5hxzeZ
27,970
[Trainer] move dataloader after the model wrapping
{ "login": "kashif", "id": 8100, "node_id": "MDQ6VXNlcjgxMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kashif", "html_url": "https://github.com/kashif", "followers_url": "https://api.github.com/users/kashif/followers", "following_url": "https://api.github.com/users/kashif/following{/other_user}", "gists_url": "https://api.github.com/users/kashif/gists{/gist_id}", "starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kashif/subscriptions", "organizations_url": "https://api.github.com/users/kashif/orgs", "repos_url": "https://api.github.com/users/kashif/repos", "events_url": "https://api.github.com/users/kashif/events{/privacy}", "received_events_url": "https://api.github.com/users/kashif/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27970). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
CONTRIBUTOR
null
# What does this PR do? Creates the dataloader after the model has been prepared.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27970/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27970", "html_url": "https://github.com/huggingface/transformers/pull/27970", "diff_url": "https://github.com/huggingface/transformers/pull/27970.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27970.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27969
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27969/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27969/comments
https://api.github.com/repos/huggingface/transformers/issues/27969/events
https://github.com/huggingface/transformers/pull/27969
2,037,458,927
PR_kwDOCUB6oc5hxvhs
27,969
Fix link in README.md of Image Captioning
{ "login": "saswatmeher", "id": 35535056, "node_id": "MDQ6VXNlcjM1NTM1MDU2", "avatar_url": "https://avatars.githubusercontent.com/u/35535056?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saswatmeher", "html_url": "https://github.com/saswatmeher", "followers_url": "https://api.github.com/users/saswatmeher/followers", "following_url": "https://api.github.com/users/saswatmeher/following{/other_user}", "gists_url": "https://api.github.com/users/saswatmeher/gists{/gist_id}", "starred_url": "https://api.github.com/users/saswatmeher/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saswatmeher/subscriptions", "organizations_url": "https://api.github.com/users/saswatmeher/orgs", "repos_url": "https://api.github.com/users/saswatmeher/repos", "events_url": "https://api.github.com/users/saswatmeher/events{/privacy}", "received_events_url": "https://api.github.com/users/saswatmeher/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
CONTRIBUTOR
null
Update the link for vision encoder decoder doc used by FlaxVisionEncoderDecoderModel link inside README.md of Image Captioning. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #27968 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @stevhliu and @MKhalusova <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27969/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27969", "html_url": "https://github.com/huggingface/transformers/pull/27969", "diff_url": "https://github.com/huggingface/transformers/pull/27969.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27969.patch", "merged_at": 1702386435000 }
https://api.github.com/repos/huggingface/transformers/issues/27967
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27967/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27967/comments
https://api.github.com/repos/huggingface/transformers/issues/27967/events
https://github.com/huggingface/transformers/issues/27967
2,037,358,828
I_kwDOCUB6oc55b6Ds
27,967
`device_map = "auto"` failed for LLaMA model on H800
{ "login": "ruikangliu", "id": 69446971, "node_id": "MDQ6VXNlcjY5NDQ2OTcx", "avatar_url": "https://avatars.githubusercontent.com/u/69446971?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruikangliu", "html_url": "https://github.com/ruikangliu", "followers_url": "https://api.github.com/users/ruikangliu/followers", "following_url": "https://api.github.com/users/ruikangliu/following{/other_user}", "gists_url": "https://api.github.com/users/ruikangliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ruikangliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruikangliu/subscriptions", "organizations_url": "https://api.github.com/users/ruikangliu/orgs", "repos_url": "https://api.github.com/users/ruikangliu/repos", "events_url": "https://api.github.com/users/ruikangliu/events{/privacy}", "received_events_url": "https://api.github.com/users/ruikangliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @younesbelkada ", "This may arise from hardware issues... Thanks for your time and attention.", "> This may arise from hardware issues... Thanks for your time and attention.\r\n\r\nhi, i have encountered the same issue with a4500 and a6000 gpu. may i ask if you have identified the cause of this problem later?" ]
1,702
1,703
1,702
NONE
null
### System Info When I use `device_map = "auto"` for LLaMA model on more than 3 H800 GPUs, errors pop up during model inference. But when I use fewer than 3 H800 GPUs, everything is OK. It seems to be something wrong with data transfer across devices on H800 GPUs. My transformers version is 4.36.0, cuda version is 11.8, torch version is 2.0.0. I also tried transformers 4.35.2, cuda 12.1, torch 2.1.1, which also failed. ``` Exception has occurred: RuntimeError (note: full exception trace is shown but execution is paused at: _run_module_as_main) CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 1193, in forward logits = logits.float() File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/generation/utils.py", line 2579, in greedy_search outputs = self( File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/generation/utils.py", line 1718, in generate return self.greedy_search( File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 271, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1046, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1147, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1140, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 208, in __call__ return super().__call__(text_inputs, **kwargs) File "/home/liuruikang/workspace/quant/ptq-lora/tmp.py", line 7, in <module> print(generator("More and more large language models are opensourced so Hugging Face has")) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/liuruikang/anaconda3/envs/ptqlora/lib/python3.8/runpy.py", line 194, in _run_module_as_main (Current frame) return _run_code(code, main_globals, None, RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. 
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` Here is the sample code for reproducing the error (more than 3 H800 GPUs needed): ``` import torch from transformers import pipeline checkpoint = "./modelzoo/llama/llama-7b" # path to llama ckpt generator = pipeline("text-generation", model=checkpoint, device_map="auto", torch_dtype=torch.float16) print(generator("More and more large language models are opensourced so Hugging Face has")) ``` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Here is the sample code for reproducing the error (more than 3 H800 GPUs needed): ``` import torch from transformers import pipeline checkpoint = "./modelzoo/llama/llama-7b" # path to llama ckpt generator = pipeline("text-generation", model=checkpoint, device_map="auto", torch_dtype=torch.float16) print(generator("More and more large language models are opensourced so Hugging Face has")) ``` ### Expected behavior `device_map = "auto"` failed for LLaMA model on more than 3 H800 GPUs during model inference
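A debugging sketch (not from the report; the 70GiB caps are an assumption about H800 memory, and this is not a confirmed fix) that surfaces the assert synchronously and restricts the automatic device map to two GPUs, to check whether the failure only appears once the model is sharded across three or more devices:

```python
import os

# Must be set before CUDA is initialised so the failing kernel is reported synchronously.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch
from transformers import pipeline

checkpoint = "./modelzoo/llama/llama-7b"  # same placeholder path as in the report

generator = pipeline(
    "text-generation",
    model=checkpoint,
    device_map="auto",
    torch_dtype=torch.float16,
    # Cap memory on two devices so "auto" only shards across GPUs 0 and 1.
    model_kwargs={"max_memory": {0: "70GiB", 1: "70GiB"}},
)
print(generator("More and more large language models are opensourced so Hugging Face has"))
```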
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27967/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27966
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27966/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27966/comments
https://api.github.com/repos/huggingface/transformers/issues/27966/events
https://github.com/huggingface/transformers/issues/27966
2,037,353,627
I_kwDOCUB6oc55b4yb
27,966
Fine-tuned Mistral inference issue for >4k context length
{ "login": "oooodoori", "id": 153339467, "node_id": "U_kgDOCSPGSw", "avatar_url": "https://avatars.githubusercontent.com/u/153339467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/oooodoori", "html_url": "https://github.com/oooodoori", "followers_url": "https://api.github.com/users/oooodoori/followers", "following_url": "https://api.github.com/users/oooodoori/following{/other_user}", "gists_url": "https://api.github.com/users/oooodoori/gists{/gist_id}", "starred_url": "https://api.github.com/users/oooodoori/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/oooodoori/subscriptions", "organizations_url": "https://api.github.com/users/oooodoori/orgs", "repos_url": "https://api.github.com/users/oooodoori/repos", "events_url": "https://api.github.com/users/oooodoori/events{/privacy}", "received_events_url": "https://api.github.com/users/oooodoori/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @oooodoori, thanks for raising this issue! \r\n\r\nSo that we can best help, could you provide some more details about the issue? e.g.: \r\n* Have you tried this on 4.35 as well? How about the recent stable 4.36 release?\r\n* Could you provide an example of a good versus bad output? \r\n* Is the only difference between getting good and bad outputs the transformers version? \r\n* Could you share a code snippet we can run to reproduce running the inference? Do you see this behaviour on pretrained weights as well e.g. [mistralai/Mistral-7B-v0.1](https://huggingface.co./mistralai/Mistral-7B-v0.1) or only with your own finetuned model? \r\n* Can you provide more details about how the model was trained e.g. a training script to reproduce including the lora configuration? ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
**System Info** - `transformers` version: 4.36.0-dev (main branch) - Huggingface_hub version: 0.19.4 - PyTorch version: 2.1.0 - Using GPU in script?: yes We fine-tuned mistralai/Mistral-7B-Instruct-v0.1 using LoRA on some 8k-context-length data. Inference was fine with transformers 4.34.0, but after updating the version, generation became irrelevant repetition for token lengths > 4096. We were able to get around this by disabling Flash Attention 2, but the overall model performance suffered. It seems to be a problem related to the 4D attention mask implementation in transformers 4.35+. This only happens when the token length exceeds 4k. Any ideas what might be wrong?
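A sketch (the checkpoint path and prompt are placeholders, and flash-attn is assumed to be installed) of one way to isolate the regression on transformers 4.36+: load the same fine-tuned weights under each attention backend and compare generations on the same >4k-token input:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/finetuned-mistral"      # placeholder for the fine-tuned checkpoint
long_prompt = "word " * 5000                # placeholder: any input longer than 4096 tokens

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(long_prompt, return_tensors="pt")

for impl in ("flash_attention_2", "sdpa", "eager"):
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        attn_implementation=impl,
        device_map="auto",
    )
    out = model.generate(**inputs.to(model.device), max_new_tokens=64)
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    print(impl, tokenizer.decode(new_tokens, skip_special_tokens=True))
```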
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27966/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27966/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27968
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27968/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27968/comments
https://api.github.com/repos/huggingface/transformers/issues/27968/events
https://github.com/huggingface/transformers/issues/27968
2,037,377,705
I_kwDOCUB6oc55b-qp
27,968
Link is invalid in “examples/flax/image-captioning/README.md”
{ "login": "wplf", "id": 95006218, "node_id": "U_kgDOBamuCg", "avatar_url": "https://avatars.githubusercontent.com/u/95006218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wplf", "html_url": "https://github.com/wplf", "followers_url": "https://api.github.com/users/wplf/followers", "following_url": "https://api.github.com/users/wplf/following{/other_user}", "gists_url": "https://api.github.com/users/wplf/gists{/gist_id}", "starred_url": "https://api.github.com/users/wplf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wplf/subscriptions", "organizations_url": "https://api.github.com/users/wplf/orgs", "repos_url": "https://api.github.com/users/wplf/repos", "events_url": "https://api.github.com/users/wplf/events{/privacy}", "received_events_url": "https://api.github.com/users/wplf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
NONE
null
**This repository is focused on the Hub experience and documentation. If you're facing an issue with a specific library, please open an issue in the corresponding GitHub repo. If you're facing an issue with a specific model or dataset, please open an issue in the corresponding HF repo.** **Bug description.** A clear and concise description of what the problem is. Ex. Clicking this button is not working when [...] The hyperlink behind 【FlaxVisionEncoderDecoderModel】 is not working. ![image](https://github.com/huggingface/hub-docs/assets/95006218/4459b545-7c3c-4b6f-9b53-7717ba4fbb51) **Describe the expected behaviour** A clear and concise description of what you want to happen. **Additional context** Add any other relevant context or screenshots here. Please share details such as browser when appropriate.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27968/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27964
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27964/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27964/comments
https://api.github.com/repos/huggingface/transformers/issues/27964/events
https://github.com/huggingface/transformers/issues/27964
2,037,255,672
I_kwDOCUB6oc55bg34
27,964
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): cannot import name 'AutoModelForImageToImage' from 'transformers.models.auto.modeling_auto' (/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py)
{ "login": "LucaYoy", "id": 40484649, "node_id": "MDQ6VXNlcjQwNDg0NjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/40484649?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LucaYoy", "html_url": "https://github.com/LucaYoy", "followers_url": "https://api.github.com/users/LucaYoy/followers", "following_url": "https://api.github.com/users/LucaYoy/following{/other_user}", "gists_url": "https://api.github.com/users/LucaYoy/gists{/gist_id}", "starred_url": "https://api.github.com/users/LucaYoy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LucaYoy/subscriptions", "organizations_url": "https://api.github.com/users/LucaYoy/orgs", "repos_url": "https://api.github.com/users/LucaYoy/repos", "events_url": "https://api.github.com/users/LucaYoy/events{/privacy}", "received_events_url": "https://api.github.com/users/LucaYoy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @LucaYoy, thanks for raising this issue! \r\n\r\nWhich version of tensorflow are you running in your environment? ", "@amyeroberts im using 2.13.0", "It seems this is an incompatibility between numpy and tensorflow or tensorflow-related libraries. You'll need to find compatible versions to get things to run. There's a related discussion on the [tensorflow forum](https://discuss.tensorflow.org/t/attributeerror-module-numpy-has-no-attribute-typedict/14929) which people suggest upgrading h5py: `python3 -m pip install --upgrade h5py`", "We're still getting this issue, specially with `mistral` and derivatives models... upgrade all possible libraries didn't make any changes:\r\n```\r\nRuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):\r\n<lambda>() takes 0 positional arguments but 1 was given\r\n```\r\nHope you can provide the `requirements.txt` of the working versions for each of the underling dependencies...", "Hi @bitsnaps, without knowing which versions of the libraries you're running and the full error message, we won't be able to help you. You can find our list of dependencies and any specific versions under [our setup.py file](https://github.com/huggingface/transformers/blob/main/setup.py).\r\n\r\nDid you try upgrading h5py, [as suggested in the forum posts](https://discuss.tensorflow.org/t/attributeerror-module-numpy-has-no-attribute-typedict/14929)? ", "Hi @amyeroberts, thanks for all the details, after digging a little more deeper into this issue in that particular case, I found that it has to do with an old [colab issue](https://github.com/googlecolab/colabtools/issues/3409) but only when installing langchain first, I had to load the model before install langchain, and yep, I've already tried to upgrade many packages including `h5py`.", "@bitsnaps Thanks for the update! " ]
1,702
1,705
1,703
NONE
null
Hi I also have a similar issue to #23340 but this type numpy is the culprit ```cannot import name 'AutoModelForImageToImage' from 'transformers.models.auto.modeling_auto' (/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py) ``` transformers-cli env gives: ``` To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2023-12-11 13:51:15.375941: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT Traceback (most recent call last): File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1382, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/opt/conda/envs/prototype/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/models/auto/image_processing_auto.py", line 26, in <module> from ...image_processing_utils import ImageProcessingMixin File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/image_processing_utils.py", line 28, in <module> from .image_transforms import center_crop, normalize, rescale File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/image_transforms.py", line 47, in <module> import tensorflow as tf File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/__init__.py", line 38, in <module> from tensorflow.python.tools import module_util as _module_util File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 45, in <module> from tensorflow.python.feature_column import feature_column_lib as feature_column File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/feature_column/feature_column_lib.py", line 18, in <module> from tensorflow.python.feature_column.feature_column import * File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/feature_column/feature_column.py", line 143, in <module> from tensorflow.python.layers import base File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/layers/base.py", line 16, in <module> from tensorflow.python.keras.legacy_tf_layers import base File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/__init__.py", line 25, in <module> from tensorflow.python.keras import models File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/models.py", line 22, in <module> from tensorflow.python.keras.engine import functional File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/engine/functional.py", line 32, in <module> from tensorflow.python.keras.engine import training as training_lib File "/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 53, in <module> from tensorflow.python.keras.saving import hdf5_format File 
"/opt/conda/envs/prototype/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 37, in <module> import h5py File "/opt/conda/envs/prototype/lib/python3.8/site-packages/h5py/__init__.py", line 46, in <module> from ._conv import register_converters as _register_converters File "h5py/h5t.pxd", line 14, in init h5py._conv File "h5py/h5t.pyx", line 293, in init h5py.h5t File "/opt/conda/envs/prototype/lib/python3.8/site-packages/numpy/__init__.py", line 320, in __getattr__ raise AttributeError("module {!r} has no attribute " AttributeError: module 'numpy' has no attribute 'typeDict' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/conda/envs/prototype/bin/transformers-cli", line 5, in <module> from transformers.commands.transformers_cli import main File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 24, in <module> from .pt_to_tf import PTtoTFCommand File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/commands/pt_to_tf.py", line 24, in <module> from .. import ( File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1373, in __getattr__ value = getattr(module, name) File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1372, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/opt/conda/envs/prototype/lib/python3.8/site-packages/transformers/utils/import_utils.py", line 1384, in _get_module raise RuntimeError( RuntimeError: Failed to import transformers.models.auto.image_processing_auto because of the following error (look up to see its traceback): module 'numpy' has no attribute 'typeDict ``` Im using: Numpy 1.24.4 Torch 2.1.1+cu118 Transformers 4.36.0
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27964/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27963
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27963/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27963/comments
https://api.github.com/repos/huggingface/transformers/issues/27963/events
https://github.com/huggingface/transformers/issues/27963
2,037,028,845
I_kwDOCUB6oc55apft
27,963
(Llama-2) TensorParallelPreTrainedModel does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention
{ "login": "VasilGeorgiev39", "id": 149842188, "node_id": "U_kgDOCO5pDA", "avatar_url": "https://avatars.githubusercontent.com/u/149842188?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VasilGeorgiev39", "html_url": "https://github.com/VasilGeorgiev39", "followers_url": "https://api.github.com/users/VasilGeorgiev39/followers", "following_url": "https://api.github.com/users/VasilGeorgiev39/following{/other_user}", "gists_url": "https://api.github.com/users/VasilGeorgiev39/gists{/gist_id}", "starred_url": "https://api.github.com/users/VasilGeorgiev39/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VasilGeorgiev39/subscriptions", "organizations_url": "https://api.github.com/users/VasilGeorgiev39/orgs", "repos_url": "https://api.github.com/users/VasilGeorgiev39/repos", "events_url": "https://api.github.com/users/VasilGeorgiev39/events{/privacy}", "received_events_url": "https://api.github.com/users/VasilGeorgiev39/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @VasilGeorgiev39, thanks for raising an issue! \r\n\r\nSo that we can best help you, please make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and provide information about the running environment and full details of the error encountered, including traceback. \r\n\r\nNote than widespread support for SDPA was added in the [recent v4.26 release](https://github.com/huggingface/transformers/releases/tag/v4.36.0). I would first make sure to install the latest version of transformers and see if this resolves the issue. \r\n\r\n`tp.tensor_parallel` is a third party to the transformers repo. Providing the full traceback and error information will enable us to figure out which library the error is coming from. ", "Hi @amyeroberts, apologies for not following the format. I was just following the link that was displayed when [throwing the exception](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L1466) I have now opened a more detailed [issue](https://github.com/huggingface/transformers/issues/28003) using the template so I will close this one." ]
1,702
1,702
1,702
NONE
null
```python import transformers import tensor_parallel as tp tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf") model = transformers.AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf") modelp = tp.tensor_parallel(model) #error ``` As is the example from https://github.com/BlackSamorez/tensor_parallel
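A possible workaround sketch (an assumption — whether the third-party `tensor_parallel` wrapper then works end-to-end is untested here): on transformers 4.36+ the attention backend can be pinned to the eager implementation at load time so the SDPA-only check is never hit:

```python
import transformers
import tensor_parallel as tp

model_id = "meta-llama/Llama-2-13b-chat-hf"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)

# Ask for the eager attention path instead of torch.nn.functional.scaled_dot_product_attention.
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="eager",
)
model_tp = tp.tensor_parallel(model)
```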
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27963/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27962
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27962/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27962/comments
https://api.github.com/repos/huggingface/transformers/issues/27962/events
https://github.com/huggingface/transformers/issues/27962
2,036,853,873
I_kwDOCUB6oc55Z-xx
27,962
'IterableDatasetShard' object has no attribute '_epoch'
{ "login": "johnchienbronci", "id": 27708347, "node_id": "MDQ6VXNlcjI3NzA4MzQ3", "avatar_url": "https://avatars.githubusercontent.com/u/27708347?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnchienbronci", "html_url": "https://github.com/johnchienbronci", "followers_url": "https://api.github.com/users/johnchienbronci/followers", "following_url": "https://api.github.com/users/johnchienbronci/following{/other_user}", "gists_url": "https://api.github.com/users/johnchienbronci/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnchienbronci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnchienbronci/subscriptions", "organizations_url": "https://api.github.com/users/johnchienbronci/orgs", "repos_url": "https://api.github.com/users/johnchienbronci/repos", "events_url": "https://api.github.com/users/johnchienbronci/events{/privacy}", "received_events_url": "https://api.github.com/users/johnchienbronci/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @johnchienbronci, thanks for raising this issue! \r\n\r\nThe examples in the research projects, e.g. run_speech_recognition_ctc_streaming.py, aren't actively maintained. Have you observed this error anywhere else when using the library? ", "I only found the problem with this script, because `ShuffleCallback` only appears in run_speech_recognition_ctc_streaming.py", "@johnchienbronci OK. The `ShuffleCallback` is just a class in this script. To get the research example to work, you can modify the code to use `epoch` instead of `_epoch` in the call, which should resolve this.", "@amyeroberts thanks, my current solution is check condition of multiple or one gpu\r\nI found train_dataloader.dataset object is not same:\r\n1 gpu:is `IterableDataset` (datasets.iterable_dataset.IterableDataset )\r\nmultiple gpu: `IterableDatasetShard` (accelerate.data_loader.IterableDatasetShard)\r\n\r\n```\r\nclass ShuffleCallback(TrainerCallback):\r\n def on_epoch_begin(self, args, state, control, train_dataloader, **kwargs):\r\n if isinstance(train_dataloader.dataset, IterableDatasetShard):\r\n pass # set_epoch() is handled by the Trainer\r\n elif isinstance(train_dataloader.dataset, IterableDataset):\r\n # train_dataloader.dataset.set_epoch(train_dataloader.dataset._epoch + 1)\r\n if int(os.environ[\"WORLD_SIZE\"]) == 1: \r\n train_dataloader.dataset.set_epoch(train_dataloader.dataset._epoch + 1)\r\n else:\r\n train_dataloader.dataset.set_epoch(train_dataloader.dataset.epoch + 1)\r\n```", "@johnchienbronci Thanks for sharing! ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
NONE
null
With transformers >= 4.35.2, fine-tuning wav2vec2 CTC on multiple GPUs in streaming mode raises an error. The error only occurs on multiple GPUs (no error occurs on a single GPU, or with transformers version 4.30.2). code: ``` class ShuffleCallback(TrainerCallback): def on_epoch_begin(self, args, state, control, train_dataloader, **kwargs): if isinstance(train_dataloader.dataset, IterableDatasetShard): pass # set_epoch() is handled by the Trainer elif isinstance(train_dataloader.dataset, IterableDataset): train_dataloader.dataset.set_epoch(train_dataloader.dataset._epoch + 1) ``` error message: ``` Traceback (most recent call last): File "/workspace/wav2vec2/speech_recognition/run_speech_recognition_ctc_streaming.py", line 980, in <module> main() File "/workspace/wav2vec2/speech_recognition/run_speech_recognition_ctc_streaming.py", line 929, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1537, in train return inner_training_loop( File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer.py", line 1807, in _inner_training_loop self.control = self.callback_handler.on_epoch_begin(args, self.state, self.control) File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 377, in on_epoch_begin return self.call_event("on_epoch_begin", args, state, control) File "/home/ubuntu/.local/lib/python3.10/site-packages/transformers/trainer_callback.py", line 414, in call_event result = getattr(callback, event)( File "/workspace/wav2vec2/speech_recognition/run_speech_recognition_ctc_streaming.py", line 894, in on_epoch_begin train_dataloader.dataset.set_epoch(train_dataloader.dataset._epoch + 1) AttributeError: 'IterableDatasetShard' object has no attribute '_epoch'. Did you mean: 'epoch'? ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27962/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27961
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27961/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27961/comments
https://api.github.com/repos/huggingface/transformers/issues/27961/events
https://github.com/huggingface/transformers/issues/27961
2,036,767,263
I_kwDOCUB6oc55Zpof
27,961
CLIPTokenizer (and others based on the same telephoned OpenAI code) incorrectly tokenizes 1138 out of 34483 words that have an exact match in vocab
{ "login": "doctorpangloss", "id": 2229300, "node_id": "MDQ6VXNlcjIyMjkzMDA=", "avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4", "gravatar_id": "", "url": "https://api.github.com/users/doctorpangloss", "html_url": "https://github.com/doctorpangloss", "followers_url": "https://api.github.com/users/doctorpangloss/followers", "following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}", "gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}", "starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions", "organizations_url": "https://api.github.com/users/doctorpangloss/orgs", "repos_url": "https://api.github.com/users/doctorpangloss/repos", "events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}", "received_events_url": "https://api.github.com/users/doctorpangloss/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "I'll have a look", "Few things that seem wrong to me:\r\n- why should each whole word be tokenized as a single word? This is not how BPE works. \r\n- if the original tokenizer works this way, there is no way for us to change this as we try to reproduce the results. \r\n- What issue did this actually have on your training? CLIP powers all the stable diffusion models which are very very strong ", "> why should each whole word be tokenized as a single word? This is not how BPE works.\r\n\r\nI don't know. Both Hugging Face's and OpenAI's implementations in the respective repos are buggy.\r\n\r\nThe vocab file itself was trained, and it was created using the BPE implementation (or big picture tokenization implementation) that was actually correct and is not in the OpenAI repo. It would have only emitted those vocab entries if they were used.\r\n \r\n> if the original tokenizer works this way, there is no way for us to change this as we try to reproduce the results.\r\n\r\nThe original tokenizer doesn't work this way. The OpenAI clip repo's tokenizer isn't the one they used to tokenize the training set their CLIP was trained on. We don't know how the original tokenizer works, but probably not this one with bugs and no tests.\r\n\r\n> What issue did this actually have on your training? CLIP powers all the stable diffusion models which are very very strong\r\n\r\nLoRAs on the text encoder for Stability's models will perform worse if the tokenization of captions used for the enthusiast's training doesn't match the tokenization used by Stability.\r\n\r\n", "Hey, I hope the following threads help you. I was not around when the tokenizer was published, but we did have such feedback for the past years so highly doubt there's a real issue! \r\n- https://github.com/openai/gpt-2/issues/80 \r\n- https://discuss.huggingface.co/t/bpe-tokenizers-and-spaces-before-words/475/4\r\n", "Thanks for the links.\r\n\r\nDo you think this [Python code in OpenAI/clip](https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py) is the one that was used to tokenize the dataset to train CLIP?\r\n\r\nHere are the reasons it could be the code that was used to train CLIP:\r\n\r\n - It is published in their repository.\r\n - It was used for their inference code.\r\n\r\nHere are some reasons, I speculate, it (OpenAI's code) was not used to train CLIP:\r\n\r\n - This ordinary Python code is extremely slow.\r\n - It's so slow, it's intractably slow to use it to tokenize any of the datasets this was trained on.\r\n - People who do this for real use an accelerated implementation that works with the datasets in their disk format.\r\n - There are no tests in the repo.\r\n - There were no replies or updates to the repo.\r\n - Usually code that people use, people maintain. It is essentially unmaintained.\r\n\r\nYou could go and implement \"BPE\" from the paper. Maybe you'll publish tests for it, and maintain it. Maybe there will be users of it, from OpenAI, who will write bugs from when they use the tokenization on real data.\r\n\r\nHowever, the opposite is happening, it's amateurs who are using this because it's free. It's people who don't deal with large datasets or whose \"KPIs\" don't matter, so there aren't outcomes that matter, so they run the code and they get an output and none of it is material. None of it is materially measured. 
Then you are put in the uncomfortable position, supposing you are doing the default stance that the issue is not a real issue, \"How do I delicately tell someone that they don't know what they are talking about?\" And then you yourself might be unsure, you're a reasonable person, you didn't invent CLIP and you don't talk to the guy who did, nor to the PMs at OpenAI, or anyone.\r\n\r\nSo maybe find the real implementation. It isn't really material if this one correctly implements something material.", "Regarding the performances, there's https://github.com/instant-labs/instant-clip-tokenizer which seems to be super efficient. I'll try to see what can be taken from it.\r\nAlso Openai now uses [the `tiktoken` library ](https://github.com/openai/tiktoken), but it was not around at the time. \r\n\r\nUnfortunately I am not super sure what are the steps you suggest, but the slow version of the BPE (which is in transformers) is known to be slow, which is why we have the Fast version (automatically used with AutoTokenizer). ", "> Unfortunately I am not super sure what are the steps you suggest,\r\n\r\nSome options:\r\n\r\n 1. Contact someone at OpenAI to share the actual implementation they used to train CLIP, which will never happen. Not a super satisfactory answer, no. I agree that writing tokenization libraries is a grind.\r\n\r\n 2. Compare to their likely authoritative implementation. I don't know what the authoritative implementation of BPE is. Probably `sentencepiece`. Maybe OpenAI forked it to make the UTF-8 / Unicode adjustment they do in their CLIP repo. Based on my reading of https://github.com/openai/CLIP/blob/a1d071733d7111c9c014f024669f959182114e33/clip/simple_tokenizer.py#L76, the BPE scores are implied by the order of texts in the merges file. It's not clear how this is done with the `vocab.json` dicts that Hugging Face ships. It is possible to convert the `merges` file into a `.model` proto that `sentencepiece` uses, assigning scores based on the order of text pieces in the `merges`. That is still iffy, but maybe that will get closer to the implementation used for training.\r\n\r\n 3. Add a fuzzer and tests to this library to discover all the bugs.\r\n\r\nEither way, the results of Hugging Face's implementation versus OpenAI's are different.\r\n\r\n# why this matters\r\n\r\nIf the tokenizations differ slightly, all sorts of things go wrong:\r\n\r\n - end users will observe that `clip` may not be familiar with text `X` that tokenizes into `[x, y, z]` using Hugging Face's method, but when using the method `clip` was trained on, tokenizes to `[a, b]`, and hence is actually well trained. My bet is that this happens around 3.5% of the time.\r\n - however, if you are trying to do the community-authored LoRA text encoder image generation training pipelines, you choose \"rare\" tokens, which means you are far disproportionately selecting from the texts that appear in that 3.5%. So actually, users are probably observe this in the context of LoRA training all the time, and perhaps that's why LoRA training with the text encoder in SDXL has been going so poorly, because the mismatch between the tokenizer implementation used by the community and the implementation used to train (by OpenCLIP) is now very large.\r\n\r\nI think you already know all of this. I can't tell you directly if the issue I reported is real. I don't know, I am not an expert. But I am observing buggy behavior nonetheless. We can ticket it as something else. 
", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> Unfortunately I am not super sure what are the steps you suggest, but the slow version of the BPE (which is in transformers) is known to be slow, which is why we have the Fast version (automatically used with AutoTokenizer).\r\n\r\nI'm bumping the issue because I'm just some guy. If someone, like you, with a @huggingface email account, messages the Stability team for the actual, bonafide tokenizer implementation they used to create the tokenized captions in their proprietary dataset, you will have solved this issue. You are more experienced in this than I am, so you can imagine the impacts, the mistakes / flaws when the tokenizer used for the training differs from the one used for fine tuning.\r\n\r\nI am essentially saying that it doesn't matter what the bugs are, so long as the behavior is exactly consistent with the training dataset, and that I have found evidence that this BPE vocab file was adapted from the actual implementation, it doesn't even have the priorities, so it's almost certainly too buggy to match the behavior of training." ]
1,702
1,705
null
NONE
null
### System Info - `transformers` version: 4.36.0 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.0+cu118 (False) - Tensorflow version (GPU?): 2.14.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu) - Jax version: 0.4.20 - JaxLib version: 0.4.20 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Visit https://colab.research.google.com/drive/18I0mYxTV-UCDjKWTxfuaR3P6o00w18Q9?usp=sharing for a reproduction. ``` from transformers import CLIPProcessor tokenizer = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32").tokenizer # match whole words whole_words = {k: v for k, v in tokenizer.get_vocab().items() if k.endswith("</w>")} to_trim = len("<w/>") missed = 0 for token_str, token_int in whole_words.items(): tokenized = tokenizer.tokenize(token_str[:-to_trim]) if len(tokenized) != 1: missed += 1 print(f"transformers {missed} words out of {len(whole_words)} incorrectly tokenized ({missed/len(whole_words)*100})%") ``` this prints `transformers 1138 words out of 34483 incorrectly tokenized (3.3001768987617086)%` I see that everyone copied OpenAI's buggy tokenization code. Besides this issue there is also https://github.com/openai/CLIP/issues/343. The code in that repository was obviously not used for training, so this could explain a lot of misses / poor performance in CLIP based models. ### Expected behavior tokenization of a word that exactly matches an entry in the vocab file should return exactly 1 token
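An extension of the script above (a sketch added here, not part of the report) that runs the same whole-word check against both the slow Python tokenizer and the Rust-backed fast tokenizer, to see whether the mismatch is specific to one implementation:

```python
from transformers import CLIPTokenizer, CLIPTokenizerFast

model_id = "openai/clip-vit-base-patch32"
slow = CLIPTokenizer.from_pretrained(model_id)
fast = CLIPTokenizerFast.from_pretrained(model_id)

suffix = "</w>"
whole_words = [w[: -len(suffix)] for w in slow.get_vocab() if w.endswith(suffix)]

def misses(tok):
    # A whole-word vocab entry should come back as exactly one token.
    return sum(1 for w in whole_words if len(tok.tokenize(w)) != 1)

print("slow misses:", misses(slow), "fast misses:", misses(fast), "out of", len(whole_words))
```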
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27961/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27961/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27960
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27960/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27960/comments
https://api.github.com/repos/huggingface/transformers/issues/27960/events
https://github.com/huggingface/transformers/pull/27960
2,036,751,270
PR_kwDOCUB6oc5hvXto
27,960
Auto model time series
{ "login": "wgifford", "id": 79663411, "node_id": "MDQ6VXNlcjc5NjYzNDEx", "avatar_url": "https://avatars.githubusercontent.com/u/79663411?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wgifford", "html_url": "https://github.com/wgifford", "followers_url": "https://api.github.com/users/wgifford/followers", "following_url": "https://api.github.com/users/wgifford/following{/other_user}", "gists_url": "https://api.github.com/users/wgifford/gists{/gist_id}", "starred_url": "https://api.github.com/users/wgifford/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wgifford/subscriptions", "organizations_url": "https://api.github.com/users/wgifford/orgs", "repos_url": "https://api.github.com/users/wgifford/repos", "events_url": "https://api.github.com/users/wgifford/events{/privacy}", "received_events_url": "https://api.github.com/users/wgifford/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @wgifford, thanks for opening this PR! \r\n\r\nAutoModelForXxx APIs are intended for crossloading of models with the same inputs and outputs for a specific task. As there's just two (new) models at the moment, I suggest waiting until we know what the standardised API should be for the time series case. ", "Hi @amyeroberts. The intent here was to make it a bit easier to work with a collection of serialized models where we may not know exactly which model class form which it originates. For example, \r\n```\r\nfrom transformers import AutoModelForTimeSeriesPrediction\r\nsaved_model = \"ibm/patchtsmixer-etth1-forecasting\"\r\nmodel=AutoModelForTimeSeriesPrediction.from_pretrained(saved_model)\r\n```\r\nwhich works when `saved_model` is not known in advance.\r\n\r\nOne thought I had was that we might enable this for the three other time series models which support the \"prediction\" task.", "@wgifford - I understand. Knowing which model is saved out is really a handling issue on the user's side and one that can be resolved by looking at the model configs. \r\n\r\nNevertheless, if this is extended to 3 other models then there's more reason for an auto class for the prediction task. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,705
1,705
CONTRIBUTOR
null
# What does this PR do? Adds auto model capability for PatchTSMixer and PatchTST models. This includes support for: - AutoModelForTimeSeriesClassification - AutoModelForTimeSeriesPrediction - AutoModelForTimeSeriesRegression @kashif
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27960/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27960/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27960", "html_url": "https://github.com/huggingface/transformers/pull/27960", "diff_url": "https://github.com/huggingface/transformers/pull/27960.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27960.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/27959
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27959/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27959/comments
https://api.github.com/repos/huggingface/transformers/issues/27959/events
https://github.com/huggingface/transformers/issues/27959
2,036,723,024
I_kwDOCUB6oc55Ze1Q
27,959
KeyError: 'mistral'
{ "login": "brian-bould", "id": 144232955, "node_id": "U_kgDOCJjR-w", "avatar_url": "https://avatars.githubusercontent.com/u/144232955?v=4", "gravatar_id": "", "url": "https://api.github.com/users/brian-bould", "html_url": "https://github.com/brian-bould", "followers_url": "https://api.github.com/users/brian-bould/followers", "following_url": "https://api.github.com/users/brian-bould/following{/other_user}", "gists_url": "https://api.github.com/users/brian-bould/gists{/gist_id}", "starred_url": "https://api.github.com/users/brian-bould/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/brian-bould/subscriptions", "organizations_url": "https://api.github.com/users/brian-bould/orgs", "repos_url": "https://api.github.com/users/brian-bould/repos", "events_url": "https://api.github.com/users/brian-bould/events{/privacy}", "received_events_url": "https://api.github.com/users/brian-bould/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Would it be possible for you to provide the complete code for your reference kindly?", "this means you aren't using latest transformers version\r\n", "Hi @brian-bould, thanks for raising this issue! \r\n\r\nAs @ehartford mentions, this is most likely because the transformers version in your environment does not contain the recent mixtral release. Please make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github/ISSUE_TEMPLATE/bug-report.yml) and fill-out the running environment, including the transformers version. \r\n\r\nThis error is unfortunately not super clear. @Rocketknight1 has just merge in a commit - #27976 - which should make the error messages when this occurs clearer. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "I still got the same problem using the last transformers version\r\n", "Hi @kapscudi, that's very unusual! The latest version of `transformers` should recognize the architecture correctly. Can you double-check the version using `import transformers` followed by `transformers.__version__` in `python`? It should be at least `4.36.2`.", "I am currently facing the same error when I am loading --model TheBloke_Mistral-7B-OpenOrca-GPTQ --loader ExLlamav2_HF : The current version of `transformers` is 4.37.2 in my environment.\r\n\r\nError: \r\n```\r\nStarted Ooba Booga text generation web UI.\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: 2024-02-01 18:10:22 INFO:Loading settings from settings.yaml...\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: 2024-02-01 18:10:22 INFO:Loading TheBloke_Mistral-7B-OpenOrca-GPTQ...\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: 2024-02-01 18:10:22 INFO:The AutoGPTQ params are: {'model_basename': 'model', 'device': 'cuda:0', 'use_triton': False, 'inject_fused_attention': True, 'inject_fused_mlp': True, 'use_safetensors': True, 'trust_remote_code': False, 'max_memory': None, 'quantize_config': None}\r\nTraceback (most recent call last):\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: File \"/home/kdubey/text-generation-webui/server.py\", line 1008, in <module>\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: shared.model, shared.tokenizer = load_model(shared.model_name)\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: File \"/home/kdubey/text-generation-webui/modules/models.py\", line 66, in load_model\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: output = load_func_map[loader](model_name)\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: File \"/home/kdubey/text-generation-webui/modules/models.py\", line 272, in AutoGPTQ_loader\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: return modules.AutoGPTQ_loader.load_quantized(model_name)\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: File \"/home/kdubey/text-generation-webui/modules/AutoGPTQ_loader.py\", line 55, in load_quantized\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: model = AutoGPTQForCausalLM.from_quantized(path_to_model, **params)\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: File \"/opt/conda/envs/textgen/lib/python3.10/site-packages/auto_gptq/modeling/auto.py\", line 79, in from_quantized\r\n model_type = check_and_get_model_type(save_dir or model_name_or_path, trust_remote_code)\r\nFeb 01 18:10:24 
ooba-booga-web-ui conda[14316]: File \"/opt/conda/envs/textgen/lib/python3.10/site-packages/auto_gptq/modeling/_utils.py\", line 123, in check_and_get_model_type\r\nconfig = AutoConfig.from_pretrained(model_dir, trust_remote_code=trust_remote_code)\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: File \"/opt/conda/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py\", line 957, in from_pretrained\r\n config_class = CONFIG_MAPPING[config_dict[\"model_type\"]]\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: File \"/opt/conda/envs/textgen/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py\", line 671, in __getitem__\r\n raise KeyError(key)\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: KeyError: 'mistral'\r\nFeb 01 18:10:24 ooba-booga-web-ui conda[14316]: ERROR conda.cli.main_run:execute(49): `conda run python /home/kdubey/text-generation-webui/server.py --listen --listen-port 80 --model TheBloke_Mistral-7B-OpenOrca-GPTQ --loader ExLlamav2_HF --gpu-split 8,10 --api` failed. (See above for error)\r\nbin /opt/conda/envs/textgen/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda113_nocublaslt.so\r\nooba_booga.service: Main process exited, code=exited, status=1/FAILURE\r\nFeb 01 18:10:24 ooba-booga-web-ui systemd[1]: ooba_booga.service: Failed with result 'exit-code'.\r\n```", "If you are using ooba-booga-web-ui then the issue should probably be opened on that repo 😉 " ]
1,702
1,706
null
NONE
null
### System Info System Info M2 ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction running on pinokio. loading mistralai_Mixtral-8x7B-v0.1 model. error: Traceback (most recent call last): File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/modules/ui_model_menu.py", line 209, in load_model_wrapper shared.model, shared.tokenizer = load_model(shared.model_name, loader) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/modules/models.py", line 88, in load_model output = load_func_map[loader](model_name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/modules/models.py", line 146, in huggingface_loader config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=params['trust_remote_code']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1064, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] ~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/bafsr/pinokio/api/oobabooga.pinokio.git/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 761, in __getitem__ raise KeyError(key) KeyError: 'mixtral' ### Expected behavior run the model
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27959/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27959/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27958
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27958/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27958/comments
https://api.github.com/repos/huggingface/transformers/issues/27958/events
https://github.com/huggingface/transformers/pull/27958
2,036,665,751
PR_kwDOCUB6oc5hvE1b
27,958
[Doc] Spanish translation of glossary.md
{ "login": "aaronjimv", "id": 67152883, "node_id": "MDQ6VXNlcjY3MTUyODgz", "avatar_url": "https://avatars.githubusercontent.com/u/67152883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aaronjimv", "html_url": "https://github.com/aaronjimv", "followers_url": "https://api.github.com/users/aaronjimv/followers", "following_url": "https://api.github.com/users/aaronjimv/following{/other_user}", "gists_url": "https://api.github.com/users/aaronjimv/gists{/gist_id}", "starred_url": "https://api.github.com/users/aaronjimv/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aaronjimv/subscriptions", "organizations_url": "https://api.github.com/users/aaronjimv/orgs", "repos_url": "https://api.github.com/users/aaronjimv/repos", "events_url": "https://api.github.com/users/aaronjimv/events{/privacy}", "received_events_url": "https://api.github.com/users/aaronjimv/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello, there are a lot of technical concepts so I am open to any feedback.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27958). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Thanks @stevhliu and @osanseviero. I appreciate it 🤗" ]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add the Spanish version of `glossary.md` to `transformers/docs/source/es` Fix some typos in `en/glossary.md` Fix `TensorParallel` link at `Z` section in both files. Fixes #15947 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 --> @omarespejel @sgugger @osanseviero @stevhliu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27958/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27958/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27958", "html_url": "https://github.com/huggingface/transformers/pull/27958", "diff_url": "https://github.com/huggingface/transformers/pull/27958.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27958.patch", "merged_at": 1702488120000 }
https://api.github.com/repos/huggingface/transformers/issues/27957
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27957/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27957/comments
https://api.github.com/repos/huggingface/transformers/issues/27957/events
https://github.com/huggingface/transformers/issues/27957
2,036,541,854
I_kwDOCUB6oc55Yyme
27,957
XLMRoberta with Flash Attention 2
{ "login": "IvanPy96", "id": 64599936, "node_id": "MDQ6VXNlcjY0NTk5OTM2", "avatar_url": "https://avatars.githubusercontent.com/u/64599936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/IvanPy96", "html_url": "https://github.com/IvanPy96", "followers_url": "https://api.github.com/users/IvanPy96/followers", "following_url": "https://api.github.com/users/IvanPy96/following{/other_user}", "gists_url": "https://api.github.com/users/IvanPy96/gists{/gist_id}", "starred_url": "https://api.github.com/users/IvanPy96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/IvanPy96/subscriptions", "organizations_url": "https://api.github.com/users/IvanPy96/orgs", "repos_url": "https://api.github.com/users/IvanPy96/repos", "events_url": "https://api.github.com/users/IvanPy96/events{/privacy}", "received_events_url": "https://api.github.com/users/IvanPy96/received_events", "type": "User", "site_admin": false }
[ { "id": 2392046359, "node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5", "url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue", "name": "Good Second Issue", "color": "dd935a", "default": false, "description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!" }, { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Thanks for opening, will mark as a good second issue 🤗 ", "Hi @IvanPy96 & @ArthurZucker I want to work on this issue. Could you please assign it to me? ", "Hey, we don't assign issue, feel free to open a PR and link it to this issue 😉 " ]
1,702
1,703
null
NONE
null
### System Info - transformers version: 4.36.0 - Platform: Linux-4.19.0-22-amd64-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.19.4 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("my_model/", attn_implementation="flash_attention_2") ### Expected behavior Ability to use flash attention 2 for inference. Is it possible to add support of flash attention 2 for XLMRoberta model?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27957/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/27956
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27956/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27956/comments
https://api.github.com/repos/huggingface/transformers/issues/27956/events
https://github.com/huggingface/transformers/pull/27956
2,036,515,405
PR_kwDOCUB6oc5hujXW
27,956
Add `modules_in_block_to_quantize` arg in GPTQConfig
{ "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_27956). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,702
1,702
1,702
MEMBER
null
# What does this PR do? This PR adds the `modules_in_block_to_quantize ` quantization arg for gptq. This is necessary for converting specific layers to quantized layers. With this PR, we should be able to run the gptq mixtral model. See related [PR](https://github.com/huggingface/optimum/pull/1585) in optimum. ```python from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig model_name = "TheBloke/Mixtral-8x7B-v0.1-GPTQ" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, device_map={"":0}) print(model) inputs = tokenizer.encode("Hello, how are you today ?", return_tensors="pt").to(0) outputs = model.generate(inputs, max_new_tokens=100) print(tokenizer.decode(outputs[0])) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27956/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27956", "html_url": "https://github.com/huggingface/transformers/pull/27956", "diff_url": "https://github.com/huggingface/transformers/pull/27956.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27956.patch", "merged_at": 1702494824000 }
https://api.github.com/repos/huggingface/transformers/issues/27955
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27955/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27955/comments
https://api.github.com/repos/huggingface/transformers/issues/27955/events
https://github.com/huggingface/transformers/pull/27955
2,036,220,067
PR_kwDOCUB6oc5htic8
27,955
[`Mixtral`] Change mistral op order
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
CONTRIBUTOR
null
# What does this PR do? This PR slightly refactors the forward pass logic of `MixtralBLockSparseTop2MLP` so that `routing_weights` is no longer a required argument in the forward pass, since AWQ does not handle multiple forward-pass arguments (it assumes all modules take only `hidden_states` as input). cc @ArthurZucker
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27955/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27955/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27955", "html_url": "https://github.com/huggingface/transformers/pull/27955", "diff_url": "https://github.com/huggingface/transformers/pull/27955.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27955.patch", "merged_at": 1702317799000 }
https://api.github.com/repos/huggingface/transformers/issues/27954
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27954/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27954/comments
https://api.github.com/repos/huggingface/transformers/issues/27954/events
https://github.com/huggingface/transformers/issues/27954
2,036,020,560
I_kwDOCUB6oc55WzVQ
27,954
does not appear to have a file named config.json
{ "login": "riyaj8888", "id": 29457825, "node_id": "MDQ6VXNlcjI5NDU3ODI1", "avatar_url": "https://avatars.githubusercontent.com/u/29457825?v=4", "gravatar_id": "", "url": "https://api.github.com/users/riyaj8888", "html_url": "https://github.com/riyaj8888", "followers_url": "https://api.github.com/users/riyaj8888/followers", "following_url": "https://api.github.com/users/riyaj8888/following{/other_user}", "gists_url": "https://api.github.com/users/riyaj8888/gists{/gist_id}", "starred_url": "https://api.github.com/users/riyaj8888/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/riyaj8888/subscriptions", "organizations_url": "https://api.github.com/users/riyaj8888/orgs", "repos_url": "https://api.github.com/users/riyaj8888/repos", "events_url": "https://api.github.com/users/riyaj8888/events{/privacy}", "received_events_url": "https://api.github.com/users/riyaj8888/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can't reproduce. On colab, the following works\r\n\r\n```\r\nfrom transformers import AutoModel\r\nAutoModel.from_pretrained(\"codellama/CodeLlama-7b-Instruct-hf\")\r\n```\r\n\r\nPlease follow the template to provide the necessary information when opening an issue, like system info and a code snippet.", "first i did this\r\n\r\n!pip install git+https://github.com/huggingface/transformers.git@main accelerate\r\n\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig\r\nimport torch\r\n\r\nmodel_id = \"codellama/CodeLlama-7b-Instruct-hf\"\r\nquantization_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_compute_dtype=torch.float16\r\n)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n quantization_config=quantization_config,\r\n device_map=\"cuda:0\",cache_dir=\"codellama/CodeLlama-7b-Instruct-hf\",\r\n)", "ur suggestions also failed.", "OSError Traceback (most recent call last)\r\nCell In[3], line 2\r\n 1 from transformers import AutoModel\r\n----> 2 AutoModel.from_pretrained(\"codellama/CodeLlama-7b-Instruct-hf\")\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py:526, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)\r\n 523 if kwargs.get(\"quantization_config\", None) is not None:\r\n 524 _ = kwargs.pop(\"quantization_config\")\r\n--> 526 config, kwargs = AutoConfig.from_pretrained(\r\n 527 pretrained_model_name_or_path,\r\n 528 return_unused_kwargs=True,\r\n 529 trust_remote_code=trust_remote_code,\r\n 530 code_revision=code_revision,\r\n 531 _commit_hash=commit_hash,\r\n 532 **hub_kwargs,\r\n 533 **kwargs,\r\n 534 )\r\n 536 # if torch_dtype=auto was passed here, ensure to pass it on\r\n 537 if kwargs_orig.get(\"torch_dtype\", None) == \"auto\":\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1082, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 1079 trust_remote_code = kwargs.pop(\"trust_remote_code\", None)\r\n 1080 code_revision = kwargs.pop(\"code_revision\", None)\r\n-> 1082 config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 1083 has_remote_code = \"auto_map\" in config_dict and \"AutoConfig\" in config_dict[\"auto_map\"]\r\n 1084 has_local_code = \"model_type\" in config_dict and config_dict[\"model_type\"] in CONFIG_MAPPING\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/configuration_utils.py:644, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 642 original_kwargs = copy.deepcopy(kwargs)\r\n 643 # Get config dict associated with the base config file\r\n--> 644 config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 645 if \"_commit_hash\" in config_dict:\r\n 646 original_kwargs[\"_commit_hash\"] = config_dict[\"_commit_hash\"]\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/configuration_utils.py:699, in PretrainedConfig._get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 695 configuration_file = kwargs.pop(\"_configuration_file\", CONFIG_NAME)\r\n 697 try:\r\n 698 # Load from local folder or from cache or download from model Hub and cache\r\n--> 699 resolved_config_file = cached_file(\r\n 700 pretrained_model_name_or_path,\r\n 701 configuration_file,\r\n 702 cache_dir=cache_dir,\r\n 703 
force_download=force_download,\r\n 704 proxies=proxies,\r\n 705 resume_download=resume_download,\r\n 706 local_files_only=local_files_only,\r\n 707 token=token,\r\n 708 user_agent=user_agent,\r\n 709 revision=revision,\r\n 710 subfolder=subfolder,\r\n 711 _commit_hash=commit_hash,\r\n 712 )\r\n 713 commit_hash = extract_commit_hash(resolved_config_file, commit_hash)\r\n 714 except EnvironmentError:\r\n 715 # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to\r\n 716 # the original exception.\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/utils/hub.py:360, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)\r\n 358 if not os.path.isfile(resolved_file):\r\n 359 if _raise_exceptions_for_missing_entries:\r\n--> 360 raise EnvironmentError(\r\n 361 f\"{path_or_repo_id} does not appear to have a file named {full_filename}. Checkout \"\r\n 362 f\"'[https://huggingface.co./{path_or_repo_id}/{](https://huggingface.co./%7Bpath_or_repo_id%7D/%7Brevision)[revision](https://huggingface.co./%7Bpath_or_repo_id%7D/%7Brevision)}' for available files.\"\r\n 363 )\r\n 364 else:\r\n 365 return None\r\n\r\nOSError: codellama/CodeLlama-7b-Instruct-hf does not appear to have a file named config.json. Checkout 'https://huggingface.co./codellama/CodeLlama-7b-Instruct-hf/None' for available files.", "You mentioned you work on a colab notebook. Could you share it? As it works on my notebook, except the memory issue at the end (not enough RAM), but the config is loaded without issue.", "i have enough RAM approximately 80GB", "> You mentioned you work on a colab notebook. Could you share it? As it works on my notebook, except the memory issue at the end (not enough RAM), but the config is loaded without issue.\r\n\r\nNOT COLAB NOTEBOOK , its jupyter notebook. in the morning same notebook worked without any issue.", "unable to run this line also\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)", "So you are running on your machine not on colab? Then it's likely some connection issue. I am afraid.\r\n\r\nyou can check here\r\n\r\nhttps://colab.research.google.com/drive/15Q1_311ZhLRwqEDP3Ib4Vnd0PSeX9Qg9?usp=sharing\r\n\r\nMaybe try (my 2 line code) with another model id (say bert, gpt2) in your local env, and see if the same issue also occurs. ", "other model able to download , but not this codellama", "Could you can try\r\n\r\n```\r\nfrom transformers import AutoConfig\r\nAutoConfig.from_pretrained(\"codellama/CodeLlama-7b-Instruct-hf\")\r\n```\r\n? 
If this gives error, please also share the full error.\r\n\r\n", "Also , check if you have a local directory named\r\n\r\n```\r\n./codellama/CodeLlama-7b-Instruct-hf\r\n```\r\nIf it exists, `from_pretrained` will look at that path instead of the Hub.", "> from transformers import AutoConfig\r\n> AutoConfig.from_pretrained(\"codellama/CodeLlama-7b-Instruct-hf\")\r\n\r\nOSError Traceback (most recent call last)\r\nCell In[4], line 2\r\n 1 from transformers import AutoConfig\r\n----> 2 AutoConfig.from_pretrained(\"codellama/CodeLlama-7b-Instruct-hf\")\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py:1082, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\r\n 1079 trust_remote_code = kwargs.pop(\"trust_remote_code\", None)\r\n 1080 code_revision = kwargs.pop(\"code_revision\", None)\r\n-> 1082 config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 1083 has_remote_code = \"auto_map\" in config_dict and \"AutoConfig\" in config_dict[\"auto_map\"]\r\n 1084 has_local_code = \"model_type\" in config_dict and config_dict[\"model_type\"] in CONFIG_MAPPING\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/configuration_utils.py:644, in PretrainedConfig.get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 642 original_kwargs = copy.deepcopy(kwargs)\r\n 643 # Get config dict associated with the base config file\r\n--> 644 config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n 645 if \"_commit_hash\" in config_dict:\r\n 646 original_kwargs[\"_commit_hash\"] = config_dict[\"_commit_hash\"]\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/configuration_utils.py:699, in PretrainedConfig._get_config_dict(cls, pretrained_model_name_or_path, **kwargs)\r\n 695 configuration_file = kwargs.pop(\"_configuration_file\", CONFIG_NAME)\r\n 697 try:\r\n 698 # Load from local folder or from cache or download from model Hub and cache\r\n--> 699 resolved_config_file = cached_file(\r\n 700 pretrained_model_name_or_path,\r\n 701 configuration_file,\r\n 702 cache_dir=cache_dir,\r\n 703 force_download=force_download,\r\n 704 proxies=proxies,\r\n 705 resume_download=resume_download,\r\n 706 local_files_only=local_files_only,\r\n 707 token=token,\r\n 708 user_agent=user_agent,\r\n 709 revision=revision,\r\n 710 subfolder=subfolder,\r\n 711 _commit_hash=commit_hash,\r\n 712 )\r\n 713 commit_hash = extract_commit_hash(resolved_config_file, commit_hash)\r\n 714 except EnvironmentError:\r\n 715 # Raise any environment error raise by `cached_file`. It will have a helpful error message adapted to\r\n 716 # the original exception.\r\n\r\nFile ~/.local/lib/python3.10/site-packages/transformers/utils/hub.py:360, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)\r\n 358 if not os.path.isfile(resolved_file):\r\n 359 if _raise_exceptions_for_missing_entries:\r\n--> 360 raise EnvironmentError(\r\n 361 f\"{path_or_repo_id} does not appear to have a file named {full_filename}. 
Checkout \"\r\n 362 f\"'[https://huggingface.co./{path_or_repo_id}/{](https://huggingface.co./%7Bpath_or_repo_id%7D/%7Brevision)[revision](https://huggingface.co./%7Bpath_or_repo_id%7D/%7Brevision)}' for available files.\"\r\n 363 )\r\n 364 else:\r\n 365 return None\r\n\r\nOSError: codellama/CodeLlama-7b-Instruct-hf does not appear to have a file named config.json. Checkout 'https://huggingface.co./codellama/CodeLlama-7b-Instruct-hf/None' for available files.", "```python\r\nfrom transformers import Owlv2ForObjectDetection, Owlv2Processor\r\n\r\n\r\nprocessor = Owlv2Processor.from_pretrained(\"google/owlv2-base-patch16-ensemble\")\r\n\r\n```\r\n\r\nOSError: google/owlv2-base-patch16-ensemble does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co./google/owlv2-base-patch16-ensemble/main' for available files.", "Please check this comment https://github.com/huggingface/transformers/issues/27954#issuecomment-1850480411", "> Please check this comment [#27954 (comment)](https://github.com/huggingface/transformers/issues/27954#issuecomment-1850480411)\r\n\r\nyes, a download scripts make such a directory", "if you have a directory having the name the same as the repo id, and if it doesn't contain the necessary files, it will fail, as `from_pretrained` is looking for files inside that local directory instead of the remote model repository.\r\n", "Delete models in hub cache by using this command `huggingface-cli delete-cache`, then delete models from local directory could resolve this issue. Reasonably, somehow the model is saved locally could cause this issue. For example. I download the model by using this `...from_pretrained().save_pretrained(\"./local\")` then I received exact error message.", "Yes, if a local directory having the same name , then it will be checked - and would fail if that directory doesn't have all the necessary files", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,707
1,707
NONE
null
Initially I was able to load this model; now it suddenly gives the error below in the same notebook: codellama/CodeLlama-7b-Instruct-hf does not appear to have a file named config.json
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27954/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27954/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27953
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27953/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27953/comments
https://api.github.com/repos/huggingface/transformers/issues/27953/events
https://github.com/huggingface/transformers/issues/27953
2,035,940,203
I_kwDOCUB6oc55Wftr
27,953
Multi-GPU inference not supported with Mixtral (MoE)!
{ "login": "DataCTE", "id": 105170707, "node_id": "U_kgDOBkTHEw", "avatar_url": "https://avatars.githubusercontent.com/u/105170707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DataCTE", "html_url": "https://github.com/DataCTE", "followers_url": "https://api.github.com/users/DataCTE/followers", "following_url": "https://api.github.com/users/DataCTE/following{/other_user}", "gists_url": "https://api.github.com/users/DataCTE/gists{/gist_id}", "starred_url": "https://api.github.com/users/DataCTE/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DataCTE/subscriptions", "organizations_url": "https://api.github.com/users/DataCTE/orgs", "repos_url": "https://api.github.com/users/DataCTE/repos", "events_url": "https://api.github.com/users/DataCTE/events{/privacy}", "received_events_url": "https://api.github.com/users/DataCTE/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "cc @SunMarc? ", "There's a related PR with the fix here: #27948 ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,702
1,704
1,704
NONE
null
### System Info (most recent call last): File "/deep-pool/inference/text-generation-webui/modules/callbacks.py", line 57, in gentask ret = self.mfunc(callback=_callback, args, self.kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/deep-pool/inference/text-generation-webui/modules/text_generation.py", line 352, in generate_with_callback shared.model.generate(kwargs) File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(args, kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/generation/utils.py", line 1764, in generate return self.sample( ^^^^^^^^^^^^ File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/generation/utils.py", line 2861, in sample outputs = self( ^^^^^ File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/alex/miniconda/envs/textgen/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward output = module._old_forward(args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/models/mixtral/modeling_mixtral.py", line 1244, in forward aux_loss = load_balancing_loss_func( ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/deep-pool/inference/text-generation-webui/transformers/src/transformers/models/mixtral/modeling_mixtral.py", line 98, in load_balancing_loss_func gate_logits = torch.cat(gate_logits, dim=0) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument tensors in method wrapper_CUDA_cat) Output generated in 2.42 seconds (0.00 tokens/s, 0 tokens, context 65, seed 459973075) it seems no matter what I try Mixtral models explicitly do not support multi-GPU inference. No other model on via transformers has this from what I know and this seems to be a bug of some kind. thank you so much for your time. ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import torch from transformers import AutoModelForCausalLM, AutoTokenizer model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, device_map="auto" ,use_flash_attention_2=True) text = "Hello my name is" inputs = tokenizer(text, return_tensors="pt").to(0) outputs = model.generate(**inputs, max_new_tokens=20) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ### Expected behavior model output (but getting multi gpu inference not supported)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27953/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27953/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27951
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27951/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27951/comments
https://api.github.com/repos/huggingface/transformers/issues/27951/events
https://github.com/huggingface/transformers/pull/27951
2,035,826,425
PR_kwDOCUB6oc5hsLfw
27,951
Fix AMD scheduled CI not triggered
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,702
1,702
1,702
COLLABORATOR
null
# What does this PR do? A bug was introduced in #27743: the AMD scheduled CI was restructured, but the `github.event_name == 'schedule'` should be changed to `github.event_name == 'workflow_run'`. Currently, the (actual) AMD scheduled CI is not triggered.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27951/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27951/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/27951", "html_url": "https://github.com/huggingface/transformers/pull/27951", "diff_url": "https://github.com/huggingface/transformers/pull/27951.diff", "patch_url": "https://github.com/huggingface/transformers/pull/27951.patch", "merged_at": 1702308131000 }