url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/28457 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28457/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28457/comments | https://api.github.com/repos/huggingface/transformers/issues/28457/events | https://github.com/huggingface/transformers/pull/28457 | 2,077,277,222 | PR_kwDOCUB6oc5j12zh | 28,457 | Bump jinja2 from 2.11.3 to 3.1.3 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410654816,
"node_id": "LA_kwDOCUB6oc8AAAABfhrUYA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/python",
"name": "python",
"color": "2b67c6",
"default": false,
"description": "Pull requests that update Python code"
}
] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28457). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"FYI @Rocketknight1 if we see failures"
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | Bumps [jinja2](https://github.com/pallets/jinja) from 2.11.3 to 3.1.3.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/releases">jinja2's releases</a>.</em></p>
<blockquote>
<h2>3.1.3</h2>
<p>This is a fix release for the 3.1.x feature branch.</p>
<ul>
<li>Fix for <a href="https://github.com/pallets/jinja/security/advisories/GHSA-h5c8-rqwp-cp95">GHSA-h5c8-rqwp-cp95</a>. You are affected if you are using <code>xmlattr</code> and passing user input as attribute keys.</li>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-3">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-3</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/15?closed=1">https://github.com/pallets/jinja/milestone/15?closed=1</a></li>
</ul>
<h2>3.1.2</h2>
<p>This is a fix release for the <a href="https://github.com/pallets/jinja/releases/tag/3.1.0">3.1.0</a> feature release.</p>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-2">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-2</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/13?closed=1">https://github.com/pallets/jinja/milestone/13?closed=1</a></li>
</ul>
<h2>3.1.1</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-1">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-1</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/12?closed=1">https://github.com/pallets/jinja/milestone/12?closed=1</a></li>
</ul>
<h2>3.1.0</h2>
<p>This is a feature release, which includes new features and removes previously deprecated features. The 3.1.x branch is now the supported bugfix branch, the 3.0.x branch has become a tag marking the end of support for that branch. We encourage everyone to upgrade, and to use a tool such as <a href="https://pypi.org/project/pip-tools/">pip-tools</a> to pin all dependencies and control upgrades. We also encourage upgrading to MarkupSafe 2.1.1, the latest version at this time.</p>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-0">https://jinja.palletsprojects.com/en/3.1.x/changes/#version-3-1-0</a></li>
<li>Milestone: <a href="https://github.com/pallets/jinja/milestone/8?closed=1">https://github.com/pallets/jinja/milestone/8?closed=1</a></li>
<li>MarkupSafe changes: <a href="https://markupsafe.palletsprojects.com/en/2.1.x/changes/#version-2-1-1">https://markupsafe.palletsprojects.com/en/2.1.x/changes/#version-2-1-1</a></li>
</ul>
<h2>3.0.3</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-3">https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-3</a></li>
</ul>
<h2>3.0.2</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-2">https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-2</a></li>
</ul>
<h2>3.0.1</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-1">https://jinja.palletsprojects.com/en/3.0.x/changes/#version-3-0-1</a></li>
</ul>
<h2>3.0.0</h2>
<p>New major versions of all the core Pallets libraries, including Jinja 3.0, have been released! :tada:</p>
<ul>
<li>Read the announcement on our blog: <a href="https://palletsprojects.com/blog/flask-2-0-released/">https://palletsprojects.com/blog/flask-2-0-released/</a></li>
<li>Read the full list of changes: <a href="https://jinja.palletsprojects.com/changes/#version-3-0-0">https://jinja.palletsprojects.com/changes/#version-3-0-0</a></li>
<li>Retweet the announcement on Twitter: <a href="https://twitter.com/PalletsTeam/status/1392266507296514048">https://twitter.com/PalletsTeam/status/1392266507296514048</a></li>
<li>Follow our blog, Twitter, or GitHub to see future announcements.</li>
</ul>
<p>This represents a significant amount of work, and there are quite a few changes. Be sure to carefully read the changelog, and use tools such as pip-compile and Dependabot to pin your dependencies and control your updates.</p>
<h2>3.0.0rc2</h2>
<p>Fixes an issue with the deprecated <code>Markup</code> subclass, <a href="https://redirect.github.com/pallets/jinja/issues/1401">#1401</a>.</p>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0">https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0</a></li>
</ul>
<h2>3.0.0rc1</h2>
<ul>
<li>Changes: <a href="https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0">https://jinja.palletsprojects.com/en/master/changes/#version-3-0-0</a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/pallets/jinja/blob/main/CHANGES.rst">jinja2's changelog</a>.</em></p>
<blockquote>
<h2>Version 3.1.3</h2>
<p>Released 2024-01-10</p>
<ul>
<li>Fix compiler error when checking if required blocks in parent templates are
empty. :pr:<code>1858</code></li>
<li><code>xmlattr</code> filter does not allow keys with spaces. GHSA-h5c8-rqwp-cp95</li>
<li>Make error messages stemming from invalid nesting of <code>{% trans %}</code> blocks
more helpful. :pr:<code>1918</code></li>
</ul>
<h2>Version 3.1.2</h2>
<p>Released 2022-04-28</p>
<ul>
<li>Add parameters to <code>Environment.overlay</code> to match <code>__init__</code>.
:issue:<code>1645</code></li>
<li>Handle race condition in <code>FileSystemBytecodeCache</code>. :issue:<code>1654</code></li>
</ul>
<h2>Version 3.1.1</h2>
<p>Released 2022-03-25</p>
<ul>
<li>The template filename on Windows uses the primary path separator.
:issue:<code>1637</code></li>
</ul>
<h2>Version 3.1.0</h2>
<p>Released 2022-03-24</p>
<ul>
<li>
<p>Drop support for Python 3.6. :pr:<code>1534</code></p>
</li>
<li>
<p>Remove previously deprecated code. :pr:<code>1544</code></p>
<ul>
<li><code>WithExtension</code> and <code>AutoEscapeExtension</code> are built-in now.</li>
<li><code>contextfilter</code> and <code>contextfunction</code> are replaced by
<code>pass_context</code>. <code>evalcontextfilter</code> and
<code>evalcontextfunction</code> are replaced by <code>pass_eval_context</code>.
<code>environmentfilter</code> and <code>environmentfunction</code> are replaced
by <code>pass_environment</code>.</li>
<li><code>Markup</code> and <code>escape</code> should be imported from MarkupSafe.</li>
<li>Compiled templates from very old Jinja versions may need to be
recompiled.</li>
<li>Legacy resolve mode for <code>Context</code> subclasses is no longer
supported. Override <code>resolve_or_missing</code> instead of</li>
</ul>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/pallets/jinja/commit/d9de4bb215fd1cc8092a410fb834c7c4060b1fc1"><code>d9de4bb</code></a> release version 3.1.3</li>
<li><a href="https://github.com/pallets/jinja/commit/50124e16561f17f6c1ec85a692f6551418971cdc"><code>50124e1</code></a> skip test pypi</li>
<li><a href="https://github.com/pallets/jinja/commit/9ea7222ef3f184480be0d0884e30ccfb4172b17b"><code>9ea7222</code></a> use trusted publishing</li>
<li><a href="https://github.com/pallets/jinja/commit/da703f7aae36b1e88baaa20de334d7ff6378fdde"><code>da703f7</code></a> use trusted publishing</li>
<li><a href="https://github.com/pallets/jinja/commit/bce174692547464512383ec40e0f8338b8811983"><code>bce1746</code></a> use trusted publishing</li>
<li><a href="https://github.com/pallets/jinja/commit/7277d8068be593deab3555c7c14f974ada373af1"><code>7277d80</code></a> update pre-commit hooks</li>
<li><a href="https://github.com/pallets/jinja/commit/5c8a10522421270f66376a24ec8e0d6812bc4b14"><code>5c8a105</code></a> Make nested-trans-block exceptions nicer (<a href="https://redirect.github.com/pallets/jinja/issues/1918">#1918</a>)</li>
<li><a href="https://github.com/pallets/jinja/commit/19a55db3b411343309f2faaffaedbb089e841895"><code>19a55db</code></a> Make nested-trans-block exceptions nicer</li>
<li><a href="https://github.com/pallets/jinja/commit/716795349a41d4983a9a4771f7d883c96ea17be7"><code>7167953</code></a> Merge pull request from GHSA-h5c8-rqwp-cp95</li>
<li><a href="https://github.com/pallets/jinja/commit/7dd3680e6eea0d77fde024763657aa4d884ddb23"><code>7dd3680</code></a> xmlattr filter disallows keys with spaces</li>
<li>Additional commits viewable in <a href="https://github.com/pallets/jinja/compare/2.11.3...3.1.3">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=jinja2&package-manager=pip&previous-version=2.11.3&new-version=3.1.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28457/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28457/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28457",
"html_url": "https://github.com/huggingface/transformers/pull/28457",
"diff_url": "https://github.com/huggingface/transformers/pull/28457.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28457.patch",
"merged_at": 1705069735000
} |
https://api.github.com/repos/huggingface/transformers/issues/28456 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28456/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28456/comments | https://api.github.com/repos/huggingface/transformers/issues/28456/events | https://github.com/huggingface/transformers/issues/28456 | 2,077,240,191 | I_kwDOCUB6oc570Ct_ | 28,456 | Very slow on conditional check of HF tokenizers | {
"login": "pseudotensor",
"id": 2249614,
"node_id": "MDQ6VXNlcjIyNDk2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2249614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pseudotensor",
"html_url": "https://github.com/pseudotensor",
"followers_url": "https://api.github.com/users/pseudotensor/followers",
"following_url": "https://api.github.com/users/pseudotensor/following{/other_user}",
"gists_url": "https://api.github.com/users/pseudotensor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pseudotensor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pseudotensor/subscriptions",
"organizations_url": "https://api.github.com/users/pseudotensor/orgs",
"repos_url": "https://api.github.com/users/pseudotensor/repos",
"events_url": "https://api.github.com/users/pseudotensor/events{/privacy}",
"received_events_url": "https://api.github.com/users/pseudotensor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Thanks for raising this, I have no idea how to fix this 🤣 python is trying to convert the object to a boolean, but I don't know what's happening other than that! 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
transformers==4.36.2
Python 3.10
Ubuntu 20
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
It's pretty pythonic to avoid "x is not None" when you could have just done "if x". I know for numpy objects these aren't the same thing, but I don't know why the tokenizer would do a lot of extra work here. Seems like a bug.
```
import time
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)
t0 = time.time()
if tokenizer:
pass
print(time.time() - t0)
t0 = time.time()
if tokenizer is not None:
pass
print(time.time() - t0)
tokenizer = AutoTokenizer.from_pretrained("h2oai/h2ogpt-4096-llama2-7b-chat", trust_remote_code=True)
t0 = time.time()
if tokenizer:
pass
print(time.time() - t0)
t0 = time.time()
if tokenizer is not None:
pass
print(time.time() - t0)
```
```
0.0909724235534668
9.5367431640625e-07
0.0019714832305908203
2.384185791015625e-07
```
### Expected behavior
Not be 100,000x slower when you do:
```
if tokenizer:
pass
```
vs. faster:
```
if tokenizer is not None:
pass
```
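The gap above can be reproduced without transformers at all. When a class defines `__len__` but not `__bool__`, Python's truth test (`if obj:`) falls back to calling `__len__`, so any object whose length is expensive to compute pays that cost on every truthiness check, while `is not None` is a plain identity comparison. A minimal stdlib sketch (the class below is a stand-in, not the actual tokenizer code):

```python
class CountingLen:
    """Stand-in for an object (like a tokenizer) that defines an expensive
    __len__ but no __bool__."""
    def __init__(self):
        self.len_calls = 0

    def __len__(self):
        self.len_calls += 1  # pretend this recomputes the whole vocab size
        return 1

obj = CountingLen()

if obj:              # truth test -> falls back to __len__ (expensive)
    pass
if obj is not None:  # identity check -> no special method invoked
    pass

print(obj.len_calls)  # -> 1: only the truthiness check paid the cost
```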
Point is that I might check tokenizer many times, and actually tokenizing things is much faster than that check, which is bad. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28456/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28456/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28455 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28455/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28455/comments | https://api.github.com/repos/huggingface/transformers/issues/28455/events | https://github.com/huggingface/transformers/pull/28455 | 2,077,174,720 | PR_kwDOCUB6oc5j1f1e | 28,455 | Changed type hinting for attentions | {
"login": "nakranivaibhav",
"id": 67785830,
"node_id": "MDQ6VXNlcjY3Nzg1ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/67785830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nakranivaibhav",
"html_url": "https://github.com/nakranivaibhav",
"followers_url": "https://api.github.com/users/nakranivaibhav/followers",
"following_url": "https://api.github.com/users/nakranivaibhav/following{/other_user}",
"gists_url": "https://api.github.com/users/nakranivaibhav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nakranivaibhav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nakranivaibhav/subscriptions",
"organizations_url": "https://api.github.com/users/nakranivaibhav/orgs",
"repos_url": "https://api.github.com/users/nakranivaibhav/repos",
"events_url": "https://api.github.com/users/nakranivaibhav/events{/privacy}",
"received_events_url": "https://api.github.com/users/nakranivaibhav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | Issue No: #28345
# What does this PR do?
Changed type hinting for attentions to 'attentions: Optional[tuple[torch.FloatTensor,...]] = None'
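For readers unfamiliar with the notation, `tuple[X, ...]` is a variable-length homogeneous tuple, which matches one attention tensor per layer, while `Optional[...]` permits the default `None` used when attentions are not returned. A minimal illustration with a stand-in class instead of `torch.FloatTensor`, so it runs without torch:

```python
from dataclasses import dataclass
from typing import Optional


class FloatTensor:
    """Stand-in for torch.FloatTensor so this sketch has no torch dependency."""


@dataclass
class ModelOutput:
    # tuple[FloatTensor, ...] = a tuple of any length whose items are all
    # FloatTensor (one per layer); Optional allows the default of None
    # when output_attentions=False.
    attentions: Optional[tuple[FloatTensor, ...]] = None


print(ModelOutput().attentions)  # -> None
print(len(ModelOutput(attentions=(FloatTensor(), FloatTensor())).attentions))  # -> 2
```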
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28455/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28455/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28455",
"html_url": "https://github.com/huggingface/transformers/pull/28455",
"diff_url": "https://github.com/huggingface/transformers/pull/28455.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28455.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28454 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28454/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28454/comments | https://api.github.com/repos/huggingface/transformers/issues/28454/events | https://github.com/huggingface/transformers/issues/28454 | 2,077,166,524 | I_kwDOCUB6oc57zwu8 | 28,454 | Generate with Logits Processor not working - even with no modifications to logits | {
"login": "SamSJackson",
"id": 86316114,
"node_id": "MDQ6VXNlcjg2MzE2MTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/86316114?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamSJackson",
"html_url": "https://github.com/SamSJackson",
"followers_url": "https://api.github.com/users/SamSJackson/followers",
"following_url": "https://api.github.com/users/SamSJackson/following{/other_user}",
"gists_url": "https://api.github.com/users/SamSJackson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamSJackson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamSJackson/subscriptions",
"organizations_url": "https://api.github.com/users/SamSJackson/orgs",
"repos_url": "https://api.github.com/users/SamSJackson/repos",
"events_url": "https://api.github.com/users/SamSJackson/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamSJackson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I spent far too long on this. I had forgotten to instantiate my custom processor class.",
"Thanks for sharing! 🤗 "
] | 1,704 | 1,705 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Not that I am aware of.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# Overview
I am using a custom logits processor but I have made a custom class here that does nothing to the logits but shows that the process is not working.
Similarly, I am working with mistral's 7B instruct model but problem persists with other models, such as GPT2 - not found a working model yet.
## Minimum code to reproduce:
```
from transformers import AutoModelForCausalLM, AutoTokenizer, LogitsProcessorList, LogitsProcessor
from data.code.implementation.newkirch.extended_watermark_processor import WatermarkLogitsProcessor
import torch
device = "cuda"
model_name = "gpt2"
# model_name = "mistralai/Mistral-7B-Instruct-v0.2" # Mistral AI model
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map=device)
tokenizer = AutoTokenizer.from_pretrained(model_name)
class CustomLogitsProcessor(LogitsProcessor):
def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
return scores
def generate_essay(prompt):
messages = [{
"role": "user",
"content": prompt
}]
model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
# Setting `pad_token_id` to `eos_token_id` for open-ended generation.
generated_ids = model.generate(
model_inputs,
max_new_tokens=7500,
do_sample=True,
pad_token_id=tokenizer.eos_token_id,
logits_processor=LogitsProcessorList([CustomLogitsProcessor])
)
decoded = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
text = decoded[0].split("[/INST]")[1]
return text
prompt = '''You are a student working on the following assignment.
Create an essay based on the following topic in no more than a 100 words.
Topic: Why are cats better than dogs?
'''
text = generate_essay(prompt)
print(text)
```
## Response/error:
```
Traceback (most recent call last):
File "C:\Users\Sam\Desktop\Level4-Proj\data\mistral_test.py", line 41, in <module>
text = generate_essay(prompt)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Sam\Desktop\Level4-Proj\data\mistral_test.py", line 22, in generate_essay
generated_ids = model.generate(
^^^^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\transformers\generation\utils.py", line 1777, in generate
return self.sample(
^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\transformers\generation\utils.py", line 2887, in sample
next_token_scores = logits_processor(input_ids, next_token_logits)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Sam\anaconda3\envs\wmark-pt\Lib\site-packages\transformers\generation\logits_process.py", line 94, in __call__
raise ValueError(
ValueError: Make sure that all the required parameters: ['self', 'input_ids', 'scores'] for <class 'type'> are passed to the logits processor.
```
From my investigation, it looks like it is getting caught up [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py), on line 90.
I have verified that "scores" and "input_ids" are both present when the call is made.
Not sure if `function_args` should be checked with `list(function_args.keys())[3:]`, but this is where the error is happening.
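As the author later notes in the comments, the root cause is that the class itself, rather than an instance, is passed in `LogitsProcessorList([CustomLogitsProcessor])`. The check in `generate` inspects the processor's `__call__` signature, and on the class that signature still includes `self`, which is exactly what the `ValueError` lists. A stand-alone sketch of the difference, using `inspect` and a stub class (no transformers required):

```python
import inspect


class CustomLogitsProcessor:
    def __call__(self, input_ids, scores):
        return scores


def call_params(processor):
    """Parameter names seen by a signature check like the one in generate()."""
    return list(inspect.signature(processor.__call__).parameters)


print(call_params(CustomLogitsProcessor))    # class    -> ['self', 'input_ids', 'scores']
print(call_params(CustomLogitsProcessor()))  # instance -> ['input_ids', 'scores']
```

Passing an instance, `LogitsProcessorList([CustomLogitsProcessor()])` (note the parentheses), avoids the error.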
# Expectations
I would expect the model to generate some text corresponding to the prompt.
In an effort to show that the logit processor is doing nothing, sampling is off - results should be deterministic.
The text produced should be very similar to the text produced without the logits processor. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28454/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28454/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28453 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28453/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28453/comments | https://api.github.com/repos/huggingface/transformers/issues/28453/events | https://github.com/huggingface/transformers/issues/28453 | 2,077,008,882 | I_kwDOCUB6oc57zKPy | 28,453 | Loading safetensor version of mistralai/Mistral-7B-Instruct-v0.1(7b) in Triton Server results in cuda OOM | {
"login": "jimmymanianchira",
"id": 9268915,
"node_id": "MDQ6VXNlcjkyNjg5MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9268915?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimmymanianchira",
"html_url": "https://github.com/jimmymanianchira",
"followers_url": "https://api.github.com/users/jimmymanianchira/followers",
"following_url": "https://api.github.com/users/jimmymanianchira/following{/other_user}",
"gists_url": "https://api.github.com/users/jimmymanianchira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimmymanianchira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimmymanianchira/subscriptions",
"organizations_url": "https://api.github.com/users/jimmymanianchira/orgs",
"repos_url": "https://api.github.com/users/jimmymanianchira/repos",
"events_url": "https://api.github.com/users/jimmymanianchira/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimmymanianchira/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reporting, pinging @Narsil for visibility, do you have any reproducers / examples of inputs that can lead to this? ",
"@jimmymanianchira this is quite unusual as safetensors do not interact with CUDA right @Narsil ? \r\nAre you sure you are using the same transformers version across your tests? How did you move your tests to use the safetensors version? ",
"Safetensors doesn't interact with it in any way. I suppose something else is at play.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
transformers==4.36
Triton Instance - nvcr.io/nvidia/tritonserver:23.09-pyt-python-py3
torch==2.1.2
### Who can help?
We are trying to load and serve [Mistral 7B Instruct v0.1](https://huggingface.co./mistralai/Mistral-7B-Instruct-v0.1) using Triton Inference Server.
We were initially loading the weights using .bin files and it worked fine, but we later moved to the safetensors version. We noticed that after calling the model a couple of times, it runs out of CUDA memory. I tried it on 4 GPUs and the same thing happens: it works for the 1st inference call and after that, we end up with OOM. It's very strange, and the .bin weights work normally
@ArthurZucker and @younesbelkada @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Load Mistral 7b in Triton Inference Server using safe tensor weights
2. Send couple of inference calls
3. Will notice OOM
### Expected behavior
Loading weights with safetensors shouldn't cause OOM. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28453/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28453/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28452 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28452/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28452/comments | https://api.github.com/repos/huggingface/transformers/issues/28452/events | https://github.com/huggingface/transformers/pull/28452 | 2,076,802,059 | PR_kwDOCUB6oc5j0Mjb | 28,452 | Fix docker file | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | COLLABORATOR | null | # What does this PR do?
The changes in #28400 and #28432 break the docker image build. This PR fixes 2 issues so we can build the image for CI. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28452/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28452/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28452",
"html_url": "https://github.com/huggingface/transformers/pull/28452",
"diff_url": "https://github.com/huggingface/transformers/pull/28452.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28452.patch",
"merged_at": 1704983646000
} |
https://api.github.com/repos/huggingface/transformers/issues/28451 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28451/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28451/comments | https://api.github.com/repos/huggingface/transformers/issues/28451/events | https://github.com/huggingface/transformers/pull/28451 | 2,076,714,857 | PR_kwDOCUB6oc5jz4zg | 28,451 | Fix broken link on page | {
"login": "keenranger",
"id": 18392918,
"node_id": "MDQ6VXNlcjE4MzkyOTE4",
"avatar_url": "https://avatars.githubusercontent.com/u/18392918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keenranger",
"html_url": "https://github.com/keenranger",
"followers_url": "https://api.github.com/users/keenranger/followers",
"following_url": "https://api.github.com/users/keenranger/following{/other_user}",
"gists_url": "https://api.github.com/users/keenranger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keenranger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keenranger/subscriptions",
"organizations_url": "https://api.github.com/users/keenranger/orgs",
"repos_url": "https://api.github.com/users/keenranger/repos",
"events_url": "https://api.github.com/users/keenranger/events{/privacy}",
"received_events_url": "https://api.github.com/users/keenranger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28451). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a broken link in the docs; see the `Hub` link on this [page](https://huggingface.co./docs/transformers/main/en/add_new_pipeline)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28451/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28451/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28451",
"html_url": "https://github.com/huggingface/transformers/pull/28451",
"diff_url": "https://github.com/huggingface/transformers/pull/28451.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28451.patch",
"merged_at": 1704993973000
} |
https://api.github.com/repos/huggingface/transformers/issues/28450 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28450/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28450/comments | https://api.github.com/repos/huggingface/transformers/issues/28450/events | https://github.com/huggingface/transformers/pull/28450 | 2,076,610,815 | PR_kwDOCUB6oc5jzhVb | 28,450 | Fix docstring checker issues with PIL enums | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | MEMBER | null | The docstring checker is called by `make repo-consistency` or `make fix-copies`, but it struggles with enums, and particularly struggles with the `PIL.Resampling` enum, as this moved in PIL 10 and we set some code in `image_utils` to always put it in the same place. This caused issues where, depending on the installed PIL version, the docstring checker would try to replace enum names like `Resampling.BICUBIC` with the enum int value for that entry.
After this fix, people should be able to upgrade to the latest version of `PIL` and run `make fixup` or `make fix-copies` without issues! The issue may persist on older versions of PIL, unfortunately, where the value is sometimes just a raw `int` rather than an `Enum`, but we can just ask users to upgrade if they encounter issues there. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28450/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28450/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28450",
"html_url": "https://github.com/huggingface/transformers/pull/28450",
"diff_url": "https://github.com/huggingface/transformers/pull/28450.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28450.patch",
"merged_at": 1704993822000
} |
https://api.github.com/repos/huggingface/transformers/issues/28449 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28449/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28449/comments | https://api.github.com/repos/huggingface/transformers/issues/28449/events | https://github.com/huggingface/transformers/issues/28449 | 2,076,606,399 | I_kwDOCUB6oc57xn-_ | 28,449 | Intel/dpt-swinv2: TypeError: unsupported operand type(s) for //: 'NoneType' and 'NoneType' | {
"login": "kadirnar",
"id": 36204372,
"node_id": "MDQ6VXNlcjM2MjA0Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/36204372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kadirnar",
"html_url": "https://github.com/kadirnar",
"followers_url": "https://api.github.com/users/kadirnar/followers",
"following_url": "https://api.github.com/users/kadirnar/following{/other_user}",
"gists_url": "https://api.github.com/users/kadirnar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kadirnar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kadirnar/subscriptions",
"organizations_url": "https://api.github.com/users/kadirnar/orgs",
"repos_url": "https://api.github.com/users/kadirnar/repos",
"events_url": "https://api.github.com/users/kadirnar/events{/privacy}",
"received_events_url": "https://api.github.com/users/kadirnar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This model doesn't work either.\r\n\r\nhttps://huggingface.co./Intel/dpt-beit-base-384",
"cc @NielsRogge as well ",
"Hi,\r\n\r\nLooks like you may need to update your Transformers version. Works for me on main",
"> Hi,\r\n> \r\n> Looks like you may need to update your Transformers version. Works for me on main\r\n\r\nI updated the package version.\r\n\r\n- `transformers` version: 4.36.2\r\n- Platform: Windows-10-10.0.22631-SP0\r\n- Python version: 3.8.18\r\n- Huggingface_hub version: 0.19.4\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.25.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.1.1+cu118 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nThe \"Intel/dpt-beit-base-384\" model works, but the \"swinv2\" model doesn't.\r\n\r\n<img width=\"1619\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/36204372/98ea8310-8746-4180-a696-71ccd94d87d2\">\r\n\r\n",
"Ok, that's because https://github.com/huggingface/transformers/pull/27742 was merged 3 weeks ago, which probably was not part yet of 4.36.2. Hence this one will be part of the next Transformers release."
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.8.18
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Narsil @amyeroberts
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
pipe = pipeline(task="depth-estimation", model="Intel/dpt-swinv2-large-384")
result = pipe("http://images.cocodataset.org/val2017/000000039769.jpg")
result["depth"]
```
Error Message:
```
image_size = image_size if isinstance(image_size, collections.abc.Iterable) else (image_size, image_size)
patch_size = patch_size if isinstance(patch_size, collections.abc.Iterable) else (patch_size, patch_size)
num_patches = (image_size[1] // patch_size[1]) * (image_size[0] // patch_size[0])
TypeError: unsupported operand type(s) for //: 'NoneType' and 'NoneType'
```
### Expected behavior
I want to test the dpt-swinv2-large-384 model.
Model Page: https://huggingface.co./Intel/dpt-swinv2-large-384 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28449/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28449/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28448 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28448/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28448/comments | https://api.github.com/repos/huggingface/transformers/issues/28448/events | https://github.com/huggingface/transformers/issues/28448 | 2,076,593,743 | I_kwDOCUB6oc57xk5P | 28,448 | Interested in YOLOv6 Addition? | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @SangbumChoi thanks for this amazing draft!\r\n\r\nIs the YOLOv6 model trained from scratch or are you using any pre-trained weights?\r\n\r\nRegarding the design, that looks great already, however we would need to include the preparation of the targets inside the image processor, such that users can pass images + targets to it, such that the image processor then outputs a `BatchFeature` containing both `pixel_values` and `labels`.\r\n\r\nSame goes for the postprocessing, we would need to make it conform our existing models, meaning that a `post_process_object_detection` method would need to be implemented. This would also allow the model to be compatible with the pipeline API, making sure it will work with the inference widgets on the hub, etc.\r\n\r\nI'll discuss with team regarding the addition of the model :)",
"Hi @NielsRogge!\r\n\r\nThe current pipelines for the yolov6n and yolov6s models all use public pre-trained weights, and I also tested tolerance (1e-3 rather than 1e-4). You may check [convert_yolov6_to_pytorch.py](https://github.com/SangbumChoi/transformers/blob/yolov6/src/transformers/models/yolov6/convert_yolov6_to_pytorch.py), however you might need to install YOLOv6, unwrap the model.pt, and store the pure state_dict since it is wrapped with the python class yolo.\r\n\r\nhttps://github.com/meituan/YOLOv6/releases/download/0.4.0/yolov6n.pt\r\n\r\nAlso, I think I understand more than 90% of the whole pipeline in transformers. So currently I'm working on the training pipeline and also including minor features such as `BatchFeature`, `post_process_object_detection`, etc.\r\n\r\nI think a perfect or PR-ready version might take some more time, but feel free to discuss with your team, and I'm happy to get feedback on mandatory requirements 😄 "
] | 1,704 | 1,706 | 1,706 | CONTRIBUTOR | null | ### Model description
Hi transformers team, my question is very simple: is the team interested in implementing [YOLOv6](https://github.com/meituan/YOLOv6/tree/main)?
I have finished building the inference pipeline and am working on the training pipeline.
https://github.com/SangbumChoi/transformers/tree/yolov6
Currently, it may have small bugs and be a bit rough, but it works. I will continue working on it regardless of whether it is going to be officially implemented or not.
```
from transformers import Yolov6Model, Yolov6ForObjectDetection
from transformers import Yolov6Config
import io
import requests
from PIL import Image
import torch
import numpy
from transformers.image_transforms import center_to_corners_format
from transformers import Yolov6ImageProcessor
from torchvision.ops.boxes import batched_nms
object_model = Yolov6ForObjectDetection.from_pretrained("superb-ai/yolov6n").cuda()
object_model.eval()
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = Yolov6ImageProcessor()
inputs = image_processor(images=image, size={"shortest_edge": 640, "longest_edge": 640}, return_tensors="pt")
label = False
if label:
n_targets = 8
batch_size = 1
torch_device = 'cuda'
labels = []
for i in range(batch_size):
target = {}
target["class_labels"] = torch.ones(
size=(n_targets,), device=torch_device, dtype=torch.long
)
target["boxes"] = torch.rand(
n_targets, 4, device=torch_device, dtype=torch.float
)
labels.append(target)
inputs['labels'] = labels
inputs["pixel_values"] = inputs["pixel_values"].cuda()
outputs = object_model(**inputs)
out_logits, out_bbox = outputs.logits, outputs.pred_boxes
batch_size, num_queries, num_labels = out_logits.shape
prob = out_logits.sigmoid()
all_scores = prob.reshape(batch_size, -1).to(out_logits.device)
all_indexes = torch.arange(num_queries * num_labels)[None].repeat(batch_size, 1).to(out_logits.device)
all_boxes = torch.div(all_indexes, out_logits.shape[2], rounding_mode="floor")
all_labels = all_indexes % out_logits.shape[2]
boxes = center_to_corners_format(out_bbox)
boxes = torch.gather(boxes, 1, all_boxes.unsqueeze(-1).repeat(1, 1, 4))
nms_threshold = 0.7
threshold = 0.3
results = []
for b in range(batch_size):
box = boxes[b]
score = all_scores[b]
lbls = all_labels[b]
# apply NMS
keep_inds = batched_nms(box, score, lbls, nms_threshold)[:100]
score = score[keep_inds]
lbls = lbls[keep_inds]
box = box[keep_inds]
results.append(
{
"scores": score[score > threshold],
"labels": lbls[score > threshold],
"boxes": box[score > threshold],
}
)
import matplotlib.pyplot as plt
# colors for visualization
COLORS = [[0.000, 0.447, 0.741], [0.850, 0.325, 0.098], [0.929, 0.694, 0.125],
[0.494, 0.184, 0.556], [0.466, 0.674, 0.188], [0.301, 0.745, 0.933]]
def plot_results(pil_img, scores, labels, boxes):
plt.figure(figsize=(16,10))
plt.imshow(pil_img)
ax = plt.gca()
colors = COLORS * 100
for score, label, (xmin, ymin, xmax, ymax),c in zip(scores.tolist(), labels.tolist(), boxes.tolist(), colors):
ax.add_patch(plt.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
fill=False, color=c, linewidth=3))
text = f'{object_model.config.id2label[label]}: {score:0.2f}'
ax.text(xmin, ymin, text, fontsize=15,
bbox=dict(facecolor='yellow', alpha=0.5))
plt.axis('off')
plt.show()
# postprocess model outputs
width, height = image.size
result = results[0]
plot_results(image, result['scores'], result['labels'], result['boxes'])
```
![스크린샷 2024-01-11 오후 10 01 31](https://github.com/huggingface/transformers/assets/34004152/0c5b62a5-19b9-4979-899f-8b58f75e7147)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
https://huggingface.co./superb-ai/yolov6n | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28448/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28448/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28447 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28447/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28447/comments | https://api.github.com/repos/huggingface/transformers/issues/28447/events | https://github.com/huggingface/transformers/pull/28447 | 2,076,391,809 | PR_kwDOCUB6oc5jywC5 | 28,447 | symbolic_trace: add past_key_values, llama, sdpa support | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28447). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,704 | 1,705 | 1,705 | COLLABORATOR | null | This PR:
* Allows using `transformers.utils.fx.symbolic_trace` with `past_key_values` inputs for some architectures (currently opt, llama)
* Adds llama support for symbolic_trace.
* Adds SDPA support for symbolic_trace.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28447/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28447/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28447",
"html_url": "https://github.com/huggingface/transformers/pull/28447",
"diff_url": "https://github.com/huggingface/transformers/pull/28447.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28447.patch",
"merged_at": 1705488654000
} |
https://api.github.com/repos/huggingface/transformers/issues/28446 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28446/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28446/comments | https://api.github.com/repos/huggingface/transformers/issues/28446/events | https://github.com/huggingface/transformers/issues/28446 | 2,076,195,029 | I_kwDOCUB6oc57wDjV | 28,446 | Failed to import transformers.models.transfo_xl.configuration_transfo_xl | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"TransfoXL was deprecated and is now in the legacy folder (see `/transformers/src/transformers/models/deprecated/transfo_xl`), as it is no longer maintained",
"> TransfoXL was deprecated and is now in the legacy folder see `/transformers/src/transformers/models/deprecated/transfo_xl` as it is no longer maintained\r\n\r\nAny workaround for above code to run? ",
"Just here because I am having this same issue right now :( ",
"Same here, i just did from transformers import Train\r\n\r\nOn Fri, Jan 12, 2024 at 9:07 AM RichardAragon ***@***.***>\r\nwrote:\r\n\r\n> Just here because I am having this same issue right now :(\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/28446#issuecomment-1888382637>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AE4LJNKV23HELVQMCBAHRILYOCVXZAVCNFSM6AAAAABBWGNETKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQOBYGM4DENRTG4>\r\n> .\r\n> You are receiving this because you authored the thread.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"` from transformers import TransfoXLForSequenceClassification` should help you. \r\ncc @ydshieh this is a regression and should throw a deprecated warning not an error! Can you have a look as you did the deprecation cycle! ",
"OK, taking look into this",
"@andysingal \r\n\r\nI am running\r\n\r\n```\r\nckpt = \"transfo-xl-wt103\"\r\n\r\nfrom transformers import AutoModelForSequenceClassification\r\n\r\nmodel = AutoModelForSequenceClassification.from_pretrained(ckpt)\r\n```\r\nand it works.\r\n\r\nCould you share the colab that can produce the issue?",
"I got the same error after I upgraded the transformers package. If you are downloading the files from a hugging face repo, can you try removing the local model cache files, and redownload them? That worked for me. ",
"following this issue...",
"@andysingal @sanalsprasad @nandakishorebellammuralidhar \r\n\r\nAs mentioned, I tried on colab and I am not able to reproduce the error.\r\n\r\nCould you provide your system information by running `transformers-cli env` (command) as well as a code snippet.\r\nOr you can try to reproduce it on colab.\r\n\r\nOtherwise, I'm afraid that I won't be able to help on this.",
"I tried the same processes again yesterday or the day before that threw\r\nthis error for me before. I did get the configuration error again, I had to\r\nuninstall transformers and install huggingface transformers, that fixed it\r\nthis time.\r\n\r\nThe use case was that I was trying to fine tune an already quantized model.\r\nIt was a model I had already fine tuned, and I wanted to fine tune it\r\nagain. If memory serves, that is the issue that brought me here too. I\r\nthink then I was attempting to merge two already merged models via mergekit.\r\n\r\nOn Mon, Jan 22, 2024, 6:26 AM Yih-Dar ***@***.***> wrote:\r\n\r\n> @andysingal <https://github.com/andysingal> @sanalsprasad\r\n> <https://github.com/sanalsprasad> @nandakishorebellammuralidhar\r\n> <https://github.com/nandakishorebellammuralidhar>\r\n>\r\n> As mentioned, I tried on colab and I am not able to reproduce the error.\r\n>\r\n> Could you provide your system information by running transformers-cli env\r\n> (command) as well as a code snippet.\r\n> Or you can try to reproduce it on colab.\r\n>\r\n> Otherwise, I'm afraid that I won't be able to help on this.\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/28446#issuecomment-1904115533>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/BA44S7PLUJZ3YNDOD6MUCRLYPZZKPAVCNFSM6AAAAABBWGNETKVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTSMBUGEYTKNJTGM>\r\n> .\r\n> You are receiving this because you commented.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"transformers.models.transfo_xl.configuration_transfo_xl is deprecated from transformers v.4.36\r\nso install version 4.35\r\n!pip install -q -U git+https://github.com/huggingface/[email protected]\r\nand restart colab kernel.",
"@kasiwoos it is deprecated, but it will continue to work. We just don't run any test against this model anymore and it won't be maintained.\r\n\r\nBut I can't reproduce the issue people reported here.",
"Same issue for me, @kasiwoos's fix worked. To reiterate, the issue for me was that if you are loading a fine-tuned llama 2 8bit quantized model from 2 weeks ago, it won't work with the latest transformers release.",
"@patruff Could you give more details on how to reproduce, please. That would be really helpful.",
"> @patruff Could you give more details on how to reproduce, please. That would be really helpful.\n\n\nSure, run this on a T4 in Colab with the latest transformers\n\n# name can be any 8bit model\nname='patruff/chucklesEFT1'\n\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nimport torch\n\nmodel_8bit = AutoModelForCausalLM.from_pretrained(name, device_map=\"auto\", load_in_8bit=True)\ntokenizer = AutoTokenizer.from_pretrained(name)",
"@patruff First thanks for sharing. I am still not able to reproduce however.\r\n\r\n`name='patruff/chucklesEFT1'` is a dataset, so I changed it to `name='patruff/toxic-llama2-7b-tuneEFT1'`.\r\n\r\nOn colab, it works (even if I upgrade `transformers` to v4.37).\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | null | NONE | null | ### System Info
Colab Notebook
### Who can help?
@ArthurZucker @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
model = AutoModelForSequenceClassification.from_pretrained(
TEACHER_MODEL,
problem_type="multi_label_classification",
num_labels=len(unique_labels),
id2label=id2label,
label2id=label2id
)
```
ERROR:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)
1352 self._objects = {} if extra_objects is None else extra_objects
-> 1353 self._name = name
1354 self._import_structure = import_structure
11 frames
[/usr/lib/python3.10/importlib/__init__.py](https://localhost:8080/#) in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
ModuleNotFoundError: No module named 'transformers.models.transfo_xl.configuration_transfo_xl'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
[<ipython-input-24-49d540f006ea>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model = AutoModel.from_pretrained(
2 TEACHER_MODEL,
3 problem_type="multi_label_classification",
4 num_labels=len(unique_labels),
5 id2label=id2label,
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
541
542 has_remote_code = hasattr(config, "auto_map") and cls.__name__ in config.auto_map
--> 543 has_local_code = type(config) in cls._model_mapping.keys()
544 trust_remote_code = resolve_trust_remote_code(
545 trust_remote_code, pretrained_model_name_or_path, has_local_code, has_remote_code
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in keys(self)
755
756 def keys(self):
--> 757 mapping_keys = [
758 self._load_attr_from_module(key, name)
759 for key, name in self._config_mapping.items()
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in <listcomp>(.0)
756 def keys(self):
757 mapping_keys = [
--> 758 self._load_attr_from_module(key, name)
759 for key, name in self._config_mapping.items()
760 if key in self._model_mapping.keys()
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in _load_attr_from_module(self, model_type, attr)
752 if module_name not in self._modules:
753 self._modules[module_name] = importlib.import_module(f".{module_name}", "transformers.models")
--> 754 return getattribute_from_module(self._modules[module_name], attr)
755
756 def keys(self):
[/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py](https://localhost:8080/#) in getattribute_from_module(module, attr)
696 if isinstance(attr, tuple):
697 return tuple(getattribute_from_module(module, a) for a in attr)
--> 698 if hasattr(module, attr):
699 return getattr(module, attr)
700 # Some of the mappings have entries model_type -> object of another model type. In that case we try to grab the
[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in __getattr__(self, name)
1341 super().__init__(name)
1342 self._modules = set(import_structure.keys())
-> 1343 self._class_to_module = {}
1344 for key, values in import_structure.items():
1345 for value in values:
[/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)
1353 self._name = name
1354 self._import_structure = import_structure
-> 1355
1356 # Needed for autocompletion in an IDE
1357 def __dir__(self):
RuntimeError: Failed to import transformers.models.transfo_xl.configuration_transfo_xl because of the following error (look up to see its traceback):
No module named 'transformers.models.transfo_xl.configuration_transfo_xl'
```
### Expected behavior
run smoothly | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28446/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28446/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28445 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28445/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28445/comments | https://api.github.com/repos/huggingface/transformers/issues/28445/events | https://github.com/huggingface/transformers/pull/28445 | 2,076,151,831 | PR_kwDOCUB6oc5jx69l | 28,445 | When using npu to reproduce the training results, `torch.manual_seed` is also needed | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | null | CONTRIBUTOR | null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As per title.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @muellerzr | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28445/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28445/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28445",
"html_url": "https://github.com/huggingface/transformers/pull/28445",
"diff_url": "https://github.com/huggingface/transformers/pull/28445.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28445.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28444 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28444/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28444/comments | https://api.github.com/repos/huggingface/transformers/issues/28444/events | https://github.com/huggingface/transformers/issues/28444 | 2,076,025,396 | I_kwDOCUB6oc57vaI0 | 28,444 | Attribute Error: 'GenerationConfig' object has no attribute 'lang_to_id' | {
"login": "kimbaang",
"id": 154123661,
"node_id": "U_kgDOCS-9jQ",
"avatar_url": "https://avatars.githubusercontent.com/u/154123661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kimbaang",
"html_url": "https://github.com/kimbaang",
"followers_url": "https://api.github.com/users/kimbaang/followers",
"following_url": "https://api.github.com/users/kimbaang/following{/other_user}",
"gists_url": "https://api.github.com/users/kimbaang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kimbaang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kimbaang/subscriptions",
"organizations_url": "https://api.github.com/users/kimbaang/orgs",
"repos_url": "https://api.github.com/users/kimbaang/repos",
"events_url": "https://api.github.com/users/kimbaang/events{/privacy}",
"received_events_url": "https://api.github.com/users/kimbaang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey @kimbaang 👋 \r\n\r\nThis issue (https://github.com/huggingface/transformers/issues/25084) seems related -- can you check if the solutions presented there help with your problem? 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'nvme', 'offload_optimizer_nvme_path': '/nvme', 'offload_param_device': 'nvme', 'offload_param_nvme_path': '/nvme', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
class GenerateModel(tf.Module):
def __init__(self, model):
super(GenerateModel, self).__init__()
self.model = model
@tf.function(
# shouldn't need static batch size, but throws exception without it (needs to be fixed)
input_signature=[
tf.TensorSpec((1, 80, 3000), tf.float32, name="input_features"),
],
)
def serving(self, input_features):
outputs = self.model.generate(
inputs=input_features,
task="transcribe",
language="<|ko|>",
max_new_tokens=450, # change as needed
return_dict_in_generate=True,
)
return {"sequences": outputs["sequences"]}
```
I have provided the `task` and `language` parameters to the `self.model.generate()` function.
Below is a normal script that converts the `whisper-tiny` model into a tflite version.
```python
model = TFWhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
generate_model = GenerateModel(model=model)
tf.saved_model.save(
generate_model,
args.from_huggingface_dir,
signatures={"serving_default": generate_model.serving},
)
# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model(args.from_huggingface_dir)
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS, # enable TensorFlow ops.
]
# Perform full integer 8-bit quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
# Save the model
with open(args.to_tflite_path, "wb") as f:
f.write(tflite_model)
# Loading dataset
ds = datasets.load_from_disk(args.dataset)
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny", predict_timestamps=True)
processor = WhisperProcessor(feature_extractor, tokenizer)
inputs = feature_extractor(
ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="tf"
)
input_features = inputs.input_features
labels = ds[0]["text"]
# loaded model... now with generate!
interpreter = tf.lite.Interpreter(args.to_tflite_path)
tflite_generate = interpreter.get_signature_runner()
generated_ids = tflite_generate(input_features=input_features)["sequences"]
print("label: ", labels)
print("prediction: ", generated_ids)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
But the following error has occurred:
```sh
outputs = self.model.generate(
File "/home/chris/anaconda3/envs/pt212-cuda122/lib/python3.11/site-packages/transformers/models/whisper/modeling_tf_whisper.py", line 1486, in generate *
if generation_config.language in generation_config.lang_to_id.keys():
AttributeError: 'GenerationConfig' object has no attribute 'lang_to_id'
```
### Expected behavior
I wanted to make sure the `task` and `language` parameters are passed to the function and the corresponding tokens are added prior to token generation, so as to generate Korean transcripts successfully. The expected order is as below:
"<|startoftranscript|>": 50258 -> "<|ko|>": 50264 -> "<|transcribe|>": 50359 -> "<|notimestamps|>": 50363,
However, the 'lang_to_id' attribute doesn't seem to be included in the 'GenerationConfig' object, which raises the error shown above. Am I missing something here?
"url": "https://api.github.com/repos/huggingface/transformers/issues/28444/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28444/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28443 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28443/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28443/comments | https://api.github.com/repos/huggingface/transformers/issues/28443/events | https://github.com/huggingface/transformers/pull/28443 | 2,076,012,129 | PR_kwDOCUB6oc5jxcqZ | 28,443 | Adding [T5/MT5/UMT5]ForTokenClassification | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks @ArthurZucker. \r\n\r\nThe CIs are green now. Please take a look when you get a chance. Thanks!",
"Yes! Will review 🤗 ",
"Just rebasing the changes on top of the latest head",
"Thanks for adding the tests! It seems to have caused you a lot of extra work 😓 so let's keep it as is. A lot of tests were missing\r\n",
"Thanks. \r\n\r\nRan all 3 edited tests locally with RUN_SLOW=1 and they all passed. Also rebased off of main and re-pushed to do another run of CI checks",
"Test failure from tests_exotic_models seems unrelated:\r\n\r\nE Failed to import NATTEN's CPP backend. This could be due to an invalid/incomplete install. Please uninstall NATTEN (pip uninstall natten) and re-install with the correct torch build: shi-labs.com/natten\r\n\r\nERROR tests/models/dinat/test_modeling_dinat.py - RuntimeError: Failed to import transformers.models.dinat.modeling_dinat because of the following error (look up to see its traceback):\r\nERROR tests/models/nat/test_modeling_nat.py - RuntimeError: Failed to import transformers.models.nat.modeling_nat because of the following error (look up to see its traceback):\r\n\r\nIf we need the tests to pass, I can do another merge to upstream and re-push to trigger another test run.",
"Don't worry it is unrelated! 🤗 we can merge without ",
"> this might slow the CI a tad bit 😉\r\n\r\nbecause of the new model classes (which is of course fine) or anything else?\r\n",
"Copied from of the full test suite of T5! "
] | 1,704 | 1,706 | 1,706 | CONTRIBUTOR | null | # What does this PR do?
Adding [T5/MT5/UMT5]ForTokenClassification. See discussion [here](https://github.com/huggingface/transformers/pull/26683#issuecomment-1874899361).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@ArthurZucker
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28443/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28443",
"html_url": "https://github.com/huggingface/transformers/pull/28443",
"diff_url": "https://github.com/huggingface/transformers/pull/28443.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28443.patch",
"merged_at": 1706756029000
} |
https://api.github.com/repos/huggingface/transformers/issues/28442 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28442/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28442/comments | https://api.github.com/repos/huggingface/transformers/issues/28442/events | https://github.com/huggingface/transformers/pull/28442 | 2,075,968,215 | PR_kwDOCUB6oc5jxTKh | 28,442 | Refine xpu device setting | {
"login": "zhuhong61",
"id": 95205772,
"node_id": "U_kgDOBay5jA",
"avatar_url": "https://avatars.githubusercontent.com/u/95205772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuhong61",
"html_url": "https://github.com/zhuhong61",
"followers_url": "https://api.github.com/users/zhuhong61/followers",
"following_url": "https://api.github.com/users/zhuhong61/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuhong61/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuhong61/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuhong61/subscriptions",
"organizations_url": "https://api.github.com/users/zhuhong61/orgs",
"repos_url": "https://api.github.com/users/zhuhong61/repos",
"events_url": "https://api.github.com/users/zhuhong61/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuhong61/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | We'd like to revise some XPU-related logic in the device setting.
1. For the MULTI_XPU case, we should set the device according to the local rank, instead of to xpu:0 only.
2. If the user sets a DeepSpeed config, the DeepSpeed path should be triggered, but the current logic triggers the XPU DDP path instead of DeepSpeed. We adjust the order and put DeepSpeed ahead of XPU.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28442/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28442",
"html_url": "https://github.com/huggingface/transformers/pull/28442",
"diff_url": "https://github.com/huggingface/transformers/pull/28442.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28442.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28441 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28441/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28441/comments | https://api.github.com/repos/huggingface/transformers/issues/28441/events | https://github.com/huggingface/transformers/issues/28441 | 2,075,659,577 | I_kwDOCUB6oc57uA05 | 28,441 | Proposal for Adding a New Scheduler Strategy for Language Model Pretraining | {
"login": "gmftbyGMFTBY",
"id": 27548710,
"node_id": "MDQ6VXNlcjI3NTQ4NzEw",
"avatar_url": "https://avatars.githubusercontent.com/u/27548710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmftbyGMFTBY",
"html_url": "https://github.com/gmftbyGMFTBY",
"followers_url": "https://api.github.com/users/gmftbyGMFTBY/followers",
"following_url": "https://api.github.com/users/gmftbyGMFTBY/following{/other_user}",
"gists_url": "https://api.github.com/users/gmftbyGMFTBY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmftbyGMFTBY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmftbyGMFTBY/subscriptions",
"organizations_url": "https://api.github.com/users/gmftbyGMFTBY/orgs",
"repos_url": "https://api.github.com/users/gmftbyGMFTBY/repos",
"events_url": "https://api.github.com/users/gmftbyGMFTBY/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmftbyGMFTBY/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Good ISSUE! \r\nSimilar to the `learning_rate_target` in trlx config: https://trlx.readthedocs.io/en/docs/configs.html#trlx.data.configs.TrainConfig\r\n\r\n```\r\nlearning_rate_init (float) – Initial learning rate after ramp up\r\nlearning_rate_target (float) – Target learning rate after decay\r\n```\r\n\r\nNow the minimum learning rate cannot be configured in the transformers, it is hard-coded to 0, such as cosine_schedule or linear_schedule\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/optimization.py#L140\r\n```\r\nreturn max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)))\r\n```\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/optimization.py#L104\r\n```\r\nreturn max(0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)))\r\n```\r\n",
"### Temporary Implementation\r\n\r\n```python\r\nimport math\r\nimport transformers\r\n\r\nlr_decay_steps = 1500\r\nmin_lr_ratio = 0.1\r\n\r\n\r\ndef _get_cosine_schedule_with_warmup_lr_lambda(\r\n current_step: int, *, num_warmup_steps: int, num_training_steps: int, num_cycles: float\r\n):\r\n if current_step < num_warmup_steps:\r\n return float(current_step) / float(max(1, num_warmup_steps))\r\n if current_step > lr_decay_steps:\r\n return min_lr_ratio\r\n progress = float(current_step - num_warmup_steps) / float(max(1, lr_decay_steps - num_warmup_steps))\r\n coefficient = max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)))\r\n return min_lr_ratio + coefficient * (1.0 - min_lr_ratio)\r\n\r\n\r\ndef add_lr_decay_limit_for_cosine_schedule():\r\n transformers.optimization._get_cosine_schedule_with_warmup_lr_lambda = _get_cosine_schedule_with_warmup_lr_lambda\r\n\r\n```\r\n\r\nOur implementation builds upon the cosine scheduler provided by `transformers.optimization`. We have introduced two new parameters:\r\n\r\n1. **`lr_decay_steps`**: This parameter signifies the maximum number of iterations for learning rate decay.\r\n \r\n2. 
**`min_lr_ratio`**: This parameter represents the proportion of the constant learning rate in comparison to `TrainingArguments.learning_rate` after reaching the specified `lr_decay_steps`.\r\n\r\nThe practical impact of the aforementioned scheduler is illustrated in the following figure.\r\n<img width=\"1637\" alt=\"Screenshot 2024-01-11 at 09 25 31\" src=\"https://github.com/huggingface/transformers/assets/91840504/4908e3ee-7fe9-4542-854b-21f5a3821f03\">\r\n\r\nAlternatively, we can adopt the following implementation, which features fewer parameters and a slightly different curve.\r\n```python\r\ndef _get_cosine_schedule_with_warmup_lr_lambda(\r\n current_step: int, *, num_warmup_steps: int, num_training_steps: int, num_cycles: float\r\n):\r\n if current_step < num_warmup_steps:\r\n return float(current_step) / float(max(1, num_warmup_steps))\r\n progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))\r\n return max(min_lr_ratio, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)))\r\n```\r\n",
"cc @muellerzr sounds good no? \r\n",
"@muellerzr Hello, could you please review our issue and contribution?"
] | 1,704 | 1,705 | null | CONTRIBUTOR | null | ### Feature request
We propose the addition of a new, widely adopted scheduler strategy for language model pretraining to the Transformers repository. Upon reviewing the current schedulers available in the [Transformers optimization module](https://github.com/huggingface/transformers/blob/main/src/transformers/optimization.py#L338), there appears to be no out-of-the-box implementation for one particular type of scheduler. This scheduler is prevalent in recent pre-training models and features warmup decay, but importantly, it also maintains a limited minimum learning rate after the maximum iteration steps.
This scheduling approach has seen extensive use in several prominent pre-trained large language models (LLMs), including:
1. TinyLLaMA: Implementation details can be found in their [pretraining script](https://github.com/jzhang38/TinyLlama/blob/main/pretrain/tinyllama.py#L375).
2. MindLLM: Described in their research paper, available at [arXiv:2310.15777](https://arxiv.org/pdf/2310.15777.pdf).
3. trlx: Utilized in the TRLx framework, as seen in their [GitHub repository](https://github.com/CarperAI/trlx/tree/main).
4. ...
The introduction of this scheduler into the Transformers library would not only complete the suite of existing scheduling strategies but also provide practitioners with a tool that's already proven its efficacy in recent LLM training methodologies. I believe its inclusion will be beneficial for the community, fostering more efficient and effective pretraining processes.
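For concreteness, here is a minimal sketch of the schedule shape we have in mind (the function and argument names below are illustrative, not an existing Transformers API): linear warmup, cosine decay, and a floor on the learning-rate multiplier.

```python
import math


def cosine_lr_multiplier(current_step, *, num_warmup_steps, num_training_steps,
                         min_lr_ratio=0.1, num_cycles=0.5):
    # Linear warmup from 0 to 1 over the first num_warmup_steps.
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    # Cosine decay from 1 toward 0 over the remaining steps.
    progress = float(current_step - num_warmup_steps) / float(
        max(1, num_training_steps - num_warmup_steps)
    )
    cosine = 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))
    # Clamp at min_lr_ratio so the learning rate never decays below that fraction.
    return max(min_lr_ratio, cosine)
```

With `min_lr_ratio=0.1`, the learning rate would warm up to its peak, decay along a cosine curve, and then hold at 10% of the peak instead of reaching zero.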
### Motivation
This issue aims to introduce a novel scheduler into the current Transformers library. The proposed scheduler combines warmup decay with a distinctive feature: a constrained minimum learning rate beyond the maximum iteration steps.
### Your contribution
Yes, we could submit a PR as soon as possible if any huggingface members think this contribution is necessary. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28441/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28441/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28440 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28440/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28440/comments | https://api.github.com/repos/huggingface/transformers/issues/28440/events | https://github.com/huggingface/transformers/issues/28440 | 2,074,813,346 | I_kwDOCUB6oc57qyOi | 28,440 | Adding mixtral attention_bias in style of llama modeling | {
"login": "Moreh-LeeJunhyeok",
"id": 99154015,
"node_id": "U_kgDOBej4Xw",
"avatar_url": "https://avatars.githubusercontent.com/u/99154015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moreh-LeeJunhyeok",
"html_url": "https://github.com/Moreh-LeeJunhyeok",
"followers_url": "https://api.github.com/users/Moreh-LeeJunhyeok/followers",
"following_url": "https://api.github.com/users/Moreh-LeeJunhyeok/following{/other_user}",
"gists_url": "https://api.github.com/users/Moreh-LeeJunhyeok/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Moreh-LeeJunhyeok/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moreh-LeeJunhyeok/subscriptions",
"organizations_url": "https://api.github.com/users/Moreh-LeeJunhyeok/orgs",
"repos_url": "https://api.github.com/users/Moreh-LeeJunhyeok/repos",
"events_url": "https://api.github.com/users/Moreh-LeeJunhyeok/events{/privacy}",
"received_events_url": "https://api.github.com/users/Moreh-LeeJunhyeok/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"What is the status of this issue? Is it in progress? Thanks in advance ",
"Hi @Moreh-LeeJunhyeok, thanks for opening an issue! \r\n\r\nNote - making experiments easier isn't in and of itself enough of a reason to add something to a model. However, as there was the equivalent added to Llama, this seems reasonable. Could you open a PR and we can review your proposed changes? "
] | 1,704 | 1,705 | null | NONE | null | ### Feature request
### System Info
transformers version: 4.36.2
### Who can help?
don't have a clue about this
### Information
Referring to the Llama modeling code, I want to add an attention bias option to the Mixtral model and configuration for flexibility in experiments.
If this change seems appropriate, I will make a PR for it.
### Expected behavior
After the changes, an attention bias option is added to the model config.
It can be controlled as in the example below (the default config value is false):
```
from transformers import AutoConfig
config = AutoConfig.from_pretrained("variant_of_mixtral")
config.attention_bias = True
```
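For illustration, here is a rough sketch of how the flag could be consumed on the modeling side, mirroring the Llama approach (the class below is hypothetical, not the actual Mixtral code):

```python
import torch.nn as nn


class AttentionProjectionsSketch(nn.Module):
    """Hypothetical sketch: q/k/v/o projections whose bias term is toggled
    by a config flag, in the style of LlamaAttention's config.attention_bias."""

    def __init__(self, hidden_size: int, num_heads: int, attention_bias: bool = False):
        super().__init__()
        head_dim = hidden_size // num_heads
        # bias=attention_bias switches the additive bias on every projection
        self.q_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=attention_bias)
        self.k_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=attention_bias)
        self.v_proj = nn.Linear(hidden_size, num_heads * head_dim, bias=attention_bias)
        self.o_proj = nn.Linear(num_heads * head_dim, hidden_size, bias=attention_bias)
```

Keeping the default at `False` preserves compatibility with existing Mixtral checkpoints, which have no bias tensors in these projections.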
### Motivation
Referring to the Llama modeling code, I want to add an attention bias option to the Mixtral model and configuration for flexibility in experiments.
### Your contribution
I have created a fix branch. I can make a PR for it.
refer to [link](https://github.com/Moreh-LeeJunhyeok/transformers/tree/mixtral_add_attention_bias) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28440/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28439 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28439/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28439/comments | https://api.github.com/repos/huggingface/transformers/issues/28439/events | https://github.com/huggingface/transformers/pull/28439 | 2,074,806,467 | PR_kwDOCUB6oc5jtUbo | 28,439 | Task-specific pipeline init args | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for this work ! I don't see the PR's documentation though, would be nice to take a look at the rendered stuff.\r\n\r\nAlso would this be interesting @amyeroberts : https://github.com/huggingface/doc-builder/issues/465 ? (I've seen a discrepancy between docstring and typing in safetensors, and thought this might be interesting in transformers if done in core specific places potentially).",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28439). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@pcuenca Thank you for your review! I believe I've addressed all of your comments. \r\n\r\n@Narsil We finally have the docs 🥳 Re typing and docstring checks, I'm definitely pro when the types are given. We currently have something which does this in the check_docstrings - [specifically here](https://github.com/huggingface/transformers/blob/708b19eb093bcfb4efcfa229925b249397c2bd5e/utils/check_docstrings.py#L976). However, all of the pipeline classes are ignored because they accept arguments which aren't documented e.g. most accept `tokenizer` even if it isn't used. One possibility is explicitly setting the documented kwargs in the `__init__` and passing the rest through `**kwargs` in this or a follow up PR. WDYT? "
] | 1,704 | 1,706 | 1,706 | COLLABORATOR | null | # What does this PR do?
Related to: https://github.com/huggingface/transformers/pull/28439#issue-2074806467
Adds a function to build each pipeline's init arguments based on the processing objects it accepts: tokenizer, image processor, feature extractor.
This replaces `PIPELINE_INIT_ARGS` for the task-specific arguments so as to avoid e.g. `tokenizer` being listed as an input for some models when it's not correct.
Removes `task` as an input argument for the task-specific pipelines (the task is already specified).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28439/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28439",
"html_url": "https://github.com/huggingface/transformers/pull/28439",
"diff_url": "https://github.com/huggingface/transformers/pull/28439.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28439.patch",
"merged_at": 1706633697000
} |
https://api.github.com/repos/huggingface/transformers/issues/28438 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28438/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28438/comments | https://api.github.com/repos/huggingface/transformers/issues/28438/events | https://github.com/huggingface/transformers/issues/28438 | 2,074,777,327 | I_kwDOCUB6oc57qpbv | 28,438 | Multi-worker HF training using trainer API in torch-xla result in too many graph compilations after saving checkpoint (transformers>=4.35) | {
"login": "jeffhataws",
"id": 56947987,
"node_id": "MDQ6VXNlcjU2OTQ3OTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/56947987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffhataws",
"html_url": "https://github.com/jeffhataws",
"followers_url": "https://api.github.com/users/jeffhataws/followers",
"following_url": "https://api.github.com/users/jeffhataws/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffhataws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffhataws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffhataws/subscriptions",
"organizations_url": "https://api.github.com/users/jeffhataws/orgs",
"repos_url": "https://api.github.com/users/jeffhataws/repos",
"events_url": "https://api.github.com/users/jeffhataws/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffhataws/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,706 | 1,706 | CONTRIBUTOR | null | ### System Info
transformers>=4.35
Neuron SDK 2.15 with torch-neuronx 1.13
### Who can help?
@muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
(Duplicate of https://github.com/aws-neuron/aws-neuron-sdk/issues/813)
I followed [PyTorch Neuron for Trainium Hugging Face BERT MRPC task finetuning using Hugging Face Trainer API](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.html#torch-hf-bert-finetune) to fine-tune BERT. I ran run_2w.sh and see the following behavior where it runs normally until the first checkpoint is saved, but then starts doing compilation for every step (I changed save_steps option in run_2w.sh to 10 steps in order to trigger the issue faster):
```
[INFO|trainer.py:1712] 2024-01-09 17:04:08,045 >> ***** Running training *****
[INFO|trainer.py:1713] 2024-01-09 17:04:08,045 >> Num examples = 1,840
[INFO|trainer.py:1714] 2024-01-09 17:04:08,045 >> Num Epochs = 5
[INFO|trainer.py:1715] 2024-01-09 17:04:08,045 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1718] 2024-01-09 17:04:08,045 >> Total train batch size (w. parallel, distributed & accumulation) = 16
[INFO|trainer.py:1719] 2024-01-09 17:04:08,045 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1720] 2024-01-09 17:04:08,045 >> Total optimization steps = 1,150 [INFO|trainer.py:1721] 2024-01-09 17:04:08,045 >> Number of trainable parameters = 109,483,778
0%| | 0/1150 [00:00<?, ?it/s]2024-01-09 17:04:08.000173: 140637 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:08.000175: 140637 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_1650
6334326618155050+abb26765/model.neff. Exiting with a successfully compiled graph.
0%| | 1/1150 [00:00<04:53, 3.92it/s]2024-01-09 17:04:09.000508: 140742 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache 2024-01-09 17:04:09.000603: 140742 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_2044
823947559839528+abb26765/model.neff. Exiting with a successfully compiled graph.
0%| | 2/1150 [00:02<29:23, 1.54s/it]2024-01-09 17:04:13.000328: 140780 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:13.000442: 140780 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_7850
734058944619683+abb26765/model.neff. Exiting with a successfully compiled graph.
1%| | 10/1150 [00:09<08:40, 2.19it/s][INFO|trainer.py:2859] 2024-01-09 17:04:17,051 >> Saving model checkpoint to /tmp/mrpc/tmp-checkpoint-10
(Done saving checkpoint, then compilation every step below)
2024-01-09 17:04:17.000789: 141260 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:17.000873: 141260 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_2523
922307180626946+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:20.000215: 141270 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:20.000216: 141270 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_6208
462474369064908+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:21.000202: 141279 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:21.000282: 141279 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_1498
3430005009285767+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:23.000265: 141288 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:23.000266: 141288 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_3356
031905174227108+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:24.000025: 141297 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:24.000104: 141297 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_5950
234423484734321+abb26765/model.neff. Exiting with a successfully compiled graph.
2024-01-09 17:04:26.000063: 141306 INFO ||NEURON_CACHE||: Compile cache path: /var/tmp/neuron-compile-cache
2024-01-09 17:04:26.000064: 141306 INFO ||NEURON_CC_WRAPPER||: Using a cached neff at /var/tmp/neuron-compile-cache/neuronxcc-2.11.0.35+4f5279863/MODULE_1050
0036830841255848+abb26765/model.neff. Exiting with a successfully compiled graph.
(Compilation repeated many times, and eventually run out of device memory in Neuron runtime)
```
This issue starts in transformers version 4.35.
### Expected behavior
We should see training complete normally with few torch-xla compilations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28438/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28437 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28437/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28437/comments | https://api.github.com/repos/huggingface/transformers/issues/28437/events | https://github.com/huggingface/transformers/pull/28437 | 2,074,758,258 | PR_kwDOCUB6oc5jtJze | 28,437 | Fix load correct tokenizer in Mixtral model documentation | {
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,705 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
There is an incorrect, non-existent tokenizer linked in the Mixtral documentation. https://huggingface.co./docs/transformers/main/en/model_doc/mixtral#usage-tips
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Models:
- text models: @ArthurZucker and @younesbelkada
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28437/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28437",
"html_url": "https://github.com/huggingface/transformers/pull/28437",
"diff_url": "https://github.com/huggingface/transformers/pull/28437.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28437.patch",
"merged_at": 1704906547000
} |
https://api.github.com/repos/huggingface/transformers/issues/28436 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28436/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28436/comments | https://api.github.com/repos/huggingface/transformers/issues/28436/events | https://github.com/huggingface/transformers/pull/28436 | 2,074,723,389 | PR_kwDOCUB6oc5jtCKF | 28,436 | Add qwen2 | {
"login": "JustinLin610",
"id": 27664428,
"node_id": "MDQ6VXNlcjI3NjY0NDI4",
"avatar_url": "https://avatars.githubusercontent.com/u/27664428?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JustinLin610",
"html_url": "https://github.com/JustinLin610",
"followers_url": "https://api.github.com/users/JustinLin610/followers",
"following_url": "https://api.github.com/users/JustinLin610/following{/other_user}",
"gists_url": "https://api.github.com/users/JustinLin610/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JustinLin610/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JustinLin610/subscriptions",
"organizations_url": "https://api.github.com/users/JustinLin610/orgs",
"repos_url": "https://api.github.com/users/JustinLin610/repos",
"events_url": "https://api.github.com/users/JustinLin610/events{/privacy}",
"received_events_url": "https://api.github.com/users/JustinLin610/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Feel free to rebase on main as well if some CIs are unrelated to your PR, and ping me whenever for a final review! 🤗 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28436). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks a lot for this pr and bearing with me! 🤗",
"Qwen1.5-0.5B-Chat loses output.weight after `from_pretrained` and `save_pretrained`\r\n![image](https://github.com/huggingface/transformers/assets/99069487/b0d7db55-d461-47c6-9fca-f32174578c7b)\r\n![image](https://github.com/huggingface/transformers/assets/99069487/7d65f395-e44e-43a1-af7e-d2028fac1d49)\r\n"
] | 1,704 | 1,707 | 1,705 | CONTRIBUTOR | null | # Adding Qwen2
This PR adds support for the upcoming Qwen2 models. For information about Qwen, please visit https://github.com/QwenLM/Qwen. @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28436/reactions",
"total_count": 5,
"+1": 5,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28436/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28436",
"html_url": "https://github.com/huggingface/transformers/pull/28436",
"diff_url": "https://github.com/huggingface/transformers/pull/28436.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28436.patch",
"merged_at": 1705503742000
} |
https://api.github.com/repos/huggingface/transformers/issues/28435 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28435/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28435/comments | https://api.github.com/repos/huggingface/transformers/issues/28435/events | https://github.com/huggingface/transformers/issues/28435 | 2,074,690,827 | I_kwDOCUB6oc57qUUL | 28,435 | Skip some weights for load_in_8bit and keep them as fp16/32? | {
"login": "gregor-ge",
"id": 7710563,
"node_id": "MDQ6VXNlcjc3MTA1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7710563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gregor-ge",
"html_url": "https://github.com/gregor-ge",
"followers_url": "https://api.github.com/users/gregor-ge/followers",
"following_url": "https://api.github.com/users/gregor-ge/following{/other_user}",
"gists_url": "https://api.github.com/users/gregor-ge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gregor-ge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gregor-ge/subscriptions",
"organizations_url": "https://api.github.com/users/gregor-ge/orgs",
"repos_url": "https://api.github.com/users/gregor-ge/repos",
"events_url": "https://api.github.com/users/gregor-ge/events{/privacy}",
"received_events_url": "https://api.github.com/users/gregor-ge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Is this what you are looking for:\r\nhttps://github.com/huggingface/transformers/blob/a3b00030c96be4c66e818cf9a22040dbe41006d8/src/transformers/modeling_utils.py#L1165\r\n",
"Not quite, no, but it is similar. `_keep_in_fp32_modules ` is set on the library side (e.g. for the T5 modules) and it also is limited to fp32. I am looking for a solution similar to the initial suggestion in #20683, that is for example `Blip2ForConditionalGeneration.from_pretrained(..., load_in_8bit=True, skip_for_8bit_modules=[\"qformer\", \"vision_encoder\", \"language_projection\"])` which loads those modules in the checkpoint format (fp16, fp32, bf16, ...) without quantization.\r\n\r\nI think the following could work as a (dirty) way of implementing what I want with it though but I have to check later:\r\n```python\r\nfrom transformers import Blip2ForConditionalGeneration\r\n# add modules to list so they are skipped for quantization\r\nBlip2ForConditionalGeneration._keep_in_fp32_modules = [\"qformer\", \"vision_encoder\", \"language_projection\"]\r\nmodel = Blip2ForConditionalGeneration.from_pretrained(..., load_in_8bit=True)\r\n\r\n# manually cast back to fp16 if wanted\r\nBlip2ForConditionalGeneration._keep_in_fp32_modules = []\r\nmodel.qformer = model.qformer.half()\r\nmodel.vision_encoder= model.vision_encoder.half()\r\n...\r\n```",
"I dug around in `load_pretrained()` and realized there *is* a config option `quantization_config.llm_int8_skip_modules` [here](https://github.com/huggingface/transformers/blob/a3b00030c96be4c66e818cf9a22040dbe41006d8/src/transformers/modeling_utils.py#L3510) which seems to do exactly what I want. I will test this out on the weekend and report back. One thing I noticed is that when you set this parameter, you also have to manually add modules like `lm_head`, which are otherwise skipped by default.",
"Yes, it works just as I want. It also works for int4. The parameter is documented [here](https://huggingface.co./docs/transformers/main_classes/quantization#transformers.BitsAndBytesConfig.llm_int8_skip_modules), which I seem to have missed in my search before.\r\n\r\nFor posterity, the following snippet is how to use it:\r\n```python\r\nfrom transformers import Blip2ForConditionalGeneration\r\nskip_int8_modules = [\"lm_head\", \"vision_encoder\", \"language_projection\", \"qformer\"] if skip_vision_int8 else None\r\nself.model = Blip2ForConditionalGeneration.from_pretrained(\r\n    checkpoint,\r\n    load_in_8bit=True,\r\n    llm_int8_skip_modules=skip_int8_modules\r\n)\r\n```\r\n\r\nThanks for helping me!"
] | 1,704 | 1,705 | 1,705 | NONE | null | ### Feature request
Hello,
I am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.
### Motivation
My motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 16 bit because I notice performance degradations with CLIP in 8 bit and also want to be able to train this part without LoRA.
As far as I can see from the documentation, the existing issues, and searching Google (both for this repo and for bitsandbytes), there is currently no way to do this.
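To make the idea concrete, here is a toy sketch of the module-name matching such a skip list implies (the names `vision_encoder`/`qformer` mirror the BLIP-2 example above; this is not the actual bitsandbytes integration, which replaces `nn.Linear` layers in place):

```python
import torch.nn as nn

class ToyVLM(nn.Module):
    # Minimal stand-in for a vision-language model.
    def __init__(self):
        super().__init__()
        self.vision_encoder = nn.Linear(4, 4)
        self.qformer = nn.Linear(4, 4)
        self.language_model = nn.Linear(4, 4)

skip_modules = ["vision_encoder", "qformer"]
model = ToyVLM()

# Only linear layers whose qualified name does not match the skip list
# would be handed to the quantizer; the rest keep their original dtype.
to_quantize = [
    name
    for name, module in model.named_modules()
    if isinstance(module, nn.Linear) and not any(s in name for s in skip_modules)
]
print(to_quantize)  # ['language_model']
```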
### Your contribution
I can in theory help implement something like this but I don't know where and how in the code this should be done. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28435/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28434 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28434/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28434/comments | https://api.github.com/repos/huggingface/transformers/issues/28434/events | https://github.com/huggingface/transformers/issues/28434 | 2,074,395,530 | I_kwDOCUB6oc57pMOK | 28,434 | Llama2 inference in bfloat16 | {
"login": "JeevanBhoot",
"id": 64039772,
"node_id": "MDQ6VXNlcjY0MDM5Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/64039772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JeevanBhoot",
"html_url": "https://github.com/JeevanBhoot",
"followers_url": "https://api.github.com/users/JeevanBhoot/followers",
"following_url": "https://api.github.com/users/JeevanBhoot/following{/other_user}",
"gists_url": "https://api.github.com/users/JeevanBhoot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JeevanBhoot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JeevanBhoot/subscriptions",
"organizations_url": "https://api.github.com/users/JeevanBhoot/orgs",
"repos_url": "https://api.github.com/users/JeevanBhoot/repos",
"events_url": "https://api.github.com/users/JeevanBhoot/events{/privacy}",
"received_events_url": "https://api.github.com/users/JeevanBhoot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @JeevanBhoot \r\nThis is not supported in PyTorch, when passing input_ids they need to stay in `torch.int64`, the only way to run inference in bf16 is the following: \r\n```diff\r\nmodel_path = \"meta-llama/Llama-2-7b-chat-hf\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_path, device_map=\"auto\", trust_remote_code=False, revision=\"main\", torch_dtype=torch.bfloat16).to(\"cuda\")\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)\r\n\r\n- input_ids = tokenizer(\"Tell me an interesting fact!\", return_tensors=\"pt\").input_ids.to(\"cuda\").to(torch.bfloat16)\r\n+ input_ids = tokenizer(\"Tell me an interesting fact!\", return_tensors=\"pt\").input_ids.to(\"cuda\")\r\noutput = model.generate(input_ids)\r\n```",
"@younesbelkada When input_ids are in `torch.int64`, the output from the model is also `torch.int64`. Is there no way to have the model activations and outputs be `torch.bfloat16`?",
"@JeevanBhoot if you pass `torch.LongTensor` to a `nn.Embedding` layer the output should be a floating point. If you load your model in bf16 it should be `int64` -> `bfloat16`, you can inspect that by doing `model.get_input_embeddings()(input_ids).dtype`",
"@younesbelkada Ok, I can confirm that `model.get_input_embeddings()(input_ids).dtype` returns `bfloat16` but `output.dtype` is `int64`. \r\n\r\nIs the final output of the model cast from `bfloat16` to `int64`, or is the dtype changing at some point in the model?",
"@JeevanBhoot `output` will always be in `int64` since it corresponds to the predicted tokens. There is no need to cast the output indices to `bfloat16`; what is your motivation behind getting `output` in bf16?",
"@younesbelkada I don't need the final output to be in bf16, but I want to confirm that EVERY operation and output within the model is bf16.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes - 1x RTX 4090
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada @SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am trying to run both GPTQ and unquantized Llama2 models:
```python
gptq_config = GPTQConfig(bits=4, disable_exllama=True)
model_path = "TheBloke/Llama-2-7B-Chat-GPTQ"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=False, revision="main", quantization_config=gptq_config, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
input_ids = tokenizer("Tell me an interesting fact!", return_tensors="pt").input_ids.to("cuda").to(torch.bfloat16)
output = model.generate(input_ids)
```
and
```python
model_path = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=False, revision="main", torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=True)
input_ids = tokenizer("Tell me an interesting fact!", return_tensors="pt").input_ids.to("cuda").to(torch.bfloat16)
output = model.generate(input_ids)
```
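Per-layer output dtypes can be verified with forward hooks. A minimal sketch on a toy stand-in module (the same pattern applies unchanged to the loaded Llama model; the tiny `nn.Sequential` here is just a placeholder so the snippet runs without downloads):

```python
import torch
import torch.nn as nn

# Toy stand-in: an embedding followed by a linear layer, cast to
# bfloat16 the same way as the checkpoints above.
model = nn.Sequential(nn.Embedding(10, 4), nn.Linear(4, 4)).to(torch.bfloat16)

activation_dtypes = {}

def record_dtype(name):
    def hook(module, inputs, output):
        activation_dtypes[name] = output.dtype
    return hook

for name, module in model.named_modules():
    if name:  # skip the top-level container itself
        module.register_forward_hook(record_dtype(name))

input_ids = torch.tensor([[1, 2, 3]])  # token indices stay int64
model(input_ids)
print(activation_dtypes)  # both layers report torch.bfloat16
```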
### Expected behavior
I am trying to run Llama2 inference in bfloat16 - I want all weights and computations to be in bfloat16. When I run the two snippets provided, I encounter the following error:
```
File "/home/jeevan/miniconda3/envs/llama2_env/lib/python3.10/site-packages/torch/nn/functional.py", line 2233, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got CUDABFloat16Type instead (while checking arguments for embedding)
```
If I keep the input in int64, this works, i.e. changing just the model weights to bfloat16 is fine. But I want all computation to be performed in bfloat16. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28434/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28433 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28433/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28433/comments | https://api.github.com/repos/huggingface/transformers/issues/28433/events | https://github.com/huggingface/transformers/pull/28433 | 2,074,298,641 | PR_kwDOCUB6oc5jrky2 | 28,433 | Enable multi-label image classification in pipeline | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@Narsil - I've added applying the sigmoid function to the model outputs for image classification, matching the text classification pipeline, per your [suggestion here](https://github.com/huggingface/huggingface.js/issues/429#issuecomment-1880625080). \r\n\r\nAgreed, controlling this through a config would be a good idea -- it is partially controlled through `model.config.problem_type` here. Is this enough or would you like the activation function to be more explicitly defined in the config? "
] | 1,704 | 1,704 | 1,704 | COLLABORATOR | null | # What does this PR do?
Enables multi-label image classification in the pipeline and allows explicitly specifying the activation function applied to the model's logits - matching the logic of the [text classification pipeline](https://github.com/huggingface/transformers/blob/ffd3710391c0700a3957f0cdf2c99bc5ae966c70/src/transformers/pipelines/text_classification.py#L195).
Fixes https://github.com/huggingface/huggingface.js/issues/429
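For illustration, here is the difference between the two activations on plain tensors (independent of the pipeline code itself): softmax produces competing scores that sum to 1, while sigmoid scores each label independently, which is what multi-label classification needs:

```python
import torch

logits = torch.tensor([2.0, -1.0, 0.5])

# Single-label: scores compete and sum to (approximately) 1.
softmax_scores = logits.softmax(dim=-1)

# Multi-label: each label is scored independently in (0, 1); this is the
# activation applied when model.config.problem_type is
# "multi_label_classification".
sigmoid_scores = logits.sigmoid()

print(softmax_scores, sigmoid_scores)
```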
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28433/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28433",
"html_url": "https://github.com/huggingface/transformers/pull/28433",
"diff_url": "https://github.com/huggingface/transformers/pull/28433.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28433.patch",
"merged_at": 1704968979000
} |
https://api.github.com/repos/huggingface/transformers/issues/28432 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28432/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28432/comments | https://api.github.com/repos/huggingface/transformers/issues/28432/events | https://github.com/huggingface/transformers/pull/28432 | 2,074,076,507 | PR_kwDOCUB6oc5jqz3A | 28,432 | CI: limit natten version | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | MEMBER | null | # What does this PR do?
[natten](https://github.com/SHI-Labs/NATTEN/) has a new release (v0.15.0), breaking our CI. This PR limits its version to the latest working version. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28432/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28432/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28432",
"html_url": "https://github.com/huggingface/transformers/pull/28432",
"diff_url": "https://github.com/huggingface/transformers/pull/28432.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28432.patch",
"merged_at": 1704890345000
} |
https://api.github.com/repos/huggingface/transformers/issues/28431 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28431/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28431/comments | https://api.github.com/repos/huggingface/transformers/issues/28431/events | https://github.com/huggingface/transformers/pull/28431 | 2,073,804,691 | PR_kwDOCUB6oc5jp4fY | 28,431 | Doc | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@stevhliu please help review the doc",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28431). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi @stevhliu . Thanks for your advice, I have fixed these comments. Please take a look, thx!"
] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | Hi @amyeroberts
We need to update some library versions for CPU training in the docs. Would you please help review it? Thanks! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28431/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28431",
"html_url": "https://github.com/huggingface/transformers/pull/28431",
"diff_url": "https://github.com/huggingface/transformers/pull/28431.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28431.patch",
"merged_at": 1704992148000
} |
https://api.github.com/repos/huggingface/transformers/issues/28430 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28430/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28430/comments | https://api.github.com/repos/huggingface/transformers/issues/28430/events | https://github.com/huggingface/transformers/pull/28430 | 2,073,787,119 | PR_kwDOCUB6oc5jp0rr | 28,430 | Fix number of models in README.md | {
"login": "bayllama",
"id": 142558246,
"node_id": "U_kgDOCH9EJg",
"avatar_url": "https://avatars.githubusercontent.com/u/142558246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bayllama",
"html_url": "https://github.com/bayllama",
"followers_url": "https://api.github.com/users/bayllama/followers",
"following_url": "https://api.github.com/users/bayllama/following{/other_user}",
"gists_url": "https://api.github.com/users/bayllama/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bayllama/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bayllama/subscriptions",
"organizations_url": "https://api.github.com/users/bayllama/orgs",
"repos_url": "https://api.github.com/users/bayllama/repos",
"events_url": "https://api.github.com/users/bayllama/events{/privacy}",
"received_events_url": "https://api.github.com/users/bayllama/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
This fixes a small typo in the README.md
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28430/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28430",
"html_url": "https://github.com/huggingface/transformers/pull/28430",
"diff_url": "https://github.com/huggingface/transformers/pull/28430.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28430.patch",
"merged_at": 1704885069000
} |
https://api.github.com/repos/huggingface/transformers/issues/28429 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28429/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28429/comments | https://api.github.com/repos/huggingface/transformers/issues/28429/events | https://github.com/huggingface/transformers/pull/28429 | 2,073,673,605 | PR_kwDOCUB6oc5jpb-B | 28,429 | disable query_length diff on graph model | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, I don't think this change should be done. A jit-traced model still uses SDPA, which may dispatch to the memory-efficient backend; that backend requires no fully masked rows (see https://github.com/pytorch/pytorch/issues/110213), which is what `_unmask_unattended` avoids. I'd recommend tracing models with `query_length > 1`.\r\n\r\n@jiqing-feng Could you clarify the motivation and what is not working as intended?",
"> Hi, I don't think this change should be done. jit traced model still uses SDPA that may dispatch to memory-efficient backend, which requires to have no fully masked rows (see [pytorch/pytorch#110213](https://github.com/pytorch/pytorch/issues/110213)), which is what `_unmask_unattended` avoids. I'd recommend to trace models with `query_length > 1`.\r\n> \r\n> @jiqing-feng Could you clarify the motivation and what is not working as intended?\r\n\r\nSorry that I didn't make it clear. Yes, we trace the model with `query_length > 1`, but when we use the traced model for generation tasks, the inputs do not contain `past_key_values` when inferring the 1st token. We used a tricky workaround that inputs a `past_key_values` of length 0, see [here](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/generation/modeling.py#L311-L313). In this way, we could avoid passing `None` to the traced model (a `None` input causes an error in the traced model), but it does not satisfy `query_length > 0`, so the traced model's forward fails in `_unmask_unattended`. \r\n\r\nIn conclusion, [TSModelForCausalLM](https://github.com/huggingface/optimum-intel/blob/main/optimum/intel/generation/modeling.py#L331) in `optimum-intel` will fail if I don't make this change. \r\n \r\nDo you have any good ideas to avoid it? Thanks!\r\n\r\nPlease ask me if there is anything I didn't make clear.\r\n",
"@jiqing-feng `query_length` is taken from `input_ids` (https://github.com/huggingface/transformers/blob/ee2482b6f8eca320f1eff112673adae215dbc927/src/transformers/models/llama/modeling_llama.py#L1037), not from the sequence length in the KV cache, so I am not sure I understand: how is PKV impacting the control flow `if query_length > 1:`? Feel free to share a repro of the unexpected behavior/bug if that can help.",
"Though I agree that jit.trace / dynamo / symbolic_trace are likely not able to properly trace https://github.com/huggingface/transformers/blob/6c78bbcb8320d316434262ef003251ca997db0d1/src/transformers/modeling_attn_mask_utils.py#L244-L245",
"Hi @fxmarty Thanks for your advice, I get your point now. The problem should be fixed in `optimum-intel` instead of transformers because it is related to our traced model inputs.",
"@jiqing-feng I actually apply the same fix as you in https://github.com/huggingface/transformers/pull/28447, as `_unmask_unattended` cannot be properly traced with torch.jit.trace or symbolic_trace. As this method was only implemented as a workaround to a limitation in pytorch with a specific backend on specific hardware, I think it is fine.\r\n\r\nWe may want to have a traceable `_unmask_unattended`."
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | Hi @fxmarty
In generation tasks, the model does not use `AttentionMaskConverter._unmask_unattended` on the 1st token because there are no `past_key_values` yet, but it does use it from the 2nd token on. This causes a control-flow difference while tracing, so we need to disable the `query_length` check when using `jit.trace`.
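A minimal, self-contained sketch of the underlying problem (a toy function, not the actual modeling code): `torch.jit.trace` freezes whichever side of a data-dependent branch the example input takes, so a model traced with `query_length > 1` silently reuses that branch for single-token decoding steps:

```python
import torch

def helper(hidden_states, mask):
    # Mimics the data-dependent branch: the "unmask" path is only
    # reached when query_length > 1 (prompt step), not for
    # single-token decoding steps.
    query_length = hidden_states.shape[1]
    if query_length > 1:
        return mask * 2  # stand-in for the "unmask" path
    return mask

# Traced with a multi-token input, so the True branch is baked in.
traced = torch.jit.trace(helper, (torch.ones(1, 4, 8), torch.ones(1, 4)))

# A single-token call silently reuses the frozen branch:
print(traced(torch.ones(1, 1, 8), torch.ones(1, 1)))  # tensor([[2.]])
print(helper(torch.ones(1, 1, 8), torch.ones(1, 1)))  # tensor([[1.]])
```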
cc @younesbelkada @ArthurZucker | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28429/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28429",
"html_url": "https://github.com/huggingface/transformers/pull/28429",
"diff_url": "https://github.com/huggingface/transformers/pull/28429.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28429.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28428 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28428/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28428/comments | https://api.github.com/repos/huggingface/transformers/issues/28428/events | https://github.com/huggingface/transformers/issues/28428 | 2,073,602,733 | I_kwDOCUB6oc57mKqt | 28,428 | Huggingface endpoint not working | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, this is not related to `transformers` but the inference API ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
```
2024-01-10 05:12:28.914726: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-01-10 05:12:28.914812: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-01-10 05:12:28.917235: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-01-10 05:12:31.226361: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
WARNING:tensorflow:From /usr/local/lib/python3.10/dist-packages/transformers/commands/env.py:100: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.36.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.23
- JaxLib version: 0.4.23
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import requests
import json
def call_huggingface_api(text, params=None):
url = "https://d2q5h5r3a1pkorfp.us-east-1.aws.endpoints.huggingface.cloud"
endpoint = "/"
headers = {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
data = {
"inputs": text,
"parameters": params
}
json_data = json.dumps(data)
response = requests.post(url + endpoint, data=json_data, headers=headers)
if response.status_code == 200:
return response.json()
else:
print("Request failed with status code:", response.status_code)
return None
```
```
parameters = {
"top_k": None
}
result = call_huggingface_api(text, parameters)
print(result)
```
gives
```
Request failed with status code: 502
None
```
### Expected behavior
runs properly with result | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28428/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28427 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28427/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28427/comments | https://api.github.com/repos/huggingface/transformers/issues/28427/events | https://github.com/huggingface/transformers/issues/28427 | 2,073,519,255 | I_kwDOCUB6oc57l2SX | 28,427 | RagRetriever download too much data and won't stop | {
"login": "MohammadDara",
"id": 6161219,
"node_id": "MDQ6VXNlcjYxNjEyMTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6161219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MohammadDara",
"html_url": "https://github.com/MohammadDara",
"followers_url": "https://api.github.com/users/MohammadDara/followers",
"following_url": "https://api.github.com/users/MohammadDara/following{/other_user}",
"gists_url": "https://api.github.com/users/MohammadDara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MohammadDara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MohammadDara/subscriptions",
"organizations_url": "https://api.github.com/users/MohammadDara/orgs",
"repos_url": "https://api.github.com/users/MohammadDara/repos",
"events_url": "https://api.github.com/users/MohammadDara/events{/privacy}",
"received_events_url": "https://api.github.com/users/MohammadDara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ydshieh if you can have a look 🤗 ",
 "Hi.\r\n\r\nWell, when I tried it on Colab\r\n\r\n> Downloading data: 80% 3.75G/4.69G [01:16<00:22, 41.7MB/s]\r\n\r\nSo it's big `4.69 G` ... On Colab, it's fast. In your case, it depends on your network :-)",
 "Sorry, you are right, something is wrong! I will check",
"Well, it is 51 files of 1.33 GB to download, and after that 20M examples to generate and takes 14 hours ....\r\n\r\n> Generating train split 1647286/21015300 [1:11:27<13:45:48, 390.89 examples/s\r\n\r\n\r\n",
"So it will definitely finish (in theory). There is nothing we can do but just keep patient 😭 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: macOS-14.1.1-arm64-arm-64bit
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration, RagSequenceForGeneration
import torch
import faiss
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact")
```
### Expected behavior
I expect RagRetriever to download the data and finish. But it will never stop. Here is part of the result:
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.32G/1.32G [00:14<00:00, 88.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:14<00:00, 89.8MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 88.0MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.6MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 85.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:17<00:00, 75.9MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 83.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:14<00:00, 89.6MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 88.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 86.2MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 82.4MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.6MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 81.9MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 78.2MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 69.7MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 70.4MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 84.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:16<00:00, 80.0MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 69.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.4MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [02:03<00:00, 10.8MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:26<00:00, 50.7MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:22<00:00, 59.3MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 66.8MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 84.2MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 70.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:20<00:00, 65.0MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 70.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 84.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:15<00:00, 87.5MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:18<00:00, 71.1MB/s]
Downloading data: 100%|█████████████████████████████████████████████████████████████████████| 1.33G/1.33G [00:19<00:00, 67.0MB/s]
It will continue downloading if you don't stop it. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28427/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28425 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28425/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28425/comments | https://api.github.com/repos/huggingface/transformers/issues/28425/events | https://github.com/huggingface/transformers/issues/28425 | 2,073,426,512 | I_kwDOCUB6oc57lfpQ | 28,425 | GQA Llama 13B slower than Llama 13B without GQA | {
"login": "Adonai02",
"id": 70610799,
"node_id": "MDQ6VXNlcjcwNjEwNzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/70610799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Adonai02",
"html_url": "https://github.com/Adonai02",
"followers_url": "https://api.github.com/users/Adonai02/followers",
"following_url": "https://api.github.com/users/Adonai02/following{/other_user}",
"gists_url": "https://api.github.com/users/Adonai02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Adonai02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Adonai02/subscriptions",
"organizations_url": "https://api.github.com/users/Adonai02/orgs",
"repos_url": "https://api.github.com/users/Adonai02/repos",
"events_url": "https://api.github.com/users/Adonai02/events{/privacy}",
"received_events_url": "https://api.github.com/users/Adonai02/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"That would be nice but a bit outside the scope of transformers! Would be nice if you have a working example! \r\nWhat I recommend is to register a load_state_dict hook that converts the checkpoints on the fly. \r\nThe benchmark should run on different num kv heads as some shapes might be less optimal ? That would be my intuitiion. Also a single head (MQA) should be always faster than MHA",
"If I understand correctly, what is displayed here is an attempt to implement GQA on a non-GQA Llama 2 13b model?\nIf that's the case, and despite the slight loss of performance observed, does the context size in VRAM get diminished as GQA allows, and is the perplexity of the model affected?\nIf that's not the case, sorry for misunderstanding!"
] | 1,704 | 1,705 | null | NONE | null | ### Feature request
It would be nice if, when I choose a different num_key_value_heads (num_key_value_heads < num_attention_heads) in the model's config, the attention weights were automatically computed by mean pooling. Right now, if I do this, it gives me the following error.
key_value_heads = 4
<img width="916" alt="image" src="https://github.com/huggingface/transformers/assets/70610799/05ae81c3-2ac6-4339-a805-02725ff9b538">
### Motivation
Make models faster, e.g. Llama 2 13B, Llama 7B, Mistral 7B, etc.
### Your contribution
I tried to do a simple implementation, but it gives me inconsistent results: the GQA model is slower than the model without GQA.
```
from transformers import LlamaConfig
from transformers.models.llama.modeling_llama import LlamaAttention, LlamaSdpaAttention
from copy import deepcopy
import torch
def split_attention_to_heads(input_tensor, num_splits):
# Get the shape of the input tensor
rows, cols = input_tensor.shape
# Check if the number of rows is divisible by the number of splits
if rows % num_splits != 0:
raise ValueError("Number of rows is not divisible by the number of splits")
    # Use chunk to split the tensor along the rows (each split has rows // num_splits rows)
split_tensors = input_tensor.chunk(num_splits, dim=0)
return split_tensors
def average_heads(tensor_tuple, group_size, dtype):
# Initialize an empty list to store the averaged tensors
averaged_tensors = []
# Iterate through the tuple and average consecutive groups
for i in range(0, len(tensor_tuple), group_size):
# Take a group of tensors
tensor_group = tensor_tuple[i:i + group_size]
# Calculate the mean along dimension 0
averaged_tensor = torch.mean(torch.stack(tensor_group), dim=0, dtype=dtype)
# Append the averaged tensor to the list
averaged_tensors.append(averaged_tensor)
# Convert the list of averaged tensors to a tuple
averaged_tensors_tuple = tuple(averaged_tensors)
return averaged_tensors_tuple
def convert_wts_to_gqa(attention_module: torch.nn.Module , model_configuration: LlamaConfig):
attentions_wts = attention_module.state_dict().copy()
num_heads = model_configuration.num_attention_heads
gqa_groups = num_heads // model_configuration.num_key_value_heads
for name_wts in list(attentions_wts.keys()):
if ("k_proj" in name_wts) or ("v_proj" in name_wts):
tensor_to_convert = attentions_wts[name_wts].clone()
torch_dtype = tensor_to_convert.dtype
attn_heads = split_attention_to_heads(tensor_to_convert, num_splits=num_heads)
gqa_tensors_grouped = average_heads(attn_heads, gqa_groups, dtype=torch_dtype)
gqa_tensors_grouped = torch.cat(gqa_tensors_grouped)
attentions_wts[name_wts] = gqa_tensors_grouped
del tensor_to_convert
return attentions_wts
def convert_llama_to_gqa(module: torch.nn.Module, llama_config_from_hf: LlamaConfig, inplace: bool = False):
if isinstance(module, LlamaAttention):
wts_gqa = convert_wts_to_gqa(attention_module=module, model_configuration=llama_config_from_hf)
llama_atention_gqa = LlamaAttention(llama_config_from_hf, layer_idx=module.layer_idx)
llama_atention_gqa.half()
llama_atention_gqa.load_state_dict(wts_gqa)
return llama_atention_gqa
out = module if inplace else deepcopy(module)
for name, child in out.named_children():
out._modules[name] = convert_llama_to_gqa(child, llama_config_from_hf=llama_config_from_hf, inplace=True)
return out
from transformers import AutoConfig
configuration_llama = AutoConfig.from_pretrained("meta-llama/Llama-2-13b-chat-hf")
configuration_llama.num_key_value_heads = 4
llama_gqa = convert_llama_to_gqa(llama, configuration_llama)
```
**Results**
GQA LLAMA
<img width="784" alt="image" src="https://github.com/huggingface/transformers/assets/70610799/d1a1c250-5ed3-4c34-9041-620b6b57ef3c">
NO GQA LLAMA
<img width="782" alt="image" src="https://github.com/huggingface/transformers/assets/70610799/b09b020a-94fa-450c-a239-3f7fa5339f7a">
I don't know if I'm misunderstanding something; please let me know if you can see something I can't.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28425/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28424 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28424/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28424/comments | https://api.github.com/repos/huggingface/transformers/issues/28424/events | https://github.com/huggingface/transformers/pull/28424 | 2,073,406,278 | PR_kwDOCUB6oc5joi7U | 28,424 | Solve: Inconsistent decoding with additional special tokens between slow and fast tokenizers. | {
"login": "hi-sushanta",
"id": 93595990,
"node_id": "U_kgDOBZQpVg",
"avatar_url": "https://avatars.githubusercontent.com/u/93595990?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hi-sushanta",
"html_url": "https://github.com/hi-sushanta",
"followers_url": "https://api.github.com/users/hi-sushanta/followers",
"following_url": "https://api.github.com/users/hi-sushanta/following{/other_user}",
"gists_url": "https://api.github.com/users/hi-sushanta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hi-sushanta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hi-sushanta/subscriptions",
"organizations_url": "https://api.github.com/users/hi-sushanta/orgs",
"repos_url": "https://api.github.com/users/hi-sushanta/repos",
"events_url": "https://api.github.com/users/hi-sushanta/events{/privacy}",
"received_events_url": "https://api.github.com/users/hi-sushanta/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Would you be able to check my solution?",
 "I don't understand why this error occurs, but my code does the same work as described in the issue section.",
"@hi-sushanta To figure out the errors, you'll need to look at the CI runs and debug locally so that the currently failing tests pass. For the quality checks, you'll need to fun `make fix-copies` and `make fixup`",
"But this command returns to the original codebase. that does not make sense because my code works exactly like the issues section mentioned."
] | 1,704 | 1,707 | null | CONTRIBUTOR | null | With this pull request, I have endeavored to remedy a minor decoding issue. If any problems remain with my proposed solution, I welcome your thoughtful feedback and suggestions for improvement.
Fixes #28287
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker @amyeroberts
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28424/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28424",
"html_url": "https://github.com/huggingface/transformers/pull/28424",
"diff_url": "https://github.com/huggingface/transformers/pull/28424.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28424.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28423 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28423/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28423/comments | https://api.github.com/repos/huggingface/transformers/issues/28423/events | https://github.com/huggingface/transformers/pull/28423 | 2,073,383,421 | PR_kwDOCUB6oc5joeCJ | 28,423 | Fix paths to AI Sweden Models reference and model loading | {
"login": "JuanFKurucz",
"id": 31422367,
"node_id": "MDQ6VXNlcjMxNDIyMzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/31422367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JuanFKurucz",
"html_url": "https://github.com/JuanFKurucz",
"followers_url": "https://api.github.com/users/JuanFKurucz/followers",
"following_url": "https://api.github.com/users/JuanFKurucz/following{/other_user}",
"gists_url": "https://api.github.com/users/JuanFKurucz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JuanFKurucz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JuanFKurucz/subscriptions",
"organizations_url": "https://api.github.com/users/JuanFKurucz/orgs",
"repos_url": "https://api.github.com/users/JuanFKurucz/repos",
"events_url": "https://api.github.com/users/JuanFKurucz/events{/privacy}",
"received_events_url": "https://api.github.com/users/JuanFKurucz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes the paths to the correct models of AI Sweden Models, as they migrated their models to a different account https://huggingface.co./AI-Sweden-Models.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Models:
- text models: @ArthurZucker and @younesbelkada
Documentation: @stevhliu and @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28423/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28423",
"html_url": "https://github.com/huggingface/transformers/pull/28423",
"diff_url": "https://github.com/huggingface/transformers/pull/28423.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28423.patch",
"merged_at": 1705306163000
} |
https://api.github.com/repos/huggingface/transformers/issues/28422 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28422/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28422/comments | https://api.github.com/repos/huggingface/transformers/issues/28422/events | https://github.com/huggingface/transformers/pull/28422 | 2,073,334,024 | PR_kwDOCUB6oc5joTUU | 28,422 | Set `cache_dir` for `evaluate.load()` in example scripts | {
"login": "aphedges",
"id": 14283972,
"node_id": "MDQ6VXNlcjE0MjgzOTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/14283972?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aphedges",
"html_url": "https://github.com/aphedges",
"followers_url": "https://api.github.com/users/aphedges/followers",
"following_url": "https://api.github.com/users/aphedges/following{/other_user}",
"gists_url": "https://api.github.com/users/aphedges/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aphedges/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aphedges/subscriptions",
"organizations_url": "https://api.github.com/users/aphedges/orgs",
"repos_url": "https://api.github.com/users/aphedges/repos",
"events_url": "https://api.github.com/users/aphedges/events{/privacy}",
"received_events_url": "https://api.github.com/users/aphedges/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I would appreciate if you could make sure to preserve my commit message (with any needed changes) when merging so it ends up in the Git log."
] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
While using `run_clm.py`,[^1] I noticed that some files were being added to my global cache, not the local cache. I set the `cache_dir` parameter for the one call to `evaluate.load()`, which partially solved the problem. I figured that while I was fixing the one script upstream, I might as well fix the problem in all other example scripts that I could.
There are still some files being added to my global cache, but this appears to be a bug in `evaluate` itself. This commit at least moves some of the files into the local cache, which is better than before.
To create this PR, I made the following regex-based transformation: `evaluate\.load\((.*?)\)` -> `evaluate\.load\($1,
cache_dir=model_args.cache_dir\)`. After using that, I manually fixed all modified files with `ruff` serving as useful guidance. During the process, I removed one existing usage of the `cache_dir` parameter in a script that did not have a corresponding `--cache-dir` argument declared.
[^1]: I specifically used `pytorch/language-modeling/run_clm.py` from v4.34.1 of the library. For the original code, see the following URL: https://github.com/huggingface/transformers/tree/acc394c4f5e1283c19783581790b3dc3105a3697/examples/pytorch/language-modeling/run_clm.py.
## Who can review?
Maintained examples:
- PyTorch:
- text models: @ArthurZucker
- TensorFlow: @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28422/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28422",
"html_url": "https://github.com/huggingface/transformers/pull/28422",
"diff_url": "https://github.com/huggingface/transformers/pull/28422.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28422.patch",
"merged_at": 1704983924000
} |
https://api.github.com/repos/huggingface/transformers/issues/28421 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28421/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28421/comments | https://api.github.com/repos/huggingface/transformers/issues/28421/events | https://github.com/huggingface/transformers/pull/28421 | 2,073,059,648 | PR_kwDOCUB6oc5jnXEa | 28,421 | Skip now failing test in the Trainer tests | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
https://github.com/huggingface/accelerate/pull/2319 introduced a revert on the DataLoader sampling logic to *not* use a SeedableRandomSampler by default as users were taken aback by the performance differences, so we've set it to `False` by default. This test is now back to its old way, where it was failing for ages.
As noted in the `skip`, one of my next items to hit is a configuration for Accelerator that can be passed to the `TrainingArguments` that can customize this, but for now it's not the case.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28421/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28421",
"html_url": "https://github.com/huggingface/transformers/pull/28421",
"diff_url": "https://github.com/huggingface/transformers/pull/28421.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28421.patch",
"merged_at": 1704884551000
} |
https://api.github.com/repos/huggingface/transformers/issues/28420 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28420/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28420/comments | https://api.github.com/repos/huggingface/transformers/issues/28420/events | https://github.com/huggingface/transformers/pull/28420 | 2,072,866,425 | PR_kwDOCUB6oc5jmscR | 28,420 | Optionally preprocess segmentation maps for MobileViT | {
"login": "harisankar95",
"id": 58052269,
"node_id": "MDQ6VXNlcjU4MDUyMjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/58052269?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/harisankar95",
"html_url": "https://github.com/harisankar95",
"followers_url": "https://api.github.com/users/harisankar95/followers",
"following_url": "https://api.github.com/users/harisankar95/following{/other_user}",
"gists_url": "https://api.github.com/users/harisankar95/gists{/gist_id}",
"starred_url": "https://api.github.com/users/harisankar95/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harisankar95/subscriptions",
"organizations_url": "https://api.github.com/users/harisankar95/orgs",
"repos_url": "https://api.github.com/users/harisankar95/repos",
"events_url": "https://api.github.com/users/harisankar95/events{/privacy}",
"received_events_url": "https://api.github.com/users/harisankar95/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@amyeroberts Can you please review the PR? MobileViT image preprocessor is updated to accept segmentation maps inline to other preprocessors for segmentation models like that of Segformer. ",
"I have corrected the error in the tests_torch. I will wait for the https://github.com/huggingface/transformers/pull/28432 to be merged to fix the remaining CI tests",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28420). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
- Preprocessor can now accept segmentation maps as well and performs augmentations on them in line with the input images.
- Tests added for preprocessing segmentation masks.
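The key design choice here can be illustrated with a toy pure-Python sketch (not the actual `MobileViTImageProcessor` code; it assumes, as in comparable processors such as Segformer's, that segmentation maps use nearest-neighbor resampling and skip rescaling/normalization so class ids are preserved):

```python
def resize_nearest(grid, out_h, out_w):
    """Nearest-neighbor resize of a 2D list; safe for integer class-id masks."""
    in_h, in_w = len(grid), len(grid[0])
    return [[grid[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
            for i in range(out_h)]

def preprocess(image, mask, size=(4, 4)):
    # The image is resized and rescaled to [0, 1]; the segmentation map is
    # only resized (nearest-neighbor), so its labels stay exact integers.
    img = [[p / 255.0 for p in row] for row in resize_nearest(image, *size)]
    lbl = resize_nearest(mask, *size)
    return img, lbl
```

The point of applying the same geometric augmentation to both tensors, but the pixel-value transforms only to the image, is that the mask must stay spatially aligned with the image while its values remain valid class indices.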
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28420/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28420",
"html_url": "https://github.com/huggingface/transformers/pull/28420",
"diff_url": "https://github.com/huggingface/transformers/pull/28420.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28420.patch",
"merged_at": 1704984734000
} |
https://api.github.com/repos/huggingface/transformers/issues/28419 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28419/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28419/comments | https://api.github.com/repos/huggingface/transformers/issues/28419/events | https://github.com/huggingface/transformers/pull/28419 | 2,072,771,476 | PR_kwDOCUB6oc5jmXeF | 28,419 | Correctly resolve trust_remote_code=None for AutoTokenizer | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28419). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I tried, but I couldn't figure out a way to test it, and I don't think the previous test authors could either! The test would need to check that `input()` is called, but we can't really check that - we can only check for the error that's raised when it times out. This is what happens in [the existing tests](https://github.com/huggingface/transformers/blob/main/tests/models/auto/test_tokenization_auto.py#L306), and that's why the tests didn't catch the problem - the error is raised without ever calling `input()`, and the test only checks for the error, so it passed."
] | 1,704 | 1,704 | 1,704 | MEMBER | null | If `trust_remote_code` is left at the default `None` in `AutoTokenizer.from_pretrained()`, an error is thrown if you try to load a repo that requires remote code, rather than the correct dialog box being displayed. This PR fixes that issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28419/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28419",
"html_url": "https://github.com/huggingface/transformers/pull/28419",
"diff_url": "https://github.com/huggingface/transformers/pull/28419.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28419.patch",
"merged_at": 1704985929000
} |
https://api.github.com/repos/huggingface/transformers/issues/28418 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28418/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28418/comments | https://api.github.com/repos/huggingface/transformers/issues/28418/events | https://github.com/huggingface/transformers/pull/28418 | 2,072,765,682 | PR_kwDOCUB6oc5jmWNX | 28,418 | [i18n-fr] Translate accelerate tutorial to French | {
"login": "NoB0",
"id": 28621493,
"node_id": "MDQ6VXNlcjI4NjIxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoB0",
"html_url": "https://github.com/NoB0",
"followers_url": "https://api.github.com/users/NoB0/followers",
"following_url": "https://api.github.com/users/NoB0/following{/other_user}",
"gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoB0/subscriptions",
"organizations_url": "https://api.github.com/users/NoB0/orgs",
"repos_url": "https://api.github.com/users/NoB0/repos",
"events_url": "https://api.github.com/users/NoB0/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoB0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28418). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
Translates the `accelerate.md` file of the documentation to French.
Part of #21456
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
French-speaking contributors.
Documentation: @stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28418/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28418",
"html_url": "https://github.com/huggingface/transformers/pull/28418",
"diff_url": "https://github.com/huggingface/transformers/pull/28418.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28418.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28417 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28417/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28417/comments | https://api.github.com/repos/huggingface/transformers/issues/28417/events | https://github.com/huggingface/transformers/pull/28417 | 2,072,687,874 | PR_kwDOCUB6oc5jmE_k | 28,417 | Bump fonttools from 4.31.1 to 4.43.0 in /examples/research_projects/decision_transformer | {
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
} | [
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
},
{
"id": 6410654816,
"node_id": "LA_kwDOCUB6oc8AAAABfhrUYA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/python",
"name": "python",
"color": "2b67c6",
"default": false,
"description": "Pull requests that update Python code"
}
] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | Bumps [fonttools](https://github.com/fonttools/fonttools) from 4.31.1 to 4.43.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/fonttools/fonttools/releases">fonttools's releases</a>.</em></p>
<blockquote>
<h2>4.43.0</h2>
<ul>
<li>[subset] Set up lxml <code>XMLParser(resolve_entities=False)</code> when parsing OT-SVG documents to prevent XML External Entity (XXE) attacks (9f61271dc): <a href="https://codeql.github.com/codeql-query-help/python/py-xxe/">https://codeql.github.com/codeql-query-help/python/py-xxe/</a></li>
<li>[varLib.iup] Added workaround for a Cython bug in <code>iup_delta_optimize</code> that was leading to IUP tolerance being incorrectly initialised, resulting in sub-optimal deltas (60126435d, <a href="https://redirect.github.com/cython/cython/issues/5732">cython/cython#5732</a>).</li>
<li>[varLib] Added new command-line entry point <code>fonttools varLib.avar</code> to add an <code>avar</code> table to an existing VF from axes mappings in a .designspace file (0a3360e52).</li>
<li>[instancer] Fixed bug whereby no longer used variation regions were not correctly pruned after VarData optimization (<a href="https://redirect.github.com/fonttools/fonttools/issues/3268">#3268</a>).</li>
<li>Added support for Python 3.12 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3283">#3283</a>).</li>
</ul>
<h2>4.42.1</h2>
<ul>
<li>[t1Lib] Fixed several Type 1 issues (<a href="https://redirect.github.com/fonttools/fonttools/issues/3238">#3238</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3240">#3240</a>).</li>
<li>[otBase/packer] Allow sharing tables reached by different offset sizes (<a href="https://redirect.github.com/fonttools/fonttools/issues/3241">#3241</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3236">#3236</a>, 457f11c2).</li>
<li>[varLib/merger] Fix Cursive attachment merging error when all anchors are NULL (<a href="https://redirect.github.com/fonttools/fonttools/issues/3248">#3248</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3247">#3247</a>).</li>
<li>[ttLib] Fixed warning when calling <code>addMultilingualName</code> and <code>ttFont</code> parameter was not passed on to <code>findMultilingualName</code> (<a href="https://redirect.github.com/fonttools/fonttools/issues/3253">#3253</a>).</li>
</ul>
<h2>4.42.0</h2>
<ul>
<li>[varLib] Use sentinel value 0xFFFF to mark a glyph advance in hmtx/vmtx as non participating, allowing sparse masters to contain glyphs for variation purposes other than {H,V}VAR (<a href="https://redirect.github.com/fonttools/fonttools/issues/3235">#3235</a>).</li>
<li>[varLib/cff] Treat empty glyphs in non-default masters as missing, thus not participating in CFF2 delta computation, similarly to how varLib already treats them for gvar (<a href="https://redirect.github.com/fonttools/fonttools/issues/3234">#3234</a>).</li>
<li>Added varLib.avarPlanner script to deduce 'correct' avar v1 axis mappings based on glyph average weights (<a href="https://redirect.github.com/fonttools/fonttools/issues/3223">#3223</a>).</li>
</ul>
<h2>4.41.1</h2>
<ul>
<li>[subset] Fixed perf regression in v4.41.0 by making <code>NameRecordVisitor</code> only visit tables that do contain nameID references (<a href="https://redirect.github.com/fonttools/fonttools/issues/3213">#3213</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3214">#3214</a>).</li>
<li>[varLib.instancer] Support instancing fonts containing null ConditionSet offsets in FeatureVariationRecords (<a href="https://redirect.github.com/fonttools/fonttools/issues/3211">#3211</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3212">#3212</a>).</li>
<li>[statisticsPen] Report font glyph-average weight/width and font-wide slant.</li>
<li>[fontBuilder] Fixed head.created date incorrectly set to 0 instead of the current timestamp, regression introduced in v4.40.0 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3210">#3210</a>).</li>
<li>[varLib.merger] Support sparse <code>CursivePos</code> masters (<a href="https://redirect.github.com/fonttools/fonttools/issues/3209">#3209</a>).</li>
</ul>
<h2>4.41.0</h2>
<ul>
<li>[fontBuilder] Fixed bug in setupOS2 with default panose attribute incorrectly being set to a dict instead of a Panose object (<a href="https://redirect.github.com/fonttools/fonttools/issues/3201">#3201</a>).</li>
<li>[name] Added method to <code>removeUnusedNameRecords</code> in the user range (<a href="https://redirect.github.com/fonttools/fonttools/issues/3185">#3185</a>).</li>
<li>[varLib.instancer] Fixed issue with L4 instancing (moving default) (<a href="https://redirect.github.com/fonttools/fonttools/issues/3179">#3179</a>).</li>
<li>[cffLib] Use latin1 so we can roundtrip non-ASCII in {Full,Font,Family}Name (<a href="https://redirect.github.com/fonttools/fonttools/issues/3202">#3202</a>).</li>
<li>[designspaceLib] Mark <!-- raw HTML omitted --> as optional in docs (as it is in the code).</li>
<li>[glyf-1] Fixed drawPoints() bug whereby last cubic segment becomes quadratic (<a href="https://redirect.github.com/fonttools/fonttools/issues/3189">#3189</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3190">#3190</a>).</li>
<li>[fontBuilder] Propagate the 'hidden' flag to the fvar Axis instance (<a href="https://redirect.github.com/fonttools/fonttools/issues/3184">#3184</a>).</li>
<li>[fontBuilder] Update setupAvar() to also support avar 2, fixing <code>_add_avar()</code> call site (<a href="https://redirect.github.com/fonttools/fonttools/issues/3183">#3183</a>).</li>
<li>Added new <code>voltLib.voltToFea</code> submodule (originally Tiro Typeworks' "Volto") for converting VOLT OpenType Layout sources to FEA format (<a href="https://redirect.github.com/fonttools/fonttools/issues/3164">#3164</a>).</li>
</ul>
<h2>4.40.0</h2>
<ul>
<li>Published native binary wheels to PyPI for all the python minor versions and platform and architectures currently supported that would benefit from this. They will include precompiled Cython-accelerated modules (e.g. cu2qu) without requiring to compile them from source. The pure-python wheel and source distribution will continue to be published as always (pip will automatically choose them when no binary wheel is available for the given platform, e.g. pypy). Use <code>pip install --no-binary=fonttools fonttools</code> to explicitly request pip to install from the pure-python source.</li>
<li>[designspaceLib|varLib] Add initial support for specifying axis mappings and build <code>avar2</code> table from those (<a href="https://redirect.github.com/fonttools/fonttools/issues/3123">#3123</a>).</li>
<li>[feaLib] Support variable ligature caret position (<a href="https://redirect.github.com/fonttools/fonttools/issues/3130">#3130</a>).</li>
<li>[varLib|glyf] Added option to --drop-implied-oncurves; test for impliable oncurve points either before or after rounding (<a href="https://redirect.github.com/fonttools/fonttools/issues/3146">#3146</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3147">#3147</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3155">#3155</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3156">#3156</a>).</li>
<li>[TTGlyphPointPen] Don't error with empty contours, simply ignore them (<a href="https://redirect.github.com/fonttools/fonttools/issues/3145">#3145</a>).</li>
<li>[sfnt] Fixed str vs bytes remnant of py3 transition in code dealing with de/compiling WOFF metadata (<a href="https://redirect.github.com/fonttools/fonttools/issues/3129">#3129</a>).</li>
<li>[instancer-solver] Fixed bug when moving default instance with sparse masters (<a href="https://redirect.github.com/fonttools/fonttools/issues/3139">#3139</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3140">#3140</a>).</li>
<li>[feaLib] Simplify variable scalars that don’t vary (<a href="https://redirect.github.com/fonttools/fonttools/issues/3132">#3132</a>).</li>
<li>[pens] Added filter pen that explicitly emits closing line when lastPt != movePt (<a href="https://redirect.github.com/fonttools/fonttools/issues/3100">#3100</a>).</li>
<li>[varStore] Improve optimize algorithm and better document the algorithm (<a href="https://redirect.github.com/fonttools/fonttools/issues/3124">#3124</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3127">#3127</a>).<br />
Added <code>quantization</code> option (<a href="https://redirect.github.com/fonttools/fonttools/issues/3126">#3126</a>).</li>
<li>Added CI workflow config file for building native binary wheels (<a href="https://redirect.github.com/fonttools/fonttools/issues/3121">#3121</a>).</li>
<li>[fontBuilder] Added glyphDataFormat=0 option; raise error when glyphs contain cubic outlines but glyphDataFormat was not explicitly set to 1 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3113">#3113</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3119">#3119</a>).</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/fonttools/fonttools/blob/main/NEWS.rst">fonttools's changelog</a>.</em></p>
<blockquote>
<h2>4.43.0 (released 2023-09-29)</h2>
<ul>
<li>[subset] Set up lxml <code>XMLParser(resolve_entities=False)</code> when parsing OT-SVG documents
to prevent XML External Entity (XXE) attacks (9f61271dc):
<a href="https://codeql.github.com/codeql-query-help/python/py-xxe/">https://codeql.github.com/codeql-query-help/python/py-xxe/</a></li>
<li>[varLib.iup] Added workaround for a Cython bug in <code>iup_delta_optimize</code> that was
leading to IUP tolerance being incorrectly initialised, resulting in sub-optimal deltas
(60126435d, <a href="https://redirect.github.com/cython/cython/issues/5732">cython/cython#5732</a>).</li>
<li>[varLib] Added new command-line entry point <code>fonttools varLib.avar</code> to add an
<code>avar</code> table to an existing VF from axes mappings in a .designspace file (0a3360e52).</li>
<li>[instancer] Fixed bug whereby no longer used variation regions were not correctly pruned
after VarData optimization (<a href="https://redirect.github.com/fonttools/fonttools/issues/3268">#3268</a>).</li>
<li>Added support for Python 3.12 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3283">#3283</a>).</li>
</ul>
<h2>4.42.1 (released 2023-08-20)</h2>
<ul>
<li>[t1Lib] Fixed several Type 1 issues (<a href="https://redirect.github.com/fonttools/fonttools/issues/3238">#3238</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3240">#3240</a>).</li>
<li>[otBase/packer] Allow sharing tables reached by different offset sizes (<a href="https://redirect.github.com/fonttools/fonttools/issues/3241">#3241</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3236">#3236</a>).</li>
<li>[varLib/merger] Fix Cursive attachment merging error when all anchors are NULL (<a href="https://redirect.github.com/fonttools/fonttools/issues/3248">#3248</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3247">#3247</a>).</li>
<li>[ttLib] Fixed warning when calling <code>addMultilingualName</code> and <code>ttFont</code> parameter was not
passed on to <code>findMultilingualName</code> (<a href="https://redirect.github.com/fonttools/fonttools/issues/3253">#3253</a>).</li>
</ul>
<h2>4.42.0 (released 2023-08-02)</h2>
<ul>
<li>[varLib] Use sentinel value 0xFFFF to mark a glyph advance in hmtx/vmtx as non
participating, allowing sparse masters to contain glyphs for variation purposes other
than {H,V}VAR (<a href="https://redirect.github.com/fonttools/fonttools/issues/3235">#3235</a>).</li>
<li>[varLib/cff] Treat empty glyphs in non-default masters as missing, thus not participating
in CFF2 delta computation, similarly to how varLib already treats them for gvar (<a href="https://redirect.github.com/fonttools/fonttools/issues/3234">#3234</a>).</li>
<li>Added varLib.avarPlanner script to deduce 'correct' avar v1 axis mappings based on
glyph average weights (<a href="https://redirect.github.com/fonttools/fonttools/issues/3223">#3223</a>).</li>
</ul>
<h2>4.41.1 (released 2023-07-21)</h2>
<ul>
<li>[subset] Fixed perf regression in v4.41.0 by making <code>NameRecordVisitor</code> only visit
tables that do contain nameID references (<a href="https://redirect.github.com/fonttools/fonttools/issues/3213">#3213</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3214">#3214</a>).</li>
<li>[varLib.instancer] Support instancing fonts containing null ConditionSet offsets in
FeatureVariationRecords (<a href="https://redirect.github.com/fonttools/fonttools/issues/3211">#3211</a>, <a href="https://redirect.github.com/fonttools/fonttools/issues/3212">#3212</a>).</li>
<li>[statisticsPen] Report font glyph-average weight/width and font-wide slant.</li>
<li>[fontBuilder] Fixed head.created date incorrectly set to 0 instead of the current
timestamp, regression introduced in v4.40.0 (<a href="https://redirect.github.com/fonttools/fonttools/issues/3210">#3210</a>).</li>
<li>[varLib.merger] Support sparse <code>CursivePos</code> masters (<a href="https://redirect.github.com/fonttools/fonttools/issues/3209">#3209</a>).</li>
</ul>
<h2>4.41.0 (released 2023-07-12)</h2>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/fonttools/fonttools/commit/145460e77f772767608e677737f2d00147152620"><code>145460e</code></a> Release 4.43.0</li>
<li><a href="https://github.com/fonttools/fonttools/commit/64f3fd83d901f2da882cca5efc38ebdfd2718ab7"><code>64f3fd8</code></a> Update changelog [skip ci]</li>
<li><a href="https://github.com/fonttools/fonttools/commit/7aea49e88cf997b3e0bdfd7f6330a16578c9ce5a"><code>7aea49e</code></a> Merge pull request <a href="https://redirect.github.com/fonttools/fonttools/issues/3283">#3283</a> from hugovk/main</li>
<li><a href="https://github.com/fonttools/fonttools/commit/4470c4401d628f273d79bf4bd0df42f1217fcc53"><code>4470c44</code></a> Bump requirements.txt to support Python 3.12</li>
<li><a href="https://github.com/fonttools/fonttools/commit/0c87cbad6e21c0f2511cdfc70ad7e1a572e84017"><code>0c87cba</code></a> Bump scipy for Python 3.12 support</li>
<li><a href="https://github.com/fonttools/fonttools/commit/eda6fa5cfbdfaf1d54cf391ed9c86b72288882a2"><code>eda6fa5</code></a> Add support for Python 3.12</li>
<li><a href="https://github.com/fonttools/fonttools/commit/0e033b0e5cd771f520bbf7346dedb7751677bd24"><code>0e033b0</code></a> Bump reportlab from 3.6.12 to 3.6.13 in /Doc</li>
<li><a href="https://github.com/fonttools/fonttools/commit/60126435dff31b489a9ea1a8dcc260101e5b1c20"><code>6012643</code></a> [iup] Work around cython bug</li>
<li><a href="https://github.com/fonttools/fonttools/commit/b14268a23c5a0dd644d2479064e4018a6b084b23"><code>b14268a</code></a> [iup] Remove copy/pasta</li>
<li><a href="https://github.com/fonttools/fonttools/commit/0a3360e52727cdefce2e9b28286b074faf99033c"><code>0a3360e</code></a> [varLib.avar] New module to compile avar from .designspace file</li>
<li>Additional commits viewable in <a href="https://github.com/fonttools/fonttools/compare/4.31.1...4.43.0">compare view</a></li>
</ul>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=fonttools&package-manager=pip&previous-version=4.31.1&new-version=4.43.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28417/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28417",
"html_url": "https://github.com/huggingface/transformers/pull/28417",
"diff_url": "https://github.com/huggingface/transformers/pull/28417.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28417.patch",
"merged_at": 1704882164000
} |
https://api.github.com/repos/huggingface/transformers/issues/28416 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28416/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28416/comments | https://api.github.com/repos/huggingface/transformers/issues/28416/events | https://github.com/huggingface/transformers/issues/28416 | 2,072,663,931 | I_kwDOCUB6oc57ild7 | 28,416 | Loading Phi 1.5 model from the hub gives warning that model is uninitialized | {
"login": "gabeorlanski",
"id": 18234433,
"node_id": "MDQ6VXNlcjE4MjM0NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/18234433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gabeorlanski",
"html_url": "https://github.com/gabeorlanski",
"followers_url": "https://api.github.com/users/gabeorlanski/followers",
"following_url": "https://api.github.com/users/gabeorlanski/following{/other_user}",
"gists_url": "https://api.github.com/users/gabeorlanski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gabeorlanski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gabeorlanski/subscriptions",
"organizations_url": "https://api.github.com/users/gabeorlanski/orgs",
"repos_url": "https://api.github.com/users/gabeorlanski/repos",
"events_url": "https://api.github.com/users/gabeorlanski/events{/privacy}",
"received_events_url": "https://api.github.com/users/gabeorlanski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"From the [model card](https://huggingface.co./microsoft/phi-2):\r\n> If you are using transformers>=4.36.0, always load the model with trust_remote_code=True to prevent side-effects.\r\n\r\nYou'll find a few examples on how to load the model there. The following should work:\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/phi-2\", trust_remote_code=True)\r\n```",
"I tried that and also tried deleting the downloaded file then running the code, issue still persists only with Phi1.5. ",
"Hey, that is expected, `model = AutoModelForCausalLM.from_pretrained(\"microsoft/phi-2\", use_safetensors= True, trust_remote_code =True)` will allow you to do that, but if you want to use the converted checkpoints you should use the ones share by @susnato here hf.co/susnato/phi-2 \r\n\r\nwill be fixed by #28392 ",
"So with Phi-2 that does work. But with Phi 1.5 the model hangs and never loads. I tried deleting the downloaded model from before, and it only downloads the config this time but, again, does not load.\r\n\r\nI updated the issue accordingly to specify this is only happening with Phi 1.5.",
"What checkpoint are you using for phi 1.5? @gabeorlanski ",
"I am using [microsoft/phi-1_5](https://huggingface.co./microsoft/phi-1_5)\r\n\r\nTo update, it _does_ end up finishing, but gave me this error:\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/u/g/o/gorlanski/miniconda3/envs/syncode/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py\", line 286, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/u/g/o/gorlanski/miniconda3/envs/syncode/lib/python3.10/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co./api/models/microsoft/phi-1_5/discussions?page_index=11923\r\n```\r\n\r\nI set logging to debug level, and it appears to be going through all of the discussion pages:\r\n```shell\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] Starting new HTTPS connection (2): huggingface.co:443\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] https://huggingface.co.:443 \"HEAD /microsoft/phi-1_5/resolve/main/config.json HTTP/1.1\" 200 0\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] https://huggingface.co.:443 \"GET /api/models/microsoft/phi-1_5 HTTP/1.1\" 200 3080\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] https://huggingface.co.:443 \"GET /api/models/microsoft/phi-1_5/commits/main HTTP/1.1\" 200 16853\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] https://huggingface.co.:443 \"GET /api/models/microsoft/phi-1_5/commits/main?p=1 HTTP/1.1\" 200 1602\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] https://huggingface.co.:443 \"GET /api/models/microsoft/phi-1_5/discussions?page_index=0 HTTP/1.1\" 200 20155\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] https://huggingface.co.:443 \"GET /api/models/microsoft/phi-1_5/discussions?page_index=1 HTTP/1.1\" 200 20155\r\n[2024-01-10 10:00:25][urllib3.connectionpool][DEBUG] https://huggingface.co.:443 \"GET 
/api/models/microsoft/phi-1_5/discussions?page_index=2 HTTP/1.1\" 200 20155\r\n```\r\n\r\n",
"Can you please try to load from - `susnato/phi-1_5_dev` ? @gabeorlanski ",
"That one seems to work fine. It does still get the discussions (I assume that is intended)\r\n\r\nWhen looking at the `microsoft/phi-1_5` there appears to have been some recent updates, could that be the cause?",
"Yes @gabeorlanski, the recent updates should make the repo compatible with the library code...tagging @gugarosa to have a look at the [error message](https://github.com/huggingface/transformers/issues/28416#issuecomment-1885140212). \r\n\r\nUntil it is fixed you can use phi 1.5 from `susnato/phi-1_5_dev`",
"The download might be probably hanging because it is trying to find safetensors-based files, whereas Phi-1.5 was not saved with safetensors.\r\n\r\nRegarding the loading issues, using:\r\n```\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"microsoft/phi-1_5\", trust_remote_code=True)\r\n```\r\n\r\nShould be working now regardless of you transformers version.",
"@gugarosa that fixed it. Thanks so much! Closing the issue"
] | 1,704 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'cpu', 'offload_param_device': 'none', 'zero3_init_flag': False, 'zero_stage': 2}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following code:
```Python
from transformers import PhiForCausalLM
model = PhiForCausalLM.from_pretrained("microsoft/phi-1_5")
```
This happens for all phi models and has only started happening recently. It worked fine a few days ago. I have tried this both in my conda environment, a fresh one, and using the official huggingface docker image. It happened in all three.
Additionally, if I run:
```Python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)
```
The same error occurs.
If I run:
```Python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", use_safetensors=True, trust_remote_code=True)
```
The model hangs and never loads.
### Expected behavior
The model should load with the initialized weights from the hub. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28416/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28415 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28415/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28415/comments | https://api.github.com/repos/huggingface/transformers/issues/28415/events | https://github.com/huggingface/transformers/issues/28415 | 2,072,579,042 | I_kwDOCUB6oc57iQvi | 28,415 | Can not load model after finetuning PHI2 model | {
"login": "zhangmiaosen2000",
"id": 59921236,
"node_id": "MDQ6VXNlcjU5OTIxMjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/59921236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhangmiaosen2000",
"html_url": "https://github.com/zhangmiaosen2000",
"followers_url": "https://api.github.com/users/zhangmiaosen2000/followers",
"following_url": "https://api.github.com/users/zhangmiaosen2000/following{/other_user}",
"gists_url": "https://api.github.com/users/zhangmiaosen2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhangmiaosen2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhangmiaosen2000/subscriptions",
"organizations_url": "https://api.github.com/users/zhangmiaosen2000/orgs",
"repos_url": "https://api.github.com/users/zhangmiaosen2000/repos",
"events_url": "https://api.github.com/users/zhangmiaosen2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhangmiaosen2000/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"Running into the exact same issue after fine-tuning with embedding reshaped ",
"> But when I try to use this code to load finetuned model:\r\n\r\nI assume that the error appears when you try to merge the adapter and the base model. The problem is that phi-2 expects the vocab_size to be a multiple of 64 (see [configuration_phi.py](https://huggingface.co./microsoft/phi-2/blob/834565c23f9b28b96ccbeabe614dd906b6db551a/configuration_phi.py#L44)). \r\n\r\nWhenever you add new tokens, you have to pad the embedding dimension to a multiple of 64. This has to be done after adding tokens when training **and** before merging the trained adapter. Example:\r\n\r\n```python\r\ntokenizer.add_tokens([\"<|im_start|>\", \"<PAD>\"])\r\ntokenizer.pad_token = \"<PAD>\"\r\ntokenizer.add_special_tokens(dict(eos_token=\"<|im_end|>\"))\r\nmodel.resize_token_embeddings(\r\n new_num_tokens=len(tokenizer),\r\n pad_to_multiple_of=64) # phi2 default is 64, see configuration_phi.py\r\n```\r\n\r\n[Notebook](https://github.com/geronimi73/phi2-finetune/blob/main/nb_qlora.ipynb) with a complete example for a QLoRA finetune of phi2.",
"Thanks @geronimi73 for the answer 🤗 ",
"> > But when I try to use this code to load finetuned model:\r\n> Whenever you add new tokens, you have to pad the embedding dimension to a multiple of 64. This has to be done after adding tokens when training **and** before merging the trained adapter. Example:\r\n\r\nsorry, forget everything I wrote please. It works but it's not necessary. The model's embedding layer is big enough to accomodate additional tokens, see this: https://huggingface.co./microsoft/phi-2/discussions/22#659d8ba950c1bbee5be6f179\r\n\r\nin other words, when you add tokens to the tokenizer, no need to resize the embeddings. in code:\r\n\r\n```python\r\nif tokenizer.pad_token is None:\r\n tokenizer.add_special_tokens(dict(pad_token=DEFAULT_PAD_TOKEN))\r\n # not needed:\r\n # smart_tokenizer_and_embedding_resize(\r\n # special_tokens_dict=dict(pad_token=DEFAULT_PAD_TOKEN),\r\n # tokenizer=tokenizer,\r\n # model=model,\r\n #)\r\n\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,706 | null | NONE | null | ### System Info
Here is my code for finetuning the PHI-2 model:
```
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, trust_remote_code=True)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "microsoft/phi-2",
    cache_dir=training_args.cache_dir,
    model_max_length=training_args.model_max_length,
    padding_side="right",
    use_fast=True,
    trust_remote_code=True
)
if tokenizer.pad_token is None:
    smart_tokenizer_and_embedding_resize(
        special_tokens_dict=dict(pad_token=DEFAULT_PAD_TOKEN),
        tokenizer=tokenizer,
        model=model,
    )
```
where:
```
from typing import Dict

import transformers


def smart_tokenizer_and_embedding_resize(
    special_tokens_dict: Dict,
    tokenizer: transformers.PreTrainedTokenizer,
    model: transformers.PreTrainedModel,
):
    """Resize tokenizer and embedding.
    Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
    """
    num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
    model.resize_token_embeddings(len(tokenizer))
    if num_new_tokens > 0:
        input_embeddings = model.get_input_embeddings().weight.data
        output_embeddings = model.get_output_embeddings().weight.data
        input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
        output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
        input_embeddings[-num_new_tokens:] = input_embeddings_avg
        output_embeddings[-num_new_tokens:] = output_embeddings_avg
```
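For intuition, the mean-initialization step above boils down to the following (a minimal NumPy sketch of the idea, not the actual Transformers code — the shapes are made up for illustration):

```python
import numpy as np

emb = np.arange(12.0).reshape(4, 3)                 # pretend embedding matrix: 4 tokens, dim 3
num_new = 2
grown = np.vstack([emb, np.zeros((num_new, 3))])    # resized matrix with room for 2 new tokens
grown[-num_new:] = emb.mean(axis=0, keepdims=True)  # new rows start at the mean of the old rows

assert grown.shape == (6, 3)
assert np.allclose(grown[-1], emb.mean(axis=0))
```

Starting new token embeddings at the mean of the existing ones is a common heuristic so that newly added tokens begin in a "typical" region of the embedding space rather than at zero.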
Then I successfully finetuned the model and saved it to xxx/checkpoint-500.
But when I try to use this code to load the finetuned model:
```
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
```
I always get this:
```
Traceback (most recent call last):
File "xxx.py", line 272, in <module>
main()
File "xxx.py", line 151, in main
tokenizer, model = get_model(base_model=args.model, page_attention=args.vllm)
File "xxx.py", line 80, in get_model
model = AutoModelForCausalLM.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 561, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3480, in from_pretrained
) = cls._load_pretrained_model(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3870, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 743, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/usr/local/lib/python3.8/dist-packages/accelerate/utils/modeling.py", line 285, in set_module_tensor_to_device
raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([50296, 2560]) in "weight" (which has shape torch.Size([50304, 2560])), this look incorrect.
```
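Note that the two shapes in the error differ by exactly the padding of the vocabulary up to the next multiple of 64, which is how phi-2 pads its embedding matrix. A quick sanity check (illustrative arithmetic only, assuming the multiple-of-64 padding described for this model):

```python
saved_vocab = 50296  # vocab_size written to config.json after adding the pad token
multiple = 64        # phi-2 pads embedding dimensions to a multiple of 64

padded = -(-saved_vocab // multiple) * multiple  # ceil(50296 / 64) * 64
assert padded == 50304                           # the shape the checkpoint actually carries
```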
**Can you help me with that?**
Note that the saved config.json is:
```
{
"_name_or_path": "microsoft/phi-2",
"activation_function": "gelu_new",
"architectures": [
"PhiForCausalLM"
],
"attn_pdrop": 0.0,
"auto_map": {
"AutoConfig": "microsoft/phi-2--configuration_phi.PhiConfig",
"AutoModelForCausalLM": "microsoft/phi-2--modeling_phi.PhiForCausalLM"
},
"embd_pdrop": 0.0,
"flash_attn": true,
"flash_rotary": true,
"fused_dense": true,
"img_processor": null,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "phi-msft",
"n_embd": 2560,
"n_head": 32,
"n_head_kv": null,
"n_inner": null,
"n_layer": 32,
"n_positions": 2048,
"resid_pdrop": 0.1,
"rotary_dim": 32,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.36.2",
"use_cache": false,
"vocab_size": 50296
}
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
,
### Expected behavior
, | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28415/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28414 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28414/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28414/comments | https://api.github.com/repos/huggingface/transformers/issues/28414/events | https://github.com/huggingface/transformers/pull/28414 | 2,072,545,115 | PR_kwDOCUB6oc5jlleD | 28,414 | Fix mismatching loading in from_pretrained with/without accelerate | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28414). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@fxmarty Could you add a test that passes with these changes? ",
"@amyeroberts sure, where would be a good place for it?",
"@fxmarty I'd suggest putting a test in `modeling_utils` which does effectively the same thing as in the example in PR description with a small model. Happy for you to put it somewhere else if you think it's better ",
"There seem to be unrelated issues in the CI\r\n\r\n```\r\nBuilding wheels for collected packages: natten\r\n Building wheel for natten (setup.py) ... error\r\n error: subprocess-exited-with-error\r\n \r\n × python setup.py bdist_wheel did not run successfully.\r\n │ exit code: 1\r\n ╰─> [80 lines of output]\r\n Building NATTEN for CPU ONLY.\r\n Number of workers: 9\r\n running bdist_wheel\r\n running build\r\n running build_py\r\n creating build\r\n creating build/lib.linux-x86_64-cpython-38\r\n creating build/lib.linux-x86_64-cpython-38/natten\r\n copying src/natten/__init__.py -> build/lib.linux-x86_64-cpython-38/natten\r\n copying src/natten/flops.py -> build/lib.linux-x86_64-cpython-38/natten\r\n copying src/natten/functional.py -> build/lib.linux-x86_64-cpython-38/natten\r\n copying src/natten/natten1d.py -> build/lib.linux-x86_64-cpython-38/natten\r\n copying src/natten/natten2d.py -> build/lib.linux-x86_64-cpython-38/natten\r\n copying src/natten/natten3d.py -> build/lib.linux-x86_64-cpython-38/natten\r\n copying src/natten/nested.py -> build/lib.linux-x86_64-cpython-38/natten\r\n creating build/lib.linux-x86_64-cpython-38/natten/utils\r\n copying src/natten/utils/__init__.py -> build/lib.linux-x86_64-cpython-38/natten/utils\r\n copying src/natten/utils/tensor.py -> build/lib.linux-x86_64-cpython-38/natten/utils\r\n copying src/natten/utils/testing.py -> build/lib.linux-x86_64-cpython-38/natten/utils\r\n copying src/natten/utils/typing.py -> build/lib.linux-x86_64-cpython-38/natten/utils\r\n running build_ext\r\n Traceback (most recent call last):\r\n File \"/tmp/pip-install-musj9gr5/natten_ea055ed283804ee0b0ef252dd4ad0d6c/setup.py\", line 147, in build_extension\r\n subprocess.check_output([\"cmake\", \"--version\"])\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/subprocess.py\", line 415, in check_output\r\n return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,\r\n File 
\"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/subprocess.py\", line 493, in run\r\n with Popen(*popenargs, **kwargs) as process:\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/subprocess.py\", line 858, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/subprocess.py\", line 1704, in _execute_child\r\n raise child_exception_type(errno_num, err_msg, err_filename)\r\n FileNotFoundError: [Errno 2] No such file or directory: 'cmake'\r\n \r\n During handling of the above exception, another exception occurred:\r\n \r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"/tmp/pip-install-musj9gr5/natten_ea055ed283804ee0b0ef252dd4ad0d6c/setup.py\", line 210, in <module>\r\n setup(\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/__init__.py\", line 103, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/core.py\", line 185, in setup\r\n return run_commands(dist)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/core.py\", line 201, in run_commands\r\n dist.run_commands()\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/dist.py\", line 969, in run_commands\r\n self.run_command(cmd)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/dist.py\", line 963, in run_command\r\n super().run_command(command)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/dist.py\", line 988, in run_command\r\n cmd_obj.run()\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/wheel/bdist_wheel.py\", line 368, in run\r\n self.run_command(\"build\")\r\n 
File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/cmd.py\", line 318, in run_command\r\n self.distribution.run_command(command)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/dist.py\", line 963, in run_command\r\n super().run_command(command)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/dist.py\", line 988, in run_command\r\n cmd_obj.run()\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/command/build.py\", line 131, in run\r\n self.run_command(cmd_name)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/cmd.py\", line 318, in run_command\r\n self.distribution.run_command(command)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/dist.py\", line 963, in run_command\r\n super().run_command(command)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/dist.py\", line 988, in run_command\r\n cmd_obj.run()\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/command/build_ext.py\", line 88, in run\r\n _build_ext.run(self)\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py\", line 345, in run\r\n self.build_extensions()\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py\", line 467, in build_extensions\r\n self._build_extensions_serial()\r\n File \"/home/circleci/.pyenv/versions/3.8.12/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py\", line 493, in _build_extensions_serial\r\n self.build_extension(ext)\r\n File \"/tmp/pip-install-musj9gr5/natten_ea055ed283804ee0b0ef252dd4ad0d6c/setup.py\", line 149, in build_extension\r\n raise RuntimeError(\"Cannot find CMake 
executable\")\r\n RuntimeError: Cannot find CMake executable\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n ERROR: Failed building wheel for natten\r\n Running setup.py clean for natten\r\nFailed to build natten\r\nERROR: Could not build wheels for natten, which is required to install pyproject.toml-based projects\r\n```\r\n\r\n@ydshieh do you know if something was changed in the workflows?",
"Rebase on main should work - https://huggingface.slack.com/archives/C01NE71C4F7/p1704883154443469",
"Also cc @SunMarc ",
"@amyeroberts Thank you for the review. I modified the test to use a small model (1.6 MB)."
] | 1,704 | 1,706 | 1,705 | COLLABORATOR | null | It appears that passing a `device_map` may result in some parameters being not contiguous in the loaded model, which is not the case when loading a model without a `device_map`.
See for instance:
```python
from transformers import OwlViTProcessor, OwlViTForObjectDetection
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16")
print("is contig (no device_map):", model.owlvit.visual_projection.weight.is_contiguous())
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch16", device_map="auto")
print("is contig (device_map):", model.owlvit.visual_projection.weight.is_contiguous())
```
printing
```
is contig (no device_map): True
is contig (device_map): False
```
A byproduct of this bug is that when a model is loaded with a `device_map`, then using
```python
model.save_pretrained("owlvit", safe_serialization=True)
```
results in
```
Traceback (most recent call last):
File "<tmp 2>", line 13, in <module>
model.save_pretrained("owlvit", save_config=True, safe_serialization=True)
File "/home/fxmarty/hf_internship/transformers/src/transformers/modeling_utils.py", line 2406, in save_pretrained
safe_save_file(shard, os.path.join(save_directory, shard_file), metadata={"format": "pt"})
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/safetensors/torch.py", line 281, in save_file
serialize_file(_flatten(tensors), filename, metadata=metadata)
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/safetensors/torch.py", line 481, in _flatten
"data": _tobytes(v, k),
File "/home/fxmarty/anaconda3/envs/hf-inf/lib/python3.9/site-packages/safetensors/torch.py", line 396, in _tobytes
raise ValueError(
ValueError: You are trying to save a non contiguous tensor: `owlvit.visual_projection.weight` which is not allowed. It either means you are trying to save tensors which are reference of each other in which case it's recommended to save only the full tensors, and reslice at load time, or simply call `.contiguous()` on your tensor to pack it before saving.
```
This bug stems from the fact that in the case we don't use accelerate, the weights are loaded through [`_load_from_state_dict`](https://github.com/huggingface/transformers/blob/357971ec367fecb9951ae3218feafece5f61416a/src/transformers/modeling_utils.py#L600), which makes use of [`param.copy_(input_param)`](https://github.com/pytorch/pytorch/blob/db79ceb110f6646523019a59bbd7b838f43d4a86/torch/nn/modules/module.py#L2040C29-L2040C29) which preserves the contiguity of the module's parameters. On the contrary, Accelerate's [`set_module_tensor_to_device`](https://github.com/huggingface/accelerate/blob/3969731ce827b088fcc56ea790935cdece12f800/src/accelerate/utils/modeling.py#L370) appears to override the existing value [simply by the one from the state dict](https://github.com/huggingface/transformers/blob/357971ec367fecb9951ae3218feafece5f61416a/src/transformers/modeling_utils.py#L716-L758), which may be not contiguous.
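The mechanism can be reproduced in isolation (a minimal PyTorch sketch of the two code paths, not the Transformers code itself):

```python
import torch

lin = torch.nn.Linear(4, 4)   # a freshly initialized parameter is contiguous
assert lin.weight.is_contiguous()

src = torch.randn(4, 4).t()   # a transposed tensor is a non-contiguous view
assert not src.is_contiguous()

# Path 1: param.copy_(input_param), as in _load_from_state_dict —
# values are written into the existing storage, so contiguity is preserved.
lin.weight.data.copy_(src)
assert lin.weight.is_contiguous()

# Path 2: overriding the parameter with the state-dict tensor itself,
# which is effectively what happens via set_module_tensor_to_device —
# the non-contiguous layout of the source is adopted.
lin.weight.data = src
assert not lin.weight.is_contiguous()
```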
This PR fixes the issue. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28414/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28414",
"html_url": "https://github.com/huggingface/transformers/pull/28414",
"diff_url": "https://github.com/huggingface/transformers/pull/28414.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28414.patch",
"merged_at": 1705411791000
} |
https://api.github.com/repos/huggingface/transformers/issues/28413 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28413/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28413/comments | https://api.github.com/repos/huggingface/transformers/issues/28413/events | https://github.com/huggingface/transformers/issues/28413 | 2,072,410,625 | I_kwDOCUB6oc57hnoB | 28,413 | CausalLMOutputWithPast does not output hidden states | {
"login": "Tiziano41",
"id": 156085316,
"node_id": "U_kgDOCU2sRA",
"avatar_url": "https://avatars.githubusercontent.com/u/156085316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tiziano41",
"html_url": "https://github.com/Tiziano41",
"followers_url": "https://api.github.com/users/Tiziano41/followers",
"following_url": "https://api.github.com/users/Tiziano41/following{/other_user}",
"gists_url": "https://api.github.com/users/Tiziano41/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tiziano41/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tiziano41/subscriptions",
"organizations_url": "https://api.github.com/users/Tiziano41/orgs",
"repos_url": "https://api.github.com/users/Tiziano41/repos",
"events_url": "https://api.github.com/users/Tiziano41/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tiziano41/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @Tiziano41 \r\nThanks for the issue! It seems you are using the `trust_remote_code=True` model, therefore the bug should be on their end. Would you mind opening a similar issue on the original repository on 🤗 Hub? https://huggingface.co./microsoft/phi-2 ",
"Alternatively you can use the HF version of Phi-2 (i.e. without `trust_remote_code=True`) which should be here https://huggingface.co./susnato/phi-2 if I am not mistaken cc @ArthurZucker @susnato ",
"Yes please use it from `susnato/phi-2` until the codebase is changed and in proper order at `microsoft/phi-2` 😃 ",
"@younesbelkada @susnato OK, Thank you both for the fast reply! I will open the issue on their repository.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
sentence-transformers 2.2.2
transformers 4.31.0
numpy 1.19
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
This is the code I'm using:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)

inputs = tokenizer('''
Name the writings of Dante Alighieri.
Answer:
''', return_tensors="pt", return_attention_mask=False)

embeddings = model(**inputs, output_hidden_states=True)
```
![image](https://github.com/huggingface/transformers/assets/156085316/040a75ea-18ba-4d4b-89c9-c7336869fb67)
### Expected behavior
As you can see in the image, the hidden_states attribute is None, while I would expect the intermediate layers representations.
I tried to specify the output_hidden_states = True, both in model instantiation and on inference but neither works.
Is there any parameter that I'm missing ? That's all I have seen on the available documentation.
Thank you in advance for your support. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28413/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28412 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28412/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28412/comments | https://api.github.com/repos/huggingface/transformers/issues/28412/events | https://github.com/huggingface/transformers/issues/28412 | 2,072,311,882 | I_kwDOCUB6oc57hPhK | 28,412 | TGI Support for Mixtral AWQ | {
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry, should have been posted in text-generation inference.\r\n\r\nAlso, TGI does work with AWQ. It's just that it doesn't work - at the time of writing with TheBloke's quant. See details [here](https://huggingface.co./TheBloke/Mixtral-8x7B-Instruct-v0.1-AWQ/discussions/1)."
] | 1,704 | 1,704 | 1,704 | NONE | null | ### Feature request
Currently, TGI seems to be able to load Mixtral AWQ models. However, the responses returned are blank.
### Motivation
It's possible to run inference on a Mixtral model from 16-bit weights (incl. with eetq if desired), but the downloading of weights is buggy and slow; see [here](https://github.com/huggingface/text-generation-inference/issues/1413). Also, it would be nice to be able to just download 4-bit weights.
### Your contribution
The AWQ weights are good because they work in vLLM. So, there seems to be a bug in the TGI implementation (although it's unclear whether Mixtral is loading as a fluke, because it doesn't seem to be explicitly supported for AWQ). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28412/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28412/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28411 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28411/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28411/comments | https://api.github.com/repos/huggingface/transformers/issues/28411/events | https://github.com/huggingface/transformers/issues/28411 | 2,072,297,197 | I_kwDOCUB6oc57hL7t | 28,411 | Indices element out of bounds from inclusive range | {
"login": "nogifeet",
"id": 72322393,
"node_id": "MDQ6VXNlcjcyMzIyMzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/72322393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nogifeet",
"html_url": "https://github.com/nogifeet",
"followers_url": "https://api.github.com/users/nogifeet/followers",
"following_url": "https://api.github.com/users/nogifeet/following{/other_user}",
"gists_url": "https://api.github.com/users/nogifeet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nogifeet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nogifeet/subscriptions",
"organizations_url": "https://api.github.com/users/nogifeet/orgs",
"repos_url": "https://api.github.com/users/nogifeet/repos",
"events_url": "https://api.github.com/users/nogifeet/events{/privacy}",
"received_events_url": "https://api.github.com/users/nogifeet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! THanks for opening an issue, but without an isolated reproducible snippet + a traceback that shows this is an issue from transfromers I am not sure how we can help! 🤗 "
] | 1,704 | 1,704 | 1,704 | NONE | null | ### System Info
Hello, we are using the TrOCR model exported to ONNX. We notice a problem with the large checkpoints, for both printed and handwritten variants, when we run inference using the onnxruntime Java library.
Dataset: IAM handwritten (Lines)
Different behaviours are observed on CPU and GPU:
CPU: (we might get an error like below)
```
Status Message: Non-zero status code returned while running the Gather node. Name:'Gather_346' Status Message:
indices element out of data bounds, idx=514 must be within the inclusive range [-514,513]
	at ai.onnxruntime.OrtSession.run(Native Method)
	at ai.onnxruntime.OrtSession.run(OrtSession.java:301)
	at ai.onnxruntime.OrtSession.run(OrtSession.java:242)
```
GPU: We notice that the end token is not generated and the decoder keeps repeating the tokens after a point.
This is the main problem: usually, the Gather_346 and Gather_320 operators fail and throw a data-bounds error.
We have also noticed different behaviour when we turn caching on/off. Note that we don't face this problem on the base or small checkpoints, only on the "large" checkpoints. We are looking to understand whether this is an onnxruntime issue or a Hugging Face one; please let us know.
A similar issue was raised in the onnxruntime page: https://github.com/microsoft/onnxruntime/issues/2080
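The bounds in the message point at a 514-entry table (valid indices -514..513, i.e. the ONNX Gather negative-index convention), which is consistent with the decoder's position embeddings being exhausted once generation runs past the maximum supported length. A minimal, illustrative sketch of that bounds check follows; the table size is taken from the error message, and the helper is hypothetical, not actual onnxruntime code:

```python
def gather_index_check(table_size: int, idx: int) -> int:
    """Validate a Gather index the way ONNX Runtime does: valid indices
    lie in the inclusive range [-table_size, table_size - 1], with
    negative values counting back from the end of the table."""
    if not (-table_size <= idx <= table_size - 1):
        raise IndexError(
            f"indices element out of data bounds, idx={idx} must be "
            f"within the inclusive range [{-table_size},{table_size - 1}]"
        )
    return idx % table_size  # normalize negative indices to positive offsets

# With a 514-entry position table, position 513 is the last valid one,
# so the 515th generated token (idx=514) triggers the failure above.
gather_index_check(514, 513)   # last valid position
gather_index_check(514, -514)  # first position, addressed from the end
```

Capping generation with a max_length below the position-table size would avoid hitting this bound, though it does not explain why the large checkpoints fail to emit the end token in the first place.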
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Export the large checkpoints of TR-OCR.
2. Run a simple example from the IAM dataset image attached.
3. Use this image ![e04-083-00](https://github.com/huggingface/transformers/assets/72322393/f4210a4b-bc28-42e2-a3a7-ff05a9f022ab)
4. Don't use any max_length limit and you will notice that the end token is not generated and the tokens are repeated.
5. Current Output: The edges of the transoms should be bevelled to be edges to the edges of the
### Expected behavior
Current Output: The edges of the transoms should be bevelled to be edges to the edges of the
Expected Output: The edges of the transoms should be bevelled to | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28411/timeline | not_planned | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28410 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28410/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28410/comments | https://api.github.com/repos/huggingface/transformers/issues/28410/events | https://github.com/huggingface/transformers/issues/28410 | 2,072,135,788 | I_kwDOCUB6oc57gkhs | 28,410 | Query on Llama2 Tokenizer Behavior During Causal LM Instruction Tuning | {
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This behavior can be changed by the `add_bos_token` attribute. something like `tokenizer = LlamaTokenizer.from_pretrained(\"meta-llama/Llama-2-7b\", add_bos_token = False)` to de-activate the addition of the token. \r\nALso `tokenizer(..., add_special_tokens = False)` de activates this as well\r\n",
"Hi, thanks for response! It seems that the problem is not in the special token. Even if I set `add_special_tokens=False`, it would still output the same length.\r\n![image](https://github.com/huggingface/transformers/assets/38466901/f0555dc5-803d-4fc4-906f-58b1cd49737d)\r\n\r\nI check the last token of `prompt`, it is an empty string rather than a space in the prompt.\r\n![image](https://github.com/huggingface/transformers/assets/38466901/86637155-1a22-401e-9666-f4d92a77fa81)\r\n\r\nSo I guess this is the default behavior of Tokenizer?",
"You should use `tokenize` to check the tokens or `encode`:\r\n```python\r\nIn [2]: tokenizer.tokenize(prompt)\r\nOut[2]: \r\n[...,\r\n 'The',\r\n '▁answer',\r\n '▁is',\r\n ':',\r\n '▁']\r\n\r\nIn [3]: tokenizer.tokenize(label)\r\nOut[3]: \r\n[..., \r\n 'The',\r\n '▁answer',\r\n '▁is',\r\n ':',\r\n '▁C']\r\n```\r\nas you can see, `_` or `▁C` is a single token. This is expected. It's just not the same token. ",
"Thanks! Got it!"
] | 1,704 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-5.15.0-1050-azure-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b")
prompt = """Because of his beliefs he knew that if he were to kill he would suffer what?
A. shoot
B. commit crime
C. damnation
D. charming
E. think twice
The answer is: """
label = prompt + "C"
print(tokenizer(prompt,return_length=True).length)
print(tokenizer(label, return_length=True).length)
```
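For context, the prompt length is needed to mask the prompt positions out of the loss. A minimal pure-Python sketch of that masking step is below; the helper name is illustrative, and -100 is PyTorch's default `ignore_index` for cross-entropy loss:

```python
def build_labels(input_ids, prompt_length, ignore_index=-100):
    """Copy input_ids into labels, masking the first prompt_length
    positions so the loss ignores them and only the answer tokens count."""
    labels = list(input_ids)
    for i in range(min(prompt_length, len(labels))):
        labels[i] = ignore_index
    return labels

# Toy example: 5 prompt tokens followed by 1 answer token.
build_labels([10, 11, 12, 13, 14, 99], prompt_length=5)
# -> [-100, -100, -100, -100, -100, 99]
```

This only works if `prompt_length` is computed consistently with how the full sequence is tokenized, which is exactly what the length mismatch above complicates.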
![image](https://github.com/huggingface/transformers/assets/38466901/d0ad322c-ecf3-419c-8fdd-616e23649f39)
### Expected behavior
Hello,
I am currently in the process of preparing data for Causal Language Model instruction tuning. As the loss calculation is solely based on the labels, it is imperative for me to accurately determine the length of the prompts in order to exclude them from the loss computation.
Upon examining the code, I observed that the Llama2 tokenizer seems to append a special empty string token at the end of each prompt. This addition results in an unchanged token length irrespective of whether a label is present or not.
Could you please clarify if this behavior is intentional or if it represents a bug? If this is indeed the expected functionality, could you suggest an alternative method or best practice for achieving the correct exclusion of prompt tokens in the loss calculation?
I appreciate your assistance on this matter.
Thank you! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28410/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28409 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28409/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28409/comments | https://api.github.com/repos/huggingface/transformers/issues/28409/events | https://github.com/huggingface/transformers/issues/28409 | 2,072,073,395 | I_kwDOCUB6oc57gVSz | 28,409 | Add FlashAttention-2 support for Mask2Former model | {
"login": "DanieleVeri",
"id": 20779433,
"node_id": "MDQ6VXNlcjIwNzc5NDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/20779433?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DanieleVeri",
"html_url": "https://github.com/DanieleVeri",
"followers_url": "https://api.github.com/users/DanieleVeri/followers",
"following_url": "https://api.github.com/users/DanieleVeri/following{/other_user}",
"gists_url": "https://api.github.com/users/DanieleVeri/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DanieleVeri/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DanieleVeri/subscriptions",
"organizations_url": "https://api.github.com/users/DanieleVeri/orgs",
"repos_url": "https://api.github.com/users/DanieleVeri/repos",
"events_url": "https://api.github.com/users/DanieleVeri/events{/privacy}",
"received_events_url": "https://api.github.com/users/DanieleVeri/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [] | 1,704 | 1,704 | null | NONE | null | ### Feature request
Is it possible to add FlashAttention-2 support to the Mask2Former model?
### Motivation
Since it is already available for ViT, it would be great to have it on Mask2Former too.
Maybe the additional input masks to the decoder layer represent a major challenge?
### Your contribution
I could help by testing the implementations. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28409/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28408 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28408/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28408/comments | https://api.github.com/repos/huggingface/transformers/issues/28408/events | https://github.com/huggingface/transformers/pull/28408 | 2,072,032,542 | PR_kwDOCUB6oc5jj1Pc | 28,408 | Remove `task` arg in `load_dataset` in image-classification example | {
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Related to this PR, I opened an issue in `datasets` to improve the user experience when using `DatasetDict.column_names`:\r\n- https://github.com/huggingface/datasets/issues/6571",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28408). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I think we should also add `image_column_name` (default `image`) and `label_column_name` (default `label`) to the example's `DataTrainingArguments` to be consistent with [`audio-classification`](https://github.com/huggingface/transformers/blob/357971ec367fecb9951ae3218feafece5f61416a/examples/pytorch/audio-classification/run_audio_classification.py#L95-L101) as renaming `img` to `image` is not general enough.",
"Thanks @regisss "
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The `task` argument is now deprecated in the `datasets.load_dataset` method. This PR removes it and adds the renaming logic needed to deal with datasets like Cifar10 (the `task` attribute of datasets used to help with that).
Internal discussion here: https://huggingface.slack.com/archives/C034N0A7H09/p1704447848692889
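As a rough sketch of the kind of renaming logic involved (illustrative only; column names such as CIFAR-10's `img` are assumptions about specific datasets, not part of this PR's diff):

```python
def rename_map(column_names, image_column="image", label_column="label"):
    """Compute the column renames needed so a dataset exposes the
    image/label columns the example script expects."""
    renames = {}
    # e.g. CIFAR-10 on the Hub historically exposes "img" rather than "image"
    if image_column not in column_names and "img" in column_names:
        renames["img"] = image_column
    if label_column not in column_names and "fine_label" in column_names:
        renames["fine_label"] = label_column
    return renames

# CIFAR-10-style columns need only the image column renamed:
rename_map(["img", "label"])  # -> {"img": "image"}
```

In `datasets`, such renames would then be applied with `rename_column` on the `Dataset`/`DatasetDict`.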
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28408/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28408/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28408",
"html_url": "https://github.com/huggingface/transformers/pull/28408",
"diff_url": "https://github.com/huggingface/transformers/pull/28408.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28408.patch",
"merged_at": 1705388648000
} |
https://api.github.com/repos/huggingface/transformers/issues/28407 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28407/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28407/comments | https://api.github.com/repos/huggingface/transformers/issues/28407/events | https://github.com/huggingface/transformers/pull/28407 | 2,071,993,801 | PR_kwDOCUB6oc5jjs9c | 28,407 | [Whisper] Fix slow test | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"ah, you are too fast. See this comment \r\n\r\nhttps://github.com/huggingface/transformers/pull/28400#discussion_r1445856908\r\n\r\nin short: `mozilla-foundation/common_voice_11_0` doesn't require token if this works for you",
"> mozilla-foundation/common_voice_11_0\r\n\r\nThe problem with CV 11.0 is that the expected results will change. 6_1 is exactly the dataset that was there before as well - is it that difficult. Don't we already have access to secret tokens? E.g. couldn't we just use `HUGGINGFACE_PUSH`? ",
"I personally don't know if `HUGGINGFACE_PUSH` is the right token (is it a HF token or github token).\r\n\r\nWe have\r\n\r\n```\r\n token: ${{ secrets.HUGGINGFACE_PUSH }}\r\n hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}\r\n```\r\nSince we have such tokens, I think the infra team is fine with such tokens. We just need to talk to the infra team to make sure we are using the right one. And IMO, for read access, we should not use a push token (and better to have a read only token).\r\n\r\nI will talk to the infra.",
"There is a new secret (token) added. I will update this PR accordingly and push",
"well, I have to add it to all workflow files 😅 ",
"Thanks a lot @ydshieh! ",
"This PR should be good to merge! "
] | 1,704 | 1,704 | 1,704 | MEMBER | null | # What does this PR do?
This PR fixes a slow test for Whisper that doesn't work anymore because it's using a deprecated dataset. For this slow test to pass, we will need the CI to have access to a security token (cc @ydshieh) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28407/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28407",
"html_url": "https://github.com/huggingface/transformers/pull/28407",
"diff_url": "https://github.com/huggingface/transformers/pull/28407.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28407.patch",
"merged_at": 1704922536000
} |
https://api.github.com/repos/huggingface/transformers/issues/28406 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28406/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28406/comments | https://api.github.com/repos/huggingface/transformers/issues/28406/events | https://github.com/huggingface/transformers/pull/28406 | 2,071,931,777 | PR_kwDOCUB6oc5jjfiz | 28,406 | Fix auxiliary loss related code in transformers | {
"login": "SangbumChoi",
"id": 34004152,
"node_id": "MDQ6VXNlcjM0MDA0MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SangbumChoi",
"html_url": "https://github.com/SangbumChoi",
"followers_url": "https://api.github.com/users/SangbumChoi/followers",
"following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}",
"gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions",
"organizations_url": "https://api.github.com/users/SangbumChoi/orgs",
"repos_url": "https://api.github.com/users/SangbumChoi/repos",
"events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/SangbumChoi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"```\r\n- conditional_detr\r\n \r\n root@0a2b4fe54761:/mnt/nas2/users/sbchoi/transformers# RUN_SLOW=1 pytest tests/models/conditional_detr/\r\n ================================================================= test session starts ==================================================================\r\n platform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\n rootdir: /mnt/nas2/users/sbchoi/transformers\r\n configfile: pyproject.toml\r\n plugins: hypothesis-6.92.0, hydra-core-1.3.2\r\n collected 149 items\r\n \r\n tests/models/conditional_detr/test_image_processing_conditional_detr.py .............. [ 9%]\r\n tests/models/conditional_detr/test_modeling_conditional_detr.py .......................ssssss..sssssssss......s..............s......s........s.. [ 63%]\r\n ..sssssssssss.sssssssssssssss.s..s.......s............. [100%]\r\n \r\n =================================================================== warnings summary ===================================================================\r\n ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n ```\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\\\n\")\r\n \r\n ```\r\n \r\n src/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. 
See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n \r\n tests/models/conditional_detr/test_modeling_conditional_detr.py::ConditionalDetrModelTest::test_disk_offload_bin\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.**get**(instance, owner)()\r\n \r\n tests/models/conditional_detr/test_modeling_conditional_detr.py::ConditionalDetrModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:460: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. 
You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n \r\n tests/models/conditional_detr/test_modeling_conditional_detr.py::ConditionalDetrModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:463: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n \r\n tests/models/conditional_detr/test_modeling_conditional_detr.py::ConditionalDetrModelTest::test_pipeline_object_detection\r\n /opt/conda/lib/python3.10/site-packages/huggingface_hub/repocard.py:105: UserWarning: Repo card metadata block was not found. Setting CardData to empty.\r\n warnings.warn(\"Repo card metadata block was not found. Setting CardData to empty.\")\r\n \r\n - - Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n ===================================================== 101 passed, 48 skipped, 9 warnings in 54.71s =====================================================\r\n- deformable_detr\r\n \r\n root@0a2b4fe54761:/mnt/nas2/users/sbchoi/transformers# RUN_SLOW=1 pytest tests/models/deformable_detr/\r\n ================================================================= test session starts ==================================================================\r\n platform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\n rootdir: /mnt/nas2/users/sbchoi/transformers\r\n configfile: pyproject.toml\r\n plugins: hypothesis-6.92.0, hydra-core-1.3.2\r\n collected 151 items\r\n \r\n tests/models/deformable_detr/test_image_processing_deformable_detr.py .............. 
[ 9%]\r\n tests/models/deformable_detr/test_modeling_deformable_detr.py .......................ssssss.ssssssssss......s..............s......s............. [ 63%]\r\n sssssssssss.sssssssssssssss.s..s.......s............... [100%]\r\n \r\n =================================================================== warnings summary ===================================================================\r\n ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n ```\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\\\n\")\r\n \r\n ```\r\n \r\n src/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. 
See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n \r\n tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelTest::test_disk_offload_bin\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.**get**(instance, owner)()\r\n \r\n tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:460: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n \r\n tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:463: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n \r\n tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/deformable_detr/test_modeling_deformable_detr.py:372: UserWarning: Use of index_put_ on expanded tensors is deprecated. 
Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. tensor[indices] = tensor (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/TensorAdvancedIndexing.cpp:708.)\r\n t[t != t] = 0\r\n \r\n tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/deformable_detr/test_modeling_deformable_detr.py:372: UserWarning: Use of masked_fill_ on expanded tensors is deprecated. Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. tensor[mask] = scalar (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/cuda/Indexing.cu:1564.)\r\n t[t != t] = 0\r\n \r\n tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelTest::test_pipeline_object_detection\r\n /opt/conda/lib/python3.10/site-packages/huggingface_hub/repocard.py:105: UserWarning: Repo card metadata block was not found. Setting CardData to empty.\r\n warnings.warn(\"Repo card metadata block was not found. 
Setting CardData to empty.\")\r\n \r\n - - Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n =============================================== 103 passed, 48 skipped, 11 warnings in 163.84s (0:02:43) ===============================================\r\n- deta\r\n \r\n root@0a2b4fe54761:/mnt/nas2/users/sbchoi/transformers# RUN_SLOW=1 pytest tests/models/deta\r\n ================================================================= test session starts ==================================================================\r\n platform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\n rootdir: /mnt/nas2/users/sbchoi/transformers\r\n configfile: pyproject.toml\r\n plugins: hypothesis-6.92.0, hydra-core-1.3.2\r\n collected 151 items\r\n \r\n tests/models/deta/test_image_processing_deta.py .............. [ 9%]\r\n tests/models/deta/test_modeling_deta.py ........s...............ssssss.ssssssssss......s..............s......s.............sssssssssss.sssssssss [ 78%]\r\n ssssss.s..s.......s.s............ [100%]\r\n \r\n =================================================================== warnings summary ===================================================================\r\n ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n ```\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\\\n\")\r\n \r\n ```\r\n \r\n src/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. 
Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n \r\n tests/models/deta/test_modeling_deta.py::DetaModelTest::test_disk_offload_bin\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.**get**(instance, owner)()\r\n \r\n tests/models/deta/test_modeling_deta.py::DetaModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:460: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. 
You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n \r\n tests/models/deta/test_modeling_deta.py::DetaModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:463: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n \r\n tests/models/deta/test_modeling_deta.py::DetaModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:1894: UserWarning: Use of index_put_ on expanded tensors is deprecated. Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. tensor[indices] = tensor (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/TensorAdvancedIndexing.cpp:708.)\r\n t[t != t] = 0\r\n \r\n tests/models/deta/test_modeling_deta.py::DetaModelTest::test_model_outputs_equivalence\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:1894: UserWarning: Use of masked_fill_ on expanded tensors is deprecated. Please clone() the tensor before performing this operation. This also applies to advanced indexing e.g. 
tensor[mask] = scalar (Triggered internally at /opt/conda/conda-bld/pytorch_1702400410390/work/aten/src/ATen/native/cuda/Indexing.cu:1564.)\r\n t[t != t] = 0\r\n \r\n - - Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n ==================================================== 101 passed, 50 skipped, 10 warnings in 53.04s =====================================================\r\n- table transformer\r\n \r\n root@0a2b4fe54761:/mnt/nas2/users/sbchoi/transformers# RUN_SLOW=1 pytest tests/models/table_transformer/\r\n ================================================================= test session starts ==================================================================\r\n platform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\n rootdir: /mnt/nas2/users/sbchoi/transformers\r\n configfile: pyproject.toml\r\n plugins: hypothesis-6.92.0, hydra-core-1.3.2\r\n collected 136 items\r\n \r\n tests/models/table_transformer/test_modeling_table_transformer.py .....................ssssss..sssssssss......s...............s......s.......... [ 57%]\r\n ...sssssssssss.sssssssssssssss.s..s..........s............ [100%]\r\n \r\n =================================================================== warnings summary ===================================================================\r\n ../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/**init**.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n ```\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\\\n\")\r\n \r\n ```\r\n \r\n src/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. 
See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n \r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n ../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/**init**.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n \r\n tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelTest::test_disk_offload_bin\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.**get**(instance, owner)()\r\n \r\n tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:460: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. 
You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n \r\n tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:463: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n \r\n tests/models/table_transformer/test_modeling_table_transformer.py::TableTransformerModelTest::test_pipeline_object_detection\r\n /opt/conda/lib/python3.10/site-packages/huggingface_hub/repocard.py:105: UserWarning: Repo card metadata block was not found. Setting CardData to empty.\r\n warnings.warn(\"Repo card metadata block was not found. Setting CardData to empty.\")\r\n \r\n - - Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n ================================================ 89 passed, 47 skipped, 9 warnings in 103.04s (0:01:43) ================================================\r\n```\r\n\r\nCurrently I'm out of GPUs, so I'm unable to test YOLOS :(",
"@SangbumChoi Thanks for iterating! For MaskFormer, we still need to keep `use_auxiliary_loss`. Rather confusingly, this flag is set in two places: within the `decoder_config` and within the model's config. \r\n\r\nIn the maskformer modeling file, it's the [maskformer's config](https://github.com/huggingface/transformers/blob/07bdbebb48a9fe1e748348e4e14ae0b4659e54c4/src/transformers/models/maskformer/modeling_maskformer.py#L1755) - `self.config.use_auxiliary_loss` - which is used, rather than the decoder's - `self.config.decoder_config.auxiliary_loss`. ",
"I can test YOLOs on my machine",
"@SangbumChoi It seems that the aux loss tests haven't been added for YOLOs yet (all tests pass - but that's expected :)) ",
"@amyeroberts In the case of YOLOS, the auxiliary head doesn't exist in the first place, so the test cannot be added!",
"> @SangbumChoi Thanks for iterating! For MaskFormer, we still need to keep `use_auxiliary_loss`. Rather confusingly, this flag is set in two places: within the `decoder_config` and within the model's config.\r\n> \r\n> In the maskformer modeling file, it's the [maskformer's config](https://github.com/huggingface/transformers/blob/07bdbebb48a9fe1e748348e4e14ae0b4659e54c4/src/transformers/models/maskformer/modeling_maskformer.py#L1755) - `self.config.use_auxiliary_loss` - which is used, rather than the decoder's - `self.config.decoder_config.auxiliary_loss`.\r\n\r\nAh, I thought `config.decoder_config.auxiliary_loss` and `config.use_auxiliary_loss` were the same variable, but they should remain separate. Let me revert this and fix again!",
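The two-flags situation described in the comment above can be sketched in isolation. This is a minimal, hypothetical illustration — `DecoderConfig` and `MaskFormerStyleConfig` below are simplified stand-ins, not the actual `transformers` config classes:

```python
# Sketch of an auxiliary-loss flag living in two places: on the nested
# decoder config and on the top-level model config (names are illustrative).

class DecoderConfig:
    def __init__(self, auxiliary_loss=False):
        # DETR-style decoder keeps its own flag ...
        self.auxiliary_loss = auxiliary_loss


class MaskFormerStyleConfig:
    def __init__(self, use_auxiliary_loss=False, decoder_config=None):
        # ... while the top-level model config keeps a separate one.
        self.use_auxiliary_loss = use_auxiliary_loss
        self.decoder_config = decoder_config or DecoderConfig()


cfg = MaskFormerStyleConfig(use_auxiliary_loss=True)

# The modeling code reads the top-level flag, not the decoder's:
compute_aux_losses = cfg.use_auxiliary_loss       # True
decoder_flag = cfg.decoder_config.auxiliary_loss  # still False
```

This is why collapsing the two flags into one would silently change behavior: modeling code that reads the top-level `use_auxiliary_loss` would stop seeing a value set only on the decoder config, and vice versa.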
"Voila! Let me know if there are other things to change!\r\n\r\nBelow is the modified version of maskformer pytest\r\n\r\n```\r\nroot@bbc12aa272c6:/mnt/nas2/users/sbchoi/transformers# RUN_SLOW=1 pytest tests/models/maskformer/\r\n================================================================================================== test session starts ==================================================================================================\r\nplatform linux -- Python 3.10.13, pytest-7.4.4, pluggy-1.0.0\r\nrootdir: /mnt/nas2/users/sbchoi/transformers\r\nconfigfile: pyproject.toml\r\nplugins: hypothesis-6.92.0, hydra-core-1.3.2\r\ncollected 247 items\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py ...................... [ 8%]\r\ntests/models/maskformer/test_modeling_maskformer.py ........ssssss..sssssssss...s.........s.......s.........sssssssss.ssssssssssssssssss.s..s.....s................. [ 54%]\r\ntests/models/maskformer/test_modeling_maskformer_swin.py s........ssssss.ssssssssss..s........ss.......s.......ssssss.sssssssssssssssssssss.s......s.s.................... [100%]\r\n\r\n=================================================================================================== warnings summary ====================================================================================================\r\n../../../../../opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373\r\n /opt/conda/lib/python3.10/site-packages/_pytest/config/__init__.py:1373: PytestConfigWarning: Unknown config option: doctest_glob\r\n\r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\nsrc/transformers/deepspeed.py:23\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/deepspeed.py:23: FutureWarning: transformers.deepspeed module is deprecated and will be removed in a future version. 
Please import deepspeed modules directly from transformers.integrations\r\n warnings.warn(\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28\r\n /opt/conda/lib/python3.10/site-packages/torch/utils/cpp_extension.py:28: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html\r\n from pkg_resources import packaging # type: ignore[attr-defined]\r\n\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n../../../../../opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871\r\n /opt/conda/lib/python3.10/site-packages/pkg_resources/__init__.py:2871: DeprecationWarning: Deprecated call to `pkg_resources.declare_namespace('ruamel')`.\r\n Implementing implicit namespace packages (as specified in PEP 420) is preferred to `pkg_resources.declare_namespace`. See https://setuptools.pypa.io/en/latest/references/keywords.html#keyword-namespace-packages\r\n declare_namespace(pkg)\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_call_with_segmentation_maps\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/image_processing_maskformer.py:699: FutureWarning: The `pad_and_return_pixel_mask` argument is deprecated and will be removed in v4.27\r\n 
warnings.warn(\r\n\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_pipeline_feature_extraction\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_pipeline_image_segmentation\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_fp16\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_instance_segmentation_head\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_instance_segmentation_head_resnet_backbone\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_no_head\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_with_segmentation_maps_and_loss\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/image_processing_maskformer.py:410: FutureWarning: The `size_divisibility` argument is deprecated and will be removed in v4.27. 
Please use `size_divisor` instead.\r\n warnings.warn(\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_image_processor_from_dict_with_kwargs\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_pipeline_feature_extraction\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_pipeline_image_segmentation\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_fp16\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_instance_segmentation_head\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_instance_segmentation_head_resnet_backbone\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_inference_no_head\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelIntegrationTest::test_with_segmentation_maps_and_loss\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/image_processing_maskformer.py:417: FutureWarning: The `max_size` argument is deprecated and will be removed in v4.27. Please use size['longest_edge'] instead.\r\n warnings.warn(\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_integration_instance_segmentation\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_integration_semantic_segmentation\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_pipeline_feature_extraction\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_pipeline_image_segmentation\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/image_processing_maskformer.py:428: FutureWarning: The `reduce_labels` argument is deprecated and will be removed in v4.27. 
Please use `do_reduce_labels` instead.\r\n warnings.warn(\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_integration_instance_segmentation\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/maskformer/test_image_processing_maskformer.py:300: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(inputs[\"mask_labels\"][0].sum().item(), 41527.0)\r\n\r\n\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/maskformer/test_image_processing_maskformer.py:395: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).\r\n self.assertTrue(torch.allclose(inputs[\"class_labels\"][0], torch.tensor(expected_class_labels)))\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_integration_panoptic_segmentation\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/maskformer/test_image_processing_maskformer.py:403: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(inputs[\"mask_labels\"][0].sum().item(), 315193.0)\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_integration_panoptic_segmentation\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/maskformer/test_image_processing_maskformer.py:404: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(inputs[\"mask_labels\"][1].sum().item(), 350747.0)\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_integration_semantic_segmentation\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/maskformer/test_image_processing_maskformer.py:342: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(inputs[\"mask_labels\"][0].sum().item(), 
170200.0)\r\n\r\ntests/models/maskformer/test_image_processing_maskformer.py::MaskFormerImageProcessingTest::test_integration_semantic_segmentation\r\n /mnt/nas2/users/sbchoi/transformers/tests/models/maskformer/test_image_processing_maskformer.py:343: DeprecationWarning: Please use assertEqual instead.\r\n self.assertEquals(inputs[\"mask_labels\"][1].sum().item(), 257036.0)\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_fast_init_context_manager\r\ntests/models/maskformer/test_modeling_maskformer_swin.py::MaskFormerSwinModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:460: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(init_instance.linear.bias, expected_bias, rtol=1e-3, atol=1e-4)\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_fast_init_context_manager\r\ntests/models/maskformer/test_modeling_maskformer_swin.py::MaskFormerSwinModelTest::test_fast_init_context_manager\r\n /mnt/nas2/users/sbchoi/transformers/tests/test_modeling_common.py:463: FutureWarning: `torch.testing.assert_allclose()` is deprecated since 1.12 and will be removed in a future release. Please use `torch.testing.assert_close()` instead. You can find detailed upgrade instructions in https://github.com/pytorch/pytorch/issues/61844.\r\n torch.testing.assert_allclose(\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_fast_init_context_manager\r\n /opt/conda/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. 
This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer_swin.py:212: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if num_channels != self.num_channels:\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer_swin.py:202: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if width % self.patch_size[1] != 0:\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer_swin.py:205: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if height % self.patch_size[0] != 0:\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer_swin.py:574: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n was_padded = pad_values[3] > 0 or pad_values[5] > 0\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer_swin.py:575: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if was_padded:\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::Mas\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer_swin.py:248: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n should_pad = (height % 2 == 1) or (width % 2 == 1)\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer_swin.py:249: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if should_pad:\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /opt/conda/lib/python3.10/site-packages/torch/nn/functional.py:2443: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n if size_prods == 1:\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer.py:537: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (batch_size * self.num_heads, target_len, source_len):\r\n\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_attentions\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_output_hidden_state\r\ntests/models/maskformer/test_modeling_maskformer.py::MaskFormerModelTest::test_torchscript_simple\r\n /mnt/nas2/users/sbchoi/transformers/src/transformers/models/maskformer/modeling_maskformer.py:568: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_output.size() != (batch_size * self.num_heads, target_len, self.head_dim):\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n===================================================================================== 148 passed, 99 skipped, 69 warnings in 55.89s =====================================================================================\r\n\r\n```",
"@amyeroberts Changed as you suggested! However, do you know the method rerun the CI? it has time out error.",
"@SangbumChoi I can re-run :) ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28406). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@amyeroberts https://github.com/huggingface/transformers/pull/28354
There are several things that I summarized:
1. deta/table_transformer have no issue.
2. conditional_detr had a slight issue, but it was simple to fix.
3. It turns out that yolos doesn't have an auxiliary output for its result, so I removed the related changes and did not add an aux_loss test. Check it [out](https://github.com/hustvl/YOLOS/blob/5717fc29d727dab84ad585c56457b4de1225eddc/models/detector.py#L53): they don't have an 'aux_output'.
4. maskformer had an issue related to its configuration file.
Let me know if other models need this auxiliary loss test! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28406/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28406/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28406",
"html_url": "https://github.com/huggingface/transformers/pull/28406",
"diff_url": "https://github.com/huggingface/transformers/pull/28406.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28406.patch",
"merged_at": 1705673522000
} |
https://api.github.com/repos/huggingface/transformers/issues/28405 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28405/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28405/comments | https://api.github.com/repos/huggingface/transformers/issues/28405/events | https://github.com/huggingface/transformers/pull/28405 | 2,071,834,084 | PR_kwDOCUB6oc5jjKkH | 28,405 | [`core`/ FEAT] Add the possibility to push custom tags using `PreTrainedModel` itself | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28405). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I made some modification after some offline discussion, an exmaple model pushed can be found here: https://huggingface.co./ybelkada/test-tags-model lmk wdyt @amyeroberts @ArthurZucker @Wauplin \r\nI also updated the API on the PR description",
"@amyeroberts @Wauplin I think that I have addressed most of your comments now, the API is much more flexible and less \"agressive\" thanks to your suggestions\r\n1- Creates a simple description\r\n2- Moved `transformers` to `library-name`\r\n3- Now Trainer and `model_tags` are compatible together, which was not the case in the first commits. e.g. when combining `SFTTrainer` from trl that automatically pushes `sft` and `trl` tags and this PR, all tags gets successfully pushed. E.g.:\r\n```python\r\nfrom transformers import AutoModelForCausalLM, TrainingArguments\r\nfrom trl import SFTTrainer\r\nfrom datasets import load_dataset\r\n\r\nmodel_name = \"HuggingFaceM4/tiny-random-LlamaForCausalLM\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name)\r\n\r\nmodel.add_model_tags([\"tag-test\"])\r\n\r\ntrain_data = load_dataset(\"imdb\", split=\"train\")\r\n\r\ntrainer = SFTTrainer(\r\n model=model,\r\n args=TrainingArguments(output_dir=\"test-tag-already-tagged\"),\r\n train_dataset=train_data,\r\n dataset_text_field=\"text\"\r\n)\r\n\r\ntrainer.push_to_hub(\"ybelkada/test-tag-already-tagged\")\r\n```\r\nThe example repo is here: https://huggingface.co./ybelkada/test-tag-already-tagged\r\n4- If tags already exists it will not overwrite all the tags but add the new tags with existing tags. e.g. if one does `model.add_tags([\"test-tag-2\"])` after already pushing a model with a tag to Hub, as you can see from this repository: https://huggingface.co./ybelkada/test-model-already-tagged the new tag gets appended\r\n\r\ncc @osanseviero as well",
"Thanks for handling this @younesbelkada "
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Based on an idea we discussed internally, this PR introduces a new API to inject custom tags into the model card.
From a community perspective, it will make it easier to push custom tags, as this is currently limited to trainers.
Below I demonstrate the simplicity of the API:
```python
from transformers import AutoModelForCausalLM
model_name = "HuggingFaceM4/tiny-random-LlamaForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_name)
model.add_model_tags(["tag-test"])
model.push_to_hub("llama-tagged")
```
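As noted in the review discussion, tags are appended to any tags already present on the Hub rather than overwriting them. That merge behavior can be sketched in plain Python; this is a hypothetical helper for illustration only, not the actual `transformers` implementation:

```python
def merge_model_tags(existing, new):
    """Append new tags to the existing ones, skipping duplicates and keeping order."""
    merged = list(existing)
    for tag in new:
        if tag not in merged:
            merged.append(tag)
    return merged

# A repo already tagged "tag-test" gains only the genuinely new tag.
print(merge_model_tags(["tag-test"], ["tag-test", "test-tag-2"]))
# → ['tag-test', 'test-tag-2']
```

Called with the repo's existing tags and the newly added ones, the merged list keeps prior tags and appends only the tags that were not already present.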
cc @osanseviero @Narsil @julien-c
Note that with the current design, each time a user calls `push_to_hub`, a model card template will be created if no model card is present on the Hub | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28405/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28405/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28405",
"html_url": "https://github.com/huggingface/transformers/pull/28405",
"diff_url": "https://github.com/huggingface/transformers/pull/28405.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28405.patch",
"merged_at": 1705326488000
} |
https://api.github.com/repos/huggingface/transformers/issues/28404 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28404/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28404/comments | https://api.github.com/repos/huggingface/transformers/issues/28404/events | https://github.com/huggingface/transformers/issues/28404 | 2,071,685,145 | I_kwDOCUB6oc57e2gZ | 28,404 | How the new version of transformers uses the author's LLaVA weights? | {
"login": "koking0",
"id": 45281765,
"node_id": "MDQ6VXNlcjQ1MjgxNzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/45281765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/koking0",
"html_url": "https://github.com/koking0",
"followers_url": "https://api.github.com/users/koking0/followers",
"following_url": "https://api.github.com/users/koking0/following{/other_user}",
"gists_url": "https://api.github.com/users/koking0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/koking0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/koking0/subscriptions",
"organizations_url": "https://api.github.com/users/koking0/orgs",
"repos_url": "https://api.github.com/users/koking0/repos",
"events_url": "https://api.github.com/users/koking0/events{/privacy}",
"received_events_url": "https://api.github.com/users/koking0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nBoth should be equal (cc @younesbelkada).\r\n\r\nWhich difference are you seeing?",
"I used the above code, and the initial error was that the preprocessor_config.json file was missing. \r\n\r\nI found that there was indeed no such file in https://huggingface.co./liuhaotian/llava-v1.5-7b/tree/main, so I started from https://huggingface.co./llava-hf/llava-1.5-7b-hf/tree/main Download this file, then put it in the model folder of liuhaotian/llava-v1.5-7b, and then run it, I encountered the following error:\r\n\r\n```\r\nvalueError: The input provided to the model are wrong. The number of image tokens is 0 while the number of image given to the model is 1. This prevents correct indexing and breaks batch generation.\r\n```\r\n\r\n![image](https://github.com/huggingface/transformers/assets/45281765/b6d2865c-9673-492a-9636-f7c00291cfe1)\r\n",
"Hi @koking0 \r\nunfortunately the checkpoints are not inter-compatible, if you want to use `liuhaotian/llava-v1.5-7b` you need to use the original author's library and not transformers",
"ok, fine.",
"Note however `liuhaotian/llava-v1.5-7b` should be the same as `llava-hf/llava-1.5-7b-hf`, same for the 13b model and so on "
] | 1,704 | 1,704 | 1,704 | NONE | null | I am very excited that the LLaVA model has been added to transformers-4.36. I noticed that the LLaVA model of transformers seems to be different from the LLaVA author's model.
LLaVA model of transformers: [https://huggingface.co./llava-hf/llava-1.5-7b-hf](https://huggingface.co./llava-hf/llava-1.5-7b-hf)
LLaVA author's model: [https://huggingface.co./liuhaotian/llava-v1.5-7b](https://huggingface.co./liuhaotian/llava-v1.5-7b)
We have been using the LLaVA author's model. I would like to ask if transformers-4.36 can load the LLaVA author's model.
Something like this:
```python
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# model_id = "llava-hf/llava-1.5-7b-hf"
model_id = "liuhaotian/llava-v1.5-7b"
prompt = "USER: <image>\nWhat are these?\nASSISTANT:"
image_file = "images/000000039769.jpg"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16
).to(device)
raw_image = Image.open(image_file)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28404/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28403 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28403/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28403/comments | https://api.github.com/repos/huggingface/transformers/issues/28403/events | https://github.com/huggingface/transformers/pull/28403 | 2,071,661,787 | PR_kwDOCUB6oc5jilIT | 28,403 | Update Mixtral modeling | {
"login": "imoneoi",
"id": 26354659,
"node_id": "MDQ6VXNlcjI2MzU0NjU5",
"avatar_url": "https://avatars.githubusercontent.com/u/26354659?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imoneoi",
"html_url": "https://github.com/imoneoi",
"followers_url": "https://api.github.com/users/imoneoi/followers",
"following_url": "https://api.github.com/users/imoneoi/following{/other_user}",
"gists_url": "https://api.github.com/users/imoneoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imoneoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imoneoi/subscriptions",
"organizations_url": "https://api.github.com/users/imoneoi/orgs",
"repos_url": "https://api.github.com/users/imoneoi/repos",
"events_url": "https://api.github.com/users/imoneoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/imoneoi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"It worries me a little bit that Mistral has different implementations for this specific process in the model.\r\n\r\nThey submitted a PR to vLLM which is aligned with the current Huggingface implementation:\r\n\r\nhttps://github.com/mistralai/vllm-release/commit/3e21dacb79471ebf946e72e67a5ca14ebcc598c1#diff-74473eb619768ed055303d29d74c47b1753615e8323e69467506b2b42d3f9898R341-R344",
"Yep, we tried to follow this instead of the paper as they shared that code. Thanks for submitting a pr anyways 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | # What does this PR do?
The [Mixtral technical report](https://arxiv.org/pdf/2401.04088.pdf) was published recently, showing that Mixtral routing weights are calculated by applying the top-K selection before the softmax.
This PR updates the Mixtral model implementation accordingly.
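To make the ordering question concrete, here is a toy numeric sketch in plain Python with invented logits for a 4-expert, top-2 router. It illustrates the math only and is not the actual Mixtral code:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def top2(xs):
    return sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)[:2]

logits = [2.0, 1.0, 0.5, -1.0]  # invented router logits for 4 experts
idx = top2(logits)

# Order A: softmax over all experts, then keep the top-2 weights.
probs = softmax(logits)
softmax_then_topk = [probs[i] for i in idx]  # sums to less than 1

# Order B (paper): select the top-2 logits, then softmax only over those.
topk_then_softmax = softmax([logits[i] for i in idx])  # sums to exactly 1

print(softmax_then_topk, topk_then_softmax)
```

One useful detail: renormalizing the kept top-K weights from Order A so they sum to 1 reproduces Order B exactly, which is why implementations that renormalize after top-K can be numerically equivalent to the paper's formulation.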
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28403/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/28403/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28403",
"html_url": "https://github.com/huggingface/transformers/pull/28403",
"diff_url": "https://github.com/huggingface/transformers/pull/28403.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28403.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28402 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28402/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28402/comments | https://api.github.com/repos/huggingface/transformers/issues/28402/events | https://github.com/huggingface/transformers/issues/28402 | 2,071,573,060 | I_kwDOCUB6oc57ebJE | 28,402 | google / flan-t5-xxl introduces different result to inference API | {
"login": "YJYJLee",
"id": 28900943,
"node_id": "MDQ6VXNlcjI4OTAwOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28900943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YJYJLee",
"html_url": "https://github.com/YJYJLee",
"followers_url": "https://api.github.com/users/YJYJLee/followers",
"following_url": "https://api.github.com/users/YJYJLee/following{/other_user}",
"gists_url": "https://api.github.com/users/YJYJLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YJYJLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YJYJLee/subscriptions",
"organizations_url": "https://api.github.com/users/YJYJLee/orgs",
"repos_url": "https://api.github.com/users/YJYJLee/repos",
"events_url": "https://api.github.com/users/YJYJLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/YJYJLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi,\r\n\r\nThis is because the inference API uses greedy decoding by default, whereas you are using beam search (as you are providing `num_beams=5` to the generate method). One would need to use the same generation parameters as [the ones used by the inference API](https://huggingface.co./docs/api-inference/detailed_parameters#text-generation-task) to compare apples-to-apples.",
"> Hi,\r\n> \r\n> This is because the inference API uses greedy decoding by default, whereas you are using beam search (as you are providing `num_beams=5` to the generate method). One would need to use the same generation parameters as [the ones used by the inference API](https://huggingface.co./docs/api-inference/detailed_parameters#text-generation-task) to compare apples-to-apples.\r\n\r\n@NielsRogge Thanks! I understand the difference in output by using different decoding strategy, but the fact that beam search is not generating output sequence to the end seems like another problem.\r\n\r\nIs it a bug in beam search implementation?",
"@NielsRogge \r\n\r\nAlso, I tried the greedy decoding by removing `num_beams=5`, but the output is still not generated to the end\r\n\r\n```\r\nDie Hauptbier der Insel ist \"Number One\", es ist nicht ein\r\n```",
"You can increase `max_new_tokens`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-1048-aws-x86_64-with-glibc2.10
- Python version: 3.8.18
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0.dev20240104+cu121 (True)
- Tensorflow version (GPU?): 2.13.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
In the inference API [webpage](https://huggingface.co./google/flan-t5-xxl) for flan-t5-xxl, the result is produced properly, as shown below.
<img width="584" alt="Screenshot 2024-01-08 at 10 37 43 PM" src="https://github.com/huggingface/transformers/assets/28900943/73816a9b-5ba4-4aac-a61e-9341027b712f">
However, when I try to run it from the source code, the result is somehow not generated to the end.
```
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
dtype = torch.float16
device = torch.device("cuda:0")
model_name = "google/flan-t5-xxl"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name, torch_dtype=dtype).to(device)
input = tokenizer(['translate English to German: The main local beer is "Number One", it is not a complex beer, but pleasant and refreshing. The other local beer is called "Manta".'], return_tensors="pt", padding=True)
output = model.generate(**input.to(device), num_beams=5)
tokenizer.decode(output[0], skip_special_tokens=True)
```
It produces
```
'Das Hauptlokale Bier ist "Number One", es ist nicht ein'
```
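The truncation above is consistent with generation stopping at its token budget rather than at an end-of-sequence token. Here is a toy sketch of that stopping logic; it is not the real `generate` implementation, and `next_token_fn` is a stand-in for the model:

```python
def toy_generate(prompt_tokens, next_token_fn, max_new_tokens=20, eos_token=0):
    """Emit tokens until EOS appears or the new-token budget is exhausted."""
    out = list(prompt_tokens)
    for _ in range(max_new_tokens):
        tok = next_token_fn(out)
        out.append(tok)
        if tok == eos_token:
            break
    return out

# A fake "model" that never emits EOS: generation halts mid-sequence
# as soon as the budget runs out, mirroring the truncated translation.
result = toy_generate([1, 2, 3], lambda toks: toks[-1] + 1, max_new_tokens=5)
print(result)  # → [1, 2, 3, 4, 5, 6, 7, 8]
```

Passing a larger `max_new_tokens` to `model.generate(...)` raises this budget and typically lets the translation run on until the end-of-sequence token.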
### Expected behavior
I suspect both the inference API and the Python code should generate the same output, but running the Python code does not complete the generation.
I also tried other inputs, but they all have the same problem. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28402/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28402/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28401 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28401/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28401/comments | https://api.github.com/repos/huggingface/transformers/issues/28401/events | https://github.com/huggingface/transformers/pull/28401 | 2,071,287,937 | PR_kwDOCUB6oc5jhU5l | 28,401 | dummy test; not for merge | {
"login": "weimingzha0",
"id": 38259546,
"node_id": "MDQ6VXNlcjM4MjU5NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/38259546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/weimingzha0",
"html_url": "https://github.com/weimingzha0",
"followers_url": "https://api.github.com/users/weimingzha0/followers",
"following_url": "https://api.github.com/users/weimingzha0/following{/other_user}",
"gists_url": "https://api.github.com/users/weimingzha0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/weimingzha0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weimingzha0/subscriptions",
"organizations_url": "https://api.github.com/users/weimingzha0/orgs",
"repos_url": "https://api.github.com/users/weimingzha0/repos",
"events_url": "https://api.github.com/users/weimingzha0/events{/privacy}",
"received_events_url": "https://api.github.com/users/weimingzha0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28401/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28401",
"html_url": "https://github.com/huggingface/transformers/pull/28401",
"diff_url": "https://github.com/huggingface/transformers/pull/28401.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28401.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28400 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28400/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28400/comments | https://api.github.com/repos/huggingface/transformers/issues/28400/events | https://github.com/huggingface/transformers/pull/28400 | 2,071,001,586 | PR_kwDOCUB6oc5jgV8q | 28,400 | [SDPA] Make sure attn mask creation is always done on CPU | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Also I don't fully understand why we use the `huggingface/transformers-pytorch-gpu` Docker image for the Pipeline tests here https://github.com/huggingface/transformers/blob/3b742ea84cfc32432d60c0b65c886576ef736833/.github/workflows/self-scheduled.yml#L248 , \r\nbut we use the `huggingface/transformers-all-latest-gpu` image for all other tests. Should we maybe change the image for the pipeline tests? ",
"> why we use the huggingface/transformers-pytorch-gpu\r\n\r\nIIRC, this image doesn't have `tensorflow` stuff, but `transformers-all-latest-gpu` has both torch/tf. In theory, we can try to use only `transformers-all-latest-gpu`. But it's not a bad idea if we have environments that has only torch or only tf to test.\r\n\r\n\r\n(there is also a job `run_pipelines_tf_gpu` which uses `transformers-tensorflow-gpu`.)\r\n\r\n",
"Moved the fix of Whisper's slow test here: https://github.com/huggingface/transformers/pull/28407 (cc @ydshieh) ",
"@patrickvonplaten FYI The image fails to build due to `intel_extension_for_pytorch`, so we are still using the old image (therefore torch 2.1.0)\r\n\r\nI will have to check what happens\r\n\r\n```\r\nERROR: failed to solve: process \"sh -lc python3 -m pip install --no-cache-dir intel_extension_for_pytorch==$INTEL_TORCH_EXT -f https://developer.intel.com/ipex-whl-stable-cpu\" did not complete successfully: exit code: 1\r\nError: buildx failed with: ERROR: failed to solve: process \"sh -lc python3 -m pip install --no-cache-dir intel_extension_for_pytorch==$INTEL_TORCH_EXT -f https://developer.intel.com/ipex-whl-stable-cpu\" did not complete successfully: exit code: 1\r\n```"
] | 1,704 | 1,704 | 1,704 | MEMBER | null | # What does this PR do?
Many SDPA tests currently fail. E.g. when running:
```
CUDA_VISIBLE_DEVICES="1" RUN_SLOW=1 pytest tests/models/whisper/test_modeling_whisper.py::WhisperStandaloneDecoderModelTest::test_eager_matches_sdpa_inference_0_float16 -sv
```
We get the following error message:
```
> range_tensor[range_tensor >= indices] = 0
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
Until now SDPA tests were not run on our Circle CI because we test on PyTorch 2.1.0, but SDPA needs PyTorch 2.1.1 (see [here](https://github.com/huggingface/transformers/blob/3b742ea84cfc32432d60c0b65c886576ef736833/src/transformers/utils/import_utils.py#L275))
This PR:
- 1. Makes sure the sdpa attention_mask is always created on CPU
- 2. That we test on PyTorch 2.1.1
- 3. It fixes a "FileNotFoundError" for a Whisper slow test (see [here](https://github.com/huggingface/transformers/actions/runs/7442488193/job/20246174017#step:9:1734)) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28400/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28400",
"html_url": "https://github.com/huggingface/transformers/pull/28400",
"diff_url": "https://github.com/huggingface/transformers/pull/28400.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28400.patch",
"merged_at": 1704794719000
} |
https://api.github.com/repos/huggingface/transformers/issues/28399 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28399/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28399/comments | https://api.github.com/repos/huggingface/transformers/issues/28399/events | https://github.com/huggingface/transformers/pull/28399 | 2,070,868,093 | PR_kwDOCUB6oc5jf46J | 28,399 | Use py310 for docbuild | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | COLLABORATOR | null | # What does this PR do?
Use py310 for docbuild | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28399/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28399/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28399",
"html_url": "https://github.com/huggingface/transformers/pull/28399",
"diff_url": "https://github.com/huggingface/transformers/pull/28399.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28399.patch",
"merged_at": 1704980389000
} |
https://api.github.com/repos/huggingface/transformers/issues/28398 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28398/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28398/comments | https://api.github.com/repos/huggingface/transformers/issues/28398/events | https://github.com/huggingface/transformers/pull/28398 | 2,070,826,243 | PR_kwDOCUB6oc5jfvpS | 28,398 | Update metadata loading for oneformer | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Previously, the loading of the metadata file for OneFormer was effectively hardcoded to download a file from the hub. This PR updates the `prepare_metadata` method to allow loading local files as well as files from model repos.
```py
from transformers import OneFormerImageProcessor
image_processor = OneFormerImageProcessor(
checkpoint,
repo_path="path/to/file",
class_info_file="local_metadata.json"
)
```
Fixes #23116 and partially addresses #27572
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28398/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28398/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28398",
"html_url": "https://github.com/huggingface/transformers/pull/28398",
"diff_url": "https://github.com/huggingface/transformers/pull/28398.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28398.patch",
"merged_at": 1705062931000
} |
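The local-vs-hub metadata loading described in the PR record above can be sketched in plain Python. This is a hypothetical helper, not the actual `prepare_metadata` implementation in transformers: the function name `load_class_info` and the JSON layout (`{"<id>": {"name": ..., "isthing": ...}}`) are assumptions for illustration.

```python
import json
import os


def load_class_info(class_info_file, repo_path=None):
    """Build OneFormer-style metadata from a class-info JSON file.

    Illustrative sketch only, not the transformers implementation.
    Assumed file layout: {"<id>": {"name": ..., "isthing": 0 or 1}}.
    """
    # Resolve against a local directory when one is given, otherwise
    # treat class_info_file as a path on its own.
    path = os.path.join(repo_path, class_info_file) if repo_path else class_info_file
    with open(path, encoding="utf-8") as f:
        class_info = json.load(f)

    metadata = {}
    thing_ids = []
    for class_id, info in class_info.items():
        metadata[class_id] = info["name"]
        if info.get("isthing"):
            thing_ids.append(int(class_id))
    metadata["thing_ids"] = thing_ids
    return metadata
```

A repo-hosted file would first be resolved to a local path (for example with `huggingface_hub.hf_hub_download`) and then passed through the same function, which is what makes one code path serve both cases.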
https://api.github.com/repos/huggingface/transformers/issues/28397 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28397/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28397/comments | https://api.github.com/repos/huggingface/transformers/issues/28397/events | https://github.com/huggingface/transformers/issues/28397 | 2,070,801,692 | I_kwDOCUB6oc57be0c | 28,397 | Seamless M4T-v2 Inference bug when using chunk_length_s parameter | {
"login": "asusdisciple",
"id": 138434950,
"node_id": "U_kgDOCEBZhg",
"avatar_url": "https://avatars.githubusercontent.com/u/138434950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asusdisciple",
"html_url": "https://github.com/asusdisciple",
"followers_url": "https://api.github.com/users/asusdisciple/followers",
"following_url": "https://api.github.com/users/asusdisciple/following{/other_user}",
"gists_url": "https://api.github.com/users/asusdisciple/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asusdisciple/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asusdisciple/subscriptions",
"organizations_url": "https://api.github.com/users/asusdisciple/orgs",
"repos_url": "https://api.github.com/users/asusdisciple/repos",
"events_url": "https://api.github.com/users/asusdisciple/events{/privacy}",
"received_events_url": "https://api.github.com/users/asusdisciple/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @ylacombe ",
"Hey @asusdisciple, thanks for opening this issue. I'll look into it, but first, could you send a snippet which shows how you use the pipeline?",
"Sure. Here is how I initialize the pipeline:\r\n```\r\n\r\nobj = pipeline(task=\"automatic-speech-recognition\",\r\n model=\"facebook/seamless-m4t-v2-large\",\r\n device=device,\r\n chunk_length_s=30,\r\n batch_size=64,\r\n generate_kwargs={\"num_beams\": 1,\r\n \"temperature\": 0,\r\n \"task\": \"transcribe\",\r\n \"do_sample\": True\r\n }\r\n )\r\n \r\n```\r\n\r\nAnd this is how I transcribe it with the given language:\r\n\r\n```\r\ntmp = model(audiopath,\r\n generate_kwargs={\"tgt_lang\": lang}\r\n )\r\n```\r\n\r\nJust remember the audiofile should be longer than 10s. I already see this issue with files which are 20s long but 1min probably works better. I am even unsure why it happens with 20s audiofiles because setting chunking to 30s should not affect those right?",
"Thanks @asusdisciple, note that the chunk parameter for seq2seq models is highly experimental as pointed out in #20104.\r\n\r\n@Narsil, it could be good to have your opinion on this since you added the algorithm. Could you give an explanation of how it works and how to improve it ? Many thanks",
"Could you share an actual file that triggers it, the expected output, and the actual output with chunking ? Having a fully complete reproducer really helps narrow down these kind of things.\r\n\r\nChunking works like the following. Chunk every audio into chunks of length `chunk_length_s` with overlapping (1/6 * chunk_length_s by default). We run the model on each of those, and then stitch the results by finding the maximum overlapping output, and fusing those strings together.\r\n\r\nMany things could be at play, it would be easier to get good hints with the actual issue.\r\n\r\n-> seamless m4t might not be trained on long audio, and having position_ids or something, making it harder for it to run on those extra chunks. Shouldn't affect chunked with chunks < trained length tho.\r\n-> There could be an issue with the chunking and/or stitching code.\r\n-> On whisper, without issue, the fact that the chunking happens arbitrarily in audio means that the model is hallucinating more, on both ends of the chunks, meaning the stitching will include the hallucination (since it's a maximally overlapping string). No good solution for that one, either try to precut audio along silences if they exist, or find a way to fine-tune the model to handle those. If it happens for whisper, it can happen here (although the degradation for whisper is a fraction of a percent of WER, as it also sometimes improves baseline whisper). Maybe S4t is more sensitive to it.\r\n",
"For whisper I can state that on my FLEURS benchmark, where I concatenated files to longer version (5 and 30min) WER dropped by 10% for all HF implementations. Faster-Whisper (Repo) however performed exactly the same as on small files when I used the parameter `condition_on_previous_text=False`. If I used Faster-Whisper with this parameter on `True` the WER skyrocketed to 200%. So maybe this plays a role since you can not turn it off in HF.\r\n\r\nAs for seemless m4t-v2 here is a minimal reproducer with ground_truth text and the audio file, which was concatenated from a few sentences in FLEURS. WER 66%. English Language. Repetitions were cleaned. The audio file is in the uploads: \r\n[17837018222808398366.wav.zip](https://github.com/huggingface/transformers/files/13920168/17837018222808398366.wav.zip)\r\n\r\n```\r\nfrom transformers import pipeline\r\nimport jiwer\r\n\r\nground_truth = \"Police superintendent Chandra Shekhar Solanki said the accused appeared in court with covered faces. During the struggle for independence organised by the Mau movement, a peaceful gathering in the town resulted in the killing of the paramount chief Tupua Tamasese Lealofi III. Martelly swore in a new Provisional Electoral Council (CEP) of nine members yesterday. Europe is a continent that is relatively small but with many independent countries. Under normal circumstances, travelling through multiple countries would mean having to go through visa applications and passport control multiple times. Their thermal behavior is not as steady as large caves on Earth that often maintain a fairly constant temperature, but it is consistent with these being deep holes in the ground,' said Glen Cushing of the United States Geological Survey (USGS) Astrogeology Team and of Northern Arizona University located in Flagstaff, Arizona. Sleep interruption is the process of purposefully awakening during your normal sleep period and falling asleep a short time later (10–60 minutes). 
Most of the buildings on the edges of the complex have been rebuilt in order to give tourists a better idea of how they originally appeared. Angel (2006), explains the Continuum approach as a method being used to help organizations reach a higher level of performance. Due to the underwater topology the return flow is concentrated at a few deeper sections, and a fast current to deep water may form there. towards the end of the Middle Ages western Europe began to develop their own style. one of the biggest developments of the time as a result of the crusades people began to use buttons to fasten clothing. NextGen is a system the FAA claims would allow aircraft to fly shorter routes and save millions of gallons of fuel each year and cut carbon emissions. A full 20 percent of the water that pours out of the planet's rivers into the oceans comes from the Amazon. So, it is likely that the notation was added simply as a label. He was greeted by Singapore's Deputy Prime Minister Wong Kan Seng and discussed trade and terrorism issues with the Singapore Prime Minister Lee Hsien Loong. The disease is carried by pigs, which then migrates to humans through mosquitos. Elements like calcium and potassium are considered metals. Of course, there are also metals like silver and gold. Field trips are a large part of any classroom. Quite often a teacher would love to take her students places to which a bus trip is not an option. People now write messages on computer screens, never having to come close to a sharpener. One of the most common methods used to illustrate the importance of socialization is to draw upon the few unfortunate cases of children who were, through neglect, misfortune, or wilful abuse, not socialized by adults while they were growing up. He added that 'they should not, however, be asked to take on obligations that go beyond their development stage, responsibility and capabilities.' Ironing damp clothes can help them dry. 
Many hotels have an iron and ironing board available for loan, even if one is not present in the room. Apia is the capital of Samoa. The town is on the island of Upolu and has a population of just under 40,000. A curry is a dish based on herbs and spices, together with either meat or vegetables. Four skiers in the women's sitting group failed to finish their runs, and 45 of the 117 total skiers in the Giant Slalom failed to rank in the race. The debate was sparked by controversy over spending on relief and reconstruction in the wake Hurricane Katrina; which some fiscal conservatives have humorously labeled 'Bush's New Orleans Deal.' In just two weeks the Americans and Free French forces had liberated southern France and were turning towards Germany. The Great Pyramid at Giza is the only one of the seven wonders that is still standing today. The two compounds react with one another to form crystals that may block kidney function, researchers at the university said. Most interpretations of technological determinism share two general ideas: that the development of technology itself follows a path largely beyond cultural or political influence, and that technology in turn has 'effects' on societies that are inherent, rather than socially conditioned.\"\r\n\r\n\r\nobj = pipeline(task=\"automatic-speech-recognition\",\r\n model=\"facebook/seamless-m4t-v2-large\",\r\n device=\"cuda:0\",\r\n chunk_length_s=30, # NEVER USE THIS PARAMETER, DEGRADES SCORE FOR LONG AUDIO\r\n batch_size=4,\r\n generate_kwargs={\"num_beams\": 1,\r\n \"temperature\": 0,\r\n \"task\": \"transcribe\",\r\n \"do_sample\": True\r\n }\r\n )\r\n\r\n\r\naudiopath = \"...\"\r\n\r\npred = obj(audiopath,\r\n generate_kwargs={\"tgt_lang\": \"eng\"}\r\n )\r\n\r\n\r\nerr = round(jiwer.process_words(ground_truth, pred[\"text\"]).wer, 2)\r\nprint(err)\r\n\r\n```\r\n\r\n\r\n\r\n\r\n",
"First of all, there is a BIG warning that this is experimental:\r\n\r\nhttps://github.com/huggingface/transformers/pull/20104/files#diff-ccc8e98fcaa81fdf6317a652438a309bcede0bbe336774288c2fcf91d9f11082R291-R296\r\n\r\nI played around a bit, I found a bug in ASR pipeline, although it won't fix your issues, since it really seems that it's the model that has a lot of issues with hallucinations.\r\n\r\n[simple.webm](https://github.com/huggingface/transformers/assets/204321/a5f111dc-cfb8-4b62-bf9e-6484cd162135)\r\n\r\nHere is the cut version (30s).\r\n\r\nWithout chunking the output is\r\n```\r\n'Police Superintendent Shandra Shankar Solanki said the accused appeared in court with their faces covered during th\r\ne independence struggle organized by the Maoist movement, which resulted in the killing of the Supreme Leader Tupa T\r\nupa Tupa Tsai.'\r\n```\r\n\r\nWith chunking (it will chunk at 25s and rerun the missing 5 with overlap)\r\n\r\n```\r\n'Police Superintendent Shandra Shankar Solanki said the accused appeared in court with their faces covered during the independence struggle organized by the Maoist movement, which resulted in the killing of the Supreme Leader Tupa Tupa Tupa Tsai. The newly elected president of the United States, Martin Luther King, Jr., was sworn in on Tuesday, July 9, at the age of 93.'\r\n```\r\n\r\nThe second sentence seems like pretty bad hallucination from the actual audio. \r\n\r\n@ylacombe is this model known for hallucinating so much ?\r\n\r\n",
"Thanks for investigating @Narsil ! So this is the result after fixing the bug you found ?\r\n\r\n> @ylacombe is this model known for hallucinating so much?\r\n\r\nIt is; the model's principal usage is translation. Using it in an ASR setting is likely to produce big hallucinations. Moreover, from my own usage, it is not really good with short audios.\r\n\r\n",
"> So this is the result after fixing the bug you found ?\r\n\r\nThe bugfix doesn't really help that much tbh (so no the example doesn't include it, the bug is that we don't reinclude the `stride` parameter when the model is `seq2seq` within the forward method, meaning we don't use the custom stitching for seq2seq, but given the high hallucination rate, it doesn't really help, it's actually worse on the entire file in this particular instance).",
"Are there any benchmarks on hallucinations in m4t and benchmarks in general? In the paper the performance for ASR and M4T-v2 is almost double as good as Whisper-v2. In my own reproduction on Fleurs I was able to reproduce the paper results. \r\n\r\nAlso I know this feature is experimental, just wanted to let you guys know! I really appreciate the work you do and I know more than a few people who use huggingface implementations just for the sake of chunking. So I think it's an important feature and even if the behaviour can't be changed, knowledge about things like this is key right? :) \r\n\r\n![Screenshot from 2024-01-19 10-57-10](https://github.com/huggingface/transformers/assets/138434950/b797a31e-a4f1-4dfd-a50d-4c4afcc17fe0)\r\n",
"Hello @asusdisciple and @Narsil, circling back on this issue, I've used jiwer visualisation tools to pinpoint where the transcription fails. (`print(jiwer.visualize_alignment(jiwer.process_words(ground_truth.lower(), pred[\"text\"].lower())))`)\r\n\r\nTurns out M4T doesn't hallucinate that much\r\n\r\n\r\n```python\r\nsentence 1\r\nREF: police superintendent chandra shekhar solanki said the accused appeared in court with covered faces. during the struggle for independence organised by the mau movement, a peaceful gathering in the town resulted in the killing of the paramount chief ***** ******** *** ******** ***** ** * *** *********** ********* ******* tupua tamasese lealofi iii. martelly swore in a new provisional electoral council (cep) of nine members yesterday. europe is a continent that is relatively small but with many independent countries. under normal circumstances, travelling through multiple countries would mean having to go through visa applications and passport ********* ****** ************* ********** ******* ******** ********* ***** **** ****** ** ** ******* **** ************ *** ******** control multiple times. their thermal behavior is not as steady as large caves on earth that often maintain a fairly constant temperature, but it is consistent with these being deep holes in the ground,' said glen cushing of the united states geological survey *** **** ********** **** ***** ***** **** ***** ** *** ******* **** ***** ******* ** *** ****** (usgs) astrogeology team and of northern arizona university located in flagstaff, arizona. sleep interruption is the process of ************ purposefully awakening during your normal sleep period and falling asleep a short time later (10–60 minutes). most of the buildings on the edges of the complex have been rebuilt in order to give tourists a better idea of how they originally appeared. angel (2006), explains the continuum approach as a method being used to help organizations reach a higher level of performance. 
due to the underwater topology the return flow is concentrated at a few deeper sections, and a fast current to deep water may form there. towards the end of the middle ages western europe began to develop their own style. one of the biggest developments of the time as a result of the crusades people began to use buttons to fasten clothing. nextgen is a system the faa claims would allow aircraft to fly shorter routes and save millions of gallons of fuel each year and cut carbon emissions. a full 20 percent of the water that pours out of the planet's rivers into the oceans comes from the amazon. so, it is likely that the notation was added simply as a label. he was greeted by singapore's deputy prime minister wong kan seng and discussed trade and terrorism issues with the singapore prime minister lee hsien loong. the disease is carried by pigs, which then migrates to humans through mosquitos. elements like calcium and potassium are considered metals. of course, there are also metals like silver and gold. field trips are a large part of any classroom. quite often a teacher would love to take her students places to which a bus trip is not an option. people now write messages on computer screens, never having to come close to a sharpener. one of the most common methods used to illustrate the importance of socialization is to draw upon the few unfortunate cases of children who were, through neglect, misfortune, or wilful abuse, not socialized by ** adults while they were growing up. he added that 'they should not, however, be asked to take on obligations that go beyond their development stage, responsibility and capabilities.' ironing damp clothes can help them dry. many hotels have an iron and ironing board available for loan, even if one is not present in the room. apia is the capital of samoa. the town is on the island of upolu and has a population of just under 40,000. a curry is a dish based on herbs and spices, together with either meat or vegetables. 
four skiers in the women's sitting group failed to finish their runs, and 45 of the 117 total skiers in the giant slalom failed to rank in the race. the debate was sparked by controversy over spending on relief and reconstruction in the wake ** hurricane katrina; which some fiscal conservatives have humorously labeled 'bush's new orleans deal.' in just two weeks the americans and free french forces had liberated southern france and were turning towards germany. the great pyramid at giza is the only one of the seven wonders that is still standing today. the two compounds react with one another to form crystals that may block kidney function, researchers at the university said. most interpretations of technological determinism share two general ***** **** *** *********** ** ********** ****** ******* * **** ******* ****** ******** ** ********* ********* **** *************** ** ************* *********** ***** *** ******* ideas: that the development of technology itself follows a path largely beyond cultural or political influence, and that technology in turn has 'effects' on societies that are inherent, rather than socially conditioned.\r\nHYP: police superintendent shandra shankar solansky said the accused appeared in court with covered faces during the struggle for independence organized by the mao movement a peaceful gathering in the town resulted in the killing of *** paramount chief tuppa tupaczai iii martelli swore in a new provisional electoral council of nine members yesterday martelli swore in a new provisional electoral council ***** of nine members yesterday. europe is a continent that is relatively small but with many independent countries. under normal circumstances, travelling through multiple countries would mean having to go through visa applications and passport controls. 
normal circumstances travelling through multiple countries would mean having to go through visa applications and passport control multiple times their thermal behaviour is not as steady as large caves on earth that often maintain a fairly constant temperature but ** is consistent with these being deep holes in the ground said glen cushing of the united states geological survey but it's consistent with these being deep holes in the ground, said glenn cushing of the united states geological survey and the northern arizona university located in flagstaff, arizona. ***** ************ ** the process of deliberately waking up during your normal sleep period and falling asleep a short time later. ****** ********* **** ** the best way to do this is ******* **** **** ******* ** ***** to use ******** a ****** **** ** *** **** ********** ********* ***** ******* ******** *** continuum approach ** * ****** ***** **** to help organizations achieve * higher levels of performance. *** ** the next gen *** ****** **** is ************ ** a system that ********* *** * **** ******* ** **** ***** *** **** ****** ******* the *** ** *** ****** **** ******* ****** ***** ** ******* ***** *** ****** *** ** *** ******* ************ ** *** **** ** * ****** ** *** ******** ****** ***** ** *** ******* ** ****** ********* ******* ** * ****** *** faa claims would allow aircraft to fly shorter routes and save millions of gallons of fuel each year and cut carbon emissions. * **** ** ******* ** the faa claims ***** *** ** *** ******** ****** **** *** ****** ***** **** *** ******* *** ** ** ****** that the system will allow aircraft to fly shorter routes and save millions of gallons of fuel each year **** and cut carbon emissions. ********* ****** **** the ********* ***** ******** *** ***** ****** *** disease is carried by pigs, which then migrates to humans through mosquitoes. 
the most common methods used to illustrate the importance ******* ***** *** **** ****** **** ****** *** ***** ***** ***** *** * ***** **** of socialization are: ***** ***** * ******* ***** **** ** **** *** ******** ****** ** ***** * *** **** ** *** ** ******* ****** *** ***** ******** ** ******** ******** ***** ****** ** **** ***** ** * ********** one of the most common methods used to illustrate the importance of socialization is to draw on the *** unfortunate cases of children who were through neglect, misfortune or willful abuse not socialized by an adult while they were growing up. the city of ***** ****** **** ******** ** ***** ** **** ** *********** **** ** ****** ***** *********** ****** ************** *** ************** ******* **** ******* *** **** **** **** **** ****** **** ** **** *** ******* ***** ********* *** ***** **** ** *** ** *** ******* ** *** ***** apia is the capital of samoa. the city of upu is home to ***** *** *** a population of just under 40,000 people. ***** ** * **** ***** ** ***** *** ******* ******** **** ****** **** ** *********** **** ****** ** the ******* ******* ***** ****** ** ****** ***** ***** *** ** ** *** *** ***** ****** ** *** ***** ****** ****** ** **** ** *** ***** *** debate was sparked by controversy over spending on relief and reconstruction in the wake of hurricane katrina, which some fiscal conservatives have humorously labeled bush's new orleans deal. 
its **** *** ***** *** ********* *** **** ****** forces had liberated southern france and were turning towards germany the great pyramid at giza is *** **** one of the seven wonders that is still standing today the two compounds react with one another to form crystals that may block kidney function researchers at the university said most interpretations of technological determinism share two general ideas that the development of technology itself follows a path largely beyond cultural or political influence most interpretations of technological determinism share two general ideas: that the development of technology itself follows a path largely beyond cultural or political influence and that technology in turn has effects on society that are inherent rather than socially conditioned.\r\n S S S S S S S D I I I I I I I I I I I S S S S S D I I I I I I I I I I I I I I I I I S S S D S I I I I I I I I I I I I I I I I I S S S S D D D I S S S D D D D S S S S S S D D D D D D S D D D D D D D D D D D D D D D D D S D S D D S S D D D D D S S D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D S S D D D D D D D D D D D D D D D D D S S S S S S S S S S S S S S S S S D S S S D D D D D D D D D D S S S S S S S S S S D D D D D D D D D D D D D D D S S D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D S D S S S S I S S S S D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D S S S S S S D D D S S D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D I S S S S D D D D D D D D S D D S S S I I I I I I I I I I I I I I I I I I I I I I I I S S S S \r\n\r\nnumber of sentences: 1\r\nsubstitutions=99 deletions=280 insertions=72 hits=333\r\n\r\nmer=57.53%\r\nwil=69.10%\r\nwip=30.90%\r\nwer=63.34%\r\n```\r\n\r\nIt seems that the transcription works quite well but repeats a lot of sequence -> indicate that it should work better with a low 
`stride_length_s`.\r\n\r\nI've tried with `stride_length_s=1`, and I already get a much better WER (45%).\r\n\r\n```python\r\nsentence 1\r\nREF: police superintendent chandra shekhar solanki said the accused appeared in court with covered faces. during the struggle for independence organised by the mau movement, a peaceful gathering in the town resulted in the killing of the paramount chief tupua tamasese lealofi iii. martelly swore in a new provisional electoral council (cep) of nine members yesterday. europe is a continent that is relatively small but with many independent countries. under normal circumstances, travelling through multiple countries would mean having to go through visa applications and passport control multiple times. their thermal behavior is not as steady as large caves on ****** *** ***** ** *** ******* earth that often **** ** ***** **** maintain a fairly constant temperature, but it is consistent with these being deep holes in the ground,' said glen cushing of the united states geological survey (usgs) astrogeology team and of northern arizona university located in flagstaff, arizona. sleep interruption is the process of ************ purposefully awakening during your normal sleep period and falling asleep a short time later (10–60 minutes). most of the buildings on the edges of the complex have been rebuilt in order to give tourists a better idea of how they originally appeared. angel (2006), explains the continuum approach as a method being used to help organizations reach a higher level of performance. due to the underwater topology the return flow is concentrated at a few deeper sections, and a fast current to deep water may form there. towards the end of the middle ages western europe began to develop their own style. one of the biggest developments of the time as a result of the crusades ***** people began to use buttons to fasten ***** ******* clothing. 
nextgen is a system the *** faa claims **** *** ****** would allow aircraft to fly shorter routes and save millions of gallons of fuel each year and cut carbon emissions. a full 20 percent of the water that pours out of the planet's rivers into the oceans comes from the amazon. so, it is likely that the notation was added simply as a label. he was greeted by singapore's deputy prime minister wong kan seng and discussed trade and terrorism issues with the singapore prime minister lee hsien loong. the disease is carried by pigs, which then migrates to humans through mosquitos. elements like calcium and potassium are considered metals. of course, there are also metals like silver and gold. field trips are a large part of any classroom. quite often a teacher would love to take her students places to which a bus trip is not an option. people now write messages on computer screens, never having to come close to a sharpener. one of the most common methods used to illustrate the importance of socialization is to draw upon the few unfortunate cases of children who were, through neglect, misfortune, or wilful abuse, not socialized by ** adults while they were growing up. he added that 'they should not, however, be asked to take on obligations that go beyond their development stage, responsibility and capabilities.' ironing damp clothes can help them dry. many hotels have an iron and ironing board available for loan, even if one is not present in the room. apia is the capital of samoa. the town is on the island of upolu and has a population of just under 40,000. a curry is a dish based on herbs and spices, together with either meat or vegetables. four skiers in the women's sitting group failed to finish their runs, and 45 of the 117 total skiers in the giant slalom failed to rank in the race. 
the debate was sparked by controversy over spending on relief and reconstruction in the wake ** hurricane katrina; which some fiscal conservatives have humorously labeled 'bush's new orleans deal.' in just two weeks the americans and free french forces had liberated southern france and were turning towards germany. the great pyramid at giza is the only one of the seven wonders that is still standing today. the two compounds react with one another to form crystals that may block kidney function, researchers at the university said. most interpretations of technological determinism share two general ideas: that the development of technology itself follows a path largely beyond cultural or political ********* ******** ** ********* influence, and that technology in turn has 'effects' on societies that are inherent, rather than socially conditioned.\r\nHYP: police superintendent shandra shankar solansky said the accused appeared in court with covered faces during the struggle for independence organized by the mao movement a peaceful gathering in the town resulted in the killing of *** paramount chief tuppa tupaczai iii martelli ******** swore in a new provisional electoral council ***** of nine members yesterday europe is a continent that is relatively small but with many independent countries. under normal circumstances, travelling through multiple countries would mean having to go through visa applications and passport control multiple times. their thermal behaviour is not as steady as large caves on earth. the study of the earth's crust is often done in caves that are relatively stable in temperature, but it is consistent with these being deep holes in the ground, said glenn cushing of the united states geological survey ****** ************ **** and the northern arizona university located in flagstaff, arizona. ***** ************ ** the process of deliberately waking up during your normal sleep period and falling asleep a short time later. 
****** ********* **** ** *** ********* ** *** ***** ** *** ******* **** **** ******* ** ***** ** **** ******** * ****** **** ** *** **** ********** ********* ***** ******* ******** *** ********* ******** ** * ****** ***** **** ** **** ************* ***** * ****** ***** ** ************ due to the underwater topology the return flow is concentrated at a few deeper sections and a fast current to deep water may form there towards the end of the middle ages western europe began to develop their own style one of the biggest developments of that time as a result of the crusades began people began to use buttons to fasten their clothes next gen is a system the the faa claims that the system will allow aircraft to fly shorter routes and save millions of gallons of fuel each year and cut carbon emissions. discuss **** ** ******* ** *** ***** **** ***** *** ** *** ******** ****** **** *** ****** ***** **** *** ******* *** ** ** ****** **** *** ******** *** ***** ****** ** * ****** ** *** ******* ** *********** ****** ***** ******** **** *** **** *** ********* trade and terrorism issues with the singapore prime minister lee hsien long the disease is carried by pigs which then migrates to humans through mosquitoes elements like calcium and potassium are considered metals of course there are also metals like silver and gold field trips are a large part of any classroom quite often a teacher would love to take her students places to which a bus **** ** *** ** ******* ****** *** ***** ******** ** ******** ******** ***** ****** ** **** ***** ** * ********** one of the most common methods used to illustrate the importance of socialization is to draw on the *** unfortunate cases of children who were through neglect, misfortune or willful abuse not socialized by an adult while they were growing up. 
** ***** **** ***** ****** **** ******** ** ***** ** **** ** *********** **** ** ****** ***** *********** ****** ************** *** ************** ******* **** ******* *** **** **** **** **** ****** **** ** **** *** ******* ***** ********* *** ***** **** ** *** ** *** ******* ** the island **** ** *** ******* of upu *** **** is home to ****** ** ***** *** *** a population of just under 40,000 people. ***** ** * **** ***** ** ***** *** ******* ******** **** ****** **** ** *********** **** ****** ** the ******* ******* ***** ****** ** ****** ***** ***** *** ** ** *** *** ***** ****** ** *** ***** ****** ****** ** **** ** *** ***** *** debate was sparked by controversy over spending on relief and reconstruction in the wake of hurricane katrina, which some fiscal conservatives have humorously labeled bush's new orleans deal. its **** *** ***** *** ********* *** **** ****** forces had liberated southern france and were turning towards germany the great pyramid at giza is *** **** one of the seven wonders that is still standing today the two compounds react with one another to form crystals that may block kidney function researchers at the university said most interpretations of technological determinism share two general ideas that the development of technology itself follows a path largely beyond cultural or political influence cultural or political influence and that technology in turn has effects on society that are inherent rather than socially conditioned.\r\n S S S S S S S D S S S S D D S S I I I I I I S S I I I I S S S S S S D D D S D D D I S S S D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D S S S S I I I S S I I I I S S D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D S S S S S S S D D D D D D D D D D D D D D D D D D D D S D S S S S I S D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D S D D D D S D D S S D D D D D S S D D D D 
D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D D I S S S S D D D D D D D D S D D S S S S I I I I S S S S \r\n\r\nnumber of sentences: 1\r\nsubstitutions=65 deletions=236 insertions=24 hits=411\r\n\r\nmer=44.16%\r\nwil=52.55%\r\nwip=47.45%\r\nwer=45.65%\r\n```\r\n\r\n~~It also seems that the quality of the transcription degrades the longer the audio is, despite using chunking. I'm indeed under the impression that there are more deletions as we reach the end of the audio.~~\r\n\r\nThis could maybe be prevented by modifying a bit how the transcription algorithm (`_find_longest_common_sequence`) works.\r\nAt the moment, for each chunk, it checks the longest common sequence between the already concatenated sequence (composed of the concatenation of each previous chunk) and the newest chunk, and adds the newest chunk where it doesn't match.\r\n\r\nAs @Narsil indicates, some ways to improve it could maybe be linked to the stride parameters and the fact that we have some heuristics on how to stitch the different chunks\r\n"
] | 1,704 | 1,706 | null | NONE | null | ### System Info
Ubuntu 22
Python 3.12
Latest Transformers
### Who can help?
@Narsil
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I know the chunk parameter is experimental, but it completely falls apart in case of m4t-v2. It works FAR better in
Whisper. Would it be possible to disable it or print a big fat warning when it is used? To reproduce:
1. Use seamless m4t-v2 model in a transformers pipeline.
2. Use task transcribe and ASR for any language you like.
3. Use an Audio which is longer than 30 seconds.
4. Set the `chunk_length_s` parameter to any value, e.g. 30
5. Compare score to transcription without setting `chunk_length_s`.
The scores in terms of WER % error are usually 4-5 times worse than you would expect. It is basically unusable at the moment, which makes m4t-v2 unusable for ASR unless your files are very short.
### Expected behavior
Just chunk it and get approximately the same score if possible. I don't know why it fails so hard in comparison to Whisper, which works pretty well. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28397/timeline | null | null | null |
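The sclite-style reports quoted in this issue give raw alignment counts (substitutions, deletions, insertions, hits) alongside the derived percentages. As a sanity check, the standard formulas reproduce the quoted numbers exactly; the sketch below is only illustrative (the function name is mine, not part of any library):

```python
# Recompute the summary metrics from the alignment counts quoted above.
# Formulas follow the usual definitions (as implemented by tools like jiwer);
# this is an illustration, not the scoring code used in the issue.

def error_metrics(S, D, I, H):
    """Return (wer, mer, wil, wip) in percent from alignment counts."""
    n_ref = S + D + H                    # words in the reference
    n_hyp = S + I + H                    # words in the hypothesis
    wer = (S + D + I) / n_ref            # word error rate
    mer = (S + D + I) / (S + D + I + H)  # match error rate
    wip = (H / n_ref) * (H / n_hyp)      # word information preserved
    wil = 1.0 - wip                      # word information lost
    return tuple(round(100 * m, 2) for m in (wer, mer, wil, wip))

# stride_length_s=1 run: substitutions=65 deletions=236 insertions=24 hits=411
print(error_metrics(65, 236, 24, 411))   # -> (45.65, 44.16, 52.55, 47.45)
# first run: substitutions=99 deletions=280 insertions=72 hits=333
print(error_metrics(99, 280, 72, 333))   # -> (63.34, 57.53, 69.1, 30.9)
```

Both tuples match the mer/wil/wip/wer lines in the quoted reports, which confirms how those percentages are derived from the S/D/I/H counts.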
https://api.github.com/repos/huggingface/transformers/issues/28396 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28396/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28396/comments | https://api.github.com/repos/huggingface/transformers/issues/28396/events | https://github.com/huggingface/transformers/issues/28396 | 2,070,736,295 | I_kwDOCUB6oc57bO2n | 28,396 | Llama 2 Transfomers Neuron X issue | {
"login": "liechtym",
"id": 7433062,
"node_id": "MDQ6VXNlcjc0MzMwNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7433062?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liechtym",
"html_url": "https://github.com/liechtym",
"followers_url": "https://api.github.com/users/liechtym/followers",
"following_url": "https://api.github.com/users/liechtym/following{/other_user}",
"gists_url": "https://api.github.com/users/liechtym/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liechtym/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liechtym/subscriptions",
"organizations_url": "https://api.github.com/users/liechtym/orgs",
"repos_url": "https://api.github.com/users/liechtym/repos",
"events_url": "https://api.github.com/users/liechtym/events{/privacy}",
"received_events_url": "https://api.github.com/users/liechtym/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sorry I don't think this is related to `transformers` as there is a wrapper around it. `sdpa` is natively supported in transformers ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | I was trying to use the generate API for Llama 2 using the same code from this example:
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/libraries/transformers-neuronx/transformers-neuronx-developer-guide.html#features
My code:
```
from transformers_neuronx.llama.model import LlamaForSampling
from transformers_neuronx.generation_utils import HuggingFaceGenerationModelAdapter
llama_model_cpu = LlamaForCausalLM.from_pretrained(
'meta-llama/Llama-2-7b-chat-hf',
torch_dtype=torch.float16,
)
llama_model_neuron = LlamaForSampling.from_pretrained('/home/ubuntu/Llama-2-7b-chat-hf-split', batch_size=1, tp_degree=2, amp='f16')
llama_model_neuron.to_neuron()
print('Config: ', llama_model_cpu.config)
llama_model = HuggingFaceGenerationModelAdapter(llama_model_cpu.config, llama_model_neuron)
```
Error:
```
Traceback (most recent call last):
File "modular.py", line 107, in <module>
chatbot = MiniGPT4LLama2Chatbot(cfg_path, gpu_id)
File "modular.py", line 62, in __init__
self.model = model_cls.from_config(model_config)
File "/home/ubuntu/MiniGPT-4/minigpt4/models/minigpt4.py", line 173, in from_config
model = cls(
File "/home/ubuntu/MiniGPT-4/minigpt4/models/minigpt4.py", line 45, in __init__
super().__init__(
File "/home/ubuntu/MiniGPT-4/minigpt4/models/minigpt_base.py", line 43, in __init__
self.llama_model, self.llama_tokenizer = self.init_llm(
File "/home/ubuntu/MiniGPT-4/minigpt4/models/base_model.py", line 202, in init_llm
llama_model = HuggingFaceGenerationModelAdapter(llama_model_cpu.config, llama_model_neuron)
File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers_neuronx/generation_utils.py", line 18, in __init__
super().__init__(config)
File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1190, in __init__
config = self._autoset_attn_implementation(
File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1311, in _autoset_attn_implementation
config = cls._check_and_enable_sdpa(
File "/opt/aws_neuron_venv_pytorch/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1464, in _check_and_enable_sdpa
raise ValueError(
ValueError: HuggingFaceGenerationModelAdapter does not support an attention implementation through torch.nn.functional.scaled_dot_product_attention yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
```
Is there a work around for this? Or is supporting this attention implemention the only way? I simply want to use the generate api with a neuron-compiled model. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28396/timeline | completed | null | null |
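The `ValueError` in the traceback above comes from transformers' attention-implementation dispatch: since v4.36 it defaults to SDPA and refuses model classes that do not declare support for it. The mock below reproduces the failure mode and the escape hatch in miniature — all names here are hypothetical stand-ins, not the real transformers code:

```python
# Hypothetical mock of the dispatch that raises in the traceback above. When a
# model class does not advertise SDPA support, construction fails unless the
# config explicitly requests "eager" attention.

class Config:
    def __init__(self, attn_implementation=None):
        self._attn_implementation = attn_implementation

class PreTrainedModelMock:
    _supports_sdpa = False  # like HuggingFaceGenerationModelAdapter in the issue

    def __init__(self, config):
        requested = config._attn_implementation or "sdpa"  # library default
        if requested == "sdpa" and not self._supports_sdpa:
            raise ValueError(
                f"{type(self).__name__} does not support an attention "
                "implementation through scaled_dot_product_attention"
            )
        self.attn_implementation = requested

# Default config -> the same kind of ValueError as in the issue:
try:
    PreTrainedModelMock(Config())
except ValueError as e:
    error_message = str(e)
print(error_message)

# Requesting eager attention explicitly sidesteps the check:
model = PreTrainedModelMock(Config(attn_implementation="eager"))
print(model.attn_implementation)  # eager
```

With the real library, the analogous knob is passing `attn_implementation="eager"` to `from_pretrained`, or setting `config._attn_implementation = "eager"` on the config object before handing it to `HuggingFaceGenerationModelAdapter`; whether the neuronx wrapper honors that attribute should be verified against the installed versions.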
https://api.github.com/repos/huggingface/transformers/issues/28395 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28395/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28395/comments | https://api.github.com/repos/huggingface/transformers/issues/28395/events | https://github.com/huggingface/transformers/issues/28395 | 2,070,736,215 | I_kwDOCUB6oc57bO1X | 28,395 | AttributeError: 'HfDeepSpeedConfig' object has no attribute 'trainer_config_finalize' | {
"login": "zhongshsh",
"id": 62104945,
"node_id": "MDQ6VXNlcjYyMTA0OTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/62104945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongshsh",
"html_url": "https://github.com/zhongshsh",
"followers_url": "https://api.github.com/users/zhongshsh/followers",
"following_url": "https://api.github.com/users/zhongshsh/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongshsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongshsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongshsh/subscriptions",
"organizations_url": "https://api.github.com/users/zhongshsh/orgs",
"repos_url": "https://api.github.com/users/zhongshsh/repos",
"events_url": "https://api.github.com/users/zhongshsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongshsh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey thanks for reporting, can you upgrade to the newest version of transformers 🤗 ",
"thx for your reply. After using `pip install -U transformers` or using `conda upgrade transformers`, same error still exists. here is the version info after the upgrade:\r\n```\r\n- `transformers` version: 4.36.2\r\n- Platform: Linux-5.15.0-18-shopee-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.20.2\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.22.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"The config class used seems to be `HfDeepSpeedConfig` vs `HfTrainerDeepSpeedConfig` (the latter inherits from the former). ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.30.0
- Platform: Linux-5.15.0-18-shopee-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.1
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
run the repo https://github.com/stanleylsx/llms_tool in the mode of `rm_train`
1. read https://github.com/stanleylsx/llms_tool?tab=readme-ov-file#rm-training and modify `config.py`
2. read https://github.com/stanleylsx/llms_tool?tab=readme-ov-file#deepspeed and modify `config.py`
3. run `deepspeed --num_gpus 2 --master_port=9999 main.py`
Then getting error
```shell
Traceback (most recent call last):
File "llms_tool/main.py", line 34, in <module>
train.train_reward_model()
File "llms_tool/engines/train.py", line 309, in train_reward_model. # https://github.com/stanleylsx/llms_tool/blob/main/engines/train.py#L309
train_result = trainer.train()
File "xxx/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train
return inner_training_loop(
File "xxx/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/trainer.py", line 1725, in _inner_training_loop
self.optimizer, self.lr_scheduler = deepspeed_init(self, num_training_steps=max_steps)
File "xxx/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/deepspeed.py", line 344, in deepspeed_init # https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/deepspeed.py#L355
hf_deepspeed_config.trainer_config_finalize(args, model, num_training_steps)
AttributeError: 'HfDeepSpeedConfig' object has no attribute 'trainer_config_finalize'
```
### Expected behavior
Should not report an error. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28395/timeline | completed | null | null |
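One of the comments points at the root cause of the `AttributeError` above: the object constructed was the plain `HfDeepSpeedConfig`, while the Trainer calls `trainer_config_finalize`, which only the `HfTrainerDeepSpeedConfig` subclass defines. A minimal, self-contained sketch of that relationship (the class bodies here are stand-ins, not the real implementations):

```python
class HfDeepSpeedConfig:
    """Stand-in for the base class: only stores the parsed DeepSpeed config."""
    def __init__(self, config_file_or_dict):
        self.config = dict(config_file_or_dict)

class HfTrainerDeepSpeedConfig(HfDeepSpeedConfig):
    """Stand-in for the Trainer-aware subclass that adds the missing method."""
    def trainer_config_finalize(self, args, model, num_training_steps):
        # The real method syncs Trainer arguments into the DS config; here we
        # only record one value so the difference is observable.
        self.config["num_training_steps"] = num_training_steps

base = HfDeepSpeedConfig({"zero_optimization": {"stage": 2}})
trainer_cfg = HfTrainerDeepSpeedConfig({"zero_optimization": {"stage": 2}})

print(hasattr(base, "trainer_config_finalize"))        # False -> AttributeError
print(hasattr(trainer_cfg, "trainer_config_finalize")) # True
```

So the fix on the application side is to make sure the object reaching the Trainer is (or subclasses) `HfTrainerDeepSpeedConfig` — in practice usually by letting `TrainingArguments(deepspeed=...)` build it rather than constructing `HfDeepSpeedConfig` manually; the exact behavior depends on the transformers version in use.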
https://api.github.com/repos/huggingface/transformers/issues/28394 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28394/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28394/comments | https://api.github.com/repos/huggingface/transformers/issues/28394/events | https://github.com/huggingface/transformers/pull/28394 | 2,070,672,841 | PR_kwDOCUB6oc5jfN6W | 28,394 | Mentee owlv2 | {
"login": "talshaharabany",
"id": 50660642,
"node_id": "MDQ6VXNlcjUwNjYwNjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/50660642?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/talshaharabany",
"html_url": "https://github.com/talshaharabany",
"followers_url": "https://api.github.com/users/talshaharabany/followers",
"following_url": "https://api.github.com/users/talshaharabany/following{/other_user}",
"gists_url": "https://api.github.com/users/talshaharabany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/talshaharabany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/talshaharabany/subscriptions",
"organizations_url": "https://api.github.com/users/talshaharabany/orgs",
"repos_url": "https://api.github.com/users/talshaharabany/repos",
"events_url": "https://api.github.com/users/talshaharabany/events{/privacy}",
"received_events_url": "https://api.github.com/users/talshaharabany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28394/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28394",
"html_url": "https://github.com/huggingface/transformers/pull/28394",
"diff_url": "https://github.com/huggingface/transformers/pull/28394.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28394.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28393 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28393/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28393/comments | https://api.github.com/repos/huggingface/transformers/issues/28393/events | https://github.com/huggingface/transformers/issues/28393 | 2,070,596,478 | I_kwDOCUB6oc57ast- | 28,393 | IndexError: index out of range in self | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This means your inputs are out of bound for the embedding matrix. Do you mind trying to explain your issue instead of just pointing to a collab notebook! 🤗 ",
"> This means your inputs are out of bound for the embedding matrix. Do you mind trying to explain your issue instead of just pointing to a collab notebook! 🤗\r\n\r\n```\r\n# Datasets\r\ndataset = load_dataset(\"wikitext\", \"wikitext-2-raw-v1\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilgpt2\")\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\n\r\n\r\ndef tokenize_function(examples):\r\n return tokenizer(examples[\"text\"])\r\n\r\ntokenized_datasets = dataset.map(\r\n tokenize_function, batched=True, num_proc=4, remove_columns=[\"text\"]\r\n)\r\n\r\ntokenized_datasets[\"train\"][1]\r\n\r\n```\r\n```\r\n{'input_ids': [796, 569, 18354, 7496, 17740, 6711, 796, 220, 198],\r\n 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}\r\n```\r\nProblem comes when i run:\r\n```\r\ndef compute_metrics(eval_pred):\r\n logits, labels = eval_pred\r\n predictions = np.argmax(logits, axis=-1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\ndef model_init():\r\n return AutoModelForSequenceClassification.from_pretrained(\r\n 'distilgpt2', return_dict=True)\r\n\r\nimport torch\r\nfrom transformers import TrainingArguments, Trainer\r\n# model = \"distilgpt2\"\r\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\r\ntraining_args = TrainingArguments(\r\n\t\t output_dir=\"test_trainer\",\r\n\t\t evaluation_strategy=\"epoch\",\r\n\t\t report_to=\"wandb\")\r\n\r\ntrainer = Trainer(\r\n model_init=model_init,\r\n tokenizer=tokenizer,\r\n args=training_args,\r\n train_dataset=small_train_dataset,\r\n eval_dataset=small_eval_dataset,\r\n compute_metrics=compute_metrics,\r\n)\r\ntrainer.train()\r\n```\r\nwhich gives error:\r\n```\r\nSome weights of GPT2ForSequenceClassification were not initialized from the model checkpoint at distilgpt2 and are newly initialized: ['score.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nSome weights of 
GPT2ForSequenceClassification were not initialized from the model checkpoint at distilgpt2 and are newly initialized: ['score.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nYou're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n---------------------------------------------------------------------------\r\nIndexError Traceback (most recent call last)\r\n<ipython-input-15-856d04b69f6c> in <cell line: 18>()\r\n 16 compute_metrics=compute_metrics,\r\n 17 )\r\n---> 18 trainer.train()\r\n\r\n13 frames\r\n/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 2231 # remove once script supports set_grad_enabled\r\n 2232 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 2233 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 2234 \r\n 2235 \r\n\r\nIndexError: index out of range in self\r\n```\r\nAny help to resolve is appreciated ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
Colab Notebook T4
Colab: https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing
### Who can help?
@ArthurZucker @younesbelkada @pacman
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
You're using a GPT2TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-15-856d04b69f6c> in <cell line: 18>()
16 compute_metrics=compute_metrics,
17 )
---> 18 trainer.train()
13 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2231 # remove once script supports set_grad_enabled
2232 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2233 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2234
2235
IndexError: index out of range in self
```
for code:
```
import torch
from transformers import TrainingArguments, Trainer
# model = "distilgpt2"
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
training_args = TrainingArguments(
output_dir="test_trainer",
evaluation_strategy="epoch",
report_to="wandb")
trainer = Trainer(
model_init=model_init,
tokenizer=tokenizer,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
```
### Expected behavior
run the training | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28393/timeline | completed | null | null |
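The root cause flagged in the comments above — token ids out of bounds for the embedding matrix after `tokenizer.add_special_tokens({'pad_token': '[PAD]'})` without resizing the model — can be shown with a minimal, transformers-free sketch. The vocabulary size below matches GPT-2's, but the lookup is a toy stand-in for illustration, not the library's implementation:

```python
# Minimal sketch of why adding a new token such as '[PAD]' without resizing
# the model's embedding matrix raises "IndexError: index out of range in
# self": the tokenizer now emits an id equal to the old vocab size, which is
# one past the last valid embedding row.

def lookup(embedding_matrix, token_ids):
    """Toy embedding lookup: one row per vocabulary id."""
    return [embedding_matrix[i] for i in token_ids]

old_vocab_size = 50257            # GPT-2's vocabulary size
embeddings = [[0.0] for _ in range(old_vocab_size)]

pad_token_id = old_vocab_size     # id assigned to the newly added '[PAD]'

try:
    lookup(embeddings, [pad_token_id])
except IndexError as err:
    print("out of range:", err)   # reproduces the failure mode

# The fix mirrors model.resize_token_embeddings(len(tokenizer)) in
# transformers: grow the matrix so the new id has a row.
embeddings.append([0.0])          # resized to old_vocab_size + 1 rows
rows = lookup(embeddings, [pad_token_id])
print(len(rows))                  # → 1
```

In the reproduction above, the analogous fix would be to call `model.resize_token_embeddings(len(tokenizer))` on the model returned by `model_init` after the pad token is added, so the embedding matrix covers the new id.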
https://api.github.com/repos/huggingface/transformers/issues/28392 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28392/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28392/comments | https://api.github.com/repos/huggingface/transformers/issues/28392/events | https://github.com/huggingface/transformers/pull/28392 | 2,070,530,963 | PR_kwDOCUB6oc5jeu-x | 28,392 | update docs to add the `phi-2` example | {
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Can you make it into a tip / warning for the mean time. ",
"Hi @ArthurZucker, I have added the tip/warning. \r\n\r\nLet me know if anything more needs to be added.",
"just need to rebase on main 😉 ",
"Sorry my bad :sweat_smile: \r\n\r\nDone!",
"Thanks! "
] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This updates the docs as discussed [here](https://github.com/huggingface/transformers/pull/28211#issuecomment-1880115245)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28392/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28392",
"html_url": "https://github.com/huggingface/transformers/pull/28392",
"diff_url": "https://github.com/huggingface/transformers/pull/28392.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28392.patch",
"merged_at": 1704899268000
} |
https://api.github.com/repos/huggingface/transformers/issues/28391 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28391/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28391/comments | https://api.github.com/repos/huggingface/transformers/issues/28391/events | https://github.com/huggingface/transformers/issues/28391 | 2,070,525,785 | I_kwDOCUB6oc57abdZ | 28,391 | [BUG] Very high loss when using DeepSpeed with CPU offloading for versions>=4.36.0. | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Closing as #28245 fixed it",
"These are brutally expensive bugs for your users. What kind of test cases could be implemented to prevent this from happening?",
"We added a new test that you can find here: https://github.com/huggingface/transformers/pull/28245/files#diff-712bbefe24deb3653111486b471859bd0f2b4d61b87a81845189da697edde29d ",
"@ArthurZucker thank you :pray: "
] | 1,704 | 1,705 | 1,704 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.4.0-166-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.0
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cu121 (True)
- Tensorflow version (GPU?): 2.15.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**Describe the bug**
Very high loss when using DeepSpeed with CPU offloading for versions>=4.36.0
After spending 4 hours, I found that the cause is this PR: https://github.com/huggingface/transformers/pull/27709; reverting it restores the expected training loss curves.
**To Reproduce**
Steps to reproduce the behavior:
1. `run_glue.py` transformers example
```
cd transformers
export CUDA_VISIBLE_DEVICES=0,1
export TASK_NAME=mrpc
```
2. ds_config.json
```
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
3. Launch command:
```
torchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --deepspeed ds_config.json --lr_scheduler_type cosine --save_strategy "epoch" --evaluation_strategy "epoch" --logging_steps 1
```
4. See very high loss values:
```
/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py:1330: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)
total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])
/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py:1330: UserWarning: The torch.cuda.*DtypeTensor constructors are no longer recommended. It's best to use methods such as torch.tensor(data, dtype=*, device='cuda') to create tensors. (Triggered internally at ../torch/csrc/tensor/python_tensor.cpp:83.)
total_norm_cuda = get_accelerator().FloatTensor([float(total_norm)])
{'loss': 18.693, 'learning_rate': 4.999896350176413e-05, 'epoch': 0.01}
{'loss': 14.7233, 'learning_rate': 4.999585409300281e-05, 'epoch': 0.02}
{'loss': 7.5012, 'learning_rate': 4.999067203154777e-05, 'epoch': 0.03}
{'loss': 7.0167, 'learning_rate': 4.9983417747094816e-05, 'epoch': 0.03}
{'loss': 5.6229, 'learning_rate': 4.9974091841168195e-05, 'epoch': 0.04}
{'loss': 8.2446, 'learning_rate': 4.99626950870707e-05, 'epoch': 0.05}
{'loss': 4.521, 'learning_rate': 4.994922842981958e-05, 'epoch': 0.06}
{'loss': 2.8402, 'learning_rate': 4.9933692986068165e-05, 'epoch': 0.07}
{'loss': 3.8474, 'learning_rate': 4.991609004401324e-05, 'epoch': 0.08}
{'loss': 3.3916, 'learning_rate': 4.9896421063288286e-05, 'epoch': 0.09}
{'loss': 3.2036, 'learning_rate': 4.98746876748424e-05, 'epoch': 0.1}
{'loss': 2.467, 'learning_rate': 4.985089168080509e-05, 'epoch': 0.1}
{'loss': 2.9375, 'learning_rate': 4.982503505433683e-05, 'epoch': 0.11}
{'loss': 2.2244, 'learning_rate': 4.979711993946543e-05, 'epoch': 0.12}
{'loss': 3.5073, 'learning_rate': 4.976714865090827e-05, 'epoch': 0.13}
{'loss': 4.1485, 'learning_rate': 4.973512367388038e-05, 'epoch': 0.14}
{'loss': 3.1678, 'learning_rate': 4.970104766388832e-05, 'epoch': 0.15}
{'loss': 2.0759, 'learning_rate': 4.966492344651005e-05, 'epoch': 0.16}
{'loss': 4.6716, 'learning_rate': 4.962675401716056e-05, 'epoch': 0.17}
{'loss': 2.0029, 'learning_rate': 4.958654254084355e-05, 'epoch': 0.17}
{'loss': 2.9068, 'learning_rate': 4.9544292351888966e-05, 'epoch': 0.18}
{'loss': 2.8601, 'learning_rate': 4.95000069536765e-05, 'epoch': 0.19}
{'loss': 2.4923, 'learning_rate': 4.9453690018345144e-05, 'epoch': 0.2}
{'loss': 2.4786, 'learning_rate': 4.9405345386488614e-05, 'epoch': 0.21}
{'loss': 2.1235, 'learning_rate': 4.9354977066836986e-05, 'epoch': 0.22}
{'loss': 1.6675, 'learning_rate': 4.930258923592418e-05, 'epoch': 0.23}
{'loss': 2.2374, 'learning_rate': 4.924818623774178e-05, 'epoch': 0.23}
{'loss': 3.5783, 'learning_rate': 4.9191772583378705e-05, 'epoch': 0.24}
{'loss': 3.2699, 'learning_rate': 4.91333529506472e-05, 'epoch': 0.25}
{'loss': 2.0839, 'learning_rate': 4.907293218369499e-05, 'epoch': 0.26}
{'loss': 2.7985, 'learning_rate': 4.901051529260352e-05, 'epoch': 0.27}
{'loss': 1.7308, 'learning_rate': 4.89461074529726e-05, 'epoch': 0.28}
{'loss': 1.1548, 'learning_rate': 4.88797140054912e-05, 'epoch': 0.29}
{'loss': 2.1195, 'learning_rate': 4.8811340455494624e-05, 'epoch': 0.3}
{'loss': 3.4952, 'learning_rate': 4.874099247250798e-05, 'epoch': 0.3}
{'loss': 2.2034, 'learning_rate': 4.8668675889776095e-05, 'epoch': 0.31}
{'loss': 1.9979, 'learning_rate': 4.85943967037798e-05, 'epoch': 0.32}
{'loss': 1.1768, 'learning_rate': 4.851816107373871e-05, 'epoch': 0.33}
{'loss': 1.3446, 'learning_rate': 4.843997532110051e-05, 'epoch': 0.34}
{'loss': 2.8533, 'learning_rate': 4.835984592901678e-05, 'epoch': 0.35}
{'loss': 1.32, 'learning_rate': 4.82777795418054e-05, 'epoch': 0.36}
{'loss': 2.8985, 'learning_rate': 4.819378296439961e-05, 'epoch': 0.37}
{'loss': 2.1229, 'learning_rate': 4.8107863161783773e-05, 'epoch': 0.37}
{'loss': 2.2255, 'learning_rate': 4.8020027258415764e-05, 'epoch': 0.38}
{'loss': 2.0219, 'learning_rate': 4.793028253763633e-05, 'epoch': 0.39}
13%|█████████████████▌
```
5. When reverting PR https://github.com/huggingface/transformers/pull/27709 or changing the offloading device from `cpu` to `none`, the training happens properly:
```
{'loss': 0.6931, 'learning_rate': 4.999896350176413e-05, 'epoch': 0.01}
{'loss': 0.6833, 'learning_rate': 4.999585409300281e-05, 'epoch': 0.02}
{'loss': 0.6762, 'learning_rate': 4.999067203154777e-05, 'epoch': 0.03}
{'loss': 0.6624, 'learning_rate': 4.9983417747094816e-05, 'epoch': 0.03}
{'loss': 0.653, 'learning_rate': 4.9974091841168195e-05, 'epoch': 0.04}
{'loss': 0.7168, 'learning_rate': 4.99626950870707e-05, 'epoch': 0.05}
{'loss': 0.6631, 'learning_rate': 4.994922842981958e-05, 'epoch': 0.06}
{'loss': 0.6233, 'learning_rate': 4.9933692986068165e-05, 'epoch': 0.07}
{'loss': 0.6324, 'learning_rate': 4.991609004401324e-05, 'epoch': 0.08}
{'loss': 0.677, 'learning_rate': 4.9896421063288286e-05, 'epoch': 0.09}
{'loss': 0.5409, 'learning_rate': 4.98746876748424e-05, 'epoch': 0.1}
{'loss': 0.6643, 'learning_rate': 4.985089168080509e-05, 'epoch': 0.1}
{'loss': 0.7062, 'learning_rate': 4.982503505433683e-05, 'epoch': 0.11}
{'loss': 0.6542, 'learning_rate': 4.979711993946543e-05, 'epoch': 0.12}
{'loss': 0.6364, 'learning_rate': 4.976714865090827e-05, 'epoch': 0.13}
{'loss': 0.661, 'learning_rate': 4.973512367388038e-05, 'epoch': 0.14}
{'loss': 0.6675, 'learning_rate': 4.970104766388832e-05, 'epoch': 0.15}
{'loss': 0.6261, 'learning_rate': 4.966492344651005e-05, 'epoch': 0.16}
{'loss': 0.654, 'learning_rate': 4.962675401716056e-05, 'epoch': 0.17}
{'loss': 0.6515, 'learning_rate': 4.958654254084355e-05, 'epoch': 0.17}
{'loss': 0.6065, 'learning_rate': 4.9544292351888966e-05, 'epoch': 0.18}
{'loss': 0.5913, 'learning_rate': 4.95000069536765e-05, 'epoch': 0.19}
{'loss': 0.6339, 'learning_rate': 4.9453690018345144e-05, 'epoch': 0.2}
{'loss': 0.6514, 'learning_rate': 4.9405345386488614e-05, 'epoch': 0.21}
{'loss': 0.6394, 'learning_rate': 4.9354977066836986e-05, 'epoch': 0.22}
{'loss': 0.652, 'learning_rate': 4.930258923592418e-05, 'epoch': 0.23}
{'loss': 0.6087, 'learning_rate': 4.924818623774178e-05, 'epoch': 0.23}
{'loss': 0.6242, 'learning_rate': 4.9191772583378705e-05, 'epoch': 0.24}
{'loss': 0.534, 'learning_rate': 4.91333529506472e-05, 'epoch': 0.25}
{'loss': 0.6013, 'learning_rate': 4.907293218369499e-05, 'epoch': 0.26}
{'loss': 0.6088, 'learning_rate': 4.901051529260352e-05, 'epoch': 0.27}
{'loss': 0.5327, 'learning_rate': 4.89461074529726e-05, 'epoch': 0.28}
{'loss': 0.5884, 'learning_rate': 4.88797140054912e-05, 'epoch': 0.29}
{'loss': 0.6137, 'learning_rate': 4.8811340455494624e-05, 'epoch': 0.3}
{'loss': 0.5845, 'learning_rate': 4.874099247250798e-05, 'epoch': 0.3}
{'loss': 0.5141, 'learning_rate': 4.8668675889776095e-05, 'epoch': 0.31}
{'loss': 0.5223, 'learning_rate': 4.85943967037798e-05, 'epoch': 0.32}
{'loss': 0.5016, 'learning_rate': 4.851816107373871e-05, 'epoch': 0.33}
{'loss': 0.5055, 'learning_rate': 4.843997532110051e-05, 'epoch': 0.34}
{'loss': 0.5048, 'learning_rate': 4.835984592901678e-05, 'epoch': 0.35}
{'loss': 0.4792, 'learning_rate': 4.82777795418054e-05, 'epoch': 0.36}
{'loss': 0.6927, 'learning_rate': 4.819378296439961e-05, 'epoch': 0.37}
{'loss': 0.6419, 'learning_rate': 4.8107863161783773e-05, 'epoch': 0.37}
{'loss': 0.5379, 'learning_rate': 4.8020027258415764e-05, 'epoch': 0.38}
{'loss': 0.4458, 'learning_rate': 4.793028253763633e-05, 'epoch': 0.39}
{'loss': 0.5525, 'learning_rate': 4.783863644106502e-05, 'epoch': 0.4}
{'loss': 0.4353, 'learning_rate': 4.7745096567983256e-05,
```
**Expected behavior**
Same training loss curves with and without CPU offloading
**ds_report output**
```
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.1
[WARNING] using untested triton version (2.1.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch']
torch version .................... 2.1.2+cu121
deepspeed install path ........... ['/raid/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.12.6, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.1, cuda 12.1
shared memory (/dev/shm) size .... 251.77 GB
```
**System info (please complete the following information):**
- GPU count and types: two A100s
- Python version 3.10.13
- Transformers version: 4.36.2
- Accelerate version: 0.25.0
cc @ArthurZucker as you have better insights wrt the changes done in PR https://github.com/huggingface/transformers/pull/27709.
### Expected behavior
Same training loss curves with and without CPU offloading | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28391/timeline | completed | null | null |
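As the report above notes (step 5), the bad loss curves disappear when the offloading device is changed from `cpu` to `none`; until the fix in PR #28245, that is the workaround. A fragment of the same `ds_config.json` with only the offload targets changed (all other keys unchanged) would look like this — note it trades the CPU-offload memory savings back:

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "none" },
    "offload_param": { "device": "none" }
  }
}
```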
https://api.github.com/repos/huggingface/transformers/issues/28390 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28390/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28390/comments | https://api.github.com/repos/huggingface/transformers/issues/28390/events | https://github.com/huggingface/transformers/pull/28390 | 2,070,510,911 | PR_kwDOCUB6oc5jeql1 | 28,390 | Check the xpu available and move the tensor or model to xpu | {
"login": "yuanwu2017",
"id": 34643241,
"node_id": "MDQ6VXNlcjM0NjQzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanwu2017",
"html_url": "https://github.com/yuanwu2017",
"followers_url": "https://api.github.com/users/yuanwu2017/followers",
"following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions",
"organizations_url": "https://api.github.com/users/yuanwu2017/orgs",
"repos_url": "https://api.github.com/users/yuanwu2017/repos",
"events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanwu2017/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28390/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28390",
"html_url": "https://github.com/huggingface/transformers/pull/28390",
"diff_url": "https://github.com/huggingface/transformers/pull/28390.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28390.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28389 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28389/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28389/comments | https://api.github.com/repos/huggingface/transformers/issues/28389/events | https://github.com/huggingface/transformers/issues/28389 | 2,070,380,409 | I_kwDOCUB6oc57Z395 | 28,389 | 'str' object has no attribute 'to' | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, according to the documentation: \r\n> model ([`PreTrainedModel`] or `torch.nn.Module`, *optional*):\r\n The model to train, evaluate or use for predictions. If not provided, a `model_init` must be passed.\r\n\r\nyou are passing a string. ",
"> Hey, according to the documentation:\r\n> \r\n> > ```\r\n> > model ([`PreTrainedModel`] or `torch.nn.Module`, *optional*):\r\n> > ```\r\n> \r\n> ```\r\n> The model to train, evaluate or use for predictions. If not provided, a `model_init` must be passed.\r\n> ```\r\n> \r\n> you are passing a string.\r\n\r\nThanks, where can i add: https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing ",
"> > Hey, according to the documentation:\r\n> > > ```\r\n> > > model ([`PreTrainedModel`] or `torch.nn.Module`, *optional*):\r\n> > > ```\r\n> > \r\n> > \r\n> > ```\r\n> > The model to train, evaluate or use for predictions. If not provided, a `model_init` must be passed.\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > you are passing a string.\r\n> \r\n> Thanks, where can i add: https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing\r\n\r\ni added: ```\r\ndef model_init():\r\n return AutoModelForSequenceClassification.from_pretrained( 'distilgpt2', return_dict=True)```\r\n\r\nbut got error:\r\n```\r\n---------------------------------------------------------------------------\r\nAssertionError Traceback (most recent call last)\r\n[<ipython-input-10-856d04b69f6c>](https://localhost:8080/#) in <cell line: 18>()\r\n 16 compute_metrics=compute_metrics,\r\n 17 )\r\n---> 18 trainer.train()\r\n\r\n6 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/models/gpt2/modeling_gpt2.py](https://localhost:8080/#) in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)\r\n 1446 \r\n 1447 assert (\r\n-> 1448 self.config.pad_token_id is not None or batch_size == 1\r\n 1449 ), \"Cannot handle batch sizes > 1 if no padding token is defined.\"\r\n 1450 if self.config.pad_token_id is None:\r\n\r\nAssertionError: Cannot handle batch sizes > 1 if no padding token is defined.\r\n```\r\nColab: https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing \r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
Colab Notebook, T4
Colab Notebook: https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing
### Who can help?
@pacman100 @muellerz @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import TrainingArguments, Trainer
model = "distilgpt2"
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
training_args = TrainingArguments(
output_dir="test_trainer",
evaluation_strategy="epoch",
report_to="wandb")
trainer = Trainer(
model=model,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.train()
```
ERROR
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer = Trainer(
11 model=model,
12 args=training_args,
1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _move_model_to_device(self, model, device)
688
689 def _move_model_to_device(self, model, device):
--> 690 model = model.to(device)
691 # Moving a model to an XLA device disconnects the tied weights, so we have to retie them.
692 if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, "tie_weights"):
AttributeError: 'str' object has no attribute 'to'
```
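The traceback boils down to calling `.to()` on a plain string: `Trainer` moves whatever is passed as `model` onto the target device, so it must be an object exposing `.to(device)`, not a checkpoint name. A stdlib-only sketch of the failure mode (`TinyModel` and `move_model_to_device` here are hypothetical stand-ins, not actual transformers code):

```python
class TinyModel:
    """Hypothetical stand-in for a PreTrainedModel: anything exposing .to(device)."""

    def to(self, device):
        self.device = device
        return self


def move_model_to_device(model, device):
    # Mirrors the failing line inside Trainer._move_model_to_device
    return model.to(device)


try:
    move_model_to_device("distilgpt2", "cpu")  # a checkpoint *name*, not a model object
except AttributeError as exc:
    print(exc)  # 'str' object has no attribute 'to'

print(move_model_to_device(TinyModel(), "cpu").device)  # cpu
```

In the real script the fix is to load the checkpoint first (e.g. `AutoModelForSequenceClassification.from_pretrained("distilgpt2")`) and pass that object to `Trainer` instead of the string.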
### Expected behavior
run the model | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28389/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28388 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28388/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28388/comments | https://api.github.com/repos/huggingface/transformers/issues/28388/events | https://github.com/huggingface/transformers/issues/28388 | 2,070,280,146 | I_kwDOCUB6oc57ZffS | 28,388 | How to use an efficient encoder as shared EncoderDecoderModel? | {
"login": "Bachstelze",
"id": 19904888,
"node_id": "MDQ6VXNlcjE5OTA0ODg4",
"avatar_url": "https://avatars.githubusercontent.com/u/19904888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bachstelze",
"html_url": "https://github.com/Bachstelze",
"followers_url": "https://api.github.com/users/Bachstelze/followers",
"following_url": "https://api.github.com/users/Bachstelze/following{/other_user}",
"gists_url": "https://api.github.com/users/Bachstelze/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bachstelze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bachstelze/subscriptions",
"organizations_url": "https://api.github.com/users/Bachstelze/orgs",
"repos_url": "https://api.github.com/users/Bachstelze/repos",
"events_url": "https://api.github.com/users/Bachstelze/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bachstelze/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"Deberta as shared encoder and decoder would be also good: https://github.com/huggingface/transformers/issues/12436"
] | 1,704 | 1,704 | null | NONE | null | ### Feature request
Efficient encoders like DistilBERT, ALBERT or ELECTRA aren't supported as the decoder of the EncoderDecoderModel, so they can't be shared as encoder and decoder.
### Motivation
Warm-starting shared models is a powerful way to build transformer models. Yet the efficient models can't be used.
### Your contribution
We could implement support for DistilBERT, ALBERT or ELECTRA. They shouldn't be that different from other encoders. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28388/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28386 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28386/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28386/comments | https://api.github.com/repos/huggingface/transformers/issues/28386/events | https://github.com/huggingface/transformers/pull/28386 | 2,070,056,730 | PR_kwDOCUB6oc5jdGp0 | 28,386 | Fix wrong xpu device in DistributedType.MULTI_XPU mode | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ArthurZucker would you take a look at this PR as well? Not quite sure who is the best reviewer for this case. Many thanks!",
"Hi folks, any concern or feedback for this PR? ",
"cc @abhilash1910",
"Thanks @muellerzr for highlighting. As per our previous issue : #27716 (cc @ArthurZucker ) , this was causing an issue in ipex . @faaany I will sync with you offline to find a proper solution. Thanks",
"#28442 might give a solution as well",
"> #28442 might give a solution as well\r\n\r\nThanks for the review! I will have an offline meeting with @abhilash1910 and @zhuhong61 for an alignment and get back to you soon.\r\n",
"@ArthurZucker we talked with each other offline today and decided to move forward with the solution proposed in this PR. \r\nPls give your comments as well, thanks a lot! @abhilash1910 @zhuhong61",
"Hi @ArthurZucker We can close this PR https://github.com/huggingface/transformers/pull/28442, and use #28386 as the final solution. Thanks",
"Thanks, does this need a specific version of torch / accelerate to work (which I would suspect as the patch was introduced )",
"> Thanks, does this need a specific version of torch / accelerate to work (which I would suspect as the patch was introduced )\r\n\r\nyes, a patched version for pytorch is still needed for XPU currently.",
"> Thanks, does this need a specific version of torch / accelerate to work (which I would suspect as the patch was introduced )\r\n\r\n@ArthurZucker, yes, for making XPU work, people need to follow this link https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.1.10%2Bxpu to set up env, it includes the installation intel oneCCL, intel patched PyTorch and Intel Extensions for PyTorch, these 3 support xpu.\r\n\r\nAnd Intel PyTorch team is also working with Meta to make xpu a built-in device of stock PyTorch. By that time, we can use XPU with stock PyTorch.",
"Hi @muellerzr @ArthurZucker, this PR has been there for a while and we are also internally aligned with this solution. Do you guys still have any concerns? ",
"Yes for sure! ",
"> LGTM my only concern is that this can only work starting latest version of torch, thus we would need to keep the old solution for older versions. WDYT?\r\n\r\nHi @ArthurZucker , the old solution is no longer needed as the changes discussed here fall perfectly in line with the expectations from the xpu . Conclusively this implies that the code changes can be used as is ( as it was being used before with accelerate and others) . \r\nFor any breaking changes related to core pytorch which would require change of apis on xpu side, we would definitely raise a significant PR to address those. But as of now, this change is perfectly fine with the existing software stack. Thanks",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28386). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"> LGTM my only concern is that this can only work starting latest version of torch, thus we would need to keep the old solution for older versions. WDYT?\r\n\r\nHmmm, I don't think people using older versions of torch would have problems with our current fix. In contrast, they would get the correct device set once our PR is merged. But anyway, thanks so much for the review and approval! ",
"Hi @ArthurZucker, could you help merge this PR? Many thanks!",
"Thanks for the explanations! "
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | ## Problem
When running LoRA fine-tuning on XPU in a single-node, multi-card setup, I noticed that the device is not correctly set up for distributed fine-tuning in the "__setup_devices" function.
![image](https://github.com/huggingface/transformers/assets/24477841/8f5a0233-86ef-4854-85c4-e0b5f02dc7ce)
As can be seen from the red box, if the distributed_type is `DistributedType.MULTI_XPU`, the device shouldn't be hard-coded to "xpu:0". Otherwise, the GPU process with rank 1 will end up moving its model to "xpu:0" instead of the correct "xpu:1".
## Solution
We can remove the entire code in the red box because the device has already been correctly set in the orange box above, which is consistent with the device set by the accelerator, shown below:
![image](https://github.com/huggingface/transformers/assets/24477841/ded014cd-de65-44c5-b5a6-c59f17ac7823)
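The intended mapping is simply rank → card. A stdlib-only sketch of the idea (`select_device` is a hypothetical helper for illustration, not the actual transformers code):

```python
def select_device(local_rank: int, backend: str = "xpu") -> str:
    # Each distributed worker must target its own card; hard-coding
    # index 0 would move every rank's model onto the same device.
    return f"{backend}:{local_rank}"


print(select_device(0))  # xpu:0
print(select_device(1))  # xpu:1 -- what rank 1 should get, not xpu:0
```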
## Reproduce
My training script is from [this](https://huggingface.co./blog/Lora-for-sequence-classification-with-Roberta-Llama-Mistral) blog article and here is the command for launching this script:
```bash
accelerate launch training.py
```
And this is my Environment:
```bash
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.1
- Accelerate version: 0.25.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_XPU
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 1,2
- rdzv_backend: static
- same_network: True
- main_training_function: main
- ipex_config: {'use_xpu': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.1.0a0+cxx11.abi (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
```
Pls, have a review, thanks!
@muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28386/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28386",
"html_url": "https://github.com/huggingface/transformers/pull/28386",
"diff_url": "https://github.com/huggingface/transformers/pull/28386.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28386.patch",
"merged_at": 1705667334000
} |
https://api.github.com/repos/huggingface/transformers/issues/28385 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28385/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28385/comments | https://api.github.com/repos/huggingface/transformers/issues/28385/events | https://github.com/huggingface/transformers/issues/28385 | 2,069,908,917 | I_kwDOCUB6oc57YE21 | 28,385 | model.generate() produces different results with paddings | {
"login": "zhentaocc",
"id": 90437536,
"node_id": "MDQ6VXNlcjkwNDM3NTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/90437536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhentaocc",
"html_url": "https://github.com/zhentaocc",
"followers_url": "https://api.github.com/users/zhentaocc/followers",
"following_url": "https://api.github.com/users/zhentaocc/following{/other_user}",
"gists_url": "https://api.github.com/users/zhentaocc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhentaocc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhentaocc/subscriptions",
"organizations_url": "https://api.github.com/users/zhentaocc/orgs",
"repos_url": "https://api.github.com/users/zhentaocc/repos",
"events_url": "https://api.github.com/users/zhentaocc/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhentaocc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Feel free to check https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535 for a better explanation ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | NONE | null | ### System Info
I found this issue when trying to reproduce the `https://huggingface.co./spaces/HuggingFaceH4/open_llm_leaderboard` results, specifically gsm8k on `mncai/Llama2-7B-guanaco-dolphin-500`. My result is 13.12 while the one reported was 5.99.
I also found that paddings make the outputs different (not sure which ones are correct). For example, with batch size = 1, which means no padding, it's 13.12; with batch size = 8, it is 5.69.
I traced the root cause: the same input token ids can get different outputs with or without padding tokens.
The `pad_token_id` was 2, which is the same as the `eos_token_id`. If I set `pad_token_id` to 0, the results seem the same as those without padding.
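A pure-Python sketch of why reusing the id is ambiguous (toy token ids, not real model output): once `pad_token_id == eos_token_id`, padding cannot be distinguished from a genuine end-of-sequence marker by id alone.

```python
EOS = 2
PAD = 2                      # the reported setup: pad_token_id == eos_token_id
seq = [5, 9, EOS, PAD, PAD]  # a right-padded batch row

# Stripping "padding" by id also removes the genuine <eos> marker:
print([t for t in seq if t != PAD])  # [5, 9]

# With a distinct pad id (here 0) the sequence boundary survives:
PAD = 0
seq = [5, 9, EOS, PAD, PAD]
print([t for t in seq if t != PAD])  # [5, 9, 2]
```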
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I tested following runs on RTX8000.
1. `python main.py --model=hf-causal-experimental --model_args="pretrained=mncai/Llama2-7B-guanaco-dolphin-500," --tasks=gsm8k --num_fewshot=5 --batch_size=1 `
2. `python main.py --model=hf-causal-experimental --model_args="pretrained=mncai/Llama2-7B-guanaco-dolphin-500," --tasks=gsm8k --num_fewshot=5 --batch_size=8 `
### Expected behavior
1. accuracy should be ~13.12
2. accuracy should be ~5.69 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28385/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28384 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28384/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28384/comments | https://api.github.com/repos/huggingface/transformers/issues/28384/events | https://github.com/huggingface/transformers/issues/28384 | 2,069,899,827 | I_kwDOCUB6oc57YCoz | 28,384 | add_tokens does not preserve spacing | {
"login": "denizyuret",
"id": 1822118,
"node_id": "MDQ6VXNlcjE4MjIxMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1822118?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/denizyuret",
"html_url": "https://github.com/denizyuret",
"followers_url": "https://api.github.com/users/denizyuret/followers",
"following_url": "https://api.github.com/users/denizyuret/following{/other_user}",
"gists_url": "https://api.github.com/users/denizyuret/gists{/gist_id}",
"starred_url": "https://api.github.com/users/denizyuret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/denizyuret/subscriptions",
"organizations_url": "https://api.github.com/users/denizyuret/orgs",
"repos_url": "https://api.github.com/users/denizyuret/repos",
"events_url": "https://api.github.com/users/denizyuret/events{/privacy}",
"received_events_url": "https://api.github.com/users/denizyuret/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Few things can come into play. By default the token will be normalized, this means that the normalizer will be adding a prefix space to that token. When decoding, that space is removed. You should add the token using `tokenizer.add_tokens(AddedToken(\"Deniz\", normalized = False, special=False))`",
"This time there is an extra space that appears after the token:\r\n```\r\n>>> from transformers import AddedToken, AutoTokenizer\r\n>>> mtok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')\r\n>>> mtok.add_tokens(AddedToken('Deniz', normalized=False, special=False))\r\n1\r\n>>> str = 'Arthur met Deniz today.'\r\n>>> mtok.decode(mtok.encode(str))\r\n'<s> Arthur met Deniz today.'\r\n```\r\n\r\nHere are the individual tokens if helps:\r\n```\r\n>>> mtok.convert_ids_to_tokens(mtok.encode(str))\r\n['<s>', '▁Arthur', '▁met', '▁', 'Deniz', '▁', '▁today', '.']\r\n```",
"That is also expected, you should either use a slow tokenizer (`use_fast = False`) or follow the fix that is here: #26678. It's a known issue of the normalizer. ",
"`use_fast=False` has an interesting behavior:\r\n```\r\n>>> mtok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1', use_fast=False)\r\n>>> mtok.add_tokens([AddedToken('Deniz', normalized=False, special=False)])\r\n>>> mtok.decode(mtok.encode('Arthur met Deniz today.'))\r\n'<s>Arthur met Deniz today.'\r\n>>> mtok.convert_ids_to_tokens(mtok.encode('Arthur met Deniz today.'))\r\n['<s>', '▁Arthur', '▁met', '▁', 'Deniz', '▁', '▁today', '.']\r\n```\r\ni.e. `decode` puts extra space before the added token, however `convert_ids_to_tokens` shows an extra space after the added token.\r\n\r\nI tried adding ' Deniz' or '▁Deniz' thinking it would better match the behavior of regular tokens like '▁Arthur' but no success.\r\n\r\nSo far I haven't been able to find a combination of options that will preserve spacing. Will check out your fix next.",
"This is the combination that works:\r\n```python \r\nIn [44]: mtok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1', use_fast=False, legacy = False)\r\n\r\nIn [45]: mtok.tokenize('Arthur met Deniz today.')\r\nOut[45]: ['▁Arthur', '▁met', '▁Den', 'iz', '▁today', '.']\r\n\r\nIn [46]: mtok.add_tokens([AddedToken('Deniz', normalized=False, special=False)])\r\nOut[46]: 1\r\n\r\nIn [47]: mtok.tokenize('Arthur met Deniz today.')\r\nOut[47]: ['▁Arthur', '▁met', '▁', 'Deniz', '▁today', '.']\r\n```\u001b\r\n",
"`mtok.decode` still throws in some extra spaces, but this works for model training etc. Thanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.35.2
- Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.6
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
>>> from transformers import AutoTokenizer
>>> mtok = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')
>>> str = 'Arthur met Deniz today.'
>>> mtok.decode(mtok.encode(str))
'<s> Arthur met Deniz today.'
>>> mtok.add_tokens(['Deniz'])
1
>>> mtok.decode(mtok.encode(str))
'<s> Arthur metDeniz today.'
```
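The missing space matches how SentencePiece-style tokenizers encode word boundaries: regular tokens carry a leading metaspace (`▁`) that decoding turns back into a space, while the added token is matched without one. A toy decoder illustrating the effect (a simplification for illustration, not the real implementation):

```python
def decode(tokens):
    # "▁" marks a leading space in SentencePiece-style vocabularies.
    return "".join(t.replace("▁", " ") for t in tokens).strip()


# Before adding the token, "Deniz" is split and keeps its "▁" boundary:
print(decode(['▁Arthur', '▁met', '▁Den', 'iz', '▁today', '.']))  # Arthur met Deniz today.

# The added token "Deniz" is matched without a metaspace, so the space is lost:
print(decode(['▁Arthur', '▁met', 'Deniz', '▁today', '.']))       # Arthur metDeniz today.
```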
### Expected behavior
Spaces should be preserved when text is encoded/decoded. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28384/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28387 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28387/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28387/comments | https://api.github.com/repos/huggingface/transformers/issues/28387/events | https://github.com/huggingface/transformers/issues/28387 | 2,070,201,891 | I_kwDOCUB6oc57ZMYj | 28,387 | Issue with Adding New Tokens to ESM2 Model Tokenizer | {
"login": "mahdip72",
"id": 42680708,
"node_id": "MDQ6VXNlcjQyNjgwNzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/42680708?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mahdip72",
"html_url": "https://github.com/mahdip72",
"followers_url": "https://api.github.com/users/mahdip72/followers",
"following_url": "https://api.github.com/users/mahdip72/following{/other_user}",
"gists_url": "https://api.github.com/users/mahdip72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mahdip72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mahdip72/subscriptions",
"organizations_url": "https://api.github.com/users/mahdip72/orgs",
"repos_url": "https://api.github.com/users/mahdip72/repos",
"events_url": "https://api.github.com/users/mahdip72/events{/privacy}",
"received_events_url": "https://api.github.com/users/mahdip72/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Seems like a bug with ESMTokenizer (which doesn't use this library).\r\n\r\n@ArthurZucker for insights or the more relevant people?",
"Hey, I cannot reproduce this: \r\n```python \r\nIn [23]: model_checkpoint = \"facebook/esm2_t6_8M_UR50D\"\r\n ...: tokenizer_2 = AutoTokenizer.from_pretrained(model_checkpoint)\r\nhuggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\r\nTo disable this warning, you can either:\r\n - Avoid using `tokenizers` before the fork if possible\r\n - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\r\ntokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 95.0/95.0 [00:00<00:00, 135kB/s]\r\nvocab.txt: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 93.0/93.0 [00:00<00:00, 247kB/s]\r\nspecial_tokens_map.json: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 125/125 [00:00<00:00, 416kB/s]\r\n\r\nIn [24]: tokenizer_2\r\nOut[24]: \r\nEsmTokenizer(name_or_path='facebook/esm2_t6_8M_UR50D', vocab_size=33, model_max_length=1024, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<eos>', 'unk_token': '<unk>', 'pad_token': '<pad>', 'cls_token': '<cls>', 'mask_token': '<mask>'}, clean_up_tokenization_spaces=True), added_tokens_decoder={\r\n 0: AddedToken(\"<cls>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 1: AddedToken(\"<pad>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 2: AddedToken(\"<eos>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 
3: AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 32: AddedToken(\"<mask>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n}\r\n```\r\n\r\n```python \r\n>>> tokenizer_2.add_tokens([\"J\"]) \r\nEsmTokenizer(name_or_path='facebook/esm2_t6_8M_UR50D', vocab_size=33, model_max_length=1024, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<eos>', 'unk_token': '<unk>', 'pad_token': '<pad>', 'cls_token': '<cls>', 'mask_token': '<mask>', 'additional_special_tokens': ['J']}, clean_up_tokenization_spaces=True), added_tokens_decoder={\r\n 0: AddedToken(\"<cls>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 1: AddedToken(\"<pad>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 2: AddedToken(\"<eos>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 3: AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 32: AddedToken(\"<mask>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n 33: AddedToken(\"J\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\r\n}\r\n```\r\n\r\n```python\r\nIn [29]: tokenizer_2.get_vocab()\r\nOut[29]: \r\n{'<cls>': 0,\r\n '<pad>': 1,\r\n '<eos>': 2,\r\n '<unk>': 3,\r\n 'L': 4,\r\n 'A': 5,\r\n 'G': 6,\r\n 'V': 7,\r\n 'S': 8,\r\n 'E': 9,\r\n 'R': 10,\r\n 'T': 11,\r\n 'I': 12,\r\n 'D': 13,\r\n 'P': 14,\r\n 'K': 15,\r\n 'Q': 16,\r\n 'N': 17,\r\n 'F': 18,\r\n 'Y': 19,\r\n 'M': 20,\r\n 'H': 21,\r\n 'W': 22,\r\n 'C': 23,\r\n 'X': 24,\r\n 'B': 25,\r\n 'U': 26,\r\n 'Z': 27,\r\n 'O': 28,\r\n '.': 29,\r\n '-': 30,\r\n '<null_1>': 31,\r\n '<mask>': 32}\r\n```",
"> My main problem is that I noticed the length of the tokenizer does not change after adding the new token and therefore the above code does not extend the embeddings layer as expected.\r\n\r\n@ArthurZucker My problem is not with being a special token. When I am adding new tokens the vocab size does not change (33). Could you help me understand how to correctly increase the embedding size of the model?\r\n\r\nDoes it make sense if I define it manually?\r\n```python\r\nmodel_checkpoint = \"facebook/esm2_t6_8M_UR50D\"\r\nmodel = AutoModelForMaskedLM.from_pretrained(model_checkpoint)\r\ntokenizer = AutoTokenizer.from_pretrained(model_checkpoint)\r\nnum_added_toks = tokenizer.add_tokens(['J'])\r\nmodel.resize_token_embeddings(33 + num_added_toks)\r\n```",
"If the token is already part of the vocab, it is expected that the vocab size will not change",
"@ArthurZucker I am adding completely new tokens. I see them being added to the tokenizer. But the vocab size doesn't changed despite the fact that the new indexes are being set as the additional_special_tokens_ids.\r\nI bypassed the issue using the following line:\r\n\r\n```python\r\nmodel.resize_token_embeddings(max(tokenizer.additional_special_tokens_ids))\r\n```",
"The length of the vocab is different from the max if you have holes in the vocab. This ESMTokenizer uses the length as number of tokens rather than the max! \r\nNice fix and not sure we should change it no? ",
"@ArthurZucker @Narsil I fixed my problem, but others using ESM models might still have trouble. These models are very important for protein research now. The way the tokenizer counts words can confuse people when they try to make the model learn new tokens. This is different from the usual instruction of extending embedding layer such as llama 2 and could cause errors. Clearer steps in documentation or a fix in the tokenizer might help researchers.",
"cc @Rocketknight1 we might want to update that? WDYT? \r\n@mahdip72 would you like to open a pr for doc fixes? ",
"Hi all, I investigated the issue. There is indeed [specific code in the ESM tokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/esm/tokenization_esm.py#L161) that causes all new added tokens to be counted as 'special' tokens. I suspect the reason for this was that the authors felt the token list for proteins was constant (since it was just the list of amino acids), and therefore any new token had to be outside the normal vocabulary.\r\n\r\nIn your case @mahdip72, I'm guessing you want to add either nonstandard amino acids or tokens like `J` that represent \"leucine OR isoleucine\", correct? This is a valid use-case for ESM, and I think we should update the tokenizer code to support it. There is the issue of backward compatibility, though, so I see two possible solutions:\r\n\r\n1 (More backward compatible):\r\nUpdate `add_tokens` so that it keeps `special_tokens=True` as the default, but lets users manually specify `special_tokens=False` for cases like this\r\n\r\n2 (Matches workflows for other models):\r\nUpdate `add_tokens` so that `special_tokens=False` is the default, like other models. Users will need to manually specify `special_tokens=True` to add tokens as special tokens. This is probably a better solution, but it may break existing workflows.\r\n\r\nI'll see if I can grab a member of the ESM team to comment on this!",
"> In your case @mahdip72, I'm guessing you want to add either nonstandard amino acids or tokens like J that represent \"leucine OR isoleucine\", correct? \r\n\r\nIt is correct. My goal is to add new non-separatable tokens like the ESM vocabs to the ESM tokenizer. Also, I have seen lots of folk are adding non-separable 3Di [fold seek](https://www.nature.com/articles/s41587-023-01773-0) tokens and/or chemical-related tokens such as [SELFIES](https://arxiv.org/abs/1905.13741) to the protein language models. As far as I am understand, these tokens are non-separable and constant, similar to amino acids tokens.\r\n\r\n@Rocketknight1 Are special tokens constant and inseparable? What is the difference between normal tokens and special tokens in the ESM tokenizer?",
"Hi @mahdip72, the idea of \"special tokens\" mostly comes from tokenization for language models. In general, special tokens have two main properties:\r\n\r\n- Special tokens can be skipped when decoding using `skip_special_tokens = True`.\r\n- Special tokens are never split by the tokenizer.\r\n\r\nThese traits aren't especially relevant for ESM - in general, people aren't generating sequences with ESM and so tokenizer decoding doesn't apply, and secondly ESM never splits the text it tokenizes because it always converts one character to one token, unlike tokenizers like sentencepiece that are commonly used for natural language.\r\n\r\nI think the most sensible solution is to just update `add_tokens` for ESM so it behaves like other models and adds tokens as \"non-special\" by default, even though this might affect backward compatibility slightly. What do you think?",
"@Rocketknight1 I Agree. A general solution similar to other models is more sensible.",
"Hi @mahdip72, I've opened a PR at #28535 that should resolve this. Can you try it out and let me know if it resolves your issue? To install the PR branch, run this command: `pip install --upgrade git+https://github.com/huggingface/transformers.git@allow_esm_add_tokens`",
"Thanks for looking into this Matt! Agree ignoring special_tokens and force `special_tokens=True` was problematic there.\r\n Makes sense to make the behavior consistent with the rest of huggingface ecosystem, although I'm a bit nervous about silently breaking backward compatibility for any users who were adding special tokens without explicilty specifying the flag special_tokens=True there",
"Yeah @tomsercu - it was a bit of a concern for us too! I suspect adding tokens was quite niche, though - hopefully the improvement from standardization outweighs the very small number of people who were depending on the old behaviour.",
"Agreed. Thanks for the fix Matt!"
] | 1,704 | 1,705 | 1,705 | NONE | null | Hello
I am encountering an issue while working with the ESM2 models (`facebook/esm2_t6_8M_UR50D`). Specifically, when I try to add new tokens to the tokenizer, they are automatically classified as special tokens, even though I am specifying `special_tokens=False`.
Here is the code snippet I am using:
```python
model_checkpoint = "facebook/esm2_t6_8M_UR50D"
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
num_added_toks = tokenizer.add_tokens(['J'], special_tokens=False)
print("We have added", num_added_toks, "tokens")
model.resize_token_embeddings(len(tokenizer))
```
After executing this code, the new token ('J') is added as a special token, which is not the intended behavior. This differs from similar code with BERT models, where new tokens are added as expected without being automatically marked as special.
The vocab output is below:
```python
<bound method EsmTokenizer.get_vocab of EsmTokenizer(name_or_path='facebook/esm2_t6_8M_UR50D', vocab_size=33, model_max_length=1024, is_fast=False, padding_side='right', truncation_side='right', special_tokens={'eos_token': '<eos>', 'unk_token': '<unk>', 'pad_token': '<pad>', 'cls_token': '<cls>', 'mask_token': '<mask>', 'additional_special_tokens': ['J']}, clean_up_tokenization_spaces=True), added_tokens_decoder={
	0: AddedToken("<cls>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	1: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	2: AddedToken("<eos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	3: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	32: AddedToken("<mask>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
	33: AddedToken("J", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}>
```
My main problem is that I noticed the **length of the tokenizer** does not change after adding the new token and therefore the above code does not extend the embeddings layer as expected.
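For reference, here is a pure-Python sketch of the mismatch (hypothetical, not the actual ESM tokenizer internals): if the added token is tracked outside the base vocab, `len()`-based resizing under-allocates the embedding matrix by one row.

```python
# Hypothetical sketch, not the actual ESM tokenizer internals: the base
# vocab keeps its 33 entries while the added token only appears in a
# separate added-tokens table, so len()-based resizing misses it.
base_vocab = {tok: i for i, tok in enumerate(
    ["<cls>", "<pad>", "<eos>", "<unk>"]
    + list("LAGVSERTIDPKQNFYMHWCXBUZO.-")
    + ["<null_1>", "<mask>"]
)}
added_tokens = {"J": 33}  # next free id, kept outside base_vocab

rows_needed = max(list(base_vocab.values()) + list(added_tokens.values())) + 1
rows_from_len = len(base_vocab)
print(rows_needed, rows_from_len)  # 34 33
```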
I'm seeking guidance or a workaround for this issue. Is this a known issue with the ESM2 tokenizer, or am I missing something in my implementation?
Any help or insight into this matter would be greatly appreciated.
Thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28387/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28383 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28383/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28383/comments | https://api.github.com/repos/huggingface/transformers/issues/28383/events | https://github.com/huggingface/transformers/issues/28383 | 2,069,570,341 | I_kwDOCUB6oc57WyMl | 28,383 | GPUA10:QWenLMHeadModel does not support Flash Attention 2.0 yet | {
"login": "PolarPeak",
"id": 44831329,
"node_id": "MDQ6VXNlcjQ0ODMxMzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/44831329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PolarPeak",
"html_url": "https://github.com/PolarPeak",
"followers_url": "https://api.github.com/users/PolarPeak/followers",
"following_url": "https://api.github.com/users/PolarPeak/following{/other_user}",
"gists_url": "https://api.github.com/users/PolarPeak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PolarPeak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PolarPeak/subscriptions",
"organizations_url": "https://api.github.com/users/PolarPeak/orgs",
"repos_url": "https://api.github.com/users/PolarPeak/repos",
"events_url": "https://api.github.com/users/PolarPeak/events{/privacy}",
"received_events_url": "https://api.github.com/users/PolarPeak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey, I don't think Qwen was merged into transformers it's a code on the hub, you should open the issue directly on the model discussion page : https://huggingface.co./Qwen/Qwen-72B/discussions"
] | 1,704 | 1,704 | 1,704 | NONE | null | ValueError: QWenLMHeadModel does not support Flash Attention 2.0 yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
My GPU is an A10
transformers>=4.36.2 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28383/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28382 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28382/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28382/comments | https://api.github.com/repos/huggingface/transformers/issues/28382/events | https://github.com/huggingface/transformers/issues/28382 | 2,069,500,600 | I_kwDOCUB6oc57WhK4 | 28,382 | rewrite trainer's save_model method get unexpected pytorch_model.bin file | {
"login": "Chandler-Bing",
"id": 29994840,
"node_id": "MDQ6VXNlcjI5OTk0ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/29994840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Chandler-Bing",
"html_url": "https://github.com/Chandler-Bing",
"followers_url": "https://api.github.com/users/Chandler-Bing/followers",
"following_url": "https://api.github.com/users/Chandler-Bing/following{/other_user}",
"gists_url": "https://api.github.com/users/Chandler-Bing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Chandler-Bing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Chandler-Bing/subscriptions",
"organizations_url": "https://api.github.com/users/Chandler-Bing/orgs",
"repos_url": "https://api.github.com/users/Chandler-Bing/repos",
"events_url": "https://api.github.com/users/Chandler-Bing/events{/privacy}",
"received_events_url": "https://api.github.com/users/Chandler-Bing/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello, this does not seem to be a direct issue, so would recommend you to ask on[ the forum ](https://discuss.huggingface.co/) 🤗 "
] | 1,704 | 1,706 | 1,706 | NONE | null | ### System Info
- `transformers` version: 4.33.1
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes, deepspeed zero2
### Who can help?
@pacman100 @muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. I rewrote the trainer's `save_model` method as below:
```
def save_model(self, output_dir=None, _internal_call=False):
os.makedirs(output_dir, exist_ok=True)
self.model.save_pretrained(output_dir)
```
2. training info
- deepspeed zero2
- 1 node, 8 GPUs
3. It seems `save_pretrained` has a default `max_shard_size=10GB`, so I expect 2 bin files, each less than 10GB. However, I get one 14GB pytorch_model.bin. **Why?**
4. I find that if I don't rewrite `save_model`, it behaves normally, and the executed code is in trainer.py line 2784:
```
elif self.is_deepspeed_enabled:
# this takes care of everything as long as we aren't under zero3
if version.parse(accelerate_version) <= version.parse("0.20.3"):
raise ValueError("Install Accelerate from main branch")
try:
state_dict = self.accelerator.get_state_dict(self.deepspeed)
if self.args.should_save:
self._save(output_dir, state_dict=state_dict)
except ValueError:
logger.warning(
" stage3_gather_16bit_weights_on_model_save=false. Saving the full checkpoint instead, use"
" zero_to_fp32.py to recover weights"
)
self._save(output_dir, state_dict={})
# remove the dummy state_dict
remove_dummy_checkpoint(self.args.should_save, output_dir, [WEIGHTS_NAME, SAFE_WEIGHTS_NAME])
self.model_wrapped.save_checkpoint(output_dir)
```
`state_dict = self.accelerator.get_state_dict(self.deepspeed)` **What's the difference between `self.accelerator.get_state_dict(self.deepspeed)` and `self.model.state_dict()`?**
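For context, a minimal, self-contained sketch of the greedy sharding I expected from `max_shard_size=10GB` (illustrative only, not the actual `save_pretrained` implementation): ~14GB of fp16 weights should land in 2 files, each under 10GB.

```python
# Illustrative greedy sharding, not the actual transformers code:
# start a new shard whenever adding the next tensor would exceed the cap.
def shard(weight_sizes_gb, max_shard_gb=10.0):
    shards, current, current_size = [], [], 0.0
    for name, size in weight_sizes_gb:
        if current and current_size + size > max_shard_gb:
            shards.append(current)
            current, current_size = [], 0.0
        current.append(name)
        current_size += size
    if current:
        shards.append(current)
    return shards

weights = [(f"layer.{i}.weight", 0.5) for i in range(28)]  # ~14 GB total
print(len(shard(weights)))  # 2
```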
### Expected behavior
See above.
Thanks for answering! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28382/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28381 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28381/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28381/comments | https://api.github.com/repos/huggingface/transformers/issues/28381/events | https://github.com/huggingface/transformers/issues/28381 | 2,069,429,917 | I_kwDOCUB6oc57WP6d | 28,381 | PhiForCausalLM does not support Flash Attention 2.0 | {
"login": "gmittal",
"id": 2015126,
"node_id": "MDQ6VXNlcjIwMTUxMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/2015126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmittal",
"html_url": "https://github.com/gmittal",
"followers_url": "https://api.github.com/users/gmittal/followers",
"following_url": "https://api.github.com/users/gmittal/following{/other_user}",
"gists_url": "https://api.github.com/users/gmittal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gmittal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gmittal/subscriptions",
"organizations_url": "https://api.github.com/users/gmittal/orgs",
"repos_url": "https://api.github.com/users/gmittal/repos",
"events_url": "https://api.github.com/users/gmittal/events{/privacy}",
"received_events_url": "https://api.github.com/users/gmittal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Hi, I would like to work on this issue",
"Support for Phi-2 is still WIP, you can follow the progress here: https://github.com/huggingface/transformers/pull/28163",
"Hi @gmittal, Flash Attention is already implemented for Phi, [PR](https://github.com/huggingface/transformers/pull/27661) \r\n\r\nIt seems that you are using the hub version of `phi-2`. Please use it from the library to properly enable Flash Attention.\r\nFor now `microsoft/phi-2`, does not have the correct order of the weights to be used with the library model so please use it from `susnato/phi-2`.\r\n\r\nFirst update to the latest transformers version - \r\n```\r\npip install -U transformers\r\n```\r\n\r\nthen run - \r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"susnato/phi-2\", \r\n use_flash_attention_2=True, \r\n torch_dtype=torch.float16)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"susnato/phi-2\")\r\n\r\ninputs = tokenizer('''def print_prime(n):\r\n \"\"\"\r\n Print all primes between 1 and n\r\n \"\"\"''', return_tensors=\"pt\", return_attention_mask=False)\r\n\r\noutputs = model.generate(**inputs, max_length=200)\r\ntext = tokenizer.batch_decode(outputs)[0]\r\nprint(text)\r\n```\r\n\r\n\r\nLet me know if this works or not.",
"I would like to work on this issue\r\n",
"Using HF alignment notebook, DPO script gives me this error regardless of transformers version. (I already force updated with pip). When I remove flash attention from the yaml it works (after a bit of code adjustment). I am able to fine tune with one of my sft scripts using flash attention, which is the strange part.",
"Hello everyone!\r\n\r\nThis should be fixed in transformers 4.37.0.dev. If not using that version, please make sure that `trust_remote_code=True` when loading the model and it should work out-of-the-box with flash-attention 2.",
"Thanks! Closing as this was fixed in https://github.com/huggingface/transformers/pull/28163",
"I installed from source so now i am on transformers 4.37.dev0 and i am still getting the Incompatible error, even with trust remote code set to true.\r\n\r\n`C:\\Users\\PC\\Documents\\Code-Trainer\\FineTune>py FINETUNERphiFP16.py --model_name_or_path C:\\Users\\PC\\Documents\\NEWGEN\\text-generation-webui-main\\models\\dolphin-2_6-phi-2 --data_path MiniCoderW.json --output_dir C:\\Users\\PC\\Documents\\NEWGEN\\text-generation-webui-main\\models\\TrainedPhi --num_train_epochs 3 --model_max_length 1024 --per_device_train_batch_size 1 --evaluation_strategy \"no\" --save_strategy \"steps\" --save_steps 1000 --save_total_limit 10 --learning_rate 2e-5 --warmup_steps 10 --logging_steps 10 --lr_scheduler_type \"cosine\" --report_to \"tensorboard\" --bf16 False --dataloader_num_workers 12 --optim paged_adamw_8bit\r\nWARNING:tensorflow:From C:\\Python311\\Lib\\site-packages\\keras\\src\\losses.py:2976: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy 
instead.\r\n\r\n====================================================================================================\r\nTrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\nauto_find_batch_size=False,\r\nbf16=False,\r\nbf16_full_eval=False,\r\ncache_dir=None,\r\ndata_seed=None,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=12,\r\ndataloader_persistent_workers=False,\r\ndataloader_pin_memory=True,\r\nddp_backend=None,\r\nddp_broadcast_buffers=None,\r\nddp_bucket_cap_mb=None,\r\nddp_find_unused_parameters=None,\r\nddp_timeout=1800,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndispatch_batches=None,\r\ndo_eval=False,\r\ndo_predict=False,\r\ndo_train=False,\r\neval_accumulation_steps=None,\r\neval_delay=0,\r\neval_steps=None,\r\nevaluation_strategy=IntervalStrategy.NO,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\nfsdp=[],\r\nfsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': 
False},\r\nfsdp_min_num_params=0,\r\nfsdp_transformer_layer_cls_to_wrap=None,\r\nfull_determinism=False,\r\ngradient_accumulation_steps=1,\r\ngradient_checkpointing=False,\r\ngradient_checkpointing_kwargs=None,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nhalf_precision_backend=auto,\r\nhub_always_push=False,\r\nhub_model_id=None,\r\nhub_private_repo=False,\r\nhub_strategy=HubStrategy.EVERY_SAVE,\r\nhub_token=<HUB_TOKEN>,\r\nignore_data_skip=False,\r\ninclude_inputs_for_metrics=False,\r\ninclude_num_input_tokens_seen=False,\r\ninclude_tokens_per_second=False,\r\njit_mode_eval=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=2e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=0,\r\nlog_level=passive,\r\nlog_level_replica=warning,\r\nlog_on_each_node=True,\r\nlogging_dir=C:\\Users\\PC\\Documents\\NEWGEN\\text-generation-webui-main\\models\\TrainedPhi\\runs\\Jan12_23-36-31_Nicolas,\r\nlogging_first_step=False,\r\nlogging_nan_inf_filter=True,\r\nlogging_steps=10,\r\nlogging_strategy=IntervalStrategy.STEPS,\r\nlr_scheduler_kwargs={},\r\nlr_scheduler_type=SchedulerType.COSINE,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmodel_max_length=1024,\r\nmp_parameters=,\r\nneftune_noise_alpha=None,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noptim=OptimizerNames.PAGED_ADAMW_8BIT,\r\noptim_args=None,\r\noutput_dir=C:\\Users\\PC\\Documents\\NEWGEN\\text-generation-webui-main\\models\\TrainedPhi,\r\noverwrite_output_dir=False,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=1,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=None,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=<PUSH_TO_HUB_TOKEN>,\r\nray_scope=last,\r\nremove_unused_columns=True,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=C:\\Users\\PC\\Documents\\NEWGEN\\text-generation-webui-main\\models\\TrainedPhi,\r\nsave_on_each_
node=False,\r\nsave_only_model=False,\r\nsave_safetensors=True,\r\nsave_steps=1000,\r\nsave_strategy=IntervalStrategy.STEPS,\r\nsave_total_limit=10,\r\nseed=42,\r\nskip_memory_metrics=True,\r\nsplit_batches=False,\r\ntf32=None,\r\ntorch_compile=False,\r\ntorch_compile_backend=None,\r\ntorch_compile_mode=None,\r\ntorchdynamo=None,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_cpu=False,\r\nuse_ipex=False,\r\nuse_legacy_prediction_loop=False,\r\nuse_mps_device=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=10,\r\nweight_decay=0.0,\r\n)\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nPAD Token: <|endoftext|> 50256\r\nBOS Token <|endoftext|> 50256\r\nEOS Token <|im_end|> 50295\r\nLoad tokenizer from C:\\Users\\PC\\Documents\\NEWGEN\\text-generation-webui-main\\models\\dolphin-2_6-phi-2 over.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\PC\\Documents\\Code-Trainer\\FineTune\\FINETUNERphiFP16.py\", line 192, in <module>\r\n train()\r\n File \"C:\\Users\\PC\\Documents\\Code-Trainer\\FineTune\\FINETUNERphiFP16.py\", line 145, in train\r\n model = transformers.AutoModelForCausalLM.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Python311\\Lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 561, in from_pretrained\r\n return model_class.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Python311\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 3497, in from_pretrained\r\n config = cls._autoset_attn_implementation(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Python311\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 1340, in _autoset_attn_implementation\r\n cls._check_and_enable_flash_attn_2(\r\n File \"C:\\Python311\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 1420, in _check_and_enable_flash_attn_2\r\n raise ValueError(\r\nValueError: PhiForCausalLM does 
not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co./C:\\Users\\PC\\Documents\\NEWGEN\\text-generation-webui-main\\models\\dolphin-2_6-phi-2/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new\r\n`\r\n\r\n\r\nHere is the script I am using:\r\n\r\n`import copy\r\nimport random\r\nfrom dataclasses import dataclass, field\r\nfrom typing import Optional, Dict, Sequence\r\n\r\nimport torch\r\nimport transformers\r\nfrom transformers import Trainer\r\nfrom datasets import load_dataset\r\n\r\n\r\nIGNORE_INDEX = -100\r\nEOT_TOKEN = \"<|EOT|>\"\r\n\r\ndef build_instruction_prompt(instruction: str):\r\n return '''\r\nYou are an AI programming assistant, utilizing the DeepSeek Coder model, developed by DeepSeek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\r\n### Instruction:\r\n{}\r\n### Response:\r\n'''.format(instruction.strip()).lstrip()\r\n\r\n@dataclass\r\nclass ModelArguments:\r\n model_name_or_path: Optional[str] = field(default=\"deepseek-ai/deepseek-coder-6.7b-instruct\")\r\n\r\n@dataclass\r\nclass DataArguments:\r\n data_path: str = field(default=None, metadata={\"help\": \"Path to the training data.\"})\r\n\r\n\r\n@dataclass\r\nclass TrainingArguments(transformers.TrainingArguments):\r\n cache_dir: Optional[str] = field(default=None)\r\n optim: str = field(default=\"adamw_torch\")\r\n model_max_length: int = field(\r\n default=512,\r\n metadata={\"help\": \"Maximum sequence length. 
Sequences will be right padded (and possibly truncated).\"},\r\n )\r\n\r\ndef safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):\r\n \"\"\"Collects the state dict and dump to disk.\"\"\"\r\n state_dict = trainer.model.state_dict()\r\n if trainer.args.should_save:\r\n cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}\r\n del state_dict\r\n trainer._save(output_dir, state_dict=cpu_state_dict) # noqa\r\n\r\n\r\ndef _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict:\r\n \"\"\"Tokenize a list of strings.\"\"\"\r\n tokenized_list = [\r\n tokenizer(\r\n text,\r\n return_tensors=\"pt\",\r\n padding=\"longest\",\r\n max_length=tokenizer.model_max_length,\r\n truncation=True,\r\n )\r\n for text in strings\r\n ]\r\n\r\n input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list]\r\n input_ids_lens = labels_lens = [\r\n tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list\r\n ]\r\n\r\n return dict(\r\n input_ids=input_ids,\r\n labels=labels,\r\n input_ids_lens=input_ids_lens,\r\n labels_lens=labels_lens,\r\n )\r\n\r\n\r\ndef preprocess(\r\n sources: Sequence[str],\r\n targets: Sequence[str],\r\n tokenizer: transformers.PreTrainedTokenizer,\r\n) -> Dict:\r\n \"\"\"Preprocess the data by tokenizing.\"\"\"\r\n examples = [s + t for s, t in zip(sources, targets)]\r\n examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)]\r\n input_ids = examples_tokenized[\"input_ids\"]\r\n\r\n labels = copy.deepcopy(input_ids)\r\n for label, source_len in zip(labels, sources_tokenized[\"input_ids_lens\"]):\r\n label[:source_len] = IGNORE_INDEX\r\n return dict(input_ids=input_ids, labels=labels)\r\n\r\n@dataclass\r\nclass DataCollatorForSupervisedDataset(object):\r\n \"\"\"Collate examples for supervised fine-tuning.\"\"\"\r\n tokenizer: transformers.PreTrainedTokenizer\r\n\r\n def __call__(self, 
instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:\r\n input_ids, labels = tuple([instance[key] for instance in instances] for key in (\"input_ids\", \"labels\"))\r\n input_ids = [torch.tensor(x) for x in input_ids]\r\n input_ids = torch.nn.utils.rnn.pad_sequence(\r\n input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id\r\n )\r\n labels = [torch.tensor(x) for x in labels]\r\n labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX)\r\n \r\n return dict(\r\n input_ids=input_ids,\r\n labels=labels,\r\n attention_mask=input_ids.ne(self.tokenizer.pad_token_id),\r\n )\r\n\r\ndef train_tokenize_function(examples, tokenizer):\r\n sources = [\r\n build_instruction_prompt(instruction)\r\n for instruction in examples['instruction']\r\n ]\r\n targets = [f\"{output}\\n{EOT_TOKEN}\" for output in examples['output']]\r\n data_dict = preprocess(sources, targets, tokenizer)\r\n return data_dict\r\n\r\ndef train():\r\n parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))\r\n model_args, data_args, training_args = parser.parse_args_into_dataclasses()\r\n \r\n if training_args.local_rank == 0:\r\n print('='*100)\r\n print(training_args)\r\n \r\n tokenizer = transformers.AutoTokenizer.from_pretrained(\r\n model_args.model_name_or_path,\r\n model_max_length=training_args.model_max_length,\r\n padding_side=\"right\",\r\n use_fast=True,\r\n trust_remote_code=True\r\n )\r\n if tokenizer.pad_token is None:\r\n tokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\n print(\"PAD Token:\", tokenizer.pad_token, tokenizer.pad_token_id)\r\n print(\"BOS Token\", tokenizer.bos_token, tokenizer.bos_token_id)\r\n print(\"EOS Token\", tokenizer.eos_token, tokenizer.eos_token_id)\r\n\r\n if training_args.local_rank == 0:\r\n print(\"Load tokenizer from {} over.\".format(model_args.model_name_or_path))\r\n\r\n model = transformers.AutoModelForCausalLM.from_pretrained(\r\n 
model_args.model_name_or_path,\r\n torch_dtype=torch.bfloat16,\r\n trust_remote_code=True,\r\n attn_implementation=\"flash_attention_2\",\r\n )\r\n\r\n if training_args.local_rank == 0:\r\n print(\"Load model from {} over.\".format(model_args.model_name_or_path))\r\n\r\n\r\n raw_train_datasets = load_dataset(\r\n 'json',\r\n data_files=data_args.data_path,\r\n split=\"train\",\r\n cache_dir=training_args.cache_dir\r\n )\r\n \r\n train_dataset = raw_train_datasets.map(\r\n train_tokenize_function,\r\n batched=True,\r\n batch_size=3000,\r\n num_proc=32,\r\n remove_columns=raw_train_datasets.column_names,\r\n load_from_cache_file=True, # not args.overwrite_cache\r\n desc=\"Running Encoding\",\r\n fn_kwargs={ \"tokenizer\": tokenizer }\r\n )\r\n\r\n \r\n if training_args.local_rank == 0:\r\n print(\"Training dataset samples:\", len(train_dataset))\r\n for index in random.sample(range(len(train_dataset)), 3):\r\n print(f\"Sample {index} of the training set: {train_dataset[index]['input_ids']}, {train_dataset[index]['labels']}.\")\r\n print(f\"Sample {index} of the training set: {tokenizer.decode(list(train_dataset[index]['input_ids']))}.\")\r\n\r\n data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer)\r\n data_module = dict(train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator)\r\n\r\n trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module)\r\n\r\n trainer.train()\r\n trainer.save_state()\r\n safe_save_model_for_hf_trainer(trainer=trainer, output_dir=training_args.output_dir)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n train()\r\n`\r\n",
"Hi @NickWithBotronics if you set `trust_remote_code=True`, then the code from the hub is used (in case of microsoft/phi-2 that's defined [here](https://huggingface.co./microsoft/phi-2/blob/main/modeling_phi.py)), rather than [modeling_phi.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/phi/modeling_phi.py) defined natively in the Transformers library.\r\n\r\nHence it's recommended to convert the weights from the `microsoft/phi-2` repo to a native one, which will work with Flash Attention 2. One can leverage the [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/phi/convert_phi_weights_to_hf.py) for that.\r\n\r\n@ArthurZucker should we host the converted phi-2 weights as part of the Microsoft organization? Cause currently one will get a lot of mismatched keys when doing the following:\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n 'microsoft/phi-2',\r\n use_flash_attention_2=True,\r\n torch_dtype=torch.bfloat16,\r\n)\r\n```\r\ndue to the model in Transformers using a single matrix for queries, keys and values whereas the code on the hub uses separate matrices.",
"Thank you <3 !!!! that fixed that error(using the new modeling.py and converted hf format), now onto a new error that's due to my script I think?. :(\r\n'\r\nC:\\Python311\\Lib\\site-packages\\torch\\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()\r\n return self.fget.__get__(instance, owner)()\r\nYou are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\PC\\Documents\\Code-Trainer\\FineTune\\FINETUNERphiFP16.py\", line 192, in <module>\r\n train()\r\n File \"C:\\Users\\PC\\Documents\\Code-Trainer\\FineTune\\FINETUNERphiFP16.py\", line 145, in train\r\n model = transformers.AutoModelForCausalLM.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Python311\\Lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 561, in from_pretrained\r\n return model_class.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Python311\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 3503, in from_pretrained\r\n model = cls(config, *model_args, **model_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\PC\\.cache\\huggingface\\modules\\transformers_modules\\MiniPhi\\modeling_phi.py\", line 967, in __init__\r\n self.model = PhiModel(config)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\PC\\.cache\\huggingface\\modules\\transformers_modules\\MiniPhi\\modeling_phi.py\", line 821, in __init__\r\n [PhiDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]\r\n File 
\"C:\\Users\\PC\\.cache\\huggingface\\modules\\transformers_modules\\MiniPhi\\modeling_phi.py\", line 821, in <listcomp>\r\n [PhiDecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\PC\\.cache\\huggingface\\modules\\transformers_modules\\MiniPhi\\modeling_phi.py\", line 629, in __init__\r\n self.self_attn = PHI_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx=layer_idx)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\PC\\.cache\\huggingface\\modules\\transformers_modules\\MiniPhi\\modeling_phi.py\", line 412, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"C:\\Users\\PC\\.cache\\huggingface\\modules\\transformers_modules\\MiniPhi\\modeling_phi.py\", line 245, in __init__\r\n self.attention_dropout = config.attention_dropout\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Python311\\Lib\\site-packages\\transformers\\configuration_utils.py\", line 265, in __getattribute__\r\n return super().__getattribute__(key)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: 'PhiConfig' object has no attribute 'attention_dropout'\r\n\r\nC:\\Users\\PC\\Documents\\Code-Trainer\\FineTune>\r\n\"\r\n\r\n\r\nEdit: fixed it by downloading the latest: Generation_config.json, Config.json, Configuration_phi.py, and Modeling_phi.py",
"while I got it working, the training loss was very wack. It started at 6 and went to 2 (after 3 epochs) but when I used the old config with out flash attention it was .6 to ~.29(also 3 epochs) same dataset same set up, same model. Just different config files and flash attention. I saw someone else experience the same thing on twitter.",
"Can you open a seperate issue for this? With a reproducible snippet ",
"Gotcha, I’ll move to this ticket #28488 "
] | 1,704 | 1,705 | 1,705 | NONE | null | ```
import torch
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
'microsoft/phi-2',
use_flash_attention_2=True,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
)
```
Throws:
```
ValueError: PhiForCausalLM does not support Flash Attention 2.0 yet. Please open an issue on GitHub to request support for this architecture: https://github.com/huggingface/transformers/issues/new
``` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28381/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28380 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28380/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28380/comments | https://api.github.com/repos/huggingface/transformers/issues/28380/events | https://github.com/huggingface/transformers/pull/28380 | 2,069,357,560 | PR_kwDOCUB6oc5jap25 | 28,380 | Fix building alibi tensor when num_heads is not a power of 2 | {
"login": "abuelnasr0",
"id": 64566340,
"node_id": "MDQ6VXNlcjY0NTY2MzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/64566340?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abuelnasr0",
"html_url": "https://github.com/abuelnasr0",
"followers_url": "https://api.github.com/users/abuelnasr0/followers",
"following_url": "https://api.github.com/users/abuelnasr0/following{/other_user}",
"gists_url": "https://api.github.com/users/abuelnasr0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abuelnasr0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abuelnasr0/subscriptions",
"organizations_url": "https://api.github.com/users/abuelnasr0/orgs",
"repos_url": "https://api.github.com/users/abuelnasr0/repos",
"events_url": "https://api.github.com/users/abuelnasr0/events{/privacy}",
"received_events_url": "https://api.github.com/users/abuelnasr0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # Fix building alibi tensor when `n_heads` is not a power of 2
When `n_heads` of the MPT model is not a power of 2 (e.g. 6), the function that builds the alibi tensor raises an error; you can check that by running the extra test case that I have added. This PR fixes that issue.
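For reference, the slope recipe the fix follows can be sketched in pure Python (illustrative only; the actual implementation in `modeling_mpt.py` operates on torch tensors). The first `2**floor(log2(n_heads))` heads use the geometric sequence for the closest power of two, and the remaining heads take every other slope from the sequence for the next power of two:

```python
import math

def alibi_slopes(n_heads):
    """ALiBi head slopes that also work when n_heads is not a power of 2."""
    closest_pow2 = 2 ** math.floor(math.log2(n_heads))
    base = 2 ** (-(2 ** -(math.log2(closest_pow2) - 3)))
    slopes = [base ** i for i in range(1, closest_pow2 + 1)]
    if closest_pow2 != n_heads:
        # Interleave extra slopes drawn from the sequence for the next power of two.
        extra_base = 2 ** (-(2 ** -(math.log2(2 * closest_pow2) - 3)))
        n_extra = n_heads - closest_pow2
        slopes += [extra_base ** i for i in range(1, 2 * n_extra, 2)]
    return slopes

print(alibi_slopes(6))  # [0.25, 0.0625, 0.015625, 0.00390625, 0.5, 0.125]
```

For `n_heads=6` this produces six slopes, which is exactly the case the new test covers; a power-of-2-only formula fails here.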
## Before submitting
- [x] Did you write any new necessary tests?
@ArthurZucker and @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28380/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28380/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28380",
"html_url": "https://github.com/huggingface/transformers/pull/28380",
"diff_url": "https://github.com/huggingface/transformers/pull/28380.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28380.patch",
"merged_at": 1704706781000
} |
https://api.github.com/repos/huggingface/transformers/issues/28379 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28379/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28379/comments | https://api.github.com/repos/huggingface/transformers/issues/28379/events | https://github.com/huggingface/transformers/pull/28379 | 2,069,268,419 | PR_kwDOCUB6oc5jaX4v | 28,379 | Convert SlimSAM checkpoints | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [] | 1,704 | 1,707 | null | CONTRIBUTOR | null | # What does this PR do?
This PR extends the conversion script of SAM (Segment Anything) to also support [SlimSAM](https://github.com/czg1225/SlimSAM/tree/master) checkpoints.
SlimSAM is a compressed (pruned) version of SAM that claims to outperform FastSAM and MobileSAM. Looks cool! Below are the SAM-vit-base (top) versus the SlimSAM-77-uniform (bottom) results.
<img width="1254" alt="Screenshot 2024-01-07 at 20 59 40" src="https://github.com/huggingface/transformers/assets/48327001/588bdec0-86b8-4669-8e27-88232a379243">
see https://github.com/czg1225/SlimSAM/issues/4
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28379/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28379/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28379",
"html_url": "https://github.com/huggingface/transformers/pull/28379",
"diff_url": "https://github.com/huggingface/transformers/pull/28379.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28379.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28378 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28378/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28378/comments | https://api.github.com/repos/huggingface/transformers/issues/28378/events | https://github.com/huggingface/transformers/pull/28378 | 2,069,263,197 | PR_kwDOCUB6oc5jaW3R | 28,378 | fix: sampling in flax keeps EOS | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Running the same example listed in #28377, we get the correct output with EOS token present before the PAD tokens:\r\n`[\"</s><s><s>My friends are cool but they eat too many carbs. I love carbs. The carb-averse among us have a love for dessert on this one. I've got some amazing desserts. I've been a junkie for 10 years. I can't go straight after dinner, I've been eating for 4. I love it. That's why I made the TV show.</s><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>\"]`"
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #28377
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. @sanchit-gandhi
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28378/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28378",
"html_url": "https://github.com/huggingface/transformers/pull/28378",
"diff_url": "https://github.com/huggingface/transformers/pull/28378.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28378.patch",
"merged_at": 1705342329000
} |
https://api.github.com/repos/huggingface/transformers/issues/28377 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28377/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28377/comments | https://api.github.com/repos/huggingface/transformers/issues/28377/events | https://github.com/huggingface/transformers/issues/28377 | 2,069,262,223 | I_kwDOCUB6oc57Vm-P | 28,377 | Flax generate sampling does not return EOS | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | ### System Info
When calling `flax_model.generate(do_sample=True)`, we can see that the EOS token has been replaced with the PAD token.
Configuration:
```
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.4 (tpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: Noe
- Using distributed or parallel set-up in script?: No
```
### Who can help?
@sanchit-gandhi @gante
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Use `FlaxBartForConditionalGeneration`
2. Change generate method to `do_sample=True, num_beams=1`
3. When decoding, display all special tokens
Here is adapted script:
```
from transformers import AutoTokenizer, FlaxBartForConditionalGeneration
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs."
inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="np")
# Generate Summary
summary_ids = model.generate(inputs["input_ids"], do_sample=True, num_beams=1).sequences
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=False, clean_up_tokenization_spaces=False))
```
The output does not contain EOS `</s>` at the end before PAD tokens `<pad>`:
`["</s><s><s>My friends are cool but they eat too many carbs. I love carbs. The carb-averse among us have a love for dessert on this one. I've got some amazing desserts. I've been a junkie for 10 years. I can't go straight after dinner, I've been eating for 4. I love it. That's why I made the TV show.<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>"]`
The same issue happens with any decoder model.
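The underlying mechanics can be illustrated without Flax (names here are hypothetical; the real loop in `FlaxGenerationMixin._sample` works on batched JAX arrays). If the per-sequence finished flag is updated *before* the sampled token is written, the EOS token itself gets masked to PAD; masking with the previous flag and only then updating it preserves the EOS:

```python
def emit(token, finished, eos_id=2, pad_id=1):
    """One decoding step: returns (written_token, new_finished_flag)."""
    # Buggy order (what this issue shows):
    #   finished = finished or token == eos_id   # flag flips on EOS...
    #   written = pad_id if finished else token  # ...so EOS is written as PAD
    # Fixed order: mask with the *previous* finished state first.
    written = pad_id if finished else token
    finished = finished or written == eos_id
    return written, finished

out, done = [], False
for tok in [5, 9, 2, 7, 7]:  # sampled ids; 2 is EOS, 1 is PAD
    written, done = emit(tok, done)
    out.append(written)
print(out)  # [5, 9, 2, 1, 1]: EOS survives, then padding
```

With the buggy ordering the same token stream would come out as `[5, 9, 1, 1, 1]`, matching the missing `</s>` in the decoded output above.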
### Expected behavior
EOS token should appear before the sequence of PAD tokens. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28377/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28376 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28376/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28376/comments | https://api.github.com/repos/huggingface/transformers/issues/28376/events | https://github.com/huggingface/transformers/issues/28376 | 2,069,187,992 | I_kwDOCUB6oc57VU2Y | 28,376 | Detr Loss: "IndexError: tensors used as indices must be long, int, byte or bool tensors" | {
"login": "kimborenn",
"id": 42417378,
"node_id": "MDQ6VXNlcjQyNDE3Mzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/42417378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kimborenn",
"html_url": "https://github.com/kimborenn",
"followers_url": "https://api.github.com/users/kimborenn/followers",
"following_url": "https://api.github.com/users/kimborenn/following{/other_user}",
"gists_url": "https://api.github.com/users/kimborenn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kimborenn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kimborenn/subscriptions",
"organizations_url": "https://api.github.com/users/kimborenn/orgs",
"repos_url": "https://api.github.com/users/kimborenn/repos",
"events_url": "https://api.github.com/users/kimborenn/events{/privacy}",
"received_events_url": "https://api.github.com/users/kimborenn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests, since you are using custom code, unless you isolate that this is coming from transformers / the trainer and not the way you process your data, I would recommend you to ask your question on the [forum](https://discuss.huggingface.co/) instead. I'm sure the community will be of help!\r\n\r\nThanks!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.5 (cpu)
- Jax version: 0.4.21
- JaxLib version: 0.4.21
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the provided code:
```
import torchvision
import os
import json
class CocoDetection(torchvision.datasets.CocoDetection):
def __init__(self, img_folder, feature_extractor, train=True):
ann_file = os.path.join(img_folder, "../../annotations/instances_train2017.json" if train else "../../annotations/instances_val2017.json")
super(CocoDetection, self).__init__(img_folder, ann_file)
self.feature_extractor = feature_extractor
self.id_mapping = {}
self.categories = json.loads(Path(ann_file).read_text())['categories']
# Update 'id' values and populate the id_mapping dictionary
for i, label in enumerate(self.categories):
original_id = label['id']
new_id = i + 1
label['id'] = new_id
self.id_mapping[original_id] = new_id
def __getitem__(self, idx):
img, target = super(CocoDetection, self).__getitem__(idx)
image_id = self.ids[idx]
target = {'image_id': image_id, 'annotations': target}
encoding = self.feature_extractor(images=img, annotations=target, return_tensors="pt")
item = {}
item["pixel_values"] = encoding["pixel_values"].squeeze() # remove batch dimension
item["pixel_mask"] = encoding["pixel_mask"]
class_labels_tensor = encoding["labels"][0]["class_labels"]
encoding["labels"][0]["class_labels"] = torch.tensor([self.id_mapping[original_id.item()] for original_id in class_labels_tensor])
item["labels"] = encoding["labels"][0] # remove batch dimension
return item['pixel_values'], item['labels']
from transformers import DetrFeatureExtractor
img_folder = "/home/datasets/coco/images"
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
train_dataset = CocoDetection(img_folder=f'{img_folder}/train2017', feature_extractor=feature_extractor)
val_dataset = CocoDetection(img_folder=f'{img_folder}/val2017', feature_extractor=feature_extractor, train=False)
def collate_fn(batch):
pixel_values = [item[0] for item in batch]
encoding = feature_extractor.pad(pixel_values, return_tensors="pt")
labels = [item[1] for item in batch]
batch = {}
batch['pixel_values'] = encoding['pixel_values']
batch['pixel_mask'] = encoding['pixel_mask']
batch['labels'] = labels
return batch
from transformers import DetrImageProcessor, TrainingArguments, Trainer, AutoModelForObjectDetection
from pathlib import Path
import torch
categories = [i['name'] for i in train_dataset.categories]
id2label = {index: x for index, x in enumerate(categories, start=0)}
label2id = {v: k for k, v in id2label.items()}
checkpoint = "facebook/detr-resnet-50"
image_processor = DetrImageProcessor.from_pretrained(checkpoint)
model = AutoModelForObjectDetection.from_pretrained(
checkpoint,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
)
training_args = TrainingArguments(
output_dir='.',
remove_unused_columns=False,
save_strategy="epoch",
learning_rate=5e-5,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
num_train_epochs=3,
logging_steps=200,
save_total_limit=3,
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=collate_fn,
train_dataset=train_dataset,
eval_dataset=val_dataset
)
trainer.train()
```
### Expected behavior
The training process should proceed without errors, and the model should be trained successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28376/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28376/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28375 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28375/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28375/comments | https://api.github.com/repos/huggingface/transformers/issues/28375/events | https://github.com/huggingface/transformers/issues/28375 | 2,069,157,788 | I_kwDOCUB6oc57VNec | 28,375 | NameError: name 'torch' is not defined | {
"login": "KaifAhmad1",
"id": 98801504,
"node_id": "U_kgDOBeOXYA",
"avatar_url": "https://avatars.githubusercontent.com/u/98801504?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaifAhmad1",
"html_url": "https://github.com/KaifAhmad1",
"followers_url": "https://api.github.com/users/KaifAhmad1/followers",
"following_url": "https://api.github.com/users/KaifAhmad1/following{/other_user}",
"gists_url": "https://api.github.com/users/KaifAhmad1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaifAhmad1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaifAhmad1/subscriptions",
"organizations_url": "https://api.github.com/users/KaifAhmad1/orgs",
"repos_url": "https://api.github.com/users/KaifAhmad1/repos",
"events_url": "https://api.github.com/users/KaifAhmad1/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaifAhmad1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The error is misleading, it actually means that bitsandbytes is not available. That's because you're using Windows, and the library only works on Linux (see [here](https://github.com/TimDettmers/bitsandbytes#tldr)).\r\n\r\nNote that you leaked your token twice, you should rotate it.",
"Would you like to open a PR for a fix? 🤗 ",
"Yes, I'd be happy to submit a fix.",
"Actually I thought the issue was in `src/transformers/integrations/bitsandbytes.py`:\r\n```python\r\nif is_bitsandbytes_available(): \r\n import bitsandbytes as bnb \r\n import torch \r\n import torch.nn as nn\r\n```\r\nbecause it only imports torch if bitsandbytes is available. When torch is used later, it raises \"NameError: name 'torch' is not defined\".\r\n\r\nBut when PyTorch is properly installed, the function `transformers.utils.import_utils._get_module()` is called before the code above (I missed this when I answered) and it raises a RuntimeError that clearly indicates that the issue comes from bitsandbytes. So I don't think there's anything to fix.",
"Alright, closing as completed then! ",
"@KaifAhmad1 don't forget to [install PyTorch](https://pytorch.org/get-started/locally/), and you can follow the instructions [here](https://github.com/TimDettmers/bitsandbytes/issues/807) to install bitsandbytes on Windows."
] | 1,704 | 1,704 | 1,704 | NONE | null | ### System Info
#### Device Info:
Device: `DESKTOP-0EE5HES`
Processor: `11th Gen Intel Core i5-1135G7 @ 2.40GHz`
Windows Edition: `Windows 11 Home Single Language`
System Type: `64-bit operating system, x64-based processor`
Version: `23H2 (Build 22631.2861)`
#### Package Info:
transformers; `4.36.2`
Python: `3.10.10`
Huggingface_hub: `0.20.1`
Tensorflow: `2.15.0`
accelerate: `0.25.0`
torch: `2.1.2`
### Who can help?
@ArthurZucker
@younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
#### Packages:
```
!pip install -qU \
transformers \
accelerate \
einops \
langchain \
xformers \
bitsandbytes \
faiss-gpu \
sentence_transformers \
datasets
```
#### Loading the Llama-2-7B-Chat in notebook
```
# Import necessary libraries
import torch
from torch import cuda, bfloat16
import transformers
```
```
# Specify the pre-trained model ID
model_id = 'meta-llama/Llama-2-7b-chat-hf'
```
```
# Determine the device to be used (GPU if available, otherwise CPU)
device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'
```
```
# Configure quantization settings using the BitsAndBytesConfig
bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=False,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=bfloat16
)
```
```
# Initialize HF (Hugging Face) items with an access token
hf_auth = 'HF Token Here'
```
```
# Load model configuration using AutoConfig
model_config = transformers.AutoConfig.from_pretrained(
    model_id,
    use_auth_token=hf_auth
)
```
```
# Load the pre-trained model for causal language modeling using AutoModelForCausalLM
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    config=model_config,
    quantization_config=bnb_config,
    device_map='auto',
    use_auth_token=hf_auth
)
```
##### Here is the Output of this code snippet:
```
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
warnings.warn(
Loading checkpoint shards: 0%
0/2 [00:00<?, ?it/s]
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
[<ipython-input-8-55cdaac704cb>](https://localhost:8080/#) in <cell line: 2>()
1 # Load the pre-trained model for causal language modeling using AutoModelForCausalLM
----> 2 model = transformers.AutoModelForCausalLM.from_pretrained(
3 model_id,
4 trust_remote_code=True,
5 config=model_config,
4 frames
[/usr/local/lib/python3.10/dist-packages/transformers/integrations/bitsandbytes.py](https://localhost:8080/#) in set_module_quantized_tensor_to_device(module, tensor_name, device, value, fp16_statistics)
56 old_value = getattr(module, tensor_name)
57
---> 58 if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None:
59 raise ValueError(f"{tensor_name} is on the meta device, we need a `value` to put in on {device}.")
60
NameError: name 'torch' is not defined
```
### Expected behavior
```
from torch import cuda, bfloat16
import transformers
model_id = 'meta-llama/Llama-2-7b-chat-hf'
device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'
# set quantization configuration to load large model with less GPU memory
# this requires the `bitsandbytes` library
bnb_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=bfloat16
)
# begin initializing HF items, you need an access token
hf_auth = 'HF Token Here'
model_config = transformers.AutoConfig.from_pretrained(
    model_id,
    use_auth_token=hf_auth
)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    config=model_config,
    quantization_config=bnb_config,
    device_map='auto',
    use_auth_token=hf_auth
)
# enable evaluation mode to allow model inference
model.eval()
print(f"Model loaded on {device}")
```
```
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py:1067: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
warnings.warn(
config.json: 0%| | 0.00/614 [00:00<?, ?B/s]
/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py:472: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
warnings.warn(
model.safetensors.index.json: 0%| | 0.00/26.8k [00:00<?, ?B/s]
Downloading shards: 0%| | 0/2 [00:00<?, ?it/s]
model-00001-of-00002.safetensors: 0%| | 0.00/9.98G [00:00<?, ?B/s]
model-00002-of-00002.safetensors: 0%| | 0.00/3.50G [00:00<?, ?B/s]
Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]
generation_config.json: 0%| | 0.00/188 [00:00<?, ?B/s]
Model loaded on cuda:0
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28375/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28374 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28374/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28374/comments | https://api.github.com/repos/huggingface/transformers/issues/28374/events | https://github.com/huggingface/transformers/issues/28374 | 2,068,992,694 | I_kwDOCUB6oc57UlK2 | 28,374 | The model has parameters that do not require training, causing training to be interrupted. | {
"login": "xmy0916",
"id": 43675899,
"node_id": "MDQ6VXNlcjQzNjc1ODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/43675899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xmy0916",
"html_url": "https://github.com/xmy0916",
"followers_url": "https://api.github.com/users/xmy0916/followers",
"following_url": "https://api.github.com/users/xmy0916/following{/other_user}",
"gists_url": "https://api.github.com/users/xmy0916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xmy0916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xmy0916/subscriptions",
"organizations_url": "https://api.github.com/users/xmy0916/orgs",
"repos_url": "https://api.github.com/users/xmy0916/repos",
"events_url": "https://api.github.com/users/xmy0916/events{/privacy}",
"received_events_url": "https://api.github.com/users/xmy0916/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! No idea as we don't have access to the full custom code, but would recommend you to use the latest version of transformers, and share a full snippet + traceback otherwise we can't help you ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | NONE | null | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.28
- Python version: 3.10.13
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
def build_vision_projector(config, delay_load=False, **kwargs):
    return nn.ModuleDict({
        "image": nn.Linear(config.mm_hidden_size, config.hidden_size),
        "video": nn.Linear(config.mm_hidden_size, config.hidden_size)
    })
```
I have a projector layer that routes each input tensor through the branch selected by the key `image` or `video`. The code above causes training to get stuck.
When I modify the code as follows:
```python
def build_vision_projector(config, delay_load=False, **kwargs):
    proj = nn.Linear(config.mm_hidden_size, config.hidden_size)
    return nn.ModuleDict({
        "image": proj,
        "video": proj
    })
```
then all inputs share the same projector layer, and training proceeds normally.
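Not part of the original report — a minimal sketch of another common workaround, assuming the hang comes from unused parameters stalling DDP's gradient all-reduce: keep separate projectors, but add a zero-weighted term so every parameter participates in each backward pass (the `forward_with_full_participation` helper is hypothetical). `Trainer` users can alternatively set `ddp_find_unused_parameters=True` in `TrainingArguments`.

```python
import torch
import torch.nn as nn

# Two separate branches, as in the first build_vision_projector variant.
proj = nn.ModuleDict({
    "image": nn.Linear(4, 8),
    "video": nn.Linear(4, 8),
})

def forward_with_full_participation(x, modality):
    out = proj[modality](x)
    # Zero-weighted term that touches every parameter, so the skipped
    # branch still receives a (zero) gradient during backward.
    dummy = sum(p.sum() for p in proj.parameters()) * 0.0
    return out + dummy

# An "image-only" batch: the video branch is never routed to.
x = torch.randn(2, 4)
loss = forward_with_full_participation(x, "image").sum()
loss.backward()
```

With this trick, the gradient all-reduce sees a gradient for every parameter even on image-only or video-only batches, so distributed ranks stay in sync.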
### Expected behavior
The training error is:
![image](https://github.com/huggingface/transformers/assets/43675899/0c2163b2-d9f4-4702-ae19-e4b1eef7830f)
Some batches of my training data mix videos and images, some contain only videos, and some contain only images. As a result, in some batches either the video projector or the image projector is never forwarded, and that appears to be why training gets stuck.
Is there any solution? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28374/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28373 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28373/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28373/comments | https://api.github.com/repos/huggingface/transformers/issues/28373/events | https://github.com/huggingface/transformers/pull/28373 | 2,068,890,592 | PR_kwDOCUB6oc5jZM2S | 28,373 | Change progress logging to once across all nodes | {
"login": "siddartha-RE",
"id": 55106295,
"node_id": "MDQ6VXNlcjU1MTA2Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/55106295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/siddartha-RE",
"html_url": "https://github.com/siddartha-RE",
"followers_url": "https://api.github.com/users/siddartha-RE/followers",
"following_url": "https://api.github.com/users/siddartha-RE/following{/other_user}",
"gists_url": "https://api.github.com/users/siddartha-RE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/siddartha-RE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddartha-RE/subscriptions",
"organizations_url": "https://api.github.com/users/siddartha-RE/orgs",
"repos_url": "https://api.github.com/users/siddartha-RE/repos",
"events_url": "https://api.github.com/users/siddartha-RE/events{/privacy}",
"received_events_url": "https://api.github.com/users/siddartha-RE/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Let's just include the changes to `trainer_callback.py` please, and #28364 we'll leave for handling the checkpoint issue. Thanks!\r\n\r\nDone. Thank you for reviewing.",
"@muellerzr What do you think? Will close if you think this does not seem worth changing.."
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Change progress logging to be once across all nodes rather than once per node.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Library:
- trainer: @muellerzr and @pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28373/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28373",
"html_url": "https://github.com/huggingface/transformers/pull/28373",
"diff_url": "https://github.com/huggingface/transformers/pull/28373.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28373.patch",
"merged_at": 1705089682000
} |
https://api.github.com/repos/huggingface/transformers/issues/28372 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28372/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28372/comments | https://api.github.com/repos/huggingface/transformers/issues/28372/events | https://github.com/huggingface/transformers/issues/28372 | 2,068,888,813 | I_kwDOCUB6oc57ULzt | 28,372 | Support setting multiple adapters | {
"login": "pbarker",
"id": 5533189,
"node_id": "MDQ6VXNlcjU1MzMxODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5533189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pbarker",
"html_url": "https://github.com/pbarker",
"followers_url": "https://api.github.com/users/pbarker/followers",
"following_url": "https://api.github.com/users/pbarker/following{/other_user}",
"gists_url": "https://api.github.com/users/pbarker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pbarker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pbarker/subscriptions",
"organizations_url": "https://api.github.com/users/pbarker/orgs",
"repos_url": "https://api.github.com/users/pbarker/repos",
"events_url": "https://api.github.com/users/pbarker/events{/privacy}",
"received_events_url": "https://api.github.com/users/pbarker/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"cc @younesbelkada "
] | 1,704 | 1,704 | null | NONE | null | ### Feature request
The underlying peft library supports setting multiple adapters:
```python
model.set_adapters(["adapter_a", "adapter_b"])
```
It would be nice if the pipeline supported the same, from looking at https://github.com/huggingface/transformers/pull/25077 it appears it only supports a single adapter
### Motivation
This is useful functionality in the peft library
### Your contribution
Happy to make the changes here! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28372/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28371 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28371/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28371/comments | https://api.github.com/repos/huggingface/transformers/issues/28371/events | https://github.com/huggingface/transformers/issues/28371 | 2,068,843,148 | I_kwDOCUB6oc57UAqM | 28,371 | Data collator does not pass inputs to tokenizer | {
"login": "EricLBuehler",
"id": 65165915,
"node_id": "MDQ6VXNlcjY1MTY1OTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/65165915?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EricLBuehler",
"html_url": "https://github.com/EricLBuehler",
"followers_url": "https://api.github.com/users/EricLBuehler/followers",
"following_url": "https://api.github.com/users/EricLBuehler/following{/other_user}",
"gists_url": "https://api.github.com/users/EricLBuehler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EricLBuehler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EricLBuehler/subscriptions",
"organizations_url": "https://api.github.com/users/EricLBuehler/orgs",
"repos_url": "https://api.github.com/users/EricLBuehler/repos",
"events_url": "https://api.github.com/users/EricLBuehler/events{/privacy}",
"received_events_url": "https://api.github.com/users/EricLBuehler/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"Ok, thanks for the recommendation!"
] | 1,704 | 1,704 | 1,704 | NONE | null | Hello all,
While attempting to train a model using `Trainer` and `DataCollatorForSeq2Seq`, I get the following error:
`ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided []`.
I have written my own model class (this is necessary for the design), which wraps the base model (a `PreTrainedModel`) and implements only the `forward`, `generate`, and `prepare_inputs_for_generation` methods, plus some utility methods.
Am I missing a method to let `transformers` know what inputs I expect? I would greatly appreciate any tips! | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28371/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28370 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28370/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28370/comments | https://api.github.com/repos/huggingface/transformers/issues/28370/events | https://github.com/huggingface/transformers/issues/28370 | 2,068,828,967 | I_kwDOCUB6oc57T9Mn | 28,370 | Missing `vocab_file` Attribute When Using Custom SentencePiece Models | {
"login": "teleprint-me",
"id": 77757836,
"node_id": "MDQ6VXNlcjc3NzU3ODM2",
"avatar_url": "https://avatars.githubusercontent.com/u/77757836?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/teleprint-me",
"html_url": "https://github.com/teleprint-me",
"followers_url": "https://api.github.com/users/teleprint-me/followers",
"following_url": "https://api.github.com/users/teleprint-me/following{/other_user}",
"gists_url": "https://api.github.com/users/teleprint-me/gists{/gist_id}",
"starred_url": "https://api.github.com/users/teleprint-me/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/teleprint-me/subscriptions",
"organizations_url": "https://api.github.com/users/teleprint-me/orgs",
"repos_url": "https://api.github.com/users/teleprint-me/repos",
"events_url": "https://api.github.com/users/teleprint-me/events{/privacy}",
"received_events_url": "https://api.github.com/users/teleprint-me/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 3081136536,
"node_id": "MDU6TGFiZWwzMDgxMTM2NTM2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Difficult%20Issue",
"name": "Good Difficult Issue",
"color": "684CC7",
"default": false,
"description": ""
}
] | open | false | null | [] | [
"Thanks for opening this! \r\nThe conversion scripts should not always rely on sentencepiece as some models do not use it. ( Bert for example)\r\nHowever it is true that supporting a kind of direct conversion could / would help. \r\n\r\nWould you like to open a PR to help using the converter class for converting spm models to fast tokenizers ? ",
"Sure, I'll start looking into this today. I appreciate your suggestion on the PR, and I'll consider how to integrate these changes without enforcing SentencePiece dependency where not necessary. Looking forward to any further feedback or suggestions.",
"@ArthurZucker I think it's important to note that this is not a feature request. It's a bug. `SentencePieceProcessor` does not have a `vocab_file` attribute. Any source code that relies on this will fail as a result. I'm only familiar with the newer version(s) of `sentencepiece`, so I'm not sure if this was the case in previous versions.\r\n\r\n```python\r\n>>> fast_tokenizer = llama_converter.converted()\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n fast_tokenizer = llama_converter.converted()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/mnt/valerie/transformers/src/transformers/convert_slow_tokenizer.py\", line 530, in converted\r\n tokenizer = self.tokenizer(self.proto)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/mnt/valerie/transformers/src/transformers/convert_slow_tokenizer.py\", line 1181, in tokenizer\r\n _, merges = SentencePieceExtractor(self.original_tokenizer.vocab_file).extract(vocab_scores)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: 'SentencePieceProcessor' object has no attribute 'vocab_file'\r\n```",
"Got it, it's more of a bug on our side I think. We need to update the converter. ",
"Would you like to open a PR? I am a bit low on bandwidth for this currently ",
"I unfortunately do not have the time or resources to contribute at the moment. I'm spread thin as it is.\r\n\r\nI did look into it and it's unfortunately not a simple fix. \r\n\r\nThere's probably a workaround or a hack that could be implemented, but the entire design is assuming a certain pattern that doesn't exist.\r\n\r\nI was looking into this when I had time to work on the convert.py for llama.cpp and was attempting to figure out how to support the vocab for huggingface fast tokenizers.\r\n\r\nAfter studying the tokenizers source code, I saw that the implementation is based on the factory pattern which seems to be extended by a bridge or facade pattern. This isn't normally an issue, but there are multiple steps in the pipeline that coerce the user to remotely access the vocabulary.\r\n\r\nThat's when I dug a bit deeper and learned about the conversion and tokenizer scripts which don't seem to be documented very well or at all.\r\n\r\nI'm working on other personal projects at the moment which have a higher priority for me. I'm more than happy to bounce ideas off one another. I'm not sure when I'll be able to dedicate more time into redesigning/fixing the implementation though.\r\n\r\nThese are just some of my observations and notes I made throughout the process. "
] | 1,704 | 1,707 | null | NONE | null | ### System Info
- `transformers` version: 4.36.2
- Platform: Linux-6.1.69-1-lts-x86_64-with-glibc2.38
- Python version: 3.11.6
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.2+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: NA
- Using distributed or parallel set-up in script?: NA
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Steps to Reproduce
1. Import the SentencePiece library:
```python
import sentencepiece as spm
```
2. Verify the SentencePiece version:
```python
spm.__version__ # Ensure it is 0.1.98 or later
```
3. Attempt to create a `LlamaConverter` instance with a loaded SentencePiece tokenizer:
```python
from transformers.convert_slow_tokenizer import LlamaConverter
# Define the path to your SentencePiece model
spm_model_path = "path/to/your/tokenizer.model"
# Load the SentencePiece tokenizer
spm_tokenizer = spm.SentencePieceProcessor()
spm_tokenizer.Load(spm_model_path)
# Create a LlamaConverter instance with the loaded tokenizer
llama_converter = LlamaConverter(spm_tokenizer)
```
4. Observe the error raised due to the missing `vocab_file` attribute.
### Expected Behavior
To address this issue and make it easier for users to work with custom SentencePiece models, the following modifications are proposed:
1. **Modify `Converter` Class**:
- Update the `Converter` class to accept a file path to the SentencePiece model when initializing an instance of the class.
- Instantiate the SentencePiece tokenizer and load the model from the provided file path within the `Converter` class.
```python
import sentencepiece as spm
from tokenizers import Tokenizer

class Converter:
    def __init__(self, file_path_to_tokenizer):
        self.file_path = file_path_to_tokenizer
        self.original_tokenizer = spm.SentencePieceProcessor()
        self.original_tokenizer.Load(self.file_path)

    def converted(self) -> Tokenizer:
        raise NotImplementedError()
```
2. **Update `SpmConverter` Class**:
- Update the `SpmConverter` class to use the `file_path` attribute instead of the non-existent `model_file` attribute when loading the SentencePiece model.
- Ensure compatibility with the modifications made in the `Converter` class.
```python
from transformers.convert_slow_tokenizer import Converter
from transformers.utils import requires_backends
# Protobuf schema for SentencePiece models bundled with Transformers
from transformers.utils import sentencepiece_model_pb2 as model_pb2

class SpmConverter(Converter):
    def __init__(self, file_path_to_tokenizer):
        requires_backends(self, "protobuf")
        super().__init__(file_path_to_tokenizer)
        # Load and parse the SentencePiece model using the file path
        m = model_pb2.ModelProto()
        with open(self.file_path, "rb") as f:
            m.ParseFromString(f.read())
        self.proto = m
```
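As a dependency-free sketch of the contract the proposed `SpmConverter` relies on — the serialized model is read straight from a caller-supplied path, so no `vocab_file` attribute is needed on the processor (the helper name and validation are illustrative, not from Transformers):

```python
from pathlib import Path

def read_serialized_spm_model(file_path: str) -> bytes:
    """Return the raw bytes of a serialized SentencePiece model.

    Stand-in for the SpmConverter step that parses these bytes into a
    protobuf ModelProto; the path comes from the caller, not from an
    attribute on SentencePieceProcessor.
    """
    data = Path(file_path).read_bytes()
    if not data:
        raise ValueError(f"Empty SentencePiece model file: {file_path}")
    return data
```

If re-reading from disk is undesirable, newer `sentencepiece` releases also expose `SentencePieceProcessor.serialized_model_proto()`, which returns the same bytes from the already-loaded processor.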
### Expected Outcome
With these proposed modifications, users should be able to create `Converter` instances with their custom SentencePiece models by providing the file path, and the `SpmConverter` class should correctly load the model using the specified file path, resolving the issue of missing attributes and ensuring compatibility with Transformers. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28370/timeline | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28369 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28369/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28369/comments | https://api.github.com/repos/huggingface/transformers/issues/28369/events | https://github.com/huggingface/transformers/pull/28369 | 2,068,658,566 | PR_kwDOCUB6oc5jYebU | 28,369 | [AttentionMaskConverter] fix sdpa unmask unattended | {
"login": "zspo",
"id": 26846598,
"node_id": "MDQ6VXNlcjI2ODQ2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zspo",
"html_url": "https://github.com/zspo",
"followers_url": "https://api.github.com/users/zspo/followers",
"following_url": "https://api.github.com/users/zspo/following{/other_user}",
"gists_url": "https://api.github.com/users/zspo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zspo/subscriptions",
"organizations_url": "https://api.github.com/users/zspo/orgs",
"repos_url": "https://api.github.com/users/zspo/repos",
"events_url": "https://api.github.com/users/zspo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zspo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Thanks, can you update the PR name to \"[`AttentionMaskConverter`] fix sdpa unmask unattended\"\r\n\r\nHi, nailed it! Thanks",
"Thanks 🤗 "
] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
Keep tensor devices consistent
@ArthurZucker @amyeroberts | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28369/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28369",
"html_url": "https://github.com/huggingface/transformers/pull/28369",
"diff_url": "https://github.com/huggingface/transformers/pull/28369.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28369.patch",
"merged_at": 1704717225000
} |
https://api.github.com/repos/huggingface/transformers/issues/28368 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28368/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28368/comments | https://api.github.com/repos/huggingface/transformers/issues/28368/events | https://github.com/huggingface/transformers/issues/28368 | 2,068,557,686 | I_kwDOCUB6oc57S692 | 28,368 | [Flax] Migration from frozen to regular dicts with v0.7.1+ | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @patrickvonplaten @ArthurZucker @patil-suraj for Flax discussions",
"I'm also in favor of 2.) Also given we don't have that much use in Flax, I wonder whether we should do a bigger breaking change by changing the default of `_do_init` to `_do_init=False` - it's just the correct design for Flax",
"+1 on this! In doing so, we could also refactor the Flax module classes such that they perform all the necessary pre and post processing (e.g. as per https://github.com/huggingface/transformers/pull/22866), rather than how it currently is with a mix of the model and modules. In doing so, the modules can be used as standalone Flax modules",
"Re-opening as the PR was a \"quick fix\" to ensure the CI passed for new model additions - leaving this issue open until we perform the bigger refactor (option 2) and align ourselves with the Flax stateless design"
] | 1,704 | 1,707 | null | CONTRIBUTOR | null | ### Feature request
As of version 0.7.1, Flax defaults to returning **regular dictionaries** with the methods `.init` and `.apply`, not **frozen dictionaries** as was the case before: https://github.com/google/flax/discussions/3191
The `.init` method is called in the Transformers method `model.init_weights`, where we randomly initialised the model's parameters:
https://github.com/huggingface/transformers/blob/4ab5fb8941a38d172b3883c152c34ae2a0b83a68/src/transformers/models/gpt_neo/modeling_flax_gpt_neo.py#L370
Therefore, this Flax update is a breaking change for Transformers: previously, calling `model.init_weights` returned a frozen dict of params, whereas now it returns a regular dict. However, blindly reverting to using frozen dicts might cause issues for Flax users, since they will get regular dicts of params from Flax, but get frozen ones from Transformers.
This leaves us with two options:
1. Update the `model.init_weights` method to always return a frozen dict, even if `module.init` returns a standard dict. This mitigates the breaking change and reverts to the behaviour we had before
2. Follow the Flax behaviour and return regular dicts of params with v0.7.1+. This would keep Transformers in-line with the latest Flax philosophy, at the expense of a breaking change
A PR to implement 1 is in #28367: it is a single-line change for each of the Flax modelling files. To implement 2, we would need to check whether the `random_params` returned by the `module.init` method are frozen or not, and match the dictionary type on the returned outputs.
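A minimal, self-contained sketch of the type-matching check for option 2 (illustrative only; here `FrozenDict` is a stand-in for `flax.core.frozen_dict.FrozenDict`, so the example runs without Flax installed):

```python
# Illustrative only: FrozenDict is a minimal stand-in for flax's FrozenDict.
class FrozenDict(dict):
    def __setitem__(self, key, value):
        raise ValueError("FrozenDict is immutable.")


def match_params_type(random_params, params):
    # Return `params` with the same dict type (frozen or regular) as `random_params`.
    if isinstance(random_params, FrozenDict):
        return params if isinstance(params, FrozenDict) else FrozenDict(params)
    return dict(params) if isinstance(params, FrozenDict) else params
```

With the real Flax types, `freeze`/`unfreeze` from `flax.core.frozen_dict` would replace the constructor calls above.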
Note that the change in behaviour will only really affect users who are initialising parameters themselves (with `_do_init=False`). These are typically advanced users who are familiar with the Flax library, and want an easy way of dropping-in Transformers Flax modules into other Flax scripts. Therefore, I would be in favour of 2, in order to maintain equivalence between the Flax and Transformers libraries. For users who rely on automatic init (`_do_init=True`), there's unlikely to be any friction, since they tend not to access the model params anyway. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28368/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28368/timeline | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28367 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28367/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28367/comments | https://api.github.com/repos/huggingface/transformers/issues/28367/events | https://github.com/huggingface/transformers/pull/28367 | 2,068,538,196 | PR_kwDOCUB6oc5jYErk | 28,367 | [Flax] Freeze params when _do_init=True | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28367). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
As of version 0.7.1, Flax defaults to returning **regular dictionaries** with the methods `.init` and `.apply`, not frozen dictionaries as was the case before: https://github.com/google/flax/discussions/3191
This PR shows how we can update the Transformers modelling code to maintain the previous behaviour, where the `init_weights` method returns a frozen dict of params | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28367/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28367",
"html_url": "https://github.com/huggingface/transformers/pull/28367",
"diff_url": "https://github.com/huggingface/transformers/pull/28367.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28367.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28366 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28366/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28366/comments | https://api.github.com/repos/huggingface/transformers/issues/28366/events | https://github.com/huggingface/transformers/issues/28366 | 2,068,536,149 | I_kwDOCUB6oc57S1tV | 28,366 | apply_chat_template fails if the model's jinja template doesn't support a system prompt | {
"login": "unoriginalscreenname",
"id": 815932,
"node_id": "MDQ6VXNlcjgxNTkzMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/815932?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/unoriginalscreenname",
"html_url": "https://github.com/unoriginalscreenname",
"followers_url": "https://api.github.com/users/unoriginalscreenname/followers",
"following_url": "https://api.github.com/users/unoriginalscreenname/following{/other_user}",
"gists_url": "https://api.github.com/users/unoriginalscreenname/gists{/gist_id}",
"starred_url": "https://api.github.com/users/unoriginalscreenname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unoriginalscreenname/subscriptions",
"organizations_url": "https://api.github.com/users/unoriginalscreenname/orgs",
"repos_url": "https://api.github.com/users/unoriginalscreenname/repos",
"events_url": "https://api.github.com/users/unoriginalscreenname/events{/privacy}",
"received_events_url": "https://api.github.com/users/unoriginalscreenname/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Second this issue, just ran into this on Mistral as well.",
"cc @Rocketknight1 ",
"Hi @unoriginalscreenname and @lithafnium, this was discussed in #27922. I'll paste the relevant part of my reply here:\r\n\r\n> Unfortunately, this is an unavoidable consequence of how these models were trained. Some models were trained with system prompts as part of the training data, and other models were not. When a model was not trained with a system prompt, it will not have any tokens that it can use to represent a system prompt, and trying to insert a prompt will confuse the model and probably significantly reduce the output quality.\r\n\r\n> In the cases when a model was trained without a system prompt, the model's chat template can be configured to raise an error if a `system` message is included in the input, and this is indeed what happens with some models (e.g. LLaMA/Mistral/Mixtral). This is correct and intended behaviour, and there isn't really any way to \"fix\" it without retraining the models!\r\n\r\n> The only solution I can suggest is that there is usually a different fine-tune of most models that supports a system prompt. For example, instead of Mistral-instruct you can use [Zephyr-beta](https://huggingface.co./HuggingFaceH4/zephyr-7b-beta), and instead of Mixtral-instruct you can use [Dolphin](https://huggingface.co./cognitivecomputations/dolphin-2.7-mixtral-8x7b). Both of these models were trained with system prompts, and will understand them correctly (and apply them in their chat template).\r\n\r\nIt is possible to write a chat template that ignores system messages or merges them into the first user message like you suggested, but this is not what the authors of the LLaMA/Mistral/Mixtral prompts chose to do. Although it seems like a clean solution, in practice I think there is a significant risk that just blending the system message into the user message would create a confusing message that would reduce model performance.\r\n\r\nAlso, please note that **chat templates are set in individual model repos, not in the code of Transformers itself**. As a result, we at Hugging Face don't actually have the power to overrule model authors and force their models to accept system prompts if their chosen template doesn't support them! See our [blog post](https://huggingface.co./blog/chat-templates) about chat templates for more information.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | NONE | null | ### System Info
All the latest versions
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Some of these models reject the system prompt in their chat template. It's very inconsistent and there doesn't seem to be a way to tell if a model accepts a system prompt or not. You end up getting this error if you pass in a system prompt and format it using the chat template function:
jinja2.exceptions.TemplateError: Conversation roles must alternate user/assistant/user/assistant/
I'm really surprised I haven't seen anybody raise this. Am I missing something? How do you tell if a model takes a system prompt or not?
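One workaround I would expect to work, lacking a dedicated flag, is probing the template directly (a sketch, assuming only the public `apply_chat_template(messages, tokenize=False)` API; `tokenizer` is any loaded tokenizer):

```python
# Sketch: detect system-prompt support by rendering a dummy conversation
# and catching the template error raised by raise_exception(...).
def supports_system_prompt(tokenizer):
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi"},
    ]
    try:
        tokenizer.apply_chat_template(messages, tokenize=False)
        return True
    except Exception:  # e.g. jinja2.exceptions.TemplateError
        return False
```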
Here is an example from the PIVOT-10.7B-Mistral-V0.2 model:
"chat_template": "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '[INST] ' + message['content'] + ' [/INST]' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}",
### Expected behavior
I would expect that apply_chat_template would be able to read or parse either the template or config file and determine if a system prompt was expected. If it wasn't, then I would expect it would either ignore it or merge it with the first instruction in the chat. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28366/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28365 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28365/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28365/comments | https://api.github.com/repos/huggingface/transformers/issues/28365/events | https://github.com/huggingface/transformers/issues/28365 | 2,068,478,757 | I_kwDOCUB6oc57Snsl | 28,365 | Whisper Alignment Heads calculation for custom model | {
"login": "DavraYoung",
"id": 33338429,
"node_id": "MDQ6VXNlcjMzMzM4NDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/33338429?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DavraYoung",
"html_url": "https://github.com/DavraYoung",
"followers_url": "https://api.github.com/users/DavraYoung/followers",
"following_url": "https://api.github.com/users/DavraYoung/following{/other_user}",
"gists_url": "https://api.github.com/users/DavraYoung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DavraYoung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DavraYoung/subscriptions",
"organizations_url": "https://api.github.com/users/DavraYoung/orgs",
"repos_url": "https://api.github.com/users/DavraYoung/repos",
"events_url": "https://api.github.com/users/DavraYoung/events{/privacy}",
"received_events_url": "https://api.github.com/users/DavraYoung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@sanchit-gandhi hi, have you tried this with distilled model?",
"Hey 🤗 thanks for opening an issue! We try to keep the github issues for bugs/feature requests.\r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\n\r\nThanks!",
"Hey @DavraYoung! Indeed - as @ArthurZucker mentioned the forum is the best place to ask such questions. Linking some resources that may be of help for reference for your question to get you going:\r\n* Answer from Whisper author: https://github.com/openai/whisper/discussions/1388#discussioncomment-7568428\r\n* Thread on the HF Hub: https://huggingface.co./distil-whisper/distil-small.en/discussions/7"
] | 1,704 | 1,706 | 1,706 | NONE | null | ### Feature request
Token timestamps work great on the pretrained model, but once the model is finetuned, token timestamps are no longer correct.
I tried to dig deeper and found that the token timestamp calculation is done using cross-attention heads with a dynamic time warping calculation over the token sequence.
It would be good to have some discussion or script that allows selecting the best alignment heads for any Whisper model.
What was your approach for selecting those alignment heads?
### Motivation
Token timestamps, long form transcription, hallucination detection etc.
### Your contribution
I have tried calculating the best alignment heads using this approach:
0. Select 100 good enough sample audios.
1. Use some CTC model for per word timestamp calculation, calculate per word timestamps
2. Run whisper with token timestamps and get timestamps considering each attention head separately
3. Compare the timestamps from each attention head with those from the CTC model (for matching token transcriptions)
4. Select the attention head with best accuracy
5. Repeat from step 1: using the selected head, calculate all other heads that, in combination with the current heads, can improve the accuracy
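The loop in steps 1-5 can be sketched as a greedy search (all names are illustrative; `head_errors(selected)` is assumed to return, for each not-yet-selected candidate head, the mean timestamp error in ms against the CTC reference when that head is added to `selected`):

```python
def greedy_select_heads(head_errors, max_heads=6):
    # Greedily grow the set of alignment heads while accuracy keeps improving.
    selected, best_err = [], float("inf")
    for _ in range(max_heads):
        errs = head_errors(selected)  # {candidate_head: error_ms}
        if not errs:
            break
        head, err = min(errs.items(), key=lambda kv: kv[1])
        if err >= best_err:  # no candidate improves the current combination
            break
        selected.append(head)
        best_err = err
    return selected, best_err
```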
I was able to achieve an average 200 ms accuracy for token timestamps. Accuracy on the best 80% is 40 ms. The model fails only in some cases, with a 2000-4000 ms shift, which can be detected using some scripts. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28365/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28364 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28364/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28364/comments | https://api.github.com/repos/huggingface/transformers/issues/28364/events | https://github.com/huggingface/transformers/pull/28364 | 2,068,164,458 | PR_kwDOCUB6oc5jWzoc | 28,364 | Fix for checkpoint rename race condition | {
"login": "tblattner",
"id": 10550807,
"node_id": "MDQ6VXNlcjEwNTUwODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/10550807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tblattner",
"html_url": "https://github.com/tblattner",
"followers_url": "https://api.github.com/users/tblattner/followers",
"following_url": "https://api.github.com/users/tblattner/following{/other_user}",
"gists_url": "https://api.github.com/users/tblattner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tblattner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tblattner/subscriptions",
"organizations_url": "https://api.github.com/users/tblattner/orgs",
"repos_url": "https://api.github.com/users/tblattner/repos",
"events_url": "https://api.github.com/users/tblattner/events{/privacy}",
"received_events_url": "https://api.github.com/users/tblattner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"One thing to note, I attempted to reuse the existing with block: \r\n```with self.args.main_process_first(desc=\"Renaming model checkpoint folder to true location\", local=self.args.save_on_each_node):```\r\n\r\nand include the fsync. Unfortunately fsync did not flush buffers related to the staging directory, so it still failed on other processes. This raises some concerns as to the behavior of fsync on network attached storage using NFS.",
"Oops, missed this. I looked yesterday :) but I guess you posted after I looked. This is my version:\r\nhttps://github.com/huggingface/transformers/pull/28373\r\n\r\nI don't see why existence check is required if it is happening only once per node.",
"> Oops, missed this. I looked yesterday :) but I guess you poster after I looked. This is my version: #28373\r\n> \r\n> I don't see why existence check is required if it is happening only once per node.\r\n\r\nCould be a race if the output directory for the checkpoint is used sometime later in the code. If that is not the case, then shouldn't be an issue.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28364). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks @tblattner 🤗 ",
"- `if self.is_local_process_zero() if self.args.save_on_each_node else self.is_world_process_zero():`\r\n- `if self.args.should_save`\r\nMay I ask What is the difference between the two lines above?Can replace Line1 with Line2",
"> * `if self.is_local_process_zero() if self.args.save_on_each_node else self.is_world_process_zero():`\r\n> * `if self.args.should_save`\r\n> May I ask What is the difference between the two lines above?Can replace Line1 with Line2\r\n\r\nDefinition of should_save looks like they would be equivalent and the second one a little clearer. It will also allow the extra check below for should_save to be removed.",
"Looks like `fd = os.open(output_dir, os.O_RDONLY)` doesn't work on Windows:\r\n![image](https://github.com/huggingface/transformers/assets/28014244/5449c420-26e8-4252-b108-df7e52679424)\r\n",
"> Looks like `fd = os.open(output_dir, os.O_RDONLY)` doesn't work on Windows: ![image](https://private-user-images.githubusercontent.com/28014244/298395624-5449c420-26e8-4252-b108-df7e52679424.png?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3MDU4OTIzMTQsIm5iZiI6MTcwNTg5MjAxNCwicGF0aCI6Ii8yODAxNDI0NC8yOTgzOTU2MjQtNTQ0OWM0MjAtMjZlOC00MjUyLWIxMDgtZGY3ZTUyNjc5NDI0LnBuZz9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNDAxMjIlMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjQwMTIyVDAyNTMzNFomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPThmMGMwOTEwYjc0ZGNiMjE1YjAwMjU1ODU2NmE4MmQxM2ZjMWZkOGQ1ZDE4ZGI4NTY5YjY1NmExZWFmN2FlMmImWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0JmFjdG9yX2lkPTAma2V5X2lkPTAmcmVwb19pZD0wIn0.7q7GQD1FWee4DlY15fryneYUKq2mUEhepjvXFkLA5tw)\r\n\r\nThis is indeed an issue. Windows handles directories differently than Linux. I'm not an expert at Windows dev in python, so I don't have a good solution sorry!",
"I think this is critical, when your training fails at the first checkpoint. Maybe as a workaround we can add a condition for non-Windows systems?\r\n```python\r\nif platform.system() != 'Windows':\r\n fd = os.open(output_dir, os.O_RDONLY)\r\n os.fsync(fd)\r\n os.close(fd)\r\n```\r\n@muellerzr @siddartha-RE ",
"Yes, quick PR going in a moment.",
"@Vechtomov @tblattner https://github.com/huggingface/transformers/pull/28637 fixed it.\r\n\r\nUnsure how it affects multinode on windows but if a user has this situation and hits it then we can deal with it then as there's not really a clean solution for doing so in python :( "
] | 1,704 | 1,706 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
When running distributed training with deepspeed, I encountered a race condition due to os.rename not being atomic on network filesystems. This rework changes the logic for renaming to only run on the main processes, or a single main process, depending on the save_on_each_node flag. Also added is the use of fsync to try to flush buffers, hopefully ensuring the rename is completed. fsync may have no effect on some filesystems, so a better mechanism may be required to ensure that the rename completed.
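A best-effort sketch of the pattern (assuming a POSIX filesystem; on Windows a directory cannot be opened with `os.open`, and on NFS the fsync may still be a no-op):

```python
import os
import platform


def rename_and_sync(staging_dir, output_dir):
    # Rename the staging checkpoint directory to its final location, then try
    # to flush buffers so other processes/nodes see the rename sooner.
    os.rename(staging_dir, output_dir)
    if platform.system() != "Windows":  # directory fds are POSIX-only
        fd = os.open(output_dir, os.O_RDONLY)
        try:
            os.fsync(fd)
        finally:
            os.close(fd)
```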
Fixes #27925
## Before submitting
- [No] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [Yes] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [Discussed on Github issue] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [Yes] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [No] Did you write any new necessary tests?
## Who can review?
@muellerzr
@pacman100
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28364/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28364/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28364",
"html_url": "https://github.com/huggingface/transformers/pull/28364",
"diff_url": "https://github.com/huggingface/transformers/pull/28364.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28364.patch",
"merged_at": 1704902143000
} |
https://api.github.com/repos/huggingface/transformers/issues/28363 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28363/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28363/comments | https://api.github.com/repos/huggingface/transformers/issues/28363/events | https://github.com/huggingface/transformers/pull/28363 | 2,068,009,708 | PR_kwDOCUB6oc5jWRjx | 28,363 | [`DETR`] Update the processing to adapt masks & bboxes to reflect padding | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28363). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 1,704 | 1,707 | 1,707 | COLLABORATOR | null | # What does this PR do?
Fixes an issue with the processing of batches of images for DETR and DETR related models.
Previously, the annotations for the models - specifically the masks and bboxes - wouldn't be updated to account for the new image sizes if padding occurred. This PR pads the masks to match their corresponding images.
The following images show the processing behaviour for annotations when there are two images in the batch, of different sizes.
* Each image is resized so that its shortest edge is 800, and the other edge is resized to maintain the aspect ratio. If the resulting longest edge is longer than 1333, then the longest edge is resized to 1333 and the shortest edge resized to maintain the aspect ratio.
* The images are padded to the largest height and width dimensions in the batch
* The masks are padded to match their respective image's padding.
* The bounding box values are readjusted to account for the padded image size
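The last step can be illustrated with a small sketch (not the exact implementation; boxes are normalized `(center_x, center_y, width, height)` and padding is assumed to be added on the bottom/right):

```python
def update_normalized_bboxes(bboxes, height, width, padded_height, padded_width):
    # Rescale relative coordinates so they stay correct on the padded canvas.
    sx, sy = width / padded_width, height / padded_height
    return [(cx * sx, cy * sy, w * sx, h * sy) for cx, cy, w, h in bboxes]
```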
Fixes #28153
## Bounding boxes
In the previous processing logic, there were two possible scenarios:
* If `do_normalize=False` then no action is needed. The output bboxes are not in relative format.
* If `do_normalize=True` the relative coordinates of the bbox needs to be updated to account for the additional pixels when padding.
This PR:
* Adds a new argument `do_convert_annotations` which enables the user to control whether the bboxes are converted independent of `do_normalize`. This is useful because 1) the normalization of bounding boxes is independent of the pixel values 2) The current `normalize_annotations` logic both normalizes AND converts to a different bbox format (`(x0, y0, x1, y1)` -> `(center_x, center_y, width, height)`)
* Conditionally updates the bounding boxes wrt the padded images only if `do_convert_annotations=True`. If `do_convert_annotations=False` this isn't necessary.
Here we see the input and output images, and their bbox annotations.
On main:
![annotations_original](https://github.com/huggingface/transformers/assets/22614925/953ef7ab-4e55-4f93-832f-7b99d884729c)
On this branch:
![annotations_fixed](https://github.com/huggingface/transformers/assets/22614925/4a8f247b-0f74-4e21-b602-1f21aa2f94f4)
## Masks
Masks are updated so they have the same padding as their corresponding image.
Here are the input and output images and masks:
On main:
![masks_0_original](https://github.com/huggingface/transformers/assets/22614925/9050b269-77f0-4a2d-9206-0e923039307b)
On this branch:
![masks_0_fixed](https://github.com/huggingface/transformers/assets/22614925/23bda519-6c53-4845-94e9-604f7882dc14)
On main:
![masks_1_original](https://github.com/huggingface/transformers/assets/22614925/7de82bfd-8f80-4a84-a483-40dac3b34b3b)
On this branch:
![masks_1_fixed](https://github.com/huggingface/transformers/assets/22614925/6cd543a2-e39f-4bf5-ae20-1491099d0225)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28363/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28363",
"html_url": "https://github.com/huggingface/transformers/pull/28363",
"diff_url": "https://github.com/huggingface/transformers/pull/28363.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28363.patch",
"merged_at": 1707848826000
} |
https://api.github.com/repos/huggingface/transformers/issues/28362 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28362/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28362/comments | https://api.github.com/repos/huggingface/transformers/issues/28362/events | https://github.com/huggingface/transformers/pull/28362 | 2,067,842,903 | PR_kwDOCUB6oc5jVsvA | 28,362 | Tokenizer kwargs in textgeneration pipe | {
"login": "thedamnedrhino",
"id": 8396998,
"node_id": "MDQ6VXNlcjgzOTY5OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8396998?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thedamnedrhino",
"html_url": "https://github.com/thedamnedrhino",
"followers_url": "https://api.github.com/users/thedamnedrhino/followers",
"following_url": "https://api.github.com/users/thedamnedrhino/following{/other_user}",
"gists_url": "https://api.github.com/users/thedamnedrhino/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thedamnedrhino/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thedamnedrhino/subscriptions",
"organizations_url": "https://api.github.com/users/thedamnedrhino/orgs",
"repos_url": "https://api.github.com/users/thedamnedrhino/repos",
"events_url": "https://api.github.com/users/thedamnedrhino/events{/privacy}",
"received_events_url": "https://api.github.com/users/thedamnedrhino/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thank you 🤗. Anything specific I can help with?",
"Not sure since it's merged 🤗 "
] | 1,704 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Support tokenizer arguments in `text-generation` pipeline `__call__()`
Fixes #27869
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28362/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28362",
"html_url": "https://github.com/huggingface/transformers/pull/28362",
"diff_url": "https://github.com/huggingface/transformers/pull/28362.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28362.patch",
"merged_at": 1705333938000
} |
https://api.github.com/repos/huggingface/transformers/issues/28361 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28361/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28361/comments | https://api.github.com/repos/huggingface/transformers/issues/28361/events | https://github.com/huggingface/transformers/pull/28361 | 2,067,842,705 | PR_kwDOCUB6oc5jVssN | 28,361 | chore: Fix typo s/exclusivelly/exclusively/ | {
"login": "hugo-syn",
"id": 61210734,
"node_id": "MDQ6VXNlcjYxMjEwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/61210734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hugo-syn",
"html_url": "https://github.com/hugo-syn",
"followers_url": "https://api.github.com/users/hugo-syn/followers",
"following_url": "https://api.github.com/users/hugo-syn/following{/other_user}",
"gists_url": "https://api.github.com/users/hugo-syn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hugo-syn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hugo-syn/subscriptions",
"organizations_url": "https://api.github.com/users/hugo-syn/orgs",
"repos_url": "https://api.github.com/users/hugo-syn/repos",
"events_url": "https://api.github.com/users/hugo-syn/events{/privacy}",
"received_events_url": "https://api.github.com/users/hugo-syn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,704 | 1,704 | 1,704 | CONTRIBUTOR | null | # What does this PR do?
Just fix some typos in the code/doc
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Sorry if you are not the right person, but I think that _documentation_ should be appropriate @stevhliu @MKhalusova
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28361/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28361",
"html_url": "https://github.com/huggingface/transformers/pull/28361",
"diff_url": "https://github.com/huggingface/transformers/pull/28361.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28361.patch",
"merged_at": 1704489555000
} |
https://api.github.com/repos/huggingface/transformers/issues/28360 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28360/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28360/comments | https://api.github.com/repos/huggingface/transformers/issues/28360/events | https://github.com/huggingface/transformers/issues/28360 | 2,067,751,897 | I_kwDOCUB6oc57P2PZ | 28,360 | Pythia (GPTNeoXForCausalLM) Regression (inference time) in transformers 4.35.0 | {
"login": "JonasGeiping",
"id": 22680696,
"node_id": "MDQ6VXNlcjIyNjgwNjk2",
"avatar_url": "https://avatars.githubusercontent.com/u/22680696?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JonasGeiping",
"html_url": "https://github.com/JonasGeiping",
"followers_url": "https://api.github.com/users/JonasGeiping/followers",
"following_url": "https://api.github.com/users/JonasGeiping/following{/other_user}",
"gists_url": "https://api.github.com/users/JonasGeiping/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JonasGeiping/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JonasGeiping/subscriptions",
"organizations_url": "https://api.github.com/users/JonasGeiping/orgs",
"repos_url": "https://api.github.com/users/JonasGeiping/repos",
"events_url": "https://api.github.com/users/JonasGeiping/events{/privacy}",
"received_events_url": "https://api.github.com/users/JonasGeiping/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Thanks for reporting, this should be a lot easier to debug! 🤗 Having a look (can definitely reproduce).\r\n- quick fix: using `bfloat16` gets rid of the NaNs. \r\n- definitely a regression, as I can confirm this was passing before. \r\n- seems to come from https://github.com/huggingface/transformers/commit/253f9a3f9716d08a81fb305fe71f983122eb608b \r\n- we don't have failing tests, so either we load in a different dtype or we just didn't test it. \r\n"
] | 1,704 | 1,705 | 1,705 | NONE | null | ### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.16.19-76051619-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.1
- Accelerate version: 0.25.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0.dev20240104 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### The Problem
The Pythia models, and by extension the `GPTNeoXForCausalLM` implementation, don't appear to be working correctly in 4.35. I've attached a simple reproduction snippet below. This code works on 4.34 but produces NaNs on 4.35 during the forward pass. The token ids are not particularly anomalous.
The problem is likely related to the report at https://github.com/huggingface/transformers/issues/28316, but this issue shows that any effects on reward modeling might be second-order effects and that the changes between 4.34 and 4.35 are the problem.
@ArthurZucker @younesbelkada
### Reproduction
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name_or_path = "EleutherAI/pythia-70m"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype=torch.float16)

input_ids = [29, 93, 303, 64, 5478, 49651, 10394, 187, 34, 12939, 875]  # this is '<|im_start|>system\nA chat between'
# alternative: tokenizer('<|im_start|>system\nA chat between')
input_ids = torch.as_tensor(input_ids)[None]

model.cuda()
input_ids = input_ids.cuda()
model(input_ids)["logits"]  # has NaNs?
```
### Expected behavior
Normal forward pass, without NaNs. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28360/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/28359 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28359/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28359/comments | https://api.github.com/repos/huggingface/transformers/issues/28359/events | https://github.com/huggingface/transformers/pull/28359 | 2,067,739,721 | PR_kwDOCUB6oc5jVVcL | 28,359 | [i18n-fr] Translate pipeline tutorial to French | {
"login": "NoB0",
"id": 28621493,
"node_id": "MDQ6VXNlcjI4NjIxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/28621493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NoB0",
"html_url": "https://github.com/NoB0",
"followers_url": "https://api.github.com/users/NoB0/followers",
"following_url": "https://api.github.com/users/NoB0/following{/other_user}",
"gists_url": "https://api.github.com/users/NoB0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NoB0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NoB0/subscriptions",
"organizations_url": "https://api.github.com/users/NoB0/orgs",
"repos_url": "https://api.github.com/users/NoB0/repos",
"events_url": "https://api.github.com/users/NoB0/events{/privacy}",
"received_events_url": "https://api.github.com/users/NoB0/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"> Pipeline is a female gendered noun in french! Otherwise should be alright\r\n\r\nAccording to [Larousse](https://www.larousse.fr/dictionnaires/francais/pipeline/61067) and [Le Petit Robert](https://dictionnaire.lerobert.com/definition/pipeline) dictionaries it is male gendered. Waiting for feedback before changing the gender.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28359). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"It is not the same object 😅 \r\n> anglicisme Tuyau servant au transport à grande distance et en grande quantité de fluides (pétrole, gaz naturel…). ➙ [gazoduc](https://dictionnaire.lerobert.com/definition/gazoduc), [oléoduc](https://dictionnaire.lerobert.com/definition/oleoduc).\r\n\r\nthis one is male, but it sounds wrong when talking about AI no? ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | CONTRIBUTOR | null | # What does this PR do?
Translates the `pipeline_tutorial.md` file of the documentation to French.
Part of #21456
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
French speaking contributors.
Documentation: @stevhliu and @MKhalusova | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28359/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28359",
"html_url": "https://github.com/huggingface/transformers/pull/28359",
"diff_url": "https://github.com/huggingface/transformers/pull/28359.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28359.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28358 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28358/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28358/comments | https://api.github.com/repos/huggingface/transformers/issues/28358/events | https://github.com/huggingface/transformers/pull/28358 | 2,067,570,096 | PR_kwDOCUB6oc5jUvwt | 28,358 | Add HTDemucs | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_28358). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,708 | 1,708 | CONTRIBUTOR | null | # What does this PR do?
Adds HTDemucs, required for the MusicGen melody model. Supersedes #25660.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28358/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/28358",
"html_url": "https://github.com/huggingface/transformers/pull/28358",
"diff_url": "https://github.com/huggingface/transformers/pull/28358.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/28358.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/28357 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/28357/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/28357/comments | https://api.github.com/repos/huggingface/transformers/issues/28357/events | https://github.com/huggingface/transformers/issues/28357 | 2,067,521,303 | I_kwDOCUB6oc57O98X | 28,357 | "cached cross_attention states don't have to be reordered -> they are always the same" | {
"login": "YJYJLee",
"id": 28900943,
"node_id": "MDQ6VXNlcjI4OTAwOTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28900943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YJYJLee",
"html_url": "https://github.com/YJYJLee",
"followers_url": "https://api.github.com/users/YJYJLee/followers",
"following_url": "https://api.github.com/users/YJYJLee/following{/other_user}",
"gists_url": "https://api.github.com/users/YJYJLee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YJYJLee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YJYJLee/subscriptions",
"organizations_url": "https://api.github.com/users/YJYJLee/orgs",
"repos_url": "https://api.github.com/users/YJYJLee/repos",
"events_url": "https://api.github.com/users/YJYJLee/events{/privacy}",
"received_events_url": "https://api.github.com/users/YJYJLee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Cross attention only happens between the encoder and decoder, but the encoded states do not change: you always use the same ones, while the decoder states change. Thus you don't re-compute / change the cross-attention states across the generate function",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,704 | 1,707 | 1,707 | NONE | null | ### System Info
Sorry, this is not a bug report, but I couldn't find the proper category to ask this question.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
![Screenshot 2024-01-05 at 10 13 07 AM](https://github.com/huggingface/transformers/assets/28900943/d4abaec7-9dfd-4384-94dc-43a120964b19)
I have noticed that in many models, KV cache reordering is not happening for cross_attention states.
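A minimal sketch (illustrative lists rather than real tensors, and not the transformers implementation) of why this skipping is safe: every beam of a sample shares the same encoder output, so the beam-expanded cross-attention states are identical within each beam group, and any reordering by beam indices is a no-op:

```python
def expand_for_beams(encoder_states, num_beams):
    """Each beam starts from the same encoder output, so the expanded
    cross-attention cache repeats every sample's row num_beams times."""
    return [row for row in encoder_states for _ in range(num_beams)]

def reorder(cache, beam_idx):
    """The generic beam-search reordering applied to self-attention K/V."""
    return [cache[i] for i in beam_idx]

encoder_states = [[1.0, 2.0], [3.0, 4.0]]  # "encoder output" for 2 samples
cross_cache = expand_for_beams(encoder_states, num_beams=3)

# Beam indices always stay within each sample's beam group, and within a
# group every cross-attention row is identical, so reordering changes nothing:
beam_idx = [2, 0, 1, 4, 3, 5]
assert reorder(cross_cache, beam_idx) == cross_cache

# A self-attention cache differs per beam, so the same reorder matters there:
self_cache = ["beam0", "beam1", "beam2", "beam3", "beam4", "beam5"]
assert reorder(self_cache, beam_idx) != self_cache
```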
### Expected behavior
Could someone explain why we don't need KV cache reordering for cross_attention states?
To my knowledge, the KV cache for cross_attention also needs to be reordered according to the selected beam indices during beam search. But I'm not sure why "they are always the same" holds for cross_attentions. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/28357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/28357/timeline | completed | null | null |