Dataset schema (one row per column, with the viewer's length/value statistics):

| Column | Type | Lengths / values |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | sequence | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0-234k |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
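The records reproduced below follow this schema: one row per GitHub issue or pull request from the huggingface/transformers repository, with the field values listed in the column order above. As a minimal sketch of how a dataset with this schema could be loaded and inspected with the Hugging Face `datasets` library, see below; the dataset identifier is a placeholder assumption, not the actual repository id of this dataset.

```python
# Sketch: loading and inspecting a dataset with the schema above.
# Assumption: "your-namespace/transformers-github-issues" is a hypothetical
# placeholder id; substitute the real dataset repository before running.
from datasets import load_dataset

ds = load_dataset("your-namespace/transformers-github-issues", split="train")

print(ds.features)                      # column names and types, matching the schema table
print(ds[0]["number"], ds[0]["title"])  # e.g. issue/PR number and title of the first record

# Rows with a populated `pull_request` field are pull requests; the rest are plain issues.
# Assumption: non-PR rows carry None here; depending on how the dataset was built,
# they could instead hold a dict whose sub-fields are all None.
issues_only = ds.filter(lambda row: row["pull_request"] is None)
print(f"{len(issues_only)} issues out of {len(ds)} records")
```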
https://api.github.com/repos/huggingface/transformers/issues/8
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/8/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/8/comments
https://api.github.com/repos/huggingface/transformers/issues/8/events
https://github.com/huggingface/transformers/pull/8
378,859,647
MDExOlB1bGxSZXF1ZXN0MjI5NDY0MjQ5
8
fixed small typos in the README.md
{ "login": "gokriznastic", "id": 14166854, "node_id": "MDQ6VXNlcjE0MTY2ODU0", "avatar_url": "https://avatars.githubusercontent.com/u/14166854?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gokriznastic", "html_url": "https://github.com/gokriznastic", "followers_url": "https://api.github.com/users/gokriznastic/followers", "following_url": "https://api.github.com/users/gokriznastic/following{/other_user}", "gists_url": "https://api.github.com/users/gokriznastic/gists{/gist_id}", "starred_url": "https://api.github.com/users/gokriznastic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gokriznastic/subscriptions", "organizations_url": "https://api.github.com/users/gokriznastic/orgs", "repos_url": "https://api.github.com/users/gokriznastic/repos", "events_url": "https://api.github.com/users/gokriznastic/events{/privacy}", "received_events_url": "https://api.github.com/users/gokriznastic/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Many thanks!" ]
1,541
1,541
1,541
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/8/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/8/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/8", "html_url": "https://github.com/huggingface/transformers/pull/8", "diff_url": "https://github.com/huggingface/transformers/pull/8.diff", "patch_url": "https://github.com/huggingface/transformers/pull/8.patch", "merged_at": 1541707202000 }
https://api.github.com/repos/huggingface/transformers/issues/7
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/7/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/7/comments
https://api.github.com/repos/huggingface/transformers/issues/7/events
https://github.com/huggingface/transformers/pull/7
378,498,589
MDExOlB1bGxSZXF1ZXN0MjI5MTkxMDMx
7
Develop
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,541
1,541
1,541
MEMBER
null
Fixing `run_squad.py` pre-processing bug. Various clean-ups: - the weight initialization was not optimal (tf. truncated_normal_initializer(stddev=0.02) was translated in weight.data.normal_(0.02) instead of weight.data.normal_(mean=0.0, std=0.02) which likely affected the performance of run_classifer.py also. - gradient accumulation loss was not averaged over the accumulation steps which would have required to change the hyper-parameters for using accumulation. - the evaluation was not done with torch.no_grad() and thus sub-optimal in terms of speed/memory.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/7/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/7/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/7", "html_url": "https://github.com/huggingface/transformers/pull/7", "diff_url": "https://github.com/huggingface/transformers/pull/7.diff", "patch_url": "https://github.com/huggingface/transformers/pull/7.patch", "merged_at": 1541630206000 }
https://api.github.com/repos/huggingface/transformers/issues/6
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/6/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/6/comments
https://api.github.com/repos/huggingface/transformers/issues/6/events
https://github.com/huggingface/transformers/issues/6
377,736,844
MDU6SXNzdWUzNzc3MzY4NDQ=
6
Failure during pytest (and solution for python3)
{ "login": "dandelin", "id": 3676247, "node_id": "MDQ6VXNlcjM2NzYyNDc=", "avatar_url": "https://avatars.githubusercontent.com/u/3676247?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dandelin", "html_url": "https://github.com/dandelin", "followers_url": "https://api.github.com/users/dandelin/followers", "following_url": "https://api.github.com/users/dandelin/following{/other_user}", "gists_url": "https://api.github.com/users/dandelin/gists{/gist_id}", "starred_url": "https://api.github.com/users/dandelin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dandelin/subscriptions", "organizations_url": "https://api.github.com/users/dandelin/orgs", "repos_url": "https://api.github.com/users/dandelin/repos", "events_url": "https://api.github.com/users/dandelin/events{/privacy}", "received_events_url": "https://api.github.com/users/dandelin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks, I update the readme." ]
1,541
1,541
1,541
CONTRIBUTOR
null
``` foo@bar:~/foo/bar/pytorch-pretrained-BERT$ pytest -sv ./tests/ ===================================================================================================================== test session starts ===================================================================================================================== platform linux -- Python 3.6.6, pytest-3.9.1, py-1.7.0, pluggy-0.8.0 -- /home/foo/.pyenv/versions/anaconda3-5.1.0/bin/python cachedir: .pytest_cache rootdir: /data1/users/foo/bar/pytorch-pretrained-BERT, inifile: plugins: remotedata-0.3.0, openfiles-0.3.0, doctestplus-0.1.3, cov-2.6.0, arraydiff-0.2, flaky-3.4.0 collected 0 items / 3 errors =========================================================================================================================== ERRORS ============================================================================================================================ ___________________________________________________________________________________________________________ ERROR collecting tests/modeling_test.py ___________________________________________________________________________________________________________ ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/modeling_test.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/modeling_test.py:25: in <module> import modeling E ModuleNotFoundError: No module named 'modeling' _________________________________________________________________________________________________________ ERROR collecting tests/optimization_test.py _________________________________________________________________________________________________________ ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/optimization_test.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/optimization_test.py:23: in <module> import optimization E ModuleNotFoundError: No module named 'optimization' _________________________________________________________________________________________________________ ERROR collecting tests/tokenization_test.py _________________________________________________________________________________________________________ ImportError while importing test module '/data1/users/foo/bar/pytorch-pretrained-BERT/tests/tokenization_test.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/tokenization_test.py:22: in <module> import tokenization E ModuleNotFoundError: No module named 'tokenization' ===Flaky Test Report=== ===End Flaky Test Report=== !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 3 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! =================================================================================================================== 3 error in 0.60 seconds ================================================================================================================== ``` In python 3, `python -m pytest -sv tests/` works fine.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/6/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/6/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/5
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/5/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/5/comments
https://api.github.com/repos/huggingface/transformers/issues/5/events
https://github.com/huggingface/transformers/issues/5
377,698,378
MDU6SXNzdWUzNzc2OTgzNzg=
5
MRPC hyperparameters question
{ "login": "ethanjperez", "id": 6402205, "node_id": "MDQ6VXNlcjY0MDIyMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/6402205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ethanjperez", "html_url": "https://github.com/ethanjperez", "followers_url": "https://api.github.com/users/ethanjperez/followers", "following_url": "https://api.github.com/users/ethanjperez/following{/other_user}", "gists_url": "https://api.github.com/users/ethanjperez/gists{/gist_id}", "starred_url": "https://api.github.com/users/ethanjperez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ethanjperez/subscriptions", "organizations_url": "https://api.github.com/users/ethanjperez/orgs", "repos_url": "https://api.github.com/users/ethanjperez/repos", "events_url": "https://api.github.com/users/ethanjperez/events{/privacy}", "received_events_url": "https://api.github.com/users/ethanjperez/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi Ethan,\r\nThanks we used the MRPC hyper-parameters indeed, I corrected the README.\r\nRegarding the dev set accuracy, I am not really surprised there is a slightly lower accuracy with the PyTorch version (even though the variance is high so it's hard to get something significant). That is something that is generally observed (see for example [the work of Remi Cadene](https://github.com/Cadene/pretrained-models.pytorch)) and we also experienced that with [our TF->PT port of the OpenAI GPT model](https://github.com/huggingface/pytorch-openai-transformer-lm).\r\nMy personal feeling is that there are slight differences in the way the backends of TensorFlow and PyTorch handle the operations and these differences make the pre-trained weights sub-optimal for PyTorch. ", "Great, thanks for clarifying that. Regarding the slightly lower accuracy, that makes sense. Thanks for your help and for releasing this!", "Maybe it would help to train the Tensorflow pre-trained weights for e.g. one epoch in PyTorch (using the MLM and next-sentence objective)? That may help transfer to other tasks, depending on what the issue is", "Hi @ethanjperez, actually the weight initialization fix (`tf. truncated_normal_initializer(stddev=0.02)` was translated in `weight.data.normal_(0.02)` instead of `weight.data.normal_(mean=0.0, std=0.02)` fixed in 2a97fe22) has brought us back to the TensorFlow results on MRPC (between 84 and 88%).\r\nI am closing this issue.", "@thomwolf Great to hear - thanks for working to fix it!" ]
1,541
1,541
1,541
CONTRIBUTOR
null
When describing how you reproduced the MRPC results, you say: "Our test ran on a few seeds with the original implementation hyper-parameters gave evaluation results between 82 and 87." and you link to the SQuAD hyperparameters (https://github.com/google-research/bert#squad). Is the link a mistake? Or did you use the SQuAD hyperparameters for tuning on MRPC? More generally, I'm wondering if there's a reason the MRPC dev set accuracy is slightly lower (in [82, 87] vs. [84, 88] reported by Google)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/5/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/5/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/4
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/4/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/4/comments
https://api.github.com/repos/huggingface/transformers/issues/4/events
https://github.com/huggingface/transformers/pull/4
377,620,943
MDExOlB1bGxSZXF1ZXN0MjI4NTIxMjA3
4
Fix typo in subheader BertForQuestionAnswering
{ "login": "knutole", "id": 2197944, "node_id": "MDQ6VXNlcjIxOTc5NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2197944?v=4", "gravatar_id": "", "url": "https://api.github.com/users/knutole", "html_url": "https://github.com/knutole", "followers_url": "https://api.github.com/users/knutole/followers", "following_url": "https://api.github.com/users/knutole/following{/other_user}", "gists_url": "https://api.github.com/users/knutole/gists{/gist_id}", "starred_url": "https://api.github.com/users/knutole/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/knutole/subscriptions", "organizations_url": "https://api.github.com/users/knutole/orgs", "repos_url": "https://api.github.com/users/knutole/repos", "events_url": "https://api.github.com/users/knutole/events{/privacy}", "received_events_url": "https://api.github.com/users/knutole/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "exact thanks !" ]
1,541
1,541
1,541
CONTRIBUTOR
null
Should say `BertForQuestionAnswering`, but says `BertForSequenceClassification`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/4/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/4/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/4", "html_url": "https://github.com/huggingface/transformers/pull/4", "diff_url": "https://github.com/huggingface/transformers/pull/4.diff", "patch_url": "https://github.com/huggingface/transformers/pull/4.patch", "merged_at": 1541460858000 }
https://api.github.com/repos/huggingface/transformers/issues/3
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/3/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/3/comments
https://api.github.com/repos/huggingface/transformers/issues/3/events
https://github.com/huggingface/transformers/issues/3
377,592,631
MDU6SXNzdWUzNzc1OTI2MzE=
3
run_squad questions
{ "login": "ZhaoyueCheng", "id": 3590333, "node_id": "MDQ6VXNlcjM1OTAzMzM=", "avatar_url": "https://avatars.githubusercontent.com/u/3590333?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaoyueCheng", "html_url": "https://github.com/ZhaoyueCheng", "followers_url": "https://api.github.com/users/ZhaoyueCheng/followers", "following_url": "https://api.github.com/users/ZhaoyueCheng/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaoyueCheng/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaoyueCheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaoyueCheng/subscriptions", "organizations_url": "https://api.github.com/users/ZhaoyueCheng/orgs", "repos_url": "https://api.github.com/users/ZhaoyueCheng/repos", "events_url": "https://api.github.com/users/ZhaoyueCheng/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaoyueCheng/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[ { "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }, { "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false } ]
[ "It also seems to me that the SQuAD 1.1 can not reproduce the google tensorflow version performance.", "> It also seems to me that the SQuAD 1.1 can not reproduce the google tensorflow version performance.\r\n\r\nWhat batch size are you running?", "I'm running on 4 GPU with a batch size of 48, the result is {\"exact_match\": 21.551561021759696, \"f1\": 41.785968963154055}", "Just ran on 1 GPU batch size of 10, the result is {\"exact_match\": 21.778618732261116, \"f1\": 41.83593185416649}\r\nActually it might be with the eval code Ill look into it", "Sure, Thanks, I'm checking for the reason too, will report if find anything.", "The predictions file is only outputting one word. Need to find out if the bug is in the model itself or write predictions function in run_squad.py. The correct answer always seems to be in the nbest_predictions, but its never selected.", "What performance does Hugging Face get on SQuAD using this reimplementation?", "Hi all,\r\nWe were not able to try SQuAD on a multi-GPU with the correct batch_size until recently so we relied on the standard deviations computed in the [notebooks](https://github.com/huggingface/pytorch-pretrained-BERT/tree/master/notebooks) to compare the predicted hidden states and losses for the SQuAD script. I was able to try on a multi-GPU today and there is indeed a strong difference.\r\nWe got about the same results that you get: F1 of 41.8 and exact match of 21.7.\r\nI am investigating that right now, my personal guess is that this may be related to things outside the model it-self like the optimizer or the post-processing in SQuAD as these were not compared between the TF and PT models.\r\nI will keep you guys updated in this issue and I add a mention in the readme that the SQuAD example doesn't work yet.\r\nIf you have some insights, feel free to participate in the discussion.", "If you're comparing activations, it may be worth comparing gradients as well to see if you receive similarly low gradients standard deviations for identical batches. You might see that the gradient is not comparable from the last layer itself (due to e.g. difference in how PyTorch may handle weight decay / optimization differently); you may also see that gradients only become not comparable only after a particular point in backpropagation, and that would show perhaps that the backward pass for a particular function differs between PyTorch and Tensorflow", "Ok guys thanks for waiting, we've nailed down the culprit which was in fact a bug in the pre-processing logic (more exactly this dumb typo https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/run_squad.py#L865).\r\n\r\nI took the occasion to clean up a few things I noticed while walking through the code:\r\n- the weight initialization was not optimal (`tf. truncated_normal_initializer(stddev=0.02)` was translated in `weight.data.normal_(0.02)` instead of `weight.data.normal_(mean=0.0, std=0.02)` which likely affected the performance of `run_classifer.py` also.\r\n- gradient accumulation loss was not averaged over the accumulation steps which would have required to change the hyper-parameters for using accumulation.\r\n- the evaluation was not done with `torch.no_grad()` and thus sub-optimal in terms of speed/memory.\r\n\r\nThese fixes are pushed on the `develop` branch right now.\r\n\r\nAll in all I think we are pretty good now and none of these issues affected the core PyTorch model (the BERT Transformer it-self) so if you only used `extract_features.py` you were good from the beginning. 
And `run_classifer.py` was ok apart from the sub-optimal additional weights initialization.\r\n\r\nI will merge the develop branch as soon as we got the final results confirmed (currently it's been training for 20 minutes (0.3 epoch) on 4GPU with a batch size of 56 and we are already above 85 on F1 on SQuAD and 77 in exact match so I'm rather confident and I think you guys can play with it too now).\r\n\r\nI am also cleaning up the code base to prepare for a first release that we will put on pip for easier access.", "@thomwolf This is awesome - thank you! Do you know what the final SQuAD results were from the training run you started?", "I got `{\"exact_match\": 80.07568590350047, \"f1\": 87.6494485519583}` with slightly sub-optimal parameters (`max_seq 300` instead of `384` which means more answers are truncated and a `batch_size 56` for 2 epochs of training which is probably a too big batch size and/or 1 epoch should suffice).\r\n\r\nIt trains in about 1h/epoch on 4 GPUs with such a big batch size and truncated examples.", "Using the same HP as the TensorFlow version we are actually slightly better on F1 than the original implementation (on the default random seed we used):\r\n`{\"f1\": 88.52381567990474, \"exact_match\": 81.22043519394512}`\r\nversus TF: `{\"f1\": 88.41249612335034, \"exact_match\": 81.2488174077578}`\r\n\r\nI am trying `BERT-large` on SQuAD now which is totally do-able on a 4 GPU server with the recommended batch-size of 24 (about 16h of expected training time using the `--optimize_on_cpu` option and 2 steps of gradient accumulation). I will update the readme with the results.", "Great, I saw the BERT-large ones as well - thank you for sharing these results! How long did the BERT-base SQuAD training take on a single GPU when you tried it? I saw BERT-large took ~18 hours over 4 K-80's", "Hi Ethan, I didn't try SQuAD on a single-GPU. On four k-80 (not k40), BERT-base took 5h to train on SQuAD." ]
1,541
1,542
1,541
NONE
null
Thanks a lot for the port! I have some minor questions, for the run_squad file, I see two options for accumulating gradients, accumulate_gradients and gradient_accumulation_steps but it seems to me that it can be combined into one. The other one is for the global_step variable, seems we are only counting but not using this variable in gradient accumulating. Thanks again!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/3/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/3/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/2
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/2/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/2/comments
https://api.github.com/repos/huggingface/transformers/issues/2/events
https://github.com/huggingface/transformers/pull/2
377,592,526
MDExOlB1bGxSZXF1ZXN0MjI4NDk5Mjc3
2
Port tokenization for the multilingual model
{ "login": "elyase", "id": 1175888, "node_id": "MDQ6VXNlcjExNzU4ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elyase", "html_url": "https://github.com/elyase", "followers_url": "https://api.github.com/users/elyase/followers", "following_url": "https://api.github.com/users/elyase/following{/other_user}", "gists_url": "https://api.github.com/users/elyase/gists{/gist_id}", "starred_url": "https://api.github.com/users/elyase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elyase/subscriptions", "organizations_url": "https://api.github.com/users/elyase/orgs", "repos_url": "https://api.github.com/users/elyase/repos", "events_url": "https://api.github.com/users/elyase/events{/privacy}", "received_events_url": "https://api.github.com/users/elyase/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for that, sorry for the delay" ]
1,541
1,541
1,541
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/2/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/2/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/2", "html_url": "https://github.com/huggingface/transformers/pull/2", "diff_url": "https://github.com/huggingface/transformers/pull/2.diff", "patch_url": "https://github.com/huggingface/transformers/pull/2.patch", "merged_at": 1541885266000 }
https://api.github.com/repos/huggingface/transformers/issues/1
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/1/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/1/comments
https://api.github.com/repos/huggingface/transformers/issues/1/events
https://github.com/huggingface/transformers/pull/1
377,057,813
MDExOlB1bGxSZXF1ZXN0MjI4MTIwMzcx
1
Create DataParallel model if several GPUs
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,541
1,541
1,541
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/1/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/1/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/1", "html_url": "https://github.com/huggingface/transformers/pull/1", "diff_url": "https://github.com/huggingface/transformers/pull/1.diff", "patch_url": "https://github.com/huggingface/transformers/pull/1.patch", "merged_at": 1541254235000 }