Dataset schema (one row per feature; "stats" is the observed min-max range or class count):

field                dtype           stats
url                  stringlengths   62-66
repository_url       stringclasses   1 value
labels_url           stringlengths   76-80
comments_url         stringlengths   71-75
events_url           stringlengths   69-73
html_url             stringlengths   50-56
id                   int64           377M-2.15B
node_id              stringlengths   18-32
number               int64           1-29.2k
title                stringlengths   1-487
user                 dict            -
labels               list            -
state                stringclasses   2 values
locked               bool            2 classes
assignee             dict            -
assignees            list            -
comments             sequence        -
created_at           int64           1.54k-1.71k
updated_at           int64           1.54k-1.71k
closed_at            int64           1.54k-1.71k
author_association   stringclasses   4 values
active_lock_reason   stringclasses   2 values
body                 stringlengths   0-234k
reactions            dict            -
timeline_url         stringlengths   71-75
state_reason         stringclasses   3 values
draft                bool            2 classes
pull_request         dict            -
https://api.github.com/repos/huggingface/transformers/issues/108
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/108/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/108/comments
https://api.github.com/repos/huggingface/transformers/issues/108/events
https://github.com/huggingface/transformers/issues/108
389,346,652
MDU6SXNzdWUzODkzNDY2NTI=
108
Does max_seq_length specify the maximum number of words
{ "login": "artemlos", "id": 6392760, "node_id": "MDQ6VXNlcjYzOTI3NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/6392760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemlos", "html_url": "https://github.com/artemlos", "followers_url": "https://api.github.com/users/artemlos/followers", "following_url": "https://api.github.com/users/artemlos/following{/other_user}", "gists_url": "https://api.github.com/users/artemlos/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemlos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemlos/subscriptions", "organizations_url": "https://api.github.com/users/artemlos/orgs", "repos_url": "https://api.github.com/users/artemlos/repos", "events_url": "https://api.github.com/users/artemlos/events{/privacy}", "received_events_url": "https://api.github.com/users/artemlos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "`max_seq_length` specifies the maximum number of tokens of the input. The number of token is superior or equal to the number of words of an input. \r\n\r\nFor example, the following sentence:\r\n\r\n```\r\nThe man hits the saxophone and demonstrates how to properly use the racquet.\r\n```\r\n\r\nis tokenized as follows:\r\n\r\n```\r\nthe man hits the saxophone and demonstrates how to properly use the ra ##c ##quet .\r\n```\r\n\r\nAnd depending on the task 2 to 3 additional special tokens (`[CLS]` and `[SEP]`) are added to the input to format it.", "@rodgzilla thanks!", "could we make it smaller? ", "So what if there are sentences where the maximum number of tokens is greater than max_seq_length?\r\n\r\nDoes that mean extra tokens beyond max_seq_length will get cut off?", "@tsungruihon yes, just use smaller sentences\r\n\r\n@echan00 no automatic cut off but there is a warning from the tokenizer that your inputs are too long and the model will throw an error. You have to limit the size manually.", "Hi All,\r\n\r\nDoes that mean we cannot use BERT for classifying long documents. The documents having 5-6 Paragraphs and each paragraph having 10-15 mins with about 10-12 words in each line ?", "@SaurabhBhatia0211 \r\nYou can try splitting a document to smaller chunks (e.g. paragraphs or even lines), computing embedding for each of those chunks, and average those vectors to get the document representation. ", "@rodgzilla is this true?\r\n\r\n> HuggingFace's Trainer API, including the SFTrainer, by default pads all sequences to the maximum length within the batch, not to the max_seq_length argument. The max_seq_length argument serves as a hard limit to the sequence length, truncating any examples that are longer than that. The API was designed this way because padding to the maximum sequence length in the batch improves computational efficiency.\r\n\r\n?" ]
1,544
1,689
1,544
NONE
null
I'm trying to figure out how the `--max_seq_length` parameter works in [run_classifier](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py). Based on the source, it seems like it represents the number of words? Is that correct?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/108/timeline
completed
null
null
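The distinction drawn in the thread above, that `max_seq_length` counts WordPiece tokens rather than words, is easy to verify. A minimal sketch, assuming the modern `transformers` package (the thread predates it and used `pytorch_pretrained_bert`, whose `BertTokenizer` behaves the same way):

```python
# Sketch: words vs. WordPiece tokens. Assumes `transformers` is installed and
# the bert-base-uncased vocabulary can be downloaded.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

sentence = "The man hits the saxophone and demonstrates how to properly use the racquet."
tokens = tokenizer.tokenize(sentence)

print(len(sentence.split()))  # 13 words
print(len(tokens))            # more tokens than words: "racquet" -> "ra", "##c", "##quet"
# max_seq_length must also leave room for the special tokens ([CLS], [SEP]).
```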
https://api.github.com/repos/huggingface/transformers/issues/107
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/107/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/107/comments
https://api.github.com/repos/huggingface/transformers/issues/107/events
https://github.com/huggingface/transformers/pull/107
389,227,363
MDExOlB1bGxSZXF1ZXN0MjM3MjYyNzEy
107
Fix optimizer to work with horovod
{ "login": "llidev", "id": 29957883, "node_id": "MDQ6VXNlcjI5OTU3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/29957883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llidev", "html_url": "https://github.com/llidev", "followers_url": "https://api.github.com/users/llidev/followers", "following_url": "https://api.github.com/users/llidev/following{/other_user}", "gists_url": "https://api.github.com/users/llidev/gists{/gist_id}", "starred_url": "https://api.github.com/users/llidev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llidev/subscriptions", "organizations_url": "https://api.github.com/users/llidev/orgs", "repos_url": "https://api.github.com/users/llidev/repos", "events_url": "https://api.github.com/users/llidev/events{/privacy}", "received_events_url": "https://api.github.com/users/llidev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Great thanks!" ]
1,544
1,544
1,544
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/107/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/107", "html_url": "https://github.com/huggingface/transformers/pull/107", "diff_url": "https://github.com/huggingface/transformers/pull/107.diff", "patch_url": "https://github.com/huggingface/transformers/pull/107.patch", "merged_at": 1544523481000 }
https://api.github.com/repos/huggingface/transformers/issues/106
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/106/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/106/comments
https://api.github.com/repos/huggingface/transformers/issues/106/events
https://github.com/huggingface/transformers/issues/106
389,201,876
MDU6SXNzdWUzODkyMDE4NzY=
106
Picking max_sequence_length in run_classifier.py CoLA task
{ "login": "artemlos", "id": 6392760, "node_id": "MDQ6VXNlcjYzOTI3NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/6392760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemlos", "html_url": "https://github.com/artemlos", "followers_url": "https://api.github.com/users/artemlos/followers", "following_url": "https://api.github.com/users/artemlos/following{/other_user}", "gists_url": "https://api.github.com/users/artemlos/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemlos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemlos/subscriptions", "organizations_url": "https://api.github.com/users/artemlos/orgs", "repos_url": "https://api.github.com/users/artemlos/repos", "events_url": "https://api.github.com/users/artemlos/events{/privacy}", "received_events_url": "https://api.github.com/users/artemlos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "As mentioned in #89, the maximum value of `max_sequence_length` is 512. ", "@rodgzilla thanks!" ]
1,544
1,544
1,544
NONE
null
Is there an upper bound for the max_sequence_length parameter when using run_classifier.py with the CoLA task? When I tested with the default max_sequence_length of 128, everything worked fine, but once I changed it to something else, e.g. 1024, it started the training and failed on the first iteration with the error shown below: ```` Traceback (most recent call last): File "run_classifier.py", line 643, in <module> main() File "run_classifier.py", line 551, in main loss = model(input_ids, segment_ids, input_mask, label_ids) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 868, in forward _, pooled_output = self.bert(input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 609, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 199, in forward embeddings = self.dropout(embeddings) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 53, in forward return F.dropout(input, self.p, self.training, self.inplace) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/functional.py", line 595, in dropout return _functions.dropout.Dropout.apply(input, p, training, inplace) File "/jet/var/python/lib/python3.6/site-packages/torch/nn/_functions/dropout.py", line 40, in forward ctx.noise.bernoulli_(1 - ctx.p).div_(1 - ctx.p) RuntimeError: Creating MTGP constants failed. at /jet/tmp/build/aten/src/THC/THCTensorRandom.cu:34 ```` The command I ran is ``` python run_classifier.py \ --task_name CoLA \ --do_train \ --do_eval \ --do_lower_case \ --data_dir $GLUE_DIR/Test/ \ --bert_model bert-base-uncased \ --max_seq_length 128 \ --train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3.0 \ --output_dir /tmp/BERT/test1 ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/106/timeline
completed
null
null
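As the answer notes, 512 is a hard ceiling imposed by the pretrained position embeddings. A sketch of how one would stay under it with today's `transformers` tokenizer API (the original `run_classifier.py` instead expected the user to pick a small enough `--max_seq_length`):

```python
# Sketch: keep encoded inputs within BERT's 512-position limit by truncating.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

long_text = "word " * 2000  # deliberately longer than the model accepts
encoded = tokenizer(long_text, truncation=True, max_length=512)

print(len(encoded["input_ids"]))  # 512, counting [CLS] and [SEP]
```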
https://api.github.com/repos/huggingface/transformers/issues/105
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/105/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/105/comments
https://api.github.com/repos/huggingface/transformers/issues/105/events
https://github.com/huggingface/transformers/issues/105
388,994,586
MDU6SXNzdWUzODg5OTQ1ODY=
105
weights initialized two times
{ "login": "friskit-china", "id": 2494883, "node_id": "MDQ6VXNlcjI0OTQ4ODM=", "avatar_url": "https://avatars.githubusercontent.com/u/2494883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/friskit-china", "html_url": "https://github.com/friskit-china", "followers_url": "https://api.github.com/users/friskit-china/followers", "following_url": "https://api.github.com/users/friskit-china/following{/other_user}", "gists_url": "https://api.github.com/users/friskit-china/gists{/gist_id}", "starred_url": "https://api.github.com/users/friskit-china/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/friskit-china/subscriptions", "organizations_url": "https://api.github.com/users/friskit-china/orgs", "repos_url": "https://api.github.com/users/friskit-china/repos", "events_url": "https://api.github.com/users/friskit-china/events{/privacy}", "received_events_url": "https://api.github.com/users/friskit-china/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think it required for both the places. Because both of them can be used individually. As it is mentioned in the README.md file, the model can be loaded with 7 classes. In fact if you check `BertForMaskedLM` and `BertForNextSentencePrediction` classes it also has the weights initialised.\r\n\r\nPlease correct me if I am wrong :)", "You are right @Arjunsankarlal :-)" ]
1,544
1,544
1,544
NONE
null
Hi, I found that you initialized all weights twice: The first one is in the BertModel class: https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L586 And the second one is in the classes for each task, such as the BertForSequenceClassification class: https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L674 I think maybe you only need the second one?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/105/timeline
completed
null
null
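The double-initialization pattern discussed above can be illustrated stand-alone. This is a sketch with hypothetical class names, not the library's actual code: each class applies the initializer because either can be instantiated on its own, and loading pretrained weights afterwards overwrites the random values anyway.

```python
# Sketch of the double-initialization pattern. Class names are hypothetical.
import torch.nn as nn

def init_weights(module):
    # BERT-style initialization for linear layers
    if isinstance(module, nn.Linear):
        module.weight.data.normal_(mean=0.0, std=0.02)
        if module.bias is not None:
            module.bias.data.zero_()

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 32)
        self.apply(init_weights)          # init #1: base model is usable on its own

class TinyClassifier(nn.Module):
    def __init__(self, num_labels=2):
        super().__init__()
        self.encoder = TinyEncoder()
        self.classifier = nn.Linear(32, num_labels)
        self.apply(init_weights)          # init #2: also covers the new head
        # Running apply() twice is redundant for the encoder but harmless:
        # pretrained weights are loaded after construction and overwrite both.

model = TinyClassifier()
```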
https://api.github.com/repos/huggingface/transformers/issues/104
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/104/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/104/comments
https://api.github.com/repos/huggingface/transformers/issues/104/events
https://github.com/huggingface/transformers/issues/104
388,930,579
MDU6SXNzdWUzODg5MzA1Nzk=
104
BERT for classification example training files
{ "login": "artemlos", "id": 6392760, "node_id": "MDQ6VXNlcjYzOTI3NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/6392760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemlos", "html_url": "https://github.com/artemlos", "followers_url": "https://api.github.com/users/artemlos/followers", "following_url": "https://api.github.com/users/artemlos/following{/other_user}", "gists_url": "https://api.github.com/users/artemlos/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemlos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemlos/subscriptions", "organizations_url": "https://api.github.com/users/artemlos/orgs", "repos_url": "https://api.github.com/users/artemlos/repos", "events_url": "https://api.github.com/users/artemlos/events{/privacy}", "received_events_url": "https://api.github.com/users/artemlos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please read the [example section in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#fine-tuning-with-bert-running-the-examples)" ]
1,544
1,544
1,544
NONE
null
Are there any example training files for `run_classifier.py`?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/104/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/103
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/103/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/103/comments
https://api.github.com/repos/huggingface/transformers/issues/103/events
https://github.com/huggingface/transformers/issues/103
388,915,407
MDU6SXNzdWUzODg5MTU0MDc=
103
Words after tokenization replaced with #
{ "login": "nischalhp", "id": 1147533, "node_id": "MDQ6VXNlcjExNDc1MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1147533?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nischalhp", "html_url": "https://github.com/nischalhp", "followers_url": "https://api.github.com/users/nischalhp/followers", "following_url": "https://api.github.com/users/nischalhp/following{/other_user}", "gists_url": "https://api.github.com/users/nischalhp/gists{/gist_id}", "starred_url": "https://api.github.com/users/nischalhp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nischalhp/subscriptions", "organizations_url": "https://api.github.com/users/nischalhp/orgs", "repos_url": "https://api.github.com/users/nischalhp/repos", "events_url": "https://api.github.com/users/nischalhp/events{/privacy}", "received_events_url": "https://api.github.com/users/nischalhp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Because it uses WordPiece tokenization, and will introduce the `#` token.\r\nCheck: https://github.com/google-research/bert#tokenization", "@ymcui okay sweet, thank you. Will use the relevant one. ", "@ymcui How do I change this ? or is not possible to do so?", "1. If you are training completely from scratch, then it will be possible to use your own tokenizer.\r\n2. However, if you are fine-tuning on the existing pre-trained BERT models, I think it will not be possible to change the tokenizer, as the pre-trained BERT models are trained using WordPiece tokenizer. ", "@ymcui is right.\r\n\r\nSince the purpose of the present repo is to supply pre-trained model basically you are stuck with WordPiece tokenization.\r\n\r\nIf you build a new model and train it from scratch, you can selected whatever tokenization you want :-)", "@ymcui @thomwolf - Yes, that is quite a problem and thanks for getting back. Evaluating building something on our own now 🗡 " ]
1,544
1,544
1,544
NONE
null
Hello, When training the bert-base-multilingual-cased model for Question and Answering, I see that the tokens look like this: ```tokens: [CLS] what is the ins ##ured _ name ? [SEP] versi ##cherung ##ss ##che ##in erg ##o hau ##srat ##versi ##cherung hr - sv 927 ##26 ##49 ##2 ``` Any idea why words are getting replaced with #? Here is the command I am using: ```python run_squad.py --bert_model bert-base-multilingual-cased --do_train --do_predict --train_file dataset_train.json --predict_file dataset_predict.json --train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2.0 --max_seq_length 400 --doc_stride 20 --output_dir output_dir```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/103/timeline
completed
null
null
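For completeness, the `##` prefix marks a WordPiece continuation and can be folded back into whole words. A small self-contained sketch (the helper name is ours, not part of the library):

```python
# Sketch: fold WordPiece continuation tokens (prefixed with "##") back into words.
def merge_wordpieces(tokens):
    words = []
    for tok in tokens:
        if tok.startswith("##") and words:
            words[-1] += tok[2:]   # continuation of the previous piece
        else:
            words.append(tok)
    return words

pieces = ["versi", "##cherung", "##ss", "##che", "##in", "erg", "##o"]
print(merge_wordpieces(pieces))  # ['versicherungsschein', 'ergo']
```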
https://api.github.com/repos/huggingface/transformers/issues/102
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/102/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/102/comments
https://api.github.com/repos/huggingface/transformers/issues/102/events
https://github.com/huggingface/transformers/issues/102
388,901,365
MDU6SXNzdWUzODg5MDEzNjU=
102
How to modify the model config?
{ "login": "Arjunsankarlal", "id": 28828445, "node_id": "MDQ6VXNlcjI4ODI4NDQ1", "avatar_url": "https://avatars.githubusercontent.com/u/28828445?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Arjunsankarlal", "html_url": "https://github.com/Arjunsankarlal", "followers_url": "https://api.github.com/users/Arjunsankarlal/followers", "following_url": "https://api.github.com/users/Arjunsankarlal/following{/other_user}", "gists_url": "https://api.github.com/users/Arjunsankarlal/gists{/gist_id}", "starred_url": "https://api.github.com/users/Arjunsankarlal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Arjunsankarlal/subscriptions", "organizations_url": "https://api.github.com/users/Arjunsankarlal/orgs", "repos_url": "https://api.github.com/users/Arjunsankarlal/repos", "events_url": "https://api.github.com/users/Arjunsankarlal/events{/privacy}", "received_events_url": "https://api.github.com/users/Arjunsankarlal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The problem is because of the `max_position_embeddings` default size is 512 and it is exceeding in the case of my input as I mentioned. For now I have just made hack by hard coding it directly in the [modelling.py](url) file directly 😅. Yet need to know, where to find the bert_config.json file and changing it there would be the correct way of doing it.", "The config file is located in the .tar.gz archive that is getting downloaded, cached, and then extracted on the fly as you create a `BertModel` instance with the static `from_pretrained()` constructor. \r\nYou'll see a log message like\r\n```\r\n extracting archive file /home/USERNAME/.pytorch_pretrained_bert/bert-base-cased.tar.gz to temp dir /tmp/tmp96bkwrj0\r\n```\r\nIf you extract that archive yourself, you'll find the bert_config.json file. The thing, though, is that it doesn't make sense to modify this file, as it is tied to the pretrained models. If you increase `max_position_embeddings` in the config, you won't be able to use the pretrained models.\r\n\r\nInstead, you will have to train a model from scratch, which may or -- more likely -- may not be feasible depending on the hardware you have access to.", "Yeah as you said, while debugging I noticed that every time the .tar.gz file was extracted to a new temp cache location and from there models are fetched. Even in that case we are not able to find the json file where it was extracted. Also I think `max_position_embeddings` does not relate with the model training because, when I changed its value(before loading the model with torch.load) like this \r\n\r\n`config.__dict__['max_position_embeddings'] = 2048`\r\n\r\nfrom 512 to 2048 (hard coded way) the code ran properly without any error.\r\n\r\nAnd the [lines](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L99-L101) in modelling.py tells that it can be customised if required. But I don't see a way parameterising it so that it will be changed while fetching the config, because it is loaded like [this](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/modeling.py#L500-L501).\r\n\r\nIt would be great if customisations are supported for the applicable options.", "It does not make sense to customize options when using pretrained models, it only makes sense when training your own model from scratch.\r\n\r\nYou cannot use the pretrained models with another max_position_embeddings than 512, because the pretrained models contain pretrained embeddings for 512 positions.\r\nThe original transformer paper introduced a [positional encoding](http://nlp.seas.harvard.edu/2018/04/03/attention.html#positional-encoding) which allows extrapolation to arbitrary input lengths, but this was not used in BERT.\r\n\r\nYou can override max_position_embeddings, but this won't have any effect. The model will probably run fine for shorter inputs, but you will get a `RuntimeError: cuda runtime error (59)` for an input longer than 512 word pieces, because the embedding lookup [here](https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/pytorch_pretrained_bert/modeling.py#L194) will attempt to use an index that is too large.", "Indeed, it doesn't make sense to go over 512 tokens for a pre-trained model.\r\n\r\nIf you have longer text, you should try the sliding window approach detailed on the original Bert repo: https://github.com/google-research/bert/issues/66", "1. 
What if my sentences are well within 100 token in length. In that case does it make sense to change max_position_embeddings?\r\n2. Adding 1 more similar question to, during model evaluation if I pass sentence to model and generate embeddings will it take sentence length as total tokens or 512 default? In that scenario if my sentence has 10 unique tokens then what does 512 stands for in hidden layers?" ]
1,544
1,627
1,544
NONE
null
I am trying to generate an embedding for a large sentence. I get this error > Traceback (most recent call last): all_encoder_layers, _ = model(input_ids, token_type_ids=None, attention_mask=input_mask) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/Users/venv/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 611, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/Users/venv/lib/python3.6/site-packages/pytorch_pretrained_bert/modeling.py", line 196, in forward position_embeddings = self.position_embeddings(position_ids) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/Users/venv/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 110, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/Users/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1110, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: index out of range at /Users/soumith/code/builder/wheel/pytorch-src/aten/src/TH/generic/THTensorMath.cpp:352 I find that max_position_embeddings (default size 512) is getting exceeded; it is taken from the config that is downloaded as part of the initial step. Initially the download was done to the default location `PYTORCH_PRETRAINED_BERT_CACHE`, where I was not able to find the config.json other than the model file and vocab.txt (named with random characters). I then downloaded to a specific local location with the `cache_dir` param; there too I faced the same problem of finding the bert_config.json. Also I found a file in both the default cache and local cache, named with junk characters of JSON type. When I tried opening it, I could just see this _{"url": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz", "etag": "\"61343686707ed78320e9e7f406946db2-49\""}_ Any help to modify the config.json would be appreciated. Or if this is being caused by a different reason, please let me know.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/102/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/102/timeline
completed
null
null
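The sliding-window approach referenced in the final comments can be sketched in a few lines. Window and stride values below are illustrative choices, not something the repo prescribes:

```python
# Sketch: sliding-window chunking so each chunk fits the 512-position limit.
def sliding_windows(token_ids, window=510, stride=255):
    # window=510 leaves room for [CLS] and [SEP] in a 512-token input
    chunks = []
    start = 0
    while True:
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break                    # last window reached the end of the text
        start += stride              # overlapping windows preserve context
    return chunks

ids = list(range(1200))              # stand-in for real token ids
for chunk in sliding_windows(ids):
    print(len(chunk))                # each chunk is short enough to embed
```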
https://api.github.com/repos/huggingface/transformers/issues/101
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/101/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/101/comments
https://api.github.com/repos/huggingface/transformers/issues/101/events
https://github.com/huggingface/transformers/pull/101
388,788,249
MDExOlB1bGxSZXF1ZXN0MjM2OTczMzA0
101
Adding --do_lower_case for all uncased BERTs examples
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed, thanks for that!" ]
1,544
1,544
1,544
CONTRIBUTOR
null
I had missed those; it should make sense to use them.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/101/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/101/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/101", "html_url": "https://github.com/huggingface/transformers/pull/101", "diff_url": "https://github.com/huggingface/transformers/pull/101.diff", "patch_url": "https://github.com/huggingface/transformers/pull/101.patch", "merged_at": 1544387372000 }
https://api.github.com/repos/huggingface/transformers/issues/100
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/100/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/100/comments
https://api.github.com/repos/huggingface/transformers/issues/100/events
https://github.com/huggingface/transformers/issues/100
388,713,951
MDU6SXNzdWUzODg3MTM5NTE=
100
SQuAD dataset has multiple answers to a question.
{ "login": "nischalhp", "id": 1147533, "node_id": "MDQ6VXNlcjExNDc1MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/1147533?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nischalhp", "html_url": "https://github.com/nischalhp", "followers_url": "https://api.github.com/users/nischalhp/followers", "following_url": "https://api.github.com/users/nischalhp/following{/other_user}", "gists_url": "https://api.github.com/users/nischalhp/gists{/gist_id}", "starred_url": "https://api.github.com/users/nischalhp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nischalhp/subscriptions", "organizations_url": "https://api.github.com/users/nischalhp/orgs", "repos_url": "https://api.github.com/users/nischalhp/repos", "events_url": "https://api.github.com/users/nischalhp/events{/privacy}", "received_events_url": "https://api.github.com/users/nischalhp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\nIn `train-v2.0.json`, there is only one answer for the question.\r\nIn `dev-v2.0.json` and hidden `test-v2.0.json`, there are several answers for a given question.\r\nI think the code that you mentioned is designed for not mistakenly using `dev-v2.0.json` for training. If you are going to use your own data or other types of data that has multiple answers, you can simply comment out this part.\r\n\r\nBest", "Hello @ymcui ,\r\n\r\nI did exactly that, thank you for confirming. Just wanted to be sure that there are no other implications. You are right, I have converted our dataset into SQuAD form and using that with the model. \r\n\r\nRegards,\r\nNischal" ]
1,544
1,544
1,544
NONE
null
https://github.com/huggingface/pytorch-pretrained-BERT/blob/3ba5470eb85464df62f324bea88e20da234c423f/examples/run_squad.py#L143 The confusing part here is that in line 146, only the first answer is considered, so I am wondering why there is a check for multiple answers before. Also, the SQuAD dataset has multiple answers for the same question. Is this by design or am I fundamentally missing something?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/100/timeline
completed
null
null
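The behaviour discussed above, training code keeping only the first answer while dev sets carry several, mirrors what a minimal SQuAD reader does. A sketch assuming a local SQuAD-format file (the path is a placeholder):

```python
# Sketch: read a SQuAD-format file and, as in the training code discussed
# above, keep only the first answer per question. File path is a placeholder.
import json

with open("train-v1.1.json") as f:
    squad = json.load(f)

for article in squad["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            answers = qa["answers"]
            if len(answers) != 1:
                # dev/test files carry several reference answers; the training
                # script expects exactly one, hence its check before line 146
                continue
            first = answers[0]
            # first["text"] and first["answer_start"] feed feature extraction
```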
https://api.github.com/repos/huggingface/transformers/issues/99
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/99/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/99/comments
https://api.github.com/repos/huggingface/transformers/issues/99/events
https://github.com/huggingface/transformers/issues/99
388,660,542
MDU6SXNzdWUzODg2NjA1NDI=
99
run_squad.py stuck on batch size greater than 1
{ "login": "wcgan", "id": 43312978, "node_id": "MDQ6VXNlcjQzMzEyOTc4", "avatar_url": "https://avatars.githubusercontent.com/u/43312978?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wcgan", "html_url": "https://github.com/wcgan", "followers_url": "https://api.github.com/users/wcgan/followers", "following_url": "https://api.github.com/users/wcgan/following{/other_user}", "gists_url": "https://api.github.com/users/wcgan/gists{/gist_id}", "starred_url": "https://api.github.com/users/wcgan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wcgan/subscriptions", "organizations_url": "https://api.github.com/users/wcgan/orgs", "repos_url": "https://api.github.com/users/wcgan/repos", "events_url": "https://api.github.com/users/wcgan/events{/privacy}", "received_events_url": "https://api.github.com/users/wcgan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Please copy paste the command you are using to run this example.", "Here you go\r\n\r\n```\r\npython ./run_squad.py \r\n --bert_model bert-base-uncased \\\r\n --do_train \\\r\n --do_predict \\\r\n --train_file $SQUAD_DIR/train-v1.1.json \\\r\n --predict_file $SQUAD_DIR/dev-v1.1.json \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --train_batch_size 32 \\\r\n --output_dir /tmp/debug_squad/ \\\r\n --gradient_accumulation_steps 2 \\\r\n```", "I don't see why this wouldn't work.\r\nMaybe update the repo & module to the latest version?\r\nYou should also add `--do_lower_case` to the arguments if you are using an uncased model.\r\nMaybe post a full log of your output?", "Updated to the latest version but it still does not work. When I terminate the script after it is stuck for some time, I get the message '../python3.7/threading.py\", line 1072, in _wait_for_tstate_lock elif lock.acquire(block, timeout)'. Perhaps it is running into some deadlock condition?\r\n\r\nI'm not sure how to obtain a full log, would you be able to explain how can I do so? Thanks!", "Hi, you can try with the new release 0.4.0.", "Is the problem resolved? I am having the same issue, using 2 gtx1080ti. \r\nit stuck when running on multiple gpus. I have to comment out `torch.nn.DataParallel(model)`, to make it work. ", "If you are using a multi-GPU setting, pytorch splits the batch dynamically between the 2 GPUs.\r\nExample - batch_size =5\r\nGPU 0 may get 3,max_sequence_len\r\nGPU 1 may get 2,max_sequence_len\r\n\r\nThis could be a cuda splitting issue, I recommend you try a single GPU setting to debug this.\r\n\r\nThanks,\r\nAnkit " ]
1,544
1,550
1,544
NONE
null
Thanks a lot for the code! I need help figuring out why the script is not working as long as the batch_size is set above 1. Specifically, it seems to be stuck at line 908: loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions). I am using 4 K80s. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/99/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/99/timeline
completed
null
null
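The debugging advice in the thread, fall back to one GPU and only wrap in `DataParallel` when several devices are present, looks roughly like this sketch (the linear layer stands in for the real model):

```python
# Sketch: wrap the model in DataParallel only when more than one GPU is
# visible, so it is easy to fall back to a single device when debugging hangs.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Linear(10, 2).to(device)   # stand-in for the BERT model

if torch.cuda.device_count() > 1:
    model = torch.nn.DataParallel(model)    # comment this out to debug hangs

x = torch.randn(5, 10, device=device)       # pytorch splits this batch per GPU
out = model(x)
```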
https://api.github.com/repos/huggingface/transformers/issues/98
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/98/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/98/comments
https://api.github.com/repos/huggingface/transformers/issues/98/events
https://github.com/huggingface/transformers/issues/98
388,660,132
MDU6SXNzdWUzODg2NjAxMzI=
98
Problem converting a TF model, and pretraining
{ "login": "zhezhaoa", "id": 10495098, "node_id": "MDQ6VXNlcjEwNDk1MDk4", "avatar_url": "https://avatars.githubusercontent.com/u/10495098?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhezhaoa", "html_url": "https://github.com/zhezhaoa", "followers_url": "https://api.github.com/users/zhezhaoa/followers", "following_url": "https://api.github.com/users/zhezhaoa/following{/other_user}", "gists_url": "https://api.github.com/users/zhezhaoa/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhezhaoa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhezhaoa/subscriptions", "organizations_url": "https://api.github.com/users/zhezhaoa/orgs", "repos_url": "https://api.github.com/users/zhezhaoa/repos", "events_url": "https://api.github.com/users/zhezhaoa/events{/privacy}", "received_events_url": "https://api.github.com/users/zhezhaoa/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @zhezhaoa, I see, I will fix this in the next release.\r\n\r\nFor now you should be able to fix that by installing the repo from source (git clone the repo and `pip install -e .` and changing [line 53 of convert_tf_checkpoint_to_pytorch.py](https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py#L53) from\r\n```if name[-1] in [\"adam_v\", \"adam_m\"]:```\r\nto\r\n```if any(n in [\"adam_v\", \"adam_m\"] for n in name):```", "Thank you very much! It could be great if you can provide pertaining code like the official TF implementation.", "Ok this loading issue is now fixed in master and the new 0.4.0 release." ]
1,544
1,544
1,544
CONTRIBUTOR
null
First of all, thank you for this great job. I use the official tensorflow implementation to pretrain on my corpus and then save the model. I want to convert this model to pytorch format and use it, but I got the error: Traceback (most recent call last): File "convert_tf_checkpoint_to_pytorch.py", line 105, in <module> convert() File "convert_tf_checkpoint_to_pytorch.py", line 86, in convert pointer = getattr(pointer, l[0]) AttributeError: 'Parameter' object has no attribute 'adam_m' Could you give me some advice? Thank you very much. It would be great if you could release the pretraining code. I think it is useful even if we cannot use a TPU, because we can fine-tune on top of Google's pretrained model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/98/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/98/timeline
completed
null
null
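The quoted fix amounts to filtering the Adam optimizer's slot variables by name anywhere in the TF variable path, not only at its end. A simplified sketch of that filter (the variable names are illustrative; the real conversion script walks an actual checkpoint):

```python
# Sketch of the conversion fix: skip the Adam optimizer's slot variables
# ("adam_v", "adam_m") anywhere in a TF variable's path, not just at the end.
tf_variables = [
    "bert/encoder/layer_0/attention/self/query/kernel",
    "bert/encoder/layer_0/attention/self/query/kernel/adam_m",
    "bert/encoder/layer_0/attention/self/query/kernel/adam_v",
]

for full_name in tf_variables:
    name = full_name.split("/")
    if any(n in ("adam_v", "adam_m") for n in name):
        continue  # optimizer state, not model weights
    print("load:", full_name)  # only the kernel itself is loaded
```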
https://api.github.com/repos/huggingface/transformers/issues/97
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/97/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/97/comments
https://api.github.com/repos/huggingface/transformers/issues/97/events
https://github.com/huggingface/transformers/issues/97
388,470,290
MDU6SXNzdWUzODg0NzAyOTA=
97
RuntimeError: cuda runtime error (59) : device-side assert triggered
{ "login": "liu946", "id": 7871150, "node_id": "MDQ6VXNlcjc4NzExNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/7871150?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liu946", "html_url": "https://github.com/liu946", "followers_url": "https://api.github.com/users/liu946/followers", "following_url": "https://api.github.com/users/liu946/following{/other_user}", "gists_url": "https://api.github.com/users/liu946/gists{/gist_id}", "starred_url": "https://api.github.com/users/liu946/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liu946/subscriptions", "organizations_url": "https://api.github.com/users/liu946/orgs", "repos_url": "https://api.github.com/users/liu946/repos", "events_url": "https://api.github.com/users/liu946/events{/privacy}", "received_events_url": "https://api.github.com/users/liu946/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "And here is the trace when running in cpu.\r\n```\r\n File \"/data/home/liuyang/dlab/dlab/embedder/stack_embedder.py\", line 23, in embed\r\n present, _ = embedder(batch_sentence)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/liuyang/dlab/dlab/embedder/base_embedder.py\", line 28, in forward\r\n return self.embed(*input)\r\n File \"/data/home/liuyang/dlab/dlab/embedder/bert_embedder.py\", line 141, in embed\r\n output_all_encoded_layers=False)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py\", line 607, in forward\r\n embedding_output = self.embeddings(input_ids, token_type_ids)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py\", line 192, in forward\r\n position_embeddings = self.position_embeddings(position_ids)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/sparse.py\", line 110, in forward\r\n self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n File \"/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/functional.py\", line 1110, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorMath.cpp:352\r\n```\r\nIt seems like embedding indexing out of range, but all of the token ids were generated from `tokenizer.convert_tokens_to_ids`. I think it is cause by indexing the position_embeddings rather than the word_embedding.", "I am also facing the same problem. 
What is the solution?\r\n\r\n\r\n```\r\n result = self.forward(*input, **kwargs)\r\n 490 for hook in self._forward_hooks.values():\r\n 491 hook_result = hook(self, input, result)\r\n\r\n~/projects/yct-experimentation-master/pytorch_yct_sidd/pytorch_pretrained_bert_yct/modeling.py in forward(self, input_ids)\r\n 956 segment_ids = torch.zeros_like(input_ids)\r\n 957 # Zero-pad up to the sequence length.\r\n--> 958 _, pooled_output = self.bert(input_ids, segment_ids, input_mask, output_all_encoded_layers=False)\r\n 959 pooled_output = self.dropout(pooled_output)\r\n 960 return self.classifier(pooled_output)\r\n\r\n~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 487 result = self._slow_forward(*input, **kwargs)\r\n 488 else:\r\n--> 489 result = self.forward(*input, **kwargs)\r\n 490 for hook in self._forward_hooks.values():\r\n 491 hook_result = hook(self, input, result)\r\n\r\n~/projects/yct-experimentation-master/pytorch_yct_sidd/pytorch_pretrained_bert_yct/modeling.py in forward(self, input_ids, token_type_ids, attention_mask, output_all_encoded_layers)\r\n 624 extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0\r\n 625 \r\n--> 626 embedding_output = self.embeddings(input_ids, token_type_ids)\r\n 627 encoded_layers = self.encoder(embedding_output,\r\n 628 extended_attention_mask,\r\n\r\n~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 487 result = self._slow_forward(*input, **kwargs)\r\n 488 else:\r\n--> 489 result = self.forward(*input, **kwargs)\r\n 490 for hook in self._forward_hooks.values():\r\n 491 hook_result = hook(self, input, result)\r\n\r\n~/projects/yct-experimentation-master/pytorch_yct_sidd/pytorch_pretrained_bert_yct/modeling.py in forward(self, input_ids, token_type_ids)\r\n 192 \r\n 193 words_embeddings = self.word_embeddings(input_ids)\r\n--> 194 position_embeddings = self.position_embeddings(position_ids)\r\n 195 token_type_embeddings = self.token_type_embeddings(token_type_ids)\r\n 196 \r\n\r\n~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)\r\n 487 result = self._slow_forward(*input, **kwargs)\r\n 488 else:\r\n--> 489 result = self.forward(*input, **kwargs)\r\n 490 for hook in self._forward_hooks.values():\r\n 491 hook_result = hook(self, input, result)\r\n\r\n~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)\r\n 116 return F.embedding(\r\n 117 input, self.weight, self.padding_idx, self.max_norm,\r\n--> 118 self.norm_type, self.scale_grad_by_freq, self.sparse)\r\n 119 \r\n 120 def extra_repr(self):\r\n\r\n~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)\r\n 1452 # remove once script supports set_grad_enabled\r\n 1453 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)\r\n-> 1454 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\n 1455 \r\n 1456 \r\n\r\nRuntimeError: CUDA error: device-side assert triggered\r\n```\r\n", "@siddBanPsu I got the problem because I didn't limit the max length of the sentence so that the position embedder get the position token id lager than its length.", "> @siddBanPsu I got the problem because I didn't limit the max length of the sentence so that the position embedder get the position token id lager than its length.\r\n\r\n3q,I meet the same problem too, and I solve the 
problem after I set the max length of input sequence, but here how the position embedder get the position token id?", "@yyHaker hello,i have set the max length,but i still get the same error,could you tell me how to set the max length?\r\nmy code is as following:\r\nparser.add_argument(\"--max_seq_length\", default=64, type=int,\r\n help=\"The maximum total input sequence length after tokenization.\")\r\n\r\nerror:\r\nTensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [117,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed", "@yihenglu these are old issues related to `pytorch_pretrained_bert`, you should rather open a new issue with a clear description of the model you are using, the version of the library and the error message you have.", "If you are using a tokenizer try:\r\n`tokenizer(input, truncation=True)`\r\nThis will truncate the input to the max_length", "> \r\n\r\nThis actually solved my issue...", "Hi \r\n\r\nI am using LayoutLM V2 model. I am trying to finetune the the model by using my custom dataset. I got bellow error message.\r\nPlease tell me how to resolve the error.\r\n\r\n../aten/src/ATen/native/cuda/Indexing.cu:703: indexSelectLargeIndex: block: [79,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nTraceback (most recent call last):\r\n File \"layoutlmV2/train.py\", line 124, in <module>\r\n trainer.train()\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1371, in train\r\n ignore_keys_for_eval=ignore_keys_for_eval,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 1609, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 2300, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/trainer.py\", line 2332, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py\", line 1238, in forward\r\n return_dict=return_dict,\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py\", line 906, in forward\r\n inputs_embeds=inputs_embeds,\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py\", line 752, in _calc_text_embeddings\r\n spatial_position_embeddings = self.embeddings._calc_spatial_position_embeddings(bbox)\r\n File \"/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py\", line 92, in _calc_spatial_position_embeddings\r\n h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\", line 1110, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py\", line 160, in forward\r\n self.norm_type, self.scale_grad_by_freq, 
self.sparse)\r\n File \"/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py\", line 2183, in embedding\r\n return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)\r\nRuntimeError: CUDA error: device-side assert triggered\r\n 0% 0/240 [00:00<?, ?it/s]\r\n\r\nyou can download the code and dataset along with notebook\r\nhttps://drive.google.com/file/d/1VdTvn580pGgVBlN03UX5alaFqSbc8Q5_/view?usp=sharing\r\n\r\nGithub issue:\r\nhttps://github.com/microsoft/unilm/issues/755\r\n\r\nPlease help" ]
1,544
1,654
1,544
NONE
null
I got this error when using the BERT model to get the representation (`present`) as a feature for training. Could anyone help? Thanks a lot. Here is the cuda and python trace. ``` /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. [... the same indexSelectLargeIndex assertion repeats for threads [98,0,0] through [3,0,0] ...]
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [163,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed. THCudaCheck FAIL file=/pytorch/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu line=266 error=59 : device-side assert triggered Traceback (most recent call last): File "examples/bert_pku_seg.py", line 89, in <module> train() File "examples/bert_pku_seg.py", line 48, in train trainer.train(SAVE_DIR) File "/data/home/liuyang/dlab/dlab/process/trainer.py", line 61, in train after_batch_iter_hook=train_step_hook) File "/data/home/liuyang/dlab/dlab/process/common.py", line 49, in data_runner forward_output = model(batch_sentence) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/dlab/dlab/model/sequence_tagger.py", line 66, in forward batch_words_present, seq_length = self.embedder.embed(sentences) File "/data/home/liuyang/dlab/dlab/embedder/stack_embedder.py", line 23, in embed present, _ = embedder(batch_sentence) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/dlab/dlab/embedder/base_embedder.py", line 28, in forward return self.embed(*input) File "/data/home/liuyang/dlab/dlab/embedder/bert_embedder.py", line 141, in embed output_all_encoded_layers=False) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 607, in forward embedding_output = self.embeddings(input_ids, token_type_ids) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/data/home/liuyang/pyenv/bert-pyt-p3/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 195, in forward embeddings = words_embeddings + position_embeddings + token_type_embeddings RuntimeError: cuda runtime error (59) : device-side assert triggered at /pytorch/aten/src/THC/generated/../generic/THCTensorMathPointwise.cu:266 ```
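For context: this assertion comes from the CUDA embedding-lookup kernel and almost always means an index is out of range, e.g. an `input_ids` value outside the vocabulary or a sequence longer than BERT's 512 learned positions. A minimal, hypothetical pre-flight check is sketched below; the `vocab_size` and `max_positions` defaults are illustrative assumptions (30522 is bert-base-uncased), so read the real values from the checkpoint's `bert_config.json`.

```python
import torch

def check_bert_inputs(input_ids, vocab_size=30522, max_positions=512):
    """Fail with a readable Python error instead of an opaque CUDA assert."""
    lo, hi = input_ids.min().item(), input_ids.max().item()
    if lo < 0 or hi >= vocab_size:
        raise ValueError(f"token ids must lie in [0, {vocab_size}), got [{lo}, {hi}]")
    if input_ids.size(-1) > max_positions:
        raise ValueError(f"sequence length {input_ids.size(-1)} exceeds the "
                         f"{max_positions} learned position embeddings")

# Example: a sequence longer than 512 reproduces the same failure mode on CPU
batch = torch.randint(0, 30522, (2, 600))
try:
    check_bert_inputs(batch)
except ValueError as err:
    print(err)
```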
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/97/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/97/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/96
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/96/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/96/comments
https://api.github.com/repos/huggingface/transformers/issues/96/events
https://github.com/huggingface/transformers/pull/96
388,342,497
MDExOlB1bGxSZXF1ZXN0MjM2NjI5NTA3
96
BertForMultipleChoice and Swag dataset example.
{ "login": "rodgzilla", "id": 12107203, "node_id": "MDQ6VXNlcjEyMTA3MjAz", "avatar_url": "https://avatars.githubusercontent.com/u/12107203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rodgzilla", "html_url": "https://github.com/rodgzilla", "followers_url": "https://api.github.com/users/rodgzilla/followers", "following_url": "https://api.github.com/users/rodgzilla/following{/other_user}", "gists_url": "https://api.github.com/users/rodgzilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rodgzilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rodgzilla/subscriptions", "organizations_url": "https://api.github.com/users/rodgzilla/orgs", "repos_url": "https://api.github.com/users/rodgzilla/repos", "events_url": "https://api.github.com/users/rodgzilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rodgzilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi Gregory, I will take some time to review and test that this week.\r\n\r\nJust a word on additional dependencies, I would like to keep the package as light as possible (currently it's aligned with the dependencies of AllenNLP) so if you can manage to avoid adding any additional dependency it would be better.", "Did you get good results fine-tuning the model on SWAG?\r\nWe should also indicate (reproducible) numbers in the `README.md` if we want to add this example to the repo (like the other examples).", "Completely forgot to run the training, I've running it right now and I should have the results by the end of the day.\r\n\r\nMy parameters are the following ones:\r\n\r\n```bash\r\npython examples/run_swag.py \\\r\n--do_train \\\r\n--do_eval \\\r\n--do_lower_case \\\r\n--data_dir $SWAG_DIR/data/ \\\r\n--bert_model bert-base-uncased \\\r\n--max_seq_length 100 \\\r\n--train_batch_size 4 \r\n--learning_rate 2e-5 \\\r\n--num_train_epochs 3.0 \\\r\n--output_dir /tmp/swag\r\n```\r\n\r\nThe batch size of 4 isn't ideal but my GPU memory is pretty already full with these parameters.\r\n\r\n", "I get a 77.76% accuracy on SWAG, I would be interested in the results with a batch size of 16 like in the Bert paper.\r\n\r\nI've added my results to the readme and precised that the difference in performance was probably caused by the difference in `training_batch_size`.\r\n\r\n@thomwolf Any chance you could run it on multiple GPUs?\r\n\r\nI will commit a patch later to remove the pandas dependency.", "Yes, I'll give a try on a bigger machine.\r\n\r\nYou can use gradient accumulation to get bigger batch size on a single GPU, you know right? (I wrote a lengthy blurb on that [here](https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255))", "I didn't know actually, this is super cool! \r\n\r\nI'll give it try. Thanks for the link.", "I did another finetuning using a `training_batch_size` of 16 and a `gradient_accumulation_steps` of 4 and I get a much better accuracy (80.62% instead of 77.76%). I have updated the readme accordingly and if everything is okay the branch should be ready for a merge.\r\n\r\n@thomwolf Thanks for introducing to gradient accumulation, it's quite a neat trick and I think I will use it a lot more in the future. ", "Oh that's great, now we are in the ballpark of the 81.6 reported in the BERT paper!\r\nLooks good to me, I'm merging!", "still not able to get 81.6.. after 3 epochs it get to 79.97.. any help?", "Probably relevant #461 ", "Any chance someone can post the command for only testing, after training by using a model from a checkpoint? Because even when I test it on swag original data, it complains about having label column even though there is none. I set —do_test and but it’s still not working.", "Can anybody help me to understand this code?\r\ni looking for an example of using \"Bertformultiplechoice\" for training swag datasets.\r\nThank you in advance", " When I run run_multiple_choice.py I get this error, I am not sure why? I need help here.\r\n\r\nError:\r\n2020-11-30 21:53:38.693828: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most recent call last):\r\n File \"./transformers/examples/multiple-choice/run_multiple_choice.py\", line 26, in <module>\r\n import transformers\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/__init__.py\", line 20, in <module>\r\n from . 
import dependency_versions_check\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/dependency_versions_check.py\", line 21, in <module>\r\n from .file_utils import is_tokenizers_available\r\n File \"/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py\", line 88, in <module>\r\n import datasets # noqa: F401\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/__init__.py\", line 26, in <module>\r\n from .arrow_dataset import Dataset, concatenate_datasets\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py\", line 41, in <module>\r\n from .arrow_writer import ArrowWriter, TypedSequence\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/arrow_writer.py\", line 26, in <module>\r\n from .features import Features, _ArrayXDExtensionType\r\n File \"/usr/local/lib/python3.6/dist-packages/datasets/features.py\", line 210, in <module>\r\n class _ArrayXDExtensionType(pa.PyExtensionType):\r\nAttributeError: module 'pyarrow' has no attribute 'PyExtensionType'", "I uninstalled pyarrow and installed it again. it fixed the error." ]
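A sketch of the gradient accumulation trick discussed in the thread above, assuming a generic PyTorch training loop; the model, optimizer, and data here are toy stand-ins, not the objects from `run_swag.py`:

```python
import torch
from torch import nn

# Toy stand-ins so the sketch runs end to end; swap in the real model,
# BertAdam optimizer and SWAG dataloader for actual fine-tuning.
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
batches = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(8)]

accumulation_steps = 4  # per-step batch of 4 -> effective batch of 16

model.train()
optimizer.zero_grad()
for step, (inputs, labels) in enumerate(batches):
    loss = criterion(model(inputs), labels)
    # Scale the loss so the accumulated gradient averages over the effective batch
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # one parameter update per effective batch
        optimizer.zero_grad()  # reset the accumulated gradients
```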
1,544
1,606
1,544
CONTRIBUTOR
null
Hi! This is the code that enables BERT models to be used for multiple choice problems (such as [Swag](https://github.com/rowanz/swagaf) and [ROCStories](http://cs.rochester.edu/nlp/rocstories/)). For my implementation, I use the algorithm described in #90 and issue [#38](https://github.com/google-research/bert/issues/38) from the tensorflow implementation repo. The comments and the `README.md` file have been updated, but I would very much appreciate it if someone could check my changes. I am also unable to test my code on multiple GPUs, so I can't check whether it works or not. I will let a training run overnight to see what kind of results we get, although I won't be able to do a proper hyper-parameter search due to my limited computing power. I also have a question: I used `pandas` to load the Swag dataset; do I need to specify it somewhere in a file to add it as a dependency for `pip`? I have never published a module on `pip`.
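For readers following along, the input-shaping idea referenced above can be sketched in a few self-contained lines; the encoder and classifier below are toy stand-ins for `BertModel`'s pooled output and the scoring head, not the code from this pull request. Each choice is encoded independently by flattening the choice dimension into the batch dimension, and the per-choice scores are reshaped back for a softmax over choices.

```python
import torch
from torch import nn

batch_size, num_choices, seq_len, hidden = 2, 4, 16, 32
input_ids = torch.randint(0, 100, (batch_size, num_choices, seq_len))

# Stand-ins for the BERT encoder and the linear scoring head
encoder = nn.Embedding(100, hidden)
classifier = nn.Linear(hidden, 1)

# Flatten the choice dimension into the batch so each choice is encoded alone
flat_ids = input_ids.view(-1, seq_len)             # (batch * choices, seq_len)
pooled = encoder(flat_ids).mean(dim=1)             # fake "pooled output"
logits = classifier(pooled).view(-1, num_choices)  # back to (batch, choices)

print(torch.softmax(logits, dim=-1).shape)  # torch.Size([2, 4])
```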
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/96/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/96/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/96", "html_url": "https://github.com/huggingface/transformers/pull/96", "diff_url": "https://github.com/huggingface/transformers/pull/96.diff", "patch_url": "https://github.com/huggingface/transformers/pull/96.patch", "merged_at": 1544699112000 }
https://api.github.com/repos/huggingface/transformers/issues/95
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/95/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/95/comments
https://api.github.com/repos/huggingface/transformers/issues/95/events
https://github.com/huggingface/transformers/issues/95
388,242,901
MDU6SXNzdWUzODgyNDI5MDE=
95
Not updating the BERT embeddings during the fine-tuning process
{ "login": "avisil", "id": 43005718, "node_id": "MDQ6VXNlcjQzMDA1NzE4", "avatar_url": "https://avatars.githubusercontent.com/u/43005718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avisil", "html_url": "https://github.com/avisil", "followers_url": "https://api.github.com/users/avisil/followers", "following_url": "https://api.github.com/users/avisil/following{/other_user}", "gists_url": "https://api.github.com/users/avisil/gists{/gist_id}", "starred_url": "https://api.github.com/users/avisil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avisil/subscriptions", "organizations_url": "https://api.github.com/users/avisil/orgs", "repos_url": "https://api.github.com/users/avisil/repos", "events_url": "https://api.github.com/users/avisil/events{/privacy}", "received_events_url": "https://api.github.com/users/avisil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can do it by setting the `requires_grad` attribute of the embedding layer in `BertModel`. That will look something like this: \r\n\r\n```\r\n model = BertForQuestionAnswering.from_pretrained(args.bert_model,\r\n cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(args.local_rank))\r\n model.bert.embeddings.requires_grad = False\r\n```\r\n\r\nI haven't tested this code but it should do what you are asking.\r\n\r\nMore explanation are available on the [PyTorch forums](https://discuss.pytorch.org/t/how-the-pytorch-freeze-network-in-some-layers-only-the-rest-of-the-training/7088)", "Thanks let me try this..I was thinking of going through the BERTAdam optimizer.\r\n", "With `requires_grad=false`: {\"exact_match\": 80.65279091769158, \"f1\": 88.04683744174879}\r\nWithout `requires_grad=false`: {\"exact_match\": 81.33396404919584, \"f1\": 88.43774959214048}", "Thanks for your feedback. \r\n\r\nHave you checked that the values of the embedding matrix are indeed unchanged by the finetuning?", "Nope I haven't.. I plan to do that today. @thomwolf do you think we're on the right track? Just need your 2 cents :) ", "Yes you can do\r\n```python\r\nfor p in model.bert.embeddings.parameters():\r\n p.requires_grad = False\r\n```\r\nYou can also just not send these parameters to the optimizer (when you create the optimizer) as detailed on the PyTorch forums. Both methods will work. Combining the two will gives the lowest overhead (no un-necessary computation of gradient and no un-necessary check of update during the optimizer step). The PyTorch forum is the best reference for this kind of general question.", "Correct me if I'm wrong, but setting `model.bert.embeddings.requires_grad = False` does not seem to propagate.\r\n\r\n```python\r\nbert = BertModel.from_pretrained('bert-base-uncased')\r\nbert.embeddings.requires_grad = False\r\nfor name, param in bert.named_parameters(): \r\n if param.requires_grad:\r\n print(name)\r\n```\r\n\r\nOutput:\r\n\r\n> embeddings.word_embeddings.weight\r\n> embeddings.position_embeddings.weight\r\n> embeddings.token_type_embeddings.weight\r\n> embeddings.LayerNorm.weight\r\n> embeddings.LayerNorm.bias\r\n> encoder.layer.0.attention.self.query.weight\r\n> encoder.layer.0.attention.self.query.bias\r\n> encoder.layer.0.attention.self.key.weight\r\n> encoder.layer.0.attention.self.key.bias\r\n> encoder.layer.0.attention.self.value.weight\r\n> encoder.layer.0.attention.self.value.bias\r\n> encoder.layer.0.attention.output.dense.weight\r\n> encoder.layer.0.attention.output.dense.bias\r\n> encoder.layer.0.attention.output.LayerNorm.weight\r\n> encoder.layer.0.attention.output.LayerNorm.bias\r\n> encoder.layer.0.intermediate.dense.weight\r\n> encoder.layer.0.intermediate.dense.bias\r\n> encoder.layer.0.output.dense.weight\r\n> encoder.layer.0.output.dense.bias\r\n> encoder.layer.0.output.LayerNorm.weight\r\n> encoder.layer.0.output.LayerNorm.bias\r\n> encoder.layer.1.attention.self.query.weight\r\n> encoder.layer.1.attention.self.query.bias\r\n> encoder.layer.1.attention.self.key.weight\r\n> encoder.layer.1.attention.self.key.bias\r\n> encoder.layer.1.attention.self.value.weight\r\n> encoder.layer.1.attention.self.value.bias\r\n> encoder.layer.1.attention.output.dense.weight\r\n> encoder.layer.1.attention.output.dense.bias\r\n> encoder.layer.1.attention.output.LayerNorm.weight\r\n> encoder.layer.1.attention.output.LayerNorm.bias\r\n> encoder.layer.1.intermediate.dense.weight\r\n> encoder.layer.1.intermediate.dense.bias\r\n> 
encoder.layer.1.output.dense.weight\r\n> encoder.layer.1.output.dense.bias\r\n> encoder.layer.1.output.LayerNorm.weight\r\n> encoder.layer.1.output.LayerNorm.bias\r\n> encoder.layer.2.attention.self.query.weight\r\n> encoder.layer.2.attention.self.query.bias\r\n> encoder.layer.2.attention.self.key.weight\r\n> encoder.layer.2.attention.self.key.bias\r\n> encoder.layer.2.attention.self.value.weight\r\n> encoder.layer.2.attention.self.value.bias\r\n> encoder.layer.2.attention.output.dense.weight\r\n> encoder.layer.2.attention.output.dense.bias\r\n> encoder.layer.2.attention.output.LayerNorm.weight\r\n> encoder.layer.2.attention.output.LayerNorm.bias\r\n> encoder.layer.2.intermediate.dense.weight\r\n> encoder.layer.2.intermediate.dense.bias\r\n> encoder.layer.2.output.dense.weight\r\n> encoder.layer.2.output.dense.bias\r\n> encoder.layer.2.output.LayerNorm.weight\r\n> encoder.layer.2.output.LayerNorm.bias\r\n> encoder.layer.3.attention.self.query.weight\r\n> encoder.layer.3.attention.self.query.bias\r\n> encoder.layer.3.attention.self.key.weight\r\n> encoder.layer.3.attention.self.key.bias\r\n> encoder.layer.3.attention.self.value.weight\r\n> encoder.layer.3.attention.self.value.bias\r\n> encoder.layer.3.attention.output.dense.weight\r\n> encoder.layer.3.attention.output.dense.bias\r\n> encoder.layer.3.attention.output.LayerNorm.weight\r\n> encoder.layer.3.attention.output.LayerNorm.bias\r\n> encoder.layer.3.intermediate.dense.weight\r\n> encoder.layer.3.intermediate.dense.bias\r\n> encoder.layer.3.output.dense.weight\r\n> encoder.layer.3.output.dense.bias\r\n> encoder.layer.3.output.LayerNorm.weight\r\n> encoder.layer.3.output.LayerNorm.bias\r\n> encoder.layer.4.attention.self.query.weight\r\n> encoder.layer.4.attention.self.query.bias\r\n> encoder.layer.4.attention.self.key.weight\r\n> encoder.layer.4.attention.self.key.bias\r\n> encoder.layer.4.attention.self.value.weight\r\n> encoder.layer.4.attention.self.value.bias\r\n> encoder.layer.4.attention.output.dense.weight\r\n> encoder.layer.4.attention.output.dense.bias\r\n> encoder.layer.4.attention.output.LayerNorm.weight\r\n> encoder.layer.4.attention.output.LayerNorm.bias\r\n> encoder.layer.4.intermediate.dense.weight\r\n> encoder.layer.4.intermediate.dense.bias\r\n> encoder.layer.4.output.dense.weight\r\n> encoder.layer.4.output.dense.bias\r\n> encoder.layer.4.output.LayerNorm.weight\r\n> encoder.layer.4.output.LayerNorm.bias\r\n> encoder.layer.5.attention.self.query.weight\r\n> encoder.layer.5.attention.self.query.bias\r\n> encoder.layer.5.attention.self.key.weight\r\n> encoder.layer.5.attention.self.key.bias\r\n> encoder.layer.5.attention.self.value.weight\r\n> encoder.layer.5.attention.self.value.bias\r\n> encoder.layer.5.attention.output.dense.weight\r\n> encoder.layer.5.attention.output.dense.bias\r\n> encoder.layer.5.attention.output.LayerNorm.weight\r\n> encoder.layer.5.attention.output.LayerNorm.bias\r\n> encoder.layer.5.intermediate.dense.weight\r\n> encoder.layer.5.intermediate.dense.bias\r\n> encoder.layer.5.output.dense.weight\r\n> encoder.layer.5.output.dense.bias\r\n> encoder.layer.5.output.LayerNorm.weight\r\n> encoder.layer.5.output.LayerNorm.bias\r\n> encoder.layer.6.attention.self.query.weight\r\n> encoder.layer.6.attention.self.query.bias\r\n> encoder.layer.6.attention.self.key.weight\r\n> encoder.layer.6.attention.self.key.bias\r\n> encoder.layer.6.attention.self.value.weight\r\n> encoder.layer.6.attention.self.value.bias\r\n> encoder.layer.6.attention.output.dense.weight\r\n> 
encoder.layer.6.attention.output.dense.bias\r\n> encoder.layer.6.attention.output.LayerNorm.weight\r\n> encoder.layer.6.attention.output.LayerNorm.bias\r\n> encoder.layer.6.intermediate.dense.weight\r\n> encoder.layer.6.intermediate.dense.bias\r\n> encoder.layer.6.output.dense.weight\r\n> encoder.layer.6.output.dense.bias\r\n> encoder.layer.6.output.LayerNorm.weight\r\n> encoder.layer.6.output.LayerNorm.bias\r\n> encoder.layer.7.attention.self.query.weight\r\n> encoder.layer.7.attention.self.query.bias\r\n> encoder.layer.7.attention.self.key.weight\r\n> encoder.layer.7.attention.self.key.bias\r\n> encoder.layer.7.attention.self.value.weight\r\n> encoder.layer.7.attention.self.value.bias\r\n> encoder.layer.7.attention.output.dense.weight\r\n> encoder.layer.7.attention.output.dense.bias\r\n> encoder.layer.7.attention.output.LayerNorm.weight\r\n> encoder.layer.7.attention.output.LayerNorm.bias\r\n> encoder.layer.7.intermediate.dense.weight\r\n> encoder.layer.7.intermediate.dense.bias\r\n> encoder.layer.7.output.dense.weight\r\n> encoder.layer.7.output.dense.bias\r\n> encoder.layer.7.output.LayerNorm.weight\r\n> encoder.layer.7.output.LayerNorm.bias\r\n> encoder.layer.8.attention.self.query.weight\r\n> encoder.layer.8.attention.self.query.bias\r\n> encoder.layer.8.attention.self.key.weight\r\n> encoder.layer.8.attention.self.key.bias\r\n> encoder.layer.8.attention.self.value.weight\r\n> encoder.layer.8.attention.self.value.bias\r\n> encoder.layer.8.attention.output.dense.weight\r\n> encoder.layer.8.attention.output.dense.bias\r\n> encoder.layer.8.attention.output.LayerNorm.weight\r\n> encoder.layer.8.attention.output.LayerNorm.bias\r\n> encoder.layer.8.intermediate.dense.weight\r\n> encoder.layer.8.intermediate.dense.bias\r\n> encoder.layer.8.output.dense.weight\r\n> encoder.layer.8.output.dense.bias\r\n> encoder.layer.8.output.LayerNorm.weight\r\n> encoder.layer.8.output.LayerNorm.bias\r\n> encoder.layer.9.attention.self.query.weight\r\n> encoder.layer.9.attention.self.query.bias\r\n> encoder.layer.9.attention.self.key.weight\r\n> encoder.layer.9.attention.self.key.bias\r\n> encoder.layer.9.attention.self.value.weight\r\n> encoder.layer.9.attention.self.value.bias\r\n> encoder.layer.9.attention.output.dense.weight\r\n> encoder.layer.9.attention.output.dense.bias\r\n> encoder.layer.9.attention.output.LayerNorm.weight\r\n> encoder.layer.9.attention.output.LayerNorm.bias\r\n> encoder.layer.9.intermediate.dense.weight\r\n> encoder.layer.9.intermediate.dense.bias\r\n> encoder.layer.9.output.dense.weight\r\n> encoder.layer.9.output.dense.bias\r\n> encoder.layer.9.output.LayerNorm.weight\r\n> encoder.layer.9.output.LayerNorm.bias\r\n> encoder.layer.10.attention.self.query.weight\r\n> encoder.layer.10.attention.self.query.bias\r\n> encoder.layer.10.attention.self.key.weight\r\n> encoder.layer.10.attention.self.key.bias\r\n> encoder.layer.10.attention.self.value.weight\r\n> encoder.layer.10.attention.self.value.bias\r\n> encoder.layer.10.attention.output.dense.weight\r\n> encoder.layer.10.attention.output.dense.bias\r\n> encoder.layer.10.attention.output.LayerNorm.weight\r\n> encoder.layer.10.attention.output.LayerNorm.bias\r\n> encoder.layer.10.intermediate.dense.weight\r\n> encoder.layer.10.intermediate.dense.bias\r\n> encoder.layer.10.output.dense.weight\r\n> encoder.layer.10.output.dense.bias\r\n> encoder.layer.10.output.LayerNorm.weight\r\n> encoder.layer.10.output.LayerNorm.bias\r\n> encoder.layer.11.attention.self.query.weight\r\n> encoder.layer.11.attention.self.query.bias\r\n> 
encoder.layer.11.attention.self.key.weight\r\n> encoder.layer.11.attention.self.key.bias\r\n> encoder.layer.11.attention.self.value.weight\r\n> encoder.layer.11.attention.self.value.bias\r\n> encoder.layer.11.attention.output.dense.weight\r\n> encoder.layer.11.attention.output.dense.bias\r\n> encoder.layer.11.attention.output.LayerNorm.weight\r\n> encoder.layer.11.attention.output.LayerNorm.bias\r\n> encoder.layer.11.intermediate.dense.weight\r\n> encoder.layer.11.intermediate.dense.bias\r\n> encoder.layer.11.output.dense.weight\r\n> encoder.layer.11.output.dense.bias\r\n> encoder.layer.11.output.LayerNorm.weight\r\n> encoder.layer.11.output.LayerNorm.bias\r\n> pooler.dense.weight\r\n> pooler.dense.bias\r\n\r\nInstead, using the following does give the expected output.\r\n\r\n```python\r\nbert = BertModel.from_pretrained('bert-base-uncased')\r\nfor name, param in bert.named_parameters(): \r\n if name.startswith('embeddings'):\r\n param.requires_grad = False\r\n```\r\n", "> Nope I haven't. I plan to do that today. @thomwolf do you think we're on the right track? Just need your 2 cents :)\r\n\r\nHow can I check whether the values of the embedding matrix changed or not?", "> Correct me if I'm wrong, but setting `model.bert.embeddings.requires_grad = False` does not seem to propagate.\r\n>\r\n> [the full parameter list and corrected snippet from the comment above were quoted verbatim here and are elided]\r\n\r\nHi,\r\nhow do I tell the optimizer that the embedding is frozen?", "> > Nope I haven't. I plan to do that today. @thomwolf do you think we're on the right track? Just need your 2 cents :)\r\n> \r\n> How can I check whether the values of the embedding matrix changed or not?\r\n\r\nHi! You can use the following code in order to check if any layer has been modified (it should work for any PyTorch model if I am not wrong, not just BERT):\r\n\r\n```python\r\nimport copy\r\nfrom transformers import BertModel\r\n\r\nbert = BertModel.from_pretrained('bert-base-uncased')\r\nlayer = bert.embeddings\r\nfrozen_parameters = {}\r\n\r\n# Snapshot the tensors so we can compare against them later\r\nfor name, p in layer.named_parameters():\r\n frozen_parameters[name] = copy.deepcopy(p.data)\r\n\r\n# Do stuff ...\r\n\r\n# Check whether the values of the tensors have been updated\r\nfor name, p in layer.named_parameters():\r\n updated = (frozen_parameters[name] != p.data).any().cpu().detach().numpy()\r\n\r\n print(f\"Layer '{name}' has been updated? {'yes' if updated else 'no'}\")\r\n```\r\n\r\nIt is very similar to the code I use in order to check if a layer has been updated (remember that it won't be updated if `grad_fn` is `None`), but I have not tested this exact code.\r\n\r\nI hope it helps!" ]
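A small sketch tying together the two approaches from this thread: freeze the embedding parameters (on the parameters themselves, since setting the flag on the module does not propagate, as shown above) and hand the optimizer only what remains trainable. The choice of `torch.optim.Adam` and the learning rate are placeholders; the repo's examples use `BertAdam` with warmup instead.

```python
import torch
from pytorch_pretrained_bert import BertModel

model = BertModel.from_pretrained('bert-base-uncased')

# Freeze the embedding block, parameter by parameter
for name, param in model.named_parameters():
    if name.startswith('embeddings'):
        param.requires_grad = False

# Build the optimizer over the remaining trainable parameters only,
# so frozen weights are neither updated nor checked at each step
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=5e-5)  # placeholder optimizer/lr

print(sum(p.numel() for p in trainable), "trainable parameters")
```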
1,544
1,642
1,544
NONE
null
Is there any way of not updating the BERT embeddings during the fine-tuning process? For example, while running on SQuAD, I want to see the effect of not updating the parameters associated with the BERT embeddings. I saw that `requires_grad` is set to True for CPU and fp16, which makes me think that gradients are being computed for all the parameters. I'm asking if there's a quick way to disable the updates to those embeddings but let the model update the other parameters.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/95/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/95/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/94
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/94/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/94/comments
https://api.github.com/repos/huggingface/transformers/issues/94/events
https://github.com/huggingface/transformers/pull/94
388,188,357
MDExOlB1bGxSZXF1ZXN0MjM2NTA5NjIw
94
Fixing the docstring of the `SquadExample` class.
{ "login": "rodgzilla", "id": 12107203, "node_id": "MDQ6VXNlcjEyMTA3MjAz", "avatar_url": "https://avatars.githubusercontent.com/u/12107203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rodgzilla", "html_url": "https://github.com/rodgzilla", "followers_url": "https://api.github.com/users/rodgzilla/followers", "following_url": "https://api.github.com/users/rodgzilla/following{/other_user}", "gists_url": "https://api.github.com/users/rodgzilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rodgzilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rodgzilla/subscriptions", "organizations_url": "https://api.github.com/users/rodgzilla/orgs", "repos_url": "https://api.github.com/users/rodgzilla/repos", "events_url": "https://api.github.com/users/rodgzilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rodgzilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,544
1,544
1,544
CONTRIBUTOR
null
Fixing the docstring of `SquadExample`, which had been copy-pasted from `InputExample`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/94/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/94/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/94", "html_url": "https://github.com/huggingface/transformers/pull/94", "diff_url": "https://github.com/huggingface/transformers/pull/94.diff", "patch_url": "https://github.com/huggingface/transformers/pull/94.patch", "merged_at": 1544387251000 }
https://api.github.com/repos/huggingface/transformers/issues/93
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/93/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/93/comments
https://api.github.com/repos/huggingface/transformers/issues/93/events
https://github.com/huggingface/transformers/pull/93
388,026,951
MDExOlB1bGxSZXF1ZXN0MjM2Mzg2MDI5
93
Zoeliao/dev
{ "login": "ZoeLiao", "id": 29351339, "node_id": "MDQ6VXNlcjI5MzUxMzM5", "avatar_url": "https://avatars.githubusercontent.com/u/29351339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZoeLiao", "html_url": "https://github.com/ZoeLiao", "followers_url": "https://api.github.com/users/ZoeLiao/followers", "following_url": "https://api.github.com/users/ZoeLiao/following{/other_user}", "gists_url": "https://api.github.com/users/ZoeLiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZoeLiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZoeLiao/subscriptions", "organizations_url": "https://api.github.com/users/ZoeLiao/orgs", "repos_url": "https://api.github.com/users/ZoeLiao/repos", "events_url": "https://api.github.com/users/ZoeLiao/events{/privacy}", "received_events_url": "https://api.github.com/users/ZoeLiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,544
1,544
1,544
NONE
null
RT
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/93/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/93/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/93", "html_url": "https://github.com/huggingface/transformers/pull/93", "diff_url": "https://github.com/huggingface/transformers/pull/93.diff", "patch_url": "https://github.com/huggingface/transformers/pull/93.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/92
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/92/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/92/comments
https://api.github.com/repos/huggingface/transformers/issues/92/events
https://github.com/huggingface/transformers/issues/92
387,903,721
MDU6SXNzdWUzODc5MDM3MjE=
92
Bert uncased and Bert large giving much lower results than Bert cased base
{ "login": "kh522", "id": 8645900, "node_id": "MDQ6VXNlcjg2NDU5MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/8645900?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kh522", "html_url": "https://github.com/kh522", "followers_url": "https://api.github.com/users/kh522/followers", "following_url": "https://api.github.com/users/kh522/following{/other_user}", "gists_url": "https://api.github.com/users/kh522/gists{/gist_id}", "starred_url": "https://api.github.com/users/kh522/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kh522/subscriptions", "organizations_url": "https://api.github.com/users/kh522/orgs", "repos_url": "https://api.github.com/users/kh522/repos", "events_url": "https://api.github.com/users/kh522/events{/privacy}", "received_events_url": "https://api.github.com/users/kh522/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Any specific example that we could investigate?", "I've implemented a version of SQuAD 2.0 on top of the current SQuAD that is similar to the way Google implemented their's on the official Bert repo. The base cased model works fine, but I noticed that uncased models tend to give worse results, even the large model.", "@kh522 would love to try it out. Are you planing to share your code?", "Sorry, but not quite yet. I was wondering if anyone had an intuition behind the error. If I recall correctly the SQuAD file lowercases the inputs for the tokenizer as a default. Shouldn't this mean that the pretrained uncased actually does better than the cased version?", "Hi @kh522, were you carefull no to lower case the input in the case of the uncased models? By default the tokenizer will lower the input [see here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#tokenizer-berttokenizer)", "Ah, I see. That would be a problem. I assume that the difference in accent markers will lead to a lower result. That being said, what is / is there a difference between lowercasing before and after the wordpiece tokenization?", "I've tried running it with the --do_lower_case flag set to False, and the results are still not good yet. Is there another possible idea?", "Try 10 different seeds maybe? A bigger batch-size can help too. More generally, you should try to explore the space of hyper-parameters for fine-tuning, there is often a high variance in the fine-tuning of bert so you will need to compute mean/variances of several results to get meaningful numbers.", "In the run_squad.py, the seed is set to 42, therefore the results reported in the repo should be reproducible, as there would not be any other randomness.\r\n", "> Hi @kh522, were you carefull no to lower case the input in the case of the uncased models? By default the tokenizer will lower the input [see here in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#tokenizer-berttokenizer)\r\n\r\n@thomwolf \r\nJust want to be sure (as the link does not land me in anything specific). If we are pre-training/fine tuning an uncased model still we don't need to lower case input text as tokenizer takes care of it; is this correct understanding?\r\n\r\nIn case, someone lowercases the text, what problem it can cause?", "@abmitra84 \r\nYes, the tokenizer of the uncased model takes care of it. **Lowercasing the text before by yourself doesn't affect it at all, it's just not necessary since the tokenizer takes care of it**. Code below should clarify it.\r\n\r\n```\r\n# Install last Hugging Face libraries (datasets & transformers)\r\n!pip install datasets git+https://github.com/huggingface/transformers/\r\n\r\nfrom transformers import AutoModelForMaskedLM, AutoTokenizer\r\nmodel = \"bert-base-uncased\"\r\ntokenizer = AutoTokenizer.from_pretrained(model, use_fast=True)\r\nmodel = AutoModelForMaskedLM.from_pretrained(model)\r\n\r\nsent = \"SARS-CoV-2 is a type of the Coronavirus\"\r\nsent_lower = sent.lower()\r\n\r\nprint(\"[1] \", sent)\r\nprint(\"[2] \",sent_lower)\r\nprint(\"[3] \",tokenizer.tokenize(sent))\r\nprint(\"[4] \",tokenizer.tokenize(sent_lower))\r\n```\r\n\r\nOutput:\r\n> [1] SARS-CoV-2 is a type of the Coronavirus\r\n> [2] sars-cov-2 is a type of the coronavirus\r\n> [3] ['sar', '##s', '-', 'co', '##v', '-', '2', 'is', 'a', 'type', 'of', 'the', 'corona', '##virus']\r\n> [4] ['sar', '##s', '-', 'co', '##v', '-', '2', 'is', 'a', 'type', 'of', 'the', 'corona', '##virus']" ]
1,544
1,640
1,544
NONE
null
Is there a reason why the BERT uncased model and the BERT large model give lower results than the cased model on downstream tasks?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/92/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/92/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/91
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/91/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/91/comments
https://api.github.com/repos/huggingface/transformers/issues/91/events
https://github.com/huggingface/transformers/pull/91
387,862,855
MDExOlB1bGxSZXF1ZXN0MjM2MjU4MzE3
91
run_classifier.py improvements
{ "login": "rodgzilla", "id": 12107203, "node_id": "MDQ6VXNlcjEyMTA3MjAz", "avatar_url": "https://avatars.githubusercontent.com/u/12107203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rodgzilla", "html_url": "https://github.com/rodgzilla", "followers_url": "https://api.github.com/users/rodgzilla/followers", "following_url": "https://api.github.com/users/rodgzilla/following{/other_user}", "gists_url": "https://api.github.com/users/rodgzilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rodgzilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rodgzilla/subscriptions", "organizations_url": "https://api.github.com/users/rodgzilla/orgs", "repos_url": "https://api.github.com/users/rodgzilla/repos", "events_url": "https://api.github.com/users/rodgzilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rodgzilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Neat!" ]
1,544
1,544
1,544
CONTRIBUTOR
null
Hi! This PR contains multiple improvements to the `run_classifier.py` file. The changes are: - removing trailing whitespace ([PEP 8](https://www.python.org/dev/peps/pep-0008/)), - simplifying the data processing code a bit, in particular tensor formatting, - fixing issue #83 by adapting the value of the `num_labels` argument of `BertForSequenceClassification.from_pretrained` to the dataset being used.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/91/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/91/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/91", "html_url": "https://github.com/huggingface/transformers/pull/91", "diff_url": "https://github.com/huggingface/transformers/pull/91.diff", "patch_url": "https://github.com/huggingface/transformers/pull/91.patch", "merged_at": 1544523125000 }
https://api.github.com/repos/huggingface/transformers/issues/90
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/90/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/90/comments
https://api.github.com/repos/huggingface/transformers/issues/90/events
https://github.com/huggingface/transformers/issues/90
387,770,421
MDU6SXNzdWUzODc3NzA0MjE=
90
Fine-tuning on a multiple-choice dataset?
{ "login": "Qzsl123", "id": 23257340, "node_id": "MDQ6VXNlcjIzMjU3MzQw", "avatar_url": "https://avatars.githubusercontent.com/u/23257340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qzsl123", "html_url": "https://github.com/Qzsl123", "followers_url": "https://api.github.com/users/Qzsl123/followers", "following_url": "https://api.github.com/users/Qzsl123/following{/other_user}", "gists_url": "https://api.github.com/users/Qzsl123/gists{/gist_id}", "starred_url": "https://api.github.com/users/Qzsl123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qzsl123/subscriptions", "organizations_url": "https://api.github.com/users/Qzsl123/orgs", "repos_url": "https://api.github.com/users/Qzsl123/repos", "events_url": "https://api.github.com/users/Qzsl123/events{/privacy}", "received_events_url": "https://api.github.com/users/Qzsl123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes it is, the code is not written yet but I'm planning to work on it. The idea is to format the input data the same way the authors of [Improving Language Understanding with Unsupervised Learning](https://blog.openai.com/language-unsupervised/)\r\n\r\n\r\n![Multiple choice GPT](https://i.imgur.com/z0Eanvy.png)\r\n\r\nYou run an inference `(context, choice)` for each choice, you compute the image of the `[CLS]` token by a linear layer with 1 output and then compute a softmax over the output of all choices.\r\n\r\nI will try to create a PR with this code very soon. ", "Thx for the reply.\r\nActually, I have the same plan. But I am not sure whether it will work. Anyway, I will have a try.", "If it worked in the OpenAI paper, I don't really see why it wouldn't work with this architecture.", "@Qzsl123 The code for multiple choice task is available in PR #96 if you want to test it.", "@rodgzilla yeah, I am trying to run it. Thanks for the wonderful job!", "> Yes it is, the code is not written yet but I'm planning to work on it. The idea is to format the input data the same way the authors of [Improving Language Understanding with Unsupervised Learning](https://blog.openai.com/language-unsupervised/)\r\n> \r\n> ![Multiple choice GPT](https://camo.githubusercontent.com/e5d95abc42ca2acb493a710383c949eb01c10bfb/68747470733a2f2f692e696d6775722e636f6d2f7a3045616e76792e706e67)\r\n> \r\n> You run an inference `(context, choice)` for each choice, you compute the image of the `[CLS]` token by a linear layer with 1 output and then compute a softmax over the output of all choices.\r\n> \r\n> I will try to create a PR with this code very soon.\r\n\r\nhi,The multi choices problem usually has one passage, question and ABCD four options。In your model, dose context means passage&question ?\r\n\r\n", "Any update on this issue?" ]
1,544
1,602
1,544
NONE
null
Is it possible to fine-tune on multiple-choice problems, which usually have one passage, a question, and four options (A/B/C/D)?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/90/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/90/timeline
completed
null
null
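The comments of issue 90 above describe the scoring scheme concretely: one `(context, choice)` forward pass per option, a 1-output linear layer over the `[CLS]` representation, and a softmax across the per-choice logits. The following is a minimal sketch of that idea; the `score_choices` helper and the classifier head are illustrative assumptions, not part of the library, and the head is untrained here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from pytorch_pretrained_bert import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()
classifier = nn.Linear(768, 1)  # 1 logit per (context, choice) pair; must be fine-tuned

def score_choices(context, choices):
    logits = []
    context_tokens = tokenizer.tokenize(context)
    for choice in choices:
        choice_tokens = tokenizer.tokenize(choice)
        tokens = ['[CLS]'] + context_tokens + ['[SEP]'] + choice_tokens + ['[SEP]']
        segment_ids = [0] * (len(context_tokens) + 2) + [1] * (len(choice_tokens) + 1)
        input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
        segments = torch.tensor([segment_ids])
        with torch.no_grad():
            _, pooled = model(input_ids, segments, output_all_encoded_layers=False)
        logits.append(classifier(pooled))   # image of [CLS] under the linear head
    return F.softmax(torch.cat(logits, dim=-1), dim=-1)  # distribution over choices
```

Because the linear head is randomly initialized in this sketch, the scores are meaningless until the head (and usually BERT itself) is fine-tuned on multiple-choice data, which is what the code in PR #96 went on to provide.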
https://api.github.com/repos/huggingface/transformers/issues/89
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/89/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/89/comments
https://api.github.com/repos/huggingface/transformers/issues/89/events
https://github.com/huggingface/transformers/issues/89
387,683,054
MDU6SXNzdWUzODc2ODMwNTQ=
89
bert-base-multilingual-cased - Text bigger than 512
{ "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/users/agemagician/followers", "following_url": "https://api.github.com/users/agemagician/following{/other_user}", "gists_url": "https://api.github.com/users/agemagician/gists{/gist_id}", "starred_url": "https://api.github.com/users/agemagician/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/agemagician/subscriptions", "organizations_url": "https://api.github.com/users/agemagician/orgs", "repos_url": "https://api.github.com/users/agemagician/repos", "events_url": "https://api.github.com/users/agemagician/events{/privacy}", "received_events_url": "https://api.github.com/users/agemagician/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello,\r\n\r\nI do not think that it is possible out of the box. The article states the following:\r\n\r\n> We use learned positional embeddings with supported sequence lengths up to 512 tokens.\r\n\r\nThe positional embeddings are therefore limited to 512 tokens. You may be able to add positional embeddings for position greater than 512 and learn them on your specific dataset but I don't know how efficient that would be.", "Hi @agemagician, you cannot really use pretrained bert for text longer than 512 tokens per se but you can use the sliding window approach.\r\n\r\nCheck this issue of the original bert repo for more details: https://github.com/google-research/bert/issues/66" ]
1,544
1,544
1,544
CONTRIBUTOR
null
Hello, I am trying to extract features from German text using bert-base-multilingual-cased. However, my text is longer than 512 tokens. Is there any way to use the pretrained BERT for text longer than 512 tokens?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/89/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/89/timeline
completed
null
null
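The sliding-window approach mentioned in the comments of issue 89 can be sketched in a few lines. This is a hedged illustration only: the window and stride values are arbitrary assumptions, with 510 chosen so that each chunk still fits the 512-token limit once `[CLS]` and `[SEP]` are added.

```python
def sliding_windows(tokens, window=510, stride=255):
    """Split `tokens` into overlapping chunks of at most `window` tokens.

    `stride` controls the overlap between consecutive chunks; a smaller
    stride gives each token more surrounding context on average.
    """
    chunks = []
    start = 0
    while True:
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
        start += stride
    return chunks

# Each chunk is then encoded separately; per-chunk features can be pooled
# (e.g. averaged) to approximate a representation of the whole document.
```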
https://api.github.com/repos/huggingface/transformers/issues/88
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/88/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/88/comments
https://api.github.com/repos/huggingface/transformers/issues/88/events
https://github.com/huggingface/transformers/issues/88
387,286,653
MDU6SXNzdWUzODcyODY2NTM=
88
Error when calculating loss and running backward
{ "login": "zhongpeixiang", "id": 11826803, "node_id": "MDQ6VXNlcjExODI2ODAz", "avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhongpeixiang", "html_url": "https://github.com/zhongpeixiang", "followers_url": "https://api.github.com/users/zhongpeixiang/followers", "following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}", "gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions", "organizations_url": "https://api.github.com/users/zhongpeixiang/orgs", "repos_url": "https://api.github.com/users/zhongpeixiang/repos", "events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}", "received_events_url": "https://api.github.com/users/zhongpeixiang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I probably know the bug. The final output layer is for binary classification but I use it for 4-class classification. I thought BERT can automatically decide between sigmoid and soft max. I will replace it with my own classifier tomorrow and see how it goes.", "The mismatched output size between BERT and our dataset is the bug. Also, remember to set the num_labels to your output size:\r\n\r\n<pre>\r\noutput_size = 4\r\nmodel.classifier = nn.Linear(768, output_size)\r\nmodel.num_labels = output_size \r\n</pre>" ]
1,543
1,543
1,543
NONE
null
I'm using the sentence classification example. I used my own dataset for emotionclassification (4 classes). The hyper-parameters are as follows: <pre> args.max_seq_length = 100 args.do_train = True args.do_eval = True args.do_lower_case = True args.train_batch_size = 32 args.eval_batch_size = 8 args.learning_rate = 2e-5 args.num_train_epochs = 3 args.warmup_proportion = 0.1 args.no_cuda = False args.local_rank = -1 args.gpu_id = 1 args.seed = 412 args.gradient_accumulation_steps = 1 args.optimize_on_cpu = False args.fp16 = False args.loss_scale = 128 </pre> I prepared my dataset accordingly and properly: <pre> 12/04/2018 21:23:02 - INFO - __main__ - *** Example *** 12/04/2018 21:23:02 - INFO - __main__ - guid: train-1 12/04/2018 21:23:02 - INFO - __main__ - tokens: [CLS] but i don ' t [ sep ] u just did [ sep ] i don ##t want to talk to u [SEP] 12/04/2018 21:23:02 - INFO - __main__ - input_ids: 101 2021 1045 2123 1005 1056 1031 19802 1033 1057 2074 2106 1031 19802 1033 1045 2123 2102 2215 2000 2831 2000 1057 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/04/2018 21:23:02 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/04/2018 21:23:02 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/04/2018 21:23:02 - INFO - __main__ - label: angry (id = 3) </pre> When I run the following code, a runtime error occurred: <pre> for step, batch in enumerate(tqdm(train_dataloader, desc="Iteration")): &nbsp &nbsp batch = tuple(t.to(device) for t in batch) &nbsp &nbsp input_ids, input_mask, segment_ids, label_ids = batch &nbsp &nbsp loss = model(input_ids, segment_ids, input_mask, label_ids) --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-20-1977b86302ed> in <module>() 17 try: ---> 18 loss.backward() 19 except RuntimeError: /raid5/peixiang/anaconda3/lib/python3.6/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 92 """ ---> 93 torch.autograd.backward(self, gradient, retain_graph, create_graph) 94 /raid5/peixiang/anaconda3/lib/python3.6/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 89 tensors, grad_tensors, retain_graph, create_graph, ---> 90 allow_unreachable=True) # allow_unreachable flag 91 RuntimeError: cublas runtime error : the GPU program failed to execute at /opt/conda/conda-bld/pytorch_1532581333611/work/aten/src/THC/THCBlas.cu:411 </pre> What might be the cause? The dataset? I run the MRPC example without any issue.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/88/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/88/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/87
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/87/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/87/comments
https://api.github.com/repos/huggingface/transformers/issues/87/events
https://github.com/huggingface/transformers/pull/87
387,269,400
MDExOlB1bGxSZXF1ZXN0MjM1Nzk1ODUx
87
Readme file links
{ "login": "rodgzilla", "id": 12107203, "node_id": "MDQ6VXNlcjEyMTA3MjAz", "avatar_url": "https://avatars.githubusercontent.com/u/12107203?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rodgzilla", "html_url": "https://github.com/rodgzilla", "followers_url": "https://api.github.com/users/rodgzilla/followers", "following_url": "https://api.github.com/users/rodgzilla/following{/other_user}", "gists_url": "https://api.github.com/users/rodgzilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rodgzilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rodgzilla/subscriptions", "organizations_url": "https://api.github.com/users/rodgzilla/orgs", "repos_url": "https://api.github.com/users/rodgzilla/repos", "events_url": "https://api.github.com/users/rodgzilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rodgzilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks Grégory!" ]
1,543
1,544
1,544
CONTRIBUTOR
null
Adding links to example files in `README.md`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/87/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/87/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/87", "html_url": "https://github.com/huggingface/transformers/pull/87", "diff_url": "https://github.com/huggingface/transformers/pull/87.diff", "patch_url": "https://github.com/huggingface/transformers/pull/87.patch", "merged_at": 1544024465000 }
https://api.github.com/repos/huggingface/transformers/issues/86
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/86/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/86/comments
https://api.github.com/repos/huggingface/transformers/issues/86/events
https://github.com/huggingface/transformers/issues/86
387,233,714
MDU6SXNzdWUzODcyMzM3MTQ=
86
code in run_squad.py line 263
{ "login": "hitxujian", "id": 11830865, "node_id": "MDQ6VXNlcjExODMwODY1", "avatar_url": "https://avatars.githubusercontent.com/u/11830865?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hitxujian", "html_url": "https://github.com/hitxujian", "followers_url": "https://api.github.com/users/hitxujian/followers", "following_url": "https://api.github.com/users/hitxujian/following{/other_user}", "gists_url": "https://api.github.com/users/hitxujian/gists{/gist_id}", "starred_url": "https://api.github.com/users/hitxujian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hitxujian/subscriptions", "organizations_url": "https://api.github.com/users/hitxujian/orgs", "repos_url": "https://api.github.com/users/hitxujian/repos", "events_url": "https://api.github.com/users/hitxujian/events{/privacy}", "received_events_url": "https://api.github.com/users/hitxujian/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "![image](https://user-images.githubusercontent.com/11830865/49438135-61d37c00-f7f8-11e8-8b2a-a7222bd30f0e.png)\r\n", "Hi, what is your question?", "Strictly speaking, the zero-padding in segment_ids leads to ambiguous tensor entries, because 0 can mean both \"first sentence\" (or query in another task?) and \"padding\".\r\n\r\nBut in practice this isn't a problem because anything related to padding gets masked out later." ]
1,543
1,544
1,544
NONE
null
# Zero-pad up to the sequence length. while len(input_ids) < max_seq_length: input_ids.append(0) input_mask.append(0) segment_ids.append(0) In the segment_ids array, 1 indicates a token from the passage and 0 indicates a token from the query. When padding, why is segment_ids filled with 0, which represents the query?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/86/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/86/timeline
completed
null
null
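To make the answer in issue 86 concrete: padded positions are neutralized by the attention mask, not by the segment ids. The BERT implementation turns the mask into a large negative additive term before the softmax; below is a small self-contained illustration of that trick with made-up scores.

```python
import torch
import torch.nn.functional as F

mask = torch.tensor([1, 1, 1, 0, 0])         # 1 = real token, 0 = padding
additive = (1.0 - mask.float()) * -10000.0   # same trick as in modeling.py
scores = torch.randn(5, 5) + additive        # broadcasts over query positions
probs = F.softmax(scores, dim=-1)
print(probs[:, 3:].max())  # attention paid to padded keys is effectively zero
```

Since the padded keys receive essentially zero attention weight, the value of their segment id never influences the output, which is why reusing 0 there is harmless.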
https://api.github.com/repos/huggingface/transformers/issues/85
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/85/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/85/comments
https://api.github.com/repos/huggingface/transformers/issues/85/events
https://github.com/huggingface/transformers/issues/85
387,100,844
MDU6SXNzdWUzODcxMDA4NDQ=
85
How to use a pre-trained SQuAD model?
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi there are now examples on how you can save and reload the models in the examples (`run_classifier`, `run_squad` and `run_swag`)" ]
1,543
1,544
1,544
CONTRIBUTOR
null
After training on SQuAD, I have a model file in a local folder: ``` -rw-rw-r-- 1 khashab2 cs_danr 4.7M Nov 21 19:20 dev-v1.1.json -rw-rw-r-- 1 khashab2 cs_danr 3.4K Nov 29 22:52 evaluate-v1.1.py drwxrwsr-x 2 khashab2 cs_danr 10 Nov 30 14:57 out2 -rw-rw-r-- 1 khashab2 cs_danr 29M Nov 21 19:20 train-v1.1.json -rw-rw-r-- 1 khashab2 cs_danr 490M Nov 29 23:14 train-v1.1.json_bert-base-uncased_384_128_64 -rw-rw-r-- 1 khashab2 cs_danr 490M Nov 30 15:05 train-v1.1.json_bert-large-uncased_384_128_64 ``` I want to use this pre-trained model to make predictions. Is there any example that I can follow for this? (If not, any pointers?) I looked at the instructions and didn't find anything relevant to this.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/85/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/85/timeline
completed
null
null
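A hedged sketch of what issue 85 is asking for: reloading fine-tuned weights into `BertForQuestionAnswering` for inference. The checkpoint filename comes from the listing above, but whether that file holds a raw `state_dict` depends on the version of `run_squad.py` that produced it, so treat that as an assumption.

```python
import torch
from pytorch_pretrained_bert import BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained('bert-base-uncased')
state_dict = torch.load('train-v1.1.json_bert-base-uncased_384_128_64',
                        map_location='cpu')
model.load_state_dict(state_dict)  # assumes the file is a raw state_dict
model.eval()

# At inference time, calling the model without start/end positions returns
# the span logits:
# start_logits, end_logits = model(input_ids, segment_ids, input_mask)
```

The predicted answer span is then recovered by taking the best-scoring start/end pair, as in the prediction-writing code of `run_squad.py`.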
https://api.github.com/repos/huggingface/transformers/issues/84
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/84/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/84/comments
https://api.github.com/repos/huggingface/transformers/issues/84/events
https://github.com/huggingface/transformers/pull/84
387,059,110
MDExOlB1bGxSZXF1ZXN0MjM1NjM0MDky
84
elementwise_mean -> mean (thinking ahead to pytorch 1.0)
{ "login": "joelgrus", "id": 1308313, "node_id": "MDQ6VXNlcjEzMDgzMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1308313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joelgrus", "html_url": "https://github.com/joelgrus", "followers_url": "https://api.github.com/users/joelgrus/followers", "following_url": "https://api.github.com/users/joelgrus/following{/other_user}", "gists_url": "https://api.github.com/users/joelgrus/gists{/gist_id}", "starred_url": "https://api.github.com/users/joelgrus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joelgrus/subscriptions", "organizations_url": "https://api.github.com/users/joelgrus/orgs", "repos_url": "https://api.github.com/users/joelgrus/repos", "events_url": "https://api.github.com/users/joelgrus/events{/privacy}", "received_events_url": "https://api.github.com/users/joelgrus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "oops, doesn't work under current pytorch, never mind" ]
1,543
1,543
1,543
CONTRIBUTOR
null
under the pytorch 1.0 nightly this test generates ``` UserWarning: reduction='elementwise_mean' is deprecated, please use reduction='mean' instead. ``` so this PR fixes that.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/84/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/84/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/84", "html_url": "https://github.com/huggingface/transformers/pull/84", "diff_url": "https://github.com/huggingface/transformers/pull/84.diff", "patch_url": "https://github.com/huggingface/transformers/pull/84.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/83
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/83/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/83/comments
https://api.github.com/repos/huggingface/transformers/issues/83/events
https://github.com/huggingface/transformers/issues/83
386,988,878
MDU6SXNzdWUzODY5ODg4Nzg=
83
Error while running example
{ "login": "chledowski", "id": 24462884, "node_id": "MDQ6VXNlcjI0NDYyODg0", "avatar_url": "https://avatars.githubusercontent.com/u/24462884?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chledowski", "html_url": "https://github.com/chledowski", "followers_url": "https://api.github.com/users/chledowski/followers", "following_url": "https://api.github.com/users/chledowski/following{/other_user}", "gists_url": "https://api.github.com/users/chledowski/gists{/gist_id}", "starred_url": "https://api.github.com/users/chledowski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chledowski/subscriptions", "organizations_url": "https://api.github.com/users/chledowski/orgs", "repos_url": "https://api.github.com/users/chledowski/repos", "events_url": "https://api.github.com/users/chledowski/events{/privacy}", "received_events_url": "https://api.github.com/users/chledowski/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi!\r\n\r\nIn case you haven't already, modifying the source at https://github.com/huggingface/pytorch-pretrained-BERT/blob/e60e8a606837ff7f49e583de8492e55575155eb6/examples/run_classifier.py#L491 and turning it into\r\n\r\n`cache_dir=PYTORCH_PRETRAINED_BERT_CACHE / 'distributed_{}'.format(args.local_rank), num_labels = 3)`\r\n\r\nshould get your finetuning started (you have three labels, `[\"contradiction\", \"entailment\", \"neutral\"]`)", "Thanks, it worked. I think it could be great if 3 classes was default when choosing MNLI :)" ]
1,543
1,543
1,543
NONE
null
Hi! I have a problem when running the example, could you please give me a hint on what may I be doing wrong? I use: `PYTHONPATH=. python examples/run_classifier.py --task_name MNLI --do_train --do_eval --do_lower_case --data_dir ../GLUE-baselines/glue_data/MNLI/ --bert_model bert-base-uncased --max_seq_len 40 --train_batch_size 10 --output_dir mnli/` And obtain: ``` ... 12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - label: entailment (id = 1) 12/03/2018 21:11:10 - INFO - __main__ - *** Example *** 12/03/2018 21:11:10 - INFO - __main__ - guid: train-3 12/03/2018 21:11:10 - INFO - __main__ - tokens: [CLS] how do you know ? all this is their information again . [SEP] this information belongs to them . [SEP] 12/03/2018 21:11:10 - INFO - __main__ - input_ids: 101 2129 2079 2017 2113 1029 2035 2023 2003 2037 2592 2153 1012 102 2023 2592 7460 2000 2068 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 12/03/2018 21:11:10 - INFO - __main__ - label: entailment (id = 1) 12/03/2018 21:11:10 - INFO - __main__ - *** Example *** 12/03/2018 21:11:10 - INFO - __main__ - guid: train-4 12/03/2018 21:11:10 - INFO - __main__ - tokens: [CLS] yeah i tell you what though if you go price some of those tennis shoes i can see why now you know they ' re getting up in [SEP] the tennis shoes have a range of prices . [SEP] 12/03/2018 21:11:10 - INFO - __main__ - input_ids: 101 3398 1045 2425 2017 2054 2295 2065 2017 2175 3976 2070 1997 2216 5093 6007 1045 2064 2156 2339 2085 2017 2113 2027 1005 2128 2893 2039 1999 102 1996 5093 6007 2031 1037 2846 1997 7597 1012 102 12/03/2018 21:11:10 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 12/03/2018 21:11:10 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 12/03/2018 21:11:10 - INFO - __main__ - label: neutral (id = 2) 12/03/2018 21:14:39 - INFO - __main__ - ***** Running training ***** 12/03/2018 21:14:39 - INFO - __main__ - Num examples = 392702 12/03/2018 21:14:39 - INFO - __main__ - Batch size = 10 12/03/2018 21:14:39 - INFO - __main__ - Num steps = 117810 Epoch: 0%| | 0/3 [00:00<?, ?it/sTHCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THC/generic/THCTensorMath.cu line=26 error=59 : device-side assert triggered | 0/39271 [00:00<?, ?it/s] /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed. /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed. 
/opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THCUNN/ClassNLLCriterion.cu:105: void cunn_ClassNLLCriterion_updateOutput_kernel(Dtype *, Dtype *, Dtype *, long *, Dtype *, int, int, int, int, long) [with Dtype = float, Acctype = float]: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed. Traceback (most recent call last): File "examples/run_classifier.py", line 637, in <module> main() File "examples/run_classifier.py", line 558, in main loss.backward() File "/home/kchledowski/anaconda2/envs/glue/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/kchledowski/anaconda2/envs/glue/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: cuda runtime error (59) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1533672544752/work/aten/src/THC/generic/THCTensorMath.cu:26 ``` I would be very grateful for any suggestions where to look. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/83/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/83/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/82
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/82/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/82/comments
https://api.github.com/repos/huggingface/transformers/issues/82/events
https://github.com/huggingface/transformers/issues/82
386,887,965
MDU6SXNzdWUzODY4ODc5NjU=
82
AttributeError: 'tuple' object has no attribute 'backward'
{ "login": "Qzsl123", "id": 23257340, "node_id": "MDQ6VXNlcjIzMjU3MzQw", "avatar_url": "https://avatars.githubusercontent.com/u/23257340?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Qzsl123", "html_url": "https://github.com/Qzsl123", "followers_url": "https://api.github.com/users/Qzsl123/followers", "following_url": "https://api.github.com/users/Qzsl123/following{/other_user}", "gists_url": "https://api.github.com/users/Qzsl123/gists{/gist_id}", "starred_url": "https://api.github.com/users/Qzsl123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Qzsl123/subscriptions", "organizations_url": "https://api.github.com/users/Qzsl123/orgs", "repos_url": "https://api.github.com/users/Qzsl123/repos", "events_url": "https://api.github.com/users/Qzsl123/events{/privacy}", "received_events_url": "https://api.github.com/users/Qzsl123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks like there was a code change which changed the forward method of the model involved here from returning a tensor to returning a tuple of tensors and the example hasn't been updated yet to reflect that change. There's probably a line in run_classifier.py like\r\n```Python\r\nloss = model(input...)\r\n```\r\nwhich now needs to be\r\n```Python\r\nloss, something_else = model(input...)\r\n```", "> Looks like there was a code change which changed the forward method of the model involved here from returning a tensor to returning a tuple of tensors and the example hasn't been updated yet to reflect that change. There's probably a line in run_classifier.py like\r\n> \r\n> ```python\r\n> loss = model(input...)\r\n> ```\r\n> \r\n> which now needs to be\r\n> \r\n> ```python\r\n> loss, something_else = model(input...)\r\n> ```\r\n\r\nYou are right! Thx!" ]
1,543
1,543
1,543
NONE
null
Traceback (most recent call last): | 0/11 [00:00<?, ?it/s] File "examples/run_classifier.py", line 637, in <module> main() File "examples/run_classifier.py", line 558, in main loss.backward() AttributeError: 'tuple' object has no attribute 'backward'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/82/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/82/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/81
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/81/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/81/comments
https://api.github.com/repos/huggingface/transformers/issues/81/events
https://github.com/huggingface/transformers/issues/81
386,786,079
MDU6SXNzdWUzODY3ODYwNzk=
81
There is some problem in supporting continued training
{ "login": "ZacharyWaseda", "id": 16608767, "node_id": "MDQ6VXNlcjE2NjA4NzY3", "avatar_url": "https://avatars.githubusercontent.com/u/16608767?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZacharyWaseda", "html_url": "https://github.com/ZacharyWaseda", "followers_url": "https://api.github.com/users/ZacharyWaseda/followers", "following_url": "https://api.github.com/users/ZacharyWaseda/following{/other_user}", "gists_url": "https://api.github.com/users/ZacharyWaseda/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZacharyWaseda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZacharyWaseda/subscriptions", "organizations_url": "https://api.github.com/users/ZacharyWaseda/orgs", "repos_url": "https://api.github.com/users/ZacharyWaseda/repos", "events_url": "https://api.github.com/users/ZacharyWaseda/events{/privacy}", "received_events_url": "https://api.github.com/users/ZacharyWaseda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @ZacharyWaseda, continuous training is an open-research problem. You should rather seek some solution in the papers/workshop/conference discussing researches in this field. This is not my personal field of expertise so I can only direct you to google and other search engine for more information." ]
1,543
1,544
1,544
NONE
null
I changed run_classifier.py to support continued training. I save the model.state_dict() and the BertAdam optimizer.state_dict(), and I load them when resuming training. However, after some epochs the loss increases little by little and finally ends at a large value. I do not know the reason. Please help me.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/81/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/81/timeline
completed
null
null
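For issue 81, a minimal checkpoint sketch follows; the path and key names are illustrative, not from the repo. One plausible cause of the rising loss, offered as a hypothesis: BertAdam anneals the learning rate against a fixed `t_total`, so the step counter restored with the optimizer state must stay consistent with the `t_total` used when resuming, otherwise the warmup/decay schedule is applied at the wrong point.

```python
import torch

def save_checkpoint(model, optimizer, epoch, path='checkpoint.pt'):
    # Save both the model weights and the optimizer state (step counts,
    # moment estimates) so that training can resume where it stopped.
    torch.save({'model': model.state_dict(),
                'optimizer': optimizer.state_dict(),
                'epoch': epoch}, path)

def load_checkpoint(model, optimizer, path='checkpoint.pt'):
    ckpt = torch.load(path, map_location='cpu')
    model.load_state_dict(ckpt['model'])
    optimizer.load_state_dict(ckpt['optimizer'])
    return ckpt['epoch']  # resume the epoch loop from here
```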
https://api.github.com/repos/huggingface/transformers/issues/80
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/80/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/80/comments
https://api.github.com/repos/huggingface/transformers/issues/80/events
https://github.com/huggingface/transformers/issues/80
386,763,906
MDU6SXNzdWUzODY3NjM5MDY=
80
How can I apply BERT to a cloze task?
{ "login": "Deep1994", "id": 24366782, "node_id": "MDQ6VXNlcjI0MzY2Nzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24366782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Deep1994", "html_url": "https://github.com/Deep1994", "followers_url": "https://api.github.com/users/Deep1994/followers", "following_url": "https://api.github.com/users/Deep1994/following{/other_user}", "gists_url": "https://api.github.com/users/Deep1994/gists{/gist_id}", "starred_url": "https://api.github.com/users/Deep1994/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Deep1994/subscriptions", "organizations_url": "https://api.github.com/users/Deep1994/orgs", "repos_url": "https://api.github.com/users/Deep1994/repos", "events_url": "https://api.github.com/users/Deep1994/events{/privacy}", "received_events_url": "https://api.github.com/users/Deep1994/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think that you best option would be to use the masked language modeling head and restrict the output of the softmax layer to your candidates.\r\n\r\nI think the following code does the job:\r\n\r\n```\r\nimport torch\r\nfrom pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM\r\n\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\ntext = 'From Monday to Friday most people are busy working or studying, '\\\r\n 'but in the evenings and weekends they are free and _ themselves.'\r\ntokenized_text = tokenizer.tokenize(text)\r\n\r\nmasked_index = tokenized_text.index('_')\r\ntokenized_text[masked_index] = '[MASK]'\r\n\r\ncandidates = ['love', 'work', 'enjoy', 'play']\r\ncandidates_ids = tokenizer.convert_tokens_to_ids(candidates)\r\n\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\n\r\nsegments_ids = [0] * len(tokenized_text)\r\n\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nsegments_tensors = torch.tensor([segments_ids])\r\n\r\nlanguage_model = BertForMaskedLM.from_pretrained('bert-base-uncased')\r\nlanguage_model.eval()\r\n\r\npredictions = language_model(tokens_tensor, segments_tensors)\r\npredictions_candidates = predictions[0, masked_index, candidates_ids]\r\nanswer_idx = torch.argmax(predictions_candidates).item()\r\n\r\nprint(f'The most likely word is \"{candidates[answer_idx]}\".')\r\n```\r\n\r\nWhen run, this code prints: \r\n\r\n```\r\nThe most likely word is \"enjoy\".\r\n```", "The solution of @rodgzilla looks good. Don't hesitate to re-open the issue if you have other questions.", "Just a note that this solution does not help you if any of your candidates are out of your model's whole-word vocabulary. (A work-around is required to deal with BERT's reliance on word-piece tokens.)", "> > I think that you best option would be to use the masked language modeling head and restrict the output of the softmax layer to your candidates.\r\n> > I think the following code does the job:\r\n> > ```\r\n> > import torch\r\n> > from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM\r\n> > \r\n> > tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n> > text = 'From Monday to Friday most people are busy working or studying, '\\\r\n> > 'but in the evenings and weekends they are free and _ themselves.'\r\n> > tokenized_text = tokenizer.tokenize(text)\r\n> > \r\n> > masked_index = tokenized_text.index('_')\r\n> > tokenized_text[masked_index] = '[MASK]'\r\n> > \r\n> > candidates = ['love', 'work', 'enjoy', 'play']\r\n> > candidates_ids = tokenizer.convert_tokens_to_ids(candidates)\r\n> > \r\n> > indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\n> > \r\n> > segments_ids = [0] * len(tokenized_text)\r\n> > \r\n> > tokens_tensor = torch.tensor([indexed_tokens])\r\n> > segments_tensors = torch.tensor([segments_ids])\r\n> > \r\n> > language_model = BertForMaskedLM.from_pretrained('bert-base-uncased')\r\n> > language_model.eval()\r\n> > \r\n> > predictions = language_model(tokens_tensor, segments_tensors)\r\n> > predictions_candidates = predictions[0, masked_index, candidates_ids]\r\n> > answer_idx = torch.argmax(predictions_candidates).item()\r\n> > \r\n> > print(f'The most likely word is \"{candidates[answer_idx]}\".')\r\n> > ```\r\n> > \r\n> > \r\n> > When run, this code prints:\r\n> > ```\r\n> > The most likely word is \"enjoy\".\r\n> > ```\r\n> \r\n> \r\nThanks, you solution is good\r\n\r\n" ]
1,543
1,586
1,544
NONE
null
Hi, I have a dataset like: From Monday to Friday most people are busy working or studying, but in the evenings and weekends they are free and _ themselves. And there are four candidates for the missing blank: ["love", "work", "enjoy", "play"]; here "enjoy" is the correct answer. It is a cloze-style task and it looks like the masked LM in BERT; the difference is that I don't want to search for the candidate over all tokens, only over the four given candidates. How can I do this? It looks like a negative sampling method. Do you have any idea? Thank you!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/80/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/80/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/79
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/79/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/79/comments
https://api.github.com/repos/huggingface/transformers/issues/79/events
https://github.com/huggingface/transformers/issues/79
386,698,511
MDU6SXNzdWUzODY2OTg1MTE=
79
numpy.core._internal.AxisError: axis 1 is out of bounds for array of dimension 1
{ "login": "A-Rain", "id": 29532760, "node_id": "MDQ6VXNlcjI5NTMyNzYw", "avatar_url": "https://avatars.githubusercontent.com/u/29532760?v=4", "gravatar_id": "", "url": "https://api.github.com/users/A-Rain", "html_url": "https://github.com/A-Rain", "followers_url": "https://api.github.com/users/A-Rain/followers", "following_url": "https://api.github.com/users/A-Rain/following{/other_user}", "gists_url": "https://api.github.com/users/A-Rain/gists{/gist_id}", "starred_url": "https://api.github.com/users/A-Rain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/A-Rain/subscriptions", "organizations_url": "https://api.github.com/users/A-Rain/orgs", "repos_url": "https://api.github.com/users/A-Rain/repos", "events_url": "https://api.github.com/users/A-Rain/events{/privacy}", "received_events_url": "https://api.github.com/users/A-Rain/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, just update the repo to the current master, this should have been fixed this weekend (re-open the issue of it's not)." ]
1,543
1,543
1,543
NONE
null
Hello, when I am running run_classifier.py with the MRPC dataset, there seems to be a mistake. The mistake is as follows: <img width="752" alt="default" src="https://user-images.githubusercontent.com/29532760/49360256-9de0e100-f713-11e8-9a5c-d9f2bc5331e6.PNG"> The mistake happens when training is over and the model is being evaluated: ``` with torch.no_grad(): tmp_eval_loss, logits = model(input_ids, segment_ids, input_mask, label_ids) ``` Here I found the size of logits is []. I'm using Python 3.5 and torch==0.4.1; I don't know how to fix it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/79/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/79/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/78
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/78/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/78/comments
https://api.github.com/repos/huggingface/transformers/issues/78/events
https://github.com/huggingface/transformers/issues/78
386,553,265
MDU6SXNzdWUzODY1NTMyNjU=
78
TypeError: object of type 'WindowsPath' has no len()
{ "login": "Deep1994", "id": 24366782, "node_id": "MDQ6VXNlcjI0MzY2Nzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24366782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Deep1994", "html_url": "https://github.com/Deep1994", "followers_url": "https://api.github.com/users/Deep1994/followers", "following_url": "https://api.github.com/users/Deep1994/following{/other_user}", "gists_url": "https://api.github.com/users/Deep1994/gists{/gist_id}", "starred_url": "https://api.github.com/users/Deep1994/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Deep1994/subscriptions", "organizations_url": "https://api.github.com/users/Deep1994/orgs", "repos_url": "https://api.github.com/users/Deep1994/repos", "events_url": "https://api.github.com/users/Deep1994/events{/privacy}", "received_events_url": "https://api.github.com/users/Deep1994/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Can you post a more detailed log?", "I install your PyTorch pretrained bert with pip like \"pip install pytorch-pretrained-bert\", then I run the code in Usage section like:\r\n\r\n`import torch`\r\n`from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM`\r\n\r\n`# Load pre-trained model tokenizer (vocabulary)`\r\n`tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')`\r\n\r\nbut there is an error occurs, the error information is: \r\n\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-7725148c607d>\", line 5, in <module>\r\n tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n File \"C:\\Users\\Deep\\Anaconda3\\lib\\site-packages\\pytorch_pretrained_bert\\tokenization.py\", line 117, in from_pretrained\r\n resolved_vocab_file = cached_path(vocab_file, cache_dir=cache_dir)\r\n\r\n File \"C:\\Users\\Deep\\Anaconda3\\lib\\site-packages\\pytorch_pretrained_bert\\file_utils.py\", line 88, in cached_path\r\n return get_from_cache(url_or_filename, cache_dir)\r\n\r\n File \"C:\\Users\\Deep\\Anaconda3\\lib\\site-packages\\pytorch_pretrained_bert\\file_utils.py\", line 169, in get_from_cache\r\n os.makedirs(cache_dir, exist_ok=True)\r\n\r\n File \"C:\\Users\\Deep\\Anaconda3\\lib\\os.py\", line 226, in makedirs\r\n head, tail = path.split(name)\r\n\r\n File \"C:\\Users\\Deep\\Anaconda3\\lib\\ntpath.py\", line 204, in split\r\n d, p = splitdrive(p)\r\n\r\n File \"C:\\Users\\Deep\\Anaconda3\\lib\\ntpath.py\", line 139, in splitdrive\r\n if len(p) >= 2:\r\n\r\nTypeError: object of type 'WindowsPath' has no len()", "Strange error. I am only using standard library here. Maybe it has something to do with your installation of Conda. You can try to manually specify a cache directory for the package by either:\r\n- setting the environment variable `PYTORCH_PRETRAINED_BERT_CACHE=XXX` to a directory `XXX` you created to store the downloaded models.\r\n- sending the path to this directory to the tokenizer and model using the `cache_dir=XXX` arguments, for example: `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', cache_dir=XXX)`", "I follow your second instruction and change the code to:\r\n\r\n`tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', cache_dir='C:/Users/Deep/Anaconda3/Lib/site-packages')`\r\n\r\nIt works! Thank you!\r\n" ]
1,543
1,543
1,543
NONE
null
Hi, when I run "tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')", the error "TypeError: object of type 'WindowsPath' has no len()" occurs, what is the problem? Thank you for your excellent code!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/78/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/78/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/77
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/77/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/77/comments
https://api.github.com/repos/huggingface/transformers/issues/77/events
https://github.com/huggingface/transformers/pull/77
386,551,555
MDExOlB1bGxSZXF1ZXN0MjM1MjUzNTk5
77
Correct assignment for logits in classifier example
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Ok thanks, that should work for now. I simplified the output of the classes indeed (only send back loss when a label is provided) so this example broke." ]
1,543
1,543
1,543
CONTRIBUTOR
null
I tried to address https://github.com/huggingface/pytorch-pretrained-BERT/issues/76. It should be correct, but there's likely a more efficient way.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/77/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/77/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/77", "html_url": "https://github.com/huggingface/transformers/pull/77", "diff_url": "https://github.com/huggingface/transformers/pull/77.diff", "patch_url": "https://github.com/huggingface/transformers/pull/77.patch", "merged_at": 1543752065000 }
https://api.github.com/repos/huggingface/transformers/issues/76
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/76/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/76/comments
https://api.github.com/repos/huggingface/transformers/issues/76/events
https://github.com/huggingface/transformers/issues/76
386,489,436
MDU6SXNzdWUzODY0ODk0MzY=
76
Wrong signature in model call in run_classifier.py example (?)
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You are right, I also encountered this small error.", "Thanks for noticing, fixed in #77." ]
1,543
1,543
1,543
CONTRIBUTOR
null
I think that https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_classifier.py#L608 may well have a problem, as it's not consistent with https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/examples/run_classifier.py#L549 nor with https://github.com/huggingface/pytorch-pretrained-BERT/blob/063be09b714bf4d2fbbc3de7f52c45b8bc6817eb/pytorch_pretrained_bert/modeling.py#L875, and this currently breaks the example. One quick patch would be to replace that line with ``` tmp_eval_loss = model(input_ids, segment_ids, input_mask, label_ids) logits = model(input_ids, segment_ids, input_mask) ``` But I am not so sure; there are likely better ways.
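A hedged sketch of how the patched evaluation step could look, assuming the `BertForSequenceClassification` behavior described above (the loss is returned only when `label_ids` is passed); the surrounding variable names come from the example script.

```python
import torch

model.eval()
with torch.no_grad():
    # One forward pass with labels to get the loss, one without to get logits.
    tmp_eval_loss = model(input_ids, segment_ids, input_mask, label_ids)
    logits = model(input_ids, segment_ids, input_mask)

# Predicted class per example, e.g. for computing accuracy.
preds = logits.cpu().numpy().argmax(axis=-1)
```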
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/76/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/76/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/75
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/75/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/75/comments
https://api.github.com/repos/huggingface/transformers/issues/75/events
https://github.com/huggingface/transformers/pull/75
386,394,416
MDExOlB1bGxSZXF1ZXN0MjM1MTUwMDc2
75
Point typo fix
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,543
1,543
1,543
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/75/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/75/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/75", "html_url": "https://github.com/huggingface/transformers/pull/75", "diff_url": "https://github.com/huggingface/transformers/pull/75.diff", "patch_url": "https://github.com/huggingface/transformers/pull/75.patch", "merged_at": 1543623344000 }
https://api.github.com/repos/huggingface/transformers/issues/74
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/74/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/74/comments
https://api.github.com/repos/huggingface/transformers/issues/74/events
https://github.com/huggingface/transformers/pull/74
386,394,375
MDExOlB1bGxSZXF1ZXN0MjM1MTUwMDQ0
74
Update finetuning example in README adding --do_lower_case
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Indeed" ]
1,543
1,543
1,543
CONTRIBUTOR
null
Should be consistent with the fact that an uncased model is used
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/74/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/74/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/74", "html_url": "https://github.com/huggingface/transformers/pull/74", "diff_url": "https://github.com/huggingface/transformers/pull/74.diff", "patch_url": "https://github.com/huggingface/transformers/pull/74.patch", "merged_at": 1543623331000 }
https://api.github.com/repos/huggingface/transformers/issues/73
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/73/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/73/comments
https://api.github.com/repos/huggingface/transformers/issues/73/events
https://github.com/huggingface/transformers/pull/73
386,369,366
MDExOlB1bGxSZXF1ZXN0MjM1MTMwMzgw
73
Third release
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,543
1,543
1,543
MEMBER
null
This third release comprises the following updates: - added the two new pre-trained models from Google: `bert-large-cased` and `bert-multilingual-cased`, - added a model for token-level classification: `BertForTokenClassification`, - added tests for every model class, with and without labels, - fixed the tokenizer loading function `BertTokenizer.from_pretrained()` when loading from a directory containing a pretrained model, - fixed typos in model docstrings and completed the docstrings, - improved examples (added `do_lower_case` arguments).
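For illustration, a minimal sketch of loading the additions listed above; `num_labels=5` is a hypothetical tag-set size, and the assumption is that `from_pretrained` forwards extra keyword arguments to the model and tokenizer constructors.

```python
from pytorch_pretrained_bert import BertTokenizer, BertForTokenClassification

# The new checkpoints in this release are cased, so keep casing in the tokenizer.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased", do_lower_case=False)
model = BertForTokenClassification.from_pretrained("bert-base-cased", num_labels=5)
```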
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/73/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/73/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/73", "html_url": "https://github.com/huggingface/transformers/pull/73", "diff_url": "https://github.com/huggingface/transformers/pull/73.diff", "patch_url": "https://github.com/huggingface/transformers/pull/73.patch", "merged_at": 1543615831000 }
https://api.github.com/repos/huggingface/transformers/issues/72
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/72/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/72/comments
https://api.github.com/repos/huggingface/transformers/issues/72/events
https://github.com/huggingface/transformers/pull/72
386,352,239
MDExOlB1bGxSZXF1ZXN0MjM1MTE2ODY5
72
Fix internal hyperlink typo
{ "login": "NirantK", "id": 3250749, "node_id": "MDQ6VXNlcjMyNTA3NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/3250749?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NirantK", "html_url": "https://github.com/NirantK", "followers_url": "https://api.github.com/users/NirantK/followers", "following_url": "https://api.github.com/users/NirantK/following{/other_user}", "gists_url": "https://api.github.com/users/NirantK/gists{/gist_id}", "starred_url": "https://api.github.com/users/NirantK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NirantK/subscriptions", "organizations_url": "https://api.github.com/users/NirantK/orgs", "repos_url": "https://api.github.com/users/NirantK/repos", "events_url": "https://api.github.com/users/NirantK/events{/privacy}", "received_events_url": "https://api.github.com/users/NirantK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,543
1,543
1,543
CONTRIBUTOR
null
Fix #tup to #tpu
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/72/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/72/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/72", "html_url": "https://github.com/huggingface/transformers/pull/72", "diff_url": "https://github.com/huggingface/transformers/pull/72.diff", "patch_url": "https://github.com/huggingface/transformers/pull/72.patch", "merged_at": 1543617233000 }
https://api.github.com/repos/huggingface/transformers/issues/71
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/71/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/71/comments
https://api.github.com/repos/huggingface/transformers/issues/71/events
https://github.com/huggingface/transformers/issues/71
386,303,565
MDU6SXNzdWUzODYzMDM1NjU=
71
run_squad script gets stuck
{ "login": "samyam", "id": 3409344, "node_id": "MDQ6VXNlcjM0MDkzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/3409344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/samyam", "html_url": "https://github.com/samyam", "followers_url": "https://api.github.com/users/samyam/followers", "following_url": "https://api.github.com/users/samyam/following{/other_user}", "gists_url": "https://api.github.com/users/samyam/gists{/gist_id}", "starred_url": "https://api.github.com/users/samyam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samyam/subscriptions", "organizations_url": "https://api.github.com/users/samyam/orgs", "repos_url": "https://api.github.com/users/samyam/repos", "events_url": "https://api.github.com/users/samyam/events{/privacy}", "received_events_url": "https://api.github.com/users/samyam/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Never mind, it just needed time to process the examples. It might be good to have the progress bar inside convert_examples_to_features.", "Maybe try distributed training? I don't think PyTorch `DataParallel` will be very efficient on 8 GPUs due to the python GIL.", "Thanks for the suggestion. I will try that. Currently, its showing me about 9 hours to fine tune bert-large on squad with batch size of 32 using DataParallel.\r\n\r\nThe performance improves quite a bit if a if I use a batch size of 256 with gradient accumulate, which makes sense as this reduces the frequency of communication of the gradients. A question I have is, does the learning rate adapt automatically to the batch size being used? Have you tried larger batch sizes?" ]
1,543
1,543
1,543
NONE
null
Hello, I am trying to run the SQuAD fine-tuning script, but it hangs after printing out a few predictions. I am attaching the log. Can you help take a look? I am running the script on a machine with 8 M40s. [bert_squad.log](https://github.com/huggingface/pytorch-pretrained-BERT/files/2634588/bert_squad.log) Best, Samyam
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/71/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/71/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/70
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/70/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/70/comments
https://api.github.com/repos/huggingface/transformers/issues/70/events
https://github.com/huggingface/transformers/pull/70
386,245,181
MDExOlB1bGxSZXF1ZXN0MjM1MDMzMDI3
70
fix typo in input for masked lm loss function
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "following_url": "https://api.github.com/users/ghost/following{/other_user}", "gists_url": "https://api.github.com/users/ghost/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghost/subscriptions", "organizations_url": "https://api.github.com/users/ghost/orgs", "repos_url": "https://api.github.com/users/ghost/repos", "events_url": "https://api.github.com/users/ghost/events{/privacy}", "received_events_url": "https://api.github.com/users/ghost/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "thanks" ]
1,543
1,543
1,543
NONE
null
Fixing #55. There was still a typo.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/70/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/70/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/70", "html_url": "https://github.com/huggingface/transformers/pull/70", "diff_url": "https://github.com/huggingface/transformers/pull/70.diff", "patch_url": "https://github.com/huggingface/transformers/pull/70.patch", "merged_at": 1543598627000 }
https://api.github.com/repos/huggingface/transformers/issues/69
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/69/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/69/comments
https://api.github.com/repos/huggingface/transformers/issues/69/events
https://github.com/huggingface/transformers/issues/69
386,197,836
MDU6SXNzdWUzODYxOTc4MzY=
69
cannot access pretrained vocab file on S3
{ "login": "zeze-zzz", "id": 29975099, "node_id": "MDQ6VXNlcjI5OTc1MDk5", "avatar_url": "https://avatars.githubusercontent.com/u/29975099?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zeze-zzz", "html_url": "https://github.com/zeze-zzz", "followers_url": "https://api.github.com/users/zeze-zzz/followers", "following_url": "https://api.github.com/users/zeze-zzz/following{/other_user}", "gists_url": "https://api.github.com/users/zeze-zzz/gists{/gist_id}", "starred_url": "https://api.github.com/users/zeze-zzz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zeze-zzz/subscriptions", "organizations_url": "https://api.github.com/users/zeze-zzz/orgs", "repos_url": "https://api.github.com/users/zeze-zzz/repos", "events_url": "https://api.github.com/users/zeze-zzz/events{/privacy}", "received_events_url": "https://api.github.com/users/zeze-zzz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I have the same issue. \r\n\r\n> OSError: HEAD request failed for url https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt with status code 404\r\n\r\n\r\nIt would be nice to be able to cache the vocab files as well as the model weights out of the box.", "I found temporary solution for this issue.\r\n`BertTokenizer.from_pretrained` method accepts local file instead of model_name\r\nex) `BertTokenizer.from_pretrained('/dir/to/vocab/bert-base-uncased-vocab.txt')`\r\n\r\nvocab txt file can be downloaded from [google bert repo](https://github.com/google-research/bert#pre-trained-models).", "The files are back. Sorry, wrong manipulation while adding the new models.", "> I found temporary solution for this issue.\r\n> `BertTokenizer.from_pretrained` method accepts local file instead of model_name\r\n> ex) `BertTokenizer.from_pretrained('/dir/to/vocab/bert-base-uncased-vocab.txt')`\r\n> \r\n\r\n\r\nWell, this solution doesn't seem to be working now, I get \r\n\r\n`OSError: Model name 'path/to/model/vocab.txt' was not found in tokenizers model name list (bart-/model/large, bart-large-mnli, bart-large-cnn, bart-large-xsum). We assumed 'path/to/model/vocab.txt' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.json', 'merges.txt'] but couldn't find such vocabulary files at this path or url. `", "> I found temporary solution for this issue.\r\n> `BertTokenizer.from_pretrained` method accepts local file instead of model_name\r\n> ex) `BertTokenizer.from_pretrained('/dir/to/vocab/bert-base-uncased-vocab.txt')`\r\n> \r\n> vocab txt file can be downloaded from [google bert repo](https://github.com/google-research/bert#pre-trained-models).\r\n\r\nHi, I add this file, however I got another error:\r\n\r\n*** json.decoder.JSONDecodeError: Expecting value: line 1 column 2 (char 1)\r\n\r\nAny help please?" ]
1,543
1,620
1,543
NONE
null
Hi, thanks for developing this well-made PyTorch version of BERT. Unfortunately, the pretrained vocab files are not reachable. The error traceback is below. > File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/tokenization.py", line 124, in from_pretrained resolved_vocab_file = cached_path(vocab_file) File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/file_utils.py", line 88, in cached_path return get_from_cache(url_or_filename, cache_dir) File "/usr/local/lib/python3.6/dist-packages/pytorch_pretrained_bert/file_utils.py", line 178, in get_from_cache .format(url, response.status_code)) OSError: HEAD request failed for url https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt with status code 404
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/69/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/69/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/68
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/68/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/68/comments
https://api.github.com/repos/huggingface/transformers/issues/68/events
https://github.com/huggingface/transformers/issues/68
386,055,987
MDU6SXNzdWUzODYwNTU5ODc=
68
Accuracy on classification task is lower than the official tensorflow version
{ "login": "ejld", "id": 31990860, "node_id": "MDQ6VXNlcjMxOTkwODYw", "avatar_url": "https://avatars.githubusercontent.com/u/31990860?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ejld", "html_url": "https://github.com/ejld", "followers_url": "https://api.github.com/users/ejld/followers", "following_url": "https://api.github.com/users/ejld/following{/other_user}", "gists_url": "https://api.github.com/users/ejld/gists{/gist_id}", "starred_url": "https://api.github.com/users/ejld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ejld/subscriptions", "organizations_url": "https://api.github.com/users/ejld/orgs", "repos_url": "https://api.github.com/users/ejld/repos", "events_url": "https://api.github.com/users/ejld/events{/privacy}", "received_events_url": "https://api.github.com/users/ejld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi!\r\nCould it be different seeds?\r\nSee e.g. https://github.com/huggingface/pytorch-pretrained-BERT/issues/53#issuecomment-441565229", "Hi @ejld, yes BERT has a large variance on many fine-tuning tasks (see also the discussion in #64).\r\nYou should try a bunch of different seeds (like 10 seeds for example) and compare the mean and standard deviation of the results." ]
1,543
1,543
1,543
NONE
null
Hi, I am running the same task with the same hyperparameters as the official Google TensorFlow implementation of BERT; however, I am getting around 1.5% lower accuracy. Can you please give any hint about the possible cause? Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/68/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/68/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/67
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/67/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/67/comments
https://api.github.com/repos/huggingface/transformers/issues/67/events
https://github.com/huggingface/transformers/issues/67
386,047,173
MDU6SXNzdWUzODYwNDcxNzM=
67
`TypeError: object of type 'NoneType' has no len()` when tuning on squad
{ "login": "danyaljj", "id": 2441454, "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danyaljj", "html_url": "https://github.com/danyaljj", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "repos_url": "https://api.github.com/users/danyaljj/repos", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Oh I see, this should be fixed in `master` by 257a35134a1bd378b16aa985ee76675289ff439c just update your repo please." ]
1,543
1,543
1,543
CONTRIBUTOR
null
When running the following command for tuning on squad, I am getting a petty error inside logger `TypeError: object of type 'NoneType' has no len()`. Any thoughts what could be the main cause of the problem? Full log: ``` python3.6 examples/run_squad.py \ > --bert_model bert-base-uncased \ > --do_train \ > --do_predict \ > --train_file $SQUAD_DIR/train-v1.1.json \ > --predict_file $SQUAD_DIR/dev-v1.1.json \ > --train_batch_size 12 \ > --learning_rate 3e-5 \ > --num_train_epochs 2.0 \ > --max_seq_length 384 \ > --doc_stride 128 \ > --output_dir out . . . 11/29/2018 23:10:14 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11/29/2018 23:10:14 - INFO - __main__ - segment_ids: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11/29/2018 23:10:14 - INFO - __main__ - start_position: 47 11/29/2018 23:10:14 - INFO - __main__ - end_position: 48 11/29/2018 23:10:14 - INFO - __main__ - answer: the 1870s 11/29/2018 23:14:38 - INFO - __main__ - Saving train features into cached file /shared/shelley/khashab2/pytorch-pretrained-BERT/squad/train-v1.1.json_bert-base-uncased_384_128_64 11/29/2018 23:14:51 - INFO - __main__ - ***** Running training ***** 11/29/2018 23:14:51 - INFO - __main__ - Num orig examples = 87599 Traceback (most recent call last): File "examples/run_squad.py", line 989, in <module> main() File "examples/run_squad.py", line 884, in main logger.info(" Num split examples = %d", len(train_features)) TypeError: object of type 'NoneType' has no len() ```
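The crash comes from logging `len(train_features)` while that variable is `None` in this revision. A defensive pattern could look like the sketch below; the variable names come from the script, but the fallback itself is an assumption, not the upstream fix.

```python
import pickle

# If the in-memory features were dropped after caching, reload them from disk
# before logging (this fallback is an assumption, not the upstream fix).
if train_features is None:
    with open(cached_train_features_file, "rb") as reader:
        train_features = pickle.load(reader)
logger.info("  Num split examples = %d", len(train_features))
```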
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/67/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/67/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/66
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/66/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/66/comments
https://api.github.com/repos/huggingface/transformers/issues/66/events
https://github.com/huggingface/transformers/pull/66
385,774,972
MDExOlB1bGxSZXF1ZXN0MjM0NjY4MDQy
66
speedup by truncating unused part
{ "login": "artemisart", "id": 9201969, "node_id": "MDQ6VXNlcjkyMDE5Njk=", "avatar_url": "https://avatars.githubusercontent.com/u/9201969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/artemisart", "html_url": "https://github.com/artemisart", "followers_url": "https://api.github.com/users/artemisart/followers", "following_url": "https://api.github.com/users/artemisart/following{/other_user}", "gists_url": "https://api.github.com/users/artemisart/gists{/gist_id}", "starred_url": "https://api.github.com/users/artemisart/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/artemisart/subscriptions", "organizations_url": "https://api.github.com/users/artemisart/orgs", "repos_url": "https://api.github.com/users/artemisart/repos", "events_url": "https://api.github.com/users/artemisart/events{/privacy}", "received_events_url": "https://api.github.com/users/artemisart/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi Mathis,\r\nThanks for that. I think it's better for the user to send inputs that they truncated themselves rather than doing that hidden inside the model.\r\nBest,\r\nThomas" ]
1,543
1,543
1,543
NONE
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/66/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/66/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/66", "html_url": "https://github.com/huggingface/transformers/pull/66", "diff_url": "https://github.com/huggingface/transformers/pull/66.diff", "patch_url": "https://github.com/huggingface/transformers/pull/66.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/65
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/65/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/65/comments
https://api.github.com/repos/huggingface/transformers/issues/65/events
https://github.com/huggingface/transformers/issues/65
385,638,595
MDU6SXNzdWUzODU2Mzg1OTU=
65
3 sentences as input for BertForSequenceClassification?
{ "login": "mikelkl", "id": 11305095, "node_id": "MDQ6VXNlcjExMzA1MDk1", "avatar_url": "https://avatars.githubusercontent.com/u/11305095?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mikelkl", "html_url": "https://github.com/mikelkl", "followers_url": "https://api.github.com/users/mikelkl/followers", "following_url": "https://api.github.com/users/mikelkl/following{/other_user}", "gists_url": "https://api.github.com/users/mikelkl/gists{/gist_id}", "starred_url": "https://api.github.com/users/mikelkl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mikelkl/subscriptions", "organizations_url": "https://api.github.com/users/mikelkl/orgs", "repos_url": "https://api.github.com/users/mikelkl/repos", "events_url": "https://api.github.com/users/mikelkl/events{/privacy}", "received_events_url": "https://api.github.com/users/mikelkl/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Technically it is possible but BERT was not pretrained to handle multiple SEP tokens between sentences and does not have a third token_type, so I think it won't be easy to make it work. You may also want to use a new token for the second separation.", "> Technically it is possible but BERT was not pretrained to handle multiple SEP tokens between sentences and does not have a third token_type, so I think it won't be easy to make it work. You may also want to use a new token for the second separation.\r\n\r\nHi artemisart,\r\n\r\nThanks for your reply.\r\n\r\nSo, if someone wanna take multiple sentences as input of BertForSequenceClassification, let's say a whole passage, an alternative way is to concatenate them into a single \"sentence\" and then fit it in, right?", "I you don't have a separation (like question/answer) then yes you can just concatenate them (but you are still limited to 512 tokens).", "@mikelkl I would also go with the solution and answer of @artemisart.", "@artemisart hi, if i have a single sentence classification task, should the max length of sentence limited to half of 512, that is to say 256?", "No, it will be better if you use the full 512 tokens.", "wouldn't concatenating the whole passage into a single sentence mean losing context of each sentence? @artemisart ", "No it shouldn't ", "What if I want to check on a huge corpus, that even concatenating into one sentence exceeds the 512 token limit? @artemisart", "@thedrowsywinger maybe u should try Transformer-XL", "> I you don't have a separation (like question/answer) then yes you can just concatenate them (but you are still limited to 512 tokens).\r\n\r\nI have 3 inputs, 1 of the input contains conversation (QUERY, ANSWER). \r\nQUERY: I want to ask a question.\r\n\r\n> ANSWER: Sure, ask away.\r\n> QUERY: How is the weather today?\r\n> ANSWER: It is nice and sunny.\r\n> QUERY: Okay, nice to know.\r\n> ANSWER: Would you like to know anything else?\r\n\r\nHow can I tell the model to separate the turns of conversation? Model is classification model.\r\nI was thinking to add a new special token <EOT> between the turns but could not get it work." ]
1,543
1,631
1,543
NONE
null
Hi there, Thanks for releasing this awesome repo, it does lots of people like me a great favor. So far I've tried the sentence-pair BertForSequenceClassification task, and it indeed works. I'd like to know if it is possible to use BertForSequenceClassification to model a triple-sentence classification problem whose input can be described as below: **[CLS]A[SEP]B[SEP]C[SEP]** Looking forward to your reply! Thanks & Regards
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/65/reactions", "total_count": 18, "+1": 18, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/65/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/64
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/64/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/64/comments
https://api.github.com/repos/huggingface/transformers/issues/64/events
https://github.com/huggingface/transformers/issues/64
385,555,095
MDU6SXNzdWUzODU1NTUwOTU=
64
Feature extraction for sequential labelling
{ "login": "zhaoxy92", "id": 21225257, "node_id": "MDQ6VXNlcjIxMjI1MjU3", "avatar_url": "https://avatars.githubusercontent.com/u/21225257?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaoxy92", "html_url": "https://github.com/zhaoxy92", "followers_url": "https://api.github.com/users/zhaoxy92/followers", "following_url": "https://api.github.com/users/zhaoxy92/following{/other_user}", "gists_url": "https://api.github.com/users/zhaoxy92/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaoxy92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaoxy92/subscriptions", "organizations_url": "https://api.github.com/users/zhaoxy92/orgs", "repos_url": "https://api.github.com/users/zhaoxy92/repos", "events_url": "https://api.github.com/users/zhaoxy92/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaoxy92/received_events", "type": "User", "site_admin": false }
[ { "id": 1260952223, "node_id": "MDU6TGFiZWwxMjYwOTUyMjIz", "url": "https://api.github.com/repos/huggingface/transformers/labels/Discussion", "name": "Discussion", "color": "22870e", "default": false, "description": "Discussion on a topic (keep it focused or open a new issue though)" }, { "id": 1314768611, "node_id": "MDU6TGFiZWwxMzE0NzY4NjEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": null } ]
closed
false
null
[]
[ "Well that seems like a good approach. Maybe you can find some inspiration in the code of the `BertForQuestionAnswering` model? It is not exactly what you are doing but maybe it can help.", "Thanks. It worked. However, a interesting issue about BERT is that it's highly sensitive to learning rate, which makes it very difficult to combine with other models", "@zhaoxy92 what sequence labeling task are you doing? I've got CoNLL'03 NER running with the ``bert-base-cased`` model, and also found the same sensitivity to hyper-parameters.\r\n\r\nThe best dev F1 score i've gotten after ~~half a day~~ a day of trying some parameters is ~~92.4~~ 94.6, which is a bit lower than the 96.4 dev score for BERT_base reported in the paper. I guess more tuning will increase the score some more.\r\n\r\nThe best configuration for me so far is:\r\n\r\n- Batch size: 160 (on four P40 GPUs with 24GB RAM each). Smaller batch sizes that fit on one or two GPUs give bad results.\r\n- Optimizer: Adam with learning rate 1e-4. Tried BertAdam with learning rate 1e-5, but it didn't seem to converge.\r\n- fp16/fp32: Only fp32 works. Tried fp16 (half precision) to allow larger batch sizes, but this gave really low scores, with and without loss scaling.\r\n\r\nAlso, properly averaging the loss is important: Not just ``loss /= batch_size``. You need to take into account padding and word pieces without predictions (https://github.com/google-research/bert/issues/33#issuecomment-436726952). If you have a mask tensor that indicates which bert inputs correspond to tagged tokens, then the proper averaging is ``loss /= mask.float().sum``\r\n\r\nAnother tip, truncating the input (https://github.com/huggingface/pytorch-pretrained-BERT/pull/66) enables much larger batch sizes. Without it the largest possible batch size was 56, but with truncating 160 is possible.", "I am also working on CoNLL03. Similar results as you got.", "@bheinzerling with the risk of going off topic here, would you mind sharing your code? I'd love to read and adapt it for a similar sequential classification task.", "I have some code for preparing batches here:\r\n\r\nhttps://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98\r\n\r\nThe important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff.\r\n\r\nWith this, feature extraction for each sentence, i.e. 
a list of tokens, is simply:\r\n\r\n```Python\r\nbert = dougu.bert.Bert.Model(\"bert-base-cased\")\r\nfeaturized_sentences = []\r\nfor tokens in sentences:\r\n features = {}\r\n features[\"bert_ids\"], features[\"bert_mask\"], features[\"bert_token_starts\"] = bert.subword_tokenize_to_ids(tokens)\r\n featurized_sentences.append(features)\r\n```\r\nThen I use a custom collate function for a DataLoader that turns featurized_sentences into batches:\r\n\r\n```Python\r\ndef collate_fn(featurized_sentences_batch):\r\n bert_batch = [torch.cat(features[key] for features in featurized_sentences], dim=0) for key in (\"bert_ids\", \"bert_mask\", \"bert_token_starts\")]\r\n return bert_batch\r\n```\r\nA simple sequence tagger module would look something like this:\r\n\r\n```Python\r\nclass SequenceTagger(torch.nn.Module):\r\n def __init__(self, data_parallel=True):\r\n bert = BertModel.from_pretrained(\"bert-base-cased\").to(device=torch.device(\"cuda\"))\r\n if data_parallel:\r\n self.bert = torch.nn.DataParallel(bert)\r\n else:\r\n self.bert = bert\r\n bert_dim = 786 # (or get the dim from BertEmbeddings)\r\n n_labels = 5 # need to set this for your task\r\n self.out = torch.nn.Linear(bert_dim, n_labels)\r\n ... # droput, log_softmax...\r\n \r\n def forward(self, bert_batch, true_labels):\r\n bert_ids, bert_mask, bert_token_starts = bert_batch\r\n # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM\r\n max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item()\r\n if max_length < bert_ids.shape[1]:\r\n bert_ids = bert_ids[:, :max_length]\r\n bert_mask = bert_mask[:, :max_length]\r\n\r\n segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence\r\n bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1]\r\n # select the states representing each token start, for each instance in the batch\r\n bert_token_reprs = [\r\n layer[starts.nonzero().squeeze(1)]\r\n for layer, starts in zip(bert_last_layer, bert_token_starts)]\r\n # need to pad because sentence length varies\r\n padded_bert_token_reprs = pad_sequence(\r\n bert_token_reprs, batch_first=True, padding_value=-1)\r\n # output/classification layer: input bert states and get log probabilities for cross entropy loss\r\n pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs)))\r\n mask = true_labels != -1 # I did set label = -1 for all padding tokens somewhere else\r\n loss = cross_entropy(pred_logits, true_labels)\r\n # average/reduce the loss according to the actual number of of predictions (i.e. one prediction per token).\r\n loss /= mask.float().sum()\r\n return loss\r\n```\r\n\r\nWrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started.", "@bheinzerling Thanks a lot for the starter, got awesome results!", "Thanks for sharing these tips here! It helps a lot. \r\n\r\nI tried to finetune BERT on multiple imbalanced datasets and found the result quite unstable... For an imbalanced dataset, I mean there are much more O labels than the others under the {B,I,O} tagging scheme. Tried weighted cross-entropy loss but the performance is still not as expected. Has anyone met the same issue? \r\n\r\nThanks!", "Hi~@bheinzerling\r\nI uesd batch size=16, and lr=2e-5, get the dev F1=0.951 and test F1=0.914 which lower than ELMO. 
What about your result now?\r\n", "@kugwzk I didn't do any more CoNLL'03 runs since the numbers reported in the BERT paper were apparently achieved by using document context, which is different from the standard sentence-based evaluation. You can find more details here: https://github.com/allenai/allennlp/pull/2067#issuecomment-443961816", "Hmmm...I think they should tell that in the paper...And do you know where to find that they used document context?", "That's what the folks over at allennlp said. I don't know where they got this information, maybe personal communication with one of the BERT authors?", "Anyway, thank you very much for tell me that.", "https://github.com/kamalkraj/BERT-NER\r\nReplicated results from BERT paper", "https://github.com/JianLiu91/bert_ner gives a solution that is very easy to understand. \r\nHowever, I still wonder whether is the best practice.", "Hi all, \r\n\r\nI am trying to train the BERT model on some data that I have. However, I am having trouble understanding how to adjust the labels following tokenization. I am trying to perform word level classification (similar to NER) \r\n\r\nIf I have the following tokenized sentence and its' labels:\r\n```\r\noriginal_tokens = ['The', <start>', 'eng-30-01258617-a', '<end>', 'frailty']\r\noriginal_labels = [0, 2, 3, 4, 1]\r\n```\r\n\r\nThen after using the BERT tokenizer I get the following:\r\n`bert_tokens = ['[CLS]', 'the', '<start>', 'eng-30-01258617-a', '<end>', 'frail', '##ty', '[SEP]']`\r\n\r\nAlso, I adjust my label array as follows:\r\n`bert_labels = [0, 2, 3, 4, 1, 1]`\r\n\r\n**N.B**. Tokens such as eng-30-01258617-a are not tokenized further as I included an ignore list which contains words and tokens that I do not want tokenized and I swapped them with the [unusedXXX] tokens found in the vocab.txt file. \r\n\r\nNotice how the last word 'frailty' is transformed into ['frail', '##ty'] and the label '1' which was used for the whole word is now placed under each word piece. Is this the correct way of doing it? If you would like a more in-depth explanation of what I am trying to achieve you can read the following: https://stackoverflow.com/questions/56129165/how-to-handle-labels-when-using-the-berts-wordpiece-tokenizer\r\n\r\nAny help would be greatly appreciated! Thanks in advance", "@dangal95, adjusting the original labels is probably not the best way. A simpler method that works well is described in this issue, here https://github.com/huggingface/pytorch-pretrained-BERT/issues/64#issuecomment-443703063", "This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.\n", "@nijianmo Hi, I am recently considering using weighted loss in NER task. I wonder if you have tried weighted crf or weighted softmax in pytorch implementation. If so, did you get a good performance ? Thanks in advance.", "Many thanks to @bheinzerling! For those who may concern , I've implemented a NER model based on pytorch-transformers and @bheinzerling's idea, which might help you get a quick start on it. Welcome to check [this](https://github.com/weizhepei/BERT-NER) out.", "> I have some code for preparing batches here:\r\n> \r\n> https://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98\r\n> \r\n> The important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff.\r\n> \r\n> With this, feature extraction for each sentence, i.e. 
a list of tokens, is simply:\r\n> \r\n> ```python\r\n> bert = dougu.bert.Bert.Model(\"bert-base-cased\")\r\n> featurized_sentences = []\r\n> for tokens in sentences:\r\n> features = {}\r\n> features[\"bert_ids\"], features[\"bert_mask\"], features[\"bert_token_starts\"] = bert.subword_tokenize_to_ids(tokens)\r\n> featurized_sentences.append(features)\r\n> ```\r\n> \r\n> Then I use a custom collate function for a DataLoader that turns featurized_sentences into batches:\r\n> \r\n> ```python\r\n> def collate_fn(featurized_sentences_batch):\r\n> bert_batch = [torch.cat(features[key] for features in featurized_sentences], dim=0) for key in (\"bert_ids\", \"bert_mask\", \"bert_token_starts\")]\r\n> return bert_batch\r\n> ```\r\n> \r\n> A simple sequence tagger module would look something like this:\r\n> \r\n> ```python\r\n> class SequenceTagger(torch.nn.Module):\r\n> def __init__(self, data_parallel=True):\r\n> bert = BertModel.from_pretrained(\"bert-base-cased\").to(device=torch.device(\"cuda\"))\r\n> if data_parallel:\r\n> self.bert = torch.nn.DataParallel(bert)\r\n> else:\r\n> self.bert = bert\r\n> bert_dim = 786 # (or get the dim from BertEmbeddings)\r\n> n_labels = 5 # need to set this for your task\r\n> self.out = torch.nn.Linear(bert_dim, n_labels)\r\n> ... # droput, log_softmax...\r\n> \r\n> def forward(self, bert_batch, true_labels):\r\n> bert_ids, bert_mask, bert_token_starts = bert_batch\r\n> # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM\r\n> max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item()\r\n> if max_length < bert_ids.shape[1]:\r\n> bert_ids = bert_ids[:, :max_length]\r\n> bert_mask = bert_mask[:, :max_length]\r\n> \r\n> segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence\r\n> bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1]\r\n> # select the states representing each token start, for each instance in the batch\r\n> bert_token_reprs = [\r\n> layer[starts.nonzero().squeeze(1)]\r\n> for layer, starts in zip(bert_last_layer, bert_token_starts)]\r\n> # need to pad because sentence length varies\r\n> padded_bert_token_reprs = pad_sequence(\r\n> bert_token_reprs, batch_first=True, padding_value=-1)\r\n> # output/classification layer: input bert states and get log probabilities for cross entropy loss\r\n> pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs)))\r\n> mask = true_labels != -1 # I did set label = -1 for all padding tokens somewhere else\r\n> loss = cross_entropy(pred_logits, true_labels)\r\n> # average/reduce the loss according to the actual number of of predictions (i.e. one prediction per token).\r\n> loss /= mask.float().sum()\r\n> return loss\r\n> ```\r\n> \r\n> Wrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started.\r\n\r\nI did not realize there is a method subword_tokenize until seeing your post. I did spend a lot of time wirte this method.", "> That's what the folks over at allennlp said. I don't know where they got this information, maybe personal communication with one of the BERT authors?\r\n\r\nJust adding a bit of clarification since I revisited the paper after reading that comment.\r\n\r\nFrom the BERT Paper Section 5.3 (https://arxiv.org/pdf/1810.04805.pdf)\r\nIn this section, we compare the two approaches by applying BERT to the CoNLL-2003 Named Entity Recognition (NER) task (Tjong Kim Sang and De Meulder, 2003). 
In the input to BERT, we use a case-preserving WordPiece model, and we include the maximal document context provided by the data. ", "@ramithp that was added in v2 of the paper, but wasn't present in v1, which is the version the discussion here refers to", "@bheinzerling Yeah, I just realized that. No wonder I couldn't remember seeing it earlier. Thanks for confirming it. Just wanted to add that bit to the thread in case there were others that haven't read the revision.", "@zhaoxy92 @thomwolf @bheinzerling @srslynow @rremani \r\nSorry about tag all of you. I wonder how to set the weight decay other than the BERT structure, for example the crf parameter after BERT output. Should I set it to be 0.01 or 0? Sorry again for tagging all of you because it is kind of urgent. ", "> @zhaoxy92 @thomwolf @bheinzerling @srslynow @rremani\r\n> Sorry about tag all of you. I wonder how to set the weight decay other than the BERT structure, for example the crf parameter after BERT output. Should I set it to be 0.01 or 0? Sorry again for tagging all of you because it is kind of urgent.\r\n\r\nThis repository does not use a CRF for NER classification? Anyway, parameters of a CRF depend on the data distribution you have. These links might be usefull: https://towardsdatascience.com/conditional-random-field-tutorial-in-pytorch-ca0d04499463 and https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html", "@srslynow Thanks for your answer! I am familiar with CRF, but kind of confused how to set the weight decay when the CRF is connected with BERT. The authors or huggingface seem not to have mentioned how to set weight decay beside the BERT structure.", "Thanks to https://github.com/huggingface/transformers/issues/64#issuecomment-443703063, I could get the implementation to work - for anyone else that's struggling to reproduce the results: https://github.com/chnsh/BERT-NER-CoNLL", "BERT-NER in Tensorflow 2.0\r\nhttps://github.com/kamalkraj/BERT-NER-TF", "> ple sequence tagger\r\n\r\n\r\n\r\n> I have some code for preparing batches here:\r\n> \r\n> https://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98\r\n> \r\n> The important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff.\r\n> \r\n> With this, feature extraction for each sentence, i.e. 
a list of tokens, is simply:\r\n> \r\n> ```python\r\n> bert = dougu.bert.Bert.Model(\"bert-base-cased\")\r\n> featurized_sentences = []\r\n> for tokens in sentences:\r\n> features = {}\r\n> features[\"bert_ids\"], features[\"bert_mask\"], features[\"bert_token_starts\"] = bert.subword_tokenize_to_ids(tokens)\r\n> featurized_sentences.append(features)\r\n> ```\r\n> \r\n> Then I use a custom collate function for a DataLoader that turns featurized_sentences into batches:\r\n> \r\n> ```python\r\n> def collate_fn(featurized_sentences_batch):\r\n> bert_batch = [torch.cat(features[key] for features in featurized_sentences], dim=0) for key in (\"bert_ids\", \"bert_mask\", \"bert_token_starts\")]\r\n> return bert_batch\r\n> ```\r\n> \r\n> A simple sequence tagger module would look something like this:\r\n> \r\n> ```python\r\n> class SequenceTagger(torch.nn.Module):\r\n> def __init__(self, data_parallel=True):\r\n> bert = BertModel.from_pretrained(\"bert-base-cased\").to(device=torch.device(\"cuda\"))\r\n> if data_parallel:\r\n> self.bert = torch.nn.DataParallel(bert)\r\n> else:\r\n> self.bert = bert\r\n> bert_dim = 786 # (or get the dim from BertEmbeddings)\r\n> n_labels = 5 # need to set this for your task\r\n> self.out = torch.nn.Linear(bert_dim, n_labels)\r\n> ... # droput, log_softmax...\r\n> \r\n> def forward(self, bert_batch, true_labels):\r\n> bert_ids, bert_mask, bert_token_starts = bert_batch\r\n> # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM\r\n> max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item()\r\n> if max_length < bert_ids.shape[1]:\r\n> bert_ids = bert_ids[:, :max_length]\r\n> bert_mask = bert_mask[:, :max_length]\r\n> \r\n> segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence\r\n> bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1]\r\n> # select the states representing each token start, for each instance in the batch\r\n> bert_token_reprs = [\r\n> layer[starts.nonzero().squeeze(1)]\r\n> for layer, starts in zip(bert_last_layer, bert_token_starts)]\r\n> # need to pad because sentence length varies\r\n> padded_bert_token_reprs = pad_sequence(\r\n> bert_token_reprs, batch_first=True, padding_value=-1)\r\n> # output/classification layer: input bert states and get log probabilities for cross entropy loss\r\n> pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs)))\r\n> mask = true_labels != -1 # I did set label = -1 for all padding tokens somewhere else\r\n> loss = cross_entropy(pred_logits, true_labels)\r\n> # average/reduce the loss according to the actual number of of predictions (i.e. 
one prediction per token).\r\n> loss /= mask.float().sum()\r\n> return loss\r\n> ```\r\n> \r\n> Wrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started.\r\n\r\n\r\n\r\n> ```python\r\n> bert_last_layer\r\n> ```\r\n\r\nHi, I am trying to make your code work, and here is my setup: I re-declare as free functions and constants everything that is needed\r\n```\r\nimport numpy as np\r\nfrom pytorch_transformers import BertModel\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\nSEP = \"[SEP]\"\r\nMASK = '[MASK]'\r\nCLS = \"[CLS]\"\r\nmax_len = 100\r\ndef flatten(list_of_lists):\r\n for list in list_of_lists:\r\n for item in list:\r\n yield item\r\ndef convert_tokens_to_ids(tokens, pad=True):\r\n token_ids = tokenizer.convert_tokens_to_ids(tokens)\r\n ids = torch.tensor([token_ids]).to(device=\"cpu\")\r\n assert ids.size(1) < max_len\r\n if pad:\r\n padded_ids = torch.zeros(1, max_len).to(ids)\r\n padded_ids[0, :ids.size(1)] = ids\r\n mask = torch.zeros(1, max_len).to(ids)\r\n mask[0, :ids.size(1)] = 1\r\n return padded_ids, mask\r\n else:\r\n return ids\r\n \r\ndef subword_tokenize(tokens):\r\n \"\"\"Segment each token into subwords while keeping track of\r\n token boundaries.\r\n Parameters\r\n ----------\r\n tokens: A sequence of strings, representing input tokens.\r\n Returns\r\n -------\r\n A tuple consisting of:\r\n - A list of subwords, flanked by the special symbols required\r\n by Bert (CLS and SEP).\r\n - An array of indices into the list of subwords, indicating\r\n that the corresponding subword is the start of a new\r\n token. For example, [1, 3, 4, 7] means that the subwords\r\n 1, 3, 4, 7 are token starts, while all other subwords\r\n (0, 2, 5, 6, 8...) are in or at the end of tokens.\r\n This list allows selecting Bert hidden states that\r\n represent tokens, which is necessary in sequence\r\n labeling.\r\n \"\"\"\r\n subwords = list(map(tokenizer.tokenize, tokens))\r\n print (\"subwords: \", subwords)\r\n subword_lengths = list(map(len, subwords))\r\n subwords = [CLS] + list(flatten(subwords)) + [SEP]\r\n print (\"subwords: \", subwords)\r\n token_start_idxs = 1 + np.cumsum([0] + subword_lengths[:-1])\r\n return subwords, token_start_idxs\r\n\r\ndef subword_tokenize_to_ids(tokens):\r\n \"\"\"Segment each token into subwords while keeping track of\r\n token boundaries and convert subwords into IDs.\r\n Parameters\r\n ----------\r\n tokens: A sequence of strings, representing input tokens.\r\n Returns\r\n -------\r\n A tuple consisting of:\r\n - A list of subword IDs, including IDs of the special\r\n symbols (CLS and SEP) required by Bert.\r\n - A mask indicating padding tokens.\r\n - An array of indices into the list of subwords. 
See\r\n doc of subword_tokenize.\r\n \"\"\"\r\n subwords, token_start_idxs = subword_tokenize(tokens)\r\n subword_ids, mask = convert_tokens_to_ids(subwords)\r\n token_starts = torch.zeros(1, max_len).to(subword_ids)\r\n token_starts[0, token_start_idxs] = 1\r\n return subword_ids, mask, token_starts\r\n```\r\nand then I try to add your extra code.\r\nI try to understand the code for this simple case:\r\n```\r\nsentences = [[\"the\", \"rolerationing\", \"ends\"], [\"A\", \"sequence\", \"of\", \"strings\" ,\",\", \"representing\", \"input\", \"tokens\", \".\"]]\r\n```\r\nHere\r\n```max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item() ```\r\nevaluates to 11.\r\n\r\nSome questions:\r\n1) \r\n```\r\nbert(bert_ids, segment_ids)\r\n```\r\nis this the same as \r\n```bert(bert_ids)``` ?\r\nIn that case the following is not needed: ```segment_ids = torch.zeros_like(bert_mask) # dummy segment IDs, since we only have one sentence```\r\n\r\nAlso, I do not understand what the comment means... ( # dummy segment IDs, since we only have one sentence)\r\n\r\n2) \r\n```bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1]```\r\nwhy do you take the last one? Is -1 here the last sentence? Why do we say last layer?\r\nAlso, for the above simple example its size is torch.Size([11, 768]). Is this what we want?\r\n" ]
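For reference on these two questions, a minimal sketch with `pytorch_pretrained_bert` (the shapes are illustrative): the forward call returns `(encoded_layers, pooled_output)`, the `[-1]` selects the top encoder layer (not the last sentence), and omitting `token_type_ids` defaults to the same all-zero segment IDs.

```python
import torch
from pytorch_pretrained_bert import BertModel

bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # deterministic outputs: dropout disabled

input_ids = torch.randint(0, 30522, (2, 11))  # a batch of 2 sequences, 11 subwords each
segment_ids = torch.zeros_like(input_ids)     # one sentence per example, so all zeros

# forward returns (encoded_layers, pooled_output); encoded_layers is a list of
# 12 tensors of shape [batch, seq_len, 768] for bert-base, one per encoder layer
encoded_layers, pooled = bert(input_ids, segment_ids)
last_layer = encoded_layers[-1]  # [2, 11, 768]: the top layer, not "the last sentence"

# omitting token_type_ids defaults to all zeros internally, so the "dummy"
# segment IDs above are equivalent to passing nothing for single sentences
same, _ = bert(input_ids, output_all_encoded_layers=False)
assert torch.allclose(last_layer, same)
```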
1,543
1,647
1,579
NONE
null
Hi, I have a question about using BERT for a sequence labeling task. Please correct me if I'm wrong. My understanding is: 1. Use BertModel loaded with pretrained weights instead of BertForMaskedLM. 2. In that case, given a sequence of tokens as input, BertModel outputs a list of hidden states; I use only the top-layer hidden states as the embedding for that sequence. 3. Then, to fine-tune the model, add a fully connected linear layer and a softmax to make the final decision. Is this entire process correct? I followed this procedure but could not get any results. Thank you!
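A minimal sketch of the setup described above (untested and illustrative: `n_labels`, the random inputs, and the all-ones mask are placeholders; it assumes `pytorch_pretrained_bert`):

```python
import torch
from torch import nn
from pytorch_pretrained_bert import BertModel

class BertTagger(nn.Module):
    def __init__(self, n_labels=5):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Linear(768, n_labels)  # 768 = hidden size of bert-base

    def forward(self, input_ids, attention_mask):
        # keep only the top encoder layer as the per-token representations
        last_layer, _ = self.bert(input_ids, attention_mask=attention_mask,
                                  output_all_encoded_layers=False)
        return self.classifier(last_layer)  # [batch, seq_len, n_labels] logits

model = BertTagger()
logits = model(torch.randint(0, 30522, (2, 16)), torch.ones(2, 16, dtype=torch.long))
# CrossEntropyLoss applies the softmax internally; flatten tokens for the loss
loss = nn.CrossEntropyLoss()(logits.view(-1, 5), torch.zeros(2 * 16, dtype=torch.long))
loss.backward()  # fine-tunes BERT and the linear head jointly
```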
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/64/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/64/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/63
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/63/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/63/comments
https://api.github.com/repos/huggingface/transformers/issues/63/events
https://github.com/huggingface/transformers/issues/63
385,487,365
MDU6SXNzdWUzODU0ODczNjU=
63
Unseen Vocab
{ "login": "siddsach", "id": 20043538, "node_id": "MDQ6VXNlcjIwMDQzNTM4", "avatar_url": "https://avatars.githubusercontent.com/u/20043538?v=4", "gravatar_id": "", "url": "https://api.github.com/users/siddsach", "html_url": "https://github.com/siddsach", "followers_url": "https://api.github.com/users/siddsach/followers", "following_url": "https://api.github.com/users/siddsach/following{/other_user}", "gists_url": "https://api.github.com/users/siddsach/gists{/gist_id}", "starred_url": "https://api.github.com/users/siddsach/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/siddsach/subscriptions", "organizations_url": "https://api.github.com/users/siddsach/orgs", "repos_url": "https://api.github.com/users/siddsach/repos", "events_url": "https://api.github.com/users/siddsach/events{/privacy}", "received_events_url": "https://api.github.com/users/siddsach/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "If you tokenize the input properly (tokenize before convert_tokens), it automatically falls back to subword/character-level(-like) embeddings.\r\nYou can add new words to the vocabulary, but you'll have to train the corresponding embeddings.", "Hi @siddsach,\r\nThanks for your kind words!\r\n@artemisart is right, BPE progressively falls back on character-level embeddings for unseen words.", "> If you tokenize the input properly (tokenize before convert_tokens), it automatically falls back to subword/character-level(-like) embeddings.\r\n> You can add new words to the vocabulary, but you'll have to train the corresponding embeddings.\r\n\r\nHi, what do you mean by `tokenize the input properly (tokenize before convert_tokens)` ?\r\nCan you share a tokenization sample (before and after) or some sample code, if any? Thank you" ]
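To make the "tokenize before convert_tokens" point concrete, a small sketch (the exact subword split is illustrative, since it depends on the vocabulary, but the `##` fallback is the general pattern):

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

text = "the patient received ketorolac"
# WRONG: whitespace-split words may not be in the vocab -> KeyError
# ids = tokenizer.convert_tokens_to_ids(text.split())

# RIGHT: tokenize first, so unseen words fall back to subword pieces
tokens = tokenizer.tokenize(text)
print(tokens)  # e.g. ['the', 'patient', 'received', 'ke', '##tor', '##ola', '##c']
ids = tokenizer.convert_tokens_to_ids(tokens)  # every piece is in the vocab
```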
1,543
1,571
1,543
NONE
null
Thank you so much for this well-documented and easy-to-understand implementation! I remember meeting you at WeCNLP and am so happy to see you push out usable implementations of the SOA in pytorch for the community!!!!! I have a question: The convert_tokens_to_ids method in the BertTokenizer that provides input to the BertEncoder uses an OrderedDict for the vocab attribute, which throws an error (e.g. `KeyError: 'ketorolac'`) for any words not in the vocab. Can I create another vocab object that adds unseen words and use that in the tokenizer? Does the pretrained BertEncoder depend on the default id mapping? It seems to me that ideally in the long-term, this repo would incorporate character level embeddings to deal with unseen words, but idk if that is necessary for this use-case.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/63/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/63/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/62
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/62/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/62/comments
https://api.github.com/repos/huggingface/transformers/issues/62/events
https://github.com/huggingface/transformers/issues/62
385,368,286
MDU6SXNzdWUzODUzNjgyODY=
62
Specify a model from a specific directory for extract_features.py
{ "login": "johann-petrak", "id": 619106, "node_id": "MDQ6VXNlcjYxOTEwNg==", "avatar_url": "https://avatars.githubusercontent.com/u/619106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johann-petrak", "html_url": "https://github.com/johann-petrak", "followers_url": "https://api.github.com/users/johann-petrak/followers", "following_url": "https://api.github.com/users/johann-petrak/following{/other_user}", "gists_url": "https://api.github.com/users/johann-petrak/gists{/gist_id}", "starred_url": "https://api.github.com/users/johann-petrak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johann-petrak/subscriptions", "organizations_url": "https://api.github.com/users/johann-petrak/orgs", "repos_url": "https://api.github.com/users/johann-petrak/repos", "events_url": "https://api.github.com/users/johann-petrak/events{/privacy}", "received_events_url": "https://api.github.com/users/johann-petrak/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The last update broke this, but you can fix this in tokenization.py, you have to add this after `vocab_file = pretrained_model_name`:\r\n```\r\nif os.path.isdir(vocab_file):\r\n vocab_file = os.path.join(vocab_file, \"vocab.txt\")\r\n```\r\n", "Thank you, is it fair to assume that this will get accepted as an issue and fixed in a future update/release?", "Yes :-) There is a new release planned for tonight that will fix this (among other things, basically all the other open issues).", "Ok, this is now included in the new release 0.3.0 (by #73)." ]
1,543
1,543
1,543
NONE
null
I have downloaded the model and vocab files into a specific location, using their original file names, so my directory for bert-base-cased contains: ``` bert-base-cased-vocab.txt bert_config.json pytorch_model.bin ``` But when I try to specify the directory which contains these files for the `--bert_model` parameter of `extract_features.py` I get the following error: ``` ValueError: Can't find a vocabulary file at path <THEDIRECTORYPATHISPECIFIED> ... ``` When I specify a file that exists and is a proper file, the error messages seem to indicate that the program wants to untar and uncompress the files. Is there no way to just specify a specific directory that contains the vocab, config, and model files?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/62/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/62/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/61
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/61/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/61/comments
https://api.github.com/repos/huggingface/transformers/issues/61/events
https://github.com/huggingface/transformers/issues/61
385,304,675
MDU6SXNzdWUzODUzMDQ2NzU=
61
BERTConfigs in example usages in `modeling.py` are not OK (?)
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @davidefiocco, you are right, I updated the docstrings in the new release 0.3.0." ]
1,543
1,543
1,543
CONTRIBUTOR
null
Hi! In the `config` definition https://github.com/huggingface/pytorch-pretrained-BERT/blob/21f0196412115876da1c38652d22d1f7a14b36ff/pytorch_pretrained_bert/modeling.py#L848 in the Example usage of `BertForSequenceClassification` in `modeling.py`, there are things I don't understand: - `vocab_size` is not an acceptable parameter name, judging by the `BertConfig` class definition https://github.com/huggingface/pytorch-pretrained-BERT/blob/21f0196412115876da1c38652d22d1f7a14b36ff/pytorch_pretrained_bert/modeling.py#L70 - even after changing `vocab_size` into `vocab_size_or_config_json_file`, for the choice of the other params given in the example, i.e. ``` vocab_size=32000, hidden_size=512, num_hidden_layers=8, num_attention_heads=6, intermediate_size=1024 ``` I get: `ValueError: The hidden size (512) is not a multiple of the number of attention heads (6)` I think that something similar may be true for the other classes as well, `BertForQuestionAnswering`, `BertForNextSentencePrediction`, etc. Am I missing something?
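For reference, a configuration along the lines of the docstring example that does satisfy the constraint (hidden_size must be divisible by num_attention_heads); the specific sizes here are arbitrary:

```python
from pytorch_pretrained_bert.modeling import BertConfig, BertForSequenceClassification

config = BertConfig(vocab_size_or_config_json_file=32000,
                    hidden_size=512,
                    num_hidden_layers=8,
                    num_attention_heads=8,   # 512 / 8 = 64, so this is valid
                    intermediate_size=1024)
model = BertForSequenceClassification(config, num_labels=2)
```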
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/61/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/61/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/60
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/60/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/60/comments
https://api.github.com/repos/huggingface/transformers/issues/60/events
https://github.com/huggingface/transformers/pull/60
385,278,339
MDExOlB1bGxSZXF1ZXN0MjM0Mjg0NDg3
60
Updated quick-start example with `BertForMaskedLM`
{ "login": "davidefiocco", "id": 4547987, "node_id": "MDQ6VXNlcjQ1NDc5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/4547987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidefiocco", "html_url": "https://github.com/davidefiocco", "followers_url": "https://api.github.com/users/davidefiocco/followers", "following_url": "https://api.github.com/users/davidefiocco/following{/other_user}", "gists_url": "https://api.github.com/users/davidefiocco/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidefiocco/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidefiocco/subscriptions", "organizations_url": "https://api.github.com/users/davidefiocco/orgs", "repos_url": "https://api.github.com/users/davidefiocco/repos", "events_url": "https://api.github.com/users/davidefiocco/events{/privacy}", "received_events_url": "https://api.github.com/users/davidefiocco/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Nice, thanks @davidefiocco " ]
1,543
1,543
1,543
CONTRIBUTOR
null
As `convert_ids_to_tokens` returns a list, the code in the README currently throws an `AssertionError`, so I propose a quick fix.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/60/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/60/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/60", "html_url": "https://github.com/huggingface/transformers/pull/60", "diff_url": "https://github.com/huggingface/transformers/pull/60.diff", "patch_url": "https://github.com/huggingface/transformers/pull/60.patch", "merged_at": 1543413549000 }
https://api.github.com/repos/huggingface/transformers/issues/59
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/59/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/59/comments
https://api.github.com/repos/huggingface/transformers/issues/59/events
https://github.com/huggingface/transformers/issues/59
385,158,595
MDU6SXNzdWUzODUxNTg1OTU=
59
not good when I use BERT for seq2seq model in keyphrase generation
{ "login": "whqwill", "id": 7381876, "node_id": "MDQ6VXNlcjczODE4NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/7381876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/whqwill", "html_url": "https://github.com/whqwill", "followers_url": "https://api.github.com/users/whqwill/followers", "following_url": "https://api.github.com/users/whqwill/following{/other_user}", "gists_url": "https://api.github.com/users/whqwill/gists{/gist_id}", "starred_url": "https://api.github.com/users/whqwill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whqwill/subscriptions", "organizations_url": "https://api.github.com/users/whqwill/orgs", "repos_url": "https://api.github.com/users/whqwill/repos", "events_url": "https://api.github.com/users/whqwill/events{/privacy}", "received_events_url": "https://api.github.com/users/whqwill/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Have you tried a transformer decoder instead of an RNN decoder? ", "Not yet, I will try. But I think an RNN decoder should not be that bad. ", "> Not yet, I will try. But I think an RNN decoder should not be that bad.\r\n\r\nHmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer.\r\nI am also very curious about the results of using a transformer decoder. If you finish, can you tell me? Thank you.", "I think the batch size of the RNN with BERT is too small. Please see\r\n\r\n> https://github.com/memray/seq2seq-keyphrase-pytorch/blob/master/pykp/dataloader.py\r\nlines 377-378", "I don't know what you mean by giving me this link. I really set it to 10 because of the memory problem. Actually, when the sentence length is 512, the max batch size is only 5; if it is 6 or bigger there will be a memory error on my GPU. ", "> > Not yet, I will try. But I think an RNN decoder should not be that bad.\r\n> \r\n> Hmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer.\r\n> I am also very curious about the results of using a transformer decoder. If you finish, can you tell me? Thank you.\r\n\r\nYou are right. Maybe the mean is better, I will try that as well. Thanks.", "May I ask a question? Are you Chinese? (haha)", "Because each example has N targets, we want to put all the targets in the same batch. 10 is so small that the targets of one example would probably end up in different batches.", "I know, but ... the same problem ... my memory is limited .. so ...\r\n\r\nPS. I am Chinese \r\n\r\n", "> I know, but ... the same problem ... my memory is limited .. so ...\r\n> \r\n> PS. I am Chinese\r\n\r\nI am as well, hahaha", "Could it be a corpus problem? BERT was trained on Wikipedia. I trained a mini BERT on KP20k; its accuracy on the test set is currently 80%. Do you want to try using mine as the encoder?", "What exactly is this 80% figure that is so high? Is it an F1 score? Could you send me your encoder so I can take a look? Thanks.\r\n\r\nwaynedane <[email protected]> wrote on Wednesday, November 28, 2018 at 11:14 PM:\r\n\r\n> Could it be a corpus problem? BERT was trained on Wikipedia. I trained a mini\r\n> BERT on KP20k; its accuracy on the test set is currently 80%. Do you want to try using mine as the encoder?\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/pytorch-pretrained-BERT/issues/59#issuecomment-442482124>,\r\n> or mute the thread\r\n> <https://github.com/notifications/unsubscribe-auth/AHCjdIT1G6Icse3LK2SZXO194JJTiM1Qks5uzqhMgaJpZM4Y3HWV>\r\n> .\r\n>\r\n", "The 80% accuracy is for the two pretraining tasks, masked LM and next sentence prediction, not for keyphrase generation; I didn't make that clear, sorry. My compute is limited: two P100s, almost a month now, and training still hasn't finished. 80% is the current performance.", "What do you mean by the mini BERT you mentioned?", "I think I roughly understand what you mean: you are essentially re-pretraining a BERT on KP20k, but doing it that way... does feel quite cumbersome. ", "> I think I roughly understand what you mean: you are essentially re-pretraining a BERT on KP20k, but doing it that way... does feel quite cumbersome.\r\n\r\nYes, I am using Junseong Kim's code: https://github.com/codertimo/BERT-pytorch . The model is much smaller than Google's BERT-Base Uncased; this one is L-8 H-256 A-8. I will send you the current training checkpoint and vocab file.", "But can my version use your checkpoint directly, or do I have to install your version of the code?", "You can send it to my email [email protected] , thanks.", "> But can my version use your checkpoint directly, or do I have to install your version of the code?\r\n\r\nYou can build a BERT model from Junseong Kim's code and then load the parameters; you don't necessarily have to install it.", "OK then. Send me the checkpoint and I will try it. ", "Hi guys,\r\nI would like to keep the issues of this repository focused on the package itself.\r\nI also think it's better to keep the conversation in English so everybody can participate.\r\nPlease move this conversation to your repository: https://github.com/memray/seq2seq-keyphrase-pytorch or emails.\r\nThanks, I am closing this discussion.\r\nBest,", "> The 80% accuracy is for the two pretraining tasks, masked LM and next sentence prediction, not for keyphrase generation; I didn't make that clear, sorry. My compute is limited: two P100s, almost a month now, and training still hasn't finished. 80% is the current performance.\r\n Hello, could you send me the mini model as well? [email protected]. Thank you!\r\n", "Hi @whqwill, I have some doubts about the way BERT is used with the RNN. \r\nIn the BERT-with-RNN method, I see you only use the last term's representation (I mean TN's) as the input to the RNN decoder. Why not use the other terms' representations, like T1 to TN-1? I think the last term alone carries too little information to represent the whole context." ]
1,543
1,563
1,543
NONE
null
Hi, I have recently been researching keyphrase generation. Usually, people use a seq2seq-with-attention model for this problem. Specifically, I use the framework https://github.com/memray/seq2seq-keyphrase-pytorch, which is an implementation of http://memray.me/uploads/acl17-keyphrase-generation.pdf . I changed only its encoder to BERT, but the result is not good. An experimental comparison of the two models is in the attachment. Can you advise whether what I did is reasonable and whether BERT is suitable for this kind of task? Thanks. [RNN vs BERT in Keyphrase generation.pdf](https://github.com/huggingface/pytorch-pretrained-BERT/files/2623599/RNN.vs.BERT.in.Keyphrase.generation.pdf)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/59/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/59/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/58
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/58/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/58/comments
https://api.github.com/repos/huggingface/transformers/issues/58/events
https://github.com/huggingface/transformers/pull/58
384,691,312
MDExOlB1bGxSZXF1ZXN0MjMzODMwMzc1
58
Bug fix in examples;correct t_total for distributed training;run pred…
{ "login": "llidev", "id": 29957883, "node_id": "MDQ6VXNlcjI5OTU3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/29957883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llidev", "html_url": "https://github.com/llidev", "followers_url": "https://api.github.com/users/llidev/followers", "following_url": "https://api.github.com/users/llidev/following{/other_user}", "gists_url": "https://api.github.com/users/llidev/gists{/gist_id}", "starred_url": "https://api.github.com/users/llidev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llidev/subscriptions", "organizations_url": "https://api.github.com/users/llidev/orgs", "repos_url": "https://api.github.com/users/llidev/repos", "events_url": "https://api.github.com/users/llidev/events{/privacy}", "received_events_url": "https://api.github.com/users/llidev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @lliimsft!" ]
1,543
1,543
1,543
CONTRIBUTOR
null
Bug fix in examples; correct t_total for distributed training; run prediction for full dataset
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/58/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/58/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/58", "html_url": "https://github.com/huggingface/transformers/pull/58", "diff_url": "https://github.com/huggingface/transformers/pull/58.diff", "patch_url": "https://github.com/huggingface/transformers/pull/58.patch", "merged_at": 1543405186000 }
https://api.github.com/repos/huggingface/transformers/issues/57
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/57/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/57/comments
https://api.github.com/repos/huggingface/transformers/issues/57/events
https://github.com/huggingface/transformers/issues/57
384,525,339
MDU6SXNzdWUzODQ1MjUzMzk=
57
Missing function convert_to_unicode in tokenization.py
{ "login": "ptrichel", "id": 15148709, "node_id": "MDQ6VXNlcjE1MTQ4NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/15148709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ptrichel", "html_url": "https://github.com/ptrichel", "followers_url": "https://api.github.com/users/ptrichel/followers", "following_url": "https://api.github.com/users/ptrichel/following{/other_user}", "gists_url": "https://api.github.com/users/ptrichel/gists{/gist_id}", "starred_url": "https://api.github.com/users/ptrichel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ptrichel/subscriptions", "organizations_url": "https://api.github.com/users/ptrichel/orgs", "repos_url": "https://api.github.com/users/ptrichel/repos", "events_url": "https://api.github.com/users/ptrichel/events{/privacy}", "received_events_url": "https://api.github.com/users/ptrichel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Fixed in master, thanks!" ]
1,543
1,543
1,543
NONE
null
The function _convert_to_unicode_ is not in tokenization.py but used to be there in v0.1.2. When fine tuning with run_classifier.py, you get an ImportError: cannot import name 'convert_to_unicode'. https://github.com/huggingface/pytorch-pretrained-BERT/blob/ce37b8e4819142171b61558e64f7dcb0286e9937/examples/run_classifier.py#L33
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/57/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/57/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/56
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/56/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/56/comments
https://api.github.com/repos/huggingface/transformers/issues/56/events
https://github.com/huggingface/transformers/issues/56
384,276,059
MDU6SXNzdWUzODQyNzYwNTk=
56
[Feature request ] Add support for the new cased version of the multilingual model
{ "login": "elyase", "id": 1175888, "node_id": "MDQ6VXNlcjExNzU4ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elyase", "html_url": "https://github.com/elyase", "followers_url": "https://api.github.com/users/elyase/followers", "following_url": "https://api.github.com/users/elyase/following{/other_user}", "gists_url": "https://api.github.com/users/elyase/gists{/gist_id}", "starred_url": "https://api.github.com/users/elyase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elyase/subscriptions", "organizations_url": "https://api.github.com/users/elyase/orgs", "repos_url": "https://api.github.com/users/elyase/repos", "events_url": "https://api.github.com/users/elyase/events{/privacy}", "received_events_url": "https://api.github.com/users/elyase/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @elyase, this model is now added in the new release 0.3.0.\r\nI also added the other new model by Google (`bert-large-cased`)" ]
1,543
1,543
1,543
CONTRIBUTOR
null
https://github.com/google-research/bert/commit/332a68723c34062b8f58e5fec3e430db4563320a
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/56/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/56/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/55
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/55/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/55/comments
https://api.github.com/repos/huggingface/transformers/issues/55/events
https://github.com/huggingface/transformers/issues/55
384,044,666
MDU6SXNzdWUzODQwNDQ2NjY=
55
Loss calculation error
{ "login": "jwang-lp", "id": 944876, "node_id": "MDQ6VXNlcjk0NDg3Ng==", "avatar_url": "https://avatars.githubusercontent.com/u/944876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jwang-lp", "html_url": "https://github.com/jwang-lp", "followers_url": "https://api.github.com/users/jwang-lp/followers", "following_url": "https://api.github.com/users/jwang-lp/following{/other_user}", "gists_url": "https://api.github.com/users/jwang-lp/gists{/gist_id}", "starred_url": "https://api.github.com/users/jwang-lp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jwang-lp/subscriptions", "organizations_url": "https://api.github.com/users/jwang-lp/orgs", "repos_url": "https://api.github.com/users/jwang-lp/repos", "events_url": "https://api.github.com/users/jwang-lp/events{/privacy}", "received_events_url": "https://api.github.com/users/jwang-lp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi Jian, can you give me a small (self-contained) example showing how to get this error?", "Hi Thomas! I modified the code in your `README.md` for an example:\r\n\r\n```python\r\nfrom pytorch_pretrained_bert.modeling import BertForMaskedLM, BertConfig\r\nfrom pytorch_pretrained_bert import BertTokenizer\r\nimport torch\r\n\r\nmodel = BertForMaskedLM.from_pretrained('bert-base-uncased')\r\n\r\n# Tokenized input\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\ntext = \"Who was Jim Henson ? Jim Henson was a puppeteer\"\r\ntokenized_text = tokenizer.tokenize(text)\r\n\r\n# Mask a token that we will try to predict back with `BertForMaskedLM`\r\nmasked_index = 6\r\ntokenized_text[masked_index] = '[MASK]'\r\n\r\n# Convert token to vocabulary indices\r\nindexed_truths = tokenizer.convert_tokens_to_ids(tokenized_text)\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\n\r\n# Convert inputs to PyTorch tensors\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nindexed_truths_tensor = torch.tensor([indexed_truths])\r\n\r\n# Evaluate loss\r\nmodel.eval()\r\nmasked_lm_logits_scores = model(tokens_tensor, masked_lm_labels=indexed_truths_tensor)\r\nprint(masked_lm_logits_scores)\r\n```", "Thank you, you are right, I fixed that on master. It will be in the next release." ]
1,543
1,543
1,543
NONE
null
https://github.com/huggingface/pytorch-pretrained-BERT/blob/982339d82984466fde3b1466f657a03200aa2ffb/pytorch_pretrained_bert/modeling.py#L744 Got `ValueError: Expected target size (1, 30522), got torch.Size([1, 11])` at line 744 of `modeling.py`. I think the line should be changed to `masked_lm_loss = loss_fct(prediction_scores.view([-1, self.config.vocab_size]), masked_lm_labels.view([-1]))`.
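The proposed fix can be sanity-checked in isolation; a hedged sketch of why the flattened form works (`CrossEntropyLoss` expects `[N, C]` scores against `[N]` targets, so both tensors are flattened over the batch and sequence dimensions):

```python
import torch
from torch.nn import CrossEntropyLoss

batch, seq_len, vocab_size = 1, 11, 30522
prediction_scores = torch.randn(batch, seq_len, vocab_size)
masked_lm_labels = torch.randint(0, vocab_size, (batch, seq_len))

loss_fct = CrossEntropyLoss(ignore_index=-1)
# flatten tokens: [batch * seq_len, vocab_size] scores vs [batch * seq_len] labels
loss = loss_fct(prediction_scores.view(-1, vocab_size),
                masked_lm_labels.view(-1))
```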
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/55/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/55/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/54
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/54/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/54/comments
https://api.github.com/repos/huggingface/transformers/issues/54/events
https://github.com/huggingface/transformers/issues/54
383,967,106
MDU6SXNzdWUzODM5NjcxMDY=
54
example in BertForSequenceClassification() conflicts with the api
{ "login": "labixiaoK", "id": 24908364, "node_id": "MDQ6VXNlcjI0OTA4MzY0", "avatar_url": "https://avatars.githubusercontent.com/u/24908364?v=4", "gravatar_id": "", "url": "https://api.github.com/users/labixiaoK", "html_url": "https://github.com/labixiaoK", "followers_url": "https://api.github.com/users/labixiaoK/followers", "following_url": "https://api.github.com/users/labixiaoK/following{/other_user}", "gists_url": "https://api.github.com/users/labixiaoK/gists{/gist_id}", "starred_url": "https://api.github.com/users/labixiaoK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/labixiaoK/subscriptions", "organizations_url": "https://api.github.com/users/labixiaoK/orgs", "repos_url": "https://api.github.com/users/labixiaoK/repos", "events_url": "https://api.github.com/users/labixiaoK/events{/privacy}", "received_events_url": "https://api.github.com/users/labixiaoK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n(1) is solved on master. I will release a new release soon with the fixes on pip. In the mean time you can install from sources if you want.\r\nI fixed the typo in the docstring you mention in (2), thanks, it should be a `1` instead of a `2`." ]
1,543
1,543
1,543
NONE
null
Hi, first of all, great job. But I encountered two problems when using it: **1**. `UnicodeDecodeError: 'gbk' codec can't decode byte 0x85 in position 4527: illegal multibyte sequence`, the same problem as issue 52, when I execute `BertTokenizer.from_pretrained('bert-base-uncased')`; however, `BertForNextSentencePrediction.from_pretrained('bert-base-uncased')` executes successfully. **2**. In pytorch-pretrained-BERT/pytorch_pretrained_bert/modeling.py, line 761 --> ``` `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] **with the token types indices selected in [0, 1]**. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). ``` but in the following example, at **line 784** --> `token_type_ids = torch.LongTensor([[0, 0, 1], [0, **2**, 0]])`, why does the '2' appear? I am confused. Also, is a pattern like '0, 1, 0' correct, or should it look like [000000111111], i.e. a contiguous run of '0's followed by a contiguous run of '1's? Thank you.
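On point (2), a short sketch of the intended pattern: token type IDs take values in {0, 1} and form contiguous blocks, all 0s for sentence A followed by all 1s for sentence B (the sentences here are arbitrary):

```python
from pytorch_pretrained_bert import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokens_a = ["[CLS]"] + tokenizer.tokenize("who was jim henson ?") + ["[SEP]"]
tokens_b = tokenizer.tokenize("jim henson was a puppeteer") + ["[SEP]"]

input_ids = tokenizer.convert_tokens_to_ids(tokens_a + tokens_b)
token_type_ids = [0] * len(tokens_a) + [1] * len(tokens_b)  # contiguous 0s, then 1s
```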
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/54/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/54/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/53
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/53/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/53/comments
https://api.github.com/repos/huggingface/transformers/issues/53/events
https://github.com/huggingface/transformers/issues/53
383,946,736
MDU6SXNzdWUzODM5NDY3MzY=
53
Multi-GPU training vs Distributed training
{ "login": "llidev", "id": 29957883, "node_id": "MDQ6VXNlcjI5OTU3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/29957883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llidev", "html_url": "https://github.com/llidev", "followers_url": "https://api.github.com/users/llidev/followers", "following_url": "https://api.github.com/users/llidev/following{/other_user}", "gists_url": "https://api.github.com/users/llidev/gists{/gist_id}", "starred_url": "https://api.github.com/users/llidev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llidev/subscriptions", "organizations_url": "https://api.github.com/users/llidev/orgs", "repos_url": "https://api.github.com/users/llidev/repos", "events_url": "https://api.github.com/users/llidev/events{/privacy}", "received_events_url": "https://api.github.com/users/llidev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nThanks for the feedback, it's always interesting to compare the various possible ways to train the model indeed.\r\n\r\nThe most likely cause for (2) is that MRPC is a small dataset and the model shows a high variance in the results depending on the initialization of the weights for example (see the original BERT repo on that also). The distributed and multi-gpu setups probably do not use the random generators in the exact same order which lead to different initializations.\r\n\r\nYou can have an intuition of that by training with different seeds, you will see there is easily a 10% variation in the final accuracy...\r\n\r\nIf you can do that, a better way to compare the results would thus be to take something like 10 different seeds for each training condition and compare the mean and standard deviation of the results.", "Thanks for your feedback!\r\n\r\nAfter some investigations, it looks like `t_total` is not set properly for distributed training in BertAdam. The actual `t_total` per distributed worker should be divided by the worker count. \r\n\r\nI have included the following fix in my PR https://github.com/huggingface/pytorch-pretrained-BERT/pull/58\r\n\r\n```\r\n t_total = num_train_steps\r\n if args.local_rank != -1:\r\n t_total = t_total // torch.distributed.get_world_size()\r\n optimizer = BertAdam(optimizer_grouped_parameters,\r\n lr=args.learning_rate,\r\n warmup=args.warmup_proportion,\r\n t_total=t_total)\r\n``` " ]
1,543
1,543
1,543
CONTRIBUTOR
null
Hi, I have a question about Multi-GPU vs Distributed training, probably unrelated to BERT itself. I have a 4-GPU server, and was trying to run `run_classifier.py` in two ways: (a) run single-node distributed training with 4 processes and a minibatch of 32 each (b) run multi-GPU training with a minibatch of 128, keeping all other hyperparameters the same. Intuitively I believe (a) and (b) should yield roughly the same accuracy and training time. Below please find my observations: 1. (a) runs ~20% faster than (b). 2. (b) yields a final evaluation accuracy ~4% better than (a). The first looks reasonable, since I guess the loss.mean() is done on the CPU, which may be slower than using NCCL directly? However, I don't quite understand the second observation. Can you please give any hint or reference about the possible cause? Thanks!
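As a toy illustration of where that mean happens (a hedged sketch, not the actual script; it only does something interesting on a multi-GPU machine):

```python
import torch
from torch import nn

class ToyLossModel(nn.Module):
    def forward(self, x):
        return (x ** 2).mean()  # each replica returns a scalar loss

model = ToyLossModel()
n_gpu = torch.cuda.device_count()
if n_gpu > 1:
    model = nn.DataParallel(model).cuda()
    loss = model(torch.randn(128, 10).cuda())  # shape [n_gpu]: one loss per replica
    # the gather and mean run on DataParallel's output_device (a GPU), not the CPU
    loss = loss.mean()
```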
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/53/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/53/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/52
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/52/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/52/comments
https://api.github.com/repos/huggingface/transformers/issues/52/events
https://github.com/huggingface/transformers/issues/52
383,586,156
MDU6SXNzdWUzODM1ODYxNTY=
52
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 3920: character maps to <undefined>
{ "login": "raskolnnikov", "id": 5455837, "node_id": "MDQ6VXNlcjU0NTU4Mzc=", "avatar_url": "https://avatars.githubusercontent.com/u/5455837?v=4", "gravatar_id": "", "url": "https://api.github.com/users/raskolnnikov", "html_url": "https://github.com/raskolnnikov", "followers_url": "https://api.github.com/users/raskolnnikov/followers", "following_url": "https://api.github.com/users/raskolnnikov/following{/other_user}", "gists_url": "https://api.github.com/users/raskolnnikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/raskolnnikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/raskolnnikov/subscriptions", "organizations_url": "https://api.github.com/users/raskolnnikov/orgs", "repos_url": "https://api.github.com/users/raskolnnikov/repos", "events_url": "https://api.github.com/users/raskolnnikov/events{/privacy}", "received_events_url": "https://api.github.com/users/raskolnnikov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I am facing the same problem.\r\n\r\nFixed it with \"with open(vocab_file, \"r\"**, encoding=\"utf-8\"**) as reader:\" in line 68 of tokenization.py", "Thanks, it's fixed on master and will be included in the next release." ]
1,542
1,542
1,542
NONE
null
Installed pytorch-pretrained-BERT from source, Python 3.7, Windows 10 When I run the following snippet: import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') I get the following: --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-input-2-7725148c607d> in <module>() 3 4 # Load pre-trained model tokenizer (vocabulary) ----> 5 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') ~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in from_pretrained(cls, pretrained_model_name, do_lower_case) 139 vocab_file, resolved_vocab_file)) 140 # Instantiate tokenizer. --> 141 tokenizer = cls(resolved_vocab_file, do_lower_case) 142 except FileNotFoundError: 143 logger.error( ~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in __init__(self, vocab_file, do_lower_case) 93 "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained " 94 "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)) ---> 95 self.vocab = load_vocab(vocab_file) 96 self.ids_to_tokens = collections.OrderedDict( 97 [(ids, tok) for tok, ids in self.vocab.items()]) ~\Anaconda3\lib\site-packages\pytorch_pretrained_bert\tokenization.py in load_vocab(vocab_file) 68 with open(vocab_file, "r", encoding="utf8") as reader: 69 while True: ---> 70 token = convert_to_unicode(reader.readline()) 71 if not token: 72 break ~\Anaconda3\lib\encodings\cp1252.py in decode(self, input, final) 21 class IncrementalDecoder(codecs.IncrementalDecoder): 22 def decode(self, input, final=False): ---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0] 24 25 class StreamWriter(Codec,codecs.StreamWriter): UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 3920: character maps to <undefined>
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/52/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/52/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/51
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/51/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/51/comments
https://api.github.com/repos/huggingface/transformers/issues/51/events
https://github.com/huggingface/transformers/issues/51
383,162,319
MDU6SXNzdWUzODMxNjIzMTk=
51
Missing options/arguments in run_squad.py for BERT Large
{ "login": "avisil", "id": 43005718, "node_id": "MDQ6VXNlcjQzMDA1NzE4", "avatar_url": "https://avatars.githubusercontent.com/u/43005718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avisil", "html_url": "https://github.com/avisil", "followers_url": "https://api.github.com/users/avisil/followers", "following_url": "https://api.github.com/users/avisil/following{/other_user}", "gists_url": "https://api.github.com/users/avisil/gists{/gist_id}", "starred_url": "https://api.github.com/users/avisil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avisil/subscriptions", "organizations_url": "https://api.github.com/users/avisil/orgs", "repos_url": "https://api.github.com/users/avisil/repos", "events_url": "https://api.github.com/users/avisil/events{/privacy}", "received_events_url": "https://api.github.com/users/avisil/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, the readme example was for an older version. I have updated them with the simplified parameters used in the current release. Thanks." ]
1,542
1,543
1,543
NONE
null
Thanks for the great code. However, the `run_squad.py` for BERT Large seems not to have the `vocab_file` and `bert_config_file` (or other) options/arguments. Did you push the latest version? Also, it is looking for a PyTorch model file (a bin file). Does it need to be there? I also had to add this line to the file to make BERT base run on SQuAD 1.1: `parser.add_argument('--do_lower_case', action="store_true", default=True, help="Lowercase the input")`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/51/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/51/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/50
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/50/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/50/comments
https://api.github.com/repos/huggingface/transformers/issues/50/events
https://github.com/huggingface/transformers/issues/50
383,055,235
MDU6SXNzdWUzODMwNTUyMzU=
50
pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py error
{ "login": "antxiaojun", "id": 44923827, "node_id": "MDQ6VXNlcjQ0OTIzODI3", "avatar_url": "https://avatars.githubusercontent.com/u/44923827?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antxiaojun", "html_url": "https://github.com/antxiaojun", "followers_url": "https://api.github.com/users/antxiaojun/followers", "following_url": "https://api.github.com/users/antxiaojun/following{/other_user}", "gists_url": "https://api.github.com/users/antxiaojun/gists{/gist_id}", "starred_url": "https://api.github.com/users/antxiaojun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antxiaojun/subscriptions", "organizations_url": "https://api.github.com/users/antxiaojun/orgs", "repos_url": "https://api.github.com/users/antxiaojun/repos", "events_url": "https://api.github.com/users/antxiaojun/events{/privacy}", "received_events_url": "https://api.github.com/users/antxiaojun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Maybe some additional information could help me help you?", "Initialize PyTorch weight ['cls', 'seq_relationship', 'output_weights']\r\nSkipping cls/seq_relationship/output_weights/adam_m\r\nSkipping cls/seq_relationship/output_weights/adam_v\r\nTraceback (most recent call last):\r\n File \"/home/tiandan.cxj/python/model_serving_python/lib/python3.5/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/home/tiandan.cxj/python/model_serving_python/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/tiandan.cxj/platform/pytorch_BERT/pytorch-pretrained-BERT/pytorch_pretrained_bert/__main__.py\", line 19, in <module>\r\n convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)\r\n File \"/home/tiandan.cxj/platform/pytorch_BERT/pytorch-pretrained-BERT/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py\", line 69, in convert_tf_checkpoint_to_pytorch\r\n pointer = getattr(pointer, l[0])\r\n File \"/home/tiandan.cxj/python/model_serving_python/lib/python3.5/site-packages/torch/nn/modules/module.py\", line 518, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'BertForPreTraining' object has no attribute 'global_step'", "Hum I will see if I can let people import any kind of TF model in PyTorch, that's a bit risky so it has to be done properly.\r\nIn the meantime you can add `global_step` in the list line 53 of `convert_tf_checkpoint_to_pytorch.py`", "@thomwolf sir, I have the same issue and it isn't resolved. How do I convert my fine-tuned pretrained model to PyTorch?\r\n\r\n```\r\nexport BERT_BASE_DIR=/home/dell/backup/NWP/bert-base-uncased/bert_tensorflow_e100\r\n\r\npytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \\\r\n $BERT_BASE_DIR/model.ckpt-100 \\\r\n $BERT_BASE_DIR/bert_config.json \\\r\n $BERT_BASE_DIR/pytorch_model.bin\r\n\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/dell/Downloads/Downloads/lib/python3.6/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/home/dell/Downloads/Downloads/lib/python3.6/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/__main__.py\", line 19, in <module>\r\n convert_tf_checkpoint_to_pytorch(TF_CHECKPOINT, TF_CONFIG, PYTORCH_DUMP_OUTPUT)\r\n File \"/home/dell/backup/bert_env/lib/python3.6/site-packages/pytorch_pretrained_bert/convert_tf_checkpoint_to_pytorch.py\", line 69, in convert_tf_checkpoint_to_pytorch\r\n pointer = getattr(pointer, l[0])\r\n File \"/home/dell/backup/bert_env/lib/python3.6/site-packages/torch/nn/modules/module.py\", line 535, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'BertForPreTraining' object has no attribute 'global_step'\r\n```\r\nSir, how do I resolve this issue?\r\nThanks.\r\n", "Thanks @thomwolf sir, it was resolved.", "I added global_step to the skipping list in modeling.py. I am still facing the error. Am I missing something?\r\n " ]
1,542
1,554
1,542
NONE
null
AttributeError: 'BertForPreTraining' object has no attribute 'global_step'
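A hedged sketch of the workaround mentioned in the comments: the converter walks the TF variable names and maps them onto `BertForPreTraining` attributes, so names with no PyTorch counterpart must be skipped. The filter below mirrors the existing adam_m/adam_v skip logic with `global_step` added (line numbers and the surrounding conversion loop are omitted):

```python
def should_skip(tf_var_name):
    # optimizer slots (adam_m / adam_v) and the global_step counter have no
    # attribute on BertForPreTraining, so the converter must not map them
    parts = tf_var_name.split("/")
    return any(p in ("adam_v", "adam_m", "global_step") for p in parts)

assert should_skip("cls/seq_relationship/output_weights/adam_m")
assert should_skip("global_step")
assert not should_skip("bert/encoder/layer_0/attention/self/query/kernel")
```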
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/50/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/50/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/49
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/49/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/49/comments
https://api.github.com/repos/huggingface/transformers/issues/49/events
https://github.com/huggingface/transformers/issues/49
383,028,844
MDU6SXNzdWUzODMwMjg4NDQ=
49
Multilingual Issue
{ "login": "hahmyg", "id": 3884429, "node_id": "MDQ6VXNlcjM4ODQ0Mjk=", "avatar_url": "https://avatars.githubusercontent.com/u/3884429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hahmyg", "html_url": "https://github.com/hahmyg", "followers_url": "https://api.github.com/users/hahmyg/followers", "following_url": "https://api.github.com/users/hahmyg/following{/other_user}", "gists_url": "https://api.github.com/users/hahmyg/gists{/gist_id}", "starred_url": "https://api.github.com/users/hahmyg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hahmyg/subscriptions", "organizations_url": "https://api.github.com/users/hahmyg/orgs", "repos_url": "https://api.github.com/users/hahmyg/repos", "events_url": "https://api.github.com/users/hahmyg/events{/privacy}", "received_events_url": "https://api.github.com/users/hahmyg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, you can use the multilingual model as [indicated in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) with the commands:\r\n```python\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-multilingual')\r\nmodel = BertModel.from_pretrained('bert-base-multilingual')\r\n````\r\nThis will load the multilingual vocabulary (which should contain korean) that your command was not loading." ]
1,542
1,542
1,542
NONE
null
Dear authors, I have two questions. First, how can I use the multilingual pre-trained BERT in PyTorch? Is it enough to download the model to $BERT_BASE_DIR? Second is a tokenization issue. For Chinese and Japanese the tokenizer may work, but for Korean it gives a different result than I expected: ``` import torch from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') text = "안녕하세요" tokenized_text = tokenizer.tokenize(text) print(tokenized_text) ``` ` ['ᄋ', '##ᅡ', '##ᆫ', '##ᄂ', '##ᅧ', '##ᆼ', '##ᄒ', '##ᅡ', '##ᄉ', '##ᅦ', '##ᄋ', '##ᅭ'] The tokenization is based not on syllable characters but on decomposed letter-level (jamo) pieces; it may come from a Unicode normalization issue. (I expect ['안녕', '##하세요'])
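A sketch of the multilingual setup, assuming the `bert-base-multilingual` shortcut name from this era of the package; I have not verified the exact subword split its vocabulary produces for this string, so no output is shown:

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel

# the multilingual checkpoint ships its own vocabulary, which covers Korean,
# so no manual $BERT_BASE_DIR download is needed; from_pretrained fetches and caches it
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual')
model = BertModel.from_pretrained('bert-base-multilingual')

tokens = tokenizer.tokenize("안녕하세요")  # subword pieces from the multilingual vocab
ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])
```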
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/49/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/49/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/48
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/48/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/48/comments
https://api.github.com/repos/huggingface/transformers/issues/48/events
https://github.com/huggingface/transformers/issues/48
382,937,718
MDU6SXNzdWUzODI5Mzc3MTg=
48
example for is next sentence
{ "login": "charlesmartin14", "id": 498448, "node_id": "MDQ6VXNlcjQ5ODQ0OA==", "avatar_url": "https://avatars.githubusercontent.com/u/498448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/charlesmartin14", "html_url": "https://github.com/charlesmartin14", "followers_url": "https://api.github.com/users/charlesmartin14/followers", "following_url": "https://api.github.com/users/charlesmartin14/following{/other_user}", "gists_url": "https://api.github.com/users/charlesmartin14/gists{/gist_id}", "starred_url": "https://api.github.com/users/charlesmartin14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/charlesmartin14/subscriptions", "organizations_url": "https://api.github.com/users/charlesmartin14/orgs", "repos_url": "https://api.github.com/users/charlesmartin14/repos", "events_url": "https://api.github.com/users/charlesmartin14/events{/privacy}", "received_events_url": "https://api.github.com/users/charlesmartin14/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I think it should work. You should get a [1, 2] tensor of logits where `predictions[0, 0]` is the score of Next sentence being `True` and `predictions[0, 1]` is the score of Next sentence being `False`. So just take the max of the two (or use a `SoftMax` to get probabilities).\r\nDid you try it?\r\nThe model behaves better on longer sentences of course (it's mainly trained on 512 tokens inputs).", "Closing that for now, feel free to reopen if there is another issue.", "Guys, are [CLS] and [SEP] tokens mandatory for this example?", "This is not super clear, even wrong in the examples, but there is this note in the docstring for `BertModel`:\r\n```\r\n`pooled_output`: a torch.FloatTensor of size [batch_size, hidden_size] which is the output of a\r\n classifier pretrained on top of the hidden state associated to the first character of the\r\n input (`CLF`) to train on the Next-Sentence task (see BERT's paper).\r\n```\r\nThat seems to suggest pretty strongly that you have to put in the `CLF` token.", "```import torch\r\nfrom pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction\r\n\r\n# Load pre-trained model tokenizer (vocabulary)\r\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\r\n\r\n# Tokenized input\r\ntext = \"[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]\"\r\ntokenized_text = tokenizer.tokenize(text)\r\n\r\n# Convert token to vocabulary indices\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)\r\n# Define sentence A and B indices associated to 1st and 2nd sentences (see paper)\r\nsegments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]\r\n\r\n# Convert inputs to PyTorch tensors\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nsegments_tensors = torch.tensor([segments_ids])\r\n\r\n# Load pre-trained model (weights)\r\nmodel = BertForNextSentencePrediction.from_pretrained('bert-base-uncased')\r\nmodel.eval()\r\n\r\n# Predict is Next Sentence ?\r\npredictions = model(tokens_tensor, segments_tensors )\r\n\r\n\r\n\r\n\r\nprint(predictions)\r\n```\r\n\r\n\r\n```\r\ntensor([[ 6.3714, -6.3910]], grad_fn=<AddmmBackward>)\r\n```\r\nHow do I interpret this as true or false?", "Those are the logits, because you did not pass the `next_sentence_label`.\r\n\r\nMy understanding is that you could apply a softmax and get the probability for the sequence to be a possible sequence.\r\n\r\n`Sentence 1: How old are you?`\r\n`Sentence 2: The Eiffel Tower is in Paris`\r\n`tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)`\r\n`Sentence 1: How old are you?`\r\n`Sentence 2: I am 193 years old`\r\n`tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)`\r\n\r\nFor the first example the probability that the second sentence is a probable continuation is very low.\r\nFor the second example the probability is very high (I am looking at the first logit)", "predictions = model(tokens_tensor, segments_tensors )\r\nI ran the code more than once; why do I get different results?\r\nSometimes predictions[0, 0] is higher; other times, for the same sentence pair, predictions[0, 0] is lower.", "Maybe your model is not in evaluation mode (`model.eval()`)?\r\nYou need to do this to deactivate the dropout modules.", "It is OK now. Thanks a lot.", "`error: \r\n--> 197 embeddings = words_embeddings + position_embeddings + token_type_embeddings\r\n 198 embeddings = self.LayerNorm(embeddings)\r\n 199 embeddings = self.dropout(embeddings)\r\nThe size of tensor a (21) must match the size of tensor b (14) at non-singleton dimension 1`\r\n\r\nThe above issue was resolved when I added a few extra 1's and 0's so that tokens_tensor and segments_tensors have matching shapes. Just wondering whether I am using it the right way. \r\n\r\nMy predictions output is a tensor of size 21 x 30522. \r\nI believe the example is meant to predict the word at [MASK]. Can you also please advise how to predict the next sentence? ", "> Maybe your model is not in evaluation mode (`model.eval()`)?\r\n> You need to do this to deactivate the dropout modules.\r\n\r\n@thomwolf Actually, even when I used model.eval() I still got different results. I observed this with every model in the package (BertModel, BertForNextSentencePrediction etc). Only when I fixed the length of the input (e.g. to 128) could I get the same results. In this way I have to pad indexed_tokens with 0 so it has a fixed length.\r\n\r\nCould you explain why it is like this, or did I make a mistake?\r\n\r\nThank you so much!", "> > Maybe your model is not in evaluation mode (`model.eval()`)?\r\n> > You need to do this to deactivate the dropout modules.\r\n> \r\n> @thomwolf Actually, even when I used model.eval() I still got different results. I observed this with every model in the package (BertModel, BertForNextSentencePrediction etc). Only when I fixed the length of the input (e.g. to 128) could I get the same results. In this way I have to pad indexed_tokens with 0 so it has a fixed length.\r\n> \r\n> Could you explain why it is like this, or did I make a mistake?\r\n> \r\n> Thank you so much!\r\n\r\nMake sure\r\n1) input_ids, input_mask, segment_ids have the same length\r\n2) the vocabulary file for the tokenizer is from the same config dir as your bert_config.json\r\n\r\nI had similar symptoms when the vocab and config were from different BERTs", "I noticed that the probability for longer sentences, regardless of how much they are related to the same subject, is higher than for shorter ones. For example, I added some random sentences to the end of the first or second part and observed a significant increase in the first logit value. Is there a way to regularize the model for next sentence prediction? \r\n", "@pbabvey I am observing the same thing.\r\nAre the probabilities length-normalized?", "> Those are the logits, because you did not pass the `next_sentence_label`.\r\n> \r\n> My understanding is that you could apply a softmax and get the probability for the sequence to be a possible sequence.\r\n> \r\n> `Sentence 1: How old are you?`\r\n> `Sentence 2: The Eiffel Tower is in Paris`\r\n> `tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)`\r\n> `Sentence 1: How old are you?`\r\n> `Sentence 2: I am 193 years old`\r\n> `tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)`\r\n> \r\n> For the first example the probability that the second sentence is a probable continuation is very low.\r\n> For the second example the probability is very high (I am looking at the first logit)\r\n\r\nI'm getting different scores for the sentences you tried. 
Please advise why I'm getting this; below is my code.\r\n\r\nimport torch\r\nfrom transformers import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction\r\ntokenizer=BertTokenizer.from_pretrained('bert-base-uncased')\r\nBertNSP=BertForNextSentencePrediction.from_pretrained('bert-base-uncased')\r\n\r\ntext1 = \"How old are you?\"\r\ntext2 = \"The Eiffel Tower is in Paris\"\r\n\r\ntext1_toks = [\"[CLS]\"] + tokenizer.tokenize(text1) + [\"[SEP]\"]\r\ntext2_toks = tokenizer.tokenize(text2) + [\"[SEP]\"]\r\ntext=text1_toks+text2_toks\r\nprint(text)\r\nindexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks)\r\nsegments_ids = [0]*len(text1_toks) + [1]*len(text2_toks)\r\n\r\ntokens_tensor = torch.tensor([indexed_tokens])\r\nsegments_tensors = torch.tensor([segments_ids])\r\nprint(indexed_tokens)\r\nprint(segments_ids)\r\nBertNSP.eval()\r\nprediction = BertNSP(tokens_tensor, segments_tensors)\r\nprediction=prediction[0] # tuple to tensor\r\nprint(prediction)\r\nsoftmax = torch.nn.Softmax(dim=1)\r\nprediction_sm = softmax(prediction)\r\nprint (prediction_sm)\r\n\r\no/p of prediction\r\ntensor([[ 2.1772, -0.8097]], grad_fn=)\r\n\r\no/p of prediction_sm\r\ntensor([[0.9923, 0.0077]], grad_fn=)\r\n\r\nwhy is the score still high (0.9923) even after applying softmax?", "> > Those are the logits, because you did not pass the `next_sentence_label`.\r\n> > My understanding is that you could apply a softmax and get the probability for the sequence to be a possible sequence.\r\n> > `Sentence 1: How old are you?`\r\n> > `Sentence 2: The Eiffel Tower is in Paris`\r\n> > `tensor([[-2.3808, 5.4018]], grad_fn=<AddmmBackward>)`\r\n> > `Sentence 1: How old are you?`\r\n> > `Sentence 2: I am 193 years old`\r\n> > `tensor([[ 6.0164, -5.7138]], grad_fn=<AddmmBackward>)`\r\n> > For the first example the probability that the second sentence is a probable continuation is very low.\r\n> > For the second example the probability is very high (I am looking at the first logit)\r\n> \r\n> I'm getting different scores for the sentences that you have tried. Please advise why I'm getting this; below is my code.\r\n> \r\n> import torch\r\n> from transformers import BertTokenizer, BertModel, BertForMaskedLM,BertForNextSentencePrediction\r\n> tokenizer=BertTokenizer.from_pretrained('bert-base-uncased')\r\n> BertNSP=BertForNextSentencePrediction.from_pretrained('bert-base-uncased')\r\n> \r\n> text1 = \"How old are you?\"\r\n> text2 = \"The Eiffel Tower is in Paris\"\r\n> \r\n> text1_toks = [\"[CLS]\"] + tokenizer.tokenize(text1) + [\"[SEP]\"]\r\n> text2_toks = tokenizer.tokenize(text2) + [\"[SEP]\"]\r\n> text=text1_toks+text2_toks\r\n> print(text)\r\n> indexed_tokens = tokenizer.convert_tokens_to_ids(text1_toks + text2_toks)\r\n> segments_ids = [0]*len(text1_toks) + [1]*len(text2_toks)\r\n> \r\n> tokens_tensor = torch.tensor([indexed_tokens])\r\n> segments_tensors = torch.tensor([segments_ids])\r\n> print(indexed_tokens)\r\n> print(segments_ids)\r\n> BertNSP.eval()\r\n> prediction = BertNSP(tokens_tensor, segments_tensors)\r\n> prediction=prediction[0] # tuple to tensor\r\n> print(prediction)\r\n> softmax = torch.nn.Softmax(dim=1)\r\n> prediction_sm = softmax(prediction)\r\n> print (prediction_sm)\r\n> \r\n> o/p of prediction\r\n> tensor([[ 2.1772, -0.8097]], grad_fn=)\r\n> \r\n> o/p of prediction_sm\r\n> tensor([[0.9923, 0.0077]], grad_fn=)\r\n> \r\n> why is the score still high (0.9923) even after applying softmax?\r\n\r\nI am facing the same issue. 
No matter what sentences I use, I always get very high probability of the second sentence being related to the first.\r\n", "@parth126 have you seen https://github.com/huggingface/transformers/issues/1788 and is it related to your issue?", "> @parth126 have you seen #1788 and is it related to your issue?\r\n\r\nYes it was the same issue. And the solution worked like a charm. \r\nMany thanks @LysandreJik ", "@LysandreJik thanks for the information" ]
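To make the logits-to-probability step discussed in this thread concrete, here is a minimal, self-contained sketch; the logits values are copied from the discussion for illustration and are not fresh model output:

```python
import torch

# A [1, 2] logits tensor of the shape BertForNextSentencePrediction returns
# (illustrative values taken from the thread above).
logits = torch.tensor([[6.0164, -5.7138]])

# Softmax over the two classes: index 0 = "sentence B follows A",
# index 1 = "sentence B does not follow A".
probs = torch.softmax(logits, dim=1)
print("P(is next sentence) = %.4f" % probs[0, 0].item())
```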
1,542
1,574
1,542
NONE
null
Can you make up a working example for 'is next sentence'? Is this expected to work properly? ``` # Load pre-trained model tokenizer (vocabulary) tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # Tokenized input text = "Who was Jim Morrison ? Jim Morrison was a puppeteer" tokenized_text = tokenizer.tokenize(text) # Convert token to vocabulary indices indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) # Define sentence A and B indices associated to 1st and 2nd sentences (see paper) segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1] # Convert inputs to PyTorch tensors tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) # Load pre-trained model (weights) model = BertForNextSentencePrediction.from_pretrained('bert-base-uncased') model.eval() # Predict is Next Sentence ? predictions = model(tokens_tensor, segments_tensors) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/48/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/48/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/47
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/47/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/47/comments
https://api.github.com/repos/huggingface/transformers/issues/47/events
https://github.com/huggingface/transformers/issues/47
382,761,771
MDU6SXNzdWUzODI3NjE3NzE=
47
Fine-Tuned BERT-base on Squad v1.
{ "login": "Maaarcocr", "id": 9624267, "node_id": "MDQ6VXNlcjk2MjQyNjc=", "avatar_url": "https://avatars.githubusercontent.com/u/9624267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Maaarcocr", "html_url": "https://github.com/Maaarcocr", "followers_url": "https://api.github.com/users/Maaarcocr/followers", "following_url": "https://api.github.com/users/Maaarcocr/following{/other_user}", "gists_url": "https://api.github.com/users/Maaarcocr/gists{/gist_id}", "starred_url": "https://api.github.com/users/Maaarcocr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Maaarcocr/subscriptions", "organizations_url": "https://api.github.com/users/Maaarcocr/orgs", "repos_url": "https://api.github.com/users/Maaarcocr/repos", "events_url": "https://api.github.com/users/Maaarcocr/events{/privacy}", "received_events_url": "https://api.github.com/users/Maaarcocr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for the details.\r\nThis PyTorch repo is starting to be used by a larger community so we would have to be a little more precise than just rough numbers if we want to include such pre-trained weights.\r\nIf you want to add your weights to the repo, you should convert the weights in the PyTorch repo model and get evaluation results on SQuAD with the PyTorch model so everybody has a clean knowledge of what they are using. Otherwise I think it's better that people do their own training and know what are the capabilities of the fine-tuned model they are using.\r\nFeel free to come back and re-open the issue if this something you would like to do.\r\n", "@thomwolf On SQuAD v1.1, BERT (single) scored 85.083 EM and 91.835 F1 as reported in their paper but when I fine-tuned BERT using `run_squad.py` I got {\"exact_match\": 81.0975, \"f1\": 88.7005}. Why there is a difference? What I am missing?\r\n\r\n\r\n" ]
1,542
1,555
1,542
NONE
null
I have fine-tuned the TF model on SQuAD v1 and I've made the weights available at: https://s3.eu-west-2.amazonaws.com/nlpfiles/squad_bert_base.tgz I get 88.5 F1 using these weights on SQuAD dev (if I recall correctly, I get roughly 82 EM). I think it may be beneficial to have these weights here, so that people could play with SQuAD and BERT without the need for fine-tuning, which requires a decent enough setup. Let me know what you think!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/47/reactions", "total_count": 6, "+1": 0, "-1": 0, "laugh": 0, "hooray": 6, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/47/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/46
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/46/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/46/comments
https://api.github.com/repos/huggingface/transformers/issues/46/events
https://github.com/huggingface/transformers/issues/46
382,649,103
MDU6SXNzdWUzODI2NDkxMDM=
46
Assertion `srcIndex < srcSelectDimSize` failed.
{ "login": "SparkJiao", "id": 16469472, "node_id": "MDQ6VXNlcjE2NDY5NDcy", "avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SparkJiao", "html_url": "https://github.com/SparkJiao", "followers_url": "https://api.github.com/users/SparkJiao/followers", "following_url": "https://api.github.com/users/SparkJiao/following{/other_user}", "gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions", "organizations_url": "https://api.github.com/users/SparkJiao/orgs", "repos_url": "https://api.github.com/users/SparkJiao/repos", "events_url": "https://api.github.com/users/SparkJiao/events{/privacy}", "received_events_url": "https://api.github.com/users/SparkJiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Your log is very hard to read. Can you format it cleanly?", "I'm so sorry\r\nThe first error log is as follows:\r\n```bash\r\n/opt/conda/conda-bld/pytorch_1532584813488/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [11,0,0], thread: [95,0,0] Assertion \\`srcIndex < srcSelectDimSize\\` failed.\r\n````\r\nAnd then the Traceback finally points to line 1026 torch/nn/functional.py in linear:\r\n`output = input.matmul(weight.t())`\r\nIt seems that somewhere crashed while using `torch.index_select() `, do you think it is because my sentence is too long? I will check other aspects, thank you very much", "It seems like a failed resource allocation.\r\nMaybe you don't have enough RAM or your GPU is too small ?", "My GPU has 12400 MB and I think that's enough, may be I should use 'yield' to input the data one by one? I will load less data to try, thanks u a lot! ", "Ok feel free to re-open the issue if you still have troubles.", "Hi @SparkJiao \r\n\r\nI met the same issue here, how did you resolve this?", "I have the same issue, did you resolve this? @zyfedward @SparkJiao ", "@nv-quan, do you mind opening a new issue with the template so that we may help?", "I have forgot how to reproduce the problem but the `index_select` error usually happened due to wrong index. You can use a smaller batch size and run the script on CPU to check the full traceback since the traceback while using GPU is delayed." ]
1,542
1,591
1,542
NONE
null
Sorry to bother you I recently have used your extract_features.py to extract features of some data set but failed. The error information is as follows: `/opt/conda/conda-bld/pytorch_1532584813488/work/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo<T, IndexType>, TensorInfo<T, IndexType>, TensorInfo<long, IndexType>, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [11,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. Traceback (most recent call last): File "examples/extract_features.py", line 405, in <module> main() File "examples/extract_features.py", line 375, in main all_encoder_layers, _ = model(input_ids, token_type_ids=None, attention_mask=input_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 610, in forward output_all_encoded_layers=output_all_encoded_layers) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 328, in forward hidden_states = layer_module(hidden_states, attention_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 313, in forward attention_output = self.attention(hidden_states, attention_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 273, in forward self_output = self.self(input_tensor, attention_mask) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/pytorch_pretrained_bert/modeling.py", line 224, in forward mixed_query_layer = self.query(hidden_states) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 55, in forward return F.linear(input, self.weight, self.bias) File "/home/jiaofangkai/anaconda3/envs/allennlp-env/lib/python3.7/site-packages/torch/nn/functional.py", line 1026, in linear output = input.matmul(weight.t()) RuntimeError: cublas runtime error : resource allocation failed at /opt/conda/conda-bld/pytorch_1532584813488/work/aten/src/THC/THCGeneral.cpp:333 ` It seems that the index_select function in the models crashed. I read my own data from json files and construct examples from them. I set the batch-size equals 1 and I modified the max_seq_length to the max_length of the input sentences. Thanks for your help!
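For anyone hitting this assertion: it typically means an embedding lookup received an out-of-range index. A hedged sketch of two pre-flight checks (the tensor below is synthetic and deliberately too long, so the second assertion fires; substitute the real `input_ids` built by the feature-extraction script):

```python
import torch

# Synthetic stand-in for a batch of token ids; replace with your real input.
input_ids = torch.randint(0, 30522, (1, 600))

vocab_size = 30522   # bert-base-uncased vocabulary size
max_positions = 512  # BERT's max_position_embeddings

assert int(input_ids.max()) < vocab_size, "token id outside the vocabulary"
assert input_ids.size(1) <= max_positions, (
    "sequence length %d exceeds the %d position embeddings"
    % (input_ids.size(1), max_positions)
)
```

Running on CPU, as suggested in the comments, also surfaces the real IndexError instead of the delayed CUDA assertion.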
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/46/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/46/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/45
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/45/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/45/comments
https://api.github.com/repos/huggingface/transformers/issues/45/events
https://github.com/huggingface/transformers/issues/45
382,579,717
MDU6SXNzdWUzODI1Nzk3MTc=
45
Issue with `bert_model` arg in `run_classifier.py`
{ "login": "llidev", "id": 29957883, "node_id": "MDQ6VXNlcjI5OTU3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/29957883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llidev", "html_url": "https://github.com/llidev", "followers_url": "https://api.github.com/users/llidev/followers", "following_url": "https://api.github.com/users/llidev/following{/other_user}", "gists_url": "https://api.github.com/users/llidev/gists{/gist_id}", "starred_url": "https://api.github.com/users/llidev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llidev/subscriptions", "organizations_url": "https://api.github.com/users/llidev/orgs", "repos_url": "https://api.github.com/users/llidev/repos", "events_url": "https://api.github.com/users/llidev/events{/privacy}", "received_events_url": "https://api.github.com/users/llidev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, please read [this section](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) of the readme." ]
1,542
1,542
1,542
CONTRIBUTOR
null
Hi, I am trying to understand the `bert_model` arg in `run_classifier.py`. In the file, I can see ``` tokenizer = BertTokenizer.from_pretrained(args.bert_model) ``` where `bert_model` is expected to be the vocab text file of the model. However, I also see ``` model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list)) ``` where `bert_model` is expected to be an archive file containing the model checkpoint and config. Please advise on the correct use of `bert_model` if I have my pretrained model converted locally already. Thanks!
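A sketch of what I would expect to work for a locally converted model, per the readme's description of `from_pretrained`; the paths are hypothetical and the archive is assumed to be the one produced by the conversion script:

```python
from pytorch_pretrained_bert import BertTokenizer, BertForSequenceClassification

# Hypothetical local files: the vocab.txt shipped with the TF checkpoint and
# the .tar.gz archive built from the converted pytorch_model.bin + config.
tokenizer = BertTokenizer.from_pretrained("/models/my-bert/vocab.txt")
model = BertForSequenceClassification.from_pretrained(
    "/models/my-bert/my-bert.tar.gz", 2  # second argument: number of labels
)
```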
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/45/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/45/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/44
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/44/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/44/comments
https://api.github.com/repos/huggingface/transformers/issues/44/events
https://github.com/huggingface/transformers/issues/44
382,576,559
MDU6SXNzdWUzODI1NzY1NTk=
44
Race condition when prepare pretrained model in distributed training
{ "login": "llidev", "id": 29957883, "node_id": "MDQ6VXNlcjI5OTU3ODgz", "avatar_url": "https://avatars.githubusercontent.com/u/29957883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/llidev", "html_url": "https://github.com/llidev", "followers_url": "https://api.github.com/users/llidev/followers", "following_url": "https://api.github.com/users/llidev/following{/other_user}", "gists_url": "https://api.github.com/users/llidev/gists{/gist_id}", "starred_url": "https://api.github.com/users/llidev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/llidev/subscriptions", "organizations_url": "https://api.github.com/users/llidev/orgs", "repos_url": "https://api.github.com/users/llidev/repos", "events_url": "https://api.github.com/users/llidev/events{/privacy}", "received_events_url": "https://api.github.com/users/llidev/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "My current workaround is to set the env var `PYTORCH_PRETRAINED_BERT_CACHE` to a different path per process before import `pytorch_pretrained_bert`. But I think the module itself should handle this properly", "I see, thanks for the feedback. I will find a way to make that better in the next release. Not sure we need to store the model gzipped anyway since they mostly contains a torch dump which is already compressed.", "Ok, I've added a `cache_dir` option in `from_pretrained` in the master to specify a different cache dir for a script. I will release the updated version today on pip. Thanks for the feedback.", "Thanks for fixing this. \r\n\r\nSince the way I use this repo is to add ./pytorch_pretrained_bert in PYTHONPATH, so I think directly add the following import in `run_classifier.py` and `run_squad.py` is more appropriate in my case \r\n\r\n```\r\nfrom pytorch_pretrained_bert.file_utils import PYTORCH_PRETRAINED_BERT_CACHE\r\n```\r\n\r\nwhich is included in my PR: https://github.com/huggingface/pytorch-pretrained-BERT/pull/58 " ]
1,542
1,543
1,543
CONTRIBUTOR
null
Hi, I launched two processes per node to run distributed run_classifier.py. However, I am occasionally get below error: ``` 11/20/2018 09:31:48 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmpa25_y4es to cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 93%|█████████▎| 381028352/407873900 [00:11<00:01, 14366075.22B/s] 94%|█████████▍| 383812608/407873900 [00:11<00:01, 16210783.00B/s] 95%|█████████▍| 386455552/407873900 [00:11<00:01, 16205260.89B/s]11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - creating metadata file for /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - removing temp file /tmp/tmpa25_y4es 95%|█████████▌| 388946944/407873900 [00:11<00:01, 18097539.03B/s]11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.modeling - loading archive file https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased.tar.gz from cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba 11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.modeling - extracting archive file /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba to temp dir /tmp/tmpvxvnr8_1 97%|█████████▋| 393660416/407873900 [00:11<00:00, 22199883.93B/s] 98%|█████████▊| 399411200/407873900 [00:11<00:00, 27211860.00B/s] 99%|█████████▉| 405128192/407873900 [00:11<00:00, 32287252.94B/s] 100%|██████████| 407873900/407873900 [00:11<00:00, 34098120.40B/s] 11/20/2018 09:31:49 - INFO - pytorch_pretrained_bert.file_utils - copying /tmp/tmp5fcm4v8x to cache at /root/.pytorch_pretrained_bert/9c41111e2de84547a463fd39217199738d1e3deb72d4fec4399e6e241983c6f0.ae3cef932725ca7a30cdcb93fc6e09150a55e2a130ec7af63975a16c153ae2ba Traceback (most recent call last): File "examples/run_classifier.py", line 629, in <module> main() File "examples/run_classifier.py", line 485, in main model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list)) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/site-packages/pytorch_pretrained_bert-0.1.2-py3.6.egg/pytorch_pretrained_bert/modeling.py", line 495, in from_pretrained archive.extractall(tempdir) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2007, in extractall numeric_owner=numeric_owner) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2049, in extract numeric_owner=numeric_owner) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2119, in _extract_member self.makefile(tarinfo, targetpath) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 2168, in makefile copyfileobj(source, target, tarinfo.size, ReadError, bufsize) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/tarfile.py", line 248, in copyfileobj buf = src.read(bufsize) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/gzip.py", line 276, in read return self._buffer.read(size) File 
"/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/azureml-envs/azureml_49b6ba977c83839baa597001c9b55a6f/lib/python3.6/gzip.py", line 482, in read raise EOFError("Compressed file ended before the " EOFError: Compressed file ended before the end-of-stream marker was reached ``` It looks like a race-condition that two processes are simultaneously writing model file to `/root/.pytorch_pretrained_bert/`. Please help to advice any workaround. Thanks!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/44/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/44/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/43
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/43/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/43/comments
https://api.github.com/repos/huggingface/transformers/issues/43/events
https://github.com/huggingface/transformers/issues/43
382,553,589
MDU6SXNzdWUzODI1NTM1ODk=
43
grad is None in squad example
{ "login": "vpegasus", "id": 22723154, "node_id": "MDQ6VXNlcjIyNzIzMTU0", "avatar_url": "https://avatars.githubusercontent.com/u/22723154?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vpegasus", "html_url": "https://github.com/vpegasus", "followers_url": "https://api.github.com/users/vpegasus/followers", "following_url": "https://api.github.com/users/vpegasus/following{/other_user}", "gists_url": "https://api.github.com/users/vpegasus/gists{/gist_id}", "starred_url": "https://api.github.com/users/vpegasus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vpegasus/subscriptions", "organizations_url": "https://api.github.com/users/vpegasus/orgs", "repos_url": "https://api.github.com/users/vpegasus/repos", "events_url": "https://api.github.com/users/vpegasus/events{/privacy}", "received_events_url": "https://api.github.com/users/vpegasus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Oh you're right. I've just fixed that. you can try to pull the current master and test again.", "@thomwolf it works, thanks" ]
1,542
1,542
1,542
NONE
null
Hi guys, I tried the `run_squad` example and got ``` Traceback (most recent call last): | 0/7331 [00:00<?, ?it/s] File "examples/run_squad.py", line 973, in <module> main() File "examples/run_squad.py", line 904, in main param.grad.data = param.grad.data / args.loss_scale AttributeError: 'NoneType' object has no attribute 'data' ``` I found that one of the param.grads is None, so param.grad.data doesn't exist. By the way, I downloaded the data myself from the URLs in this project. My OS is Ubuntu 18.04, PyTorch 0.4.1, GPU 1080 Ti. Has anyone else encountered this situation? Help wanted, please; thanks in advance...
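The fix referenced in the comments amounts to guarding the rescaling loop against parameters that received no gradient. A minimal stand-alone sketch (a tiny linear layer stands in for the real model):

```python
import torch

model = torch.nn.Linear(4, 2)  # stand-in for BERT; illustrative only
loss_scale = 128.0

loss = model(torch.randn(3, 4)).sum() * loss_scale
loss.backward()

for param in model.parameters():
    if param.grad is not None:  # some parameters may receive no gradient
        param.grad.data = param.grad.data / loss_scale
```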
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/43/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/43/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/42
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/42/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/42/comments
https://api.github.com/repos/huggingface/transformers/issues/42/events
https://github.com/huggingface/transformers/pull/42
382,492,723
MDExOlB1bGxSZXF1ZXN0MjMyMTg2NjE0
42
Fixed UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2
{ "login": "weiyumou", "id": 9312916, "node_id": "MDQ6VXNlcjkzMTI5MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/9312916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weiyumou", "html_url": "https://github.com/weiyumou", "followers_url": "https://api.github.com/users/weiyumou/followers", "following_url": "https://api.github.com/users/weiyumou/following{/other_user}", "gists_url": "https://api.github.com/users/weiyumou/gists{/gist_id}", "starred_url": "https://api.github.com/users/weiyumou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weiyumou/subscriptions", "organizations_url": "https://api.github.com/users/weiyumou/orgs", "repos_url": "https://api.github.com/users/weiyumou/repos", "events_url": "https://api.github.com/users/weiyumou/events{/privacy}", "received_events_url": "https://api.github.com/users/weiyumou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!" ]
1,542
1,542
1,542
CONTRIBUTOR
null
I encountered `UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 3793: ordinal not in range(128)` when running the starter example shown under the Usage section. It turned out to be related to the `load_vocab` function in `tokenization.py`. Forcing `open` to use encoding `utf8` solved this issue on my machine.
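A sketch of the fix (not the exact library code): pin the encoding when reading the vocabulary instead of relying on the platform default codec.

```python
import collections

def load_vocab(vocab_file):
    # Explicit utf-8 avoids the platform-default codec errors reported here.
    vocab = collections.OrderedDict()
    with open(vocab_file, "r", encoding="utf-8") as reader:
        for index, line in enumerate(reader):
            token = line.strip()
            if token:
                vocab[token] = index
    return vocab
```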
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/42/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/42/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/42", "html_url": "https://github.com/huggingface/transformers/pull/42", "diff_url": "https://github.com/huggingface/transformers/pull/42.diff", "patch_url": "https://github.com/huggingface/transformers/pull/42.patch", "merged_at": 1542704990000 }
https://api.github.com/repos/huggingface/transformers/issues/41
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/41/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/41/comments
https://api.github.com/repos/huggingface/transformers/issues/41/events
https://github.com/huggingface/transformers/issues/41
382,489,751
MDU6SXNzdWUzODI0ODk3NTE=
41
Typo in README
{ "login": "weiyumou", "id": 9312916, "node_id": "MDQ6VXNlcjkzMTI5MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/9312916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/weiyumou", "html_url": "https://github.com/weiyumou", "followers_url": "https://api.github.com/users/weiyumou/followers", "following_url": "https://api.github.com/users/weiyumou/following{/other_user}", "gists_url": "https://api.github.com/users/weiyumou/gists{/gist_id}", "starred_url": "https://api.github.com/users/weiyumou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weiyumou/subscriptions", "organizations_url": "https://api.github.com/users/weiyumou/orgs", "repos_url": "https://api.github.com/users/weiyumou/repos", "events_url": "https://api.github.com/users/weiyumou/events{/privacy}", "received_events_url": "https://api.github.com/users/weiyumou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes" ]
1,542
1,542
1,542
CONTRIBUTOR
null
I think I spotted a typo in the README file under the Usage header. There is a piece of code that uses `BertTokenizer` and the typo is on this line: `tokenized_text = "Who was Jim Henson ? Jim Henson was a puppeteer"` I think `tokenized_text` should be replaced with `text`, since the next line is `tokenized_text = tokenizer.tokenize(text)`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/41/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/41/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/40
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/40/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/40/comments
https://api.github.com/repos/huggingface/transformers/issues/40/events
https://github.com/huggingface/transformers/pull/40
382,327,249
MDExOlB1bGxSZXF1ZXN0MjMyMDYwODU1
40
update pip package name
{ "login": "joelgrus", "id": 1308313, "node_id": "MDQ6VXNlcjEzMDgzMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1308313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joelgrus", "html_url": "https://github.com/joelgrus", "followers_url": "https://api.github.com/users/joelgrus/followers", "following_url": "https://api.github.com/users/joelgrus/following{/other_user}", "gists_url": "https://api.github.com/users/joelgrus/gists{/gist_id}", "starred_url": "https://api.github.com/users/joelgrus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joelgrus/subscriptions", "organizations_url": "https://api.github.com/users/joelgrus/orgs", "repos_url": "https://api.github.com/users/joelgrus/repos", "events_url": "https://api.github.com/users/joelgrus/events{/privacy}", "received_events_url": "https://api.github.com/users/joelgrus/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,542
1,542
1,542
CONTRIBUTOR
null
dashes not underscores
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/40/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/40/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/40", "html_url": "https://github.com/huggingface/transformers/pull/40", "diff_url": "https://github.com/huggingface/transformers/pull/40.diff", "patch_url": "https://github.com/huggingface/transformers/pull/40.patch", "merged_at": 1542657287000 }
https://api.github.com/repos/huggingface/transformers/issues/39
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/39/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/39/comments
https://api.github.com/repos/huggingface/transformers/issues/39/events
https://github.com/huggingface/transformers/issues/39
382,300,869
MDU6SXNzdWUzODIzMDA4Njk=
39
Command-line interface Document Bug
{ "login": "delldu", "id": 31266222, "node_id": "MDQ6VXNlcjMxMjY2MjIy", "avatar_url": "https://avatars.githubusercontent.com/u/31266222?v=4", "gravatar_id": "", "url": "https://api.github.com/users/delldu", "html_url": "https://github.com/delldu", "followers_url": "https://api.github.com/users/delldu/followers", "following_url": "https://api.github.com/users/delldu/following{/other_user}", "gists_url": "https://api.github.com/users/delldu/gists{/gist_id}", "starred_url": "https://api.github.com/users/delldu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/delldu/subscriptions", "organizations_url": "https://api.github.com/users/delldu/orgs", "repos_url": "https://api.github.com/users/delldu/repos", "events_url": "https://api.github.com/users/delldu/events{/privacy}", "received_events_url": "https://api.github.com/users/delldu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks!" ]
1,542
1,542
1,542
NONE
null
There is a bug in README.md about Command-line interface: `export BERT_BASE_DIR=chinese_L-12_H-768_A-12` **Wrong:** ``` pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \ --tf_checkpoint_path $BERT_BASE_DIR/bert_model.ckpt.index \ --bert_config_file $BERT_BASE_DIR/bert_config.json \ --pytorch_dump_path $BERT_BASE_DIR/pytorch_model.bin ``` **Right:** ``` pytorch_pretrained_bert convert_tf_checkpoint_to_pytorch \ $BERT_BASE_DIR/bert_model.ckpt.index \ $BERT_BASE_DIR/bert_config.json \ $BERT_BASE_DIR/pytorch_model.bin ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/39/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/39/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/38
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/38/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/38/comments
https://api.github.com/repos/huggingface/transformers/issues/38/events
https://github.com/huggingface/transformers/issues/38
382,297,444
MDU6SXNzdWUzODIyOTc0NDQ=
38
truncated normal initializer
{ "login": "ruotianluo", "id": 16023153, "node_id": "MDQ6VXNlcjE2MDIzMTUz", "avatar_url": "https://avatars.githubusercontent.com/u/16023153?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ruotianluo", "html_url": "https://github.com/ruotianluo", "followers_url": "https://api.github.com/users/ruotianluo/followers", "following_url": "https://api.github.com/users/ruotianluo/following{/other_user}", "gists_url": "https://api.github.com/users/ruotianluo/gists{/gist_id}", "starred_url": "https://api.github.com/users/ruotianluo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ruotianluo/subscriptions", "organizations_url": "https://api.github.com/users/ruotianluo/orgs", "repos_url": "https://api.github.com/users/ruotianluo/repos", "events_url": "https://api.github.com/users/ruotianluo/events{/privacy}", "received_events_url": "https://api.github.com/users/ruotianluo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "We could try that. Not sure how important it is though. Did you try it?", "Ok I think we will stick to the normal_initializer for now. Thanks for indicating this option!" ]
1,542
1,543
1,543
NONE
null
I have a reasonable truncated normal approximation. (Actually that is what tf does). https://discuss.pytorch.org/t/implementing-truncated-normal-initializer/4778/16?u=ruotianluo
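For reference, a sketch of the resampling approximation from the linked thread (not the initializer this repo ships): draw from a normal and redraw anything beyond two standard deviations, which is roughly what tf.truncated_normal does.

```python
import torch

def truncated_normal_(tensor, mean=0.0, std=0.02):
    with torch.no_grad():
        tensor.normal_(mean, std)
        invalid = (tensor - mean).abs() > 2 * std
        while invalid.any():
            # Redraw only the out-of-range entries.
            tensor[invalid] = torch.randn(int(invalid.sum())) * std + mean
            invalid = (tensor - mean).abs() > 2 * std
    return tensor

weight = torch.empty(768, 768)
truncated_normal_(weight, std=0.02)
```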
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/38/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/38/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/37
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/37/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/37/comments
https://api.github.com/repos/huggingface/transformers/issues/37/events
https://github.com/huggingface/transformers/issues/37
382,265,174
MDU6SXNzdWUzODIyNjUxNzQ=
37
using BERT as a language Model
{ "login": "mdasadul", "id": 8009589, "node_id": "MDQ6VXNlcjgwMDk1ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/8009589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mdasadul", "html_url": "https://github.com/mdasadul", "followers_url": "https://api.github.com/users/mdasadul/followers", "following_url": "https://api.github.com/users/mdasadul/following{/other_user}", "gists_url": "https://api.github.com/users/mdasadul/gists{/gist_id}", "starred_url": "https://api.github.com/users/mdasadul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mdasadul/subscriptions", "organizations_url": "https://api.github.com/users/mdasadul/orgs", "repos_url": "https://api.github.com/users/mdasadul/repos", "events_url": "https://api.github.com/users/mdasadul/events{/privacy}", "received_events_url": "https://api.github.com/users/mdasadul/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "I don't think you can do that with Bert. The masked LM loss is not a Language Modeling loss, it doesn't work nicely with the [chain rule](https://en.wikipedia.org/wiki/Chain_rule_%28probability%29) like the usual Language Modeling loss.\r\nPlease see the discussion on the TensorFlow repo on that [here](https://github.com/google-research/bert/issues/35).", "Hello @thomwolf I can see it is possible to assign score by using [BERT ](https://github.com/google-research/bert/issues/139#issuecomment-441322849). By masking each word sequentially. Then score sentence by summary of word score. Here is how people were doing it for [Tensorflow](https://github.com/xu-song/bert-as-language-model). I am trying to do following\r\n\r\n```\r\nimport numpy as np\r\nimport torch\r\nfrom pytorch_pretrained_bert import BertTokenizer,BertForMaskedLM\r\n# Load pre-trained model (weights)\r\nwith torch.no_grad():\r\n model = BertForMaskedLM.from_pretrained('bert-large-cased')\r\n model.eval()\r\n # Load pre-trained model tokenizer (vocabulary)\r\n tokenizer = BertTokenizer.from_pretrained('bert-large-cased')\r\ndef score(sentence):\r\n tokenize_input = tokenizer.tokenize(sentence)\r\n tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n sentence_loss=0.\r\n for i,word in enumerate(tokenize_input):\r\n\r\n tokenize_input[i]='[MASK]'\r\n mask_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n word_loss=model(mask_input, masked_lm_labels=tensor_input).data.numpy()\r\n sentence_loss +=word_loss\r\n #print(\"Word: %s : %f\"%(word, np.exp(-word_loss)))\r\n\r\n return np.exp(sentence_loss/len(tokenize_input))\r\n\r\n```\r\n\r\n```\r\nscore(\"There is a book on the table\")\r\n88.899999\r\n```\r\nIs it the right way to assign score using BERT?\r\n\r\n", "> Hello @thomwolf I can see it is possible to assign score by using [BERT ](https://github.com/google-research/bert/issues/139#issuecomment-441322849). By masking each word sequentially. Then score sentence by summary of word score. Here is how people were doing it for [Tensorflow](https://github.com/xu-song/bert-as-language-model). 
I am trying to do following\r\n> \r\n> ```\r\n> import numpy as np\r\n> import torch\r\n> from pytorch_pretrained_bert import BertTokenizer,BertForMaskedLM\r\n> # Load pre-trained model (weights)\r\n> with torch.no_grad():\r\n> model = BertForMaskedLM.from_pretrained('bert-large-cased')\r\n> model.eval()\r\n> # Load pre-trained model tokenizer (vocabulary)\r\n> tokenizer = BertTokenizer.from_pretrained('bert-large-cased')\r\n> def score(sentence):\r\n> tokenize_input = tokenizer.tokenize(sentence)\r\n> tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n> sentence_loss=0.\r\n> for i,word in enumerate(tokenize_input):\r\n> \r\n> tokenize_input[i]='[MASK]'\r\n> mask_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])\r\n> word_loss=model(mask_input, masked_lm_labels=tensor_input).data.numpy()\r\n> sentence_loss +=word_loss\r\n> #print(\"Word: %s : %f\"%(word, np.exp(-word_loss)))\r\n> \r\n> return np.exp(sentence_loss/len(tokenize_input))\r\n> ```\r\n> \r\n> ```\r\n> score(\"There is a book on the table\")\r\n> 88.899999\r\n> ```\r\n> \r\n> Is it the right way to assign score using BERT?\r\n\r\nNo, you masked the word but did not restore it.", "@mdasadul Did you manage to do it?", "Yes, please check my tweet on this @mdasaduluofa", "@mdasadul Do you mean this one?\r\nhttps://twitter.com/mdasaduluofa/status/1181917072999231489/photo/1 \r\nI see this is for GPT-2; do you have code for BERT?", "It should be similar. The following code is for DistilBERT:\r\n```\r\nimport math\r\nfrom torch.multiprocessing import TimeoutError, Pool,set_start_method,Queue\r\nimport torch.multiprocessing as mp\r\nimport torch\r\nfrom transformers import DistilBertTokenizer,DistilBertForMaskedLM\r\nfrom flask import Flask,request\r\nimport json\r\n\r\ntry:\r\n    set_start_method('spawn')\r\nexcept RuntimeError:\r\n    pass\r\n\r\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\ndef load_model():\r\n    model = DistilBertForMaskedLM.from_pretrained('distilbert-base-uncased').to(device)\r\n    model.eval()\r\n    tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')\r\n    return tokenizer, model\r\n\r\ntokenizer, model =load_model()\r\n\r\ndef score(sentence):\r\n    if len(sentence.strip().split())<=1 : return 10000\r\n    tokenize_input = tokenizer.tokenize(sentence)\r\n    if len(tokenize_input)>512: return 10000\r\n    input_ids = torch.tensor(tokenizer.encode(tokenize_input)).unsqueeze(0).to(device)\r\n    with torch.no_grad():\r\n        loss=model(input_ids,masked_lm_labels = input_ids)[0]\r\n    return math.exp(loss.item()/len(tokenize_input))\r\n```\r\n", "@mdasadul I get the error:\r\n`TypeError: forward() got an unexpected keyword argument 'masked_lm_labels'`\r\nAlso, can you please explain why the following steps are necessary:\r\n1. `unsqueeze(0)`\r\n2. add `torch.no_grad()`\r\n3. add `model.eval()`", "The score is equivalent to perplexity. 
Hence the lower the score, the better the sentence, right?", "Yes, that is right.", "@mdasadul I get the error:\r\n```\r\n    return math.exp(loss.item() / len(tokenize_input))\r\nValueError: only one element tensors can be converted to Python scalars\r\n```\r\nAny idea why?", "Yes, your sentence needs to be longer than 1 word. The PPL of a 1-word sentence doesn't mean anything. Please try with longer sentences.", "@mdasadul I have a sentence with more than 1 word and still get the error\r\nsentence is `' Harry had never believed he would'`\r\ninput_ids is tensor`([[ 101, 4302, 2018, 2196, 3373, 2002, 2052, 102]])`", "Below is an example from the official docs on how to implement GPT-2 to determine perplexity. \r\n\r\nhttps://huggingface.co./transformers/perplexity.html", "@EricFillion But how can it be used for a sentence, not for a dataset?\r\nMeaning I want the perplexity of the sentence:\r\n`Harry had never believed he would`", "@orenschonlab Try below \r\n```\r\nimport torch\r\nimport sys\r\nimport numpy as np\r\n \r\nfrom transformers import GPT2Tokenizer, GPT2LMHeadModel\r\n# Load pre-trained model (weights)\r\nwith torch.no_grad():\r\n    model = GPT2LMHeadModel.from_pretrained('gpt2')\r\n    model.eval()\r\n# Load pre-trained model tokenizer (vocabulary)\r\ntokenizer = GPT2Tokenizer.from_pretrained('gpt2')\r\n\r\ndef score(sentence):\r\n    tokenize_input = tokenizer.encode(sentence)\r\n    tensor_input = torch.tensor([tokenize_input])\r\n    loss=model(tensor_input, labels=tensor_input)[0]\r\n    return np.exp(loss.detach().numpy())\r\n \r\nif __name__=='__main__':\r\n    for line in sys.stdin:\r\n        if line.strip() !='':\r\n            print(line.strip()+'\\t'+ str(score(line.strip())))\r\n        else:\r\n            break\r\n```", "> @EricFillion But how can it be used for a sentence, not for a dataset?\r\n> Meaning I want the perplexity of the sentence:\r\n> `Harry had never believed he would`\r\n\r\nI just played around with the code @mdasadul posted above. It works perfectly and is nice and concise. It outputted the same scores as the official documentation for short inputs. \r\n\r\nIf you're still interested in using the method from the official documentation, then you can replace "'\\n\\n'.join(test['text'])" with the text you wish to determine the perplexity of. You'll also want to add ".item()" to ppl to convert the tensor to a float. 
", "This repo is quite useful. It supports Huggingface models.\r\n\r\nhttps://github.com/awslabs/mlm-scoring" ]
1,542
1,627
1,542
NONE
null
I was trying to use BERT as a language model to assign a score (e.g. a PPL score) to a given sentence. Something like P("He is go to school")=0.008 P("He is going to school")=0.08 which would indicate that the probability of the second sentence is higher than that of the first. Is there a way to get a score like this? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/37/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/37/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/36
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/36/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/36/comments
https://api.github.com/repos/huggingface/transformers/issues/36/events
https://github.com/huggingface/transformers/issues/36
382,054,626
MDU6SXNzdWUzODIwNTQ2MjY=
36
How to detokenize a BertTokenizer output?
{ "login": "bprabhakar", "id": 5628886, "node_id": "MDQ6VXNlcjU2Mjg4ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/5628886?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bprabhakar", "html_url": "https://github.com/bprabhakar", "followers_url": "https://api.github.com/users/bprabhakar/followers", "following_url": "https://api.github.com/users/bprabhakar/following{/other_user}", "gists_url": "https://api.github.com/users/bprabhakar/gists{/gist_id}", "starred_url": "https://api.github.com/users/bprabhakar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bprabhakar/subscriptions", "organizations_url": "https://api.github.com/users/bprabhakar/orgs", "repos_url": "https://api.github.com/users/bprabhakar/repos", "events_url": "https://api.github.com/users/bprabhakar/events{/privacy}", "received_events_url": "https://api.github.com/users/bprabhakar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You can remove ' ##' but you cannot know if there was a space around punctuations tokens or uppercase words.", "Yes. I don't plan to include a reverse conversion of tokens in the tokenizer.\r\nFor an example on how to keep track of the original characters position, please read the `run_squad.py` example.", "In my case, I do: \r\n```\r\ntokens = ['[UNK]', '[CLS]', '[SEP]', 'want', '##ed', 'wa', 'un', 'runn', '##ing', ',']\r\ntext = ' '.join([x for x in tokens])\r\nfine_text = text.replace(' ##', '')\r\n```\r\n", "Apostrophe is considered as a punctuation mark, but often it is an integrated part of the word. Regular `.tokenize()` always converts apostrophe to the stand alone token, so the information to which word it belongs is lost. If the original sentence contains apostrophes, it is impossible to recreate the original sentence from its' tokens (for example when apostrophe is a last symbol in some word `convert_tokens_to_string()` will join it with the following one). In order to overcome this, one can check the surroundings of the apostrophe and add `##` immediately after the tokenization. For example:\r\n```\r\nsent = \"The Smiths' used their son's car\" \r\ntokens = tokenizer.tokenize(sent)\r\n```\r\nnow if you fix `tokens` to look like:\r\n\r\n**original** `=>['the', 'smith', '##s', \"'\", 'used', 'their', 'son', \"'\", 's', 'car']`\r\n**fixed** ` => ['the', 'smith', '##s', \"##'\", 'used', 'their', 'son', \"##'\", '##s', 'car']`\r\n\r\nyou will be able to restore the original words.\r\n", "@thomwolf could you point to the specific section of `run_squad.py` that handles this, I'm having trouble\r\n\r\nEDIT: is it this bit from `processors/squad.py`? \r\n```python\r\ntok_to_orig_index = []\r\n orig_to_tok_index = []\r\n all_doc_tokens = []\r\n for (i, token) in enumerate(example.doc_tokens):\r\n orig_to_tok_index.append(len(all_doc_tokens))\r\n sub_tokens = tokenizer.tokenize(token)\r\n for sub_token in sub_tokens:\r\n tok_to_orig_index.append(i)\r\n all_doc_tokens.append(sub_token)\r\n```" ]
1,542
1,576
1,542
NONE
null
I was wondering if there's a proper way of detokenizing the output tokens, i.e., constructing the sentence back from the tokens, considering that the word-piece tokenization introduces lots of `#`s?
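For quick experiments, the join-and-strip trick from the comments above can be wrapped as a best-effort helper (spacing around punctuation and original casing are unrecoverable, as noted there):

```python
def detokenize(tokens):
    # Best-effort WordPiece reversal: merge '##' continuations into the
    # preceding token; lost casing and spacing cannot be restored.
    return " ".join(tokens).replace(" ##", "")

print(detokenize(["want", "##ed", "wa", "un", "runn", "##ing", ","]))
# -> "wanted wa un running ,"
```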
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/36/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/36/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/35
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/35/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/35/comments
https://api.github.com/repos/huggingface/transformers/issues/35/events
https://github.com/huggingface/transformers/issues/35
381,998,040
MDU6SXNzdWUzODE5OTgwNDA=
35
issues with accents on convert_ids_to_tokens()
{ "login": "perezjln", "id": 5373778, "node_id": "MDQ6VXNlcjUzNzM3Nzg=", "avatar_url": "https://avatars.githubusercontent.com/u/5373778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/perezjln", "html_url": "https://github.com/perezjln", "followers_url": "https://api.github.com/users/perezjln/followers", "following_url": "https://api.github.com/users/perezjln/following{/other_user}", "gists_url": "https://api.github.com/users/perezjln/gists{/gist_id}", "starred_url": "https://api.github.com/users/perezjln/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/perezjln/subscriptions", "organizations_url": "https://api.github.com/users/perezjln/orgs", "repos_url": "https://api.github.com/users/perezjln/repos", "events_url": "https://api.github.com/users/perezjln/events{/privacy}", "received_events_url": "https://api.github.com/users/perezjln/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This is expected behaviour and is how the multilingual and the uncased models were trained. From the [original repo](https://github.com/google-research/bert/blob/master/README.md):\r\n\r\n> We are releasing the BERT-Base and BERT-Large models from the paper. Uncased means that the text has been lowercased before WordPiece tokenization, e.g., John Smith becomes john smith. The Uncased model also strips out any accent markers. \r\n\r\n", "Yes this is expected." ]
1,542
1,542
1,542
NONE
null
Hello, the BertTokenizer seems to lose accents when convert_ids_to_tokens() is used: Example: - original sentence: "great breakfasts in a nice furnished cafè, slightly bohemian." - corresponding list of tokens produced: ['great', 'breakfast', '##s', 'in', 'a', 'nice', 'fur', '##nis', '##hed', 'cafe', ',', 'slightly', 'bohemia', '##n', '.'] Here the problem is in "cafe", which loses its accent. I'm using BertTokenizer.from_pretrained('Bert-base-multilingual') as the tokenizer; I also tried with "Bert-base-uncased" and experienced the same issue. Thanks for this great work!
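What the uncased/multilingual preprocessing does to accents can be reproduced conceptually in a few lines (a sketch mirroring Unicode NFD accent stripping, not the tokenizer's exact code):

```python
import unicodedata

def strip_accents(text):
    # Decompose characters and drop combining marks: "cafè" -> "cafe".
    text = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in text if unicodedata.category(ch) != "Mn")

print(strip_accents("cafè"))  # cafe
```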
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/35/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/35/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/34
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/34/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/34/comments
https://api.github.com/repos/huggingface/transformers/issues/34/events
https://github.com/huggingface/transformers/issues/34
381,965,833
MDU6SXNzdWUzODE5NjU4MzM=
34
Can not find vocabulary file for Chinese model
{ "login": "zlinao", "id": 33000929, "node_id": "MDQ6VXNlcjMzMDAwOTI5", "avatar_url": "https://avatars.githubusercontent.com/u/33000929?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zlinao", "html_url": "https://github.com/zlinao", "followers_url": "https://api.github.com/users/zlinao/followers", "following_url": "https://api.github.com/users/zlinao/following{/other_user}", "gists_url": "https://api.github.com/users/zlinao/gists{/gist_id}", "starred_url": "https://api.github.com/users/zlinao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zlinao/subscriptions", "organizations_url": "https://api.github.com/users/zlinao/orgs", "repos_url": "https://api.github.com/users/zlinao/repos", "events_url": "https://api.github.com/users/zlinao/events{/privacy}", "received_events_url": "https://api.github.com/users/zlinao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "need to specify the path of vocab.txt for:\r\ntokenizer = BertTokenizer.from_pretrained(args.bert_model)", "@zlinao ,i try to load the vocab using the following code:\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-chinese//vocab.txt\"\r\n\r\nhowever,get errors\r\n11/19/2018 15:33:13 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file bert-base-chinese//vocab.txt\r\nTraceback (most recent call last):\r\n File \"E:/PythonWorkSpace/PytorchBert/BertTest/torchTest.py\", line 6, in <module>\r\n tokenizer = BertTokenizer.from_pretrained(\"bert-base-chinese//vocab.txt\")\r\n File \"C:\\anaconda\\lib\\site-packages\\pytorch_pretrained_bert-0.1.2-py3.6.egg\\pytorch_pretrained_bert\\tokenization.py\", line 141, in from_pretrained\r\n File \"C:\\anaconda\\lib\\site-packages\\pytorch_pretrained_bert-0.1.2-py3.6.egg\\pytorch_pretrained_bert\\tokenization.py\", line 95, in __init__\r\n File \"C:\\anaconda\\lib\\site-packages\\pytorch_pretrained_bert-0.1.2-py3.6.egg\\pytorch_pretrained_bert\\tokenization.py\", line 70, in load_vocab\r\nUnicodeDecodeError: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequenc\r\n\r\ndo you have the same problem?", "Hi,\r\nWhy don't you guys just do `tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')` as [indicated in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) and the `run_classifier.py` example?", "> Hi,\r\n> Why don't you guys just do `tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')` as [indicated in the readme](https://github.com/huggingface/pytorch-pretrained-BERT#loading-google-ais-pre-trained-weigths-and-pytorch-dump) and the `run_classifier.py` example?\r\n\r\nYes, it is easier to use shortcut name. Thanks for your great work.", "> @zlinao ,i try to load the vocab using the following code:\r\n> tokenizer = BertTokenizer.from_pretrained(\"bert-base-chinese//vocab.txt\"\r\n> \r\n> however,get errors\r\n> 11/19/2018 15:33:13 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file bert-base-chinese//vocab.txt\r\n> Traceback (most recent call last):\r\n> File \"E:/PythonWorkSpace/PytorchBert/BertTest/torchTest.py\", line 6, in \r\n> tokenizer = BertTokenizer.from_pretrained(\"bert-base-chinese//vocab.txt\")\r\n> File \"C:\\anaconda\\lib\\site-packages\\pytorch_pretrained_bert-0.1.2-py3.6.egg\\pytorch_pretrained_bert\\tokenization.py\", line 141, in from_pretrained\r\n> File \"C:\\anaconda\\lib\\site-packages\\pytorch_pretrained_bert-0.1.2-py3.6.egg\\pytorch_pretrained_bert\\tokenization.py\", line 95, in **init**\r\n> File \"C:\\anaconda\\lib\\site-packages\\pytorch_pretrained_bert-0.1.2-py3.6.egg\\pytorch_pretrained_bert\\tokenization.py\", line 70, in load_vocab\r\n> UnicodeDecodeError: 'gbk' codec can't decode byte 0x81 in position 1564: illegal multibyte sequenc\r\n> \r\n> do you have the same problem?\r\n\r\nyou can change you encoding to 'utf-8' when you load the vocab.txt" ]
1,542
1,542
1,542
NONE
null
After I convert the TF model to pytorch model, I run a classification task on a new Chinese dataset, but get this: CUDA_VISIBLE_DEVICES=3 python run_classifier.py --task_name weibo --do_eval --do_train --bert_model chinese_L-12_H-768_A-12 --max_seq_length 128 --train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir bert_result 11/18/2018 21:56:59 - INFO - __main__ - device cuda n_gpu 1 distributed training False 11/18/2018 21:56:59 - INFO - pytorch_pretrained_bert.tokenization - loading vocabulary file chinese_L-12_H-768_A-12 Traceback (most recent call last): File "run_classifier.py", line 661, in <module> main() File "run_classifier.py", line 508, in main tokenizer = BertTokenizer.from_pretrained(args.bert_model) File "/home/lin/jpmorgan/pytorch-pretrained-BERT/pytorch_pretrained_bert/tokenization.py", line 141, in from_pretrained tokenizer = cls(resolved_vocab_file, do_lower_case) File "/home/lin/jpmorgan/pytorch-pretrained-BERT/pytorch_pretrained_bert/tokenization.py", line 94, in __init__ "model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file)) ValueError: Can't find a vocabulary file at path 'chinese_L-12_H-768_A-12'. To load the vocabulary from a Google pretrained model use `tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/34/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/34/timeline
completed
null
null
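The `UnicodeDecodeError: 'gbk' codec ...` in the issue #34 thread above is a default-encoding problem: `open()` without an explicit encoding uses the locale codec, which on a Chinese-locale Windows machine is gbk. A minimal sketch of a vocabulary loader that reads `vocab.txt` as UTF-8, as the last comment suggests (this mirrors the shape of the library's `load_vocab`, simplified):

```python
import collections

def load_vocab(vocab_file):
    """Load a BERT vocab.txt into an ordered token -> index mapping."""
    vocab = collections.OrderedDict()
    # Explicit utf-8 avoids falling back to the locale codec (e.g. gbk),
    # which cannot decode the multilingual/Chinese vocabulary file.
    with open(vocab_file, "r", encoding="utf-8") as reader:
        for index, line in enumerate(reader):
            vocab[line.rstrip("\n")] = index
    return vocab
```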
https://api.github.com/repos/huggingface/transformers/issues/33
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/33/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/33/comments
https://api.github.com/repos/huggingface/transformers/issues/33/events
https://github.com/huggingface/transformers/issues/33
381,939,792
MDU6SXNzdWUzODE5Mzk3OTI=
33
[Bug report] Ineffective no_decay when using BERTAdam
{ "login": "xiaoda99", "id": 6015633, "node_id": "MDQ6VXNlcjYwMTU2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/6015633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaoda99", "html_url": "https://github.com/xiaoda99", "followers_url": "https://api.github.com/users/xiaoda99/followers", "following_url": "https://api.github.com/users/xiaoda99/following{/other_user}", "gists_url": "https://api.github.com/users/xiaoda99/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiaoda99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaoda99/subscriptions", "organizations_url": "https://api.github.com/users/xiaoda99/orgs", "repos_url": "https://api.github.com/users/xiaoda99/repos", "events_url": "https://api.github.com/users/xiaoda99/events{/privacy}", "received_events_url": "https://api.github.com/users/xiaoda99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "You're right, thanks!" ]
1,542
1,542
1,542
CONTRIBUTOR
null
https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_classifier.py#L505-L508 With this code, all parameters are decayed because the condition "parameter_name in no_decay" will never be satisfied. I've made a PR #32 to fix it.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/33/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/33/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/32
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/32/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/32/comments
https://api.github.com/repos/huggingface/transformers/issues/32/events
https://github.com/huggingface/transformers/pull/32
381,939,230
MDExOlB1bGxSZXF1ZXN0MjMxNzc1MTI1
32
Fix ineffective no_decay bug when using BERTAdam
{ "login": "xiaoda99", "id": 6015633, "node_id": "MDQ6VXNlcjYwMTU2MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/6015633?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xiaoda99", "html_url": "https://github.com/xiaoda99", "followers_url": "https://api.github.com/users/xiaoda99/followers", "following_url": "https://api.github.com/users/xiaoda99/following{/other_user}", "gists_url": "https://api.github.com/users/xiaoda99/gists{/gist_id}", "starred_url": "https://api.github.com/users/xiaoda99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiaoda99/subscriptions", "organizations_url": "https://api.github.com/users/xiaoda99/orgs", "repos_url": "https://api.github.com/users/xiaoda99/repos", "events_url": "https://api.github.com/users/xiaoda99/events{/privacy}", "received_events_url": "https://api.github.com/users/xiaoda99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "thanks!", "Question - wouldn't `.named_parameters()` for the model return a tuple `(name, param_tensor)`, where name looks similar to these\r\n```\r\n['bert.embeddings.word_embeddings.weight',\r\n 'bert.embeddings.position_embeddings.weight',\r\n 'bert.embeddings.token_type_embeddings.weight',\r\n 'bert.embeddings.LayerNorm.weight',\r\n 'bert.embeddings.LayerNorm.bias',\r\n 'bert.encoder.layer.0.attention.self.query.weight',\r\n 'bert.encoder.layer.0.attention.self.query.bias',\r\n 'bert.encoder.layer.0.attention.self.key.weight',\r\n 'bert.encoder.layer.0.attention.self.key.bias',\r\n 'bert.encoder.layer.0.attention.self.value.weight',\r\n 'bert.encoder.layer.0.attention.self.value.bias',\r\n 'bert.encoder.layer.0.attention.output.dense.weight',\r\n 'bert.encoder.layer.0.attention.output.dense.bias',\r\n 'bert.encoder.layer.0.attention.output.LayerNorm.weight',\r\n 'bert.encoder.layer.0.attention.output.LayerNorm.bias',\r\n...\r\n...\r\n'classifier.linear.weight',\r\n'classifier.linear.bias']\r\n``` \r\ntherefore requiring slightly smarter conditions than just `in`? Something along the lines?\r\n```\r\n[p for n, p in param_optimizer if any(True for x in no_decay if n.endswith(x))]\r\n```", "Don't mind my comment, tested it further this morning and everything seems to work as expected!" ]
1,542
1,557
1,542
CONTRIBUTOR
null
With the original code, all parameters are decayed because the condition "parameter_name in no_decay" will never be satisfied.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/32/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/32/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/32", "html_url": "https://github.com/huggingface/transformers/pull/32", "diff_url": "https://github.com/huggingface/transformers/pull/32.diff", "patch_url": "https://github.com/huggingface/transformers/pull/32.patch", "merged_at": 1542705107000 }
https://api.github.com/repos/huggingface/transformers/issues/31
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/31/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/31/comments
https://api.github.com/repos/huggingface/transformers/issues/31/events
https://github.com/huggingface/transformers/issues/31
381,920,522
MDU6SXNzdWUzODE5MjA1MjI=
31
BERT model for Machine Translation
{ "login": "KeremTurgutlu", "id": 19826777, "node_id": "MDQ6VXNlcjE5ODI2Nzc3", "avatar_url": "https://avatars.githubusercontent.com/u/19826777?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KeremTurgutlu", "html_url": "https://github.com/KeremTurgutlu", "followers_url": "https://api.github.com/users/KeremTurgutlu/followers", "following_url": "https://api.github.com/users/KeremTurgutlu/following{/other_user}", "gists_url": "https://api.github.com/users/KeremTurgutlu/gists{/gist_id}", "starred_url": "https://api.github.com/users/KeremTurgutlu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KeremTurgutlu/subscriptions", "organizations_url": "https://api.github.com/users/KeremTurgutlu/orgs", "repos_url": "https://api.github.com/users/KeremTurgutlu/repos", "events_url": "https://api.github.com/users/KeremTurgutlu/events{/privacy}", "received_events_url": "https://api.github.com/users/KeremTurgutlu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi Kerem, I don't think so. Have a look at the fairsep repo maybe.", "@thomwolf hi there, I couldn't find out anything about the fairsep repo. Could you post a link? Thanks!", "Hi, I am talking about this repo: https://github.com/pytorch/fairseq.\r\nHave a look at their Transformer's models for machine translation.", "I have conducted several MT experiments which fixed the embeddings by using BERT, **UNFORTUNATELY**, I find it makes performance worse. @JasonVann @thomwolf ", "Hey! \r\n\r\nFAIR has demonstrated that using BERT for unsupervised translation greatly improves BLEU.\r\n\r\nPaper: https://arxiv.org/abs/1901.07291\r\n\r\nRepo: https://github.com/facebookresearch/XLM\r\n\r\nOlder papers showing pre-training with LM (not MLM) helps Seq2Seq: https://arxiv.org/abs/1611.02683\r\n\r\nHope this helps!", "These links are useful. \r\n\r\nDoes anyone know if BERT improves things also for supervised translation?\r\n\r\nThanks. ", "> Does anyone know if BERT improves things also for supervised translation?\r\n\r\nAlso interested", "Because BERT is an encoder, I guess we need a decoder. I looked here: https://jalammar.github.io/\r\nand it seems Openai Transformer is a decoder. But I cannot find a repo for it. \r\nhttps://www.tensorflow.org/alpha/tutorials/text/transformer \r\nI think Bert outputs a vector of size 768. Can we just do a `reshape` and use the decoder in that transformer notebook? In general can I just `reshape` and try out a bunch of decoders?", "> These links are useful.\r\n> \r\n> Does anyone know if BERT improves things also for supervised translation?\r\n> \r\n> Thanks.\r\n\r\nhttps://arxiv.org/pdf/1901.07291.pdf seems to suggest that it does improve the results for supervised translation as well. However this paper is not about using BERT embeddings, rather about pre-training the encoder and decoder on an Masked Language Modelling objective. The biggest benefit comes from initializing the encoder with the weights from BERT, and surprisingly using it to initialize the decoder also brings small benefits, even though if I understand correctly you still have to randomly initialize the weights for the encoder attention module, since it's not present in the pre-trained network.\r\n\r\nEDIT: of course the pre-trained network needs to have been trained on multi-lingual data, as stated in the paper", "I have managed to replace transformer's encoder with a pretrained bert encoder, however experiment results were very poor. It dropped BLEU score by about 4\r\n\r\nThe source code is available here: https://github.com/torshie/bert-nmt , implemented as a fairseq user model. It may not work out of box, some minor tweeks may be needed.", "Could be relevant:\r\n\r\n[Towards Making the Most of BERT in Neural Machine Translation](https://arxiv.org/pdf/1908.05672.pdf)\r\n[On the use of BERT for Neural Machine Translation](https://arxiv.org/pdf/1909.12744.pdf)", "Also have a look at [MASS](https://github.com/microsoft/MASS) and [XLM](https://github.com/facebookresearch/XLM).", "Yes. It is possible to use both BERT as encoder and GPT as decoder and glue them together. \r\nThere is a recent paper on this: Multilingual Translation via Grafting Pre-trained Language Models\r\nhttps://aclanthology.org/2021.findings-emnlp.233.pdf\r\nhttps://github.com/sunzewei2715/Graformer" ]
1,542
1,636
1,542
NONE
null
Is there a way to use any of the provided pre-trained models in the repository for machine translation task? Thanks
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/31/reactions", "total_count": 12, "+1": 12, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/31/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/30
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/30/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/30/comments
https://api.github.com/repos/huggingface/transformers/issues/30/events
https://github.com/huggingface/transformers/issues/30
381,872,071
MDU6SXNzdWUzODE4NzIwNzE=
30
[Feature request] Add example of finetuning the pretrained models on custom corpus
{ "login": "elyase", "id": 1175888, "node_id": "MDQ6VXNlcjExNzU4ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elyase", "html_url": "https://github.com/elyase", "followers_url": "https://api.github.com/users/elyase/followers", "following_url": "https://api.github.com/users/elyase/following{/other_user}", "gists_url": "https://api.github.com/users/elyase/gists{/gist_id}", "starred_url": "https://api.github.com/users/elyase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elyase/subscriptions", "organizations_url": "https://api.github.com/users/elyase/orgs", "repos_url": "https://api.github.com/users/elyase/repos", "events_url": "https://api.github.com/users/elyase/events{/privacy}", "received_events_url": "https://api.github.com/users/elyase/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi I don't plan to add that in the near future but feel free to open a PR if you would like to share an additional example.", "Necrobumping this for reference, as this is addressed in https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/examples/run_lm_finetuning.py" ]
1,542
1,547
1,542
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/30/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/30/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/29
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/29/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/29/comments
https://api.github.com/repos/huggingface/transformers/issues/29/events
https://github.com/huggingface/transformers/pull/29
381,853,838
MDExOlB1bGxSZXF1ZXN0MjMxNzIyMjU4
29
First release
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,542
1,542
1,542
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/29/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/29/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/29", "html_url": "https://github.com/huggingface/transformers/pull/29", "diff_url": "https://github.com/huggingface/transformers/pull/29.diff", "patch_url": "https://github.com/huggingface/transformers/pull/29.patch", "merged_at": 1542453709000 }
https://api.github.com/repos/huggingface/transformers/issues/28
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/28/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/28/comments
https://api.github.com/repos/huggingface/transformers/issues/28/events
https://github.com/huggingface/transformers/issues/28
381,835,436
MDU6SXNzdWUzODE4MzU0MzY=
28
speed is very slow
{ "login": "susht3", "id": 12723964, "node_id": "MDQ6VXNlcjEyNzIzOTY0", "avatar_url": "https://avatars.githubusercontent.com/u/12723964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susht3", "html_url": "https://github.com/susht3", "followers_url": "https://api.github.com/users/susht3/followers", "following_url": "https://api.github.com/users/susht3/following{/other_user}", "gists_url": "https://api.github.com/users/susht3/gists{/gist_id}", "starred_url": "https://api.github.com/users/susht3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susht3/subscriptions", "organizations_url": "https://api.github.com/users/susht3/orgs", "repos_url": "https://api.github.com/users/susht3/repos", "events_url": "https://api.github.com/users/susht3/events{/privacy}", "received_events_url": "https://api.github.com/users/susht3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Running on a GPU, I find that dumping extracted features takes up most time. So you may optimize it yourself. ", "Hi, these examples are provided as starting point to write your own training scripts using the package modules. I don't plan to update them any further." ]
1,542
1,542
1,542
NONE
null
convert samples to features, is very slow
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/28/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/28/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/27
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/27/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/27/comments
https://api.github.com/repos/huggingface/transformers/issues/27/events
https://github.com/huggingface/transformers/issues/27
381,833,694
MDU6SXNzdWUzODE4MzM2OTQ=
27
how to load checkpoint?
{ "login": "susht3", "id": 12723964, "node_id": "MDQ6VXNlcjEyNzIzOTY0", "avatar_url": "https://avatars.githubusercontent.com/u/12723964?v=4", "gravatar_id": "", "url": "https://api.github.com/users/susht3", "html_url": "https://github.com/susht3", "followers_url": "https://api.github.com/users/susht3/followers", "following_url": "https://api.github.com/users/susht3/following{/other_user}", "gists_url": "https://api.github.com/users/susht3/gists{/gist_id}", "starred_url": "https://api.github.com/users/susht3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/susht3/subscriptions", "organizations_url": "https://api.github.com/users/susht3/orgs", "repos_url": "https://api.github.com/users/susht3/repos", "events_url": "https://api.github.com/users/susht3/events{/privacy}", "received_events_url": "https://api.github.com/users/susht3/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Converting TensorFlow checkpoint from ../dataset/bert/uncased_L-12_H-768_A-12/bert_model\r\nTraceback (most recent call last):\r\n File \"convert_tf_checkpoint_to_pytorch.py\", line 111, in <module>\r\n convert()\r\n File \"convert_tf_checkpoint_to_pytorch.py\", line 60, in convert\r\n init_vars = tf.train.list_variables(path)\r\n File \"/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_utils.py\", line 95, in list_variables\r\n reader = load_checkpoint(ckpt_dir_or_file)\r\n File \"/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/training/checkpoint_utils.py\", line 64, in load_checkpoint\r\n return pywrap_tensorflow.NewCheckpointReader(filename)\r\n File \"/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py\", line 326, in NewCheckpointReader\r\n return CheckpointReader(compat.as_bytes(filepattern), status)\r\n File \"/home/susht3/local/anaconda3/envs/susht/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py\", line 528, in __exit__\r\n c_api.TF_GetCode(self.status.status))\r\ntensorflow.python.framework.errors_impl.NotFoundError: Unsuccessful TensorSliceReader constructor: Failed to find any matching files for ../dataset/bert/uncased_L-12_H-768_A-12/bert_model", "@susht3 what was your fix? ", "I encountered a similar issue and didn't find a solution with ALBERT. I tried using the `export_checkpoint.py` file in ALBERT and sent that into the `convert_tf_checkpoint_to_pytorch` command and there was no error. However the resulting `pytorch.bin` output was unusable :\\", "@dan-hu-spring do you mind opening a new issue with your issue, so that we may take a look?" ]
1,542
1,591
1,542
NONE
null
i download the model from bert, it only has model.ckpt.data,model.ckpt.meta and model.ckpt.index, i donnot which to load, what is checkpoint file for convert.py?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/27/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/27/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26/comments
https://api.github.com/repos/huggingface/transformers/issues/26/events
https://github.com/huggingface/transformers/issues/26
381,718,424
MDU6SXNzdWUzODE3MTg0MjQ=
26
Checkpoints not saved
{ "login": "ylhsieh", "id": 9377337, "node_id": "MDQ6VXNlcjkzNzczMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/9377337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ylhsieh", "html_url": "https://github.com/ylhsieh", "followers_url": "https://api.github.com/users/ylhsieh/followers", "following_url": "https://api.github.com/users/ylhsieh/following{/other_user}", "gists_url": "https://api.github.com/users/ylhsieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ylhsieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ylhsieh/subscriptions", "organizations_url": "https://api.github.com/users/ylhsieh/orgs", "repos_url": "https://api.github.com/users/ylhsieh/repos", "events_url": "https://api.github.com/users/ylhsieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ylhsieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "In the `run_squad.py`script, I added the following lines after the training loop:\r\n\r\n```\r\nlogger.info(***** Saving fine-tuned model *****)\r\noutput_model_file = os.path.join(args.output_dir, \"pytorch_model.bin\")\r\nif n_gpu > 1:\r\n torch.save(model.module.bert.state_dict(), output_model_file)\r\nelse:\r\n torch.save(model.bert.state_dict(), output_model_file)\r\n```\r\n\r\nThe code runs and I was able to load the model to test on the Adversarial SQuAD datasets.\r\n\r\nI do not use the other `run_*` scripts but this may be applicable as well.\r\n\r\nEdit: the files have been modified in the latest commits so I think it's now necessary to check the loading of fine-tuned models in the script.", "You are right this argument was not used. I removed it, thanks. These examples are provided as starting point to write training scripts for the package module. I don't plan to update them any further (except fixing bugs).", "> In the `run_squad.py`script, I added the following lines after the training loop:\r\n> \r\n> ```\r\n> logger.info(***** Saving fine-tuned model *****)\r\n> output_model_file = os.path.join(args.output_dir, \"pytorch_model.bin\")\r\n> if n_gpu > 1:\r\n> torch.save(model.module.bert.state_dict(), output_model_file)\r\n> else:\r\n> torch.save(model.bert.state_dict(), output_model_file)\r\n> ```\r\n> The code runs and I was able to load the model to test on the Adversarial SQuAD datasets.\r\n> \r\n> I do not use the other `run_*` scripts but this may be applicable as well.\r\n> \r\n> Edit: the files have been modified in the latest commits so I think it's now necessary to check the loading of fine-tuned models in the script.\r\n\r\nwhat is your result on adversarial-squad?", "At that time I got:\r\n**AddSent**\r\nBERT base 58.7 EM / 66.2 F1\r\nBERT large 65.5 EM / 71.9 F1\r\n\r\n**AddOneSent**\r\nBERT base 67.0 EM / 74.7 F1\r\nBERT large 72.7 EM / 79.1 F1\r\n\r\n\r\n\r\n", "> At that time I got:\r\n> **AddSent**\r\n> BERT base 58.7 EM / 66.2 F1\r\n> BERT large 65.5 EM / 71.9 F1\r\n> \r\n> **AddOneSent**\r\n> BERT base 67.0 EM / 74.7 F1\r\n> BERT large 72.7 EM / 79.1 F1\r\n\r\nThanks a lot! Do you release your paper? i want to cite your result and paper in my paper.", "Unfortunately it was not part of a paper, just preliminary results." ]
1,542
1,551
1,542
NONE
null
There is an option `save_checkpoints_steps` that seems to control checkpointing. However, there is no actual saving operation in the `run_*` scripts. So, should we add that functionality or remove this argument?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/25
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/25/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/25/comments
https://api.github.com/repos/huggingface/transformers/issues/25/events
https://github.com/huggingface/transformers/issues/25
381,490,584
MDU6SXNzdWUzODE0OTA1ODQ=
25
can you push the run-pretraining and create_pretraining_data codes?
{ "login": "koukoulala", "id": 30341159, "node_id": "MDQ6VXNlcjMwMzQxMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koukoulala", "html_url": "https://github.com/koukoulala", "followers_url": "https://api.github.com/users/koukoulala/followers", "following_url": "https://api.github.com/users/koukoulala/following{/other_user}", "gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}", "starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions", "organizations_url": "https://api.github.com/users/koukoulala/orgs", "repos_url": "https://api.github.com/users/koukoulala/repos", "events_url": "https://api.github.com/users/koukoulala/events{/privacy}", "received_events_url": "https://api.github.com/users/koukoulala/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I don't have plan for that in the near future." ]
1,542
1,542
1,542
NONE
null
just want to study codes, don't need to have same pre-train performance.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/25/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/25/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/24
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/24/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/24/comments
https://api.github.com/repos/huggingface/transformers/issues/24/events
https://github.com/huggingface/transformers/issues/24
381,387,717
MDU6SXNzdWUzODEzODc3MTc=
24
[Feature request] Port SQuAD 2.0 support
{ "login": "elyase", "id": 1175888, "node_id": "MDQ6VXNlcjExNzU4ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elyase", "html_url": "https://github.com/elyase", "followers_url": "https://api.github.com/users/elyase/followers", "following_url": "https://api.github.com/users/elyase/following{/other_user}", "gists_url": "https://api.github.com/users/elyase/gists{/gist_id}", "starred_url": "https://api.github.com/users/elyase/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elyase/subscriptions", "organizations_url": "https://api.github.com/users/elyase/orgs", "repos_url": "https://api.github.com/users/elyase/repos", "events_url": "https://api.github.com/users/elyase/events{/privacy}", "received_events_url": "https://api.github.com/users/elyase/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I don't have plan for that in the near future but feel free to open a PR." ]
1,542
1,542
1,542
CONTRIBUTOR
null
Recently the Google team added support for Squad 2.0: https://github.com/google-research/bert/commit/60454702590a6c69bd45c5d4258c7e17b8a3e1da Would be great to also have it available in the Pytorch version.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/24/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/24/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/23
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/23/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/23/comments
https://api.github.com/repos/huggingface/transformers/issues/23/events
https://github.com/huggingface/transformers/issues/23
381,250,921
MDU6SXNzdWUzODEyNTA5MjE=
23
ValueError while using --optimize_on_cpu
{ "login": "rsanjaykamath", "id": 18527321, "node_id": "MDQ6VXNlcjE4NTI3MzIx", "avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rsanjaykamath", "html_url": "https://github.com/rsanjaykamath", "followers_url": "https://api.github.com/users/rsanjaykamath/followers", "following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}", "gists_url": "https://api.github.com/users/rsanjaykamath/gists{/gist_id}", "starred_url": "https://api.github.com/users/rsanjaykamath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rsanjaykamath/subscriptions", "organizations_url": "https://api.github.com/users/rsanjaykamath/orgs", "repos_url": "https://api.github.com/users/rsanjaykamath/repos", "events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}", "received_events_url": "https://api.github.com/users/rsanjaykamath/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks! I pushed a fix for that, you can try it again. You should be able to increase a bit the batch size.\r\n\r\nBy the way, the real batch size that is used on the gpu is `train_batch_size / gradient_accumulation_steps` so `2` in your case. I think you should be able to go to `3` with `--optimize_on_cpu`\r\n\r\nThe recommended batch_size to get good results (EM, F1) with BERT large on SQuaD is `24`. You can try the following possibilities to get to this batch_size:\r\n- keeping the same 'real batch size' that you currently have but just a bigger batch_size `--train_batch_size 24 --gradient_accumulation_steps 12`\r\n- trying a 'real batch size' of 3 with optimization on cpu `--train_batch_size 24 --gradient_accumulation_steps 8 --optimize_on_cpu`\r\n- switching to fp16 (implies optimization on cpu): `--train_batch_size 24 --gradient_accumulation_steps 6 or 4 --fp16`\r\n\r\nIf your GPU supports fp16, the last solution should be the fastest, otherwise the second should be the fastest. The first solution should work out-of-the box and give better results (EM, F1) but you won't have any speed-up.", "Should be fixed now. Don't hesitate to re-open an issue if needed. Thanks for the feedback!", "Yes it works now! \r\n\r\nWith \r\n\r\n> --train_batch_size 24 --gradient_accumulation_steps 8 --optimize_on_cpu\r\n\r\nI get {\"exact_match\": 83.78429517502366, \"f1\": 90.75733469379139} which is pretty close.\r\n\r\nThanks for this amazing work! " ]
1,542
1,542
1,542
NONE
null
> Traceback (most recent call last): | 1/87970 [00:00<8:35:35, 2.84it/s] File "./run_squad.py", line 990, in <module> main() File "./run_squad.py", line 922, in main is_nan = set_optimizer_params_grad(param_optimizer, model.named_parameters(), test_nan=True) File "./run_squad.py", line 691, in set_optimizer_params_grad if test_nan and torch.isnan(param_model.grad).sum() > 0: File "/people/sanjay/anaconda2/envs/bert_pytorch/lib/python3.5/site-packages/torch/functional.py", line 289, in isnan raise ValueError("The argument is not a tensor", str(tensor)) ValueError: ('The argument is not a tensor', 'None') Command: CUDA_VISIBLE_DEVICES=0 python ./run_squad.py \ --vocab_file bert_large/uncased_L-24_H-1024_A-16/vocab.txt \ --bert_config_file bert_large/uncased_L-24_H-1024_A-16/bert_config.json \ --init_checkpoint bert_large/uncased_L-24_H-1024_A-16/pytorch_model.bin \ --do_lower_case \ --do_train \ --do_predict \ --train_file squad_dir/train-v1.1.json \ --predict_file squad_dir/dev-v1.1.json \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir outputs \ --train_batch_size 4 \ --gradient_accumulation_steps 2 \ --optimize_on_cpu Error while using --optimize_on_cpu only. Works fine without the argument. GPU: Nvidia GTX 1080Ti Single GPU. PS: I can only fit in train_batch_size 4 on the memory of a single GPU.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/23/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/23/timeline
completed
null
null
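The batch-size arithmetic in the issue #23 reply above ("the real batch size that is used on the gpu is `train_batch_size / gradient_accumulation_steps`") is plain gradient accumulation. A toy, self-contained sketch of the pattern, with model, data and step counts chosen purely for illustration:

```python
import torch
from torch import nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(3, 10), torch.randn(3, 1)) for _ in range(8)]

accumulation_steps = 4  # effective batch = 4 micro-batches of 3 = 12 examples
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    # Scale the loss so the summed gradients match one large-batch step.
    loss = nn.functional.mse_loss(model(x), y) / accumulation_steps
    loss.backward()  # gradients accumulate across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```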
https://api.github.com/repos/huggingface/transformers/issues/22
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/22/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/22/comments
https://api.github.com/repos/huggingface/transformers/issues/22/events
https://github.com/huggingface/transformers/pull/22
381,097,721
MDExOlB1bGxSZXF1ZXN0MjMxMTQ0MTAx
22
adding `no_cuda` flag
{ "login": "rahular", "id": 1104544, "node_id": "MDQ6VXNlcjExMDQ1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1104544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rahular", "html_url": "https://github.com/rahular", "followers_url": "https://api.github.com/users/rahular/followers", "following_url": "https://api.github.com/users/rahular/following{/other_user}", "gists_url": "https://api.github.com/users/rahular/gists{/gist_id}", "starred_url": "https://api.github.com/users/rahular/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rahular/subscriptions", "organizations_url": "https://api.github.com/users/rahular/orgs", "repos_url": "https://api.github.com/users/rahular/repos", "events_url": "https://api.github.com/users/rahular/events{/privacy}", "received_events_url": "https://api.github.com/users/rahular/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks, I've added that manually (the library organization has changed a bit with the first pip release)." ]
1,542
1,542
1,542
CONTRIBUTOR
null
The `--no_cuda` flag is missing from the flagset in `extract_features.py`. On running the current code, the following error occurs. ``` (py3.5) [rahul pytorch-pretrained-BERT]$ python extract_features.py \ > --input_file=./input.txt \ > --output_file=./output.jsonl \ > --vocab_file=$BERT_BASE_DIR/vocab.txt \ > --bert_config_file=$BERT_BASE_DIR/bert_config.json \ > --init_checkpoint=$BERT_BASE_DIR/pytorch_model.bin \ > --layers=-4 \ > --max_seq_length=128 \ > --batch_size=8 Traceback (most recent call last): File "extract_features.py", line 306, in <module> main() File "extract_features.py", line 223, in main device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") AttributeError: 'Namespace' object has no attribute 'no_cuda' ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/22/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/22/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/22", "html_url": "https://github.com/huggingface/transformers/pull/22", "diff_url": "https://github.com/huggingface/transformers/pull/22.diff", "patch_url": "https://github.com/huggingface/transformers/pull/22.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/21
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/21/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/21/comments
https://api.github.com/repos/huggingface/transformers/issues/21/events
https://github.com/huggingface/transformers/pull/21
381,038,724
MDExOlB1bGxSZXF1ZXN0MjMxMDk4MjMy
21
Fix some glitches in extract_features.py
{ "login": "cnrpman", "id": 9862022, "node_id": "MDQ6VXNlcjk4NjIwMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/9862022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cnrpman", "html_url": "https://github.com/cnrpman", "followers_url": "https://api.github.com/users/cnrpman/followers", "following_url": "https://api.github.com/users/cnrpman/following{/other_user}", "gists_url": "https://api.github.com/users/cnrpman/gists{/gist_id}", "starred_url": "https://api.github.com/users/cnrpman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cnrpman/subscriptions", "organizations_url": "https://api.github.com/users/cnrpman/orgs", "repos_url": "https://api.github.com/users/cnrpman/repos", "events_url": "https://api.github.com/users/cnrpman/events{/privacy}", "received_events_url": "https://api.github.com/users/cnrpman/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks, I've pushed these fixes in the first release (the organization of the library changed quite a bit)." ]
1,542
1,542
1,542
NONE
null
Do the following fixing to make the extract_features.py runnable: 1. Add no_cuda argument 2. Fix the "not all arguments converted during string formatting" error thrown at line 230
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/21/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/21/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/21", "html_url": "https://github.com/huggingface/transformers/pull/21", "diff_url": "https://github.com/huggingface/transformers/pull/21.diff", "patch_url": "https://github.com/huggingface/transformers/pull/21.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/20
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/20/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/20/comments
https://api.github.com/repos/huggingface/transformers/issues/20/events
https://github.com/huggingface/transformers/issues/20
380,581,495
MDU6SXNzdWUzODA1ODE0OTU=
20
model loading the checkpoint error
{ "login": "TIANRENK", "id": 35832397, "node_id": "MDQ6VXNlcjM1ODMyMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/35832397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TIANRENK", "html_url": "https://github.com/TIANRENK", "followers_url": "https://api.github.com/users/TIANRENK/followers", "following_url": "https://api.github.com/users/TIANRENK/following{/other_user}", "gists_url": "https://api.github.com/users/TIANRENK/gists{/gist_id}", "starred_url": "https://api.github.com/users/TIANRENK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TIANRENK/subscriptions", "organizations_url": "https://api.github.com/users/TIANRENK/orgs", "repos_url": "https://api.github.com/users/TIANRENK/repos", "events_url": "https://api.github.com/users/TIANRENK/events{/privacy}", "received_events_url": "https://api.github.com/users/TIANRENK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "But I print the model.embeddings.token_type_embeddings it was Embedding(16,768) .", "which model are you loading?", "> which model are you loading?\r\n\r\nthe pre-trained model chinese_L-12_H-768_A-12", "mycode:\r\nbert_config = BertConfig.from_json_file('bert_config.json')\r\nmodel=BertModel(bert_config)\r\nmodel.load_state_dict(torch.load('pytorch_model.bin'))\r\n\r\nThe error:\r\nRuntimeError: Error(s) in loading state_dict for BertModel:\r\n\tsize mismatch for embeddings.token_type_embeddings.weight: copying a param of torch.Size([16, 768]) from checkpoint, where the shape is torch.Size([2, 768]) in current model.\r\n", "I'm testing the chinese model.\r\nDo you use the `config.json` of the chinese_L-12_H-768_A-12 ?\r\n Can you send the content of your `config_json` ?", "> I'm testing the chinese model.\r\n> Do you use the `config.json` of the chinese_L-12_H-768_A-12 ?\r\n> Can you send the content of your `config_json` ?\r\n\r\nIn the 'config.json' of the chinese_L-12_H-768_A-12 ,the type_vocab_size=2.But I change the config.type_vocab_size=16, it still error.", "> I'm testing the chinese model.\r\n> Do you use the `config.json` of the chinese_L-12_H-768_A-12 ?\r\n> Can you send the content of your `config_json` ?\r\n\r\n{\r\n \"attention_probs_dropout_prob\": 0.1, \r\n \"directionality\": \"bidi\", \r\n \"hidden_act\": \"gelu\", \r\n \"hidden_dropout_prob\": 0.1, \r\n \"hidden_size\": 768, \r\n \"initializer_range\": 0.02, \r\n \"intermediate_size\": 3072, \r\n \"max_position_embeddings\": 512, \r\n \"num_attention_heads\": 12, \r\n \"num_hidden_layers\": 12, \r\n \"pooler_fc_size\": 768, \r\n \"pooler_num_attention_heads\": 12, \r\n \"pooler_num_fc_layers\": 3, \r\n \"pooler_size_per_head\": 128, \r\n \"pooler_type\": \"first_token_transform\", \r\n \"type_vocab_size\": 2, \r\n \"vocab_size\": 21128\r\n}\r\n\r\n\r\nI change my code:\r\nbert_config = BertConfig.from_json_file('bert_config.json')\r\nbert_config.type_vocab_size=16\r\nmodel=BertModel(bert_config)\r\nmodel.load_state_dict(torch.load('pytorch_model.bin'))\r\n\r\nit still error.", "> I see you have `\"type_vocab_size\": 2` in your config file, how is that?\r\n\r\nYes,but I change it in my code.", "> is your `pytorch_model.bin` the good converted model of the chinese one (and not of an English one)?\r\n\r\nI think it's good.", "Ok, I have the models. I think `type_vocab_size` should be 2 also for chinese. I am wondering why it is 16 in your `pytorch_model.bin`", "I have no idea.Did my model make the wrong convert?", "I am testing that right now. I haven't played with the multi-lingual models yet.", "> I am testing that right now. I haven't played with the multi-lingual models yet.\r\n\r\nI also use it for the first time.I am looking forward to your test results.", "> I am testing that right now. 
I haven't played with the multi-lingual models yet.\r\n\r\nWhen I was converting the model .\r\n\r\nTraceback (most recent call last):\r\n File \"convert_tf_checkpoint_to_pytorch.py\", line 95, in <module>\r\n convert()\r\n File \"convert_tf_checkpoint_to_pytorch.py\", line 85, in convert\r\n assert pointer.shape == array.shape\r\nAssertionError: (torch.Size([16, 768]), (2, 768)) ", "are you supplying a config file with `\"type_vocab_size\": 2` to the conversion script?", "> are you supplying a config file with `\"type_vocab_size\": 2` to the conversion script?\r\n\r\nI used the 'bert_config.json' of the chinese_L-12_H-768_A-12 when I was converting .", "Ok, I think I found the issue, your BertConfig is not build from the configuration file for some reason and thus use the default value of `type_vocab_size` in BertConfig which is 16.\r\n\r\nThis error happen on my system when I use `config = BertConfig('bert_config.json')` instead of `config = BertConfig.from_json_file('bert_config.json')`.\r\n\r\nI will make sure these two ways of initializing the configuration file (from parameters or from json file) cannot be messed up.", "> 运行时错误:加载 BertModel state_dict时出错:embeddings.token_type_embeddings 的大小不匹配.weight:\r\n> 复制火炬参数。大小([16, 768]) 从检查点开始,其中形状为火炬。当前模型中的大小([2, 768]\r\n\r\ni have the same problem as you. did you solve the problem?" ]
1,542
1,684
1,542
NONE
null
RuntimeError: Error(s) in loading state_dict for BertModel: size mismatch for embeddings.token_type_embeddings.weight: copying a param of torch.Size([16, 768]) from checkpoint, where the shape is torch.Size([2, 768]) in current model.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/20/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/20/timeline
completed
null
null
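The maintainer's diagnosis at the end of the issue #20 thread above is that `BertConfig('bert_config.json')` does not parse the file, so `type_vocab_size` silently keeps its default of 16 and the (16, 768) vs (2, 768) mismatch follows. A sketch of the working path taken from the thread itself (file paths are placeholders, and the top-level import layout is an assumption about the early `pytorch_pretrained_bert` package):

```python
import torch
from pytorch_pretrained_bert import BertConfig, BertModel

# from_json_file actually reads the JSON, so type_vocab_size becomes 2
# as in the chinese_L-12_H-768_A-12 config, matching the checkpoint.
config = BertConfig.from_json_file("bert_config.json")
model = BertModel(config)
model.load_state_dict(torch.load("pytorch_model.bin"))
```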
https://api.github.com/repos/huggingface/transformers/issues/19
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/19/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/19/comments
https://api.github.com/repos/huggingface/transformers/issues/19/events
https://github.com/huggingface/transformers/issues/19
380,555,132
MDU6SXNzdWUzODA1NTUxMzI=
19
will you push the pytorch code for the pre-training process?
{ "login": "koukoulala", "id": 30341159, "node_id": "MDQ6VXNlcjMwMzQxMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koukoulala", "html_url": "https://github.com/koukoulala", "followers_url": "https://api.github.com/users/koukoulala/followers", "following_url": "https://api.github.com/users/koukoulala/following{/other_user}", "gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}", "starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions", "organizations_url": "https://api.github.com/users/koukoulala/orgs", "repos_url": "https://api.github.com/users/koukoulala/repos", "events_url": "https://api.github.com/users/koukoulala/events{/privacy}", "received_events_url": "https://api.github.com/users/koukoulala/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, I don't have plan for that in the near future." ]
1,542
1,542
1,542
NONE
null
Can you push the pytorch code for the pre-training process,such as MLM task, please? I really want to study, but I can't understand tensorflow, it's so complex. thanks!!!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/19/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/19/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/18
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/18/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/18/comments
https://api.github.com/repos/huggingface/transformers/issues/18/events
https://github.com/huggingface/transformers/pull/18
380,305,486
MDExOlB1bGxSZXF1ZXN0MjMwNTM2Mzg4
18
include the output layer in the model using the pretrained weights
{ "login": "fabiopetroni", "id": 12832592, "node_id": "MDQ6VXNlcjEyODMyNTky", "avatar_url": "https://avatars.githubusercontent.com/u/12832592?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fabiopetroni", "html_url": "https://github.com/fabiopetroni", "followers_url": "https://api.github.com/users/fabiopetroni/followers", "following_url": "https://api.github.com/users/fabiopetroni/following{/other_user}", "gists_url": "https://api.github.com/users/fabiopetroni/gists{/gist_id}", "starred_url": "https://api.github.com/users/fabiopetroni/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fabiopetroni/subscriptions", "organizations_url": "https://api.github.com/users/fabiopetroni/orgs", "repos_url": "https://api.github.com/users/fabiopetroni/repos", "events_url": "https://api.github.com/users/fabiopetroni/events{/privacy}", "received_events_url": "https://api.github.com/users/fabiopetroni/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks for that. I've ended up taking a more modular approach in the first pip release of the library." ]
1,542
1,542
1,542
NONE
null
This is to be able to load the final output layer (bert.output_layer) from the TensorFlow pre-trained model. In particular, it is a fully connected layer that is used to map the final hidden layer to the vocabulary size, to then apply the softmax, as follows: logits = bert.output_layer(sequence_output) log_softmax = nn.LogSoftmax(dim=-1) log_probs = log_softmax(logits)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/18/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/18/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/18", "html_url": "https://github.com/huggingface/transformers/pull/18", "diff_url": "https://github.com/huggingface/transformers/pull/18.diff", "patch_url": "https://github.com/huggingface/transformers/pull/18.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/17
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/17/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/17/comments
https://api.github.com/repos/huggingface/transformers/issues/17/events
https://github.com/huggingface/transformers/pull/17
380,292,054
MDExOlB1bGxSZXF1ZXN0MjMwNTI2MTY0
17
activation function in BERTIntermediate
{ "login": "lukovnikov", "id": 1732910, "node_id": "MDQ6VXNlcjE3MzI5MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lukovnikov", "html_url": "https://github.com/lukovnikov", "followers_url": "https://api.github.com/users/lukovnikov/followers", "following_url": "https://api.github.com/users/lukovnikov/following{/other_user}", "gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions", "organizations_url": "https://api.github.com/users/lukovnikov/orgs", "repos_url": "https://api.github.com/users/lukovnikov/repos", "events_url": "https://api.github.com/users/lukovnikov/events{/privacy}", "received_events_url": "https://api.github.com/users/lukovnikov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Looks good, thanks for that!" ]
1,542
1,542
1,542
CONTRIBUTOR
null
Was previously hardcoded to gelu because pretrained BERT models use gelu. Changed to make BERTIntermediate use functions and "gelu", "relu" or "swish" from `config`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/17/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/17/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/17", "html_url": "https://github.com/huggingface/transformers/pull/17", "diff_url": "https://github.com/huggingface/transformers/pull/17.diff", "patch_url": "https://github.com/huggingface/transformers/pull/17.patch", "merged_at": 1542124810000 }
https://api.github.com/repos/huggingface/transformers/issues/16
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/16/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/16/comments
https://api.github.com/repos/huggingface/transformers/issues/16/events
https://github.com/huggingface/transformers/pull/16
380,272,853
MDExOlB1bGxSZXF1ZXN0MjMwNTEwOTY4
16
Excluding AdamWeightDecayOptimizer internal variables from restoring
{ "login": "donatasrep", "id": 19597219, "node_id": "MDQ6VXNlcjE5NTk3MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/19597219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/donatasrep", "html_url": "https://github.com/donatasrep", "followers_url": "https://api.github.com/users/donatasrep/followers", "following_url": "https://api.github.com/users/donatasrep/following{/other_user}", "gists_url": "https://api.github.com/users/donatasrep/gists{/gist_id}", "starred_url": "https://api.github.com/users/donatasrep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donatasrep/subscriptions", "organizations_url": "https://api.github.com/users/donatasrep/orgs", "repos_url": "https://api.github.com/users/donatasrep/repos", "events_url": "https://api.github.com/users/donatasrep/events{/privacy}", "received_events_url": "https://api.github.com/users/donatasrep/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Is your pre-trained model a TensorFlow model?", "Yes", "Nice, thanks for that!" ]
1,542
1,542
1,542
CONTRIBUTOR
null
I tried to use the convert_tf_checkpoint_to_pytorch.py script to convert my pretrained model, but in order to do so I had to make some minor tweaks. I thought I would share them in case you find them useful.
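A minimal sketch of the kind of tweak the PR title describes (an assumption based on the title, not the PR's exact diff): skip the optimizer's slot variables when walking the TensorFlow checkpoint, since they have no counterpart in the PyTorch model:

```python
import tensorflow as tf

def list_model_variables(ckpt_path):
    # Return checkpoint variable names, excluding Adam optimizer state.
    names = []
    for name, _ in tf.train.list_variables(ckpt_path):
        # "adam_m"/"adam_v" are the AdamWeightDecayOptimizer slot variables.
        if any(part in ("adam_m", "adam_v") for part in name.split("/")):
            continue
        names.append(name)
    return names
```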
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/16/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/16/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/16", "html_url": "https://github.com/huggingface/transformers/pull/16", "diff_url": "https://github.com/huggingface/transformers/pull/16.diff", "patch_url": "https://github.com/huggingface/transformers/pull/16.patch", "merged_at": 1542122369000 }
https://api.github.com/repos/huggingface/transformers/issues/15
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/15/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/15/comments
https://api.github.com/repos/huggingface/transformers/issues/15/events
https://github.com/huggingface/transformers/issues/15
380,271,134
MDU6SXNzdWUzODAyNzExMzQ=
15
activation function in BERTIntermediate
{ "login": "lukovnikov", "id": 1732910, "node_id": "MDQ6VXNlcjE3MzI5MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1732910?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lukovnikov", "html_url": "https://github.com/lukovnikov", "followers_url": "https://api.github.com/users/lukovnikov/followers", "following_url": "https://api.github.com/users/lukovnikov/following{/other_user}", "gists_url": "https://api.github.com/users/lukovnikov/gists{/gist_id}", "starred_url": "https://api.github.com/users/lukovnikov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lukovnikov/subscriptions", "organizations_url": "https://api.github.com/users/lukovnikov/orgs", "repos_url": "https://api.github.com/users/lukovnikov/repos", "events_url": "https://api.github.com/users/lukovnikov/events{/privacy}", "received_events_url": "https://api.github.com/users/lukovnikov/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes, I hard coded that since the pre-trained models are all trained with gelu anyway.", "ok. but since config is there anyway, isn't it cleaner to use it (to avoid errors for people using configs that use a different activation for some reason) ?", "Yes we can, I'll change that in the coming first release (unless you would like to submit a PR which I would be happy to merge).", "yeah let me clean up and I'll PR" ]
1,542
1,542
1,542
CONTRIBUTOR
null
BERTConfig is not used for `BERTIntermediate`'s activation function. `intermediate_act_fn` is always `gelu`. Is this normal? https://github.com/huggingface/pytorch-pretrained-BERT/blob/master/modeling.py#L240
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/15/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/15/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/14
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/14/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/14/comments
https://api.github.com/repos/huggingface/transformers/issues/14/events
https://github.com/huggingface/transformers/pull/14
379,587,417
MDExOlB1bGxSZXF1ZXN0MjI5OTk0NDY2
14
fixed typo
{ "login": "kornosk", "id": 15230011, "node_id": "MDQ6VXNlcjE1MjMwMDEx", "avatar_url": "https://avatars.githubusercontent.com/u/15230011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kornosk", "html_url": "https://github.com/kornosk", "followers_url": "https://api.github.com/users/kornosk/followers", "following_url": "https://api.github.com/users/kornosk/following{/other_user}", "gists_url": "https://api.github.com/users/kornosk/gists{/gist_id}", "starred_url": "https://api.github.com/users/kornosk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kornosk/subscriptions", "organizations_url": "https://api.github.com/users/kornosk/orgs", "repos_url": "https://api.github.com/users/kornosk/repos", "events_url": "https://api.github.com/users/kornosk/events{/privacy}", "received_events_url": "https://api.github.com/users/kornosk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\nThanks for the PR, we don't want to add a shell script to the repo.\r\nI will correct the typo,\r\nBest,\r\nThom" ]
1,541
1,542
1,542
NONE
null
When testing with SQuAD
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/14/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/14/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/14", "html_url": "https://github.com/huggingface/transformers/pull/14", "diff_url": "https://github.com/huggingface/transformers/pull/14.diff", "patch_url": "https://github.com/huggingface/transformers/pull/14.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/13
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/13/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/13/comments
https://api.github.com/repos/huggingface/transformers/issues/13/events
https://github.com/huggingface/transformers/issues/13
379,440,759
MDU6SXNzdWUzNzk0NDA3NTk=
13
Bug in run_classifier.py
{ "login": "rawatprateek", "id": 32642916, "node_id": "MDQ6VXNlcjMyNjQyOTE2", "avatar_url": "https://avatars.githubusercontent.com/u/32642916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rawatprateek", "html_url": "https://github.com/rawatprateek", "followers_url": "https://api.github.com/users/rawatprateek/followers", "following_url": "https://api.github.com/users/rawatprateek/following{/other_user}", "gists_url": "https://api.github.com/users/rawatprateek/gists{/gist_id}", "starred_url": "https://api.github.com/users/rawatprateek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rawatprateek/subscriptions", "organizations_url": "https://api.github.com/users/rawatprateek/orgs", "repos_url": "https://api.github.com/users/rawatprateek/repos", "events_url": "https://api.github.com/users/rawatprateek/events{/privacy}", "received_events_url": "https://api.github.com/users/rawatprateek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,541
1,541
1,541
NONE
null
If I run only evaluation and not training, the script raises errors because `tr_loss` and `nb_tr_steps` are undefined.
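A sketch of a guard for this bug (variable names follow the issue; the surrounding script is simplified for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--do_train", action="store_true")
args = parser.parse_args([])  # simulate an evaluation-only run

# Initialize training statistics up front so eval-only runs never hit
# an undefined name.
tr_loss, nb_tr_steps = 0.0, 0
if args.do_train:
    pass  # the training loop would accumulate tr_loss and nb_tr_steps here

result = {"eval_loss": 0.42}  # placeholder evaluation metric
if args.do_train and nb_tr_steps > 0:
    result["loss"] = tr_loss / nb_tr_steps
print(result)
```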
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/13/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/13/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/12
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/12/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/12/comments
https://api.github.com/repos/huggingface/transformers/issues/12/events
https://github.com/huggingface/transformers/issues/12
379,422,090
MDU6SXNzdWUzNzk0MjIwOTA=
12
py2 code
{ "login": "antxiaojun", "id": 44923827, "node_id": "MDQ6VXNlcjQ0OTIzODI3", "avatar_url": "https://avatars.githubusercontent.com/u/44923827?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antxiaojun", "html_url": "https://github.com/antxiaojun", "followers_url": "https://api.github.com/users/antxiaojun/followers", "following_url": "https://api.github.com/users/antxiaojun/following{/other_user}", "gists_url": "https://api.github.com/users/antxiaojun/gists{/gist_id}", "starred_url": "https://api.github.com/users/antxiaojun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/antxiaojun/subscriptions", "organizations_url": "https://api.github.com/users/antxiaojun/orgs", "repos_url": "https://api.github.com/users/antxiaojun/repos", "events_url": "https://api.github.com/users/antxiaojun/events{/privacy}", "received_events_url": "https://api.github.com/users/antxiaojun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, we won't provide a python 2 version but if you want to do a python 2/3 compatible version feel free to open a PR." ]
1,541
1,541
1,541
NONE
null
If I convert the code to a Python 2 version, it can't converge; would you provide Python 2 code?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/12/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/12/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/11
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/11/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/11/comments
https://api.github.com/repos/huggingface/transformers/issues/11/events
https://github.com/huggingface/transformers/issues/11
379,036,394
MDU6SXNzdWUzNzkwMzYzOTQ=
11
Swapped to_seq_len/from_seq_len in comment
{ "login": "nikitakit", "id": 252225, "node_id": "MDQ6VXNlcjI1MjIyNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/252225?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nikitakit", "html_url": "https://github.com/nikitakit", "followers_url": "https://api.github.com/users/nikitakit/followers", "following_url": "https://api.github.com/users/nikitakit/following{/other_user}", "gists_url": "https://api.github.com/users/nikitakit/gists{/gist_id}", "starred_url": "https://api.github.com/users/nikitakit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nikitakit/subscriptions", "organizations_url": "https://api.github.com/users/nikitakit/orgs", "repos_url": "https://api.github.com/users/nikitakit/repos", "events_url": "https://api.github.com/users/nikitakit/events{/privacy}", "received_events_url": "https://api.github.com/users/nikitakit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes! fixed the comment" ]
1,541
1,541
1,541
NONE
null
I'm pretty sure this comment: https://github.com/huggingface/pytorch-pretrained-BERT/blob/2c5d993ba48841575d9c58f0754bca00b288431c/modeling.py#L339-L343 should instead say: ``` # Sizes are [batch_size, 1, 1, to_seq_length] # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] ``` When masking out tokens for attention, it doesn't matter what happens to attention *from* padding tokens, only that there is no attention *to* padding tokens. I don't believe the code is doing what the comment currently suggests because that would be an implementation flaw.
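To make the broadcasting concrete, here is a small sketch of the masking scheme the corrected comment describes (shapes and the -10000 additive constant follow the usual BERT recipe):

```python
import torch

batch, heads, from_len, to_len = 2, 12, 5, 5
# Padding mask over *to* (key) positions: 1 = real token, 0 = padding.
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]])     # [batch, to_len]
extended = attention_mask[:, None, None, :].float()  # [batch, 1, 1, to_len]

scores = torch.randn(batch, heads, from_len, to_len)
# Broadcasts over heads and all from-positions; blocks attention *to* padding.
scores = scores + (1.0 - extended) * -10000.0
probs = torch.softmax(scores, dim=-1)
```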
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/11/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/11/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/10
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/10/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/10/comments
https://api.github.com/repos/huggingface/transformers/issues/10/events
https://github.com/huggingface/transformers/issues/10
378,996,831
MDU6SXNzdWUzNzg5OTY4MzE=
10
Is there a plan to have a FP16 for GPU so to have larger batch size or longer text documents support ?
{ "login": "howardhsu", "id": 10661375, "node_id": "MDQ6VXNlcjEwNjYxMzc1", "avatar_url": "https://avatars.githubusercontent.com/u/10661375?v=4", "gravatar_id": "", "url": "https://api.github.com/users/howardhsu", "html_url": "https://github.com/howardhsu", "followers_url": "https://api.github.com/users/howardhsu/followers", "following_url": "https://api.github.com/users/howardhsu/following{/other_user}", "gists_url": "https://api.github.com/users/howardhsu/gists{/gist_id}", "starred_url": "https://api.github.com/users/howardhsu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/howardhsu/subscriptions", "organizations_url": "https://api.github.com/users/howardhsu/orgs", "repos_url": "https://api.github.com/users/howardhsu/repos", "events_url": "https://api.github.com/users/howardhsu/events{/privacy}", "received_events_url": "https://api.github.com/users/howardhsu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yes probably. I am testing fp16 right now. If it works well I will push it to the repo.", "Ok I've added FP16 support (see updated readme)", "Thanks for this quick updates.", "I'm not able to work with FP16 for pytorch BERT code. Particularly for BertForSequenceClassification, which I tried and got the issue\r\n**Runtime error: Expected scalar type object Half but got scalar type Float for argument #2 target**\r\nwhen I enabled fp16.\r\nAlso when using \r\n`logits = logits.half()\r\nlabels = labels.half()`\r\nthen the epoch time also increased." ]
1,541
1,545
1,542
CONTRIBUTOR
null
Is there a plan to add FP16 support on GPU, so as to allow larger batch sizes or longer text documents?
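For context, a naive FP16 sketch looks like the following (this assumes a CUDA GPU and is only an illustration, not the fp16 support that was later added to the repo). Note that integer class labels must stay `torch.long`; casting them to half is what produces errors like the one quoted in the comments above:

```python
import torch
import torch.nn as nn

model = nn.Linear(768, 2).cuda().half()     # weights in float16
inputs = torch.randn(4, 768).cuda().half()  # activations in float16
labels = torch.tensor([0, 1, 1, 0]).cuda()  # targets stay int64

logits = model(inputs)
# Upcast logits for the loss; computing cross-entropy in half is lossy.
loss = nn.CrossEntropyLoss()(logits.float(), labels)
```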
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/10/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/10/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/9
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/9/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/9/comments
https://api.github.com/repos/huggingface/transformers/issues/9/events
https://github.com/huggingface/transformers/issues/9
378,935,595
MDU6SXNzdWUzNzg5MzU1OTU=
9
Crash at the end of training
{ "login": "bkgoksel", "id": 6436274, "node_id": "MDQ6VXNlcjY0MzYyNzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6436274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bkgoksel", "html_url": "https://github.com/bkgoksel", "followers_url": "https://api.github.com/users/bkgoksel/followers", "following_url": "https://api.github.com/users/bkgoksel/following{/other_user}", "gists_url": "https://api.github.com/users/bkgoksel/gists{/gist_id}", "starred_url": "https://api.github.com/users/bkgoksel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bkgoksel/subscriptions", "organizations_url": "https://api.github.com/users/bkgoksel/orgs", "repos_url": "https://api.github.com/users/bkgoksel/repos", "events_url": "https://api.github.com/users/bkgoksel/events{/privacy}", "received_events_url": "https://api.github.com/users/bkgoksel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Here's the specific command I ran for more context: \r\n```\r\npython3.6 code/run_squad.py \\\r\n --bert_config_file bert/bert_config.json \\\r\n --vocab_file bert/vocab.txt \\\r\n --output_dir output \\\r\n --train_file data/original/train.json \\\r\n --predict_file data/original/dev.json \\\r\n --init_checkpoint bert-pytorch/pytorch_model.bin \\\r\n --do_lower_case \\\r\n --do_train \\\r\n --do_predict \\\r\n --train_batch_size 10 \\\r\n --gradient_accumulation_steps 3 \\\r\n --accumulate_gradients 3\r\n```", "Hi Kerem, yes I fixed this bug yesterday in commit 2c5d993 (a bug with batches of dimension 1)\r\nYou can try again with the current version and it should be fine.\r\n\r\nI got good results with these hyperparameters last night:\r\n```bash\r\npython run_squad.py \\\r\n --vocab_file $BERT_BASE_DIR/vocab.txt \\\r\n --bert_config_file $BERT_BASE_DIR/bert_config.json \\\r\n --init_checkpoint $BERT_PYTORCH_DIR/pytorch_model.bin \\\r\n --do_train \\\r\n --do_predict \\\r\n --do_lower_case\r\n --train_file $SQUAD_DIR/train-v1.1.json \\\r\n --predict_file $SQUAD_DIR/dev-v1.1.json \\\r\n --train_batch_size 12 \\\r\n --learning_rate 3e-5 \\\r\n --num_train_epochs 2.0 \\\r\n --max_seq_length 384 \\\r\n --doc_stride 128 \\\r\n --output_dir ../debug_squad/\r\n```\r\n\r\nI found:\r\n```bash\r\n{\"f1\": 88.52381567990474, \"exact_match\": 81.22043519394512}\r\n```\r\n\r\nFeel free to reopen the issue if needed." ]
1,541
1,541
1,541
NONE
null
Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output: I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8 Is this an issue you know about? ``` 11/08/2018 17:50:03 - INFO - __main__ - device cuda n_gpu 1 distributed training False 11/08/2018 17:50:18 - INFO - __main__ - *** Example *** 11/08/2018 17:50:18 - INFO - __main__ - unique_id: 1000000000 11/08/2018 17:50:18 - INFO - __main__ - example_index: 0 11/08/2018 17:50:18 - INFO - __main__ - doc_span_index: 0 11/08/2018 17:50:18 - INFO - __main__ - tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend " ve ##ni ##te ad me om ##nes " . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . [SEP] 11/08/2018 17:50:18 - INFO - __main__ - token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123 11/08/2018 17:50:18 - INFO - __main__ - token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 
122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True 11/08/2018 17:50:18 - INFO - __main__ - input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 11/08/2018 17:50:18 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... [truncated] ... 
Iteration: 100%|█████████▉| 29314/29324 [3:27:55<00:04, 2.36it/s] Iteration: 100%|█████████▉| 29315/29324 [3:27:55<00:03, 2.44it/s] Iteration: 100%|█████████▉| 29316/29324 [3:27:56<00:03, 2.26it/s] Iteration: 100%|█████████▉| 29317/29324 [3:27:56<00:02, 2.35it/s] Iteration: 100%|█████████▉| 29318/29324 [3:27:56<00:02, 2.44it/s] Iteration: 100%|█████████▉| 29319/29324 [3:27:57<00:02, 2.25it/s] Iteration: 100%|█████████▉| 29320/29324 [3:27:57<00:01, 2.35it/s] Iteration: 100%|█████████▉| 29321/29324 [3:27:58<00:01, 2.41it/s] Iteration: 100%|█████████▉| 29322/29324 [3:27:58<00:00, 2.25it/s] Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s]Traceback (most recent call last): File "code/run_squad.py", line 929, in <module> main() File "code/run_squad.py", line 862, in main loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py", line 467, in forward start_loss = loss_fct(start_logits, start_positions) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__ result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 862, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1550, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1403, in nll_loss if input.size(0) != target.size(0): RuntimeError: dimension specified as 0 but tensor has no dimensions Exception ignored in: <bound method tqdm.__del__ of Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s]> Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 931, in __del__ self.close() File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 1133, in close self._decr_instances(self) File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 496, in _decr_instances cls.monitor.exit() File "/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py", line 52, in exit self.join() File "/usr/lib/python3.6/threading.py", line 1053, in join raise RuntimeError("cannot join current thread") RuntimeError: cannot join current thread ```
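The traceback suggests the classic size-1-batch pitfall (an inference from the error, not necessarily the repo's exact fix in commit 2c5d993): an unqualified `squeeze()` on the target positions drops the batch dimension when the final batch has a single example, so `nll_loss` receives a 0-dim tensor:

```python
import torch

start_positions = torch.tensor([[7]])  # final batch of size 1: [1, 1]
bad = start_positions.squeeze()        # 0-dim tensor; nll_loss then fails
good = start_positions.squeeze(-1)     # shape [1]; batch axis preserved
print(bad.shape, good.shape)
```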
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/9/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/9/timeline
completed
null
null