Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/log.txt.
Loading nlp dataset glue, subset rte, split train.
Loading nlp dataset glue, subset rte, split validation.
Loaded dataset. Found: 2 labels: ([0, 1])
Loading transformers AutoModelForSequenceClassification: bert-base-uncased
Tokenizing training data. (len: 2490)
Tokenizing eval data (len: 277)
Loaded data and tokenized in 14.295648097991943s
Training model across 1 GPUs
***** Running training *****
  Num examples = 2490
  Batch size = 128
  Max sequence length = 128
  Num steps = 95
  Num epochs = 5
  Learning rate = 3e-05
Failed to predict with model <class 'transformers.modeling_bert.BertForSequenceClassification'>. Check tokenizer configuration.
Writing logs to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/log.txt.
Loading nlp dataset glue, subset rte, split train.
Loading nlp dataset glue, subset rte, split validation.
Loaded dataset. Found: 2 labels: ([0, 1])
Loading transformers AutoModelForSequenceClassification: bert-base-uncased
Tokenizing training data. (len: 2490)
Tokenizing eval data (len: 277)
Loaded data and tokenized in 13.395596742630005s
Training model across 1 GPUs
***** Running training *****
  Num examples = 2490
  Batch size = 8
  Max sequence length = 128
  Num steps = 1555
  Num epochs = 5
  Learning rate = 2e-05
Eval accuracy: 68.23104693140795%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
Eval accuracy: 70.03610108303249%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
Eval accuracy: 72.56317689530685%
Best acc found. Saved model to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
Eval accuracy: 69.67509025270758%
Eval accuracy: 69.31407942238266%
Saved tokenizer <textattack.models.tokenizers.auto_tokenizer.AutoTokenizer object at 0x7fc688911c10> to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/.
Wrote README to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/README.md.
Wrote training args to /p/qdata/jm8wx/research/text_attacks/textattack/outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/train_args.json.
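For reference, a checkpoint saved to an output directory like the one above can usually be reloaded with the Hugging Face transformers API. The sketch below is an assumption and not part of this log: it presumes the directory contains standard config, weight, and tokenizer files, and it uses a placeholder path and made-up example sentences.

# Minimal sketch (assumption): reload the fine-tuned RTE model and run one sentence-pair prediction.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

output_dir = "outputs/training/bert-base-uncased-glue:rte-2020-06-29-14:24/"  # replace with your run's directory

model = AutoModelForSequenceClassification.from_pretrained(output_dir)
tokenizer = AutoTokenizer.from_pretrained(output_dir)
model.eval()

# RTE is a sentence-pair entailment task, so premise and hypothesis are encoded together.
premise = "A man is playing a guitar on stage."
hypothesis = "A man is performing music."
inputs = tokenizer(premise, hypothesis, truncation=True, max_length=128, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 or 1, matching the two labels reported in the log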