---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
- generated_from_trainer
datasets:
- essays_su_g
metrics:
- accuracy
model-index:
- name: longformer-sep_tok_full_labels
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: essays_su_g
      type: essays_su_g
      config: sep_tok_full_labels
      split: train[80%:100%]
      args: sep_tok_full_labels
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8969870392189867
---
# longformer-sep_tok_full_labels

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset. It achieves the following results on the evaluation set:
- Loss: 0.3037
- Accuracy: 0.8970

Per-label evaluation metrics (rounded to four decimal places; full-precision values appear in the final row of the training results table below):

| Label | Precision | Recall | F1 | Support |
|:---|---:|---:|---:|---:|
| B-claim | 0.6780 | 0.5904 | 0.6312 | 271 |
| B-majorclaim | 0.8333 | 0.8633 | 0.8481 | 139 |
| B-premise | 0.8673 | 0.9084 | 0.8873 | 633 |
| I-claim | 0.6563 | 0.5951 | 0.6242 | 4001 |
| I-majorclaim | 0.8737 | 0.8594 | 0.8665 | 2013 |
| I-premise | 0.8829 | 0.9150 | 0.8987 | 11336 |
| O | 1.0000 | 0.9996 | 0.9998 | 11312 |
| Macro avg | 0.8274 | 0.8187 | 0.8222 | 29705 |
| Weighted avg | 0.8939 | 0.8970 | 0.8951 | 29705 |
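For reference, a minimal inference sketch using the `transformers` token-classification pipeline. The model identifier below is a placeholder (substitute the actual Hub repo id or a local checkpoint directory), and the example sentence is illustrative only; the label set follows the table above.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

# Placeholder checkpoint path: replace with the actual Hub repo id or local directory.
model_id = "longformer-sep_tok_full_labels"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# Merge sub-word predictions into spans; with the BIO label set above, the resulting
# entity groups are "claim", "majorclaim", "premise", or "O".
tagger = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",
)

text = "School uniforms should be mandatory because they reduce peer pressure."
for span in tagger(text):
    print(span["entity_group"], round(span["score"], 3), span["word"])
```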
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
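The metadata above records the `sep_tok_full_labels` configuration of `essays_su_g`, with the `train[80%:100%]` slice used for evaluation. A loading sketch, assuming the dataset resolves under that name on the Hugging Face Hub (it may instead be a local dataset script):

```python
from datasets import load_dataset

# Assumed dataset identifier and config, taken from the card metadata; adjust the path
# if "essays_su_g" is only available as a local loading script.
eval_split = load_dataset("essays_su_g", "sep_tok_full_labels", split="train[80%:100%]")
print(eval_split)
```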
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
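A minimal sketch of how these settings map onto `transformers.TrainingArguments`; the output directory is a placeholder, and the Adam betas/epsilon listed above are the library defaults, spelled out here only for completeness.

```python
from transformers import TrainingArguments

# Placeholder output_dir; all other values mirror the hyperparameter list above.
training_args = TrainingArguments(
    output_dir="longformer-sep_tok_full_labels",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=8,
    adam_beta1=0.9,      # library default
    adam_beta2=0.999,    # library default
    adam_epsilon=1e-8,   # library default
)
# These arguments would then be passed to a transformers.Trainer together with the
# model, tokenizer, data collator, and the tokenized train/eval splits.
```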
### Training results

Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
No log | 1.0 | 41 | 0.3860 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 271.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 139.0} | {'precision': 0.6388308977035491, 'recall': 0.966824644549763, 'f1-score': 0.7693274670018857, 'support': 633.0} | {'precision': 0.46779336734693877, 'recall': 0.36665833541614595, 'f1-score': 0.4110970996216898, 'support': 4001.0} | {'precision': 0.7217828900071891, 'recall': 0.49875807252856436, 'f1-score': 0.5898942420681551, 'support': 2013.0} | {'precision': 0.8227455841338704, 'recall': 0.9368383909668313, 'f1-score': 0.8760930539514932, 'support': 11336.0} | {'precision': 0.9961987270155587, 'recall': 0.9961987270155587, 'f1-score': 0.9961987270155587, 'support': 11312.0} | 0.8407 | {'precision': 0.5210502094581579, 'recall': 0.5378968814966948, 'f1-score': 0.520372941379826, 'support': 29705.0} | {'precision': 0.8188727190818877, 'recall': 0.8406665544521125, 'f1-score': 0.8254378640321797, 'support': 29705.0} |
No log | 2.0 | 82 | 0.3175 | {'precision': 0.40588235294117647, 'recall': 0.25461254612546125, 'f1-score': 0.3129251700680272, 'support': 271.0} | {'precision': 0.7419354838709677, 'recall': 0.16546762589928057, 'f1-score': 0.2705882352941177, 'support': 139.0} | {'precision': 0.7554744525547445, 'recall': 0.981042654028436, 'f1-score': 0.8536082474226805, 'support': 633.0} | {'precision': 0.5985010706638115, 'recall': 0.2794301424643839, 'f1-score': 0.3809848355767592, 'support': 4001.0} | {'precision': 0.7330453563714903, 'recall': 0.8430203676105316, 'f1-score': 0.7841959334565619, 'support': 2013.0} | {'precision': 0.8209451795841209, 'recall': 0.9577452364149612, 'f1-score': 0.8840845242457555, 'support': 11336.0} | {'precision': 0.9995565016852936, 'recall': 0.9961987270155587, 'f1-score': 0.9978747896927299, 'support': 11312.0} | 0.8636 | {'precision': 0.7221914853816579, 'recall': 0.6396453285083733, 'f1-score': 0.6406088193938045, 'support': 29705.0} | {'precision': 0.8474929899782405, 'recall': 0.8636256522470964, 'f1-score': 0.8441540829980674, 'support': 29705.0} |
No log | 3.0 | 123 | 0.2658 | {'precision': 0.5686274509803921, 'recall': 0.4280442804428044, 'f1-score': 0.4884210526315789, 'support': 271.0} | {'precision': 0.8235294117647058, 'recall': 0.60431654676259, 'f1-score': 0.6970954356846473, 'support': 139.0} | {'precision': 0.8116531165311653, 'recall': 0.9462875197472354, 'f1-score': 0.8738147337709702, 'support': 633.0} | {'precision': 0.6275706940874036, 'recall': 0.488127968007998, 'f1-score': 0.5491353859131168, 'support': 4001.0} | {'precision': 0.804162724692526, 'recall': 0.8445106805762543, 'f1-score': 0.8238429852192877, 'support': 2013.0} | {'precision': 0.8631604978979474, 'recall': 0.9236944248412138, 'f1-score': 0.8924020965611283, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9992927864214993, 'f1-score': 0.9996462681287585, 'support': 11312.0} | 0.8829 | {'precision': 0.7855291279934485, 'recall': 0.7477534581142279, 'f1-score': 0.7606225654156411, 'support': 29705.0} | {'precision': 0.875570522344255, 'recall': 0.8829153341188352, 'f1-score': 0.8773653747609701, 'support': 29705.0} |
No log | 4.0 | 164 | 0.2599 | {'precision': 0.6265560165975104, 'recall': 0.5571955719557196, 'f1-score': 0.58984375, 'support': 271.0} | {'precision': 0.7547169811320755, 'recall': 0.8633093525179856, 'f1-score': 0.8053691275167785, 'support': 139.0} | {'precision': 0.8724727838258165, 'recall': 0.8862559241706162, 'f1-score': 0.8793103448275862, 'support': 633.0} | {'precision': 0.6471803956303513, 'recall': 0.5478630342414397, 'f1-score': 0.5933946940985381, 'support': 4001.0} | {'precision': 0.7891540130151844, 'recall': 0.9036264282165921, 'f1-score': 0.8425196850393701, 'support': 2013.0} | {'precision': 0.8842854692056956, 'recall': 0.9094036697247706, 'f1-score': 0.8966686961816125, 'support': 11336.0} | {'precision': 0.9998231966053748, 'recall': 0.9998231966053748, 'f1-score': 0.9998231966053748, 'support': 11312.0} | 0.8908 | {'precision': 0.7963126937160012, 'recall': 0.8096395967760712, 'f1-score': 0.8009899277527515, 'support': 29705.0} | {'precision': 0.8866915833384748, 'recall': 0.8908264601918869, 'f1-score': 0.8878369988297578, 'support': 29705.0} |
No log | 5.0 | 205 | 0.2695 | {'precision': 0.6768558951965066, 'recall': 0.5719557195571956, 'f1-score': 0.6200000000000001, 'support': 271.0} | {'precision': 0.8051948051948052, 'recall': 0.8920863309352518, 'f1-score': 0.8464163822525598, 'support': 139.0} | {'precision': 0.8679817905918058, 'recall': 0.9036334913112164, 'f1-score': 0.8854489164086687, 'support': 633.0} | {'precision': 0.6750599520383693, 'recall': 0.5628592851787053, 'f1-score': 0.6138748807414475, 'support': 4001.0} | {'precision': 0.812194036493102, 'recall': 0.9066070541480378, 'f1-score': 0.8568075117370892, 'support': 2013.0} | {'precision': 0.8850399049074545, 'recall': 0.9195483415666902, 'f1-score': 0.9019641775547288, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9991159830268741, 'f1-score': 0.9995577960555408, 'support': 11312.0} | 0.8973 | {'precision': 0.8174751977745777, 'recall': 0.8222580293891387, 'f1-score': 0.8177242378214336, 'support': 29705.0} | {'precision': 0.8929626771439818, 'recall': 0.8972900185154015, 'f1-score': 0.8940815238489738, 'support': 29705.0} |
No log | 6.0 | 246 | 0.2840 | {'precision': 0.6135593220338983, 'recall': 0.6678966789667896, 'f1-score': 0.6395759717314488, 'support': 271.0} | {'precision': 0.8347107438016529, 'recall': 0.7266187050359713, 'f1-score': 0.7769230769230769, 'support': 139.0} | {'precision': 0.8869426751592356, 'recall': 0.8799368088467614, 'f1-score': 0.8834258524980174, 'support': 633.0} | {'precision': 0.603156450137237, 'recall': 0.6590852286928268, 'f1-score': 0.6298817628090291, 'support': 4001.0} | {'precision': 0.894268224819143, 'recall': 0.7983109786388475, 'f1-score': 0.8435695538057743, 'support': 2013.0} | {'precision': 0.8947791882710531, 'recall': 0.882939308398024, 'f1-score': 0.8888198206198383, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9994695898161244, 'f1-score': 0.9997347245556637, 'support': 11312.0} | 0.8887 | {'precision': 0.8182023720317456, 'recall': 0.8020367569136208, 'f1-score': 0.8088472518489783, 'support': 29705.0} | {'precision': 0.8925211868317149, 'recall': 0.8886719407507153, 'f1-score': 0.8902019557715158, 'support': 29705.0} |
No log | 7.0 | 287 | 0.3009 | {'precision': 0.6595744680851063, 'recall': 0.5719557195571956, 'f1-score': 0.6126482213438735, 'support': 271.0} | {'precision': 0.8538461538461538, 'recall': 0.7985611510791367, 'f1-score': 0.8252788104089219, 'support': 139.0} | {'precision': 0.8569321533923304, 'recall': 0.9178515007898894, 'f1-score': 0.8863463005339437, 'support': 633.0} | {'precision': 0.648795871559633, 'recall': 0.5656085978505374, 'f1-score': 0.6043530511416746, 'support': 4001.0} | {'precision': 0.8962108731466227, 'recall': 0.8107302533532041, 'f1-score': 0.8513302034428796, 'support': 2013.0} | {'precision': 0.8709945210028225, 'recall': 0.9255469301340861, 'f1-score': 0.8974424771191514, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.999557991513437, 'f1-score': 0.9997789469030461, 'support': 11312.0} | 0.8935 | {'precision': 0.8266220058618099, 'recall': 0.7985445920396409, 'f1-score': 0.8110254301276415, 'support': 29705.0} | {'precision': 0.8895932001068932, 'recall': 0.893485945127083, 'f1-score': 0.8906396315774223, 'support': 29705.0} |
No log | 8.0 | 328 | 0.3037 | {'precision': 0.6779661016949152, 'recall': 0.5904059040590406, 'f1-score': 0.631163708086785, 'support': 271.0} | {'precision': 0.8333333333333334, 'recall': 0.8633093525179856, 'f1-score': 0.8480565371024734, 'support': 139.0} | {'precision': 0.8672699849170438, 'recall': 0.9083728278041074, 'f1-score': 0.8873456790123457, 'support': 633.0} | {'precision': 0.656284454244763, 'recall': 0.5951012246938265, 'f1-score': 0.624197142482632, 'support': 4001.0} | {'precision': 0.8737373737373737, 'recall': 0.8594138102334824, 'f1-score': 0.8665164037064863, 'support': 2013.0} | {'precision': 0.8829488380011918, 'recall': 0.9149611856033875, 'f1-score': 0.8986700168955508, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.999557991513437, 'f1-score': 0.9997789469030461, 'support': 11312.0} | 0.8970 | {'precision': 0.8273628694183744, 'recall': 0.8187317566321809, 'f1-score': 0.8222469191699027, 'support': 29705.0} | {'precision': 0.8939329914052613, 'recall': 0.8969870392189867, 'f1-score': 0.8951068198954036, 'support': 29705.0} |
### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2