---
license: apache-2.0
base_model: allenai/longformer-base-4096
tags:
  - generated_from_trainer
datasets:
  - essays_su_g
metrics:
  - accuracy
model-index:
  - name: longformer-sep_tok_full_labels
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: essays_su_g
          type: essays_su_g
          config: sep_tok_full_labels
          split: train[80%:100%]
          args: sep_tok_full_labels
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8934186163945463
---

# longformer-sep_tok_full_labels

This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the essays_su_g dataset. It achieves the following results on the evaluation set:

- Loss: 0.3686
- B-claim: {'precision': 0.6666666666666666, 'recall': 0.6051660516605166, 'f1-score': 0.6344294003868471, 'support': 271.0}
- B-majorclaim: {'precision': 0.8695652173913043, 'recall': 0.8633093525179856, 'f1-score': 0.8664259927797834, 'support': 139.0}
- B-premise: {'precision': 0.8636363636363636, 'recall': 0.9004739336492891, 'f1-score': 0.8816705336426915, 'support': 633.0}
- I-claim: {'precision': 0.6413013509787703, 'recall': 0.5813546613346663, 'f1-score': 0.609858416360776, 'support': 4001.0}
- I-majorclaim: {'precision': 0.8951271186440678, 'recall': 0.8395429706905116, 'f1-score': 0.8664445014098949, 'support': 2013.0}
- I-premise: {'precision': 0.8752006759611323, 'recall': 0.9137261820748059, 'f1-score': 0.894048595226792, 'support': 11336.0}
- O: {'precision': 1.0, 'recall': 0.9999115983026874, 'f1-score': 0.9999557971975424, 'support': 11312.0}
- Accuracy: 0.8934
- Macro avg: {'precision': 0.8302139133254721, 'recall': 0.814783535747209, 'f1-score': 0.8218333195720467, 'support': 29705.0}
- Weighted avg: {'precision': 0.8903965833313531, 'recall': 0.8934186163945463, 'f1-score': 0.8914691865640177, 'support': 29705.0}
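The macro and weighted averages above differ because of heavy class imbalance: I-premise and O together account for over 22,000 of the 29,705 evaluation tokens. A minimal sketch of how both summaries follow from the per-label precision/recall/support reported above (all numbers copied from the table; the formulas are the standard ones used by e.g. scikit-learn's `classification_report`):

```python
def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 2 * p * r / (p + r)

# (precision, recall, support) per label, copied from the evaluation above.
per_class = {
    "B-claim":      (0.6666666666666666, 0.6051660516605166, 271),
    "B-majorclaim": (0.8695652173913043, 0.8633093525179856, 139),
    "B-premise":    (0.8636363636363636, 0.9004739336492891, 633),
    "I-claim":      (0.6413013509787703, 0.5813546613346663, 4001),
    "I-majorclaim": (0.8951271186440678, 0.8395429706905116, 2013),
    "I-premise":    (0.8752006759611323, 0.9137261820748059, 11336),
    "O":            (1.0, 0.9999115983026874, 11312),
}

total = sum(s for _, _, s in per_class.values())  # 29705 evaluation tokens

# Macro average: unweighted mean over the 7 labels, so the 139-token
# B-majorclaim class counts as much as the 11336-token I-premise class.
macro_p = sum(p for p, _, _ in per_class.values()) / len(per_class)

# Weighted average: mean weighted by per-label token support, so it is
# dominated by the large I-premise and O classes.
weighted_p = sum(p * s for p, _, s in per_class.values()) / total
```

`macro_p` and `weighted_p` reproduce the Macro avg (0.8302) and Weighted avg (0.8904) precision figures above; the same averaging applies to recall and F1.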

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 11
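The linear scheduler's effect can be sketched as follows. This is a hedged approximation: warmup steps are not listed above, so zero warmup is assumed, and the 451 total steps come from the 41 optimizer steps per epoch × 11 epochs visible in the training results.

```python
def linear_lr(step, total_steps=451, base_lr=2e-05, warmup_steps=0):
    """Linear warmup-then-decay schedule (as in transformers'
    get_linear_schedule_with_warmup): ramp up over warmup_steps,
    then decay linearly to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))    # 2e-05 at the first step
print(linear_lr(451))  # 0.0 after the final step
```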

### Training results

| Training Loss | Epoch | Step | Validation Loss | B-claim | B-majorclaim | B-premise | I-claim | I-majorclaim | I-premise | O | Accuracy | Macro avg | Weighted avg |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 41 | 0.4100 | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 271.0} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 139.0} | {'precision': 0.6235662148070907, 'recall': 0.9447077409162717, 'f1-score': 0.7512562814070352, 'support': 633.0} | {'precision': 0.44710491946016545, 'recall': 0.25668582854286426, 'f1-score': 0.3261352810416005, 'support': 4001.0} | {'precision': 0.7319587628865979, 'recall': 0.4585196224540487, 'f1-score': 0.5638362858888211, 'support': 2013.0} | {'precision': 0.7872860635696821, 'recall': 0.965772759350741, 'f1-score': 0.8674431503050472, 'support': 11336.0} | {'precision': 0.9986704485020387, 'recall': 0.9960219236209336, 'f1-score': 0.9973444277241746, 'support': 11312.0} | 0.8336 | {'precision': 0.5126552013179392, 'recall': 0.5173868392692657, 'f1-score': 0.5008593466238113, 'support': 29705.0} | {'precision': 0.8038596908434509, 'recall': 0.8336307019020367, 'f1-score': 0.8089786449199181, 'support': 29705.0} |
| No log | 2.0 | 82 | 0.3090 | {'precision': 0.3548387096774194, 'recall': 0.2029520295202952, 'f1-score': 0.2582159624413145, 'support': 271.0} | {'precision': 0.875, 'recall': 0.1510791366906475, 'f1-score': 0.25766871165644173, 'support': 139.0} | {'precision': 0.7284382284382285, 'recall': 0.9873617693522907, 'f1-score': 0.8383635144198526, 'support': 633.0} | {'precision': 0.5705128205128205, 'recall': 0.31142214446388405, 'f1-score': 0.40291026677445435, 'support': 4001.0} | {'precision': 0.7969515514425695, 'recall': 0.7272727272727273, 'f1-score': 0.7605194805194805, 'support': 2013.0} | {'precision': 0.8184882762753765, 'recall': 0.9638320395201129, 'f1-score': 0.8852339477415435, 'support': 11336.0} | {'precision': 0.9998229775181448, 'recall': 0.9985855728429985, 'f1-score': 0.999203892083149, 'support': 11312.0} | 0.8629 | {'precision': 0.7348646519806513, 'recall': 0.620357917094708, 'f1-score': 0.6288736822337481, 'support': 29705.0} | {'precision': 0.8467986392322029, 'recall': 0.862918700555462, 'f1-score': 0.8455631284922618, 'support': 29705.0} |
| No log | 3.0 | 123 | 0.2592 | {'precision': 0.6200873362445415, 'recall': 0.5239852398523985, 'f1-score': 0.5680000000000001, 'support': 271.0} | {'precision': 0.7635135135135135, 'recall': 0.8129496402877698, 'f1-score': 0.7874564459930314, 'support': 139.0} | {'precision': 0.8573573573573574, 'recall': 0.9020537124802528, 'f1-score': 0.8791377983063896, 'support': 633.0} | {'precision': 0.633972602739726, 'recall': 0.5783554111472132, 'f1-score': 0.6048882499019735, 'support': 4001.0} | {'precision': 0.7454036770583533, 'recall': 0.9264778936910084, 'f1-score': 0.8261351052048727, 'support': 2013.0} | {'precision': 0.9024368472730518, 'recall': 0.891848976711362, 'f1-score': 0.89711167309996, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.999557991513437, 'f1-score': 0.9997789469030461, 'support': 11312.0} | 0.8895 | {'precision': 0.7889673334552205, 'recall': 0.8050326950976345, 'f1-score': 0.7946440313441819, 'support': 29705.0} | {'precision': 0.8886020986323946, 'recall': 0.8894798855411546, 'f1-score': 0.8881401750743844, 'support': 29705.0} |
| No log | 4.0 | 164 | 0.2631 | {'precision': 0.640625, 'recall': 0.6051660516605166, 'f1-score': 0.6223908918406073, 'support': 271.0} | {'precision': 0.7380952380952381, 'recall': 0.8920863309352518, 'f1-score': 0.8078175895765473, 'support': 139.0} | {'precision': 0.8949919224555735, 'recall': 0.8751974723538705, 'f1-score': 0.8849840255591054, 'support': 633.0} | {'precision': 0.6294777139732021, 'recall': 0.57535616095976, 'f1-score': 0.6012013580569339, 'support': 4001.0} | {'precision': 0.7759211653813196, 'recall': 0.899652260307998, 'f1-score': 0.8332183114791811, 'support': 2013.0} | {'precision': 0.8938286821022977, 'recall': 0.8956422018348624, 'f1-score': 0.8947345230226923, 'support': 11336.0} | {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 11312.0} | 0.8894 | {'precision': 0.7961342460010901, 'recall': 0.820442925436037, 'f1-score': 0.8063352427907239, 'support': 29705.0} | {'precision': 0.8876500952647918, 'recall': 0.8894125568086181, 'f1-score': 0.8880166676450929, 'support': 29705.0} |
| No log | 5.0 | 205 | 0.2617 | {'precision': 0.6733067729083665, 'recall': 0.6236162361623616, 'f1-score': 0.6475095785440612, 'support': 271.0} | {'precision': 0.8322147651006712, 'recall': 0.8920863309352518, 'f1-score': 0.8611111111111112, 'support': 139.0} | {'precision': 0.8771384136858476, 'recall': 0.8909952606635071, 'f1-score': 0.8840125391849529, 'support': 633.0} | {'precision': 0.6285642190259904, 'recall': 0.6225943514121469, 'f1-score': 0.6255650426921144, 'support': 4001.0} | {'precision': 0.8478366553232863, 'recall': 0.8663686040735221, 'f1-score': 0.857002457002457, 'support': 2013.0} | {'precision': 0.8928854800317376, 'recall': 0.8934368383909669, 'f1-score': 0.8931610741214341, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9988507779349364, 'f1-score': 0.9994250585997965, 'support': 11312.0} | 0.8927 | {'precision': 0.8217066151536999, 'recall': 0.8268497713675275, 'f1-score': 0.8239695516079897, 'support': 29705.0} | {'precision': 0.8923986881938678, 'recall': 0.8927453290691802, 'f1-score': 0.8925484382566077, 'support': 29705.0} |
| No log | 6.0 | 246 | 0.2902 | {'precision': 0.6445993031358885, 'recall': 0.6826568265682657, 'f1-score': 0.6630824372759856, 'support': 271.0} | {'precision': 0.8740157480314961, 'recall': 0.7985611510791367, 'f1-score': 0.8345864661654135, 'support': 139.0} | {'precision': 0.8887122416534181, 'recall': 0.8830963665086888, 'f1-score': 0.8858954041204438, 'support': 633.0} | {'precision': 0.611534795042898, 'recall': 0.6413396650837291, 'f1-score': 0.6260827131877515, 'support': 4001.0} | {'precision': 0.8983783783783784, 'recall': 0.8256333830104322, 'f1-score': 0.8604711364224696, 'support': 2013.0} | {'precision': 0.8894587902370004, 'recall': 0.8872618207480593, 'f1-score': 0.8883589471824767, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9996463932107497, 'f1-score': 0.9998231653404068, 'support': 11312.0} | 0.8904 | {'precision': 0.8295284652112971, 'recall': 0.8168850866012944, 'f1-score': 0.8226143242421353, 'support': 29705.0} | {'precision': 0.8924026489096706, 'recall': 0.8903888234303989, 'f1-score': 0.8912303199724251, 'support': 29705.0} |
| No log | 7.0 | 287 | 0.3483 | {'precision': 0.62882096069869, 'recall': 0.5313653136531366, 'f1-score': 0.576, 'support': 271.0} | {'precision': 0.9074074074074074, 'recall': 0.7050359712230215, 'f1-score': 0.7935222672064776, 'support': 139.0} | {'precision': 0.8330975954738331, 'recall': 0.9304897314375987, 'f1-score': 0.8791044776119402, 'support': 633.0} | {'precision': 0.6139279169211973, 'recall': 0.5023744063984004, 'f1-score': 0.5525773195876289, 'support': 4001.0} | {'precision': 0.9276228419654714, 'recall': 0.6939890710382514, 'f1-score': 0.7939755612389883, 'support': 2013.0} | {'precision': 0.8497734679278277, 'recall': 0.9431016231474947, 'f1-score': 0.8940084458753188, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.998939179632249, 'f1-score': 0.9994693083318592, 'support': 11312.0} | 0.8830 | {'precision': 0.8229500271992037, 'recall': 0.7578993280757359, 'f1-score': 0.7840939114074591, 'support': 29705.0} | {'precision': 0.878389271059484, 'recall': 0.8829826628513718, 'f1-score': 0.8777135144994731, 'support': 29705.0} |
| No log | 8.0 | 328 | 0.3245 | {'precision': 0.6728624535315985, 'recall': 0.6678966789667896, 'f1-score': 0.6703703703703703, 'support': 271.0} | {'precision': 0.8671328671328671, 'recall': 0.8920863309352518, 'f1-score': 0.8794326241134752, 'support': 139.0} | {'precision': 0.884493670886076, 'recall': 0.8830963665086888, 'f1-score': 0.8837944664031621, 'support': 633.0} | {'precision': 0.6309348996573666, 'recall': 0.6443389152711823, 'f1-score': 0.6375664646964263, 'support': 4001.0} | {'precision': 0.886991461577097, 'recall': 0.877297565822156, 'f1-score': 0.8821178821178822, 'support': 2013.0} | {'precision': 0.8936396700079837, 'recall': 0.8886732533521524, 'f1-score': 0.891149542217701, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9999115983026874, 'f1-score': 0.9999557971975424, 'support': 11312.0} | 0.8952 | {'precision': 0.8337221461132841, 'recall': 0.8361858155941297, 'f1-score': 0.83491244958808, 'support': 29705.0} | {'precision': 0.8959752678674884, 'recall': 0.8952364921730348, 'f1-score': 0.8955910221439993, 'support': 29705.0} |
| No log | 9.0 | 369 | 0.3334 | {'precision': 0.6714285714285714, 'recall': 0.6937269372693727, 'f1-score': 0.6823956442831216, 'support': 271.0} | {'precision': 0.8714285714285714, 'recall': 0.8776978417266187, 'f1-score': 0.8745519713261649, 'support': 139.0} | {'precision': 0.8926282051282052, 'recall': 0.8799368088467614, 'f1-score': 0.8862370723945903, 'support': 633.0} | {'precision': 0.6312316715542522, 'recall': 0.6455886028492877, 'f1-score': 0.6383294204868406, 'support': 4001.0} | {'precision': 0.8960703205791106, 'recall': 0.8609041231992052, 'f1-score': 0.8781352926273118, 'support': 2013.0} | {'precision': 0.892557605720844, 'recall': 0.891848976711362, 'f1-score': 0.8922031505096412, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9996463932107497, 'f1-score': 0.9998231653404068, 'support': 11312.0} | 0.8955 | {'precision': 0.8364778494056507, 'recall': 0.8356213834019082, 'f1-score': 0.8359536738525826, 'support': 29705.0} | {'precision': 0.8963979080894687, 'recall': 0.8955058071031813, 'f1-score': 0.8959143890380555, 'support': 29705.0} |
| No log | 10.0 | 410 | 0.3606 | {'precision': 0.6779661016949152, 'recall': 0.5904059040590406, 'f1-score': 0.631163708086785, 'support': 271.0} | {'precision': 0.8680555555555556, 'recall': 0.8992805755395683, 'f1-score': 0.8833922261484098, 'support': 139.0} | {'precision': 0.8599397590361446, 'recall': 0.9020537124802528, 'f1-score': 0.8804934464148034, 'support': 633.0} | {'precision': 0.6539440203562341, 'recall': 0.5781054736315921, 'f1-score': 0.6136906341204564, 'support': 4001.0} | {'precision': 0.8929313929313929, 'recall': 0.8534525583705912, 'f1-score': 0.8727457454914911, 'support': 2013.0} | {'precision': 0.8752838281052897, 'recall': 0.918136908962597, 'f1-score': 0.8961983898049684, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9997347949080623, 'f1-score': 0.9998673798682641, 'support': 11312.0} | 0.8956 | {'precision': 0.8325886653827903, 'recall': 0.8201671325645291, 'f1-score': 0.825364504276454, 'support': 29705.0} | {'precision': 0.8919996228940978, 'recall': 0.8956068002019862, 'f1-score': 0.8932236120719058, 'support': 29705.0} |
| No log | 11.0 | 451 | 0.3686 | {'precision': 0.6666666666666666, 'recall': 0.6051660516605166, 'f1-score': 0.6344294003868471, 'support': 271.0} | {'precision': 0.8695652173913043, 'recall': 0.8633093525179856, 'f1-score': 0.8664259927797834, 'support': 139.0} | {'precision': 0.8636363636363636, 'recall': 0.9004739336492891, 'f1-score': 0.8816705336426915, 'support': 633.0} | {'precision': 0.6413013509787703, 'recall': 0.5813546613346663, 'f1-score': 0.609858416360776, 'support': 4001.0} | {'precision': 0.8951271186440678, 'recall': 0.8395429706905116, 'f1-score': 0.8664445014098949, 'support': 2013.0} | {'precision': 0.8752006759611323, 'recall': 0.9137261820748059, 'f1-score': 0.894048595226792, 'support': 11336.0} | {'precision': 1.0, 'recall': 0.9999115983026874, 'f1-score': 0.9999557971975424, 'support': 11312.0} | 0.8934 | {'precision': 0.8302139133254721, 'recall': 0.814783535747209, 'f1-score': 0.8218333195720467, 'support': 29705.0} | {'precision': 0.8903965833313531, 'recall': 0.8934186163945463, 'f1-score': 0.8914691865640177, 'support': 29705.0} |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2