---
license: cc-by-4.0
base_model: allegro/herbert-large-cased
tags:
  - generated_from_trainer
datasets:
  - universal_dependencies
model-index:
  - name: herbert-large-cased_deprel
    results: []
---

# herbert-large-cased_deprel

This model is a fine-tuned version of [allegro/herbert-large-cased](https://huggingface.co/allegro/herbert-large-cased) on the universal_dependencies dataset. It achieves the following results on the evaluation set:

- Loss: 0.4494
- Overall Precision: 0.8599
- Overall Recall: 0.7975
- Overall F1: 0.8275
- Overall Accuracy: 0.8468

Per-label scores on the evaluation set are given in the table below. The label names come straight out of seqeval, which appears to treat the first character of each dependency relation as an IOB chunk prefix and strips it (`Unct` is punct, `Oot` is root, `Ase` is case, and so on); relations that differ only in their first character, such as amod and nmod, may therefore be merged under a single key, and one label was reduced to an empty string.

| Label | Precision | Recall | F1 | Support |
|:--|--:|--:|--:|--:|
| (empty) | 0.9848 | 0.9155 | 0.9489 | 71 |
| Arataxis:insert | 0.6216 | 0.3433 | 0.4423 | 67 |
| Arataxis:obj | 0.6429 | 0.4655 | 0.5400 | 58 |
| Ark | 0.8614 | 0.7944 | 0.8266 | 180 |
| Ase | 0.9363 | 0.9001 | 0.9178 | 1421 |
| Bj | 0.8612 | 0.8115 | 0.8356 | 520 |
| Bl | 0.8000 | 0.8054 | 0.8027 | 740 |
| Bl:agent | 0.6875 | 0.6875 | 0.6875 | 16 |
| Bl:arg | 0.7847 | 0.7107 | 0.7459 | 318 |
| Bl:cmpr | 0.8462 | 0.6471 | 0.7333 | 17 |
| C | 0.8974 | 0.8116 | 0.8524 | 345 |
| C:preconj | 1.0000 | 0.3333 | 0.5000 | 6 |
| Cl | 0.8581 | 0.8141 | 0.8355 | 156 |
| Cl:relcl | 0.7368 | 0.5526 | 0.6316 | 76 |
| Comp | 0.7606 | 0.7079 | 0.7333 | 202 |
| Comp:cleft | 0.0000 | 0.0000 | 0.0000 | 4 |
| Comp:obj | 0.5000 | 0.2917 | 0.3684 | 24 |
| Comp:pred | 0.4375 | 0.7000 | 0.5385 | 10 |
| Comp:subj | 0.0000 | 0.0000 | 0.0000 | 1 |
| Dvcl | 0.7983 | 0.7480 | 0.7724 | 127 |
| Dvcl:cmpr | 0.3333 | 0.2500 | 0.2857 | 4 |
| Dvmod | 0.8132 | 0.7789 | 0.7957 | 380 |
| Dvmod:arg | 0.0000 | 0.0000 | 0.0000 | 4 |
| Dvmod:emph | 0.7755 | 0.7355 | 0.7550 | 155 |
| Dvmod:neg | 0.9068 | 0.8492 | 0.8770 | 126 |
| Et | 0.9072 | 0.7928 | 0.8462 | 111 |
| Et:numgov | 0.8421 | 0.8000 | 0.8205 | 20 |
| Et:nummod | 0.5000 | 1.0000 | 0.6667 | 1 |
| Et:poss | 0.8929 | 0.8621 | 0.8772 | 58 |
| Iscourse:intj | 0.0000 | 0.0000 | 0.0000 | 2 |
| Ist | 1.0000 | 0.6667 | 0.8000 | 9 |
| Ixed | 0.6833 | 0.4767 | 0.5616 | 86 |
| Lat | 0.6724 | 0.5417 | 0.6000 | 72 |
| Mod | 0.7808 | 0.7138 | 0.7458 | 1188 |
| Mod:arg | 0.5682 | 0.4878 | 0.5249 | 205 |
| Mod:flat | 0.5610 | 0.3898 | 0.4600 | 59 |
| Mod:poss | 0.0000 | 0.0000 | 0.0000 | 4 |
| Mod:pred | 0.0000 | 0.0000 | 0.0000 | 1 |
| Obj | 0.7906 | 0.6833 | 0.7330 | 221 |
| Ocative | 0.7500 | 0.9000 | 0.8182 | 10 |
| Onj | 0.7921 | 0.6517 | 0.7151 | 491 |
| Oot | 0.9550 | 0.9550 | 0.9550 | 1000 |
| Op | 0.7975 | 0.7683 | 0.7826 | 82 |
| Ppos | 0.7273 | 0.5424 | 0.6214 | 59 |
| Rphan | 0.0000 | 0.0000 | 0.0000 | 3 |
| Subj | 0.9121 | 0.8826 | 0.8971 | 835 |
| Subj:pass | 0.7727 | 0.5862 | 0.6667 | 29 |
| Ummod | 0.8769 | 0.8906 | 0.8837 | 64 |
| Ummod:gov | 0.7347 | 0.7200 | 0.7273 | 50 |
| Unct | 0.9216 | 0.8517 | 0.8853 | 2016 |
| Ux | 0.9167 | 0.6111 | 0.7333 | 36 |
| Ux:clitic | 0.9474 | 0.9000 | 0.9231 | 60 |
| Ux:cnd | 0.8000 | 0.7273 | 0.7619 | 22 |
| Ux:imp | 1.0000 | 0.7500 | 0.8571 | 4 |
| Ux:pass | 0.7297 | 0.6923 | 0.7105 | 39 |
| Xpl:pv | 0.8973 | 0.8410 | 0.8683 | 239 |
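The truncated names are consistent with how seqeval parses tags when given plain labels. A minimal sketch of that behaviour, assuming seqeval is installed (the exact report layout may vary between versions):

```python
from seqeval.metrics import classification_report

# Plain dependency-relation labels with no B-/I- prefixes, as emitted
# by a token-classification head trained on deprel tags.
y_true = [["punct", "root", "case"]]
y_pred = [["punct", "root", "case"]]

# seqeval reads the leading character of each tag as an IOB chunk
# marker, so the per-type rows come out as "unct", "oot" and "ase"
# rather than punct, root and case.
print(classification_report(y_true, y_pred))
```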

## Model description

More information needed

## Intended uses & limitations

More information needed
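As a minimal usage sketch, the checkpoint can be loaded like any token-classification model. The Hub id `izaitova/herbert-large-cased_deprel` is inferred from this card's title and uploader namespace, so adjust it if the repository lives elsewhere:

```python
from transformers import pipeline

# Repository id assumed from the card title; adjust if needed.
tagger = pipeline(
    "token-classification",
    model="izaitova/herbert-large-cased_deprel",
)

# Each token of a Polish sentence gets a predicted dependency
# relation label (punct, root, case, ...).
for token in tagger("Wczoraj kupiłem nową książkę."):
    print(token["word"], token["entity"])
```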

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
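For reference, a sketch of how these hyperparameters map onto `transformers.TrainingArguments`; the output directory is illustrative, not taken from the original training script:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list above; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="herbert-large-cased_deprel",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,           # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,        # epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```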

### Training results

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1