---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-uncased
tags:
  - generated_from_trainer
metrics:
  - f1
model-index:
  - name: NLP-at-home
    results: []
---

# NLP-at-home

This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.6113
- F1: 0.7954
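
The card does not state the downstream task; the sketch below assumes sequence classification (suggested by the F1 metric) and infers the repo id `TeamNL/NLP-at-home` from the card title, so adjust both if they do not match your setup.

```python
# Minimal inference sketch. Assumptions: the task is sequence
# classification (the card only reports F1) and the model is hosted
# at "TeamNL/NLP-at-home" (inferred from the card title).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "TeamNL/NLP-at-home"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# The base model is multilingual, so non-English input is fine.
inputs = tokenizer("Una frase de ejemplo para clasificar.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```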

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
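
Expressed as Transformers `TrainingArguments` (matching the 4.41.2 release listed under framework versions), the settings above look roughly like the sketch below; `output_dir` and the per-epoch evaluation schedule are assumptions not stated in the card.

```python
# Sketch of the reported hyperparameters as TrainingArguments.
# The Adam betas/epsilon and linear scheduler are the Trainer defaults,
# so they need no extra arguments, but they are spelled out for clarity.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="NLP-at-home",         # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",            # assumed: the table logs one eval per epoch
)
```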

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-:|:-:|:-:|:-:|:-:|
| 1.55 | 1.0 | 24 | 1.4596 | 0.3490 |
| 1.4214 | 2.0 | 48 | 1.3385 | 0.4138 |
| 1.2835 | 3.0 | 72 | 1.1953 | 0.4668 |
| 1.1497 | 4.0 | 96 | 1.0505 | 0.5073 |
| 1.0245 | 5.0 | 120 | 0.9573 | 0.5699 |
| 0.8971 | 6.0 | 144 | 0.8738 | 0.6081 |
| 0.7976 | 7.0 | 168 | 0.8189 | 0.6308 |
| 0.7152 | 8.0 | 192 | 0.7628 | 0.6572 |
| 0.6554 | 9.0 | 216 | 0.7268 | 0.6595 |
| 0.593 | 10.0 | 240 | 0.6843 | 0.7038 |
| 0.5481 | 11.0 | 264 | 0.6570 | 0.6999 |
| 0.4966 | 12.0 | 288 | 0.6382 | 0.7195 |
| 0.4585 | 13.0 | 312 | 0.6236 | 0.7169 |
| 0.425 | 14.0 | 336 | 0.6207 | 0.6902 |
| 0.3826 | 15.0 | 360 | 0.6062 | 0.7246 |
| 0.3512 | 16.0 | 384 | 0.6122 | 0.7498 |
| 0.3225 | 17.0 | 408 | 0.5965 | 0.7596 |
| 0.3051 | 18.0 | 432 | 0.5882 | 0.7503 |
| 0.2834 | 19.0 | 456 | 0.5915 | 0.7556 |
| 0.2538 | 20.0 | 480 | 0.6036 | 0.7599 |
| 0.2458 | 21.0 | 504 | 0.5993 | 0.7455 |
| 0.2186 | 22.0 | 528 | 0.5876 | 0.7598 |
| 0.2081 | 23.0 | 552 | 0.5915 | 0.7495 |
| 0.1893 | 24.0 | 576 | 0.5855 | 0.7736 |
| 0.1732 | 25.0 | 600 | 0.6043 | 0.7445 |
| 0.1675 | 26.0 | 624 | 0.5903 | 0.7598 |
| 0.1505 | 27.0 | 648 | 0.5872 | 0.7820 |
| 0.141 | 28.0 | 672 | 0.5923 | 0.7847 |
| 0.1333 | 29.0 | 696 | 0.5937 | 0.7859 |
| 0.1225 | 30.0 | 720 | 0.5885 | 0.7888 |
| 0.1113 | 31.0 | 744 | 0.5829 | 0.7882 |
| 0.1012 | 32.0 | 768 | 0.5783 | 0.7887 |
| 0.0997 | 33.0 | 792 | 0.5830 | 0.7887 |
| 0.0936 | 34.0 | 816 | 0.5773 | 0.7992 |
| 0.0867 | 35.0 | 840 | 0.5876 | 0.8033 |
| 0.0844 | 36.0 | 864 | 0.5836 | 0.7887 |
| 0.0803 | 37.0 | 888 | 0.5947 | 0.7842 |
| 0.0731 | 38.0 | 912 | 0.5983 | 0.7981 |
| 0.0718 | 39.0 | 936 | 0.5971 | 0.7851 |
| 0.0672 | 40.0 | 960 | 0.6077 | 0.7932 |
| 0.0672 | 41.0 | 984 | 0.6077 | 0.7941 |
| 0.063 | 42.0 | 1008 | 0.6087 | 0.7954 |
| 0.0597 | 43.0 | 1032 | 0.6059 | 0.7954 |
| 0.0593 | 44.0 | 1056 | 0.5986 | 0.8073 |
| 0.0592 | 45.0 | 1080 | 0.6020 | 0.7954 |
| 0.0533 | 46.0 | 1104 | 0.6060 | 0.8008 |
| 0.0537 | 47.0 | 1128 | 0.6091 | 0.7994 |
| 0.0531 | 48.0 | 1152 | 0.6096 | 0.8113 |
| 0.0516 | 49.0 | 1176 | 0.6112 | 0.7954 |
| 0.0532 | 50.0 | 1200 | 0.6113 | 0.7954 |

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
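
To check that a local environment matches these pins, a small verification snippet (not part of the original card):

```python
# Print installed versions to compare against the pinned list above.
import datasets
import tokenizers
import torch
import transformers

for name, mod in [
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {mod.__version__}")
```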