---
base_model: bert-base-chinese
tags:
  - generated_from_keras_callback
model-index:
  - name: node-py/my_awesome_eli5_clm-model
    results: []
---

node-py/my_awesome_eli5_clm-model

This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set:

  • Train Loss: 0.7162
  • Epoch: 48
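
A minimal usage sketch (not part of the original card; it assumes the checkpoint loads with the standard TensorFlow Auto classes and that the repo id above is the published one):

```python
# Hedged sketch: load the checkpoint as a TF causal LM and run a forward pass.
from transformers import AutoTokenizer, TFAutoModelForCausalLM

repo_id = "node-py/my_awesome_eli5_clm-model"  # repo id taken from the card
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForCausalLM.from_pretrained(repo_id)

# bert-base-chinese vocabulary, so a Chinese prompt is used here.
inputs = tokenizer("今天天气", return_tensors="tf")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```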

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a minimal instantiation sketch follows the list):

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
  • training_precision: float32
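
The optimizer settings above correspond to the AdamWeightDecay class shipped with Transformers; the sketch below only re-creates that optimizer from the listed values and is not the author's training script:

```python
# Sketch: rebuild the optimizer from the hyperparameters listed above.
from transformers import AdamWeightDecay

optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
)
# `model` would be the Keras model being fine-tuned (assumption, not shown on the card):
# model.compile(optimizer=optimizer)
```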

Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 6.5795 | 0 |
| 5.8251 | 1 |
| 5.3850 | 2 |
| 5.0469 | 3 |
| 4.8048 | 4 |
| 4.6144 | 5 |
| 4.4743 | 6 |
| 4.3366 | 7 |
| 4.2178 | 8 |
| 4.1022 | 9 |
| 3.9908 | 10 |
| 3.8856 | 11 |
| 3.7700 | 12 |
| 3.6673 | 13 |
| 3.5560 | 14 |
| 3.4401 | 15 |
| 3.3328 | 16 |
| 3.2248 | 17 |
| 3.1290 | 18 |
| 3.0121 | 19 |
| 2.8978 | 20 |
| 2.7830 | 21 |
| 2.6913 | 22 |
| 2.5822 | 23 |
| 2.4772 | 24 |
| 2.3761 | 25 |
| 2.2792 | 26 |
| 2.1664 | 27 |
| 2.0731 | 28 |
| 1.9734 | 29 |
| 1.8900 | 30 |
| 1.7927 | 31 |
| 1.7036 | 32 |
| 1.6202 | 33 |
| 1.5329 | 34 |
| 1.4535 | 35 |
| 1.3778 | 36 |
| 1.3093 | 37 |
| 1.2413 | 38 |
| 1.1709 | 39 |
| 1.1114 | 40 |
| 1.0563 | 41 |
| 0.9950 | 42 |
| 0.9344 | 43 |
| 0.8830 | 44 |
| 0.8380 | 45 |
| 0.7966 | 46 |
| 0.7552 | 47 |
| 0.7162 | 48 |
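
The card reports only the training loss. If that loss is the mean token-level cross-entropy in nats (an assumption; the card does not say), the corresponding training perplexity is simply exp(loss):

```python
# Assumption: reported loss = mean cross-entropy in nats, so perplexity = exp(loss).
import math

final_train_loss = 0.7162  # epoch 48, from the table above
print(math.exp(final_train_loss))  # ≈ 2.05
```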

Framework versions

  • Transformers 4.44.0
  • TensorFlow 2.16.1
  • Datasets 2.21.0
  • Tokenizers 0.19.1