---
library_name: transformers
base_model: bert-base-chinese
tags:
  - generated_from_keras_callback
model-index:
  - name: node-py/my_awesome_eli5_clm-model
    results: []
---

node-py/my_awesome_eli5_clm-model

This model is a fine-tuned version of bert-base-chinese on an unknown dataset. It achieves the following results during training:

  • Train Loss: 0.0530
  • Epoch: 34
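
As a quick usage sketch (not part of the auto-generated card), the checkpoint can be loaded with the TensorFlow classes from transformers. The model id below is taken from the model-index above; the causal-LM head is assumed from the "clm" name and may need to be swapped for whichever head the checkpoint was actually saved with.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Model id from the model-index above; the causal-LM head is an assumption
# based on the "clm" naming and may need adjusting to the saved head.
model_id = "node-py/my_awesome_eli5_clm-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

# Forward pass on a short Chinese prompt; logits has shape
# (batch, sequence_length, vocab_size).
inputs = tokenizer("今天天气", return_tensors="tf")
logits = model(**inputs).logits
print(logits.shape)
```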

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
  • training_precision: float32
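
For reference, the optimizer dictionary above matches the signature of transformers' TF AdamWeightDecay. Below is a minimal sketch of rebuilding it from the recorded values; the original training script is not included in this card.

```python
from transformers import AdamWeightDecay

# Rebuild the optimizer from the hyperparameters recorded above
# (a sketch, not the original training script).
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)
# training_precision float32 corresponds to Keras' default global policy,
# so no mixed-precision setup is needed to match it.
```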

Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 0.0882     | 0     |
| 0.0878     | 1     |
| 0.0852     | 2     |
| 0.0824     | 3     |
| 0.0810     | 4     |
| 0.0812     | 5     |
| 0.0790     | 6     |
| 0.0772     | 7     |
| 0.0755     | 8     |
| 0.0749     | 9     |
| 0.0717     | 10    |
| 0.0722     | 11    |
| 0.0718     | 12    |
| 0.0689     | 13    |
| 0.0863     | 14    |
| 0.0838     | 15    |
| 0.0731     | 16    |
| 0.0768     | 17    |
| 0.0675     | 18    |
| 0.0646     | 19    |
| 0.0650     | 20    |
| 0.0627     | 21    |
| 0.0610     | 22    |
| 0.0594     | 23    |
| 0.0585     | 24    |
| 0.0585     | 25    |
| 0.0577     | 26    |
| 0.0569     | 27    |
| 0.0565     | 28    |
| 0.0542     | 29    |
| 0.0531     | 30    |
| 0.0531     | 31    |
| 0.0532     | 32    |
| 0.0535     | 33    |
| 0.0530     | 34    |

Framework versions

  • Transformers 4.44.2
  • TensorFlow 2.17.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1