---
base_model: bert-base-chinese
tags:
- generated_from_keras_callback
model-index:
- name: node-py/my_awesome_eli5_clm-model
  results: []
---

# node-py/my_awesome_eli5_clm-model

This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co./bert-base-chinese) on an unknown dataset.
It achieves the following results during training:
- Train Loss: 0.8830
- Epoch: 44

## Model description

More information needed

## Intended uses & limitations

More information needed
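
In the meantime, here is a minimal, hypothetical loading sketch. It assumes the checkpoint was saved with a causal-LM head, as the repo name `my_awesome_eli5_clm-model` suggests (note the base model, BERT, is originally a masked LM); the Chinese prompt is arbitrary:

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

# Both the tokenizer and the TF weights live in the model repo.
tokenizer = AutoTokenizer.from_pretrained("node-py/my_awesome_eli5_clm-model")
model = TFAutoModelForCausalLM.from_pretrained("node-py/my_awesome_eli5_clm-model")

# Greedy generation from a short Chinese prompt ("The weather today").
inputs = tokenizer("今天天气", return_tensors="tf")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```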

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (reconstructed in the sketch after the list):
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
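
The optimizer dictionary above corresponds to the `AdamWeightDecay` class from transformers' TensorFlow utilities. A minimal sketch of rebuilding it; the commented `compile` call assumes the usual Keras fine-tuning setup and is illustrative only:

```python
from transformers import AdamWeightDecay

# Rebuild the optimizer from the hyperparameters listed above.
optimizer = AdamWeightDecay(
    learning_rate=2e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay_rate=0.01,
)

# transformers TF models compute their own loss when compiled without one:
# model.compile(optimizer=optimizer)
```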

### Training results

| Train Loss | Epoch |
|:----------:|:-----:|
| 6.5795     | 0     |
| 5.8251     | 1     |
| 5.3850     | 2     |
| 5.0469     | 3     |
| 4.8048     | 4     |
| 4.6144     | 5     |
| 4.4743     | 6     |
| 4.3366     | 7     |
| 4.2178     | 8     |
| 4.1022     | 9     |
| 3.9908     | 10    |
| 3.8856     | 11    |
| 3.7700     | 12    |
| 3.6673     | 13    |
| 3.5560     | 14    |
| 3.4401     | 15    |
| 3.3328     | 16    |
| 3.2248     | 17    |
| 3.1290     | 18    |
| 3.0121     | 19    |
| 2.8978     | 20    |
| 2.7830     | 21    |
| 2.6913     | 22    |
| 2.5822     | 23    |
| 2.4772     | 24    |
| 2.3761     | 25    |
| 2.2792     | 26    |
| 2.1664     | 27    |
| 2.0731     | 28    |
| 1.9734     | 29    |
| 1.8900     | 30    |
| 1.7927     | 31    |
| 1.7036     | 32    |
| 1.6202     | 33    |
| 1.5329     | 34    |
| 1.4535     | 35    |
| 1.3778     | 36    |
| 1.3093     | 37    |
| 1.2413     | 38    |
| 1.1709     | 39    |
| 1.1114     | 40    |
| 1.0563     | 41    |
| 0.9950     | 42    |
| 0.9344     | 43    |
| 0.8830     | 44    |
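
Assuming the reported loss is the mean per-token cross-entropy in nats (the usual convention for this training setup), the final value implies a training perplexity of roughly exp(0.8830) ≈ 2.42:

```python
import math

# Training perplexity implied by the final loss at epoch 44,
# assuming mean per-token cross-entropy in nats.
print(math.exp(0.8830))  # ≈ 2.42
```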


### Framework versions

- Transformers 4.44.0
- TensorFlow 2.16.1
- Datasets 2.21.0
- Tokenizers 0.19.1