---
license: mit
base_model: indobenchmark/indobert-base-p1
tags:
- generated_from_keras_callback
model-index:
- name: aditnnda/gacoanReviewer
  results: []
---

<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->

# aditnnda/gacoanReviewer

This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co./indobenchmark/indobert-base-p1) on an unknown dataset.
It achieves the following results at the end of training (epoch 24):
- Train Loss: 0.0001
- Validation Loss: 0.5471
- Train Accuracy: 0.9163
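
A minimal inference sketch, assuming the checkpoint is published at `aditnnda/gacoanReviewer` with a TensorFlow sequence-classification head as the card implies; the example sentence and the printed label names are illustrative, not taken from the training data:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "aditnnda/gacoanReviewer"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

# Indonesian input is assumed, since the base model is IndoBERT.
# This review text is illustrative only.
inputs = tokenizer("Mie nya enak, pelayanannya juga cepat!", return_tensors="tf")
logits = model(**inputs).logits
pred = int(tf.argmax(logits, axis=-1)[0])
print(model.config.id2label[pred])  # e.g. "LABEL_0" / "LABEL_1" unless id2label was customized
```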

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- optimizer: Adam
  - learning_rate: PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: 3550, end_learning_rate: 0.0, power: 1.0, cycle: False)
  - beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
  - weight_decay: None, clipnorm: None, global_clipnorm: None, clipvalue: None
  - use_ema: False, ema_momentum: 0.99, ema_overwrite_frequency: None
  - jit_compile: True, is_legacy_optimizer: False
- training_precision: float32
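
For readability, here is a sketch of the same optimizer expressed through the Keras API (TF 2.15); all values are read from the serialized config above. With 25 epochs logged, decay_steps=3550 implies roughly 142 optimizer steps per epoch:

```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-05 to 0.0 over 3550 steps,
# matching the serialized PolynomialDecay config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=3550,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    use_ema=False,
    jit_compile=True,  # matches 'jit_compile': True in the config
)
```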

### Training results

| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2751     | 0.2043          | 0.9107         | 0     |
| 0.1202     | 0.2077          | 0.9177         | 1     |
| 0.0583     | 0.2770          | 0.9079         | 2     |
| 0.0435     | 0.3412          | 0.9066         | 3     |
| 0.0251     | 0.3762          | 0.9079         | 4     |
| 0.0208     | 0.2241          | 0.9303         | 5     |
| 0.0070     | 0.2794          | 0.9317         | 6     |
| 0.0151     | 0.3823          | 0.9219         | 7     |
| 0.0088     | 0.3740          | 0.9261         | 8     |
| 0.0019     | 0.4286          | 0.9261         | 9     |
| 0.0030     | 0.6086          | 0.8912         | 10    |
| 0.0052     | 0.4023          | 0.9344         | 11    |
| 0.0005     | 0.5193          | 0.9121         | 12    |
| 0.0002     | 0.5171          | 0.9135         | 13    |
| 0.0002     | 0.5276          | 0.9163         | 14    |
| 0.0002     | 0.5344          | 0.9135         | 15    |
| 0.0002     | 0.5362          | 0.9163         | 16    |
| 0.0001     | 0.5407          | 0.9163         | 17    |
| 0.0001     | 0.5406          | 0.9163         | 18    |
| 0.0001     | 0.5484          | 0.9149         | 19    |
| 0.0001     | 0.5406          | 0.9177         | 20    |
| 0.0001     | 0.5431          | 0.9177         | 21    |
| 0.0001     | 0.5453          | 0.9163         | 22    |
| 0.0001     | 0.5466          | 0.9163         | 23    |
| 0.0001     | 0.5471          | 0.9163         | 24    |


### Framework versions

- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.15.0
- Tokenizers 0.15.0
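
To reproduce this environment, the versions above can be pinned directly (a sketch; exact compatibility with your platform is not guaranteed):

```
transformers==4.35.2
tensorflow==2.15.0
datasets==2.15.0
tokenizers==0.15.0
```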