Labira/LabiraPJOK_1_100

This model is a fine-tuned version of indolem/indobert-base-uncased on an unknown dataset. It achieves the following results at the end of training:

  • Train Loss: 0.0026
  • Validation Loss: 8.2710
  • Epoch: 94
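
Because the card does not yet document the downstream task or dataset, the snippet below is only a minimal loading sketch: it assumes TensorFlow weights are published for this checkpoint (the card lists TensorFlow under "Framework versions") and uses TFAutoModel, which returns base encoder outputs rather than task-specific predictions. The appropriate TFAutoModelFor* head would depend on the intended use once it is documented.

```python
# Minimal loading sketch (assumption: TensorFlow weights are available for this checkpoint).
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("Labira/LabiraPJOK_1_100")
model = TFAutoModel.from_pretrained("Labira/LabiraPJOK_1_100")

# Encode an example Indonesian sentence and run the base encoder.
inputs = tokenizer("Contoh kalimat dalam bahasa Indonesia.", return_tensors="tf")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```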

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: Adam
      • learning_rate: PolynomialDecay (initial_learning_rate: 2e-05, decay_steps: 300, end_learning_rate: 0.0, power: 1.0, cycle: False, name: None, registered_name: None)
      • beta_1: 0.9, beta_2: 0.999, epsilon: 1e-08, amsgrad: False
      • weight_decay: None, clipnorm: None, clipvalue: None, global_clipnorm: None
      • use_ema: False, ema_momentum: 0.99, ema_overwrite_frequency: None
      • jit_compile: True, is_legacy_optimizer: False
  • training_precision: float32
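
For reference, the reported optimizer configuration maps onto the following Keras objects. This is a sketch that mirrors the values listed above; fields left at their defaults (e.g. weight_decay=None, clipnorm=None) are omitted.

```python
import tensorflow as tf

# Linear (power=1.0) decay from 2e-05 to 0.0 over 300 steps, as reported above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=300,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)

# Adam with the reported beta/epsilon settings; the card additionally reports
# jit_compile=True and training_precision=float32.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
)
```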

Training results

Train Loss Validation Loss Epoch
0.5573 6.1112 0
0.2525 4.7174 1
0.4330 4.9959 2
0.2496 5.5703 3
0.1868 5.9061 4
0.1491 6.0854 5
0.1951 6.3417 6
0.0492 6.4913 7
0.0296 6.5959 8
0.0352 6.7091 9
0.0530 6.7985 10
0.0265 6.9577 11
0.0239 7.1006 12
0.0221 7.2133 13
0.0171 7.3099 14
0.0154 7.4198 15
0.0089 7.5009 16
0.0268 7.4562 17
0.0150 7.4317 18
0.0153 7.4442 19
0.0076 7.4628 20
0.0137 7.5225 21
0.0186 7.5907 22
0.0078 7.6655 23
0.0087 7.7399 24
0.0074 7.8006 25
0.0082 7.8462 26
0.0113 7.8900 27
0.0092 7.9273 28
0.0075 7.9621 29
0.0066 8.0058 30
0.0061 8.0359 31
0.0058 8.0681 32
0.0043 8.0693 33
0.0058 8.0939 34
0.0066 8.0860 35
0.0065 8.0647 36
0.0040 8.0451 37
0.0039 8.0354 38
0.0050 8.0227 39
0.0033 8.0230 40
0.0049 8.0296 41
0.0055 8.0414 42
0.0042 8.0576 43
0.0035 8.0749 44
0.0041 8.0896 45
0.0032 8.1037 46
0.0037 8.1187 47
0.0038 8.1324 48
0.0040 8.1458 49
0.0046 8.1626 50
0.0028 8.1726 51
0.0061 8.1180 52
0.0043 8.0579 53
0.0032 8.0223 54
0.0029 8.0125 55
0.0068 8.0192 56
0.0034 8.0336 57
0.0044 8.0478 58
0.0025 8.0648 59
0.0026 8.0813 60
0.0031 8.0949 61
0.0024 8.1060 62
0.0030 8.1093 63
0.0051 8.1350 64
0.0046 8.1498 65
0.0057 8.1556 66
0.0030 8.1641 67
0.0038 8.1758 68
0.0040 8.1901 69
0.0027 8.2013 70
0.0036 8.2115 71
0.0055 8.2151 72
0.0025 8.2120 73
0.0026 8.2121 74
0.0036 8.2132 75
0.0031 8.2141 76
0.0023 8.2159 77
0.0036 8.2201 78
0.0032 8.2284 79
0.0025 8.2278 80
0.0030 8.2375 81
0.0028 8.2451 82
0.0023 8.2491 83
0.0073 8.2581 84
0.0035 8.2639 85
0.0024 8.2646 86
0.0031 8.2628 87
0.0036 8.2641 88
0.0025 8.2660 89
0.0031 8.2669 90
0.0047 8.2678 91
0.0021 8.2687 92
0.0023 8.2698 93
0.0026 8.2710 94

Framework versions

  • Transformers 4.44.2
  • TensorFlow 2.17.0
  • Datasets 3.0.1
  • Tokenizers 0.19.1
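
To pin a matching environment, the versions above correspond directly to the usual PyPI package names (transformers, tensorflow, datasets, tokenizers):

```text
transformers==4.44.2
tensorflow==2.17.0
datasets==3.0.1
tokenizers==0.19.1
```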