Hubert-kakeiken-W-elevator_hall
This model is a fine-tuned version of rinna/japanese-hubert-base on the ORIGINAL_KAKEIKEN_W_ELEVATOR_HALL - JA dataset. It achieves the following results on the evaluation set:
- Loss: 0.0253
- WER: 0.9988
- CER: 1.0162
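For reference, below is a minimal inference sketch. It assumes the checkpoint exposes the standard CTC interface used by HuBERT ASR fine-tunes in transformers (HubertForCTC with a Wav2Vec2-style processor); check the files in this repository to confirm the processor class before relying on it.

```python
# Minimal inference sketch (assumes a CTC head and a Wav2Vec2-style processor;
# verify against the files in this repository before use).
import torch
import librosa
from transformers import HubertForCTC, Wav2Vec2Processor

model_id = "utakumi/Hubert-kakeiken-W-elevator_hall"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = HubertForCTC.from_pretrained(model_id)
model.eval()

# Load audio at the 16 kHz sampling rate expected by japanese-hubert-base.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding of the most likely token at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```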
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 12500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
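As an illustration, the hyperparameters above map onto transformers.TrainingArguments roughly as follows. This is a sketch only; the original training script is not included in this card, and the output_dir name is assumed.

```python
# Sketch of the hyperparameters above expressed as TrainingArguments
# (illustrative only; the original training script is not part of this card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Hubert-kakeiken-W-elevator_hall",  # assumed name
    learning_rate=3e-5,
    per_device_train_batch_size=32,   # total train batch size 64 with accumulation
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=12500,
    num_train_epochs=40.0,
    fp16=True,                        # native AMP mixed-precision training
)
```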
Training results
Training Loss | Epoch | Step | Validation Loss | WER | CER |
---|---|---|---|---|---|
53.9873 | 1.0 | 820 | 19.7137 | 1.0 | 1.1284 |
16.2788 | 2.0 | 1640 | 12.8321 | 1.0 | 1.1284 |
11.6832 | 3.0 | 2460 | 5.6187 | 1.0 | 1.1284 |
4.3059 | 4.0 | 3280 | 3.3938 | 1.0 | 1.1284 |
3.1294 | 5.0 | 4100 | 2.8854 | 1.0 | 1.1284 |
2.6448 | 6.0 | 4920 | 1.4073 | 1.0 | 1.1019 |
1.022 | 7.0 | 5740 | 0.6323 | 1.0 | 1.0336 |
0.4449 | 8.0 | 6560 | 0.3110 | 0.9988 | 1.0403 |
0.305 | 9.0 | 7380 | 0.2348 | 0.9991 | 1.0485 |
0.1895 | 10.0 | 8200 | 0.1109 | 0.9988 | 1.0230 |
0.1437 | 11.0 | 9020 | 0.0931 | 0.9990 | 1.0221 |
0.1258 | 12.0 | 9840 | 0.0828 | 0.9988 | 1.0258 |
0.1175 | 13.0 | 10660 | 0.0814 | 0.9991 | 1.0232 |
0.1083 | 14.0 | 11480 | 0.0415 | 0.9988 | 1.0194 |
0.0974 | 15.0 | 12300 | 0.0653 | 0.9990 | 1.0239 |
0.0967 | 16.0 | 13120 | 0.0495 | 0.9991 | 1.0200 |
0.087 | 17.0 | 13940 | 0.0601 | 0.9990 | 1.0224 |
0.0798 | 18.0 | 14760 | 0.0544 | 0.9990 | 1.0218 |
0.0719 | 19.0 | 15580 | 0.0426 | 0.9990 | 1.0191 |
0.0731 | 20.0 | 16400 | 0.0587 | 0.9991 | 1.0208 |
0.0693 | 21.0 | 17220 | 0.0603 | 0.9988 | 1.0222 |
0.0614 | 22.0 | 18040 | 0.0361 | 0.9988 | 1.0191 |
0.0582 | 23.0 | 18860 | 0.0332 | 0.9988 | 1.0173 |
0.0535 | 24.0 | 19680 | 0.0347 | 0.9988 | 1.0172 |
0.0467 | 25.0 | 20500 | 0.0334 | 0.9988 | 1.0180 |
0.0456 | 26.0 | 21320 | 0.0283 | 0.9988 | 1.0164 |
0.0389 | 27.0 | 22140 | 0.0361 | 0.9988 | 1.0172 |
0.04 | 28.0 | 22960 | 0.0258 | 0.9988 | 1.0167 |
0.0348 | 29.0 | 23780 | 0.0328 | 0.9990 | 1.0176 |
0.0343 | 30.0 | 24600 | 0.0276 | 0.9988 | 1.0162 |
0.0323 | 31.0 | 25420 | 0.0297 | 0.9988 | 1.0165 |
0.0283 | 32.0 | 26240 | 0.0291 | 0.9988 | 1.0165 |
0.0275 | 33.0 | 27060 | 0.0252 | 0.9988 | 1.0161 |
0.0256 | 34.0 | 27880 | 0.0245 | 0.9988 | 1.0164 |
0.0241 | 35.0 | 28700 | 0.0240 | 0.9988 | 1.0159 |
0.0237 | 36.0 | 29520 | 0.0278 | 0.9988 | 1.0166 |
0.0238 | 37.0 | 30340 | 0.0275 | 0.9988 | 1.0163 |
0.022 | 38.0 | 31160 | 0.0247 | 0.9988 | 1.0163 |
0.0184 | 39.0 | 31980 | 0.0262 | 0.9988 | 1.0163 |
0.0199 | 39.9518 | 32760 | 0.0244 | 0.9988 | 1.0160 |
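The WER and CER columns can be recomputed from decoded outputs, for example with the evaluate library; the card does not state which metric implementation was used, so this is an assumption. Note that Japanese is usually written without spaces, so word-level WER is highly sensitive to how references are segmented, which may be why WER stays near 1.0 even as loss and CER improve; this is an interpretation, not something stated in the training logs.

```python
# Sketch: computing WER/CER with the `evaluate` library
# (assumed backend; the card does not state which implementation was used).
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["こんにちは 世界"]  # hypothetical decoded outputs
references = ["こんにちは 世界"]   # hypothetical ground-truth transcripts

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```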
Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0