---
license: cc-by-nc-4.0
base_model: facebook/mms-1b-all
tags:
  - generated_from_trainer
datasets:
  - cs224s
metrics:
  - wer
model-index:
  - name: mms1b-finetuned-somali-2
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: cs224s
          type: cs224s
          config: default
          split: validation
          args: default
        metrics:
          - name: Wer
            type: wer
            value: 0.7171474358974359
---

# mms1b-finetuned-somali-2

This model is a fine-tuned version of facebook/mms-1b-all on the cs224s dataset. It achieves the following results on the evaluation set:

- Loss: inf
- Wer: 0.7171
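Running this checkpoint follows the standard Wav2Vec2/MMS CTC inference pattern. The sketch below is an assumption based on the base model's API, not something stated on the card; the checkpoint id `laalays/mms1b-finetuned-somali-2` is inferred from the card title and uploader and may differ, and it assumes 16 kHz mono audio readable by `soundfile`.

```python
def transcribe(audio_path: str,
               model_id: str = "laalays/mms1b-finetuned-somali-2"):
    """Greedy CTC decoding with the fine-tuned MMS checkpoint.

    The model id is inferred from this card and is an assumption.
    MMS expects 16 kHz mono audio.
    """
    # Imports live inside the function so the sketch is cheap to import.
    import torch
    import soundfile as sf
    from transformers import AutoProcessor, Wav2Vec2ForCTC

    processor = AutoProcessor.from_pretrained(model_id)
    model = Wav2Vec2ForCTC.from_pretrained(model_id)

    speech, sample_rate = sf.read(audio_path)
    inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Greedy decode: most likely token at each frame, then collapse via the processor.
    ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(ids)[0]
```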

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
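The linear scheduler with 100 warmup steps can be sketched numerically. This mirrors the shape of transformers' `get_linear_schedule_with_warmup`; the total step count (161, roughly one epoch) is inferred from the training log and is an assumption, not stated on the card.

```python
def linear_warmup_lr(step: int,
                     base_lr: float = 1e-3,
                     warmup_steps: int = 100,
                     total_steps: int = 161) -> float:
    """Learning rate at a given optimizer step: linear warmup, then linear decay.

    base_lr and warmup_steps come from the card; total_steps=161 is an
    assumption inferred from the log (160 steps ~ 0.99 epochs).
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp 0 -> base_lr
    # decay base_lr -> 0 over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Note that with this schedule the peak learning rate (0.001) is reached exactly at step 100, which is why the warmup steps dominate most of this one-epoch run.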

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 6.6588        | 0.0621 | 10   | inf             | 1.0780 |
| 4.564         | 0.1242 | 20   | inf             | 1.0011 |
| 3.0343        | 0.1863 | 30   | inf             | 0.9997 |
| 0.0           | 0.2484 | 40   | inf             | 1.0    |
| 2.7924        | 0.3106 | 50   | inf             | 1.0    |
| 2.4904        | 0.3727 | 60   | inf             | 0.9960 |
| 1.9781        | 0.4348 | 70   | inf             | 0.7764 |
| 0.0           | 0.4969 | 80   | inf             | 0.7893 |
| 1.2978        | 0.5590 | 90   | inf             | 0.7252 |
| 1.3457        | 0.6211 | 100  | inf             | 0.7145 |
| 1.7188        | 0.6832 | 110  | inf             | 0.6912 |
| 0.0           | 0.7453 | 120  | inf             | 0.7086 |
| 1.3715        | 0.8075 | 130  | inf             | 0.9119 |
| 1.09          | 0.8696 | 140  | inf             | 0.7236 |
| 1.7369        | 0.9317 | 150  | inf             | 0.7123 |
| 0.0           | 0.9938 | 160  | inf             | 0.7171 |
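The Wer column above is word error rate: the word-level edit distance between reference and hypothesis transcripts divided by the number of reference words, so the final 0.7171 means roughly 72 errors per 100 reference words. A minimal pure-Python sketch of the metric:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming table over hypothesis positions.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = d[j]
            # deletion, insertion, or substitution/match
            d[j] = min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
            prev = cur
    return d[-1] / len(ref)
```

In practice the trainer typically computes this with the `evaluate`/`jiwer` WER metric; the sketch above is an illustrative reimplementation, not the card's actual evaluation code.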

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1