# dysarthria_emo_enhancer_0_0
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the custom_torgo_0_0 dataset merged with the UASpeech dataset. It achieves the following results on the evaluation set:
- WER: 34.5269
- WER Ortho: 36.0478
It achieves the following results on the combined TORGO + UASpeech training set:
- Accuracy: 0.68
- WER: 32.28
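
The two WER figures differ only in text normalization: WER Ortho is scored on the raw (orthographic) transcripts, while WER is scored after normalization. Below is a minimal sketch of how these metrics are typically computed with the `evaluate` library and the Whisper text normalizer; the prediction and reference strings are hypothetical placeholders, not data from this card.

```python
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

# Hypothetical prediction/reference pairs standing in for real evaluation data.
predictions = ["the quick brown fox jumps over the lazy dog"]
references = ["The quick brown fox jumps over the lazy dog."]

# Orthographic WER: scored on raw text, casing and punctuation included.
wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)

# Normalized WER: lowercasing and punctuation stripping applied before scoring.
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"WER: {wer:.2f}  WER Ortho: {wer_ortho:.2f}")
```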
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
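
As a rough guide, here is a minimal sketch of how these settings map onto `Seq2SeqTrainingArguments` in the Transformers Trainer API. The `output_dir` value and the `fp16`, evaluation, and generation settings are assumptions not stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./dysarthria_emo_enhancer_0_0",  # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=2,   # effective (total) train batch size: 16
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    evaluation_strategy="steps",     # assumption: evaluate periodically during training
    predict_with_generate=True,      # assumption: decode full sequences so WER can be scored
    fp16=True,                       # assumption: mixed precision on a CUDA GPU
)
# The optimizer (AdamW with betas=(0.9, 0.999) and epsilon=1e-08) is the
# Trainer default, so it needs no extra configuration here.
```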
### Framework versions
- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.1
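
A minimal usage sketch for loading the fine-tuned checkpoint through the `transformers` ASR pipeline; the audio file name is a hypothetical placeholder for a dysarthric speech recording.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="FilippoLampa/dysarthria-emo-enhancer",
)

# "sample.wav" is a placeholder; the pipeline resamples common audio formats.
print(asr("sample.wav")["text"])
```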