---
base_model: Edmon02/TTS_NB
tags:
  - generated_from_trainer
model-index:
  - name: TTS_NB
    results: []
---

# TTS_NB

This model is a fine-tuned version of [Edmon02/TTS_NB](https://huggingface.co/Edmon02/TTS_NB) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.4601

## Model description

More information needed

## Intended uses & limitations

More information needed
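
The intended use is not documented, but since the repository name suggests a text-to-speech model, a minimal loading sketch with the `transformers` text-to-speech pipeline might look like the following. The pipeline task, the output fields, and the absence of extra inputs (e.g. speaker embeddings, which some TTS architectures require) are assumptions, not facts stated by this card.

```python
# Hypothetical usage sketch -- assumes Edmon02/TTS_NB is a text-to-speech
# checkpoint compatible with the transformers "text-to-speech" pipeline.
from transformers import pipeline

tts = pipeline("text-to-speech", model="Edmon02/TTS_NB")

# Generate audio for a sample sentence; the pipeline returns a dict with a
# waveform array and its sampling rate. Some models (e.g. SpeechT5-based)
# would additionally need speaker embeddings passed via forward_params.
speech = tts("Hello, this is a test sentence.")
print(speech["sampling_rate"], len(speech["audio"]))
```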

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 6000
- mixed_precision_training: Native AMP
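
For illustration, these values map roughly onto the `transformers` Trainer API as shown below. This is a reconstruction, not the original training script; the `output_dir` is a placeholder, and the model/data loading is omitted.

```python
# Hedged reconstruction of the training configuration from the list above.
# Anything beyond the listed values (e.g. output_dir) is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="TTS_NB",            # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,  # 4 * 8 = 32 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=6000,
    fp16=True,                      # "Native AMP" mixed precision
)
```

The optimizer line above matches the Trainer defaults (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8), so no explicit optimizer arguments are needed in this sketch.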

### Training results

| Training Loss | Epoch    | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.6192        | 27.3973  | 500  | 0.5595          |
| 0.5686        | 54.7945  | 1000 | 0.5097          |
| 0.537         | 82.1918  | 1500 | 0.4940          |
| 0.5239        | 109.5890 | 2000 | 0.4815          |
| 0.5143        | 136.9863 | 2500 | 0.4775          |
| 0.5087        | 166.2740 | 3000 | 0.4622          |
| 0.5007        | 193.6712 | 3500 | 0.4592          |
| 0.5083        | 137.5527 | 4000 | 0.4671          |
| 0.5025        | 154.4304 | 4500 | 0.4657          |
| 0.4963        | 171.3080 | 5000 | 0.4626          |
| 0.4982        | 188.1857 | 5500 | 0.4603          |
| 0.4939        | 205.0633 | 6000 | 0.4601          |

### Framework versions

- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1