---
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
  - GaetanMichelet/chat-60_ft_task-2_auto
library_name: peft
license: apache-2.0
tags:
  - alignment-handbook
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: Mistral-7B_task-2_60-samples_config-1_full_auto
    results: []
---

Mistral-7B_task-2_60-samples_config-1_full_auto

This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.3 on the GaetanMichelet/chat-60_ft_task-2_auto dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7818
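
This repository contains a PEFT adapter on top of mistralai/Mistral-7B-Instruct-v0.3 rather than a full set of model weights. A minimal inference sketch is shown below; the adapter repo id is assumed to match the model name above, and the generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
# Assumed to match the name of this card; adjust if the adapter lives elsewhere.
adapter_id = "GaetanMichelet/Mistral-7B_task-2_60-samples_config-1_full_auto"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Attach the PEFT adapter produced by this fine-tune on top of the base weights.
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```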

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
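
The dataset named in the metadata, GaetanMichelet/chat-60_ft_task-2_auto, can be inspected with the `datasets` library as sketched below. Its split names and column layout are not documented in this card, so the snippet only prints what is actually there.

```python
from datasets import load_dataset

# Repository id taken from the card metadata; splits and columns are not documented
# here, so inspect them rather than assuming a fixed schema.
ds = load_dataset("GaetanMichelet/chat-60_ft_task-2_auto")
print(ds)                  # available splits and their sizes
first_split = next(iter(ds))
print(ds[first_split][0])  # first example of the first split
```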

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
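
As a rough illustration, the values above map onto `transformers.TrainingArguments` as sketched below. The actual run was driven by the alignment-handbook / TRL SFT recipe, so this is an approximation rather than the original training script; `output_dir` and any fields not listed above are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-7B_task-2_60-samples_config-1_full_auto",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=8,  # 1 x 8 gives the total train batch size of 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```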

Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.1106        | 0.8696  | 5    | 1.0845          |
| 1.0327        | 1.9130  | 11   | 0.9768          |
| 0.9473        | 2.9565  | 17   | 0.9031          |
| 0.8032        | 4.0     | 23   | 0.8144          |
| 0.7285        | 4.8696  | 28   | 0.7916          |
| 0.6954        | 5.9130  | 34   | 0.7828          |
| 0.6329        | 6.9565  | 40   | 0.7818          |
| 0.6007        | 8.0     | 46   | 0.7876          |
| 0.5438        | 8.8696  | 51   | 0.8092          |
| 0.4987        | 9.9130  | 57   | 0.8370          |
| 0.4104        | 10.9565 | 63   | 0.8623          |
| 0.3387        | 12.0    | 69   | 0.9043          |
| 0.3103        | 12.8696 | 74   | 0.9516          |
| 0.2409        | 13.9130 | 80   | 1.0892          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.0
  • Pytorch 2.1.2+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1