Llama-3.1-8B-Instruct-SAA-300

This model is a PEFT adapter fine-tuned from meta-llama/Llama-3.1-8B-Instruct on the bct_non_cot_dpo_300 dataset. It achieves the following results on the evaluation set (a loading sketch follows the list):

  • Loss: 0.2542
  • Rewards/chosen: -0.0199
  • Rewards/rejected: -0.0490
  • Rewards/accuracies: 0.7333
  • Rewards/margins: 0.0291
  • Logps/rejected: -0.4904
  • Logps/chosen: -0.1993
  • Logits/rejected: -0.4804
  • Logits/chosen: -0.4435
  • SFT Loss: 0.0199
  • Odds Ratio Loss: 2.3427
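
Because this repository contains only adapter weights, they must be applied on top of the base model. The snippet below is a minimal loading sketch, not an official usage recipe: it assumes the adapter id chchen/Llama-3.1-8B-Instruct-SAA-300, access to the base model, and hardware that can hold the 8B model in bfloat16.

```python
# Minimal sketch: load the base model and attach this adapter for chat inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "chchen/Llama-3.1-8B-Instruct-SAA-300"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # apply the fine-tuned adapter

messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```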

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

  • learning_rate: 5e-06
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10.0
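
For reference, the sketch below expresses these values through transformers' TrainingArguments. It is an illustration only: the card does not state which training framework produced the adapter, and the SFT and odds-ratio loss terms above indicate a preference-optimization objective beyond the plain Trainer.

```python
# Sketch of the reported hyperparameters as transformers TrainingArguments.
# output_dir is a hypothetical placeholder; the Adam betas/epsilon listed in the
# card match these explicit settings.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-3.1-8B-Instruct-SAA-300",  # hypothetical
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=8,  # 2 per device x 8 steps = total batch size 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```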

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | SFT Loss | Odds Ratio Loss |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|:---------------:|
| 1.0613        | 2.9630 | 50   | 0.9214          | -0.0866        | -0.1163          | 0.7667             | 0.0297          | -1.1632        | -0.8661      | -0.4969         | -0.4510       | 0.0877   | 8.3368          |
| 0.3375        | 5.9259 | 100  | 0.3373          | -0.0282        | -0.0580          | 0.7333             | 0.0297          | -0.5795        | -0.2823      | -0.4847         | -0.4453       | 0.0271   | 3.1020          |
| 0.2209        | 8.8889 | 150  | 0.2542          | -0.0199        | -0.0490          | 0.7333             | 0.0291          | -0.4904        | -0.1993      | -0.4804         | -0.4435       | 0.0199   | 2.3427          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.45.2
  • PyTorch 2.3.0
  • Datasets 2.19.0
  • Tokenizers 0.20.0