---
base_model: meta-llama/Meta-Llama-3-8B
datasets:
  - generator
library_name: peft
license: llama3
tags:
  - trl
  - sft
  - generated_from_trainer
model-index:
  - name: RoBERTa_Llama3_dependent_V2
    results: []
---

# RoBERTa_Llama3_dependent_V2

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the generator dataset. It achieves the following results on the evaluation set:

- Loss: 1.3585
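
Because `library_name: peft` indicates this repository holds LoRA adapter weights rather than a full checkpoint, the adapter is loaded on top of the base model. A minimal loading sketch, assuming the adapter repo id is `MikaSie/RoBERTa_Llama3_dependent_V2` and that you have access to the gated base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"              # gated; requires an accepted Llama 3 license
adapter_id = "MikaSie/RoBERTa_Llama3_dependent_V2"  # assumed repo id for this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the LoRA adapter weights from this repository on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```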

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
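
The training script itself is not part of this card, but the hyperparameters above map onto a standard TRL `SFTTrainer` setup roughly as sketched below. The `LoraConfig` values and the inline placeholder dataset are assumptions, since neither the LoRA settings nor the generator dataset are documented here.

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder for the card's "generator" dataset, which is not published.
train_dataset = Dataset.from_dict({"text": ["example training document ..."]})

# Per-device values from the card; the total train batch size of 16 is
# 2 (per device) x 4 (GPUs) x 2 (gradient accumulation steps).
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="RoBERTa_Llama3_dependent_V2",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)

# LoRA settings are not listed on the card; these values are illustrative only.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32, lora_dropout=0.05)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",
    peft_config=peft_config,
)
trainer.train()
```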

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.9882 | 42   | 1.4093          |
| No log        | 2.0    | 85   | 1.3646          |
| No log        | 2.9882 | 127  | 1.3536          |
| No log        | 4.0    | 170  | 1.3487          |
| No log        | 4.9882 | 212  | 1.3485          |
| No log        | 6.0    | 255  | 1.3498          |
| No log        | 6.9882 | 297  | 1.3531          |
| No log        | 8.0    | 340  | 1.3552          |
| No log        | 8.9882 | 382  | 1.3572          |
| No log        | 9.8824 | 420  | 1.3585          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
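
To reproduce the reported results, matching these versions is the safest bet; a quick sanity check of the local environment (assuming the packages are installed):

```python
import datasets, peft, tokenizers, torch, transformers

# Print installed versions to compare against the list above.
for pkg in (peft, transformers, torch, datasets, tokenizers):
    print(pkg.__name__, pkg.__version__)
```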