
Llama-3.1-8B-Instruct-EI2-2ep-sft-bs

This model is a fine-tuned version of qfq/Llama-3.1-8B-Instruct-EI1-2ep-sft on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1866
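
A minimal usage sketch for loading this checkpoint with the standard `transformers` API. The chat message, generation settings, and the bf16 cast are illustrative assumptions, not documented behavior of this fine-tune.

```python
# Minimal usage sketch (assumptions noted inline), using the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qfq/Llama-3.1-8B-Instruct-EI2-2ep-sft-bs"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: cast to bf16 to save memory at inference time
    device_map="auto",
)

# Example prompt; the model inherits the Llama 3.1 Instruct chat template.
messages = [{"role": "user", "content": "Explain what supervised fine-tuning is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```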

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

  • learning_rate: 6e-06
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 16
  • total_train_batch_size: 16
  • total_eval_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 2.0
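
Since the training dataset and script are not documented, the exact setup cannot be reproduced here. The sketch below only shows how the listed hyperparameters could map onto `transformers` `TrainingArguments`; the evaluation/logging intervals are inferred from the results table, and the multi-GPU launch (16 devices) and mixed-precision choice are assumptions.

```python
# Illustrative mapping of the listed hyperparameters onto TrainingArguments
# (transformers 4.43). This is a sketch, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-3.1-8B-Instruct-EI2-2ep-sft-bs",
    learning_rate=6e-6,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    eval_strategy="steps",
    eval_steps=100,                  # validation loss is reported every 100 steps
    logging_steps=500,               # training loss is reported every 500 steps
    bf16=True,                       # assumption; precision is not stated in the card
)

# With 16 GPUs (num_devices: 16) and a per-device batch size of 1, the
# effective batch size is 16, matching total_train_batch_size above.
```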

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.1456 | 100  | 0.2037          |
| No log        | 0.2911 | 200  | 0.2081          |
| No log        | 0.4367 | 300  | 0.2095          |
| No log        | 0.5822 | 400  | 0.2072          |
| 0.1766        | 0.7278 | 500  | 0.2051          |
| 0.1766        | 0.8734 | 600  | 0.1987          |
| 0.1766        | 1.0189 | 700  | 0.2012          |
| 0.1766        | 1.1645 | 800  | 0.2008          |
| 0.1766        | 1.3100 | 900  | 0.1960          |
| 0.1264        | 1.4556 | 1000 | 0.1920          |
| 0.1264        | 1.6012 | 1100 | 0.1894          |
| 0.1264        | 1.7467 | 1200 | 0.1877          |
| 0.1264        | 1.8923 | 1300 | 0.1866          |

Framework versions

  • Transformers 4.43.4
  • Pytorch 2.4.0+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1