fine-tuned-llama3-1b-small-newsGenerator-PTBR
metadata
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
  - generated_from_trainer
model-index:
  - name: results_llama_1b_small
    results: []

results_llama_1b_small

This model is a PEFT fine-tuned version of meta-llama/Llama-3.2-1B; the training dataset is not specified in this card. It achieves the following result on the evaluation set (a loading sketch is shown below):

  • Loss: 1.2799
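
The card itself does not include a usage snippet, so here is a minimal loading sketch. It assumes the adapter is published under the repo id shown in this card's title and that the model expects Portuguese news-style prompts; the prompt below is only illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-1B"
# Assumed from the card title; adjust to the actual adapter repo id.
adapter_id = "gui8600k/fine-tuned-llama3-1b-small-newsGenerator-PTBR"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the PEFT adapter

# Illustrative prompt only; the expected input format is not documented in this card.
inputs = tokenizer("Manchete: ", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```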

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching configuration sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 2
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2
  • num_epochs: 1
  • mixed_precision_training: Native AMP
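
As a sketch, the hyperparameters above correspond to roughly the following TrainingArguments. Only the values listed above are grounded in this card; the output directory, the evaluation/logging cadence (inferred from the 1000-step results table below), and everything about the dataset and PEFT adapter configuration are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results_llama_1b_small",  # assumed from the model-index name
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective train batch size of 2
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=1,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-8
    eval_strategy="steps",           # assumed from the 1000-step eval cadence
    eval_steps=1000,                 # assumed
    logging_steps=1000,              # assumed
)
```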

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3753        | 0.1431 | 1000 | 1.3197          |
| 1.2734        | 0.2862 | 2000 | 1.3018          |
| 1.2748        | 0.4292 | 3000 | 1.2926          |
| 1.3498        | 0.5723 | 4000 | 1.2866          |
| 1.1796        | 0.7154 | 5000 | 1.2825          |
| 1.28          | 0.8585 | 6000 | 1.2799          |
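
Assuming the reported loss is mean cross-entropy in nats per token (the Trainer default for causal language modeling), the final validation loss corresponds to a token-level perplexity of roughly exp(1.2799):

```python
import math

# Perplexity implied by the final validation loss of 1.2799.
print(math.exp(1.2799))  # ~3.60
```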

Framework versions

  • PEFT 0.14.0
  • Transformers 4.47.1
  • PyTorch 2.5.1+cu124
  • Datasets 2.17.0
  • Tokenizers 0.21.0
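
To approximate this environment, pinning the matching releases (e.g. `pip install peft==0.14.0 transformers==4.47.1 datasets==2.17.0 tokenizers==0.21.0` together with a PyTorch 2.5.1 build for your CUDA setup) should be sufficient; the `+cu124` wheel used here is the CUDA 12.4 build of PyTorch.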