Built with Axolotl

6b67ac92-efd2-4551-8eea-03e7ce027d2a

This model is a fine-tuned version of Qwen/Qwen2-0.5B-Instruct on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.3801
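
This repository contains a PEFT adapter rather than standalone model weights. Below is a minimal, hedged loading sketch; the base and adapter repo ids are taken from this card, and the generation call is illustrative only.

```python
# Minimal sketch: load this PEFT adapter onto its base model and run a quick generation.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2-0.5B-Instruct"
adapter_id = "lesso11/6b67ac92-efd2-4551-8eea-03e7ce027d2a"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)

# Use the base model's chat template for an instruct-style prompt.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output = model.generate(inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```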

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments equivalent is sketched after this list):

  • learning_rate: 0.000211
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 50
  • training_steps: 500
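
For reference, here is a hedged sketch of a roughly equivalent transformers TrainingArguments configuration. The output_dir is hypothetical; the remaining values mirror the list above, with optim="adamw_bnb_8bit" corresponding to OptimizerNames.ADAMW_BNB.

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",           # hypothetical output path
    learning_rate=2.11e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=2,  # effective total train batch size: 4 * 2 = 8
    optim="adamw_bnb_8bit",         # 8-bit AdamW from bitsandbytes (OptimizerNames.ADAMW_BNB)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=500,
)
```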

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log        | 0.0004 | 1    | 3.0103          |
| 3.0992        | 0.0176 | 50   | 2.7828          |
| 3.0439        | 0.0353 | 100  | 3.1977          |
| 3.1371        | 0.0529 | 150  | 2.8632          |
| 2.719         | 0.0705 | 200  | 2.5822          |
| 2.9678        | 0.0882 | 250  | 2.5127          |
| 3.011         | 0.1058 | 300  | 2.4688          |
| 3.0224        | 0.1234 | 350  | 2.4107          |
| 3.0345        | 0.1411 | 400  | 2.3896          |
| 2.8108        | 0.1587 | 450  | 2.3815          |
| 2.9961        | 0.1763 | 500  | 2.3801          |
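
For intuition, the final validation loss of 2.3801 corresponds to a perplexity of exp(2.3801) ≈ 10.8, assuming the loss is the standard mean per-token cross-entropy.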

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.0+cu124
  • Datasets 3.0.1
  • Tokenizers 0.20.1