---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-0.5B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: reranker_continuous_filt_max7_train
  results: []
---

# reranker_continuous_filt_max7_train

This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) on the reranker_continuous_filt_max7_train dataset. It achieves the following results on the evaluation set:

- Loss: 0.3869
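
As a minimal loading sketch: the repo id below is assumed, not confirmed by this card, and the card does not document the reranker's expected prompt template, so the generation call is generic rather than the intended scoring interface.

```python
# Minimal loading sketch -- the repo id is an assumption, not stated in this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ptrdvn/reranker_continuous_filt_max7_train"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# The card does not document the reranker's prompt format, so this is a
# generic generation call, not the intended scoring interface.
inputs = tokenizer("Query: ...\nDocument: ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```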

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.0
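
These settings map directly onto `transformers.TrainingArguments`. The following is a hedged sketch of an equivalent configuration; dataset loading, the model setup, and the training objective are omitted because the card does not document them.

```python
# Hedged reconstruction of the reported hyperparameters using
# transformers.TrainingArguments; dataset and model setup are omitted
# because this card does not document them.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="reranker_continuous_filt_max7_train",
    learning_rate=1e-5,
    per_device_train_batch_size=1,  # x 8 GPUs -> total train batch size 8
    per_device_eval_batch_size=1,   # x 8 GPUs -> total eval batch size 8
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=1.0,
)
# Launched across 8 GPUs, e.g.: torchrun --nproc_per_node=8 train.py
```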

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.403         | 0.1000 | 1977  | 0.4783          |
| 0.5192        | 0.2000 | 3954  | 0.4524          |
| 0.3639        | 0.3000 | 5931  | 0.4370          |
| 0.4343        | 0.4000 | 7908  | 0.4286          |
| 0.3929        | 0.5000 | 9885  | 0.4163          |
| 0.4455        | 0.6000 | 11862 | 0.4040          |
| 0.3775        | 0.7000 | 13839 | 0.3947          |
| 0.3629        | 0.8000 | 15816 | 0.3898          |
| 0.5186        | 0.9000 | 17793 | 0.3872          |
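
As a back-of-the-envelope check (assuming evaluation every 0.1 epoch, as the step column suggests), the step counts imply the approximate size of the training set; this is an inference, not a figure stated in the card.

```python
# Rough dataset-size estimate derived from the table above.
steps_per_tenth_epoch = 1977                          # step count at epoch 0.1
total_train_batch_size = 8                            # from the hyperparameters
steps_per_epoch = steps_per_tenth_epoch * 10          # ~19,770 steps per epoch
examples = steps_per_epoch * total_train_batch_size   # ~158,160 training examples
print(examples)
```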

### Framework versions

- Transformers 4.46.1
- PyTorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3