run

This model is a fine-tuned version of google/gemma-2b on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4823
  • Accuracy: 0.75

Model description

More information needed

Intended uses & limitations

More information needed
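
The intended task is not documented. For reference, this repository is a PEFT adapter rather than a full model, so it is loaded through the `peft` library. Below is a minimal sketch, assuming the adapter repo id `Baidicoot/run`, a causal-LM adapter over `google/gemma-2b` (the task head is not stated in the card), and access to the gated Gemma base weights:

```python
# Minimal loading sketch, not an official usage example.
# Assumptions: the adapter lives at Baidicoot/run and targets a causal-LM head.
# pip install "peft==0.12.0" "transformers==4.44.0" torch accelerate
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "Baidicoot/run"  # assumed repo id

# AutoPeftModelForCausalLM reads adapter_config.json, downloads the
# google/gemma-2b base weights, and attaches the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```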

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5.0
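
As a hedged reconstruction only: the settings above map onto `transformers.TrainingArguments` roughly as follows. The dataset, any LoRA/PEFT configuration, and the `Trainer` wiring are not documented in the card and are omitted.

```python
# Sketch: the listed hyperparameters expressed as TrainingArguments.
# output_dir is assumed; the "Adam" line above matches the transformers
# default AdamW settings (betas=(0.9, 0.999), eps=1e-08).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="run",                # assumed, not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # 32 * 2 = total train batch size 64
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
)
```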

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.6406        | 0.1773 | 25   | 0.5870          | 0.6806   |
| 0.5156        | 0.3546 | 50   | 0.5284          | 0.7292   |
| 0.6289        | 0.5319 | 75   | 0.4963          | 0.7639   |
| 0.4766        | 0.7092 | 100  | 0.4715          | 0.7639   |
| 0.3594        | 0.8865 | 125  | 0.4581          | 0.7431   |
| 0.4082        | 1.0638 | 150  | 0.4663          | 0.75     |
| 0.3262        | 1.2411 | 175  | 0.4452          | 0.7569   |
| 0.3594        | 1.4184 | 200  | 0.4306          | 0.75     |
| 0.4033        | 1.5957 | 225  | 0.4411          | 0.7639   |
| 0.3789        | 1.7730 | 250  | 0.4331          | 0.7708   |
| 0.293         | 1.9504 | 275  | 0.4652          | 0.7569   |
| 0.3555        | 2.1277 | 300  | 0.4356          | 0.7569   |
| 0.4375        | 2.3050 | 325  | 0.4415          | 0.7569   |
| 0.377         | 2.4823 | 350  | 0.4525          | 0.7292   |
| 0.3633        | 2.6596 | 375  | 0.4505          | 0.7708   |
| 0.2461        | 2.8369 | 400  | 0.4581          | 0.7431   |
| 0.3115        | 3.0142 | 425  | 0.4499          | 0.7361   |
| 0.3896        | 3.1915 | 450  | 0.4421          | 0.75     |
| 0.373         | 3.3688 | 475  | 0.4602          | 0.7569   |
| 0.2415        | 3.5461 | 500  | 0.4537          | 0.7639   |
| 0.334         | 3.7234 | 525  | 0.4650          | 0.7569   |
| 0.3662        | 3.9007 | 550  | 0.4750          | 0.7569   |
| 0.3232        | 4.0780 | 575  | 0.4778          | 0.7361   |
| 0.3369        | 4.2553 | 600  | 0.4709          | 0.75     |
| 0.5273        | 4.4326 | 625  | 0.4780          | 0.75     |
| 0.3623        | 4.6099 | 650  | 0.4839          | 0.75     |
| 0.2148        | 4.7872 | 675  | 0.4855          | 0.7431   |
| 0.3604        | 4.9645 | 700  | 0.4823          | 0.75     |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.44.0
  • Pytorch 2.4.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1
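
To sanity-check a local environment against these pins, a small sketch:

```python
# Sketch: compare locally installed package versions against the ones
# listed above. Informational only; exact matches are not required.
import importlib.metadata as md

expected = {
    "peft": "0.12.0",
    "transformers": "4.44.0",
    "torch": "2.4.0",  # card lists 2.4.0+cu121; the CUDA suffix is ignored here
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
for pkg, want in expected.items():
    try:
        have = md.version(pkg)
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed (card lists {want})")
        continue
    status = "matches" if have.split("+")[0] == want else "differs"
    print(f"{pkg}: installed {have}, card lists {want} ({status})")
```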
