---
base_model: google/paligemma-3b-pt-224
library_name: peft
license: gemma
tags:
- generated_from_trainer
model-index:
- name: paligemma-cnmc-ft
  results: []
---

# paligemma-cnmc-ft

This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co./google/paligemma-3b-pt-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1871

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 170
- num_epochs: 100

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log        | 0.9645  | 17   | 1.4372          |
| No log        | 1.9858  | 35   | 1.1505          |
| 1.1745        | 2.9504  | 52   | 0.6419          |
| 1.1745        | 3.9716  | 70   | 0.3542          |
| 1.1745        | 4.9929  | 88   | 0.3141          |
| 0.3853        | 5.9574  | 105  | 0.2847          |
| 0.3853        | 6.9787  | 123  | 0.2544          |
| 0.3853        | 8.0     | 141  | 0.2498          |
| 0.2598        | 8.9645  | 158  | 0.2074          |
| 0.2598        | 9.9858  | 176  | 0.1840          |
| 0.2598        | 10.9504 | 193  | 0.1656          |
| 0.1867        | 11.9716 | 211  | 0.1665          |
| 0.1867        | 12.9929 | 229  | 0.1719          |
| 0.1867        | 13.9574 | 246  | 0.1871          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.43.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
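
### Training configuration (sketch)

The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction, not the original training script: the `output_dir` value is a placeholder, and `optim="adamw_torch"` is an assumption based on the reported Adam betas and epsilon, which match the Trainer's AdamW defaults.

```python
from transformers import TrainingArguments

# Sketch of the reported configuration; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="paligemma-cnmc-ft",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # total train batch size: 2 * 8 = 16
    optim="adamw_torch",            # assumed; betas=(0.9, 0.999), eps=1e-8 are defaults
    lr_scheduler_type="linear",
    warmup_steps=170,
    num_train_epochs=100,
)
```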
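
### Loading the adapter (sketch)

A minimal sketch of loading this PEFT adapter on top of the base PaliGemma checkpoint. The adapter repo id below is a placeholder (substitute the actual Hub path of this model), and the `"caption en"` prompt is only an example of PaliGemma's prompt format, not this model's training task.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration
from peft import PeftModel

base_id = "google/paligemma-3b-pt-224"
adapter_id = "your-username/paligemma-cnmc-ft"  # placeholder repo id

# Load the processor and base model, then apply the fine-tuned adapter.
processor = AutoProcessor.from_pretrained(base_id)
base_model = PaliGemmaForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Example inference; the prompt here is illustrative only.
image = Image.open("example.png").convert("RGB")
inputs = processor(text="caption en", images=image, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20)

# Strip the prompt tokens before decoding the generated text.
generated = out[0][inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```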