---
base_model: google/gemma-2-2b-it
datasets:
- GaetanMichelet/chat-60_ft_task-3
- GaetanMichelet/chat-120_ft_task-3
- GaetanMichelet/chat-180_ft_task-3
library_name: peft
license: gemma
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: Gemma-2-2B_task-3_180-samples_config-2_full
results: []
---
# Gemma-2-2B_task-3_180-samples_config-2_full
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) on the GaetanMichelet/chat-60_ft_task-3, GaetanMichelet/chat-120_ft_task-3, and GaetanMichelet/chat-180_ft_task-3 datasets.
It achieves the following results on the evaluation set:
- Loss: 0.9456
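The card does not include inference code, so here is a minimal loading sketch with PEFT. The adapter repo id is an assumption inferred from the model name above, and the generation settings are illustrative only.

```python
# Minimal inference sketch. The adapter repo id is an ASSUMPTION inferred
# from the model name on this card; adjust it to the actual Hub location.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "google/gemma-2-2b-it"
adapter_id = "GaetanMichelet/Gemma-2-2B_task-3_180-samples_config-2_full"  # assumed

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the PEFT adapter

messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```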
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
This model was fine-tuned on the GaetanMichelet/chat-60_ft_task-3, GaetanMichelet/chat-120_ft_task-3, and GaetanMichelet/chat-180_ft_task-3 datasets (180 samples in total, per the model name). No further details about their contents are provided.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged reconstruction in code follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
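The `trl` and `alignment-handbook` tags suggest a standard SFT setup; the sketch below reconstructs the hyperparameters above as a `transformers` `TrainingArguments` object. This is a hedged reconstruction, not the author's script: `output_dir` and `bf16` are assumptions, and the Adam betas/epsilon in the list are the library defaults.

```python
# Hedged reconstruction of the listed hyperparameters; the actual training
# script is not part of this card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Gemma-2-2B_task-3_180-samples_config-2_full",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 1 x 16 = 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults,
    # matching the optimizer line above.
    bf16=True,  # ASSUMPTION: precision is not stated on the card
)
```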
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.3981 | 0.9412 | 8 | 1.3785 |
| 1.3072 | 2.0 | 17 | 1.2429 |
| 1.1985 | 2.9412 | 25 | 1.1442 |
| 0.9971 | 4.0 | 34 | 1.0312 |
| 0.9268 | 4.9412 | 42 | 0.9882 |
| 0.9442 | 6.0 | 51 | 0.9653 |
| 0.9253 | 6.9412 | 59 | 0.9537 |
| 0.8684 | 8.0 | 68 | 0.9479 |
| 0.8043 | 8.9412 | 76 | 0.9456 |
| 0.7924 | 10.0 | 85 | 0.9502 |
| 0.7535 | 10.9412 | 93 | 0.9591 |
| 0.694 | 12.0 | 102 | 0.9863 |
| 0.6881 | 12.9412 | 110 | 0.9994 |
| 0.6566 | 14.0 | 119 | 1.0534 |
| 0.5597 | 14.9412 | 127 | 1.1117 |
| 0.497 | 16.0 | 136 | 1.1691 |
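Validation loss reaches its minimum of 0.9456 at epoch ~8.94 (the value reported at the top of the card) and rises steadily afterwards, so the run overfits well before the configured 50 epochs; the table ending at epoch 16 suggests early stopping. For intuition, the best cross-entropy loss converts to perplexity as follows:

```python
# Perplexity = exp(cross-entropy loss), here for the best validation loss.
import math

best_val_loss = 0.9456
print(math.exp(best_val_loss))  # ~2.57
```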
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- PyTorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
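To reproduce this environment, the versions above can be pinned at install time. A sketch (assuming a wheel for your platform exists; running the equivalent `pip install ...` directly in a shell works just as well):

```python
# Pin the framework versions listed above. Running `pip install ...` in a
# shell does the same; this is just a self-contained Python equivalent.
import subprocess
import sys

subprocess.check_call([
    sys.executable, "-m", "pip", "install",
    "peft==0.12.0",
    "transformers==4.44.0",
    "datasets==2.20.0",
    "tokenizers==0.19.1",
    "torch==2.1.2",  # card lists 2.1.2+cu121; pick the wheel matching your CUDA
])
```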