---
license: apache-2.0
base_model: BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny
tags:
- generated_from_trainer
datasets:
- BramVanroy/dutch_chat_datasets
model-index:
- name: 2e-4lr+64tbs+32a+4r
  results: []
---
# 2e-4lr+64tbs+32a+4r
This model is a fine-tuned version of BramVanroy/llama2-13b-ft-mc4_nl_cleaned_tiny on the BramVanroy/dutch_chat_datasets dataset. It achieves the following results on the evaluation set:
- Loss: 1.0848
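
Below is a minimal loading sketch with 🤗 Transformers. The repo id is a placeholder for this model's actual Hub id, and the plain-string prompt is an assumption, since the chat template used during fine-tuning is not documented in this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BramVanroy/2e-4lr+64tbs+32a+4r"  # placeholder: replace with this model's actual Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 13B parameters: fp16 weights need roughly 26 GB of GPU memory
    device_map="auto",
)

# The exact prompt/chat format is not documented here; a plain Dutch prompt is
# used purely for illustration.
prompt = "Wat is de hoofdstad van Nederland?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```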
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
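
As a rough reconstruction, the list above corresponds to approximately the following `TrainingArguments` (argument names as in Transformers 4.31). This is a sketch inferred from the reported values, not the exact training script; in particular, `output_dir` is a placeholder, and the multi-GPU launch (4 devices) would be handled by the launcher (e.g. `torchrun`/`accelerate`) rather than by these arguments.

```python
from transformers import TrainingArguments

# Effective train batch size: 2 per device * 4 GPUs * 8 accumulation steps = 64,
# matching the total_train_batch_size reported above.
training_args = TrainingArguments(
    output_dir="2e-4lr+64tbs+32a+4r",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=2,
)
```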
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0193        | 0.09  | 20   | 1.1583          |
| 0.9743        | 0.17  | 40   | 1.1339          |
| 0.9159        | 0.26  | 60   | 1.1218          |
| 0.9131        | 0.35  | 80   | 1.1153          |
| 0.8816        | 0.44  | 100  | 1.1130          |
| 0.8977        | 0.52  | 120  | 1.1069          |
| 0.9061        | 0.61  | 140  | 1.1025          |
| 0.8672        | 0.7   | 160  | 1.1024          |
| 0.8956        | 0.79  | 180  | 1.0971          |
| 0.8514        | 0.87  | 200  | 1.0995          |
| 0.8357        | 0.96  | 220  | 1.0952          |
| 0.8294        | 1.05  | 240  | 1.0964          |
| 0.8531        | 1.13  | 260  | 1.0947          |
| 0.8321        | 1.22  | 280  | 1.0951          |
| 0.8365        | 1.31  | 300  | 1.0910          |
| 0.8616        | 1.4   | 320  | 1.0894          |
| 0.8397        | 1.48  | 340  | 1.0904          |
| 0.861         | 1.57  | 360  | 1.0880          |
| 0.8116        | 1.66  | 380  | 1.0871          |
| 0.8285        | 1.74  | 400  | 1.0855          |
| 0.8603        | 1.83  | 420  | 1.0856          |
| 0.8126        | 1.92  | 440  | 1.0848          |
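
Assuming the reported loss is the standard mean per-token cross-entropy, the final validation loss of 1.0848 corresponds to a perplexity of about 2.96:

```python
import math

final_eval_loss = 1.0848
# Perplexity is exp(cross-entropy loss); valid only if the loss is per-token cross-entropy.
perplexity = math.exp(final_eval_loss)
print(f"Validation perplexity: {perplexity:.2f}")  # ~2.96
```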
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3