---
license: apache-2.0
base_model: h2oai/h2o-danube2-1.8b-base
datasets:
- cgato/SlimOrcaDedupCleaned
language:
- en
library_name: transformers
tags:
- llama-factory
- unsloth
---
# h2o-danube2 with ChatML template

This model was first fine-tuned with [BAdam](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models") on [cgato/SlimOrcaDedupCleaned](https://huggingface.co./datasets/cgato/SlimOrcaDedupCleaned) using [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).

## Template

```jinja
<|im_start|>system
{{system}}<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
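
If the tokenizer ships this template as its chat template, prompts can be built with `apply_chat_template` instead of by hand. A minimal usage sketch, assuming the checkpoint is available at the placeholder path `./danube2-chatml` (substitute the actual repo id):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./danube2-chatml"  # placeholder path, not the published repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Name the three primary colors."},
]

# Renders the ChatML format shown above and appends the
# `<|im_start|>assistant` header so the model continues as the assistant.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```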

## BAdam config

```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 13
seed: 314

### dataset
dataset: slimorca_dedup_cleaned
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: slim-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 4
learning_rate: 0.000005
num_train_epochs: 1
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 2000
```
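
This is a standard LLaMA-Factory YAML config; it can typically be launched with `llamafactory-cli train <config>.yaml`. With `use_badam: true`, BAdam performs full-parameter updates on one transformer block at a time: `badam_switch_mode: ascending` walks through the blocks in order starting from `badam_start_block: 13`, moving to the next block every `badam_switch_interval: 50` steps. A rough sketch of that schedule (illustrative only, not LLaMA-Factory's implementation; the block count of 24 and the wrap-around behaviour are assumptions based on the h2o-danube2-1.8b architecture):

```python
NUM_BLOCKS = 24        # assumed decoder block count of h2o-danube2-1.8b
SWITCH_INTERVAL = 50   # badam_switch_interval
START_BLOCK = 13       # badam_start_block

def active_block(step: int) -> int:
    """Index of the single block receiving Adam updates at `step`,
    assuming the ascending schedule wraps past the last block."""
    return (START_BLOCK + step // SWITCH_INTERVAL) % NUM_BLOCKS

for step in (0, 49, 50, 550, 1200):
    print(f"step {step:>5}: updating block {active_block(step)}")
```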

### BAdam training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.8535        | 0.0889 | 2000  | 0.8340          |
| 0.8735        | 0.1778 | 4000  | 0.8128          |
| 0.8054        | 0.2668 | 6000  | 0.8008          |
| 0.7907        | 0.3557 | 8000  | 0.8002          |
| 0.8749        | 0.4446 | 10000 | 0.7972          |
| 0.7463        | 0.5335 | 12000 | 0.7899          |
| 0.7762        | 0.6225 | 14000 | 0.7870          |
| 0.8231        | 0.7114 | 16000 | 0.7854          |
| 0.8686        | 0.8003 | 18000 | 0.7801          |
| 0.9159        | 0.8892 | 20000 | 0.7877          |
| 0.8281        | 0.9782 | 22000 | 0.7786          |