# Train

## Tokenizer

```bash
cd scripts
python -m venv venv
source venv/bin/activate
pip install -U -r requirements.in
```

```bash
python -B train_tokenizer.py
```
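The internals of `train_tokenizer.py` are not shown in this README; as an illustrative sketch only, here is a minimal BPE tokenizer trained in memory with the Hugging Face `tokenizers` library (the corpus, vocabulary size, and special tokens are placeholder assumptions, not the script's actual settings):

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Toy BPE tokenizer trained on an in-memory corpus; stand-in for whatever
# corpus and settings train_tokenizer.py actually uses.
tokenizer = Tokenizer(models.BPE(unk_token='[UNK]'))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=100, special_tokens=['[UNK]'])
tokenizer.train_from_iterator(['hello world', 'hello tokenizer'], trainer)

ids = tokenizer.encode('hello world').ids  # token ids for the input text
```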

## Dataset

```bash
cd scripts
python -m venv venv-lit
source venv-lit/bin/activate
pip install -U -r requirements-lit.in
```

```bash
python -B prepare_pretrain_dataset.py
```

## Model

```bash
cd scripts
python -m venv venv-lit
source venv-lit/bin/activate
pip install -U -r requirements-lit.in
```

```bash
litgpt pretrain --config ./model.yaml
```

```bash
# Convert the litgpt checkpoint to a Hugging Face-compatible state dict
litgpt convert_from_litgpt out/pretrain/final/ out/converted_model
# Make the model config available next to both checkpoints
cp config.json out/pretrain/final/
cp config.json out/converted_model/
```

```python
import torch
from transformers import AutoModel

# Load the converted weights into the TinyLlama v1.1 architecture and
# re-save the checkpoint in Hugging Face format.
state_dict = torch.load('out/converted_model/model.pth')
model = AutoModel.from_pretrained('TinyLlama/TinyLlama_v1.1', state_dict=state_dict, ignore_mismatched_sizes=True)
model.save_pretrained('out/converted_model/')
```
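The conversion step above hands a loaded `state_dict` to `from_pretrained`. A self-contained toy round-trip of that underlying mechanic (an in-memory buffer stands in for `out/converted_model/model.pth`, and `nn.Linear` for the real model):

```python
import io

import torch
import torch.nn as nn

# Save a model's state dict, load it back, and load it into a fresh module --
# the same save/load round-trip the conversion step relies on.
model = nn.Linear(4, 2)
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)
buffer.seek(0)

state_dict = torch.load(buffer)
fresh = nn.Linear(4, 2)
fresh.load_state_dict(state_dict)

# The fresh module now carries identical weights.
assert torch.equal(model.weight, fresh.weight)
```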

## Evaluate

```bash
# smaller task subset:
# litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --batch_size 8 out/pretrain/final/

litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,mmlu_pro,winogrande,arc_challenge,leaderboard,ifeval,mgsm_direct,mathqa,gpqa' --batch_size 8 out/pretrain/final/
```