# Steps to run continued pretraining 

1. Install the environment as described in `multilinguality_megatron/Readme.md` (a minimal setup sketch is shown after the commands in step 2).

2. Run the following commands:

```bash
conda activate towerllm-env
bash multilinguality_megatron/convert2megatron.sh
bash multilinguality_megatron/model_sharding.sh
bash multilinguality_megatron/continue_pretraining.sh
```
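
For reference, a minimal environment setup for step 1 might look like the sketch below; treat `multilinguality_megatron/Readme.md` as authoritative, since the Python version and requirements file name here are assumptions.

```bash
# Hypothetical setup sketch -- follow multilinguality_megatron/Readme.md for the actual steps.
conda create -n towerllm-env python=3.10                    # Python version is an assumption
conda activate towerllm-env
pip install -r multilinguality_megatron/requirements.txt    # requirements file name is an assumption
```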

Arguments to pay attention to:
```bash
convert2megatron.sh
        --megatron_model: Path where the Megatron weights are to be saved
        --model: Path or Hub ID of the Hugging Face model (KshitijAmbilduke/extended_non_uniform_model_tinyllama)
        --size: 1 (for TinyLlama)
        --repo: Location of the multilinguality_megatron repository


model_sharding.sh
        --megatron_model: Path where the Megatron weights are saved
        --sharded_model: Path of the folder in which to save the model shards
        --tp: Number of shards to create (number of shards == number of GPUs used)
        --vocab_size: 37005 (32000 base tokens + 5005 added tokens)


continue_pretraining.sh
        --data_path: "1 data/data_text_document" (dataset weight followed by the prefix of the preprocessed data)
        --megatron_model: Path of the folder containing the sharded model
        --model_dir: Path of the folder in which checkpoints are stored
        --tokenizer_path: Path of the extended tokenizer
        --tp: Number of shards (same value as in model_sharding.sh)
```
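
Putting the arguments together, an end-to-end run might look like the sketch below. The flag syntax and all paths are assumptions for illustration (check each script for the exact interface it expects); only the model ID, vocabulary size, and data_path value come from the list above.

```bash
# Hypothetical invocation sketch -- paths and flag syntax are assumptions, not the scripts' confirmed interface.
REPO=multilinguality_megatron                         # location of the cloned repository
MEGATRON_MODEL=checkpoints/tinyllama_megatron         # unsharded Megatron weights
SHARDED_MODEL=checkpoints/tinyllama_megatron_tp4      # sharded weights (tp=4 assumes 4 GPUs)

bash $REPO/convert2megatron.sh \
    --megatron_model $MEGATRON_MODEL \
    --model KshitijAmbilduke/extended_non_uniform_model_tinyllama \
    --size 1 \
    --repo $REPO

bash $REPO/model_sharding.sh \
    --megatron_model $MEGATRON_MODEL \
    --sharded_model $SHARDED_MODEL \
    --tp 4 \
    --vocab_size 37005

bash $REPO/continue_pretraining.sh \
    --data_path "1 data/data_text_document" \
    --megatron_model $SHARDED_MODEL \
    --model_dir checkpoints/continued_pretraining \
    --tokenizer_path tokenizers/extended_tokenizer \
    --tp 4
```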