---
library_name: transformers
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
language:
- en
metrics:
- accuracy
---

# Model Card for MicroLlama

As an individual with limited access to compute, I have been wondering for a while whether I could build a decent large language model. While the big corporations keep chasing bigger and bigger models, I am going small!

As a result, I set out to **pretrain** a **300M Llama model** under the following restrictions:

1. The overall budget is \$500.
2. The LLM must be pretrained from scratch with a fully open-source dataset and model.
3. Finetuning an existing model, or using another LLM such as GPT-4 to generate training data, is not allowed.

## Model Details

This project is heavily based on [TinyLlama](https://github.com/jzhang38/TinyLlama), an awesome open-source project aimed at **pretraining a 1.1B Llama model on 3T tokens**.

This project is a work in progress. So far, I have spent \$280 on compute (4 x Nvidia RTX 4090 on [Vast.ai](https://vast.ai)) and \$3 on AWS S3 storage after 4 days of training the **300M Llama model** on **50B tokens**.

I modified [TinyLlama](https://github.com/jzhang38/TinyLlama) to support the following features (I will release my forked version of the source code after some cleanup):

1. Pretrain a smaller, 300M model on [SlimPajama](https://huggingface.co/datasets/cerebras/slimpajama-627b).
2. Removed [Starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata) so that my model can focus on [SlimPajama](https://huggingface.co/datasets/cerebras/slimpajama-627b). This also means my model probably cannot do coding without fine-tuning.
3. Added the ability to process and tokenize [SlimPajama](https://huggingface.co/datasets/cerebras/slimpajama-627b) while downloading the data; the original setup only works with pre-downloaded data. This turned out to be a good time-saver, because downloading 800G+ of data over a non-commercial Internet connection is very slow and processing all of the [SlimPajama](https://huggingface.co/datasets/cerebras/slimpajama-627b) data also takes time (see the sketch after this list).
4. Various helper scripts and Python code, such as code for uploading pretrained checkpoints to the Hugging Face hub.
5. Bug fixes.
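
To illustrate feature 3, here is a minimal, hypothetical sketch of the streaming-plus-tokenizing idea using the Hugging Face `datasets` library. My actual fork modifies TinyLlama's lit-gpt data pipeline, so this illustrates the approach rather than the exact code I run:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream SlimPajama so tokenization starts while the data is still
# downloading, instead of waiting for the full 800G+ download to finish.
dataset = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-step-50K-105b")

for i, example in enumerate(dataset):
    token_ids = tokenizer(example["text"]).input_ids
    # ... pack token_ids into fixed-length training blocks and write to disk ...
    if i >= 2:  # stop early in this demo; the full stream covers 627B tokens
        break
```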

Here are the major model configurations, based on the [TinyLlama](https://github.com/jzhang38/TinyLlama) settings:

```
block_size=2048,
vocab_size=32000,
padding_multiple=64,
n_layer=12,
n_head=16,
n_embd=1024,
rotary_percentage=1.0,
parallel_residual=False,
bias=False,
_norm_class="FusedRMSNorm",
norm_eps=1e-5, # Llama 2 uses 1e-5; Llama 1 uses 1e-6
_mlp_class="LLaMAMLP",
intermediate_size=5632,
n_query_groups=4,
```
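
For readers more familiar with the `transformers` naming, the lit-gpt style settings above map roughly onto the following `LlamaConfig`. This is an unofficial sketch of the correspondence, not the exact config file exported with the checkpoint:

```python
from transformers import LlamaConfig

# Rough transformers equivalent of the lit-gpt settings above
# (unofficial mapping; the exported checkpoint config may differ).
config = LlamaConfig(
    vocab_size=32000,               # vocab_size
    max_position_embeddings=2048,   # block_size
    hidden_size=1024,               # n_embd
    num_hidden_layers=12,           # n_layer
    num_attention_heads=16,         # n_head
    num_key_value_heads=4,          # n_query_groups (grouped-query attention)
    intermediate_size=5632,         # LLaMAMLP intermediate size
    rms_norm_eps=1e-5,              # norm_eps
)
print(config)
```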

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** keeeeenw
- **Funded by:** myself, for under \$500
- **Model type:** 300M Llama model
- **Language(s) (NLP):** EN
- **License:** Apache License 2.0
<!-- **Finetuned from model [optional]:** [More Information Needed] -->

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/keeeeenw/MicroLlama
<!-- **Paper [optional]:** [More Information Needed] -->
<!-- **Demo [optional]:** [More Information Needed] -->

## Uses

1. Install dependencies
```bash
pip install transformers
pip install torch
```
2. Run code!

```python
import torch
import transformers
from transformers import AutoTokenizer, LlamaForCausalLM

def generate_text(prompt, model, tokenizer):
    # build a text-generation pipeline around the model and tokenizer
    text_generator = transformers.pipeline(
        "text-generation",
        model=model,
        torch_dtype=torch.float16,
        device_map="auto",
        tokenizer=tokenizer
    )

    formatted_prompt = f"Question: {prompt} Answer:"

    sequences = text_generator(
        formatted_prompt,
        do_sample=True,
        top_k=5,
        top_p=0.9,
        num_return_sequences=1,
        repetition_penalty=1.5,
        max_new_tokens=128,
    )

    for seq in sequences:
        print(f"Result: {seq['generated_text']}")

# use the same tokenizer as TinyLlama
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-step-50K-105b")

# load the model from the Hugging Face hub
model = LlamaForCausalLM.from_pretrained("keeeeenw/MicroLlama")

# question from https://www.reddit.com/r/LocalLLaMA/comments/13zz8y5/what_questions_do_you_ask_llms_to_check_their/
generate_text("Please provide me instructions on how to steal an egg from my chicken.", model, tokenizer)
```

## Evaluation

I ran the experiments using the standard [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) setup. Following the same setup as [TinyLlama](https://github.com/jzhang38/TinyLlama), I used **acc_norm** for all datasets except **winogrande** and **boolq**, which use **acc** as the metric.

1. **[keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama)** shows the evaluation results for my **300M Llama model trained on 50B tokens**.
2. **[google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased)** is the baseline, because it is one of the most popular small language models and has a similar parameter count of **336M**.
3. **[PY007/TinyLlama-1.1B-Chat-v0.1](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.1)**: as a sanity check, I evaluated one of the [TinyLlama](https://github.com/jzhang38/TinyLlama) models to validate my [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) setup. These numbers exactly match the ones reported by [TinyLlama](https://github.com/jzhang38/TinyLlama).
4. **TinyLlama-1.1B-intermediate-step-1431k-3T** is the evaluation result for the best model created and reported by [TinyLlama](https://github.com/jzhang38/TinyLlama).

| Model | Pretrain Tokens | HellaSwag | Obqa | WinoGrande | ARC_c | ARC_e | boolq | piqa | avg |
|-------------------------------------------|------|-------|-------|-------|-------|-------|-------|-------|-------|
| keeeeenw/MicroLlama                        | 50B  | 34.30 | 30.60 | 51.54 | 23.29 | 39.06 | 53.15 | 64.58 | 42.36 |
| google-bert/bert-large-uncased             | N/A  | 24.53 | 26.20 | 49.80 | 25.68 | 25.08 | 40.86 | 47.66 | 34.26 |
| PY007/TinyLlama-1.1B-Chat-v0.1             | 503B | 53.81 | 32.20 | 55.01 | 28.67 | 49.62 | 58.04 | 69.64 | 49.57 |
| TinyLlama-1.1B-intermediate-step-1431k-3T  | 3T   | 59.20 | 36.00 | 59.12 | 30.12 | 55.25 | 57.83 | 73.29 | 52.99 |

To reproduce my numbers, please install [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and run the following command:

```bash
lm_eval \
    --model hf \
    --model_args pretrained=keeeeenw/MicroLlama,dtype="float",tokenizer=TinyLlama/TinyLlama-1.1B-step-50K-105b \
    --tasks hellaswag,openbookqa,winogrande,arc_easy,arc_challenge,boolq,piqa \
    --device cuda:0 \
    --batch_size 64
```
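
If you prefer the Python API over the CLI, the same evaluation can be expressed roughly as follows. This sketch assumes a recent lm-eval (>= 0.4), where `simple_evaluate` is exposed; the exact results layout can vary between versions:

```python
import lm_eval

# Python-API version of the CLI command above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=keeeeenw/MicroLlama,dtype=float,tokenizer=TinyLlama/TinyLlama-1.1B-step-50K-105b",
    tasks=["hellaswag", "openbookqa", "winogrande", "arc_easy",
           "arc_challenge", "boolq", "piqa"],
    device="cuda:0",
    batch_size=64,
)
print(results["results"])
```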

#### Observations

1. Because [keeeeenw/MicroLlama](https://huggingface.co/keeeeenw/MicroLlama) is much smaller than [TinyLlama](https://github.com/jzhang38/TinyLlama), our model does not achieve the same impressive results, but the numbers are closer than I expected.
2. Our model outperforms [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased), which is actually slightly larger. The only dataset on which [google-bert/bert-large-uncased](https://huggingface.co/google-bert/bert-large-uncased) outperformed our model is ARC_c (arc_challenge). I will provide more analysis in a future study.

Based on the evaluation above, our model should be a good starting point for fine-tuning tasks that are typically performed with the BERT family of models. Some of these tasks may include:

1. [sentence transformers](https://huggingface.co/sentence-transformers) (see the sketch after this list)
2. [bertscore](https://huggingface.co/spaces/evaluate-metric/bertscore)
3. A lightweight chatbot after some finetuning.
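
As an example of direction 1, here is a hypothetical, untested sketch of wrapping MicroLlama as a sentence-embedding backbone with the `sentence-transformers` library before fine-tuning. Note that a decoder-only model typically needs a padding token set before batched encoding works:

```python
from sentence_transformers import SentenceTransformer, models

# Hypothetical, untested sketch: use MicroLlama as the word-embedding
# backbone of a sentence-transformers model prior to fine-tuning.
word = models.Transformer(
    "keeeeenw/MicroLlama",
    max_seq_length=256,
    tokenizer_name_or_path="TinyLlama/TinyLlama-1.1B-step-50K-105b",
)
# Llama tokenizers ship without a pad token; reuse EOS so batching works.
word.tokenizer.pad_token = word.tokenizer.eos_token
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode="mean")
model = SentenceTransformer(modules=[word, pooling])

print(model.encode(["MicroLlama is a 300M Llama model."]).shape)
```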

## Citation

This repository is built upon [TinyLlama](https://github.com/jzhang38/TinyLlama), which is based on [lit-gpt](https://github.com/Lightning-AI/lit-gpt) and [flash-attention](https://github.com/Dao-AILab/flash-attention).

```
@misc{zhang2024tinyllama,
    title={TinyLlama: An Open-Source Small Language Model},
    author={Peiyuan Zhang and Guangtao Zeng and Tianduo Wang and Wei Lu},
    year={2024},
    eprint={2401.02385},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
@online{lit-gpt,
    author = {Lightning AI},
    title = {Lit-GPT},
    url = {https://github.com/Lightning-AI/lit-gpt},
    year = {2023},
}
@article{dao2023flashattention2,
    title = {Flash{A}ttention-2: Faster Attention with Better Parallelism and Work Partitioning},
    author = {Dao, Tri},
    year = {2023}
}
```