princeton-nlp committed
Commit e591e7b
Parent: b4f554d

Update README.md

Files changed (1): README.md (+46, -0)

README.md CHANGED
license: apache-2.0
---

**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: [https://github.com/princeton-nlp/LLM-Shearing](https://github.com/princeton-nlp/LLM-Shearing)
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)

---

Sheared-LLaMA-2.7B is a model pruned and further pre-trained from [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf). We dynamically load data from different domains of the [RedPajama dataset](https://github.com/togethercomputer/RedPajama-Data), using 0.4B tokens for pruning and 50B tokens for continued pre-training of the pruned model. The model can be loaded with the HuggingFace `transformers` library via:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")
```
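
For reference, here is a minimal generation sketch using the standard `transformers` API; the prompt and decoding settings below are illustrative and not part of the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pruned model and its LLaMA tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")
model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")

# Generate a short greedy continuation of an example prompt.
inputs = tokenizer("Structured pruning of large language models", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```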

- Smaller-scale than the source LLaMA2-7B model
- Same vocabulary as LLaMA1 and LLaMA2
- Derived with only 50B tokens by utilizing existing strong LLMs

## Downstream Tasks

We evaluate on an extensive set of downstream tasks, including reasoning, reading comprehension, language modeling, and knowledge-intensive tasks. Our Sheared-LLaMA models outperform existing open-source language models of comparable size.

| Model                  | # Pre-training Tokens | Average Performance |
| ---------------------- | --------------------- | ------------------- |
| LLaMA2-7B              | 2T                    | 64.6                |

**1.3B**

| Model                  | # Pre-training Tokens | Average Performance |
| ---------------------- | --------------------- | ------------------- |
| OPT-1.3B               | 300B                  | 48.2                |
| Pythia-1.4B            | 300B                  | 48.9                |
| Sheared-LLaMA-1.3B     | 50B                   | 51.0                |

**3B**

| Model                  | # Pre-training Tokens | Average Performance |
| ---------------------- | --------------------- | ------------------- |
| OPT-2.7B               | 300B                  | 51.4                |
| Pythia-2.8B            | 300B                  | 52.5                |
| INCITE-Base-3B         | 800B                  | 54.7                |
| Open-LLaMA-3B-v1       | 1T                    | 55.1                |
| Open-LLaMA-3B-v2       | 1T                    | 55.7                |
| **Sheared-LLaMA-2.7B** | **50B**               | **56.7**            |

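The averages above come from the benchmark suite described earlier. As a quick, hypothetical sanity check of the language-modeling side only, one can compute perplexity on a short text with `transformers`; this is not the evaluation protocol behind the tables.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")
model = AutoModelForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B")
model.eval()

# Perplexity of the model on a single example sentence (illustrative only).
text = "Large language models can be compressed with structured pruning."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing labels returns the mean token-level cross-entropy loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```
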
## Bibtex

```
@article{xia2023sheared,
  title={Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning},
  author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
  journal={arXiv preprint arXiv:2310.06694},
  year={2023}
}
```