---
license: apache-2.0
---

# MPT-7B (Base)

MPT-7B (Base) is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com) and is **open-sourced for commercial use** (_Apache-2.0_).

MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.

These architectural changes include performance-optimized layer implementations, changes that provide greater training stability, and the elimination of context length limits by replacing positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and highly stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).

This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry), and was built by MosaicML's NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for pretraining, finetuning and/or deploying LLMs for inference.

### How is this model different?

* **Licensed for commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens, like [LLaMA](https://arxiv.org/abs/2302.13971), vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we trained on inputs of up to 65k tokens and can handle inputs of up to 84k tokens, vs. 2k-4k for other open-source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer)).
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry).

### Models finetuned off MPT-7B (Base):

* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
It is built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blog post](https://www.mosaicml.com/blog/mpt-7b).
  * License: _Apache-2.0_ (commercial use permitted)

* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
It is built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/sam-mosaic/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
  * License: _CC-By-SA-3.0_ (commercial use permitted)
  * [Online Demo](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)

* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
It is built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3), [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
  * License: _CC-By-NC-SA-4.0_ (non-commercial use only)
  * [Online Demo](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)

## Model Date

May 7, 2023

## Model License

Apache-2.0 (commercial use permitted)

## Documentation

* [Blog post](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: contact us via the [MosaicML Community Slack](https://join.slack.com/t/mosaicml-community/shared_invite/zt-w0tiddn9-WGTlRpfjcO9J5jyrMub1dg)

## How to Use

This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training, finetuning, evaluating, and deploying LLMs for inference.

Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.

```python
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)
```

To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention (`pip install flash_attn`), you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:

```python
import torch
import transformers

config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'

model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True)
model.to(device='cuda:0')
```

The model size is approximately 13 GB total in two shards.

This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
```
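
Putting the pieces together, here is a minimal generation sketch using the model and tokenizer loaded above; the prompt and sampling parameters are illustrative, not recommendations:

```python
import torch
import transformers
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b', trust_remote_code=True)

# Tokenize an example prompt and sample a short continuation.
inputs = tokenizer("Here is a recipe for vegan banana bread:\n", return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```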

## Model Description

The architecture is a modification of a standard decoder-only transformer.

The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings (see the sketch below)
* It does not use biases
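
For intuition about ALiBi: instead of adding positional embeddings, each attention head adds a fixed, distance-proportional penalty to its attention logits, which is what lets the model extrapolate to sequences longer than it was trained on. The following is a simplified sketch of the bias computation, not the llm-foundry implementation:

```python
import torch

def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Per-head slopes form a geometric sequence (1/2, 1/4, ..., 1/256 for 8 heads).
    slopes = torch.tensor([2.0 ** (-8.0 * (i + 1) / n_heads) for i in range(n_heads)])
    # distance[i, j] = j - i; clamp so only earlier (key) positions are penalized.
    positions = torch.arange(seq_len)
    distance = (positions[None, :] - positions[:, None]).clamp(max=0)
    # Bias of shape (n_heads, seq_len, seq_len), added to the attention logits.
    return slopes[:, None, None] * distance[None, :, :]
```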

| Hyperparameter | Value |
|-----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
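
As a rough sanity check, the parameter count can be recovered from the other hyperparameters with the usual back-of-the-envelope formula (this ignores LayerNorm and other small terms):

```python
# ~12 * n_layers * d_model^2 covers the attention and MLP weights;
# the embedding table adds vocab_size * d_model more.
d_model, n_layers, vocab_size = 4096, 32, 50432
transformer_params = 12 * n_layers * d_model ** 2
embedding_params = vocab_size * d_model
print(f"{(transformer_params + embedding_params) / 1e9:.2f}B")  # ~6.65B, consistent with 6.7B
```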

## Training Data

### Streaming Datasets

Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
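
In outline, usage looks something like the sketch below; the bucket and cache paths are hypothetical placeholders:

```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset

# Shards stream from object storage on demand and are cached locally,
# so training can start immediately and resume from any sample.
dataset = StreamingDataset(remote='s3://my-bucket/my-dataset',  # hypothetical path
                           local='/tmp/streaming-cache',
                           shuffle=True)
loader = DataLoader(dataset, batch_size=8)
```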

### Data Mix

The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:

| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.79 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia | 24.84 B | 0.04 | 40 B | 1.61 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30 B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B | 0.68 |

Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each training example was constructed by concatenating as many sequences from that dataset as were necessary to fill the 2048-token sequence length.
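
A simplified sketch of this packing scheme (the real pipeline operates on pre-tokenized streaming shards, but the idea is the same):

```python
from typing import Iterable, Iterator, List

def pack_sequences(examples: Iterable[List[int]], seq_len: int = 2048) -> Iterator[List[int]]:
    """Concatenate tokenized examples until each packed sample is exactly seq_len tokens."""
    buffer: List[int] = []
    for tokens in examples:
        buffer.extend(tokens)
        while len(buffer) >= seq_len:
            yield buffer[:seq_len]
            buffer = buffer[seq_len:]
```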

The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics, most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile);
(2) It applies consistent space delimitation, unlike the GPT-2 tokenizer, which tokenizes inconsistently depending on the presence of prefix spaces;
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
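
Point (3) is easy to observe directly. In this illustrative snippet, the run of indentation spaces comes back as a small number of dedicated whitespace tokens rather than one token per space (the exact token strings depend on the tokenizer version):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
# The 8-space indent should be captured by repeated-space tokens.
print(tokenizer.tokenize("def f():\n        return 1"))
```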

The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)); this increased model flop utilization (MFU) by up to four percentage points.

### Training Configuration

This model was trained on 440 A100-40GB GPUs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.

## Limitations and Biases

_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_

MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets, detailed above, including [C4](https://huggingface.co/datasets/c4), the colossal, cleaned version of Common Crawl's web crawl corpus.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased, or otherwise offensive outputs.

## Acknowledgements

We gratefully acknowledge the work of the researchers who created the [LLaMA series of models](https://arxiv.org/abs/2302.13971), which was the impetus for our efforts.
We also gratefully acknowledge the hard work of the [Together](https://www.together.xyz) team, which put together the RedPajama dataset.

## Citation

Please cite this model using the following format:

```
@online{MosaicML2023BLOGPOST,
    author  = {MosaicML NLP Team},
    title   = {MosaicML Foundation Series: MPT-7B},
    year    = {2023},
    url     = {https://www.mosaicml.com/blog/mpt-7b},
    note    = {Accessed: 2023-05-07},
    urldate = {2023-05-07}
}
```