Text Generation
Transformers
PyTorch
mpt
Composer
MosaicML
llm-foundry
custom_code
text-generation-inference
jfrankle committed on
Commit 26f3be4
1 Parent(s): 6a60d6b

Update README.md

Files changed (1)
  1. README.md +7 -3
README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- license: cc-by-nc-4.0
+ license: apache-2.0
  tags:
  - Composer
  - MosaicML
@@ -15,7 +15,7 @@ MPT-7B-StoryWriter-65k+ is a model designed to read and write fictional stories
  It was built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
  At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
  We demonstrate generations as long as 84k tokens on a single node of 8 A100-80GB GPUs in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
- * License: Creative Commons Attribution Non Commercial 4.0
+ * License: Apache 2.0

  This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.

@@ -25,7 +25,7 @@ May 5, 2023

  ## Model License

- Creative Commons Attribution Non Commercial 4.0
+ Apache 2.0

  ## Documentation

@@ -167,6 +167,10 @@ This model was finetuned by Alex Trott and the MosaicML NLP team

  If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).

+ ## Disclaimer
+
+ The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
+

  ## Citation

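The README context above notes that, thanks to ALiBi, the model can extrapolate beyond its 65k training context at inference time. As a minimal sketch of what that looks like in practice (not part of this commit): the repo id `mosaicml/mpt-7b-storywriter`, the tokenizer source, and the `max_seq_len` override below are assumptions drawn from the model card rather than from this diff.

```python
# Hedged sketch: loading the model described above with Hugging Face Transformers.
# The repo id, tokenizer source, and max_seq_len value are assumptions, not part of this commit.
import torch
import transformers

name = "mosaicml/mpt-7b-storywriter"  # assumed repo id for MPT-7B-StoryWriter-65k+

# The MPT architecture ships as custom_code, so trust_remote_code=True is required.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
# ALiBi lets the model extrapolate past the 65k training context; raise the limit if needed.
config.max_seq_len = 83968

model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)

inputs = tokenizer("Once upon a time,", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```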