Tags: Text Generation · Transformers · PyTorch · mpt · Composer · MosaicML · llm-foundry · custom_code · text-generation-inference
abhi-mosaic committed
Commit ee38fc3
Parent: 96a47ce

Update README.md

Files changed (1):
  README.md (+23 −5)
README.md CHANGED
@@ -42,15 +42,26 @@ It includes options for many training efficiency features such as [FlashAttentio
 
 ```python
 import transformers
-model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-storywriter', trust_remote_code=True)
+model = transformers.AutoModelForCausalLM.from_pretrained(
+  'mosaicml/mpt-7b-storywriter',
+  trust_remote_code=True
+)
 ```
 
 To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model with `attn_impl='triton'` and move the model to `bfloat16`:
 ```python
-config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b-storywriter', trust_remote_code=True)
+config = transformers.AutoConfig.from_pretrained(
+  'mosaicml/mpt-7b-storywriter',
+  trust_remote_code=True
+)
 config.attn_config['attn_impl'] = 'triton'
 
-model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-storywriter', config=config, torch_dtype=torch.bfloat16, trust_remote_code=True)
+model = transformers.AutoModelForCausalLM.from_pretrained(
+  'mosaicml/mpt-7b-storywriter',
+  config=config,
+  torch_dtype=torch.bfloat16,
+  trust_remote_code=True
+)
 model.to(device='cuda:0')
 ```
 
@@ -58,9 +69,16 @@ Although the model was trained with a sequence length of 2048 and finetuned with
 ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
 
 ```python
-config = transformers.AutoConfig.from_pretrained('mosaicml/mpt-7b-storywriter', trust_remote_code=True)
+config = transformers.AutoConfig.from_pretrained(
+  'mosaicml/mpt-7b-storywriter',
+  trust_remote_code=True
+)
 config.update({"max_seq_len": 83968})
-model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-7b-storywriter', config=config, trust_remote_code=True)
+model = transformers.AutoModelForCausalLM.from_pretrained(
+  'mosaicml/mpt-7b-storywriter',
+  config=config,
+  trust_remote_code=True
+)
 ```
 
 This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
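Taken together, the updated snippets configure the triton attention implementation, extend `max_seq_len` via ALiBi, and load the weights in `bfloat16`. One detail worth noting: the snippets reference `torch.bfloat16` but only import `transformers`, so `torch` must be imported as well. A minimal combined sketch follows; the repo name and settings are taken from the README above, and the rest is an assumption about how a user would wire them together:

```python
import torch
import transformers

name = 'mosaicml/mpt-7b-storywriter'

# Load the model config with remote code enabled, then opt into the
# triton FlashAttention kernel and a longer ALiBi sequence length.
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.update({"max_seq_len": 83968})

# Load the weights in bfloat16 and move the model to the GPU.
model = transformers.AutoModelForCausalLM.from_pretrained(
    name,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
model.to(device='cuda:0')
```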
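Since the README states the model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer, generation would pair that tokenizer with the model loaded above. A sketch of that step; the prompt and sampling parameters are illustrative and not part of the commit:

```python
# Pair the model with the gpt-neox-20b tokenizer named in the README.
tokenizer = transformers.AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

# Illustrative prompt and sampling settings (assumptions, not from the commit).
inputs = tokenizer('Once upon a time,', return_tensors='pt').to('cuda:0')
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```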