gugarosa committed on
Commit f27cd93
1 Parent(s): 80c0ba9

Update README.md

Files changed (1)
  1. README.md +25 -4
README.md CHANGED
@@ -75,13 +75,34 @@ def print_prime(n):
```
where the model generates the text after the comments.

- **Notes**
+ **Notes:**
* Phi-1.5 is intended for research purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks is out of the scope of this research project. As a result, Phi-1.5 has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitations section of this document for more details.
* If you are using `transformers>=4.36.0`, always load the model with `trust_remote_code=True` to prevent side effects.

## Sample Code

+ There are four execution modes:
+
+ 1. FP16 / Flash-Attention / CUDA:
+ ```python
+ model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, device_map="cuda", trust_remote_code=True)
+ ```
+ 2. FP16 / CUDA:
+ ```python
+ model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype="auto", device_map="cuda", trust_remote_code=True)
+ ```
+ 3. FP32 / CUDA:
+ ```python
+ model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype=torch.float32, device_map="cuda", trust_remote_code=True)
+ ```
+ 4. FP32 / CPU:
+ ```python
+ model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype=torch.float32, device_map="cpu", trust_remote_code=True)
+ ```
+
+ To ensure maximum compatibility, we recommend using the second execution mode (FP16 / CUDA), as follows:
+
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -91,8 +112,7 @@ torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

- inputs = tokenizer('''```python
- def print_prime(n):
+ inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)
@@ -102,9 +122,10 @@ text = tokenizer.batch_decode(outputs)[0]
print(text)
```

- **Remark.** In the generation function, our model currently does not support beam search (`num_beams > 1`).
+ **Remark:** In the generation function, our model currently does not support beam search (`num_beams > 1`).
Furthermore, in the forward pass of the model, we currently do not support outputting hidden states or attention values, or using custom input embeddings.

+
## Limitations of Phi-1.5

* Generate Inaccurate Code and Facts: The model often produces incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
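Since the four load calls added by this commit differ only in their keyword arguments, they can be wrapped in a small dispatch table. This is a minimal sketch using the commit's own arguments; the `load_phi` helper and its mode keys are hypothetical names, not part of the model card:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical dispatch table over the four execution modes listed in the commit.
MODES = {
    "fp16-flash-cuda": dict(torch_dtype="auto", flash_attn=True, flash_rotary=True, fused_dense=True, device_map="cuda"),
    "fp16-cuda": dict(torch_dtype="auto", device_map="cuda"),
    "fp32-cuda": dict(torch_dtype=torch.float32, device_map="cuda"),
    "fp32-cpu": dict(torch_dtype=torch.float32, device_map="cpu"),
}

def load_phi(mode: str = "fp16-cuda"):
    # trust_remote_code=True is required, per the note above, on transformers>=4.36.0.
    return AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", trust_remote_code=True, **MODES[mode])
```

For example, `model = load_phi("fp32-cpu")` on a machine without a GPU.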
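Putting the recommended FP16 / CUDA mode together with the updated prompt gives the following end-to-end run. A minimal sketch: the `model.generate(**inputs, max_length=200)` call is assumed from the surrounding README context (the diff shows only `text = tokenizer.batch_decode(outputs)[0]`), and decoding stays greedy because, per the remark above, beam search is not supported:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Allocate new tensors on the GPU by default, as in the README's sample code.
torch.set_default_device("cuda")

# FP16 / CUDA mode: torch_dtype="auto" loads the checkpoint's half-precision weights.
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5", trust_remote_code=True)

# Prompt format after this commit: plain code, with the Markdown fence removed.
inputs = tokenizer('''def print_prime(n):
   """
   Print all primes between 1 and n
   """''', return_tensors="pt", return_attention_mask=False)

# Assumed generation call (not shown in this diff); num_beams is left at its
# default of 1, since beam search is unsupported.
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```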