---
language:
- en
pipeline_tag: text-generation
license: llama2
---

# Phi-3-medium-128k-instruct-quantized.w4a16

## Model Overview
- **Model Architecture:** Phi-3
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
  - **Weight quantization:** INT4
- **Intended Use Cases:** Intended for commercial and research use in English. Similarly to [Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct), this model is intended for assistant-like chat.
- **Out-of-scope:** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English.
- **Release Date:** 7/11/2024
- **Version:** 1.0
- **License(s):** [Llama 2](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/LICENSE.txt)
- **Model Developers:** Neural Magic

Quantized version of [Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct).
It achieves an average score of 72.39 on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) benchmark (version 1), whereas the unquantized model achieves 74.46.

### Model Optimizations

This model was obtained by quantizing the weights of [Phi-3-medium-4k-instruct](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) to the INT4 data type.
This optimization reduces the number of bits per parameter from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.

Only the weights of the linear operators within transformer blocks are quantized. Symmetric group-wise quantization is applied, in which a linear scale per group maps between the INT4 and floating-point representations of the quantized weights.
[AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) is used for quantization, with a 1% damping factor, a group size of 128, and 512 sequences sampled from [Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
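
To make the scheme concrete, the sketch below shows what symmetric group-wise INT4 quantization of a weight matrix looks like in principle. This is an illustrative toy, not AutoGPTQ's actual implementation (which additionally uses calibration data and error compensation); the function name and shapes here are made up for the example.

```python
import torch

def quantize_w4_groupwise(w: torch.Tensor, group_size: int = 128):
    """Toy symmetric group-wise INT4 quantization (illustrative only)."""
    q_max = 7  # symmetric INT4 levels span [-8, 7]
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # One linear scale per group: the largest magnitude maps to the largest level.
    scales = groups.abs().amax(dim=-1, keepdim=True) / q_max
    q = torch.clamp(torch.round(groups / scales), -q_max - 1, q_max)
    w_hat = (q * scales).reshape(out_features, in_features)  # dequantized weights
    return q.to(torch.int8), scales, w_hat

# Example: quantization error on a random linear weight.
w = torch.randn(4096, 4096)
q, scales, w_hat = quantize_w4_groupwise(w)
print(f"max abs error: {(w - w_hat).abs().max().item():.4f}")
```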

## Deployment

### Use with vLLM

This model can be deployed efficiently using the [vLLM](https://docs.vllm.ai/en/latest/) backend, as shown in the example below.

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_id = "neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16"

sampling_params = SamplingParams(temperature=0.6, top_p=0.9, max_tokens=256)

tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you? Please respond in pirate speak."},
]

prompts = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

llm = LLM(model=model_id, tensor_parallel_size=2)

outputs = llm.generate(prompts, sampling_params)

generated_text = outputs[0].outputs[0].text
print(generated_text)
```

vLLM also supports OpenAI-compatible serving. See the [documentation](https://docs.vllm.ai/en/latest/) for more details.
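
For example, once an OpenAI-compatible server is running (e.g. via `python -m vllm.entrypoints.openai.api_server --model neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16 --tensor-parallel-size 2`), it can be queried with the standard `openai` client. This is a minimal sketch assuming the server's default port (8000) and a recent `openai` package; consult the vLLM documentation for the options of your installed version.

```python
from openai import OpenAI

# vLLM's server does not require a real API key by default; any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.chat.completions.create(
    model="neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16",
    messages=[
        {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
        {"role": "user", "content": "Who are you? Please respond in pirate speak."},
    ],
    max_tokens=256,
    temperature=0.6,
)
print(completion.choices[0].message.content)
```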

### Use with transformers

This model is supported by Transformers through its integration with the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) data format (which typically requires the `optimum` and `auto-gptq` packages to be installed).
The following example shows how the model can be used with the `generate()` function.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you? Please respond in pirate speak."},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on either the generic EOS token or Phi-3's end-of-turn token <|end|>.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|end|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Creation

This model was created by applying the [AutoGPTQ](https://github.com/AutoGPTQ/AutoGPTQ) library, as presented in the code snippet below.
Although AutoGPTQ was used for this particular model, Neural Magic is transitioning to [llm-compressor](https://github.com/vllm-project/llm-compressor), which supports several quantization schemes and models not supported by AutoGPTQ.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from datasets import load_dataset

model_id = "microsoft/Phi-3-medium-4k-instruct"

num_samples = 512
max_seq_len = 4096

tokenizer = AutoTokenizer.from_pretrained(model_id)

preprocess_fn = lambda example: {"text": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n{text}".format_map(example)}

dataset_name = "neuralmagic/LLM_compression_calibration"
dataset = load_dataset(dataset_name, split="train")
ds = dataset.shuffle().select(range(num_samples))
ds = ds.map(preprocess_fn)

examples = [
    tokenizer(
        example["text"], padding=False, max_length=max_seq_len, truncation=True,
    ) for example in ds
]

quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    model_file_base_name="model",
    damp_percent=0.01,
)

model = AutoGPTQForCausalLM.from_pretrained(
    model_id,
    quantize_config,
    device_map="auto",
)

model.quantize(examples)
model.save_pretrained("Phi-3-medium-128k-instruct-quantized.w4a16")
```
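
For reference, a roughly equivalent W4A16 quantization with [llm-compressor](https://github.com/vllm-project/llm-compressor) might look like the sketch below. It follows llm-compressor's published one-shot API, but it was not the script used to produce this model, and the dataset and parameters shown are illustrative assumptions; the exact API may differ across versions.

```python
from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# GPTQ recipe: INT4 weights with 16-bit activations; skip the lm_head (illustrative).
recipe = GPTQModifier(scheme="W4A16", targets="Linear", ignore=["lm_head"])

oneshot(
    model="microsoft/Phi-3-medium-4k-instruct",
    dataset="open_platypus",  # calibration data; an assumption for this sketch
    recipe=recipe,
    output_dir="Phi-3-medium-128k-instruct-quantized.w4a16",
    max_seq_length=4096,
    num_calibration_samples=512,
)
```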

## Evaluation

The model was evaluated on the [OpenLLM](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard) leaderboard tasks (version 1) with the [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/383bbd54bc621086e05aa1b030d8d4d5635b25e6) (commit 383bbd54bc621086e05aa1b030d8d4d5635b25e6) and the [vLLM](https://docs.vllm.ai/en/stable/) engine, using the following command:
```
lm_eval \
  --model vllm \
  --model_args pretrained="neuralmagic/Phi-3-medium-128k-instruct-quantized.w4a16",dtype=auto,tensor_parallel_size=2,gpu_memory_utilization=0.4,add_bos_token=True,max_model_len=4096,trust_remote_code=True \
  --tasks openllm \
  --batch_size auto
```

### Accuracy

#### Open LLM Leaderboard evaluation scores

The Recovery column reports the score of the quantized model as a percentage of the score of the unquantized baseline.

<table>
  <tr>
   <td><strong>Benchmark</strong>
   </td>
   <td><strong>Phi-3-medium-4k-instruct</strong>
   </td>
   <td><strong>Phi-3-medium-128k-instruct-quantized.w4a16 (this model)</strong>
   </td>
   <td><strong>Recovery</strong>
   </td>
  </tr>
  <tr>
   <td>MMLU (5-shot)
   </td>
   <td>75.63
   </td>
   <td>75.54
   </td>
   <td>99.89%
   </td>
  </tr>
  <tr>
   <td>ARC Challenge (25-shot)
   </td>
   <td>67.57
   </td>
   <td>67.06
   </td>
   <td>99.25%
   </td>
  </tr>
  <tr>
   <td>GSM-8K (5-shot, strict-match)
   </td>
   <td>83.32
   </td>
   <td>82.18
   </td>
   <td>98.64%
   </td>
  </tr>
  <tr>
   <td>Hellaswag (10-shot)
   </td>
   <td>84.36
   </td>
   <td>84.04
   </td>
   <td>99.62%
   </td>
  </tr>
  <tr>
   <td>Winogrande (5-shot)
   </td>
   <td>75.45
   </td>
   <td>72.85
   </td>
   <td>96.55%
   </td>
  </tr>
  <tr>
   <td>TruthfulQA (0-shot)
   </td>
   <td>53.54
   </td>
   <td>52.64
   </td>
   <td>98.31%
   </td>
  </tr>
  <tr>
   <td><strong>Average</strong>
   </td>
   <td><strong>74.46</strong>
   </td>
   <td><strong>72.39</strong>
   </td>
   <td><strong>97.21%</strong>
   </td>
  </tr>
</table>