|
--- |
|
base_model: databricks/dbrx-instruct |
|
--- |
|
# dbrx_moe_fp8_test |
|
## Introduction
|
This model was created by applying [Quark](https://quark.docs.amd.com/latest/index.html) quantization to databricks/dbrx-instruct, using calibration samples from the Pile dataset.
|
## Quantization Strategy
|
- ***Quantized Layers***: All linear layers excluding "lm_head" and "router.layer" |
|
- ***Weight***: FP8 symmetric per-tensor |
|
- ***Activation***: FP8 symmetric per-tensor |
|
- ***KV Cache***: FP8 symmetric per-tensor |
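
The per-tensor symmetric scheme above can be sketched in plain NumPy. This is a simplified simulation only: it assumes OCP FP8-E4M3 with a maximum representable magnitude of 448, and it omits the rounding of values onto the non-uniform FP8 grid that a real implementation (such as Quark's) performs.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # assumed max magnitude of OCP FP8-E4M3

def quantize_per_tensor_symmetric(w: np.ndarray):
    # A single scale for the whole tensor, chosen so the largest
    # magnitude maps to the FP8 range edge (symmetric: no zero point).
    scale = np.abs(w).max() / FP8_E4M3_MAX
    q = np.clip(w / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale

w = np.array([-2.0, 0.5, 1.0, 4.0], dtype=np.float32)
q, scale = quantize_per_tensor_symmetric(w)
w_hat = dequantize(q, scale)
# Exact round-trip here only because rounding to the FP8 grid is not modeled.
print(np.allclose(w, w_hat))
```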
|
## Quick Start
|
1. [Download and install Quark](https://quark.docs.amd.com/latest/install.html) |
|
2. Run the quantization script in the example folder using the following command line: |
|
```sh
export MODEL_DIR="[local model checkpoint folder]"  # or databricks/dbrx-instruct

# single GPU
python3 quantize_quark.py \
    --model_dir $MODEL_DIR \
    --output_dir dbrx_moe_fp8_test \
    --quant_scheme w_fp8_a_fp8 \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --model_export quark_safetensors \
    --no_weight_matrix_merge

# If the model is too large for a single GPU, use multiple GPUs instead.
python3 quantize_quark.py \
    --model_dir $MODEL_DIR \
    --output_dir dbrx_moe_fp8_test \
    --quant_scheme w_fp8_a_fp8 \
    --kv_cache_dtype fp8 \
    --num_calib_data 128 \
    --multi_gpu \
    --model_export quark_safetensors \
    --no_weight_matrix_merge
```
|
## Deployment |
|
Quark has its own export format, which allows FP8-quantized models to be deployed efficiently using the vLLM-compatible backend.
|
In the dbrx-instruct model, each `transformer.blocks.*.ffn.experts` module can be divided into experts-num MLPs. If the `w1` weight of one of these MLPs has shape `[dim1, dim2]`, then `transformer.blocks.*.ffn.experts.mlp.w1.weight` in the exported safetensors file has shape `[dim1 * experts-num, dim2]`, and the shapes of `transformer.blocks.*.ffn.experts.mlp.w1.weight_scale` and `transformer.blocks.*.ffn.experts.mlp.w1.input_scale` are `[dim1]`. The same applies to the `w2` and `v1` tensors of `transformer.blocks.*.ffn.experts.mlp`.
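
The fused weight layout described above can be illustrated with a small NumPy sketch. The dimensions here are hypothetical toy values chosen for readability; the real dbrx-instruct tensors are far larger.

```python
import numpy as np

# Hypothetical toy dimensions for illustration only.
experts_num, dim1, dim2 = 4, 8, 16

# The exported safetensors file stores all experts' w1 weights fused
# along the first axis: shape [dim1 * experts_num, dim2].
fused_w1 = np.random.rand(dim1 * experts_num, dim2).astype(np.float32)

# Recover the per-expert view by reshaping:
# one [dim1, dim2] matrix per expert.
per_expert_w1 = fused_w1.reshape(experts_num, dim1, dim2)

print(fused_w1.shape)       # (32, 16)
print(per_expert_w1.shape)  # (4, 8, 16)
```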
|
## Evaluation |
|
Quark currently uses perplexity (PPL) as the evaluation metric for accuracy loss before and after quantization. The specific PPL algorithm can be found in quantize_quark.py.
|
The quantization evaluation was conducted in pseudo-quantization mode, which may differ slightly from the accuracy of actual quantized inference. These results are provided for reference only.
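
As a rough illustration, perplexity is conventionally computed as the exponential of the mean negative log-likelihood over next-token predictions. The sketch below assumes that convention; the exact procedure used here lives in quantize_quark.py.

```python
import numpy as np

def perplexity(logits: np.ndarray, targets: np.ndarray) -> float:
    # logits: [num_tokens, vocab_size]; targets: [num_tokens]
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerically stable softmax
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]
    return float(np.exp(nll.mean()))

# Toy check: a model that predicts uniformly over a 4-token vocabulary
# has perplexity equal to the vocabulary size.
print(round(perplexity(np.zeros((10, 4)), np.zeros(10, dtype=int)), 4))  # 4.0
```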
|
#### Evaluation scores |
|
| Benchmark | dbrx-instruct | dbrx_moe_fp8_test (this model) |
|---|---|---|
| Perplexity-wikitext2 | 4.2275 | 4.3033 |
|
|
|
#### License |
|
Modifications copyright (c) 2024 Advanced Micro Devices, Inc. All rights reserved.
|
|
|
Licensed under the Apache License, Version 2.0 (the "License"); |
|
you may not use this file except in compliance with the License. |
|
You may obtain a copy of the License at |
|
|
|
http://www.apache.org/licenses/LICENSE-2.0 |
|
|
|
Unless required by applicable law or agreed to in writing, software |
|
distributed under the License is distributed on an "AS IS" BASIS, |
|
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
|
See the License for the specific language governing permissions and |
|
limitations under the License. |