cicdatopea committed · Commit 3baead0 · verified · 1 Parent(s): 380843b

Update README.md

Files changed (1)
  1. README.md +1 -0
README.md CHANGED
@@ -7,6 +7,7 @@ base_model:
 ## Model Details

 This model is an int4 model with group_size 128 and symmetric quantization of [deepseek-ai/DeepSeek-V2.5-1210](https://huggingface.co/deepseek-ai/DeepSeek-V2.5-1210) generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm. Load the model with `revision="6d3d2cf"` to use the AutoGPTQ format. **Please note that loading the model in Transformers can be quite slow. Consider using an alternative serving framework for better performance.**
+For other serving frameworks, you may need the AutoGPTQ format; you can obtain it by running `git clone https://huggingface.co/OPEA/DeepSeek-V2.5-1210-int4-sym-inc && cd DeepSeek-V2.5-1210-int4-sym-inc && git checkout 6d3d2cf`. Please follow the license of the original model.

 ## How To Use

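
For reference, below is a minimal sketch (not part of the original README) of loading this checkpoint in Transformers with the revision mentioned in the diff. It assumes the repository id `OPEA/DeepSeek-V2.5-1210-int4-sym-inc` taken from the clone URL above, and that a GPTQ backend (e.g. `auto-gptq` or `gptqmodel`) plus a recent `transformers` are installed; device and dtype settings are illustrative choices, not values from the README.

```python
# Minimal, assumption-laden sketch of loading the int4 AutoGPTQ-format checkpoint
# in Transformers. The repo id and revision come from the diff above; everything
# else (dtype, device_map, prompt) is an illustrative assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OPEA/DeepSeek-V2.5-1210-int4-sym-inc"
revision = "6d3d2cf"  # AutoGPTQ-format revision referenced in the README

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,  # DeepSeek-V2.5 ships custom modeling code
)

prompt = "Briefly explain int4 symmetric quantization."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

As the README notes, this Transformers path can be quite slow for a model of this size; the snippet is mainly useful for a quick functional check before moving to a dedicated serving framework.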