---
language:
- zh
- en
tags:
- chatglm
- glm
- onnx
- onnxruntime
---

# ChatGLM-6B + ONNX

This model is exported from [ChatGLM-6b](https://huggingface.co./THUDM/chatglm-6b) with int8 quantization and optimized for [ONNXRuntime](https://onnxruntime.ai/) inference. The export code is available in [this repo](https://github.com/K024/chatglm-q).
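
For reference, dynamic int8 (u8s8) quantization of an ONNX graph generally looks like the sketch below using `onnxruntime.quantization`. This is only an illustration of the technique; the actual export and quantization pipeline for this model lives in the linked repo, and the file names here are placeholders.

```python
# Sketch only: the real export/quantization script is in the chatglm-q repo.
# The file names below are placeholders, not the actual artifacts in this repo.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="chatglm-6b-fp32.onnx",   # full-precision export (placeholder name)
    model_output="chatglm-6b-u8s8.onnx",  # int8 weights, dynamically quantized uint8 activations
    weight_type=QuantType.QInt8,          # s8 weights -> MatMulInteger (u8s8) at runtime
)
```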

Inference code for ONNXRuntime is uploaded alongside the model. Install the requirements and run `streamlit run web-ui.py` to start chatting. Because the `MatMulInteger` (u8s8 data type) and `DynamicQuantizeLinear` operators are currently only supported on CPU in ONNXRuntime, inference is CPU-only for now.

## Usage

```sh
git lfs clone https://huggingface.co./K024/ChatGLM-6b-onnx-u8s8
cd ChatGLM-6b-onnx-u8s8
pip install -r requirements.txt
streamlit run web-ui.py
```
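
The web UI wraps an ONNX Runtime session; to drive the model programmatically instead, a session can be created along the lines of the sketch below. The model file name is an assumption, so check the actual file in this repo and inspect the graph's input/output names before wiring up a tokenizer and generation loop.

```python
# Minimal sketch: create an ONNX Runtime session pinned to CPU, since the
# MatMulInteger (u8s8) and DynamicQuantizeLinear operators are CPU-only here.
# "chatglm-6b-u8s8.onnx" is a placeholder; use the actual file name from this repo.
import onnxruntime as ort

session = ort.InferenceSession(
    "chatglm-6b-u8s8.onnx",
    providers=["CPUExecutionProvider"],
)

# Inspect the graph interface before building the generation loop.
print([i.name for i in session.get_inputs()])
print([o.name for o in session.get_outputs()])
```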

The code is released under the MIT license.

Model weights are released under the same license as ChatGLM-6b; see the [MODEL LICENSE](https://huggingface.co./THUDM/chatglm-6b/blob/main/MODEL_LICENSE).