---
license: apache-2.0
---
This model was quantized and exported to MLX format using [GPTQModel](https://github.com/ModelCloud/GPTQModel).

## How to run this model

```shell
# install mlx-lm (installs the MLX runtime as a dependency)
pip install mlx_lm
```

Then load the model and generate a completion through the tokenizer's chat template:
```python
from mlx_lm import load, generate

# download and load the MLX model and tokenizer from the Hugging Face Hub
mlx_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1"
mlx_model, tokenizer = load(mlx_path)
prompt = "The capital of France is"

# format the prompt with the model's chat template
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
```
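For interactive use, tokens can also be streamed as they are generated. Below is a minimal sketch using mlx_lm's `stream_generate`; the shape of the yielded values has changed across mlx_lm releases (recent versions yield response objects with a `.text` field, which is assumed here), so adjust for your installed version:

```python
from mlx_lm import load, stream_generate

mlx_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1"
model, tokenizer = load(mlx_path)

messages = [{"role": "user", "content": "The capital of France is"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# print each chunk as it arrives; `.text` assumes a recent mlx_lm release
for response in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(response.text, end="", flush=True)
print()
```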

### Export a GPTQ model to MLX
```shell
# install gptqmodel with the mlx extra
pip install gptqmodel[mlx] --no-build-isolation
```

```python
from gptqmodel import GPTQModel

# GPTQ-quantized source model on the Hugging Face Hub
gptq_model_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1"
# local output directory for the converted MLX weights
mlx_path = "./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx"

# convert the GPTQ checkpoint to MLX format
GPTQModel.export(gptq_model_path, mlx_path, "mlx")
```
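For completeness, the GPTQ checkpoint being exported above is itself produced with GPTQModel's quantize API. Below is a minimal sketch of that step, assuming the standard GPTQModel workflow, `tiiuae/Falcon3-10B-Instruct` as the base model, and a small C4 calibration slice; the actual calibration data and `QuantizeConfig` used for this model are not documented here:

```python
from datasets import load_dataset
from gptqmodel import GPTQModel, QuantizeConfig

# calibration text; a C4 slice is a common choice, not necessarily what this model used
calibration = load_dataset(
    "allenai/c4",
    data_files="en/c4-train.00001-of-01024.json.gz",
    split="train",
).select(range(1024))["text"]

# 4-bit GPTQ; group_size=128 is a typical default, assumed rather than confirmed
quant_config = QuantizeConfig(bits=4, group_size=128)

model = GPTQModel.load("tiiuae/Falcon3-10B-Instruct", quant_config)
model.quantize(calibration, batch_size=1)
model.save("./Falcon3-10B-Instruct-gptqmodel-4bit")
```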