---
license: apache-2.0
---

This model was quantized with [GPTQModel](https://github.com/ModelCloud/GPTQModel) and exported to the MLX format.

## How to run this model

```shell
# install mlx-lm
pip install mlx_lm
```

```python
from mlx_lm import load, generate

mlx_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-mlx-v1"
mlx_model, tokenizer = load(mlx_path)

prompt = "The capital of France is"
messages = [{"role": "user", "content": prompt}]

# apply the model's chat template; returns a tokenized prompt
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True
)

text = generate(mlx_model, tokenizer, prompt=prompt, verbose=True)
```

### Export GPTQ to MLX

```shell
# install gptqmodel with mlx support
pip install gptqmodel[mlx] --no-build-isolation
```

```python
from gptqmodel import GPTQModel

# path of the GPTQ-quantized source model on the Hugging Face Hub
gptq_model_path = "ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1"
# local directory to write the converted MLX model to
mlx_path = "./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx"

# convert and export the GPTQ model to MLX format
GPTQModel.export(gptq_model_path, mlx_path, "mlx")
```
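
Once the export finishes, the converted model can be loaded straight from the local output directory to sanity-check the conversion. This is a minimal sketch assuming the export step above completed and wrote to `mlx_path`; it mirrors the loading example earlier, only pointed at the local path instead of the Hub:

```python
from mlx_lm import load, generate

# load the freshly exported MLX model from the local export directory
# (assumes the export step above wrote to this path)
mlx_path = "./vortex/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1-mlx"
model, tokenizer = load(mlx_path)

messages = [{"role": "user", "content": "The capital of France is"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
text = generate(model, tokenizer, prompt=prompt, verbose=True)
```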