---
language:
- de
- en
license: llama3
tags:
- merge
- mergekit
- mlx
base_model:
- cstr/llama3-8b-spaetzle-v31
- cstr/llama3-8b-spaetzle-v28
- cstr/llama3-8b-spaetzle-v26
- cstr/llama3-8b-spaetzle-v20
---
# cstr/llama3-8b-spaetzle-v33-mlx-4bit
The model [cstr/llama3-8b-spaetzle-v33-mlx-4bit](https://huggingface.co./cstr/llama3-8b-spaetzle-v33-mlx-4bit) was converted to the MLX format from [cstr/llama3-8b-spaetzle-v33](https://huggingface.co./cstr/llama3-8b-spaetzle-v33) using mlx-lm version **0.14.0**.
## Use with mlx
```bash
pip install mlx-lm
```
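For a quick test without writing any Python, mlx-lm also ships a command-line generator; a minimal invocation might look like the following (flag values such as `--max-tokens 100` are illustrative, and the model is downloaded on first use):

```shell
# Generate a short completion directly from the command line
python -m mlx_lm.generate \
  --model cstr/llama3-8b-spaetzle-v33-mlx-4bit \
  --prompt "hello" \
  --max-tokens 100
```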
```python
from mlx_lm import load, generate

# Download (on first use) and load the 4-bit quantized model and tokenizer
model, tokenizer = load("cstr/llama3-8b-spaetzle-v33-mlx-4bit")

# Generate a completion; verbose=True streams the output as it is produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```