---
language:
- ja
- en
license: llama3.1
tags:
- japanese
- llama
- llama-3
- mlx
pipeline_tag: text-generation
inference: false
---
# mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit
The model [mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit](https://huggingface.co./mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit) was converted to MLX format from [cyberagent/Llama-3.1-70B-Japanese-Instruct-2407](https://huggingface.co./cyberagent/Llama-3.1-70B-Japanese-Instruct-2407) using mlx-lm version **0.16.1**.
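For reference, an 8-bit quantization like this one can be produced with mlx-lm's `convert` utility. A minimal sketch, assuming default settings otherwise (the exact options used for this repository are not recorded here; check `mlx_lm.convert --help` in your installed version):

```python
from mlx_lm import convert

# Download the original weights, quantize them to 8 bits, and write the
# result to ./mlx_model. The q_bits value and default output path are
# assumptions, not the recorded conversion settings for this repo.
convert(
    "cyberagent/Llama-3.1-70B-Japanese-Instruct-2407",
    quantize=True,
    q_bits=8,
)
```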
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the weights from the Hub (if not already cached) and load
# the model together with its tokenizer.
model, tokenizer = load("mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit")

# Generate a completion; verbose=True streams tokens as they are produced.
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
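Since this is an instruct-tuned model, wrapping the prompt in the model's chat template generally produces better responses than a raw string. A minimal sketch using the tokenizer's `apply_chat_template`; the Japanese example prompt is illustrative, not from the original card:

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Llama-3.1-70B-Japanese-Instruct-2407-8bit")

# Wrap the user message in the Llama 3.1 chat template before generating.
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]  # "What is the capital of Japan?"
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```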