# mlx-community/plamo-2-1b
The model mlx-community/plamo-2-1b was converted to MLX format from pfnet/plamo-2-1b using mlx-lm version 0.21.0.
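For reference, the conversion can be reproduced with mlx-lm's `convert` utility. The sketch below is an assumption about how such a card is produced, not a record of the exact command used for this repository; it requires the patched mlx-lm build installed in the next section, and `plamo-2-1b-mlx` is an arbitrary output path.

```python
# Hedged sketch: reproduce the HF -> MLX conversion with mlx-lm's convert utility.
# Requires the patched mlx-lm build installed below; the exact options used for
# this repository are not recorded here. "plamo-2-1b-mlx" is an arbitrary path.
from mlx_lm import convert

convert("pfnet/plamo-2-1b", mlx_path="plamo-2-1b-mlx")
```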
## Use with mlx

```bash
# numba is required for the new PLaMo tokenizer
pip install mlx numba

# install mlx-lm from the fork branch that adds PLaMo-2 support
pip install "git+https://github.com/mitmul/mlx-examples.git@mitmul/add-plamo2-1b-support#egg=mlx-lm&subdirectory=llms"
```
Then run generation from the command line:

```bash
# The prompt means "Here is a recipe for making delicious curry." in Japanese.
python -m mlx_lm.generate \
  --model mlx-community/plamo-2-1b \
  --prompt '美味しいカレーの作り方のレシピを紹介します。' \
  --max-tokens 1024 \
  --extra-eos-token '<|plamo:bos|>' \
  --ignore-chat-template \
  --temp 0
```
Example output (characters that could not be recovered from the transcript are shown as �):

```
Fetching 7 files: 100%|█████████████████████████████████████████████████████████████████████| 7/7 [00:00<00:00, 116508.44it/s]
==========
## 美味しいカレーの作り方
### 材料
- 玉ねぎ �個
- にんじん �本
- じゃがいも �個
- 豚ひき肉 ���ｇ
- にんにく �かけ
- しょうが �かけ
- カレー粉 大さじ�
- 塩 小さじ�
- 水 ���ｍｌ
- トマト缶 �缶
- 水 ���ｍｌ
- サラダ油 大さじ�
- 塩 少々
- こしょう 少々
### 作り方
- 玉ねぎ、にんじん、じゃがいも、にんにく、しょうがをみじん切りにします。
- フライパンにサラダ油を熱し、にんにく、しょうがを炒めます。
- 香りが出てきたら、玉ねぎ、にんじん、じゃがいも、豚ひき肉、塩、こしょう、水、トマト缶、水、カレー粉を入れて炒めます。
- 野菜に火が通ったら、カレー粉を溶かします。
- 水、塩こしょうで味を調えます。
- 最後にトマト缶を加えて、煮込んで完成です。
## まとめ
美味しいカレーの作り方のレシピを紹介しました。
ぜひ参考にしてみてください。
==========
Prompt: 8 tokens, 114.293 tokens-per-sec
Generation: 234 tokens, 67.785 tokens-per-sec
Peak memory: 2.727 GB
```

In English, the output is titled "How to Make Delicious Curry": it lists ingredients (onion, carrot, potato, ground pork, garlic, ginger, curry powder, salt, water, canned tomatoes, salad oil, pepper), walks through the cooking steps, and closes with a summary inviting the reader to try the recipe.
You can also use this model from Python:
```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/plamo-2-1b")
prompt = "美味しいカレーの作り方のレシピを紹介します。"  # "Here is a recipe for making delicious curry."
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
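For interactive use you can also stream tokens as they are produced. A minimal sketch, assuming the `stream_generate` API of recent mlx-lm releases, where each yielded chunk exposes a `.text` field (older versions yielded plain strings):

```python
# Minimal streaming sketch: print tokens as the model produces them.
# Assumes recent mlx-lm, where stream_generate yields chunks with a .text field.
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/plamo-2-1b")
prompt = "美味しいカレーの作り方のレシピを紹介します。"  # "Here is a recipe for making delicious curry."

for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```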