# mlx-community/plamo-2-1b

The model **mlx-community/plamo-2-1b** (1.29B parameters, BF16) was converted to MLX format from [pfnet/plamo-2-1b](https://huggingface.co/pfnet/plamo-2-1b) using mlx-lm version 0.21.0.
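For reference, a conversion of this kind can be reproduced with mlx-lm's `convert` API (with the patched mlx-lm branch installed, as in the install step below). This is a minimal sketch with a hypothetical output path, not the exact command used for this repo:

```python
# Minimal sketch of the conversion step, assuming mlx-lm's convert API
# (requires the patched mlx-lm branch installed as shown below).
from mlx_lm import convert

convert(
    "pfnet/plamo-2-1b",         # source Hugging Face repo
    mlx_path="plamo-2-1b-mlx",  # hypothetical local output directory
)
```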

## Use with mlx

```bash
pip install mlx numba  # numba is required for the new PLaMo tokenizer
pip install "git+https://github.com/mitmul/mlx-examples.git@mitmul/add-plamo2-1b-support#egg=mlx-lm&subdirectory=llms"
python -m mlx_lm.generate \
  --model mlx-community/plamo-2-1b \
  --prompt '็พŽๅ‘ณใ—ใ„ใ‚ซใƒฌใƒผใฎไฝœใ‚Šๆ–นใฎใƒฌใ‚ทใƒ”ใ‚’็ดนไป‹ใ—ใพใ™ใ€‚' \
  --max-tokens 1024 \
  --extra-eos-token '<|plamo:bos|>' \
  --ignore-chat-template \
  --temp 0
```
Output:

```
Fetching 7 files: 100%|██████████| 7/7 [00:00<00:00, 116508.44it/s]
==========

## ็พŽๅ‘ณใ—ใ„ใ‚ซใ�ƒฌใƒผใฎไฝœใ‚Šๆ–น
### ๆๆ–™
- ็Ž‰ใญใŽ ๏ผ‘ๅ€‹
- ใซใ‚“ใ˜ใ‚“ ๏ผ‘ๆœฌ
- ใ˜ใ‚ƒใŒใ„ใ‚‚ ๏ผ‘ๅ€‹
- ่ฑšใฒใ่‚‰ ๏ผ‘๏ผ๏ผ๏ฝ‡
- ใซใ‚“ใซใ ๏ผ‘ใ‹ใ‘
- ใ—ใ‚‡ใ†ใŒ ๏ผ‘ใ‹ใ‘
- ใ‚ซใƒฌใƒผ็ฒ‰ ๅคงใ•ใ˜๏ผ‘
- ๅกฉ ๅฐใ•ใ˜๏ผ‘
- ๆฐด ๏ผ‘๏ผ๏ผ๏ฝƒ๏ฝƒ
- ใƒˆใƒžใƒˆ็ผถ ๏ผ‘็ผถ
- ๆฐด ๏ผ‘๏ผ๏ผ๏ฝƒ๏ฝƒ
- ใ‚ตใƒฉใƒ€ๆฒน ๅคงใ•ใ˜๏ผ‘
- ๅกฉ ๅฐ‘ใ€…
- ใ“ใ—ใ‚‡ใ† ๅฐ‘ใ€…
### ไฝœใ‚Šๆ–น
- ็Ž‰ใญใŽใ€ใซใ‚“ใ˜ใ‚“ใ€ใ˜ใ‚ƒใŒใ„ใ‚‚ใ€ใซใ‚“ใซใใ€ใ—ใ‚‡ใ†ใŒใ‚’ใฟใ˜ใ‚“ๅˆ‡ใ‚Šใซใ—ใพใ™ใ€‚
- ใƒ•ใƒฉใ‚คใƒ‘ใƒณใซใ‚ตใƒฉใƒ€ๆฒนใ‚’็†ฑใ—ใ€ใซใ‚“ใซใใ€ใ—ใ‚‡ใ†ใŒใ‚’็‚’ใ‚ใพใ™ใ€‚
- ้ฆ™ใ‚ŠใŒๅ‡บใฆใใŸใ‚‰ใ€็Ž‰ใญใŽใ€ใซใ‚“ใ˜ใ‚“ใ€ใ˜ใ‚ƒใŒใ„ใ‚‚ใ€่ฑšใฒใ่‚‰ใ€ๅกฉใ€ใ“ใ—ใ‚‡ใ†ใ€ๆฐดใ€ใƒˆใƒžใƒˆ็ผถใ€ๆฐดใ€ใ‚ซใƒฌใƒผ็ฒ‰ใ‚’ๅ…ฅใ‚Œใฆ็‚’ใ‚ใพใ™ใ€‚
- ้‡Ž่œใซ็ซใŒ้€šใฃใŸใ‚‰ใ€ใ‚ซใƒฌใƒผ็ฒ‰ใ‚’ๆบถใ‹ใ—ใพใ™ใ€‚
- ๆฐดใ€ๅกฉใ€ใ“ใ—ใ‚‡ใ†ใงๅ‘ณใ‚’่ชฟใˆใพใ™ใ€‚
- ๆœ€ๅพŒใซใƒˆใƒžใƒˆ็ผถใ‚’ๅŠ ใˆใฆใ€็…ฎ่พผใ‚“ใงๅฎŒๆˆใงใ™ใ€‚
## ใพใจใ‚
็พŽๅ‘ณใ—ใ„ใ‚ซใƒฌใƒผใฎไฝœใ‚Šๆ–นใฎใƒฌใ‚ทใƒ”ใ‚’็ดนไป‹ใ—ใพใ—ใŸใ€‚
ใœใฒๅ‚่€ƒใซใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚
==========
Prompt: 8 tokens, 114.293 tokens-per-sec
Generation: 234 tokens, 67.785 tokens-per-sec
Peak memory: 2.727 GB
```

You can also use the model from Python:

```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hub.
model, tokenizer = load("mlx-community/plamo-2-1b")

prompt = "็พŽๅ‘ณใ—ใ„ใ‚ซใƒฌใƒผใฎไฝœใ‚Šๆ–นใฎใƒฌใ‚ทใƒ”ใ‚’็ดนไป‹ใ—ใพใ™ใ€‚"

# verbose=True prints the generated text and throughput statistics.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
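For interactive use you can stream tokens instead of waiting for the full completion. A minimal sketch, assuming mlx-lm 0.21's `stream_generate` generator (which yields response objects carrying the decoded fragment in a `.text` field) and the tokenizer wrapper's `add_eos_token` method to mirror the CLI's `--extra-eos-token` flag:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/plamo-2-1b")

# Mirror the CLI's --extra-eos-token flag so generation stops at the same
# sentinel (add_eos_token on the tokenizer wrapper is an assumption here).
tokenizer.add_eos_token("<|plamo:bos|>")

prompt = "็พŽๅ‘ณใ—ใ„ใ‚ซใƒฌใƒผใฎไฝœใ‚Šๆ–นใฎใƒฌใ‚ทใƒ”ใ‚’็ดนไป‹ใ—ใพใ™ใ€‚"

# Print each newly decoded text fragment as soon as it is generated.
for response in stream_generate(model, tokenizer, prompt, max_tokens=1024):
    print(response.text, end="", flush=True)
print()
```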