---
license: apache-2.0
language:
  - ja
pipeline_tag: text-generation
library_name: transformers
base_model: SakanaAI/TinySwallow-1.5B-Instruct
datasets:
  - tokyotech-llm/lmsys-chat-1m-synth
  - tokyotech-llm/swallow-magpie-ultra-v0.1
  - tokyotech-llm/swallow-gemma-magpie-v0.1
tags:
  - mlx
---

# mlx-community/TinySwallow-1.5B-Instruct-4bit

The model [mlx-community/TinySwallow-1.5B-Instruct-4bit](https://huggingface.co/mlx-community/TinySwallow-1.5B-Instruct-4bit) was converted to MLX format from [SakanaAI/TinySwallow-1.5B-Instruct](https://huggingface.co/SakanaAI/TinySwallow-1.5B-Instruct) using mlx-lm version **0.21.1**.
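For reference, a quantized conversion like this one can typically be reproduced with mlx-lm's `convert` utility. A minimal sketch, assuming the Python `convert` API and its default 4-bit quantization settings (argument defaults may differ across mlx-lm versions):

```python
from mlx_lm import convert

# Sketch: convert the base model to MLX format with 4-bit quantization.
# mlx_path is an illustrative output directory; quantize=True applies
# mlx-lm's default group-wise 4-bit quantization.
convert(
    "SakanaAI/TinySwallow-1.5B-Instruct",
    mlx_path="TinySwallow-1.5B-Instruct-4bit",
    quantize=True,
)
```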

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/TinySwallow-1.5B-Instruct-4bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
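
Generation can also be streamed token by token instead of returned all at once. A minimal sketch using mlx-lm's `stream_generate`; the `.text` field on the yielded response objects is assumed from recent mlx-lm releases:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/TinySwallow-1.5B-Instruct-4bit")

# The model is tuned for Japanese; "こんにちは" means "hello".
messages = [{"role": "user", "content": "こんにちは"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as it is generated rather than waiting for the full reply.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```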