mlx-community/simplescaling-s1-32B-fp16

The model mlx-community/simplescaling-s1-32B-fp16 was converted to MLX format from simplescaling/s1-32B using mlx-lm version 0.21.1 by Focused.
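
The conversion itself can be reproduced with the convert utility that ships with mlx-lm. The snippet below is a minimal sketch, assuming the Python API of recent mlx-lm releases; the output directory name and the dtype keyword are illustrative and may differ between versions.

from mlx_lm import convert

# Download the original weights from simplescaling/s1-32B, convert them to
# MLX format without quantization, and write the fp16 result to a local folder.
convert(
    hf_path="simplescaling/s1-32B",
    mlx_path="simplescaling-s1-32B-fp16",  # illustrative output directory
    dtype="float16",
)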

Use with mlx

pip install mlx-lm

from mlx_lm import load, generate

# Download the converted weights and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/simplescaling-s1-32B-fp16")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is available
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
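
For longer responses you can stream tokens as they are produced instead of waiting for the full completion. This is a minimal sketch using mlx-lm's stream_generate helper; note that the exact type it yields (plain strings vs. response objects with a .text field) has changed across mlx-lm versions, and max_tokens is just an illustrative limit.

from mlx_lm import load, stream_generate

model, tokenizer = load("mlx-community/simplescaling-s1-32B-fp16")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk of generated text as soon as it is available.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=512):
    print(chunk.text, end="", flush=True)
print()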

Focused is a technology company at the forefront of AI-driven development, empowering organizations to unlock the full potential of artificial intelligence. From integrating innovative models into existing systems to building scalable, modern AI infrastructures, we specialize in delivering tailored, incremental solutions that meet you where you are. Curious how we can help with your next AI project? Get in touch.

Model size: 32.8B parameters · Tensor type: FP16 · Format: safetensors