
# d4n33l/qwen2.5-coder-1.5b-it-pii

The model `d4n33l/qwen2.5-coder-1.5b-it-pii` was converted to MLX format from [`Qwen/Qwen2.5-Coder-1.5B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct) using mlx-lm version 0.19.2.
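
For reference, conversions like this one are typically produced with mlx-lm's `mlx_lm.convert` command-line utility. A minimal sketch of such a conversion (the output path is illustrative, and the exact flags are assumed from the mlx-lm CLI, not taken from this card):

```bash
# Convert the original Hugging Face checkpoint to MLX format.
# --hf-path: source repo on the Hub; --mlx-path: local output directory.
mlx_lm.convert --hf-path Qwen/Qwen2.5-Coder-1.5B-Instruct \
    --mlx-path qwen2.5-coder-1.5b-it-mlx
```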

## Use with mlx

```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub
# (or load them from a local path).
model, tokenizer = load("d4n33l/qwen2.5-coder-1.5b-it-pii")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the raw prompt in the
# instruct-style message format the model was fine-tuned on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
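
mlx-lm also ships a command-line generator, so you can try the model without writing any Python. A quick sketch, assuming the standard `mlx_lm.generate` entry point installed alongside the package:

```bash
# One-off generation from the command line; downloads the model on first use.
mlx_lm.generate --model d4n33l/qwen2.5-coder-1.5b-it-pii --prompt "hello"
```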
Model size: 1.54B parameters · Tensor type: BF16 · Format: Safetensors
