Barcenas R1 Qwen 1.5b

Based on deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B and fine-tuned with data from the pinzhenchen/alpaca-cleaned-es dataset.

The goal of this model is to provide a Spanish-language reasoning LLM, in the style of o1 or R1, that is small enough to run on most computers.

Made with ❤️ in Guadalupe, Nuevo Leon, Mexico 🇲🇽
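Since the card emphasizes that the model is small enough to run locally, here is a minimal sketch of loading it with the Hugging Face `transformers` library (an assumption; the card does not prescribe a runtime). The `Usuario:`/`Asistente:` prompt format is purely illustrative — in practice the tokenizer's `apply_chat_template` should be preferred.

```python
# Minimal sketch of running Barcenas-R1-Qwen-1.5b locally with Hugging Face
# `transformers` (assumed installed, along with torch).
import os

MODEL_ID = "Danielbrdz/Barcenas-R1-Qwen-1.5b"

def build_prompt(question: str) -> str:
    # Hypothetical single-turn format, for illustration only.
    return f"Usuario: {question}\nAsistente:"

# Gated behind an env var so importing this file stays cheap; the first
# run downloads the FP16 weights (roughly 3.5 GB for 1.78B parameters).
if os.environ.get("RUN_BARCENAS_DEMO"):
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(build_prompt("¿Cuánto es 7 por 8?"), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
```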

Model size: 1.78B params (Safetensors, FP16)
