
xsanskarx/qwen2-0.5b_numina_math-instruct

This repository contains a fine-tuned version of the Qwen2-0.5B model, optimized for understanding mathematical instructions and reasoning through problems. It is trained on the NuminaMath CoT dataset, a rich source of mathematical problems with chain-of-thought solutions designed to strengthen reasoning even in smaller language models.

Motivation

My primary motivation is the hypothesis that high-quality datasets focused on mathematical reasoning can significantly improve the performance of smaller models on tasks that require logical deduction and problem-solving. Benchmark results evaluating this claim will be uploaded next.

Model Details

  • Base Model: Qwen2-0.5B
  • Fine-tuning Dataset: NuminaMath CoT
  • Key Improvements: Enhanced ability to parse mathematical instructions, solve problems, and provide step-by-step explanations.

Usage

You can easily load and use this model with the Hugging Face Transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("xsanskarx/qwen2-0.5b_numina_math-instruct")
model = AutoModelForCausalLM.from_pretrained("xsanskarx/qwen2-0.5b_numina_math-instruct")
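
Once the model and tokenizer are loaded, inference is a standard generate call. Below is a minimal sketch; the prompt and generation settings are illustrative, not a required chat template:

import torch

prompt = "Solve step by step: what is 17 * 24?"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding for reproducibility; raise max_new_tokens for longer solutions.
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))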
Model Specifications

  • Format: Safetensors
  • Model size: 494M parameters
  • Tensor type: BF16
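
The weights are stored in BF16. To load them at that precision rather than the library default, a minimal sketch (assumes a device with bfloat16 support; torch_dtype is a standard from_pretrained argument):

import torch
from transformers import AutoModelForCausalLM

# Load the checkpoint in bfloat16 to match the stored tensor type.
model = AutoModelForCausalLM.from_pretrained(
    "xsanskarx/qwen2-0.5b_numina_math-instruct",
    torch_dtype=torch.bfloat16,
)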

Dataset used to train xsanskarx/qwen2-0.5b_numina_math-instruct: AI-MO/NuminaMath-CoT