Model Summary

This model is a fine-tuned version of gemma-2-2b-it, optimized for instruction following and reasoning. It was trained with MLX and LoRA on the sequelbox/Raiden-DeepSeek-R1 dataset, which consists of 62.9k examples generated by DeepSeek R1. Fine-tuning ran for 600 iterations to improve the model's ability to reason through more complex problems.
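As a rough sketch of the setup described above (the local data directory is a placeholder; mlx-lm's `mlx_lm.lora` command is the standard LoRA fine-tuning entry point, and the exact hyperparameters beyond the iteration count are assumptions):

```shell
# LoRA fine-tune of gemma-2-2b-it with mlx-lm.
# --data expects a directory containing train/valid JSONL files;
# ./raiden-deepseek-r1 is a placeholder path.
python -m mlx_lm.lora \
    --model google/gemma-2-2b-it \
    --train \
    --data ./raiden-deepseek-r1 \
    --iters 600
```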

Model Details

Capabilities

This model improves upon gemma-2-2b-it with additional instruction-following and reasoning capabilities learned from DeepSeek R1-generated examples. It answers simple questions directly and generates long chain-of-thought reasoning for more complex problems. It is well suited for:

  • Question answering
  • Reasoning-based tasks
  • Coding
  • Running on consumer hardware
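Because the model was trained with MLX, a simple way to try it on consumer Apple-silicon hardware is mlx-lm's generation CLI (the prompt and token limit here are illustrative):

```shell
# Run a single generation with mlx-lm; --max-tokens caps the output length,
# which matters for long chain-of-thought responses.
python -m mlx_lm.generate \
    --model ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning \
    --prompt "Why is the sky blue?" \
    --max-tokens 512
```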

Limitations

  • Chain-of-thought reasoning is sometimes not triggered for complex problems when it should be. You can nudge the model by asking it to show its thoughts; it will then generate think tags and begin reasoning.
  • On harder-than-average reasoning problems, the model can get stuck in long "thinking" loops without ever reaching a conclusive answer.
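The nudge described in the first limitation can be applied directly in the prompt. A minimal sketch using mlx-lm's generation CLI (the nudge wording is an assumption; any phrasing that asks the model to show its thoughts should work):

```shell
# Explicitly ask for reasoning to trigger think tags on a problem
# the model might otherwise answer directly.
python -m mlx_lm.generate \
    --model ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning \
    --prompt "A train leaves at 3pm traveling 60 mph. When does it arrive 150 miles away? Show your thoughts before answering." \
    --max-tokens 1024
```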
Model Specifications

Format: Safetensors
Model size: 2.61B params
Tensor type: BF16

Note: the HF Inference API does not support text-generation models from the mlx library, so this model is not currently available via any of the supported Inference Providers.

Model tree for ApatheticWithoutTheA/gemma-2-2b-it-R1-Reasoning

Base model: google/gemma-2-2b

Dataset used to train: sequelbox/Raiden-DeepSeek-R1