---
base_model: mistralai/Mistral-Small-24B-Instruct-2501
datasets:
- ServiceNow-AI/R1-Distill-SFT
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
license: apache-2.0
language:
- en
---
<picture>
<img alt="image" src="https://huggingface.co./tensopolis/assets/resolve/main/logo_512.png">
</picture>
## mistral-small-r1-tensopolis
This model is a **reasoning** fine-tune of unsloth/**mistral-small-24b-instruct-2501**-unsloth-bnb-4bit, trained on **1x A100** for about **100 hours**. Please refer to the base model and dataset pages for more information about the license, prompt format, etc.
Base model: [**mistralai/Mistral-Small-24B-Instruct-2501**](https://huggingface.co./mistralai/Mistral-Small-24B-Instruct-2501)
Dataset: [**ServiceNow-AI/R1-Distill-SFT**](https://huggingface.co./datasets/ServiceNow-AI/R1-Distill-SFT)
#### Basic Instruct Template (V7-Tekken)
```
<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]
```
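As an illustration, the template above can be assembled programmatically. This is a minimal sketch based only on the template string shown here; the function name and turn structure are assumptions, and in practice the tokenizer's own chat template should be preferred:

```python
def build_v7_tekken_prompt(system_prompt, turns):
    """Assemble a V7-Tekken prompt string.

    `turns` is a list of (user_message, assistant_response) pairs;
    the final pair may use None as the response to leave the prompt
    open for the model to generate.
    """
    prompt = f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    for user_msg, assistant_msg in turns:
        prompt += f"[INST]{user_msg}[/INST]"
        if assistant_msg is not None:
            # Completed assistant turns are closed with </s>.
            prompt += f"{assistant_msg}</s>"
    return prompt

# Single-turn prompt awaiting a model response:
print(build_v7_tekken_prompt("You are helpful.", [("Hello!", None)]))
# <s>[SYSTEM_PROMPT]You are helpful.[/SYSTEM_PROMPT][INST]Hello![/INST]
```

With `transformers`, the same result is normally obtained via `tokenizer.apply_chat_template(...)`, which applies the template bundled with the model repository.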
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)