---
language:
- en
- fr
base_model:
- mistralai/Mistral-Small-24B-Base-2501
tags:
- roleplay
- deepseek
- rp
- r1
- mistral
- distill
---
# MistralThinker Model Card
Please read this: https://huggingface.co./Undi95/MistralThinker-v1.1/discussions/1 \
Prefill required for the Assistant: `<think>\n`
## Model Description
**Model Name:** MistralThinker\
**Version:** 1.1\
**Prompt Format:** Mistral-V7
```
[SYSTEM_PROMPT]{system prompt}[/SYSTEM_PROMPT][INST]{user message}[/INST]{assistant response}</s>
```
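For reference, here is a minimal Python sketch that assembles this prompt string by hand and appends the required `<think>\n` prefill (the example system prompt and user message are taken from the Usage Recommendations below):
```python
# Minimal sketch: build the Mistral-V7 prompt by hand and append the assistant prefill.
system_prompt = "You are a friendly fantasy innkeeper who greets travelers from distant lands."
user_message = "Hello, I'm a wandering knight seeking shelter. Could you share a story about local legends?"

prompt = (
    f"[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    f"[INST]{user_message}[/INST]"
    "<think>\n"  # prefill so the assistant always starts with its thinking block
)
```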
This model is a specialized variant of **Mistral-Small-24B-Base-2501**, adapted using a **DeepSeek R1** distillation process. It is **primarily designed for roleplay (RP) and storywriting** applications, focusing on character interactions, narrative generation, and creative storytelling. Approximately **40% of the training dataset** consists of roleplay/storywriting/character card data, ensuring rich and contextually immersive outputs in these domains.
## Model Sources
- **Base Model:** [Mistral-Small-24B-Base-2501](https://huggingface.co./mistralai/Mistral-Small-24B-Base-2501)
- **Fine-Tuning Approach:** DeepSeek R1 process (focused on RP)
- **Dataset Size:** The training dataset has **doubled** in size since the last version, adding more neutral logs and training the Base model to stick more closely to my new format.
## Intended Use
- **Primary Use Cases:**
- **Roleplay (RP):** Engaging with users in fictional or scenario-based interactions.
- **Storywriting:** Generating narratives, character dialogues, and creative texts.
- **Character Lore Generation:** Serving as a resource to craft or expand on character backstories and interactions.
- **How To Use:**
1. **User-First Message:** The first message in any interaction should come from the user, ensuring the model responds in a narrative or roleplay context guided by user input.
2. **Contextual Information:** User or assistant details can be placed either in the system prompt or the user's first message. A system prompt is **not mandatory**, but any contextual instructions or role descriptions can help set the stage.
  3. **DeepSeek-Style Interaction:** The model can also be used purely as a **DeepSeek distill** without any system prompt, providing flexible usage for direct storytelling or roleplay scenarios. The model may still lean toward its roleplay data; this is expected. A minimal loading and generation sketch follows below.
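As a rough, untested sketch (the repository id below is assumed from the discussion link above, and the sampling settings are arbitrary), loading the model with 🤗 Transformers and generating with the prefill could look like this:
```python
# Sketch only: load the model and generate with the <think>\n prefill.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Undi95/MistralThinker-v1.1"  # assumed repository id (see the discussion link above)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

system_prompt = "You are a friendly fantasy innkeeper who greets travelers from distant lands."
user_message = "Hello, I'm a wandering knight seeking shelter. Could you share a story about local legends?"

# Mistral-V7 format with the required assistant prefill.
prompt = f"[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT][INST]{user_message}[/INST]<think>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.8)
# Print only the newly generated tokens (thinking block + reply).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```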
## Training Data
- **DeepSeek R1 Thinking Process:** The model inherits a refined chain-of-thought (thinking process) from DeepSeek R1, which places heavy emphasis on **roleplay** and narrative coherence.
- **Dataset Composition:**
- 40%: RP/Storywriting/Character Cards
  - 60%: Various curated data for broader understanding of language, math, logic, spatial reasoning, and more
- **Data Scaling:** The dataset size was **doubled** compared to previous iterations, which enhances the model’s creative and contextual capabilities.
## Model Performance
- **Strengths:**
- **Storytelling & Roleplay:** Rich in creative generation, character portrayal, and scenario building.
- **Dialogue & Interaction:** Capable of sustaining engaging and context-driven dialogues.
- **Adaptability:** Can be used with or without a system prompt to match a range of user preferences.
- **Limitations & Bias:**
  - **Hallucination:** The model can generate fictitious information during its thinking process, yet still end up with a successful reply.
  - **Thinking can be skipped:** Being, in essence, a distillation of DeepSeek R1, this model (even though it was trained on the Base model) may occasionally forget to add `<think>\n` in some scenarios; prefilling the assistant turn as described above avoids this.
## Ethical Considerations
- Yes
## Usage Recommendations
1. **System Prompt (Optional):**
You may provide a high-level system prompt detailing the scenario or the desired style of roleplay and storywriting.
_Example: "You are a friendly fantasy innkeeper who greets travelers from distant lands."_
2. **User’s First Message:**
- Must clearly state or imply the scenario or context if no system prompt is provided.
_Example: "Hello, I’m a wandering knight seeking shelter. Could you share a story about local legends?"_
3. **Roleplay & Storywriting Focus:**
- Encourage the model to develop characters, backstories, and immersive dialogues.
- For more direct, unfiltered or freeform creativity, skip the system prompt.
   - If you still want to include some "logs" from previous messages before starting a conversation, put them in the first user message or in the system prompt.
   - You can also put example messages from the character you roleplay with in the system prompt, as in the sketch after this list.
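Below is a small, purely illustrative sketch (the character name, example message, and logs are invented for demonstration) of packing these pieces into the prompt format:
```python
# Illustrative only: character card + example message go in the system prompt,
# previous "logs" go at the top of the first user message.
character_card = (
    "You are Mira, the innkeeper of the Silver Stag tavern.\n"
    "Example message from Mira: \"Welcome, traveler! Warm stew and warmer stories await you.\""
)
previous_logs = "[Earlier: the knight stabled his horse and shook the rain from his cloak.]"
first_user_message = f"{previous_logs}\nHello, I'm a wandering knight seeking shelter for the night."

prompt = (
    f"[SYSTEM_PROMPT]{character_card}[/SYSTEM_PROMPT]"
    f"[INST]{first_user_message}[/INST]<think>\n"
)
```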
