Model Card for Mistral-Small-24B-Instruct-2501 (with tool calling)

DISCLAIMER: The tool calling template is a work in progress.

Mistral Small 3 (2501) sets a new benchmark in the "small" Large Language Models category below 70B, boasting 24B parameters and achieving state-of-the-art capabilities comparable to larger models! This model is an instruction-fine-tuned version of the base model: Mistral-Small-24B-Base-2501.

Mistral Small can be deployed locally and is exceptionally "knowledge-dense", fitting in a single RTX 4090 or a 32GB RAM MacBook once quantized. Perfect for:

  • Fast response conversational agents.
  • Low latency function calling.
  • Subject matter experts via fine-tuning.
  • Local inference for hobbyists and organizations handling sensitive data.

For enterprises that need specialized capabilities (increased context, particular modalities, domain specific knowledge, etc.), we will be releasing commercial models beyond what Mistral AI contributes to the community.

This release demonstrates our commitment to open source, serving as a strong base model.

Learn more about Mistral Small in our blog post.

Model developer: Mistral AI Team

Key Features

  • Multilingual: Supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish.
  • Agent-Centric: Offers best-in-class agentic capabilities with native function calling and JSON outputting.
  • Advanced Reasoning: State-of-the-art conversational and reasoning capabilities.
  • Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
  • Context Window: A 32k context window.
  • System Prompt: Maintains strong adherence and support for system prompts.
  • Tokenizer: Utilizes a Tekken tokenizer with a 131k vocabulary size.

Benchmark results

Human evaluated benchmarks

| Category | Gemma-2-27B | Qwen-2.5-32B | Llama-3.3-70B | Gpt4o-mini |
|---|---|---|---|---|
| Mistral is better | 0.536 | 0.496 | 0.192 | 0.200 |
| Mistral is slightly better | 0.196 | 0.184 | 0.164 | 0.204 |
| Ties | 0.052 | 0.060 | 0.236 | 0.160 |
| Other is slightly better | 0.060 | 0.088 | 0.112 | 0.124 |
| Other is better | 0.156 | 0.172 | 0.296 | 0.312 |

Note:

  • We conducted side-by-side evaluations with an external third-party vendor on a set of over 1k proprietary coding and generalist prompts.
  • Evaluators were tasked with selecting their preferred model response from anonymized generations produced by Mistral Small 3 vs another model.
  • We are aware that in some cases the benchmarks on human judgement starkly differ from publicly available benchmarks, but have taken extra caution in verifying a fair evaluation. We are confident that the above benchmarks are valid.

Publicly accessible benchmarks

Reasoning & Knowledge

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| mmlu_pro_5shot_cot_instruct | 0.663 | 0.536 | 0.666 | 0.683 | 0.617 |
| gpqa_main_cot_5shot_instruct | 0.453 | 0.344 | 0.531 | 0.404 | 0.377 |

Math & Coding

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| humaneval_instruct_pass@1 | 0.848 | 0.732 | 0.854 | 0.909 | 0.890 |
| math_instruct | 0.706 | 0.535 | 0.743 | 0.819 | 0.761 |

Instruction following

| Evaluation | mistral-small-24B-instruct-2501 | gemma-2-27b | llama-3.3-70b | qwen2.5-32b | gpt-4o-mini-2024-07-18 |
|---|---|---|---|---|---|
| mtbench_dev | 8.35 | 7.86 | 7.96 | 8.26 | 8.33 |
| wildbench | 52.27 | 48.21 | 50.04 | 52.73 | 56.13 |
| arena_hard | 0.873 | 0.788 | 0.840 | 0.860 | 0.897 |
| ifeval | 0.829 | 0.8065 | 0.8835 | 0.8401 | 0.8499 |

Note:

  • Performance accuracy on all benchmarks was obtained through the same internal evaluation pipeline; as such, numbers may vary slightly from previously reported performance (Qwen2.5-32B-Instruct, Llama-3.3-70B-Instruct, Gemma-2-27B-IT).
  • Judge-based evals such as Wildbench, Arena Hard and MTBench were based on gpt-4o-2024-05-13.

Basic Instruct Template (V7-Tekken)

<s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST]

<system prompt>, <user message> and <assistant response> are placeholders.

Please make sure to use mistral-common as the source of truth for the chat template.
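
For illustration only, here is a minimal sketch of how a multi-turn conversation maps onto this layout. mistral-common (or the repository's chat template) remains the source of truth for the exact token sequence; build_prompt below is a hypothetical helper, not part of any library.

# Illustrative only: naive string assembly of the V7-Tekken layout above.
def build_prompt(system_prompt: str, turns: list) -> str:
    # turns is a list of (user_message, assistant_response) pairs;
    # the final pair uses None for the response the model should produce.
    prompt = f"<s>[SYSTEM_PROMPT]{system_prompt}[/SYSTEM_PROMPT]"
    for user_message, assistant_response in turns:
        prompt += f"[INST]{user_message}[/INST]"
        if assistant_response is not None:
            prompt += f"{assistant_response}</s>"
    return prompt

print(build_prompt("You are a helpful assistant.",
                   [("Hello!", "Hi there!"), ("How are you?", None)]))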

Usage

The model can be used with the following frameworks:

vLLM

We recommend using this model with the vLLM library to implement production-ready inference pipelines.

Note 1: We recommend using a relatively low temperature, such as temperature=0.15.

Note 2: Make sure to add a system prompt to the model to best tailor it to your needs. If you want to use the model as a general assistant, we recommend the following system prompt:

system_prompt = """You are Mistral Small 3, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris.
Your knowledge base was last updated on 2023-10-01. The current date is 2025-01-30.
When you're not sure about some information, you say that you don't have the information and don't make up anything.
If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. \"What are some good restaurants around me?\" => \"Where are you?\" or \"When is the next flight to Tokyo\" => \"Where do you travel from?\")"""

Note 3: Make sure to set the following sampling parameter at inference time for tool calling to work:

  "skip_special_tokens": False

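With the offline vLLM API, this corresponds to the skip_special_tokens argument of SamplingParams (a minimal sketch; the server-side equivalent via extra_body is shown in the tool calling example further below):

from vllm.sampling_params import SamplingParams

# Keep special tokens in the generated text so the tool-call control tokens
# reach the tool-call parser.
sampling_params = SamplingParams(
    max_tokens=512,
    temperature=0.15,
    skip_special_tokens=False,
)
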
Installation

Make sure you install vLLM >= 0.6.4:

pip install --upgrade vllm

Also make sure you have mistral_common >= 1.5.2 installed:

pip install --upgrade mistral_common
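
To double-check the installed versions against these minimums, a quick sketch using the standard library:

from importlib.metadata import version

# Both should satisfy the minimums above (vllm >= 0.6.4, mistral_common >= 1.5.2).
print("vllm:", version("vllm"))
print("mistral_common:", version("mistral_common"))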

You can also make use of a ready-to-go Docker image available on Docker Hub.

Server

We recommend that you use Mistral-Small-24B-Instruct-2501 in a server/client setting.

  1. Spin up a server:
vllm serve --model uncensoredai/Mistral-Small-24B-Instruct-2501 \
  --enable-auto-tool-choice \
  --tool-call-parser mistral_v3_debug \
  --tool-parser-plugin /path/to/mistral_small_v3_parser.py \
  --chat-template /path/to/chat_template_with_tools.jinja

Note: Running Mistral-Small-24B-Instruct-2501 on GPU requires ~55 GB of GPU RAM in bf16 or fp16.

Note: Don't mind the warning about a non-Mistral tokenizer; Mistral-Small-24B-Instruct v3 does use a LlamaTokenizer.

  2. To query the server from a client, you can use a simple Python snippet:
import requests
import json

url = "http://<your-server>:8000/v1/chat/completions"
headers = {"Content-Type": "application/json", "Authorization": "Bearer token"}

model = "uncensoredai/Mistral-Small-24B-Instruct-2501"

messages = [
    {
        "role": "system",
        "content": "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
    },
    {
        "role": "user",
        "content": "Give me 5 non-formal ways to say 'See you later' in French."
    },
]

data = {"model": model, "messages": messages}

response = requests.post(url, headers=headers, data=json.dumps(data))
print(response.json()["choices"][0]["message"]["content"])

# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```

Function calling

Mistral-Small-24B-Instruct-2501 is excellent at function / tool calling tasks via vLLM. E.g.:

Prompt template

Jinja is a powerful and flexible template engine for Python. It allows developers to create dynamic content by separating the structure of a document from its varying parts. Jinja templates are widely used in web development, configuration management, and data processing tasks.

Key features of Jinja templates include:

  • Variable substitution
  • Control structures (e.g., loops and conditionals)
  • Template inheritance
  • Automatic HTML escaping
  • Customizable syntax

Jinja templates are valuable because they enable:

  1. Code reusability
  2. Separation of concerns
  3. Dynamic content generation
  4. Improved maintainability

Extracting Prompt Template with jq

To extract the prompt template from a tokenizer_config.json file using jq, you can use the following command:

jq -r '.chat_template' tokenizer_config.json > chat_template.txt

This command reads the 'chat_template' field from the JSON file and saves its contents to a text file.

Creating JSON String with jq

To create a JSON string from a file containing a Jinja template, you can use jq as follows:

jq -n --rawfile template chat_template.txt '{"chat_template": $template}'

This command reads the contents of chat_template.txt and creates a JSON object with a 'chat_template' key containing the file's contents as a string.

Update Prompt template in tokenizer_config.json

jq --rawfile template chat_template_with_tools.jinja '.chat_template = $template' tokenizer_config.json > temp.json && mv temp.json tokenizer_config.json

Jinja input example:

# System configuration
bos_token: "<s>"
eos_token: "</s>"

# Tools configuration
tools:
  - type: "function"
    function:
      name: "get_weather"
      description: "Get the current weather in a given location"
      parameters:
        type: "object"
        properties:
          location:
            type: "string"
            description: "City and state, e.g., 'San Francisco, CA'"
          unit:
            type: "string"
            enum: ["celsius", "fahrenheit"]
        required: ["location", "unit"]

  - type: "function"
    function:
      name: "get_gold_price"
      description: "Get the current gold price in wanted currency (default to USD)."
      parameters:
        type: "object"
        properties:
          currency:
            type: "string"
            description: "Currency code e.g. USD or EUR."

# Messages array
messages:
  # Optional system message (if omitted, default will be used)
  - role: "system"
    content: "You are AI."

  # User message
  - role: "user"
    content: "What's the weather like in San Francisco?"

  # Example assistant message with tool calls
  - role: "assistant"
    tool_calls:
      - id: "call_weather_123456789"
        function:
          name: "get_weather"
          arguments:
            location: "San Francisco, CA"
            unit: "celsius"

  # Example tool response
  - role: "tool"
    tool_call_id: "call_weather_123456789"
    content: '{"temperature": 18, "condition": "sunny"}'

  # Example assistant final response
  - role: "assistant"
    content: "The weather in San Francisco is sunny with a temperature of 18°C."

๐Ÿ“ Develop and Test Jinja Prompt Templates with Jinja Sandbox

Jinja Sandbox is a great online tool for testing Jinja prompt templates before integrating them into your application. It allows you to quickly render templates with custom input data and debug formatting issues; a local alternative using the jinja2 package is sketched after the steps below.

How to Use:

  1. Go to Jinja Sandbox.
  2. Paste your Jinja template into the "Template" section.
  3. Provide test input data in the "JSON Context" field (must be a valid JSON object).
  4. Click "Render" to see the processed output.
  5. Debug and adjust as needed to match the expected format.
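
If you prefer iterating locally, here is a minimal sketch that renders the extracted template with the jinja2 package. It assumes the chat_template_with_tools.jinja file produced by the jq steps above; real chat templates may expect additional globals (e.g. strftime_now), so treat this as a starting point rather than a complete harness.

from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("."))

# Chat templates commonly call raise_exception(); provide a stand-in for it.
def raise_exception(message):
    raise ValueError(message)

env.globals["raise_exception"] = raise_exception

template = env.get_template("chat_template_with_tools.jinja")
rendered = template.render(
    bos_token="<s>",
    eos_token="</s>",
    messages=[
        {"role": "system", "content": "You are AI."},
        {"role": "user", "content": "What's the weather like in San Francisco?"},
    ],
    tools=None,
    add_generation_prompt=True,
)
print(rendered)
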
Tool Calling Example
from openai import OpenAI
import json

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")


def get_weather(location: str, unit: str):
    return f"Weather {location} in {unit} is bad!"


def get_gold_price(currency: str = "USD"):
    return f"Getting the gold price in {currency} is enormous!"


tool_functions = {"get_weather": get_weather, "get_gold_price": get_gold_price}

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location", "unit"]
        }
    }
},
{
    "type": "function",
    "function": {
        "name": "get_gold_price",
        "description": "Get the current gold price in wanted currency (default to USD).",
        "parameters": {
            "type": "object",
            "properties": {
                "currency": {"type": "string", "description": "Currency code e.g. USD or EUR."}
            }
        }
    }
}]

response = client.chat.completions.create(
    model="uncensoredai/Mistral-Small-24B-Instruct-2501",
    messages=[{"role": "user", "content": "What's the weather like in San Francisco? And whats the current gold price?"}],
    temperature=0,
    extra_body={
        "skip_special_tokens": False
    },
    tools=tools,
    tool_choice="auto"
)

print(f"Function called: {response.choices[0]}")
tool_calls = response.choices[0].message.tool_calls

for index, tool_call in enumerate(tool_calls):
    call_response = tool_call.function
    print(f"{index}. Function called: {call_response.name}")
    print(f"Arguments: {call_response.arguments}")
    # Dispatch through the tool_functions mapping defined above.
    result = tool_functions[call_response.name](**json.loads(call_response.arguments))
    print(f"Result: {result}")
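
A possible follow-up, not part of the original example: feed the tool results back to the server so the model can compose a natural-language answer. This reuses client, tools, tool_functions, tool_calls, and response from above and follows the standard OpenAI tool-calling round trip.

# Hypothetical continuation: return the tool outputs to the model so it can
# answer in natural language.
followup_messages = [
    {"role": "user", "content": "What's the weather like in San Francisco? And whats the current gold price?"},
    response.choices[0].message,  # assistant turn carrying the tool calls
]
for tool_call in tool_calls:
    result = tool_functions[tool_call.function.name](**json.loads(tool_call.function.arguments))
    followup_messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    })

final = client.chat.completions.create(
    model="uncensoredai/Mistral-Small-24B-Instruct-2501",
    messages=followup_messages,
    temperature=0,
    extra_body={"skip_special_tokens": False},
    tools=tools,
    tool_choice="auto",
)
print(final.choices[0].message.content)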

Offline

from vllm import LLM
from vllm.sampling_params import SamplingParams

model_name = "mistralai/Mistral-Small-24B-Instruct-2501"

SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."

user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."

messages = [
    {
        "role": "system",
        "content": SYSTEM_PROMPT
    },
    {
        "role": "user",
        "content": user_prompt
    },
]

# note that running this model on GPU requires over 60 GB of GPU RAM
llm = LLM(model=model_name, tokenizer_mode="mistral", tensor_parallel_size=8)

sampling_params = SamplingParams(max_tokens=512, temperature=0.15)
outputs = llm.chat(messages, sampling_params=sampling_params)

print(outputs[0].outputs[0].text)
# Sure, here are five non-formal ways to say "See you later" in French:
#
# 1. À plus tard
# 2. À plus
# 3. Salut
# 4. À toute
# 5. Bisous
#
# ```
#  /\_/\
# ( o.o )
#  > ^ <
# ```

Transformers

If you want to use Hugging Face transformers to generate text, you can do something like this.

from transformers import pipeline
import torch

messages = [
    {"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."},
]
chatbot = pipeline("text-generation", model="mistralai/Mistral-Small-24B-Instruct-2501", max_new_tokens=256, torch_dtype=torch.bfloat16)
chatbot(messages)

Ollama

Ollama can run this model locally on macOS, Windows, and Linux.

ollama run mistral-small

4-bit quantization (aliased to default):

ollama run mistral-small:24b-instruct-2501-q4_K_M

8-bit quantization:

ollama run mistral-small:24b-instruct-2501-q8_0

FP16:

ollama run mistral-small:24b-instruct-2501-fp16
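
Ollama also exposes an OpenAI-compatible endpoint, so the same client code used above works locally. A minimal sketch, assuming Ollama is running on its default port 11434 and the q4_K_M tag has been pulled:

from openai import OpenAI

# Ollama ignores the API key, but the client requires one to be set.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="mistral-small:24b-instruct-2501-q4_K_M",
    messages=[{"role": "user", "content": "Give me 5 non-formal ways to say 'See you later' in French."}],
    temperature=0.15,
)
print(response.choices[0].message.content)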