
Uploaded model

  • Developed by: EpistemeAI
  • License: apache-2.0
  • Finetuned from model: unsloth/Mistral-Nemo-Base-2407-bnb-4bit
  • This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Fireball-MathMistral-Nemo-Base-2407

This model is fine-tuned to provide better math responses than Mistral-Nemo-Base-2407.

Training Dataset

Supervised fine-tuning on the meta-math/MetaMathQA dataset.
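
As a rough illustration of that setup, the following TRL sketch fine-tunes the 4-bit Unsloth base checkpoint on MetaMathQA. It is not the exact recipe used for this model: the prompt format, output directory, and default hyperparameters are assumptions, and the query/response field names are the dataset's published columns.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative sketch only -- not the exact training recipe used for this model.
dataset = load_dataset("meta-math/MetaMathQA", split="train")

# Flatten each MetaMathQA row ("query" and "response" columns) into a single text field.
def format_example(example):
    return {"text": f"Question: {example['query']}\nAnswer: {example['response']}"}

dataset = dataset.map(format_example)

trainer = SFTTrainer(
    model="unsloth/Mistral-Nemo-Base-2407-bnb-4bit",  # the base checkpoint named above
    train_dataset=dataset,
    args=SFTConfig(output_dir="fireball-mathmistral-sft"),  # output path is illustrative
)
trainer.train()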


Model Card for Mistral-Nemo-Base-2407

The Fireball-MathMistral-Nemo-Base-2407 Large Language Model (LLM) is a fine-tuned generative text model of 12B parameters; it significantly outperforms existing models of smaller or similar size.

For more details about this model, please refer to our release blog post.

Key features

  • Released under the Apache 2 License
  • Trained with a 128k context window
  • Trained on a large proportion of multilingual and code data
  • Drop-in replacement for Mistral 7B

Model Architecture

Mistral Nemo is a transformer model with the following architecture choices (they can also be read back from the model config, as sketched after this list):

  • Layers: 40
  • Dim: 5,120
  • Head dim: 128
  • Hidden dim: 14,336
  • Activation Function: SwiGLU
  • Number of heads: 32
  • Number of kv-heads: 8 (GQA)
  • Vocabulary size: 2**17 ~= 128k
  • Rotary embeddings (theta = 1M)
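
The same hyperparameters are exposed on the model's Hugging Face config. This small sketch assumes the attribute names of the MistralConfig schema (head_dim is only exposed in recent transformers versions):

from transformers import AutoConfig

config = AutoConfig.from_pretrained("EpistemeAI/Fireball-MathMistral-Nemo-Base-2407")
print(config.num_hidden_layers)    # layers
print(config.hidden_size)          # dim
print(config.head_dim)             # head dim
print(config.intermediate_size)    # hidden (feed-forward) dim
print(config.num_attention_heads)  # number of heads
print(config.num_key_value_heads)  # number of kv-heads (GQA)
print(config.vocab_size)           # vocabulary size
print(config.rope_theta)           # rotary embedding theta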

Demo

After installing mistral_inference, a mistral-demo CLI command should be available in your environment.
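
For example (a minimal sketch; the local weights directory is illustrative and assumes the model files have already been downloaded there):

pip install mistral_inference
# Point the demo at a local copy of the model weights (path is an example)
mistral-demo $HOME/mistral_models/Fireball-MathMistral-Nemo-Base-2407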

Transformers

NOTE: Until a new release has been made, you need to install transformers from source:

pip install git+https://github.com/huggingface/transformers.git

If you want to use Hugging Face transformers to generate text, you can do something like this:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EpistemeAI/Fireball-MathMistral-Nemo-Base-2407"

# Load the tokenizer and model weights from the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt and generate a short continuation
inputs = tokenizer("Hello my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Unlike previous Mistral models, Mistral Nemo requires lower temperatures; we recommend using a temperature of 0.3.
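
For example, building on the snippet above, a sampled generation with the recommended temperature could look like this (other sampling parameters are left at their defaults):

# Enable sampling so that the temperature setting takes effect
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))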

Note

Mistral-Nemo-Base-2407 is a pretrained base model and therefore does not have any moderation mechanisms.
