Uploaded model

  • Developed by: gerasmark
  • License: apache-2.0
  • Finetuned from model: mistralai/Ministral-8B-Instruct-2410

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
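
For reference, a minimal sketch of the kind of Unsloth + TRL fine-tuning workflow this implies. The dataset, LoRA settings, and training arguments below are illustrative assumptions, not the settings actually used for this model, and the exact SFTTrainer arguments vary across TRL versions:

```python
# Minimal sketch of Unsloth-accelerated fine-tuning with TRL's SFTTrainer.
# All hyperparameters and the placeholder dataset are assumptions for illustration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mistralai/Ministral-8B-Instruct-2410",  # base model listed on this card
    max_seq_length=2048,   # assumed training context length
    load_in_4bit=True,     # common Unsloth memory-saving setting; not confirmed here
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset with a single "text" column; substitute your own data.
dataset = Dataset.from_dict(
    {"text": ["### Instruction: Say hello.\n### Response: Hello!"]}
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        max_steps=60,
        logging_steps=10,
    ),
)
trainer.train()
```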

GGUF model details

  • Model size: 8.02B params
  • Architecture: llama
  • Quantization: 8-bit
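
A minimal sketch of loading this 8-bit GGUF for local inference with llama-cpp-python. The filename glob pattern and context size are assumptions, not values taken from this card; check the repo's file listing for the exact GGUF name:

```python
# Minimal sketch of running the 8-bit GGUF locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="gerasmark/ministral-8b-gguf-q8",
    filename="*q8_0.gguf",   # assumed naming convention; adjust to the actual file
    n_ctx=4096,              # context window; tune to your hardware
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```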
