Model Card for Minerva-7B-instruct-v1.0 in GGUF Format
Minerva is the first family of LLMs pretrained from scratch on Italian, developed by Sapienza NLP in the context of the Future Artificial Intelligence Research (FAIR) project, in collaboration with CINECA and with additional contributions from Babelscape and the CREATIVE PRIN Project. Notably, the Minerva models are truly open (data and model) Italian-English LLMs, with approximately half of the pretraining data consisting of Italian text. The full tech report is available at https://nlp.uniroma1.it/minerva/blog/2024/11/26/tech-report.
Description
This is the model card for the GGUF conversion of Minerva-7B-instruct-v1.0, a 7-billion-parameter model trained on almost 2.5 trillion tokens (1.14 trillion in Italian, 1.14 trillion in English, and 200 billion in code). This repository contains the model weights in float32 and float16 formats, as well as quantized versions in 8-bit, 6-bit, and 4-bit precision.
Important: This model is compatible with llama.cpp updated to at least commit 6fe624783166e7355cec915de0094e63cd3558eb (5 November 2024).
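As a minimal sketch of how the quantized files in this repository can be used, the commands below download one GGUF file from the Hub and run it interactively with llama.cpp. The exact GGUF filename is an assumption (check the repository's file list for the quantization variants actually provided), and a llama.cpp build at or after the commit noted above is required.

```shell
# Download one quantized GGUF file from the Hub.
# NOTE: the filename below is an assumption -- consult the repository's
# file list for the exact names of the 8-bit, 6-bit, and 4-bit variants.
huggingface-cli download sapienzanlp/Minerva-7B-instruct-v1.0-GGUF \
  Minerva-7B-instruct-v1.0.Q4_K_M.gguf --local-dir .

# Start an interactive conversation with llama.cpp's CLI
# (requires a build at or after commit 6fe6247, 5 November 2024).
llama-cli -m Minerva-7B-instruct-v1.0.Q4_K_M.gguf -cnv
```

Lower-bit variants trade some output quality for a smaller memory footprint; the 4-bit file is the most practical choice on consumer hardware, while the 8-bit file stays closest to the float16 weights.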
Model tree for sapienzanlp/Minerva-7B-instruct-v1.0-GGUF
Base model
sapienzanlp/Minerva-7B-base-v1.0