Pleias-Pico-GGUF

This model is the official GGUF version of [Pleias-Pico](https://huggingface.co./PleIAs/Pleias-Pico).

The conversion is unquantized and should yield the same generation quality as the original model.

- Format: GGUF
- Model size: 353M parameters
- Architecture: llama
- Precision: 16-bit (unquantized)
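
As a usage sketch (not part of the original card), the GGUF file can be loaded with `llama-cpp-python` after downloading it with `huggingface_hub`. The GGUF filename below is an assumption and should be checked against the repository's file list.

```python
# Minimal sketch: download the GGUF file and run a short completion.
# Requires: pip install llama-cpp-python huggingface_hub
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical filename -- verify the actual .gguf file name in the repo.
model_path = hf_hub_download(
    repo_id="PleIAs/Pleias-Pico-GGUF",
    filename="pleias-pico.gguf",
)

# Load the unquantized 16-bit GGUF model.
llm = Llama(model_path=model_path, n_ctx=2048)

# Generate a short continuation.
output = llm("Pleias-Pico is a small language model that", max_tokens=64)
print(output["choices"][0]["text"])
```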

