
SipánGPT 0.3 Llama 3.2 1B GGUF

  • Model fine-tuned to answer questions about the Señor de Sipán University of Lambayeque, Peru.

Testing the model


  • Trained on 50,000 conversations; the model may still produce hallucinations.
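
Running the model locally

The sketch below is one way to try the GGUF export with llama-cpp-python; it is a minimal example, not an official inference setup. The GGUF filename is a hypothetical placeholder; check the repository's file listing for the actual quantization names.

```python
# Minimal sketch: download one GGUF file from the repo and chat with it locally.
# The filename below is a hypothetical example -- replace it with a real file
# from ussipan/SipanGPT-0.3-Llama-3.2-1B-GGUF.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="ussipan/SipanGPT-0.3-Llama-3.2-1B-GGUF",
    filename="SipanGPT-0.3-Llama-3.2-1B.Q4_K_M.gguf",  # hypothetical filename
)

llm = Llama(model_path=model_path, n_ctx=2048)

response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "¿Qué carreras ofrece la Universidad Señor de Sipán?",
        }
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```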

Uploaded model

  • Developed by: ussipan
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3.2-1b-instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
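
For reference, the sketch below shows what an Unsloth + TRL fine-tuning run of this kind typically looks like, following the Unsloth notebook-style API with TRL's SFTTrainer. The dataset name, LoRA settings, and hyperparameters are illustrative assumptions and are not documented in this card.

```python
# Hedged sketch of an Unsloth + TRL SFT run (illustrative values only).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model named in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-1b-instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                      # assumed LoRA rank
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical dataset identifier -- the actual conversation dataset is linked
# from the repository page, not hard-coded here.
dataset = load_dataset("your-conversation-dataset", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes conversations rendered into a "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="sipangpt-0.3-sft",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
)
trainer.train()
```

After training, Unsloth can merge the LoRA adapters and export the model to GGUF, which is the format published in this repository.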



Made with ❤️ by Jhan Gómez P.
