Uploaded model

  • Developed by: sal076
  • License: Llama 3.1
  • Finetuned from model: unsloth/meta-llama-3.1-8b-bnb-4bit

This is a rough finetune made quickly as a proof of concept; it isn't meant to be a usable model.

Here is an updated, better version; use it instead:

Q4_K_M: https://huggingface.co./sal076/L3.1_RP_TEST3-Q4_K_M-GGUF

Q5_K_M: https://huggingface.co./sal076/L3.1_RP_TEST3-Q5_K_M-GGUF
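
As a minimal sketch of how one of the GGUF quants linked above could be loaded locally with llama-cpp-python (the repo id is taken from the Q4_K_M link; the GGUF filename is an assumption and may differ from what is actually published in that repo):

```python
# Minimal sketch: load the Q4_K_M GGUF with llama-cpp-python.
# Assumes `huggingface_hub` and `llama-cpp-python` are installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized file from the repo linked above.
# NOTE: the filename below is assumed; check the repo's file list.
model_path = hf_hub_download(
    repo_id="sal076/L3.1_RP_TEST3-Q4_K_M-GGUF",
    filename="l3.1_rp_test3-q4_k_m.gguf",
)

# Load the model and run a short test completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write a short in-character greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```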

GGUF

  • Model size: 8.03B params
  • Architecture: llama
  • Quantizations: 4-bit, 5-bit, 8-bit, 16-bit