---
language:
- en
inference: false
fine-tuning: false
tags:
- llama-cpp
- Llama-3.1-Nemotron-70B-Instruct-HF
- gguf
- Q3_K_S
- 70b
- 3-bit
- Nemotron
- nvidia
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
datasets:
- nvidia/HelpSteer2
base_model: meta-llama/Llama-3.1-70B-Instruct
pipeline_tag: text-generation
library_name: transformers
---
# roleplaiapp/Llama-3.1-Nemotron-70B-Instruct-HF-Q3_K_S-GGUF

**Repo:** `roleplaiapp/Llama-3.1-Nemotron-70B-Instruct-HF-Q3_K_S-GGUF`
**Original Model:** `Llama-3.1-Nemotron-70B-Instruct-HF`
**Organization:** `nvidia`
**Quantized File:** `llama-3.1-nemotron-70b-instruct-hf-q3_k_s.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_S`
**Use Imatrix:** `False`
**Split Model:** `False`
## Overview
This is a GGUF Q3_K_S quantized version of [Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co./nvidia/Llama-3.1-Nemotron-70B-Instruct-HF).
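As a quick sanity check of how the quantized file can be loaded, here is a minimal sketch using the llama-cpp-python bindings; the local file path, context size, and GPU offload settings are placeholder assumptions and not something this repo prescribes.

```python
# Minimal sketch: loading the Q3_K_S GGUF file with llama-cpp-python.
# Assumes the quantized file has already been downloaded from this repo
# and that llama-cpp-python is installed (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.1-nemotron-70b-instruct-hf-q3_k_s.gguf",  # file from this repo
    n_ctx=4096,        # placeholder context window
    n_gpu_layers=-1,   # offload all layers to GPU if VRAM allows; use 0 for CPU-only
)

# Chat-style generation using the model's built-in chat template.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```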
## Quantized By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to work quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)