---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- DeepSeek-R1-Distill-Llama-70B
- gguf
- Q8_0
- 70b
- llama
- deepseek-r1
- deepseek-ai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q8_0-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q8_0-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-70B`
**Organization:** `deepseek-ai`
**Quantized File:** `deepseek-r1-distill-llama-70b-q8_0.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q8_0`
**Use Imatrix:** `False`
**Split Model:** `True`

## Overview
This is a GGUF Q8_0 quantized version of [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co./deepseek-ai/DeepSeek-R1-Distill-Llama-70B).

## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)
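
## Usage (sketch)
As a minimal sketch of how a Q8_0 GGUF like this one might be loaded locally, assuming the `llama-cpp-python` bindings are installed and the split shards have been downloaded from this repo. The file path and generation parameters below are illustrative, not part of this repo:

```python
# Minimal sketch: loading the Q8_0 GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the split shards have been
# downloaded. Since this is a split model, point model_path at the first
# shard and llama.cpp will locate the remaining parts automatically; actual
# shard filenames (e.g. ...-00001-of-0000N.gguf) may differ from the base
# quantized file name shown here.
from llama_cpp import Llama

llm = Llama(
    model_path="./deepseek-r1-distill-llama-70b-q8_0.gguf",  # first shard of the split model
    n_ctx=4096,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU if llama.cpp was built with GPU support
)

# Chat-style generation using the model's built-in chat template.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Q8_0 quantization in one paragraph."}]
)
print(out["choices"][0]["message"]["content"])
```

Note that a Q8_0 quantization of a 70B model is large; full GPU offload requires substantial VRAM, and partial offload (a smaller `n_gpu_layers`) is the usual fallback.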