---
library_name: transformers
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
tags:
- llama-cpp
- DeepSeek-R1-Distill-Llama-70B
- gguf
- Q6_K
- 70b
- llama
- deepseek-r1
- deepseek-ai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q6_K-GGUF
**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q6_K-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-70B`
**Organization:** `deepseek-ai`
**Quantized File:** `deepseek-r1-distill-llama-70b-q6_k.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q6_K`
**Use Imatrix:** `False`
**Split Model:** `True`
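
Because **Split Model** is `True`, the weights may be stored as several GGUF shards rather than the single filename listed above (split files are typically named like `...-00001-of-0000N.gguf`, though that pattern is an assumption, not confirmed by this card). A minimal sketch for listing the actual shard filenames before committing to a large download, using `huggingface_hub`:

```python
from huggingface_hub import list_repo_files

# List every GGUF file in the repo so you know the real shard names.
repo_id = "roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q6_K-GGUF"
shards = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(shards)
```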
## Overview
This is a GGUF Q6_K quantized version of DeepSeek-R1-Distill-Llama-70B.
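
A minimal usage sketch with `llama-cpp-python` (`pip install llama-cpp-python huggingface_hub`). The filename comes from the card; since the model is split, `model_path` may need to point at the first shard instead, and llama.cpp will pick up the remaining parts from the same directory. The context size and GPU-offload settings below are illustrative assumptions, not recommendations from the quantizer.

```python
from huggingface_hub import snapshot_download
from llama_cpp import Llama

# Download every GGUF file in the repo (all shards, if the model is split).
local_dir = snapshot_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q6_K-GGUF",
    allow_patterns=["*.gguf"],
)

# Point llama.cpp at the quantized file; for a split model, pass the
# first shard (e.g. ...-00001-of-0000N.gguf) and the loader finds the rest.
llm = Llama(
    model_path=f"{local_dir}/deepseek-r1-distill-llama-70b-q6_k.gguf",
    n_ctx=4096,       # context window; raise it if memory allows
    n_gpu_layers=-1,  # offload all layers to GPU; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is 12 * 17? Think step by step."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```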
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai