R1 Reproduction Works
A collection of open-source works to reproduce DeepSeek R1.
Bitsandbytes 4-bit (NF4) quantization of https://huggingface.co./cognitivecomputations/Dolphin3.0-R1-Mistral-24B.
See https://huggingface.co./blog/4bit-transformers-bitsandbytes for background on 4-bit quantization with bitsandbytes.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch
# Define the 4-bit configuration
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit precision
    bnb_4bit_quant_type="nf4",              # NormalFloat4 (NF4) quantization
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # run compute (matmuls) in bfloat16
)
# Load the pre-trained model with the 4-bit quantization configuration
model = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/Dolphin3.0-R1-Mistral-24B",
    quantization_config=nf4_config,
)
# Load the tokenizer associated with the model
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/Dolphin3.0-R1-Mistral-24B")
# Push the model and tokenizer to the Hugging Face hub
model.push_to_hub("onekq-ai/Dolphin3.0-R1-Mistral-24B-bnb-4bit", token=True)
tokenizer.push_to_hub("onekq-ai/Dolphin3.0-R1-Mistral-24B-bnb-4bit", token=True)
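Once pushed, the quantized checkpoint can be loaded straight from the Hub: the bitsandbytes settings are saved in the model config, so no BitsAndBytesConfig is needed at load time. Below is a minimal loading and inference sketch, assuming bitsandbytes is installed, a CUDA GPU is available for device_map="auto", and an illustrative prompt and generation length.
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the 4-bit checkpoint; the stored quantization config is applied automatically
model = AutoModelForCausalLM.from_pretrained(
    "onekq-ai/Dolphin3.0-R1-Mistral-24B-bnb-4bit",
    device_map="auto",  # place the quantized weights on the available GPU(s)
)
tokenizer = AutoTokenizer.from_pretrained("onekq-ai/Dolphin3.0-R1-Mistral-24B-bnb-4bit")
# Illustrative prompt: run a short generation to sanity-check the quantized weights
messages = [{"role": "user", "content": "Explain NF4 quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))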
Base model: mistralai/Mistral-Small-24B-Base-2501