
Quantization made by Richard Erkhov.

Github | Discord | Request more models

r-3b-tesla - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| r-3b-tesla.Q2_K.gguf | Q2_K | 1.19GB |
| r-3b-tesla.IQ3_XS.gguf | IQ3_XS | 1.3GB |
| r-3b-tesla.IQ3_S.gguf | IQ3_S | 1.36GB |
| r-3b-tesla.Q3_K_S.gguf | Q3_K_S | 1.35GB |
| r-3b-tesla.IQ3_M.gguf | IQ3_M | 1.39GB |
| r-3b-tesla.Q3_K.gguf | Q3_K | 1.48GB |
| r-3b-tesla.Q3_K_M.gguf | Q3_K_M | 1.48GB |
| r-3b-tesla.Q3_K_L.gguf | Q3_K_L | 1.59GB |
| r-3b-tesla.IQ4_XS.gguf | IQ4_XS | 1.63GB |
| r-3b-tesla.Q4_0.gguf | Q4_0 | 1.7GB |
| r-3b-tesla.IQ4_NL.gguf | IQ4_NL | 1.71GB |
| r-3b-tesla.Q4_K_S.gguf | Q4_K_S | 1.71GB |
| r-3b-tesla.Q4_K.gguf | Q4_K | 1.8GB |
| r-3b-tesla.Q4_K_M.gguf | Q4_K_M | 1.8GB |
| r-3b-tesla.Q4_1.gguf | Q4_1 | 1.86GB |
| r-3b-tesla.Q5_0.gguf | Q5_0 | 2.02GB |
| r-3b-tesla.Q5_K_S.gguf | Q5_K_S | 2.02GB |
| r-3b-tesla.Q5_K.gguf | Q5_K | 2.07GB |
| r-3b-tesla.Q5_K_M.gguf | Q5_K_M | 2.07GB |
| r-3b-tesla.Q5_1.gguf | Q5_1 | 2.18GB |
| r-3b-tesla.Q6_K.gguf | Q6_K | 2.36GB |
| r-3b-tesla.Q8_0.gguf | Q8_0 | 3.06GB |
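
These GGUF files can be run with llama.cpp or any compatible runtime. Below is a minimal sketch, assuming llama-cpp-python and huggingface_hub are installed; the repository id and the chosen quantization file are placeholders that should be replaced with the actual repo name and the .gguf file you want.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Assumption: replace with the actual quantization repository id on the Hub.
repo_id = "RichardErkhov/r-3b-tesla-gguf"
filename = "r-3b-tesla.Q4_K_M.gguf"  # pick any quant from the table above

# Download the chosen GGUF file and load it with llama-cpp-python.
model_path = hf_hub_download(repo_id=repo_id, filename=filename)
llm = Llama(model_path=model_path, n_ctx=2048)

# Chat-style generation using the model's built-in chat template.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "hi"}]
)
print(result["choices"][0]["message"]["content"])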

Original model description:

tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: Qwen/Qwen2.5-3B-Instruct
widget:
- messages:
  - role: user
    content: What is your favorite condiment?
license: other

Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit the AutoTrain documentation at https://huggingface.co./docs/autotrain.

Usage


from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
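
Note that generate() with default settings may cut longer replies short; passing an explicit budget, e.g. model.generate(input_ids.to(model.device), max_new_tokens=256), is a common adjustment.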
Model size: 3.09B params
Architecture: qwen2