Model Card for smollm-smtalk-v1

This model is a fine-tuned version of HuggingFaceTB/SmolLM2-135M. It has been trained using TRL.

Quick start

from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Load the fine-tuned model as a chat-style text-generation pipeline (use device="cpu" if no GPU is available).
generator = pipeline("text-generation", model="sfarrukh/smollm-smtalk-v1", device="cuda")
# Pass the conversation as a list of messages; return_full_text=False drops the prompt from the output.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
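The same generation can be done without the pipeline helper by loading the tokenizer and model directly. The snippet below is a minimal, equivalent sketch; it assumes the tokenizer ships a chat template (which the pipeline call above also relies on) and runs on CPU by default.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sfarrukh/smollm-smtalk-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
messages = [{"role": "user", "content": question}]
# Format the conversation with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))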

Training procedure


This model was trained with supervised fine-tuning (SFT).
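A minimal sketch of how such an SFT run can be set up with TRL's SFTTrainer is shown below. The dataset and all hyperparameters are illustrative assumptions only; this card does not document the actual training data or configuration.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative chat-style dataset with a "messages" column; the dataset actually
# used to train this model is not specified in this card.
dataset = load_dataset("HuggingFaceTB/smoltalk", "everyday-conversations", split="train")

# Assumed hyperparameters; the real values were not published.
training_args = SFTConfig(
    output_dir="smollm-smtalk-v1",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    max_seq_length=512,
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # base model named in this card
    args=training_args,
    train_dataset=dataset,
)
trainer.train()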

Framework versions

  • TRL: 0.13.0
  • Transformers: 4.48.1
  • PyTorch: 2.5.1+cu121
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0