
SmolLM-1.7B fine-tuned for history Q&A generation

This model is a fine-tuned version of HuggingFaceTB/SmolLM-1.7B, trained on a history question-answer dataset using LoRA (Low-Rank Adaptation).

Model description

This model is designed to generate multiple-choice questions, answers, and explanations based on historical text inputs.
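
A minimal inference sketch is shown below. The adapter repository id and the prompt format are assumptions, since the card does not specify either; if the LoRA weights were merged into the base model, a single `AutoModelForCausalLM.from_pretrained` call on the model repo would suffice instead.

```python
# Minimal inference sketch, assuming the repo contains a LoRA adapter.
# "your-username/smollm-1.7b-history-qa" is a placeholder id; replace it
# with this model's actual repository id.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-1.7B")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM-1.7B")
model = PeftModel.from_pretrained(base, "your-username/smollm-1.7b-history-qa")

# The prompt wording is illustrative; the card does not document a template.
prompt = (
    "Generate a multiple-choice question with four options, the correct "
    "answer, and a short explanation, based on the following text:\n"
    "The Treaty of Versailles was signed in 1919, formally ending World War I."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```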

Intended uses & limitations

This model is intended for educational purposes and to assist in creating history-related quiz materials. As with any generative model, outputs may contain factual errors and should be reviewed before being used in teaching materials.

Training and evaluation data

The model was trained on a dataset derived from "ambrosfitz/multiple-choice-just-history".
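
The dataset can be loaded from the Hugging Face Hub as sketched below. Note the card says the training data was *derived from* this dataset, so the exact preprocessing applied before fine-tuning is not reproduced here.

```python
# Sketch of loading the source dataset from the Hub.
from datasets import load_dataset

ds = load_dataset("ambrosfitz/multiple-choice-just-history")
print(ds)                 # inspect available splits and columns
print(ds["train"][0])     # assumes a "train" split exists
```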

Training procedure

The model was fine-tuned using LoRA with the following hyperparameters (a configuration sketch follows the list):

  • Number of epochs: 2
  • Batch size: 1
  • Learning rate: 2e-5
  • Gradient accumulation steps: 16
  • LoRA rank: 8
  • LoRA alpha: 32
  • LoRA dropout: 0.1
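
The following sketch reproduces these hyperparameters with peft and transformers. The `target_modules` choice and the `output_dir` are assumptions; the card does not state which modules were adapted.

```python
# Configuration sketch matching the hyperparameters listed above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM-1.7B")

lora_config = LoraConfig(
    r=8,                                   # LoRA rank
    lora_alpha=32,                         # LoRA alpha
    lora_dropout=0.1,                      # LoRA dropout
    target_modules=["q_proj", "v_proj"],   # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="smollm-history-qa",        # placeholder
    num_train_epochs=2,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
)
```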

Results

Test set performance after 2 epochs:

  • Eval loss: 0.3667
  • Eval runtime: 208.93 s
  • Eval samples per second: 9.572
  • Eval steps per second: 9.572
