# SmolLM-1.7B fine-tuned on History Q&A Generation
This model is a fine-tuned version of [HuggingFaceTB/SmolLM-1.7B](https://huggingface.co./HuggingFaceTB/SmolLM-1.7B) on a history question-answer dataset using LoRA.
## Model description
This model is designed to generate multiple-choice questions, answers, and explanations based on historical text inputs.
## Intended uses & limitations
This model is intended for educational purposes and to assist in creating history-related quiz materials.
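A minimal usage sketch for generating quiz material with this model. The adapter repo id and the prompt wording below are placeholders, not the exact template used in training; loading the adapter this way assumes `peft` is installed alongside `transformers`.

```python
# Hypothetical usage sketch: the adapter repo id and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "HuggingFaceTB/SmolLM-1.7B"
adapter_id = "your-username/smollm-1.7b-history-qa"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)
model.load_adapter(adapter_id)  # requires peft to be installed

prompt = (
    "Generate a multiple-choice question, the answer, and an explanation "
    "for the following passage:\n"
    "The Treaty of Versailles was signed in 1919."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```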
## Training and evaluation data
The model was trained on a dataset derived from `ambrosfitz/multiple-choice-just-history` on the Hugging Face Hub.
## Training procedure
The model was fine-tuned using LoRA with the following hyperparameters:
- Number of epochs: 2
- Batch size: 1
- Learning rate: 2e-5
- Gradient accumulation steps: 16
- LoRA rank: 8
- LoRA alpha: 32
- LoRA dropout: 0.1
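The LoRA hyperparameters above can be expressed as a `peft` configuration. This is a sketch: `target_modules` is an assumption (typical attention projections), since the card does not state which modules were adapted.

```python
# Sketch of the LoRA setup described above; target_modules is assumed.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,                # LoRA rank
    lora_alpha=32,      # LoRA scaling factor
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # assumption, not from the card
    task_type="CAUSAL_LM",
)

# With a per-step batch size of 1 and 16 gradient accumulation steps,
# the effective batch size is 1 * 16 = 16.
```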
## Results
Test set performance after 2 epochs:

- Eval loss: 0.3667
- Eval runtime: 208.93 s (9.57 samples/s)