---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
library_name: peft
---

## Overview

DerivationGeneration8B is a QLoRA adapter fine-tuned from a quantised Llama-3.1-8B checkpoint on 15K synthetic mathematical derivations written in LaTeX (each containing 4 to 10 equations), using a custom early-stopping script with ROUGE as the validation metric. The LoRA was trained for 6 epochs, and the approach outperforms [MathT5](https://huggingface.co./jmeadows17/MathT5-large) in both the in-distribution and perturbed evaluation settings presented in the [related work](https://arxiv.org/abs/2307.09998).
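
The exact training configuration lives in the repository's scripts; the sketch below only illustrates what a QLoRA setup of this kind looks like with `peft`. The rank, alpha, target modules, and dropout here are illustrative assumptions, not the values used for this adapter.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the 4-bit quantised base checkpoint (requires bitsandbytes).
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B-bnb-4bit",
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Hypothetical LoRA hyperparameters -- NOT the values used for this model.
config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```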

## How to use

A notebook for inference is available [here](https://github.com/jmeadows17/deriving-equations-with-LLMs/blob/main/llama_evaluation.ipynb). Training scripts are also available in the repository.
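
For a quick start outside the notebook, a minimal loading sketch with `peft` is shown below; the adapter repo id is assumed to be this repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/Meta-Llama-3.1-8B-bnb-4bit"
adapter_id = "jmeadows17/DerivationGeneration8B"  # assumed id of this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned LoRA adapter to the quantised base model.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()
```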

## Example prompt

```python
prompt = "Given \\cos{(q)} = \\theta{(q)}, then derive - \\sin{(q)} = \\frac{d}{d q} \\theta{(q)}, then obtain (- \\sin{(q)})^{q} (\\frac{d}{d q} \\cos{(q)})^{q} = (- \\sin{(q)})^{2 q}"
```
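
Continuing from the loading sketch above, generation might look like the following; the generation settings are assumptions, not tuned values.

```python
import torch

# Tokenise the derivation prompt and run greedy decoding.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```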

### Framework versions

- PEFT 0.12.0