A full finetune of Llama 2 7B on my Alpaca-transformed CoEdIT dataset, trained for three epochs on a single A100 80GB GPU.
The intent was to create a Llama 2 model that specializes in grammar correction. Results may vary.
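For reference, here is a minimal sketch of a training run along these lines using the Hugging Face `transformers` Trainer. Only the base model, the three epochs, and the single-GPU setup come from the description above; the dataset file name, field names, and all other hyperparameters are assumptions, not the exact settings used for this model.

```python
# Full-finetune sketch; hyperparameters marked "assumed" are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"      # base model being finetuned
dataset_path = "coedit_alpaca.json"          # hypothetical local dataset file

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

def tokenize(example):
    # Assumes the Alpaca-transformed records expose "instruction" and "output".
    text = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}{tokenizer.eos_token}"
    )
    return tokenizer(text, truncation=True, max_length=1024)

dataset = load_dataset("json", data_files=dataset_path)["train"].map(tokenize)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama2-7b-coedit",
        num_train_epochs=3,                  # three epochs, as stated above
        per_device_train_batch_size=4,       # assumed
        gradient_accumulation_steps=8,       # assumed
        learning_rate=2e-5,                  # assumed
        bf16=True,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```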
Prompt Format
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Remove all grammatical errors from this text: <insert text here>

### Response:
```
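A minimal inference sketch using this prompt format with `transformers`. The model id `llama2-7b-coedit` is a placeholder (substitute the actual repo name), and the example sentence and generation settings are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llama2-7b-coedit"  # placeholder; use the actual model repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

text = "She no went to the market yesterday."
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    f"Remove all grammatical errors from this text: {text}\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=False)

# Strip the prompt tokens and print only the generated correction.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```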