This model is part of the GrammarCorrector tool.

The article "FlanT5 from scratch for the grammar correction tool" describes how this model was trained:

FlanT5 was trained on the JFLEG dataset. The primary objective of the experiment was to develop a highly effective tool using relatively small models, minimal datasets, and constrained computational resources.

To accomplish this goal, we implemented two key strategies:

Downloads last month: 14,243
Inference Providers
This model is not currently available via any of the supported third-party Inference Providers, and the HF Inference API does not support PEFT models with the text2text-generation pipeline type.

Model tree for akhmat-s/t5-large-quant-grammar-corrector
Adapter: this model

Dataset used to train akhmat-s/t5-large-quant-grammar-corrector: JFLEG