
bart-base-grammar-synthesis


This model is a fine-tuned version of facebook/bart-base on an expanded version of the JFLEG dataset.

You can find other grammar-synthesis models by searching for the grammar-synthesis tag.

Basic Usage Example

Installation

First, make sure you have the transformers package installed. You can install it using pip:

pip install -U transformers

Usage

from transformers import pipeline

# Initialize a text2text-generation pipeline for grammar correction
corrector = pipeline("text2text-generation", "pszemraj/bart-base-grammar-synthesis")

# Example text to correct
raw_text = "The toweris 324 met (1,063 ft) tall, about height as .An 81-storey building, and biggest longest structure paris. Is square, measuring 125 metres (410 ft) on each side. During its constructiothe eiffel tower surpassed the washington monument to become the tallest man-made structure in the world, a title it held for 41 yearsuntilthe chryslerbuilding in new york city was finished in 1930. It was the first structure to goat a height of 300 metres. Due 2 the addition ofa brdcasting aerial at the t0pp of the twr in 1957, it now taller than  chrysler building 5.2 metres (17 ft). Exxxcluding transmitters,  eiffel tower is  2ndd tallest ree-standing structure in france after millau viaduct."

# Correct the text and extract the generated string
corrected_text = corrector(raw_text)[0]["generated_text"]

# Print the corrected text
print(corrected_text)

This example demonstrates how to use a text2text-generation pipeline to correct the grammar of a given text. The corrector pipeline is initialized with the "pszemraj/bart-base-grammar-synthesis" model, takes the raw text as input, and returns the corrected text. Make sure the dependencies above are installed before running the code.
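For inputs longer than the model's context window, a common approach is to split the text into sentence-sized chunks, correct each chunk, and rejoin the results. The sketch below is a minimal, illustrative chunker (the `max_chars` limit and the regex-based sentence splitting are assumptions, not part of this model card):

```python
import re

def chunk_text(text: str, max_chars: int = 256) -> list[str]:
    """Greedily pack sentences into chunks of at most max_chars characters.

    Hypothetical helper: keeps each chunk within a rough character budget
    so it fits comfortably in the model's context window.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the budget
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Each chunk would then be corrected and the outputs joined, e.g.:
# corrected = " ".join(r["generated_text"] for r in corrector(chunk_text(raw_text)))
```

This is only a sketch; for production use, a proper sentence tokenizer (e.g. from nltk) would be more robust than the regex split.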

Intended uses & limitations

  • robust grammar correction
  • the model is licensed cc-by-nc-sa-4.0 because it was trained on an augmented version of the JFLEG dataset

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.02
  • num_epochs: 3.0
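The reported total_train_batch_size follows directly from the per-device batch size and the gradient accumulation steps; a minimal check of that arithmetic:

```python
# Values copied from the hyperparameter list above
train_batch_size = 8             # per-device batch size
gradient_accumulation_steps = 16

# With gradient accumulation, each optimizer update effectively sees
# train_batch_size * gradient_accumulation_steps examples (per GPU group)
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 128, matching the reported total_train_batch_size
```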
Model size: 139M params (Safetensors, F32)
