Model description

This model is a fine-tuned version of bert-base-uncased for classifying toxic comments.
It is fine-tuned with adversarial training to improve robustness against textual adversarial attacks.
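
The card does not include the training script; conceptually, adversarial training here means fine-tuning on clean comments mixed with adversarially perturbed copies of them. A minimal sketch using the Hugging Face Trainer, where the CSV file names, column names, and hyperparameters are assumptions:

from datasets import load_dataset
from transformers import (BertForSequenceClassification, BertTokenizer,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Hypothetical files: the clean Jigsaw comments plus pre-generated adversarial
# rewrites of them, each with "text" and "label" columns.
data = load_dataset("csv", data_files={"train": ["clean_train.csv", "adversarial_train.csv"]})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = data["train"].map(tokenize, batched=True)

args = TrainingArguments(output_dir="robust-bert-jigsaw", num_train_epochs=3,
                         per_device_train_batch_size=32, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()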

How to use

You can use the model with the following code:

from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub.
model_path = "JiaqiLee/robust-bert-jigsaw"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2)

# Classify a comment with a text-classification pipeline.
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("You're a fucking nerd."))
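
The pipeline returns a list with one dictionary per input, each holding a label and a confidence score, e.g. [{'label': 'LABEL_1', 'score': 0.98}] (the score shown is illustrative); how LABEL_0 and LABEL_1 map to non-toxic and toxic follows the id2label mapping in the model config.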

Training data

The training data comes from the Kaggle Jigsaw Toxic Comment Classification Challenge. We use 90% of the train.csv data to train the model.
We augment the original training data with adversarial examples generated by the PWWS, TextBugger, and TextFooler attacks.
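
The card does not name the attack toolkit; all three recipes are implemented in the TextAttack library, so the augmentation step may have looked roughly like the sketch below, where the victim model and the clean_examples list are assumptions:

from textattack.attack_recipes import PWWSRen2019, TextBuggerLi2018, TextFoolerJin2019
from textattack.models.wrappers import HuggingFaceModelWrapper
from transformers import BertForSequenceClassification, BertTokenizer

# Wrap a baseline toxicity classifier as the victim model (hypothetical choice).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Hypothetical (comment, label) pairs drawn from train.csv.
clean_examples = [("You're a fucking nerd.", 1)]

adversarial_examples = []
for recipe in (PWWSRen2019, TextBuggerLi2018, TextFoolerJin2019):
    attack = recipe.build(wrapper)
    for text, label in clean_examples:
        # A failed attack returns the original text unchanged.
        result = attack.attack(text, label)
        adversarial_examples.append((result.perturbed_text(), label))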

Evaluation results

The model achieves 0.95 AUC on a held-out test set of 1,500 rows.
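
The evaluation script is not part of the card; assuming AUC means ROC AUC over the predicted toxic-class probability (and that class index 1 is toxic), it can be reproduced along these lines:

import torch
from sklearn.metrics import roc_auc_score
from transformers import BertForSequenceClassification, BertTokenizer

model_path = "JiaqiLee/robust-bert-jigsaw"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path)
model.eval()

# Hypothetical held-out (comment, label) pairs.
test_examples = [("Have a great day!", 0), ("You're a fucking nerd.", 1)]

scores, labels = [], []
with torch.no_grad():
    for text, label in test_examples:
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        probs = model(**inputs).logits.softmax(dim=-1)
        scores.append(probs[0, 1].item())  # assumed toxic-class probability
        labels.append(label)

print(roc_auc_score(labels, scores))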
