Update README.md
README.md
CHANGED
````diff
@@ -23,12 +23,12 @@ from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline
 model_path = "JiaqiLee/robust-bert-yelp"
 tokenizer = BertTokenizer.from_pretrained(model_path)
 model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2)
-pipeline =
+pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
 print(pipeline("Definitely a greasy spoon! Always packed here and always a wait but worth it."))
 ```
 
 ## Training data
-The training data comes Huggingface [yelp polarity dataset](https://huggingface.co/datasets/yelp_polarity). We use 90% of the `train.csv` data to train the model. \
+The training data comes from Huggingface [yelp polarity dataset](https://huggingface.co/datasets/yelp_polarity). We use 90% of the `train.csv` data to train the model. \
 We augment original training data with adversarial examples generated by PWWS, TextBugger and TextFooler.
 
 ## Evaluation results
````
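For readers who just want to run the updated snippet, the complete example after this change reads as below. The import line is cut off in the hunk header above, so its ending (`TextClassificationPipeline`) is inferred from the added `+` line.

```python
from transformers import BertForSequenceClassification, BertTokenizer, TextClassificationPipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
model_path = "JiaqiLee/robust-bert-yelp"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2)

# Wrap model and tokenizer in a text-classification pipeline and score a sample review
pipeline = TextClassificationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("Definitely a greasy spoon! Always packed here and always a wait but worth it."))
```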
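The card names PWWS, TextBugger and TextFooler but does not say which tooling produced the adversarial examples. All three attacks are available as recipes in the TextAttack library, so one possible (unofficial) sketch of the augmentation step could look like the following; the choice of TextAttack, the victim checkpoint, the example count and the output paths are all illustrative assumptions, not the authors' documented setup.

```python
# Hedged sketch of adversarial data generation with TextAttack (assumed tooling,
# not confirmed by the model card).
import textattack
from textattack.attack_recipes import PWWSRen2019, TextBuggerLi2018, TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper
from transformers import BertForSequenceClassification, BertTokenizer

# Victim model: the published checkpoint is used here only for illustration; during
# training one would attack the model actually being hardened.
model_path = "JiaqiLee/robust-bert-yelp"
tokenizer = BertTokenizer.from_pretrained(model_path)
model = BertForSequenceClassification.from_pretrained(model_path, num_labels=2)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# The Yelp polarity training split referenced in the README
dataset = HuggingFaceDataset("yelp_polarity", split="train")

# Run each of the three named attack recipes and log the perturbed texts to CSV
for recipe in (PWWSRen2019, TextBuggerLi2018, TextFoolerJin2019):
    attack = recipe.build(wrapper)
    # num_examples and the CSV path are placeholder values for illustration.
    attack_args = textattack.AttackArgs(num_examples=100,
                                        log_to_csv=f"adv_{recipe.__name__}.csv")
    textattack.Attacker(attack, dataset, attack_args).attack_dataset()
```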