
## TextAttack Model Card

This `roberta-base` model was fine-tuned for sequence classification using TextAttack 
and the rotten_tomatoes dataset loaded with the `nlp` library. The model was fine-tuned 
for 10 epochs with a batch size of 64, a learning 
rate of 2e-05, and a maximum sequence length of 128. 
Since this is a classification task, the model was trained with a cross-entropy loss function. 
The best score the model achieved on this task was an eval set accuracy of 0.9034 
(0.9033771106941839), reached after 2 epochs.

For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack).
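
As a usage sketch (not part of the original card), the fine-tuned checkpoint can be loaded for inference with the Hugging Face `transformers` library. The repo id `textattack/roberta-base-rotten-tomatoes` is taken from this page; the label mapping (0 = negative, 1 = positive) follows the rotten_tomatoes dataset convention and is an assumption, since it may not be stored in the model config.

```python
# Inference sketch for the fine-tuned checkpoint.
# Assumptions: repo id from this page; label mapping 0=negative, 1=positive
# (rotten_tomatoes convention), which may not be recorded in the config.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "textattack/roberta-base-rotten-tomatoes"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

text = "A gripping, beautifully shot film."
# Truncate to the same maximum sequence length used during fine-tuning (128).
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(pred)  # 0 = negative, 1 = positive (assumed mapping)
```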