---
language: su
tags:
  - sundanese-roberta-base-emotion-classifier
license: mit
widget:
  - text: "Wah, éta gélo, keren pisan!"
---

## Sundanese RoBERTa Base Emotion Classifier

Sundanese RoBERTa Base Emotion Classifier is an emotion-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Sundanese RoBERTa Base](https://hf.co/w11wo/sundanese-roberta-base) model, which was then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset) consisting of Sundanese tweets. 10% of the dataset was held out for evaluation purposes. After training, the model achieved an evaluation accuracy of 98.41% and an F1-macro of 98.43%.

Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.

## Model

| Model                                        | #params | Arch.        | Training/Validation data (text) |
| -------------------------------------------- | ------- | ------------ | ------------------------------- |
| `sundanese-roberta-base-emotion-classifier`  | 124M    | RoBERTa Base | Sundanese Twitter dataset       |

## Evaluation Results

The model was trained for 10 epochs and the best model was loaded at the end.

| Epoch | Training Loss | Validation Loss | Accuracy | F1       | Precision | Recall   |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1     | 0.801800      | 0.293695        | 0.900794 | 0.899048 | 0.903466  | 0.900406 |
| 2     | 0.208700      | 0.185291        | 0.936508 | 0.935520 | 0.939460  | 0.935540 |
| 3     | 0.089700      | 0.150287        | 0.956349 | 0.956569 | 0.956500  | 0.958612 |
| 4     | 0.025600      | 0.130889        | 0.972222 | 0.972865 | 0.973029  | 0.973184 |
| 5     | 0.002200      | 0.100031        | 0.980159 | 0.980430 | 0.980430  | 0.980430 |
| 6     | 0.001300      | 0.104971        | 0.980159 | 0.980430 | 0.980430  | 0.980430 |
| 7     | 0.000600      | 0.107744        | 0.980159 | 0.980174 | 0.980814  | 0.979743 |
| 8     | 0.000500      | 0.102327        | 0.980159 | 0.980171 | 0.979970  | 0.980430 |
| 9     | 0.000500      | 0.101935        | 0.984127 | 0.984376 | 0.984073  | 0.984741 |
| 10    | 0.000400      | 0.105965        | 0.984127 | 0.984142 | 0.983720  | 0.984741 |

## How to Use

### As Text Classifier

```python
from transformers import pipeline

pretrained_name = "sundanese-roberta-base-emotion-classifier"

nlp = pipeline(
    "sentiment-analysis",
    model=pretrained_name,
    tokenizer=pretrained_name
)

nlp("Wah, éta gélo, keren pisan!")
```
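If you prefer to work with the model objects directly rather than through the `pipeline` helper, the snippet below is a minimal sketch of the equivalent explicit workflow. It assumes the checkpoint name from the example above resolves to a Hub repository whose config stores the emotion label mapping (`id2label`); adjust the identifier to the full repository path if needed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint identifier, mirroring the pipeline example above;
# replace with the full Hub repository path if loading fails.
pretrained_name = "sundanese-roberta-base-emotion-classifier"

tokenizer = AutoTokenizer.from_pretrained(pretrained_name)
model = AutoModelForSequenceClassification.from_pretrained(pretrained_name)

# Tokenize a single Sundanese sentence and run one forward pass without gradients
inputs = tokenizer("Wah, éta gélo, keren pisan!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its emotion label via the model config
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```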
## Disclaimer

Do consider the biases which come from both the pre-trained RoBERTa model and the Sundanese Twitter dataset, as they may be carried over into the results of this model.

## Author

Sundanese RoBERTa Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development were done on Google Colaboratory using its free GPU access.

## Citation Information

```bib
@article{rs-907893,
    author = {Wongso, Wilson and Lucky, Henry and Suhartono, Derwin},
    journal = {Journal of Big Data},
    year = {2022},
    month = {Feb},
    day = {26},
    abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
    issn = {2693-5015},
    doi = {10.21203/rs.3.rs-907893/v1},
    url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
```