---
language: su
tags:
- sundanese-bert-base-emotion-classifier
license: mit
widget:
- text: "Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah"
---
## Sundanese BERT Base Emotion Classifier
Sundanese BERT Base Emotion Classifier is an emotion text-classification model based on the [BERT](https://arxiv.org/abs/1810.04805) model. It was initialized from the pre-trained [Sundanese BERT Base Uncased](https://hf.co/luche/bert-base-sundanese-uncased) model trained by [`@luche`](https://hf.co/luche), which was then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset), consisting of Sundanese tweets.
10% of the dataset was held out for evaluation. After training, the model achieved an evaluation accuracy of 96.82% and an F1-macro of 96.75%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks.
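For reference, a fine-tuning run with the `Trainer` API might look roughly like the sketch below. The dataset file names, the label count, and the hyperparameters are illustrative assumptions, not the original training script:
```python
# Illustrative fine-tuning sketch; file names, num_labels, and
# hyperparameters are assumptions, not the original training code.
import numpy as np
from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "luche/bert-base-sundanese-uncased"  # pre-trained base model
tokenizer = AutoTokenizer.from_pretrained(base_model)

# The Sundanese Twitter dataset is distributed on GitHub, so in practice
# you would load its files yourself; these CSV paths are hypothetical.
dataset = load_dataset(
    "csv", data_files={"train": "train.csv", "validation": "valid.csv"}
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Accuracy and macro-averaged F1, matching the metrics reported above.
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="macro"),
    }

# num_labels=4 is an assumption about the emotion label set.
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=4)

args = TrainingArguments(
    output_dir="sundanese-bert-base-emotion-classifier",
    num_train_epochs=10,            # trained for 10 epochs (see table below)
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,    # best checkpoint restored after training
    metric_for_best_model="f1",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    compute_metrics=compute_metrics,
)
trainer.train()
```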
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------- | ------- | --------- | ------------------------------- |
| `sundanese-bert-base-emotion-classifier` | 110M | BERT Base | Sundanese Twitter dataset |
## Evaluation Results
The model was trained for 10 epochs, and the best checkpoint was loaded at the end of training.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.759800 | 0.263913 | 0.924603 | 0.925042 | 0.928426 | 0.926130 |
| 2 | 0.213100 | 0.456022 | 0.908730 | 0.906732 | 0.924141 | 0.907846 |
| 3 | 0.091900 | 0.204323 | 0.956349 | 0.955896 | 0.956226 | 0.956248 |
| 4 | 0.043800 | 0.219143 | 0.956349 | 0.955705 | 0.955848 | 0.956392 |
| 5 | 0.013700 | 0.247289 | 0.960317 | 0.959734 | 0.959477 | 0.960782 |
| 6 | 0.004800 | 0.286636 | 0.956349 | 0.955540 | 0.956519 | 0.956615 |
| 7 | 0.000200 | 0.243408 | 0.960317 | 0.959085 | 0.959145 | 0.959310 |
| 8 | 0.001500 | 0.232138 | 0.960317 | 0.959451 | 0.959427 | 0.959997 |
| 9 | 0.000100 | 0.215523 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
| 10 | 0.000100 | 0.216533 | 0.968254 | 0.967556 | 0.967192 | 0.968330 |
## How to Use
### As a Text Classifier
```python
from transformers import pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub.
pretrained_name = "w11wo/sundanese-bert-base-emotion-classifier"
nlp = pipeline(
    "sentiment-analysis",
    model=pretrained_name,
    tokenizer=pretrained_name,
)

# Classify the emotion of a Sundanese tweet.
nlp("Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah")
```
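If you need the raw class probabilities rather than the pipeline's top prediction, the model can also be called directly. The following is a minimal sketch using standard Transformers conventions (the `id2label` mapping is the library's usual config attribute, not something specific to this card):
```python
# Lower-level usage without the pipeline helper; a sketch, not from the
# original card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "w11wo/sundanese-bert-base-emotion-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer(
    "Punten ini akurat ga ya sieun ihh daerah aku masuk zona merah",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and map the top index to its label name.
probs = torch.softmax(logits, dim=-1)
pred = probs.argmax(dim=-1).item()
print(model.config.id2label[pred], probs[0, pred].item())
```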
## Disclaimer
Consider the biases that come from both the pre-trained BERT model and the Sundanese Twitter dataset, which may be carried over into this model's predictions.
## Author
Sundanese BERT Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development were done on Google Colaboratory using its free GPU access.
## Citation Information
```bibtex
@article{rs-907893,
  author   = {Wongso, Wilson and Lucky, Henry and Suhartono, Derwin},
  journal  = {Journal of Big Data},
  year     = {2022},
  month    = {Feb},
  day      = {26},
  abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
  issn     = {2693-5015},
  doi      = {10.21203/rs.3.rs-907893/v1},
  url      = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
```