---
language:
- en
license: mit
base_model: microsoft/deberta-v3-base
tags:
- nycu-112-2-datamining-hw2
- generated_from_trainer
datasets:
- DandinPower/review_cleanonlytitleandtext
metrics:
- accuracy
model-index:
- name: deberta-v3-base-cotat
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: DandinPower/review_cleanonlytitleandtext
type: DandinPower/review_cleanonlytitleandtext
metrics:
- name: Accuracy
type: accuracy
value: 0.623
---
# deberta-v3-base-cotat
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co./microsoft/deberta-v3-base) on the DandinPower/review_cleanonlytitleandtext dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4985
- Accuracy: 0.623
- Macro F1: 0.6247
## Model description
This is [microsoft/deberta-v3-base](https://huggingface.co./microsoft/deberta-v3-base) with a sequence-classification head, fine-tuned for the NYCU 112-2 data mining homework 2 assignment (per the `nycu-112-2-datamining-hw2` tag). The dataset name suggests the inputs are cleaned review titles and texts.
## Intended uses & limitations
The model is intended for classifying reviews like those in the dataset it was trained on. It has not been evaluated on other domains, so out-of-domain behavior is unknown, and the ~0.62 evaluation accuracy should be kept in mind before relying on its predictions. A minimal inference sketch follows.
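As a sketch (assuming the checkpoint is published as `DandinPower/deberta-v3-base-cotat`; substitute the actual model path if different), inference with the `transformers` pipeline could look like:

```python
from transformers import pipeline

# Hypothetical hub id; replace with the actual checkpoint path.
classifier = pipeline(
    "text-classification",
    model="DandinPower/deberta-v3-base-cotat",
)

# The dataset name suggests inputs combine a review title and its text.
print(classifier("Great product! Works exactly as described."))
```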
## Training and evaluation data
The model was fine-tuned and evaluated on [DandinPower/review_cleanonlytitleandtext](https://huggingface.co./datasets/DandinPower/review_cleanonlytitleandtext). Split sizes and the label set are not documented in this card.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 4.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1500
- num_epochs: 5
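
These values map onto a `TrainingArguments` configuration roughly as sketched below; `output_dir` and the 500-step evaluation cadence (visible in the results table) are assumptions, not taken from the original training script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-v3-base-cotat",  # assumed
    learning_rate=4.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1500,
    num_train_epochs=5,
    evaluation_strategy="steps",  # inferred from the 500-step eval rows below
    eval_steps=500,
)
```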
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 1.0223 | 0.14 | 500 | 0.9610 | 0.592 | 0.5971 |
| 1.0108 | 0.29 | 1000 | 0.9378 | 0.6044 | 0.6083 |
| 0.9323 | 0.43 | 1500 | 0.9605 | 0.589 | 0.5652 |
| 0.9651 | 0.57 | 2000 | 0.9845 | 0.5797 | 0.5687 |
| 0.928 | 0.71 | 2500 | 0.9521 | 0.5907 | 0.5656 |
| 0.9205 | 0.86 | 3000 | 0.9073 | 0.603 | 0.5740 |
| 0.9243 | 1.0 | 3500 | 0.8876 | 0.616 | 0.6113 |
| 0.8545 | 1.14 | 4000 | 0.8631 | 0.6267 | 0.6290 |
| 0.8267 | 1.29 | 4500 | 0.8908 | 0.624 | 0.6185 |
| 0.8175 | 1.43 | 5000 | 0.8771 | 0.6173 | 0.6222 |
| 0.8613 | 1.57 | 5500 | 0.9564 | 0.6209 | 0.6081 |
| 0.8138 | 1.71 | 6000 | 0.9246 | 0.6089 | 0.6063 |
| 0.7314 | 1.86 | 6500 | 0.9030 | 0.6329 | 0.6313 |
| 0.8287 | 2.0 | 7000 | 0.8753 | 0.6211 | 0.6235 |
| 0.6963 | 2.14 | 7500 | 0.9700 | 0.6247 | 0.6257 |
| 0.7034 | 2.29 | 8000 | 0.9592 | 0.6234 | 0.6220 |
| 0.679 | 2.43 | 8500 | 0.8994 | 0.6233 | 0.6272 |
| 0.7207 | 2.57 | 9000 | 1.0013 | 0.6236 | 0.6183 |
| 0.6992 | 2.71 | 9500 | 0.9385 | 0.6169 | 0.6219 |
| 0.7032 | 2.86 | 10000 | 0.9247 | 0.6366 | 0.6364 |
| 0.6949 | 3.0 | 10500 | 0.9615 | 0.6239 | 0.6281 |
| 0.5581 | 3.14 | 11000 | 1.0439 | 0.6217 | 0.6267 |
| 0.55 | 3.29 | 11500 | 1.1205 | 0.6259 | 0.6232 |
| 0.5496 | 3.43 | 12000 | 1.1122 | 0.6226 | 0.6267 |
| 0.5462 | 3.57 | 12500 | 1.0692 | 0.6251 | 0.6263 |
| 0.5121 | 3.71 | 13000 | 1.1563 | 0.6197 | 0.6214 |
| 0.531 | 3.86 | 13500 | 1.1123 | 0.6261 | 0.6256 |
| 0.5256 | 4.0 | 14000 | 1.1194 | 0.6247 | 0.6264 |
| 0.3908 | 4.14 | 14500 | 1.3631 | 0.6204 | 0.6210 |
| 0.4439 | 4.29 | 15000 | 1.4810 | 0.6204 | 0.6211 |
| 0.4252 | 4.43 | 15500 | 1.4454 | 0.6211 | 0.6217 |
| 0.3721 | 4.57 | 16000 | 1.5315 | 0.6204 | 0.6231 |
| 0.369 | 4.71 | 16500 | 1.4797 | 0.6184 | 0.6190 |
| 0.3907 | 4.86 | 17000 | 1.4857 | 0.6219 | 0.6234 |
| 0.4022 | 5.0 | 17500 | 1.4985 | 0.623 | 0.6247 |
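
Note that the validation loss reaches its minimum (0.8631 at step 4000) well before training ends and rises steadily afterwards while the training loss keeps falling, so the final checkpoint sits past the best validation loss. The accuracy and macro F1 columns can be reproduced from model predictions with scikit-learn; this is a generic sketch, not the exact `compute_metrics` function used during training:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    """Generic metric function matching the columns reported above."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "macro_f1": f1_score(labels, predictions, average="macro"),
    }
```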
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2