---
language:
- en
license: mit
tags:
- nycu-112-2-datamining-hw2
- generated_from_trainer
base_model: microsoft/deberta-v2-xxlarge
datasets:
- DandinPower/review_onlytitleandtext
metrics:
- accuracy
model-index:
- name: deberta-v2-xxlarge-otat-small-lr
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: DandinPower/review_onlytitleandtext
      type: DandinPower/review_onlytitleandtext
    metrics:
    - type: accuracy
      value: 0.668
      name: Accuracy
---

# deberta-v2-xxlarge-otat-small-lr

This model is a fine-tuned version of [microsoft/deberta-v2-xxlarge](https://huggingface.co./microsoft/deberta-v2-xxlarge) on the DandinPower/review_onlytitleandtext dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7982
- Accuracy: 0.668
- Macro F1: 0.6665

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch mirroring these values appears at the end of this card):
- learning_rate: 1.8e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 8
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Macro F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 1.6073 | 0.23 | 100 | 1.5910 | 0.2409 | 0.1625 |
| 1.5142 | 0.46 | 200 | 1.2862 | 0.439 | 0.3770 |
| 1.0421 | 0.69 | 300 | 0.8956 | 0.617 | 0.6084 |
| 0.8818 | 0.91 | 400 | 0.8344 | 0.6487 | 0.6462 |
| 0.8309 | 1.14 | 500 | 0.8180 | 0.6586 | 0.6575 |
| 0.8029 | 1.37 | 600 | 0.8090 | 0.6603 | 0.6589 |
| 0.7949 | 1.6 | 700 | 0.8124 | 0.6613 | 0.6538 |
| 0.7847 | 1.83 | 800 | 0.7775 | 0.6696 | 0.6698 |
| 0.7717 | 2.06 | 900 | 0.7727 | 0.6703 | 0.6699 |
| 0.7445 | 2.29 | 1000 | 0.7767 | 0.669 | 0.6646 |
| 0.7367 | 2.51 | 1100 | 0.7774 | 0.6693 | 0.6676 |
| 0.7419 | 2.74 | 1200 | 0.7580 | 0.674 | 0.6743 |
| 0.7394 | 2.97 | 1300 | 0.7660 | 0.6714 | 0.6722 |
| 0.7253 | 3.2 | 1400 | 0.7695 | 0.6717 | 0.6740 |
| 0.7155 | 3.43 | 1500 | 0.7623 | 0.6676 | 0.6699 |
| 0.7089 | 3.66 | 1600 | 0.7762 | 0.6687 | 0.6630 |
| 0.7041 | 3.89 | 1700 | 0.7670 | 0.6716 | 0.6719 |
| 0.6982 | 4.11 | 1800 | 0.7735 | 0.6699 | 0.6659 |
| 0.6778 | 4.34 | 1900 | 0.7676 | 0.6701 | 0.6676 |
| 0.6919 | 4.57 | 2000 | 0.7772 | 0.6717 | 0.6692 |
| 0.6919 | 4.8 | 2100 | 0.7751 | 0.6687 | 0.6662 |
| 0.6721 | 5.03 | 2200 | 0.7955 | 0.6666 | 0.6613 |
| 0.6576 | 5.26 | 2300 | 0.7765 | 0.6714 | 0.6720 |
| 0.6675 | 5.49 | 2400 | 0.7900 | 0.6703 | 0.6711 |
| 0.6641 | 5.71 | 2500 | 0.7780 | 0.6689 | 0.6676 |
| 0.6669 | 5.94 | 2600 | 0.7751 | 0.6687 | 0.6675 |
| 0.6368 | 6.17 | 2700 | 0.7995 | 0.6691 | 0.6690 |
| 0.647 | 6.4 | 2800 | 0.7962 | 0.668 | 0.6635 |
| 0.6285 | 6.63 | 2900 | 0.7861 | 0.6699 | 0.6702 |
| 0.6656 | 6.86 | 3000 | 0.7939 | 0.6706 | 0.6695 |
| 0.6397 | 7.09 | 3100 | 0.7876 | 0.668 | 0.6672 |
| 0.6252 | 7.31 | 3200 | 0.8001 | 0.669 | 0.6671 |
| 0.6378 | 7.54 | 3300 | 0.8006 | 0.6687 | 0.6675 |
| 0.6243 | 7.77 | 3400 | 0.7982 | 0.668 | 0.6665 |

### Framework versions

- Transformers 4.39.3
- PyTorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
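
### Training configuration sketch

A minimal, hypothetical `TrainingArguments` reconstruction of the hyperparameters listed above; it is not the author's actual training script. Model loading, tokenization, and the `Trainer` call are omitted, and `output_dir` is a placeholder. Adam's betas and epsilon match the library defaults, and `fp16=True` stands in for "Native AMP".

```python
from transformers import TrainingArguments

# Hedged reconstruction of the configuration listed under
# "Training hyperparameters"; not the original script.
training_args = TrainingArguments(
    output_dir="deberta-v2-xxlarge-otat-small-lr",  # placeholder
    learning_rate=1.8e-6,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=64,  # total_train_batch_size: 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1,
    num_train_epochs=8,
    fp16=True,                       # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```

With a per-device batch size of 1, the 64 accumulation steps are what produce the effective batch size of 64 reported above.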
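
### Example usage

Since the card does not include an inference snippet, here is a minimal sketch using the `transformers` pipeline API. The repository id below is an assumption inferred from the model name and the dataset owner; replace it with wherever this checkpoint is actually hosted. The xxlarge backbone has roughly 1.5B parameters, so GPU inference is advisable.

```python
from transformers import pipeline

# Assumed repo id (inferred from the model name and dataset owner);
# adjust to the checkpoint's actual location.
classifier = pipeline(
    "text-classification",
    model="DandinPower/deberta-v2-xxlarge-otat-small-lr",
)

# The dataset pairs review titles with review text, so a plausible input
# is a short review-style string.
print(classifier("Great value. Arrived quickly and works as described."))
```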