---
license: mit
tags:
- generated_from_trainer
datasets:
- banking77
metrics:
- accuracy
widget:
- text: 'Can I track the card you sent to me? '
  example_title: Card Arrival Example - English
- text: 'Posso tracciare la carta che mi avete spedito? '
  example_title: Card Arrival Example - Italian
- text: Can you explain your exchange rate policy to me?
  example_title: Exchange Rate Example - English
- text: Potete spiegarmi la vostra politica dei tassi di cambio?
  example_title: Exchange Rate Example - Italian
- text: I can't pay by my credit card
  example_title: Card Not Working Example - English
- text: Non riesco a pagare con la mia carta di credito
  example_title: Card Not Working Example - Italian
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-banking77-classification
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: banking77
      type: banking77
      config: default
      split: train
      args: default
    metrics:
    - type: accuracy
      value: 0.9321428571428572
      name: Accuracy
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: banking77
      type: banking77
      config: default
      split: test
    metrics:
    - type: accuracy
      value: 0.9321428571428572
      name: Accuracy
      verified: true
    - type: precision
      value: 0.9339627666926148
      name: Precision Macro
      verified: true
    - type: precision
      value: 0.9321428571428572
      name: Precision Micro
      verified: true
    - type: precision
      value: 0.9339627666926148
      name: Precision Weighted
      verified: true
    - type: recall
      value: 0.9321428571428572
      name: Recall Macro
      verified: true
    - type: recall
      value: 0.9321428571428572
      name: Recall Micro
      verified: true
    - type: recall
      value: 0.9321428571428572
      name: Recall Weighted
      verified: true
    - type: f1
      value: 0.9320514513719953
      name: F1 Macro
      verified: true
    - type: f1
      value: 0.9321428571428572
      name: F1 Micro
      verified: true
    - type: f1
      value: 0.9320514513719956
      name: F1 Weighted
      verified: true
    - type: loss
      value: 0.30337899923324585
      name: loss
      verified: true
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xlm-roberta-base-banking77-classification

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co./xlm-roberta-base) on the banking77 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3034
- Accuracy: 0.9321
- F1 Score: 0.9321

## Model description

An experiment with a cross-lingual model: the model is fine-tuned on an English dataset and then queried in Italian, to assess how well the classification accuracy carries over across languages.

## Intended uses & limitations

The model can be used for text classification. In particular, it is fine-tuned on the banking domain for multilingual use.

## Training and evaluation data

The dataset used is [banking77](https://huggingface.co./datasets/banking77).

The 77 labels are:

|label|intent|
|:---:|:----:|
|0|activate_my_card|
|1|age_limit|
|2|apple_pay_or_google_pay|
|3|atm_support|
|4|automatic_top_up|
|5|balance_not_updated_after_bank_transfer|
|6|balance_not_updated_after_cheque_or_cash_deposit|
|7|beneficiary_not_allowed|
|8|cancel_transfer|
|9|card_about_to_expire|
|10|card_acceptance|
|11|card_arrival|
|12|card_delivery_estimate|
|13|card_linking|
|14|card_not_working|
|15|card_payment_fee_charged|
|16|card_payment_not_recognised|
|17|card_payment_wrong_exchange_rate|
|18|card_swallowed|
|19|cash_withdrawal_charge|
|20|cash_withdrawal_not_recognised|
|21|change_pin|
|22|compromised_card|
|23|contactless_not_working|
|24|country_support|
|25|declined_card_payment|
|26|declined_cash_withdrawal|
|27|declined_transfer|
|28|direct_debit_payment_not_recognised|
|29|disposable_card_limits|
|30|edit_personal_details|
|31|exchange_charge|
|32|exchange_rate|
|33|exchange_via_app|
|34|extra_charge_on_statement|
|35|failed_transfer|
|36|fiat_currency_support|
|37|get_disposable_virtual_card|
|38|get_physical_card|
|39|getting_spare_card|
|40|getting_virtual_card|
|41|lost_or_stolen_card|
|42|lost_or_stolen_phone|
|43|order_physical_card|
|44|passcode_forgotten|
|45|pending_card_payment|
|46|pending_cash_withdrawal|
|47|pending_top_up|
|48|pending_transfer|
|49|pin_blocked|
|50|receiving_money|
|51|Refund_not_showing_up|
|52|request_refund|
|53|reverted_card_payment?|
|54|supported_cards_and_currencies|
|55|terminate_account|
|56|top_up_by_bank_transfer_charge|
|57|top_up_by_card_charge|
|58|top_up_by_cash_or_cheque|
|59|top_up_failed|
|60|top_up_limits|
|61|top_up_reverted|
|62|topping_up_by_card|
|63|transaction_charged_twice|
|64|transfer_fee_charged|
|65|transfer_into_account|
|66|transfer_not_received_by_recipient|
|67|transfer_timing|
|68|unable_to_verify_identity|
|69|verify_my_identity|
|70|verify_source_of_funds|
|71|verify_top_up|
|72|virtual_card_not_working|
|73|visa_or_mastercard|
|74|why_verify_identity|
|75|wrong_amount_of_cash_received|
|76|wrong_exchange_rate_for_cash_withdrawal|
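
If the model returns bare label ids (e.g. `LABEL_14`) rather than intent names, the mapping in the table above can also be recovered programmatically from the dataset itself. A minimal sketch using the `datasets` library (variable names are illustrative):

```
from datasets import load_dataset

# The banking77 "label" feature is a ClassLabel carrying the id -> intent mapping above
banking77 = load_dataset("banking77", split="train")
int2str = banking77.features["label"].int2str

print(int2str(11))  # card_arrival
print(int2str(14))  # card_not_working
```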

## Training procedure

Example of querying the fine-tuned model (here in Italian) with the `pipeline` API:

```
from transformers import pipeline

# Load the fine-tuned classifier from the Hub
pipe = pipeline("text-classification", model="nickprock/xlm-roberta-base-banking77-classification")

# Italian query: "I can't pay with my credit card"
pipe("Non riesco a pagare con la carta di credito")
```
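
The same check can be run without the `pipeline` helper. A minimal sketch with the raw tokenizer and model (per the label table above, the query should map to `card_not_working`, id 14, assuming the model classifies it correctly):

```
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "nickprock/xlm-roberta-base-banking77-classification"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Italian query: "I can't pay with my credit card"
inputs = tokenizer("Non riesco a pagare con la mia carta di credito", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
# Map the id back to an intent via the label table above
# (or via model.config.id2label if the config carries the intent names)
print(predicted_id)
```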

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
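
These settings can be reproduced with the `Trainer` API. The sketch below is an assumption of how the run could be wired up, not the exact original training script: the tokenization step, the `evaluate`-based metrics, and the `output_dir` are illustrative, while the hyperparameters mirror the list above (the optimizer values match the `Trainer` defaults).

```
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
import evaluate

dataset = load_dataset("banking77")
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=77)

accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-banking77-classification",
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,              # enables dynamic padding via DataCollatorWithPadding
    compute_metrics=compute_metrics,
)
trainer.train()
```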

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 3.8002 | 1.0 | 157 | 2.7771 | 0.5159 | 0.4483 |
| 2.4006 | 2.0 | 314 | 1.6937 | 0.7140 | 0.6720 |
| 1.4633 | 3.0 | 471 | 1.0385 | 0.8308 | 0.8153 |
| 0.9234 | 4.0 | 628 | 0.7008 | 0.8789 | 0.8761 |
| 0.6163 | 5.0 | 785 | 0.5029 | 0.9068 | 0.9063 |
| 0.4282 | 6.0 | 942 | 0.4084 | 0.9123 | 0.9125 |
| 0.3203 | 7.0 | 1099 | 0.3515 | 0.9253 | 0.9253 |
| 0.245 | 8.0 | 1256 | 0.3295 | 0.9227 | 0.9225 |
| 0.1863 | 9.0 | 1413 | 0.3092 | 0.9269 | 0.9269 |
| 0.1518 | 10.0 | 1570 | 0.2901 | 0.9338 | 0.9338 |
| 0.1179 | 11.0 | 1727 | 0.2938 | 0.9318 | 0.9319 |
| 0.0969 | 12.0 | 1884 | 0.2906 | 0.9328 | 0.9328 |
| 0.0805 | 13.0 | 2041 | 0.2963 | 0.9295 | 0.9295 |
| 0.063 | 14.0 | 2198 | 0.2998 | 0.9289 | 0.9288 |
| 0.0554 | 15.0 | 2355 | 0.2933 | 0.9351 | 0.9349 |
| 0.046 | 16.0 | 2512 | 0.2960 | 0.9328 | 0.9326 |
| 0.04 | 17.0 | 2669 | 0.3032 | 0.9318 | 0.9318 |
| 0.035 | 18.0 | 2826 | 0.3061 | 0.9312 | 0.9312 |
| 0.0317 | 19.0 | 2983 | 0.3030 | 0.9331 | 0.9330 |
| 0.0315 | 20.0 | 3140 | 0.3034 | 0.9321 | 0.9321 |
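
Note that validation accuracy peaks around epoch 15 (0.9351) while the reported final-epoch value is slightly lower (0.9321). If the run were repeated, keeping the best checkpoint instead of the last one is a possible refinement; a sketch of the relevant `TrainingArguments` flags (these were not part of the original run as far as this card states):

```
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-banking77-classification",
    evaluation_strategy="epoch",
    save_strategy="epoch",            # must match the evaluation strategy
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    # ...remaining hyperparameters as in the sketch above...
)
```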

### Framework versions

- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1