---
license: mit
base_model: BAAI/bge-base-en-v1.5
tags:
  - generated_from_trainer
model-index:
  - name: ADAPMIT-multilabel-bge
    results: []
datasets:
  - GIZ/policy_classification
library_name: transformers
co2_eq_emissions:
  emissions: 40.5174303026829
  source: codecarbon
  training_type: fine-tuning
  on_cloud: true
  cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
  ram_total_size: 12.6747894287109
  hours_used: 0.994
  hardware_used: 1 x Tesla T4
pipeline_tag: text-classification
---

ADAPMIT-multilabel-bge

This model is a fine-tuned version of BAAI/bge-base-en-v1.5 on the GIZ/policy_classification dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3101
  • Precision-micro: 0.9058
  • Precision-samples: 0.8647
  • Precision-weighted: 0.9058
  • Recall-micro: 0.9305
  • Recall-samples: 0.8693
  • Recall-weighted: 0.9305
  • F1-micro: 0.9180
  • F1-samples: 0.8622
  • F1-weighted: 0.9180
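
The micro, samples, and weighted variants correspond to standard multilabel averaging modes, as implemented for example in scikit-learn. A minimal sketch of how such figures are computed, using toy placeholder predictions rather than the actual evaluation data:

```python
# Sketch of multilabel metric averaging, assuming scikit-learn conventions;
# y_true/y_pred are toy placeholders, not the actual evaluation data.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Rows are examples; columns are (AdaptationLabel, MitigationLabel).
y_true = np.array([[1, 0], [1, 1], [0, 1], [0, 1]])
y_pred = np.array([[1, 0], [1, 1], [1, 1], [0, 0]])

for avg in ("micro", "samples", "weighted"):
    p = precision_score(y_true, y_pred, average=avg, zero_division=0)
    r = recall_score(y_true, y_pred, average=avg, zero_division=0)
    f = f1_score(y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg}: precision={p:.4f} recall={r:.4f} f1={f:.4f}")
```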

Model description

The purpose of this model is to predict multiple labels simultaneously for a given piece of input text. Specifically, the model predicts two labels, AdaptationLabel and MitigationLabel, indicating whether a text is relevant to climate adaptation and/or mitigation. A usage sketch follows below.
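
A hedged inference sketch: the repository id is assumed from the model name, and the 0.5 decision threshold is a common default rather than a documented choice.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "GIZ/ADAPMIT-multilabel-bge"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The programme funds coastal flood defences and expands solar power."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel head: score each label independently with a sigmoid.
probs = torch.sigmoid(logits).squeeze(0)
for i, p in enumerate(probs):
    label = model.config.id2label.get(i, f"LABEL_{i}")
    print(f"{label}: {p.item():.3f} -> {'yes' if p.item() > 0.5 else 'no'}")
```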

Intended uses & limitations

More information needed

Training and evaluation data

  • Training Dataset: 12,538 samples

    | Class           | Positive count |
    |-----------------|----------------|
    | AdaptationLabel | 5439           |
    | MitigationLabel | 6659           |

  • Validation Dataset: 1,190 samples

    | Class           | Positive count |
    |-----------------|----------------|
    | AdaptationLabel | 533            |
    | MitigationLabel | 604            |
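
A short sketch of inspecting this data with the datasets library; the split names and column layout are assumptions to verify against the GIZ/policy_classification dataset card.

```python
from datasets import load_dataset

ds = load_dataset("GIZ/policy_classification")  # a config name may be required
print(ds)              # available splits and sizes
print(ds["train"][0])  # assumed split name; shows the column layout
```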

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4.08e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 300
  • num_epochs: 4
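
These settings map directly onto transformers' TrainingArguments. The sketch below is a reconstruction, with output_dir as a placeholder; the listed Adam betas and epsilon are the Trainer's default AdamW settings, so no extra optimizer configuration is needed.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="adapmit-multilabel-bge",  # placeholder
    learning_rate=4.08e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=300,
    num_train_epochs=4,
)
```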

Training results

| Training Loss | Epoch | Step | Validation Loss | Precision-micro | Precision-samples | Precision-weighted | Recall-micro | Recall-samples | Recall-weighted | F1-micro | F1-samples | F1-weighted |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|:-----------------:|:------------------:|:------------:|:--------------:|:---------------:|:--------:|:----------:|:-----------:|
| 0.3368 | 1.0 | 784 | 0.2917 | 0.8651 | 0.8450 | 0.8664 | 0.9138 | 0.8542 | 0.9138 | 0.8888 | 0.8437 | 0.8890 |
| 0.1807 | 2.0 | 1568 | 0.2549 | 0.9092 | 0.8643 | 0.9094 | 0.9156 | 0.8571 | 0.9156 | 0.9124 | 0.8571 | 0.9123 |
| 0.0955 | 3.0 | 2352 | 0.2988 | 0.9069 | 0.8660 | 0.9072 | 0.9252 | 0.8655 | 0.9252 | 0.9160 | 0.8613 | 0.9160 |
| 0.0495 | 4.0 | 3136 | 0.3101 | 0.9058 | 0.8647 | 0.9058 | 0.9305 | 0.8693 | 0.9305 | 0.9180 | 0.8622 | 0.9180 |
Per-label results on the validation set:

| Label           | Precision | Recall | F1-score | Support |
|:---------------:|:---------:|:------:|:--------:|:-------:|
| AdaptationLabel | 0.910     | 0.928  | 0.919    | 533     |
| MitigationLabel | 0.902     | 0.932  | 0.917    | 604     |

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Carbon Emitted: 0.04051 kg of CO2
  • Hours Used: 0.994 hours
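
A minimal sketch of how such a measurement is taken with CodeCarbon's EmissionsTracker; train() is an empty stand-in for the actual fine-tuning loop.

```python
from codecarbon import EmissionsTracker

def train():
    pass  # placeholder for the fine-tuning run

tracker = EmissionsTracker()
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # total emissions in kg CO2-eq
    print(f"Emissions: {emissions_kg:.5f} kg CO2-eq")
```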

Training Hardware

  • On Cloud: yes
  • GPU Model: 1 x Tesla T4
  • CPU Model: Intel(R) Xeon(R) CPU @ 2.00GHz
  • RAM Size: 12.67 GB

Framework versions

  • Transformers 4.38.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2