---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - clinc_oos
metrics:
  - accuracy
model-index:
  - name: distilbert-base-uncased-distilled-clinc
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: clinc_oos
          type: clinc_oos
          args: plus
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.944516129032258
---

distilbert-base-uncased-distilled-clinc

This model is a fine-tuned version of distilbert-base-uncased on the clinc_oos dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2565
  • Accuracy: 0.9445
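
A minimal usage sketch is shown below; the Hub repository id passed to model= is assumed from this repository's owner and name, not confirmed by the card.

```python
# Minimal usage sketch; the repository id below is an assumption.
from transformers import pipeline

intent_classifier = pipeline(
    "text-classification",
    model="buruzaemon/distilbert-base-uncased-distilled-clinc",
)

# CLINC-style query; the model predicts one of the clinc_oos intent labels
# (150 in-scope intents plus out-of-scope).
print(intent_classifier("Please transfer $100 from checking to savings."))
```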

Model description

This is a follow-up example of knowledge distillation that used transformers.Trainer.hyperparameter_search with the default Optuna backend to find optimal values for the following hyperparameters (a sketch of the search follows the list):

  • num_train_epochs
  • alpha
  • temperature
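
A minimal sketch of that search is shown below, assuming a distillation-aware trainer (see the sketch under Training hyperparameters); the search ranges and trial budget are illustrative assumptions, not the exact values behind this model.

```python
# Sketch only: the hp_space ranges and n_trials are assumptions, and
# distil_trainer stands in for a distillation-aware Trainer built elsewhere.
def hp_space(trial):
    # Optuna trial object; propose values for the three tuned hyperparameters.
    return {
        "num_train_epochs": trial.suggest_int("num_train_epochs", 5, 10),
        "alpha": trial.suggest_float("alpha", 0.0, 1.0),
        "temperature": trial.suggest_float("temperature", 2.0, 20.0),
    }

best_run = distil_trainer.hyperparameter_search(
    hp_space=hp_space,
    n_trials=20,            # assumed trial budget
    direction="maximize",   # maximize evaluation accuracy
    backend="optuna",       # the default backend
)
print(best_run.hyperparameters)
```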

Intended uses & limitations

More information needed

Training and evaluation data

The training and evaluation data come straight from the train and validation splits of the clinc_oos dataset, respectively, and are tokenized with the distilbert-base-uncased tokenizer.
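
A minimal sketch of that preparation is below; the plus configuration matches the args in the metadata above, and the text / intent column names come from the public clinc_oos dataset.

```python
# Sketch of the data preparation; column names follow the public clinc_oos dataset.
from datasets import load_dataset
from transformers import AutoTokenizer

clinc = load_dataset("clinc_oos", "plus")  # train / validation / test splits
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # "text" holds the utterances; "intent" holds the integer label ids.
    return tokenizer(batch["text"], truncation=True)

clinc_enc = clinc.map(tokenize, batched=True, remove_columns=["text"])
clinc_enc = clinc_enc.rename_column("intent", "labels")
```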

Training procedure

Hyperparameter search was done with the default Optuna backend, leading to the values below.

Please see page 228 in Chapter 8, "Making Transformers Efficient in Production", of Natural Language Processing with Transformers (May 2022).

Training hyperparameters

The following hyperparameters were used during training (a sketch of how alpha and temperature enter the distillation loss follows the list):

  • num_epochs: 10
  • alpha: 0.5858821400787321
  • temperature: 4.917005721212045
  • learning_rate: 2e-05
  • train_batch_size: 48
  • eval_batch_size: 48
  • seed: 8675309
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
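
Below is a minimal sketch of how alpha and temperature typically enter the distillation loss, following the pattern from Chapter 8 of the book cited above; the class names and exact wiring are illustrative assumptions rather than the code used to produce this checkpoint.

```python
# Sketch of a distillation loss using alpha and temperature; illustrative only.
import torch
import torch.nn.functional as F
from transformers import Trainer, TrainingArguments

class DistillationTrainingArguments(TrainingArguments):
    def __init__(self, *args, alpha=0.5, temperature=2.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.alpha = alpha              # weight on the hard-label cross-entropy
        self.temperature = temperature  # softens student and teacher logits

class DistillationTrainer(Trainer):
    def __init__(self, *args, teacher_model=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.teacher_model = teacher_model

    def compute_loss(self, model, inputs, return_outputs=False):
        outputs_student = model(**inputs)
        loss_ce = outputs_student.loss  # cross-entropy against the hard labels
        with torch.no_grad():
            logits_teacher = self.teacher_model(**inputs).logits
        T = self.args.temperature
        # KL divergence between temperature-softened distributions, scaled by
        # T**2 so gradient magnitudes stay comparable across temperatures.
        loss_kd = T ** 2 * F.kl_div(
            F.log_softmax(outputs_student.logits / T, dim=-1),
            F.softmax(logits_teacher / T, dim=-1),
            reduction="batchmean",
        )
        loss = self.args.alpha * loss_ce + (1.0 - self.args.alpha) * loss_kd
        return (loss, outputs_student) if return_outputs else loss
```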

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 318  | 2.0029          | 0.6910   |
| 2.3585        | 2.0   | 636  | 1.0585          | 0.8626   |
| 2.3585        | 3.0   | 954  | 0.6001          | 0.9058   |
| 0.9378        | 4.0   | 1272 | 0.4072          | 0.9348   |
| 0.4053        | 5.0   | 1590 | 0.3274          | 0.9387   |
| 0.4053        | 6.0   | 1908 | 0.2951          | 0.9426   |
| 0.2433        | 7.0   | 2226 | 0.2734          | 0.9439   |
| 0.1871        | 8.0   | 2544 | 0.2625          | 0.9452   |
| 0.1871        | 9.0   | 2862 | 0.2566          | 0.9452   |
| 0.166         | 10.0  | 3180 | 0.2565          | 0.9445   |
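
The accuracy column above would typically be produced by a compute_metrics hook such as the sketch below, which uses datasets.load_metric (available in the Datasets version listed under Framework versions); this is an assumed reconstruction, not the exact code.

```python
# Assumed reconstruction of the accuracy computation reported above.
import numpy as np
from datasets import load_metric

accuracy_metric = load_metric("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # predicted intent ids
    return accuracy_metric.compute(predictions=predictions, references=labels)
```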

Framework versions

  • Transformers 4.16.2
  • Pytorch 2.1.2+cu121
  • Datasets 1.16.1
  • Tokenizers 0.15.1