---
license: mit
base_model: avsolatorio/GIST-large-Embedding-v0
tags:
  - generated_from_trainer
metrics:
  - f1
  - accuracy
model-index:
  - name: my-clf-microsoft
    results: []
---

# my-clf-microsoft

This model is a fine-tuned version of [avsolatorio/GIST-large-Embedding-v0](https://huggingface.co/avsolatorio/GIST-large-Embedding-v0) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.2381
- F1: 0.5822
- ROC AUC: 0.7634
- Accuracy: 0.1786
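
A minimal inference sketch follows. Both the repo id and the multi-label head are assumptions (the card does not state either; the combination of F1, ROC AUC, and low subset accuracy merely suggests a multi-label setup), and they are flagged in the comments.

```python
# Minimal inference sketch, not the author's documented usage.
# Assumptions: the checkpoint is published as "krittapas/my-clf-microsoft"
# (hypothetical repo id) and the classification head is multi-label.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "krittapas/my-clf-microsoft"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Text to classify", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label heads are scored with a per-label sigmoid and a 0.5 threshold.
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```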

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
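
These values map one-to-one onto 🤗 Transformers `TrainingArguments`. The sketch below is a reconstruction under that assumption, not the original training script; the label count, problem type, and dataset arguments are left as placeholders because the card does not document them.

```python
# Hedged reconstruction of the configuration from the hyperparameters above.
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "avsolatorio/GIST-large-Embedding-v0",
    # num_labels=...,                              # not documented in the card
    # problem_type="multi_label_classification",   # assumption, see above
)

args = TrainingArguments(
    output_dir="my-clf-microsoft",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=16,
    evaluation_strategy="epoch",  # matches the per-epoch eval table below
)

trainer = Trainer(
    model=model,
    args=args,
    # train_dataset=...,  # the training data is not documented
    # eval_dataset=...,
)
```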

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | ROC AUC | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log        | 1.0   | 50   | 0.3147          | 0.0656 | 0.5250  | 0.0      |
| No log        | 2.0   | 100  | 0.2808          | 0.2809 | 0.6082  | 0.0536   |
| No log        | 3.0   | 150  | 0.2539          | 0.3854 | 0.6521  | 0.0357   |
| No log        | 4.0   | 200  | 0.2451          | 0.4085 | 0.6582  | 0.0714   |
| No log        | 5.0   | 250  | 0.2351          | 0.4365 | 0.6734  | 0.1071   |
| No log        | 6.0   | 300  | 0.2361          | 0.4977 | 0.7133  | 0.125    |
| No log        | 7.0   | 350  | 0.2325          | 0.5629 | 0.7433  | 0.1607   |
| No log        | 8.0   | 400  | 0.2294          | 0.5488 | 0.7401  | 0.1964   |
| No log        | 9.0   | 450  | 0.2336          | 0.5750 | 0.7567  | 0.1964   |
| 0.1718        | 10.0  | 500  | 0.2342          | 0.5695 | 0.7563  | 0.1964   |
| 0.1718        | 11.0  | 550  | 0.2354          | 0.5809 | 0.7648  | 0.1964   |
| 0.1718        | 12.0  | 600  | 0.2349          | 0.5862 | 0.7658  | 0.1786   |
| 0.1718        | 13.0  | 650  | 0.2390          | 0.5811 | 0.7645  | 0.1786   |
| 0.1718        | 14.0  | 700  | 0.2367          | 0.5841 | 0.7633  | 0.2143   |
| 0.1718        | 15.0  | 750  | 0.2376          | 0.5778 | 0.7606  | 0.1786   |
| 0.1718        | 16.0  | 800  | 0.2381          | 0.5822 | 0.7634  | 0.1786   |

### Framework versions

- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2