---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
  - generated_from_trainer
model-index:
  - name: bert-base-uncased-nsp-50000-1e-06-16
    results: []
---

# bert-base-uncased-nsp-50000-1e-06-16

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2315
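
The repository name suggests this checkpoint was fine-tuned for next sentence prediction (NSP). Assuming that, a minimal inference sketch with the `transformers` library could look like the following; the example sentence pair is illustrative only:

```python
import torch
from transformers import AutoTokenizer, BertForNextSentencePrediction

# Assumes this checkpoint carries an NSP head on top of bert-base-uncased,
# as the repository name suggests; adjust the model class if it does not.
model_id = "mhr2004/bert-base-uncased-nsp-50000-1e-06-16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BertForNextSentencePrediction.from_pretrained(model_id)
model.eval()

sentence_a = "The storm knocked out power across the city."
sentence_b = "Crews worked through the night to restore it."

inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# For BertForNextSentencePrediction, index 0 = "sentence B follows A",
# index 1 = "sentence B is a random sentence".
probs = torch.softmax(logits, dim=-1)
print(f"P(is next): {probs[0, 0]:.3f}  P(is random): {probs[0, 1]:.3f}")
```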

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 1e-06
- train_batch_size: 64
- eval_batch_size: 1024
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
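
A minimal sketch of how these settings map onto `transformers.TrainingArguments`; the output directory, evaluation cadence, and best-checkpoint selection are assumptions, since the card does not record them:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above. The Adam betas and epsilon
# given in the card are the TrainingArguments defaults, so they are not
# set explicitly here.
training_args = TrainingArguments(
    output_dir="bert-base-uncased-nsp-50000-1e-06-16",  # assumed name
    learning_rate=1e-6,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=1024,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    eval_strategy="epoch",        # the results table logs one eval per epoch
    save_strategy="epoch",        # assumption
    load_best_model_at_end=True,  # assumption: training stopped after epoch 20
)
```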

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.6829        | 1.0   | 782   | 0.6331          |
| 0.589         | 2.0   | 1564  | 0.5425          |
| 0.4436        | 3.0   | 2346  | 0.3892          |
| 0.3601        | 4.0   | 3128  | 0.3341          |
| 0.3181        | 5.0   | 3910  | 0.3010          |
| 0.2893        | 6.0   | 4692  | 0.2816          |
| 0.2736        | 7.0   | 5474  | 0.2711          |
| 0.2505        | 8.0   | 6256  | 0.2626          |
| 0.2277        | 9.0   | 7038  | 0.2539          |
| 0.216         | 10.0  | 7820  | 0.2474          |
| 0.207         | 11.0  | 8602  | 0.2452          |
| 0.212         | 12.0  | 9384  | 0.2398          |
| 0.1956        | 13.0  | 10166 | 0.2384          |
| 0.1794        | 14.0  | 10948 | 0.2374          |
| 0.1894        | 15.0  | 11730 | 0.2358          |
| 0.1731        | 16.0  | 12512 | 0.2358          |
| 0.1717        | 17.0  | 13294 | 0.2315          |
| 0.1652        | 18.0  | 14076 | 0.2349          |
| 0.1647        | 19.0  | 14858 | 0.2353          |
| 0.1565        | 20.0  | 15640 | 0.2349          |

### Framework versions

- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
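
To reproduce the environment, a quick check that the locally installed packages match the versions listed above:

```python
import datasets
import tokenizers
import torch
import transformers

# Compare the installed versions against the ones recorded in this card
# before attempting to reproduce the training run.
for name, module in [
    ("Transformers", transformers),
    ("PyTorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```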