
framing_classification_longformer_30_augmented_multi_undersampled_second

This model is a fine-tuned version of allenai/longformer-base-4096 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3928
  • Accuracy: 0.9452
  • F1: 0.9479
  • Precision: 0.9496
  • Recall: 0.9468
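The card does not state how F1, precision, and recall are averaged across the framing classes; macro-averaging (an unweighted mean over per-class scores) is one common choice. A minimal, self-contained sketch of macro-averaged scoring on hypothetical predictions — illustrative only, not the actual evaluation code:

```python
def macro_scores(y_true, y_pred):
    """Per-class precision/recall/F1, macro-averaged (unweighted class mean)."""
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Hypothetical labels for a 3-class framing task.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
p, r, f = macro_scores(y_true, y_pred)
```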

Model description

More information needed

Intended uses & limitations

More information needed
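Although the intended uses are not documented, the checkpoint loads as a standard sequence-classification model; the label set and class names are not stated on this card. A hedged sketch of inference with the transformers library (the 4096-token truncation follows the Longformer base model; the helper is illustrative and only downloads weights when called):

```python
MODEL_ID = "AriyanH22/framing_classification_longformer_30_augmented_multi_undersampled_second"

def classify(texts, model_id=MODEL_ID):
    """Return predicted label ids for a list of texts.

    Label meanings are not documented on the model card, so the ids
    would need to be mapped via the checkpoint's config (id2label).
    """
    # Imports are deferred so this module can be inspected without
    # transformers/torch installed.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    inputs = tokenizer(
        texts, truncation=True, max_length=4096, padding=True, return_tensors="pt"
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).tolist()
```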

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
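Given `lr_scheduler_type: linear` with no warmup steps listed, the learning rate presumably decays linearly from 2e-05 to zero over the full run — 359,370 optimizer steps (30 epochs × 11,979 steps at batch size 1, per the results table). A sketch of that assumed schedule:

```python
TOTAL_STEPS = 359_370  # 30 epochs x 11,979 steps/epoch, batch size 1
BASE_LR = 2e-5

def linear_lr(step, base_lr=BASE_LR, total_steps=TOTAL_STEPS):
    """Linear decay with no warmup: base_lr at step 0, zero at the final step."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```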

Training results

| Training Loss | Epoch | Step   | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7779        | 1.0   | 11979  | 0.7469          | 0.8671   | 0.8587 | 0.8727    | 0.8604 |
| 0.5767        | 2.0   | 23958  | 0.6461          | 0.9025   | 0.8972 | 0.9102    | 0.8975 |
| 0.4894        | 3.0   | 35937  | 0.5479          | 0.9218   | 0.9208 | 0.9254    | 0.9200 |
| 0.5261        | 4.0   | 47916  | 0.4513          | 0.9238   | 0.9238 | 0.9294    | 0.9212 |
| 0.4559        | 5.0   | 59895  | 0.4831          | 0.9225   | 0.9219 | 0.9285    | 0.9205 |
| 0.4335        | 6.0   | 71874  | 0.6019          | 0.9138   | 0.9132 | 0.9211    | 0.9123 |
| 0.4899        | 7.0   | 83853  | 0.4653          | 0.9299   | 0.9295 | 0.9356    | 0.9264 |
| 0.5907        | 8.0   | 95832  | 0.4006          | 0.9419   | 0.9437 | 0.9494    | 0.9403 |
| 0.5436        | 9.0   | 107811 | 0.4034          | 0.9472   | 0.9494 | 0.9525    | 0.9479 |
| 0.3722        | 10.0  | 119790 | 0.4336          | 0.9385   | 0.9391 | 0.9433    | 0.9386 |
| 0.443         | 11.0  | 131769 | 0.4519          | 0.9332   | 0.9365 | 0.9375    | 0.9363 |
| 0.3867        | 12.0  | 143748 | 0.4657          | 0.9232   | 0.9268 | 0.9242    | 0.9301 |
| 0.3362        | 13.0  | 155727 | 0.4739          | 0.9305   | 0.9336 | 0.9327    | 0.9353 |
| 0.3106        | 14.0  | 167706 | 0.4533          | 0.9399   | 0.9404 | 0.9445    | 0.9386 |
| 0.3265        | 15.0  | 179685 | 0.3928          | 0.9452   | 0.9479 | 0.9496    | 0.9468 |
| 0.2352        | 16.0  | 191664 | 0.4770          | 0.9238   | 0.9279 | 0.9251    | 0.9320 |
| 0.2962        | 17.0  | 203643 | 0.4575          | 0.9345   | 0.9383 | 0.9390    | 0.9377 |
| 0.3453        | 18.0  | 215622 | 0.6071          | 0.9152   | 0.9203 | 0.9174    | 0.9253 |
| 0.1919        | 19.0  | 227601 | 0.5484          | 0.9238   | 0.9280 | 0.9260    | 0.9313 |
| 0.1667        | 20.0  | 239580 | 0.4992          | 0.9325   | 0.9358 | 0.9359    | 0.9363 |
| 0.1111        | 21.0  | 251559 | 0.5727          | 0.9272   | 0.9319 | 0.9292    | 0.9358 |
| 0.1705        | 22.0  | 263538 | 0.5028          | 0.9345   | 0.9392 | 0.9385    | 0.9401 |
| 0.1745        | 23.0  | 275517 | 0.5244          | 0.9345   | 0.9389 | 0.9387    | 0.9394 |
| 0.1366        | 24.0  | 287496 | 0.5165          | 0.9345   | 0.9393 | 0.9370    | 0.9423 |
| 0.1031        | 25.0  | 299475 | 0.5479          | 0.9339   | 0.9382 | 0.9368    | 0.9399 |
| 0.0851        | 26.0  | 311454 | 0.6036          | 0.9345   | 0.9386 | 0.9378    | 0.9396 |
| 0.125         | 27.0  | 323433 | 0.5253          | 0.9385   | 0.9424 | 0.9419    | 0.9430 |
| 0.1193        | 28.0  | 335412 | 0.5650          | 0.9352   | 0.9394 | 0.9376    | 0.9415 |
| 0.067         | 29.0  | 347391 | 0.5818          | 0.9352   | 0.9395 | 0.9376    | 0.9418 |
| 0.0219        | 30.0  | 359370 | 0.5994          | 0.9359   | 0.9402 | 0.9385    | 0.9423 |

Framework versions

  • Transformers 4.32.0.dev0
  • Pytorch 2.0.1
  • Datasets 2.14.4
  • Tokenizers 0.13.3
