t5-large-finetuned2

This model is a fine-tuned version of t5-large on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0000
  • Rouge1: 1.0
  • Rouge2: 0.9378
  • Rougel: 1.0
  • Rougelsum: 1.0
  • Gen Len: 5.9868
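
The card ships no usage snippet; the sketch below shows one plausible way to load the checkpoint for inference with the transformers API. The input string is a placeholder, since the training task and prompt format are undocumented.

```python
# Minimal inference sketch; assumes the transformers library is installed.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "kowsiknd/t5-large-finetuned2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder input: the expected task and prompt format are not documented.
inputs = tokenizer("your input text here", return_tensors="pt")
# Average generation length on the eval set was ~6 tokens, so a small
# generation budget is likely enough.
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```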

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 1
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
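
For reference, these settings map onto Hugging Face's Seq2SeqTrainingArguments roughly as sketched below. This is not the author's actual training script: output_dir is hypothetical, and the Adam betas and epsilon listed above are the Trainer defaults, so they need no explicit arguments.

```python
# Sketch of an equivalent Trainer configuration under the assumptions above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-large-finetuned2",  # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=1,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer.
    predict_with_generate=True,  # assumption: needed for ROUGE/Gen Len eval
)
```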

Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.2513 | 1.0 | 1111 | 0.9524 | 0.1962 | 0.1068 | 0.1958 | 0.1958 | 4.722 |
| 1.0513 | 2.0 | 2222 | 0.7301 | 0.2556 | 0.1631 | 0.2544 | 0.2543 | 5.5469 |
| 0.839 | 3.0 | 3333 | 0.5738 | 0.3136 | 0.2165 | 0.312 | 0.3125 | 5.3629 |
| 0.7156 | 4.0 | 4444 | 0.4505 | 0.3808 | 0.2794 | 0.3797 | 0.38 | 5.5364 |
| 0.6135 | 5.0 | 5555 | 0.3600 | 0.4435 | 0.352 | 0.4425 | 0.4427 | 5.6558 |
| 0.5271 | 6.0 | 6666 | 0.2743 | 0.5288 | 0.4371 | 0.5279 | 0.5283 | 5.7094 |
| 0.439 | 7.0 | 7777 | 0.2246 | 0.5781 | 0.4842 | 0.5772 | 0.5776 | 5.6331 |
| 0.3821 | 8.0 | 8888 | 0.1728 | 0.6557 | 0.5675 | 0.6549 | 0.6551 | 5.8646 |
| 0.3297 | 9.0 | 9999 | 0.1379 | 0.7083 | 0.6211 | 0.7075 | 0.7076 | 5.8331 |
| 0.2805 | 10.0 | 11110 | 0.1067 | 0.769 | 0.6867 | 0.7684 | 0.7685 | 5.8528 |
| 0.2465 | 11.0 | 12221 | 0.0845 | 0.812 | 0.7324 | 0.8113 | 0.8115 | 5.918 |
| 0.2079 | 12.0 | 13332 | 0.0691 | 0.8516 | 0.7748 | 0.8515 | 0.8515 | 5.9435 |
| 0.1746 | 13.0 | 14443 | 0.0527 | 0.8785 | 0.8028 | 0.8784 | 0.8783 | 5.9311 |
| 0.1551 | 14.0 | 15554 | 0.0420 | 0.9123 | 0.8387 | 0.9123 | 0.9124 | 5.9516 |
| 0.1374 | 15.0 | 16665 | 0.0304 | 0.9368 | 0.8657 | 0.9367 | 0.9367 | 5.9531 |
| 0.1153 | 16.0 | 17776 | 0.0239 | 0.9501 | 0.8822 | 0.95 | 0.95 | 5.967 |
| 0.0821 | 17.0 | 18887 | 0.0204 | 0.9604 | 0.8935 | 0.9603 | 0.9603 | 5.9743 |
| 0.077 | 18.0 | 19998 | 0.0180 | 0.9722 | 0.9049 | 0.9721 | 0.9721 | 5.9863 |
| 0.0784 | 19.0 | 21109 | 0.0118 | 0.9813 | 0.9165 | 0.9812 | 0.9812 | 5.9845 |
| 0.0669 | 20.0 | 22220 | 0.0133 | 0.9796 | 0.9143 | 0.9796 | 0.9796 | 5.9817 |
| 0.0511 | 21.0 | 23331 | 0.0082 | 0.9878 | 0.9224 | 0.9877 | 0.9877 | 5.986 |
| 0.0524 | 22.0 | 24442 | 0.0079 | 0.9861 | 0.9212 | 0.9861 | 0.9861 | 5.9845 |
| 0.0397 | 23.0 | 25553 | 0.0060 | 0.9907 | 0.9272 | 0.9907 | 0.9907 | 5.9832 |
| 0.0284 | 24.0 | 26664 | 0.0060 | 0.9906 | 0.9267 | 0.9906 | 0.9906 | 5.985 |
| 0.0374 | 25.0 | 27775 | 0.0047 | 0.993 | 0.9289 | 0.9929 | 0.993 | 5.9905 |
| 0.0289 | 26.0 | 28886 | 0.0033 | 0.9944 | 0.9311 | 0.9944 | 0.9945 | 5.9909 |
| 0.0304 | 27.0 | 29997 | 0.0034 | 0.9947 | 0.931 | 0.9948 | 0.9948 | 5.9873 |
| 0.0232 | 28.0 | 31108 | 0.0036 | 0.9944 | 0.9312 | 0.9944 | 0.9944 | 5.9814 |
| 0.0208 | 29.0 | 32219 | 0.0030 | 0.996 | 0.9332 | 0.996 | 0.996 | 5.9882 |
| 0.0151 | 30.0 | 33330 | 0.0023 | 0.9963 | 0.9333 | 0.9963 | 0.9963 | 5.9813 |
| 0.0193 | 31.0 | 34441 | 0.0020 | 0.9965 | 0.9339 | 0.9964 | 0.9965 | 5.9869 |
| 0.0171 | 32.0 | 35552 | 0.0022 | 0.997 | 0.9338 | 0.997 | 0.997 | 5.9865 |
| 0.0124 | 33.0 | 36663 | 0.0015 | 0.9978 | 0.935 | 0.9979 | 0.9979 | 5.9842 |
| 0.0096 | 34.0 | 37774 | 0.0016 | 0.9984 | 0.9358 | 0.9984 | 0.9984 | 5.9853 |
| 0.0107 | 35.0 | 38885 | 0.0005 | 0.9988 | 0.9365 | 0.9989 | 0.9989 | 5.9901 |
| 0.009 | 36.0 | 39996 | 0.0011 | 0.999 | 0.9366 | 0.9989 | 0.9989 | 5.9887 |
| 0.01 | 37.0 | 41107 | 0.0008 | 0.9985 | 0.9365 | 0.9986 | 0.9986 | 5.9895 |
| 0.0049 | 38.0 | 42218 | 0.0010 | 0.9985 | 0.9361 | 0.9985 | 0.9985 | 5.9899 |
| 0.0072 | 39.0 | 43329 | 0.0004 | 0.9994 | 0.937 | 0.9994 | 0.9994 | 5.9866 |
| 0.0033 | 40.0 | 44440 | 0.0003 | 0.9996 | 0.9375 | 0.9996 | 0.9996 | 5.9884 |
| 0.0028 | 41.0 | 45551 | 0.0003 | 0.9996 | 0.9374 | 0.9996 | 0.9996 | 5.9887 |
| 0.0031 | 42.0 | 46662 | 0.0002 | 0.9998 | 0.9377 | 0.9998 | 0.9998 | 5.9856 |
| 0.0026 | 43.0 | 47773 | 0.0002 | 0.9996 | 0.9374 | 0.9996 | 0.9996 | 5.9869 |
| 0.0022 | 44.0 | 48884 | 0.0001 | 0.9999 | 0.9377 | 0.9999 | 0.9999 | 5.9868 |
| 0.0015 | 45.0 | 49995 | 0.0000 | 1.0 | 0.9378 | 1.0 | 1.0 | 5.9868 |
| 0.0014 | 46.0 | 51106 | 0.0000 | 1.0 | 0.9378 | 1.0 | 1.0 | 5.9868 |
| 0.0017 | 47.0 | 52217 | 0.0000 | 1.0 | 0.9378 | 1.0 | 1.0 | 5.9868 |
| 0.0018 | 48.0 | 53328 | 0.0000 | 1.0 | 0.9378 | 1.0 | 1.0 | 5.9868 |
| 0.0007 | 49.0 | 54439 | 0.0000 | 1.0 | 0.9378 | 1.0 | 1.0 | 5.9868 |
| 0.0015 | 50.0 | 55550 | 0.0000 | 1.0 | 0.9378 | 1.0 | 1.0 | 5.9868 |
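
The card does not say how these metrics were computed; ROUGE results in this format are typically produced with the Hugging Face evaluate library, as in the sketch below. The helper and its arguments are illustrative, not taken from the card.

```python
# Illustrative metric computation, assuming the standard evaluate/rouge_score
# pipeline was used; this is not confirmed by the card.
import evaluate
import numpy as np

rouge = evaluate.load("rouge")  # requires: pip install evaluate rouge_score

def compute_rouge(predictions, references, generated_lengths):
    # Returns rouge1/rouge2/rougeL/rougeLsum F-scores, matching the columns above.
    scores = rouge.compute(predictions=predictions, references=references,
                           use_stemmer=True)
    scores["gen_len"] = float(np.mean(generated_lengths))  # "Gen Len" column
    return scores
```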

Framework versions

  • Transformers 4.34.1
  • Pytorch 2.0.1
  • Datasets 2.14.6
  • Tokenizers 0.14.1
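
To reproduce this environment, the versions above can be pinned directly; note that the PyTorch pip package is named torch, and CUDA build details are not documented:

```
pip install transformers==4.34.1 torch==2.0.1 datasets==2.14.6 tokenizers==0.14.1
```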

Base model

  • google-t5/t5-large