
text_shortening_model_v72

This model is a fine-tuned version of t5-small on an unspecified dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the list):

  • Loss: 1.6295
  • Bert precision: 0.9015
  • Bert recall: 0.9003
  • Bert f1-score: 0.9004
  • Average word count: 6.4845
  • Max word count: 16
  • Min word count: 2
  • Average token count: 10.5656
  • % of shortened texts longer than 12 words: 1.1011

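A minimal usage sketch (not part of the original card): loading the checkpoint with the transformers Auto classes and generating a shortened text. The card does not document the expected input format, so whether a T5-style task prefix is needed is an open assumption, flagged in the comments.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the fine-tuned checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("ldos/text_shortening_model_v72")
model = AutoModelForSeq2SeqLM.from_pretrained("ldos/text_shortening_model_v72")

text = "The quick brown fox jumped over the extremely lazy dog sleeping in the sun."

# Assumption: raw text in, shortened text out; the card does not say whether
# a T5-style task prefix (e.g. "summarize: ") was used during training.
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=24, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
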
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 40

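The hyperparameters above map directly onto Seq2SeqTrainingArguments; the sketch below shows that mapping, assuming the standard Trainer setup (the Adam betas/epsilon and linear schedule listed above are the Trainer defaults). The dataset and preprocessing are not documented, so `train_ds` and `eval_ds` are hypothetical placeholders, not the author's pipeline.

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Mirrors the hyperparameters listed above.
args = Seq2SeqTrainingArguments(
    output_dir="text_shortening_model_v72",
    learning_rate=5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=40,
    evaluation_strategy="epoch",
)

# `train_ds` / `eval_ds` are placeholders: the card does not name the dataset.
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    tokenizer=tokenizer,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```
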
Training results

| Training Loss | Epoch | Step | Validation Loss | Bert precision | Bert recall | Bert f1-score | Average word count | Max word count | Min word count | Average token count | % shortened texts > 12 words |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.6981 | 1.0 | 37 | 1.2099 | 0.8879 | 0.8868 | 0.8868 | 6.5786 | 15 | 1 | 10.3994 | 0.8008 |
| 1.1993 | 2.0 | 74 | 1.1320 | 0.8939 | 0.89 | 0.8914 | 6.3013 | 16 | 2 | 10.2663 | 0.9009 |
| 1.0205 | 3.0 | 111 | 1.1073 | 0.8929 | 0.8931 | 0.8925 | 6.6507 | 16 | 2 | 10.7057 | 1.6016 |
| 0.8912 | 4.0 | 148 | 1.0787 | 0.8967 | 0.8966 | 0.8962 | 6.5896 | 16 | 2 | 10.5926 | 1.6016 |
| 0.8027 | 5.0 | 185 | 1.1123 | 0.8991 | 0.8959 | 0.897 | 6.3994 | 16 | 2 | 10.4164 | 1.1011 |
| 0.7251 | 6.0 | 222 | 1.1148 | 0.8983 | 0.8941 | 0.8957 | 6.3013 | 16 | 2 | 10.3333 | 1.3013 |
| 0.6534 | 7.0 | 259 | 1.1348 | 0.8993 | 0.8931 | 0.8957 | 6.2332 | 16 | 2 | 10.2012 | 1.2012 |
| 0.5895 | 8.0 | 296 | 1.1537 | 0.8982 | 0.8959 | 0.8966 | 6.4945 | 16 | 2 | 10.4995 | 1.6016 |
| 0.5483 | 9.0 | 333 | 1.1656 | 0.901 | 0.8978 | 0.899 | 6.4184 | 16 | 2 | 10.4505 | 1.7017 |
| 0.5117 | 10.0 | 370 | 1.1919 | 0.8977 | 0.896 | 0.8964 | 6.4565 | 15 | 2 | 10.5696 | 1.1011 |
| 0.4639 | 11.0 | 407 | 1.2106 | 0.8999 | 0.8956 | 0.8973 | 6.2653 | 15 | 2 | 10.2943 | 1.001 |
| 0.4267 | 12.0 | 444 | 1.2419 | 0.8975 | 0.8958 | 0.8962 | 6.4625 | 17 | 2 | 10.5115 | 1.7017 |
| 0.4069 | 13.0 | 481 | 1.2583 | 0.9023 | 0.8964 | 0.8988 | 6.1812 | 15 | 2 | 10.1942 | 0.9009 |
| 0.3775 | 14.0 | 518 | 1.2887 | 0.8991 | 0.8982 | 0.8982 | 6.4384 | 15 | 2 | 10.5676 | 1.5015 |
| 0.3495 | 15.0 | 555 | 1.3282 | 0.9015 | 0.8984 | 0.8995 | 6.3604 | 15 | 2 | 10.4895 | 0.9009 |
| 0.3281 | 16.0 | 592 | 1.3276 | 0.9012 | 0.8973 | 0.8988 | 6.2753 | 15 | 2 | 10.3413 | 0.5005 |
| 0.3083 | 17.0 | 629 | 1.3539 | 0.9007 | 0.8979 | 0.8989 | 6.3504 | 16 | 2 | 10.3874 | 1.6016 |
| 0.2906 | 18.0 | 666 | 1.3720 | 0.9006 | 0.8986 | 0.8992 | 6.4204 | 14 | 2 | 10.4785 | 1.2012 |
| 0.2793 | 19.0 | 703 | 1.4130 | 0.8997 | 0.8986 | 0.8987 | 6.4374 | 16 | 2 | 10.5345 | 1.5015 |
| 0.2656 | 20.0 | 740 | 1.4376 | 0.9026 | 0.8986 | 0.9002 | 6.2843 | 16 | 2 | 10.3834 | 1.2012 |
| 0.2399 | 21.0 | 777 | 1.4429 | 0.901 | 0.8997 | 0.8999 | 6.4545 | 16 | 2 | 10.5516 | 1.5015 |
| 0.2316 | 22.0 | 814 | 1.4807 | 0.899 | 0.8987 | 0.8983 | 6.4975 | 16 | 2 | 10.6667 | 1.3013 |
| 0.2237 | 23.0 | 851 | 1.4941 | 0.9002 | 0.8974 | 0.8983 | 6.3363 | 15 | 2 | 10.4484 | 0.9009 |
| 0.2079 | 24.0 | 888 | 1.5101 | 0.9011 | 0.8982 | 0.8992 | 6.3443 | 16 | 2 | 10.4104 | 1.2012 |
| 0.2007 | 25.0 | 925 | 1.5176 | 0.8991 | 0.8983 | 0.8982 | 6.5065 | 16 | 2 | 10.6216 | 1.001 |
| 0.1952 | 26.0 | 962 | 1.5253 | 0.9005 | 0.8979 | 0.8987 | 6.3934 | 15 | 2 | 10.4835 | 1.1011 |
| 0.1901 | 27.0 | 999 | 1.5440 | 0.9007 | 0.8985 | 0.8991 | 6.3904 | 16 | 2 | 10.5185 | 0.8008 |
| 0.1838 | 28.0 | 1036 | 1.5540 | 0.9008 | 0.9002 | 0.9 | 6.4985 | 16 | 2 | 10.6176 | 1.3013 |
| 0.1773 | 29.0 | 1073 | 1.5576 | 0.9013 | 0.9001 | 0.9003 | 6.4835 | 16 | 2 | 10.5866 | 1.3013 |
| 0.1692 | 30.0 | 1110 | 1.5746 | 0.9012 | 0.9003 | 0.9003 | 6.4895 | 16 | 2 | 10.6176 | 1.5015 |
| 0.163 | 31.0 | 1147 | 1.5844 | 0.9014 | 0.9 | 0.9002 | 6.4655 | 16 | 2 | 10.5756 | 1.3013 |
| 0.1587 | 32.0 | 1184 | 1.6071 | 0.9008 | 0.8997 | 0.8998 | 6.4615 | 16 | 2 | 10.6076 | 0.9009 |
| 0.156 | 33.0 | 1221 | 1.6166 | 0.9006 | 0.8998 | 0.8997 | 6.4945 | 16 | 2 | 10.6166 | 1.2012 |
| 0.1546 | 34.0 | 1258 | 1.6099 | 0.9011 | 0.8987 | 0.8994 | 6.3834 | 13 | 2 | 10.4965 | 0.9009 |
| 0.1472 | 35.0 | 1295 | 1.6167 | 0.9018 | 0.8992 | 0.9001 | 6.3974 | 14 | 2 | 10.4665 | 1.001 |
| 0.1472 | 36.0 | 1332 | 1.6271 | 0.9006 | 0.9 | 0.8998 | 6.5185 | 16 | 2 | 10.6216 | 1.5015 |
| 0.1452 | 37.0 | 1369 | 1.6226 | 0.9023 | 0.9007 | 0.901 | 6.4595 | 16 | 2 | 10.5485 | 1.4014 |
| 0.1415 | 38.0 | 1406 | 1.6221 | 0.9015 | 0.9006 | 0.9006 | 6.5005 | 16 | 2 | 10.5846 | 1.4014 |
| 0.1398 | 39.0 | 1443 | 1.6272 | 0.9012 | 0.9002 | 0.9003 | 6.5025 | 16 | 2 | 10.5866 | 1.2012 |
| 0.14 | 40.0 | 1480 | 1.6295 | 0.9015 | 0.9003 | 0.9004 | 6.4845 | 16 | 2 | 10.5656 | 1.1011 |

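The BERTScore and length columns above can be recomputed with the evaluate library. The sketch below is one plausible reading, not the author's exact evaluation code: it assumes whitespace-split word counts and the default English BERTScore backbone, since the card does not specify the metric configuration.

```python
import evaluate

bertscore = evaluate.load("bertscore")

def shortening_metrics(predictions, references):
    """Recompute the table's metrics for a batch of shortened texts."""
    scores = bertscore.compute(
        predictions=predictions, references=references, lang="en"
    )
    # Assumption: "word count" means whitespace-split tokens.
    word_counts = [len(p.split()) for p in predictions]
    n = len(predictions)
    return {
        "bert_precision": sum(scores["precision"]) / n,
        "bert_recall": sum(scores["recall"]) / n,
        "bert_f1": sum(scores["f1"]) / n,
        "avg_word_count": sum(word_counts) / n,
        "max_word_count": max(word_counts),
        "min_word_count": min(word_counts),
        "pct_longer_than_12_words": 100 * sum(c > 12 for c in word_counts) / n,
    }
```
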
Framework versions

  • Transformers 4.33.1
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3

Model tree for ldos/text_shortening_model_v72

  • Base model: google-t5/t5-small