---
license: apache-2.0
base_model: GanjinZero/biobart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-BioBART-20-epochs-1024-input-128-output
  results: []
---

# fine-tuned-BioBART-20-epochs-1024-input-128-output

This model is a fine-tuned version of [GanjinZero/biobart-base](https://huggingface.co./GanjinZero/biobart-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6050
- Rouge1: 0.1704
- Rouge2: 0.0496
- Rougel: 0.138
- Rougelsum: 0.1356
- Gen Len: 34.1

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20

A sketch of how these settings map onto `Seq2SeqTrainingArguments` is given after the framework versions below.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 151  | 6.5303          | 0.0    | 0.0    | 0.0    | 0.0       | 12.58   |
| No log        | 2.0   | 302  | 1.9967          | 0.1046 | 0.0318 | 0.0908 | 0.0904    | 26.22   |
| No log        | 3.0   | 453  | 1.6736          | 0.0447 | 0.0076 | 0.036  | 0.0353    | 15.66   |
| 4.5402        | 4.0   | 604  | 1.5728          | 0.1397 | 0.0344 | 0.1068 | 0.1079    | 34.51   |
| 4.5402        | 5.0   | 755  | 1.5231          | 0.1675 | 0.0345 | 0.1325 | 0.1328    | 34.4    |
| 4.5402        | 6.0   | 906  | 1.4986          | 0.1195 | 0.0287 | 0.0863 | 0.0873    | 38.66   |
| 1.1958        | 7.0   | 1057 | 1.4791          | 0.1478 | 0.0379 | 0.1172 | 0.1176    | 35.41   |
| 1.1958        | 8.0   | 1208 | 1.4802          | 0.1459 | 0.0368 | 0.1066 | 0.108     | 32.5    |
| 1.1958        | 9.0   | 1359 | 1.4841          | 0.1687 | 0.0289 | 0.1342 | 0.1345    | 30.89   |
| 0.7933        | 10.0  | 1510 | 1.5005          | 0.1457 | 0.035  | 0.1125 | 0.1103    | 34.3    |
| 0.7933        | 11.0  | 1661 | 1.5101          | 0.1808 | 0.0364 | 0.1498 | 0.1505    | 31.33   |
| 0.7933        | 12.0  | 1812 | 1.5262          | 0.1882 | 0.0419 | 0.1553 | 0.1549    | 31.65   |
| 0.7933        | 13.0  | 1963 | 1.5481          | 0.167  | 0.032  | 0.1381 | 0.139     | 31.04   |
| 0.5232        | 14.0  | 2114 | 1.5494          | 0.1723 | 0.0442 | 0.1407 | 0.138     | 34.88   |
| 0.5232        | 15.0  | 2265 | 1.5590          | 0.1801 | 0.0318 | 0.142  | 0.1413    | 37.99   |
| 0.5232        | 16.0  | 2416 | 1.5829          | 0.1608 | 0.0353 | 0.1249 | 0.1249    | 33.97   |
| 0.3565        | 17.0  | 2567 | 1.5837          | 0.1535 | 0.0354 | 0.1159 | 0.115     | 35.96   |
| 0.3565        | 18.0  | 2718 | 1.5977          | 0.1565 | 0.0349 | 0.1244 | 0.1227    | 34.29   |
| 0.3565        | 19.0  | 2869 | 1.6002          | 0.169  | 0.0428 | 0.1358 | 0.1331    | 34.84   |
| 0.2734        | 20.0  | 3020 | 1.6050          | 0.1704 | 0.0496 | 0.138  | 0.1356    | 34.1    |

### Framework versions

- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.16.1
- Tokenizers 0.15.0
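
### Training configuration sketch

The optimizer, scheduler, and batch settings above correspond to the stock 🤗 Transformers `Seq2SeqTrainingArguments`. A minimal sketch, assuming the standard `Seq2SeqTrainer` setup; the `output_dir`, evaluation strategy, and `predict_with_generate` flag are assumptions, not values documented in this card:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the "Training hyperparameters" list above.
# output_dir, evaluation_strategy, and predict_with_generate are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-BioBART-20-epochs-1024-input-128-output",
    learning_rate=1e-4,                  # 0.0001
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",         # the results table reports one evaluation per epoch
    predict_with_generate=True,          # needed to compute ROUGE during evaluation
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the library's default
# optimizer, matching the optimizer line above.
```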
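
## How to use

The downstream task is not documented, but the ROUGE metrics and the 1024-token input / 128-token output lengths in the model name point to abstractive summarization of biomedical text. A minimal inference sketch under that assumption; the repository id, input text, and beam settings are illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Replace with the actual Hub repository id hosting this checkpoint.
model_id = "fine-tuned-BioBART-20-epochs-1024-input-128-output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # a biomedical passage; replace with real input

# Truncate to the 1024-token input length used during fine-tuning.
inputs = tokenizer(text, max_length=1024, truncation=True, return_tensors="pt")

# max_length=128 matches the output length in the model name;
# num_beams=4 is an illustrative choice, not a documented setting.
summary_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```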