---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- rouge
- wer
model-index:
- name: bart_extractive_1024_750
  results: []
---

# bart_extractive_1024_750

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co./facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8876
- Rouge1: 0.7224
- Rouge2: 0.4761
- Rougel: 0.6677
- Rougelsum: 0.6675
- Wer: 0.4176

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:------:|
| No log        | 0.13  | 250  | 1.1438          | 0.6714 | 0.403  | 0.61   | 0.6098    | 0.4822 |
| 2.0429        | 0.27  | 500  | 1.0396          | 0.6869 | 0.4286 | 0.6276 | 0.6274    | 0.4574 |
| 2.0429        | 0.4   | 750  | 1.0071          | 0.6941 | 0.4396 | 0.636  | 0.6359    | 0.4501 |
| 1.1127        | 0.53  | 1000 | 0.9806          | 0.7006 | 0.445  | 0.6414 | 0.6413    | 0.444  |
| 1.1127        | 0.66  | 1250 | 0.9681          | 0.7001 | 0.4471 | 0.6423 | 0.6423    | 0.4404 |
| 1.0522        | 0.8   | 1500 | 0.9541          | 0.7026 | 0.4502 | 0.646  | 0.646     | 0.4375 |
| 1.0522        | 0.93  | 1750 | 0.9325          | 0.7125 | 0.461  | 0.6565 | 0.6564    | 0.431  |
| 1.0094        | 1.06  | 2000 | 0.9239          | 0.7069 | 0.4593 | 0.652  | 0.6519    | 0.429  |
| 1.0094        | 1.2   | 2250 | 0.9168          | 0.71   | 0.4631 | 0.6545 | 0.6544    | 0.4265 |
| 0.9166        | 1.33  | 2500 | 0.9095          | 0.7181 | 0.4701 | 0.6631 | 0.663     | 0.4238 |
| 0.9166        | 1.46  | 2750 | 0.9051          | 0.7147 | 0.4679 | 0.6595 | 0.6594    | 0.422  |
| 0.9135        | 1.6   | 3000 | 0.8989          | 0.7227 | 0.4747 | 0.6673 | 0.6672    | 0.4203 |
| 0.9135        | 1.73  | 3250 | 0.9006          | 0.7144 | 0.4696 | 0.6603 | 0.6603    | 0.4194 |
| 0.8846        | 1.86  | 3500 | 0.8868          | 0.7199 | 0.4746 | 0.6656 | 0.6655    | 0.4176 |
| 0.8846        | 1.99  | 3750 | 0.8876          | 0.7224 | 0.4761 | 0.6677 | 0.6675    | 0.4176 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
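### Reconstructed training arguments (sketch)

The hyperparameters listed above map onto the `transformers` Trainer configuration. Below is a hedged reconstruction using `Seq2SeqTrainingArguments`; the `output_dir`, evaluation cadence, and logging interval are assumptions inferred from the results table, not recorded in this card.

```python
# Hedged reconstruction of the training configuration from the hyperparameters
# listed above. output_dir, eval_steps, and logging_steps are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="bart_extractive_1024_750",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # assumed: the table evaluates every 250 steps
    eval_steps=250,
    logging_steps=500,            # assumed: "No log" at step 250, first loss at 500
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default
# optimizer configuration, so no explicit optimizer arguments are needed.
```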
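## How to use (sketch)

The card does not state the intended task, but the seq2seq base model and the checkpoint name suggest summarization with inputs up to 1024 tokens. The snippet below is a minimal inference sketch assuming the standard `transformers` seq2seq API; the model id, input length, and generation settings are illustrative assumptions.

```python
# Minimal inference sketch. The model id is a placeholder: substitute the
# full Hub id (namespace/bart_extractive_1024_750) or a local checkpoint path.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bart_extractive_1024_750"  # placeholder, see comment above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Long input document to summarize ..."
# max_length=1024 is an assumption based on the "1024" in the model name.
inputs = tokenizer(text, max_length=1024, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```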