---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
metrics:
- rouge
- wer
model-index:
- name: bart_extractive_1024_750
results: []
---
# bart_extractive_1024_750
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co./facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8876
- ROUGE-1: 0.7224
- ROUGE-2: 0.4761
- ROUGE-L: 0.6677
- ROUGE-Lsum: 0.6675
- WER: 0.4176
## Model description
More information needed
## Intended uses & limitations
More information needed
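Although the intended use is not documented, the BART-large base, the ROUGE/WER metrics, and the model name all point to a summarization-style sequence-to-sequence task. Below is a minimal inference sketch; the repo id, the 1024-token truncation, the 750-token generation cap, and the beam-search settings are assumptions inferred from the model name, not documented behavior:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
model_id = "bart_extractive_1024_750"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

document = "Long input text to summarize ..."

# BART-large's encoder accepts at most 1024 tokens, matching the "1024"
# in the model name; longer inputs are truncated here.
inputs = tokenizer(document, max_length=1024, truncation=True, return_tensors="pt")

# The "750" in the model name presumably caps the summary length; beam
# search is an arbitrary choice, since decoding settings are undocumented.
summary_ids = model.generate(**inputs, max_new_tokens=750, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```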
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
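For reference, these settings map onto `Seq2SeqTrainingArguments` roughly as sketched below; `output_dir` is a placeholder, and the dataset and `Seq2SeqTrainer` wiring are not documented in this card:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstruction of the hyperparameters listed above; anything not
# listed there (e.g. output_dir) is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart_extractive_1024_750",
    learning_rate=2e-5,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,  # "Native AMP" mixed-precision training
)
```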
### Training results
| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | WER    |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:----------:|:------:|
| No log        | 0.13  | 250  | 1.1438          | 0.6714  | 0.4030  | 0.6100  | 0.6098     | 0.4822 |
| 2.0429        | 0.27  | 500  | 1.0396          | 0.6869  | 0.4286  | 0.6276  | 0.6274     | 0.4574 |
| 2.0429        | 0.40  | 750  | 1.0071          | 0.6941  | 0.4396  | 0.6360  | 0.6359     | 0.4501 |
| 1.1127        | 0.53  | 1000 | 0.9806          | 0.7006  | 0.4450  | 0.6414  | 0.6413     | 0.4440 |
| 1.1127        | 0.66  | 1250 | 0.9681          | 0.7001  | 0.4471  | 0.6423  | 0.6423     | 0.4404 |
| 1.0522        | 0.80  | 1500 | 0.9541          | 0.7026  | 0.4502  | 0.6460  | 0.6460     | 0.4375 |
| 1.0522        | 0.93  | 1750 | 0.9325          | 0.7125  | 0.4610  | 0.6565  | 0.6564     | 0.4310 |
| 1.0094        | 1.06  | 2000 | 0.9239          | 0.7069  | 0.4593  | 0.6520  | 0.6519     | 0.4290 |
| 1.0094        | 1.20  | 2250 | 0.9168          | 0.7100  | 0.4631  | 0.6545  | 0.6544     | 0.4265 |
| 0.9166        | 1.33  | 2500 | 0.9095          | 0.7181  | 0.4701  | 0.6631  | 0.6630     | 0.4238 |
| 0.9166        | 1.46  | 2750 | 0.9051          | 0.7147  | 0.4679  | 0.6595  | 0.6594     | 0.4220 |
| 0.9135        | 1.60  | 3000 | 0.8989          | 0.7227  | 0.4747  | 0.6673  | 0.6672     | 0.4203 |
| 0.9135        | 1.73  | 3250 | 0.9006          | 0.7144  | 0.4696  | 0.6603  | 0.6603     | 0.4194 |
| 0.8846        | 1.86  | 3500 | 0.8868          | 0.7199  | 0.4746  | 0.6656  | 0.6655     | 0.4176 |
| 0.8846        | 1.99  | 3750 | 0.8876          | 0.7224  | 0.4761  | 0.6677  | 0.6675     | 0.4176 |
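The ROUGE and WER columns can be reproduced with the Hugging Face `evaluate` package (a plausible choice for a `generated_from_trainer` card, though the exact metric code is not documented); a toy sketch:

```python
import evaluate

rouge = evaluate.load("rouge")
wer = evaluate.load("wer")

# Toy prediction/reference pair for illustration only; the real
# evaluation set is not documented in this card.
predictions = ["the model produced this summary"]
references = ["the reference summary for this example"]

print(rouge.compute(predictions=predictions, references=references))
# -> dict with rouge1, rouge2, rougeL, rougeLsum (the table's four ROUGE columns)
print(wer.compute(predictions=predictions, references=references))
# -> word error rate as a float (the table's WER column)
```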
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2