---
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: fine-tuned-bart-20-epochs-wang-lab
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# fine-tuned-bart-20-epochs-wang-lab

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co./facebook/bart-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1462
- Rouge1: 0.2876
- Rouge2: 0.1104
- RougeL: 0.2587
- RougeLsum: 0.2583
- Gen Len: 15.32
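
The ROUGE scores and a mean generated length of about 15 tokens point to a short-form summarization setup. A minimal inference sketch follows; the hub namespace in `model_id` and the generation settings are assumptions, not values taken from the original training script:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# NOTE: the namespace prefix of the hub id is an assumption; replace it
# with the actual repository path for this checkpoint.
model_id = "<namespace>/fine-tuned-bart-20-epochs-wang-lab"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Replace this with the kind of input the model was fine-tuned on."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)

# Gen Len of ~15 in the evaluation results suggests short outputs;
# max_new_tokens here is a guess rather than a value from the training setup.
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```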

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
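
For reference, these settings map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is a reconstruction from the list above, not the original training script; `output_dir` and any option not listed (e.g. logging or mixed precision) are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above. Adam betas and epsilon
# match the library defaults, so they are not set explicitly here.
training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-bart-20-epochs-wang-lab",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",   # per-epoch eval rows appear in the results table
    predict_with_generate=True,    # needed for ROUGE / Gen Len during evaluation
)
```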

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log        | 1.0   | 301  | 0.8236          | 0.2393 | 0.0872 | 0.2103 | 0.2098    | 15.1    |
| 2.6644        | 2.0   | 602  | 0.7800          | 0.2486 | 0.0882 | 0.219  | 0.2187    | 14.24   |
| 2.6644        | 3.0   | 903  | 0.7623          | 0.3152 | 0.131  | 0.2914 | 0.2901    | 15.83   |
| 0.6713        | 4.0   | 1204 | 0.7802          | 0.2909 | 0.104  | 0.2577 | 0.2577    | 14.4    |
| 0.4641        | 5.0   | 1505 | 0.8159          | 0.2986 | 0.1058 | 0.2629 | 0.2606    | 14.71   |
| 0.4641        | 6.0   | 1806 | 0.8451          | 0.3212 | 0.1374 | 0.2892 | 0.2892    | 15.3    |
| 0.2986        | 7.0   | 2107 | 0.8913          | 0.2965 | 0.115  | 0.2724 | 0.2728    | 15.25   |
| 0.2986        | 8.0   | 2408 | 0.9194          | 0.2686 | 0.1036 | 0.2395 | 0.2389    | 15.07   |
| 0.2025        | 9.0   | 2709 | 0.9674          | 0.283  | 0.1077 | 0.2549 | 0.2535    | 15.38   |
| 0.1397        | 10.0  | 3010 | 0.9848          | 0.2805 | 0.1127 | 0.2484 | 0.2475    | 15.99   |
| 0.1397        | 11.0  | 3311 | 1.0356          | 0.2943 | 0.1158 | 0.2568 | 0.2586    | 15.32   |
| 0.0922        | 12.0  | 3612 | 1.0481          | 0.3291 | 0.1211 | 0.297  | 0.2999    | 15.39   |
| 0.0922        | 13.0  | 3913 | 1.0846          | 0.2861 | 0.1074 | 0.2473 | 0.2482    | 15.04   |
| 0.0618        | 14.0  | 4214 | 1.0941          | 0.2929 | 0.103  | 0.2511 | 0.2505    | 15.34   |
| 0.042         | 15.0  | 4515 | 1.1076          | 0.2639 | 0.1111 | 0.2349 | 0.2328    | 15.11   |
| 0.042         | 16.0  | 4816 | 1.1180          | 0.2825 | 0.1125 | 0.2465 | 0.2452    | 15.08   |
| 0.03          | 17.0  | 5117 | 1.1310          | 0.2924 | 0.1073 | 0.2527 | 0.2528    | 15.47   |
| 0.03          | 18.0  | 5418 | 1.1407          | 0.2823 | 0.1017 | 0.2491 | 0.2471    | 15.1    |
| 0.0204        | 19.0  | 5719 | 1.1445          | 0.2952 | 0.1142 | 0.2635 | 0.264     | 15.13   |
| 0.0153        | 20.0  | 6020 | 1.1462          | 0.2876 | 0.1104 | 0.2587 | 0.2583    | 15.32   |
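
The validation loss is lowest around epoch 3 (0.7623) and rises afterwards while the training loss keeps falling; the epoch-20 row matches the metrics reported at the top of the card. ROUGE scores comparable to those columns can be computed with the `evaluate` library along the lines of the sketch below; whether stemming or any other option was used for this table is an assumption:

```python
import evaluate

# Load the ROUGE metric (rouge_score backend).
rouge = evaluate.load("rouge")

predictions = ["the generated summary"]  # model outputs
references = ["the reference summary"]   # gold summaries

# use_stemmer=True is a common choice in summarization recipes, but it is
# an assumption here, not a documented setting from this training run.
scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```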


### Framework versions

- Transformers 4.36.2
- Pytorch 1.12.1+cu113
- Datasets 2.15.0
- Tokenizers 0.15.0