Add evaluation results on the default config and test split of xsum

#1
opened by autoevaluator (HF staff)
Files changed (1)
  1. README.md +38 -3
README.md CHANGED
```diff
@@ -5,10 +5,45 @@ tags:
 - summarization
 model-index:
 - name: bart-base-xsum
-  results: []
+  results:
+  - task:
+      type: summarization
+      name: Summarization
+    dataset:
+      name: xsum
+      type: xsum
+      config: default
+      split: test
+    metrics:
+    - name: ROUGE-1
+      type: rouge
+      value: 38.643
+      verified: true
+    - name: ROUGE-2
+      type: rouge
+      value: 17.7546
+      verified: true
+    - name: ROUGE-L
+      type: rouge
+      value: 32.2114
+      verified: true
+    - name: ROUGE-LSUM
+      type: rouge
+      value: 32.2207
+      verified: true
+    - name: loss
+      type: loss
+      value: 1.8224396705627441
+      verified: true
+    - name: gen_len
+      type: gen_len
+      value: 19.7028
+      verified: true
   dataset:
-    type: {xsum}
-    name: {xsum}
+    type:
+      xsum: null
+    name:
+      xsum: null
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
```
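The `model-index` block added above is plain YAML front matter, so the reported metrics can be read back out programmatically. A minimal sketch, assuming PyYAML is installed and using a trimmed copy of the front matter from this PR:

```python
import yaml

# Trimmed copy of the model-index front matter added in this PR
# (only the ROUGE-1 metric is reproduced here for brevity).
CARD = """\
model-index:
- name: bart-base-xsum
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: xsum
      type: xsum
      config: default
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 38.643
      verified: true
"""

meta = yaml.safe_load(CARD)
# Each entry under "results" pairs one (task, dataset) with its metrics.
result = meta["model-index"][0]["results"][0]
metrics = {m["name"]: m["value"] for m in result["metrics"]}
print(result["dataset"]["split"])  # test
print(metrics["ROUGE-1"])          # 38.643
```

The same parsing works on the full block; each metric entry carries its own `name`, `type`, `value`, and `verified` flag.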