Upload README.md with huggingface_hub
#2 by lbourdois - opened
README.md CHANGED
---
language: en
license: apache-2.0
---

HF-version model for PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).

The original code can be found [here](https://github.com/allenai/PRIMER). You can find the scripts and notebooks to train/evaluate the model in the original GitHub repo.

* Note: due to differences between the implementations of the original Longformer and the Hugging Face LED model, the results of the converted models differ slightly. We ran a sanity check on both the fine-tuned and non-fine-tuned models on the **MultiNews dataset**; the results are shown below:

| Model | Rouge-1 | Rouge-2 | Rouge-L |
| --- | --- | --- | --- |
| PRIMERA | 42.0 | 13.6 | 20.8 |
| PRIMERA-hf | 41.7 | 13.6 | 20.5 |
| PRIMERA (fine-tuned) | 49.9 | 21.1 | 25.9 |
| PRIMERA-hf (fine-tuned) | 49.9 | 20.9 | 25.8 |

You can load the model as follows:

```python
from transformers import (
    AutoTokenizer,
    LEDConfig,
    LEDForConditionalGeneration,
)

# PRIMERA is converted to the Hugging Face LED architecture
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config = LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
```
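For a quick end-to-end example, the sketch below (not from the original repo) concatenates source documents with PRIMERA's document separator and generates a summary. The `<doc-sep>` token name follows the original PRIMER codebase, while the sample texts and generation settings are illustrative assumptions:

```python
import torch

# Minimal usage sketch (illustrative, not from the original repo).
# PRIMERA joins the source documents with a separator token; here we
# assume it is '<doc-sep>', as in the original PRIMER codebase.
docs = [
    "Wildfires spread across the region on Monday, forcing evacuations.",
    "Officials said on Tuesday the fires were 40 percent contained.",
]
input_text = " <doc-sep> ".join(docs)

inputs = tokenizer(input_text, return_tensors="pt", truncation=True, max_length=4096)

# LED uses local windowed attention plus global attention on selected tokens;
# PRIMERA places global attention on the <s> and <doc-sep> tokens.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1
docsep_id = tokenizer.convert_tokens_to_ids("<doc-sep>")
global_attention_mask[inputs["input_ids"] == docsep_id] = 1

# Beam-search settings below are placeholders, not tuned values.
summary_ids = model.generate(
    **inputs,
    global_attention_mask=global_attention_mask,
    num_beams=5,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

If `<doc-sep>` is not registered in the checkpoint's tokenizer, `convert_tokens_to_ids` returns the unknown-token id, so it is worth checking the preprocessing in the original repo before relying on this sketch.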