## Dataset Summary

Original source - [https://github.com/microsoft/OpenKP](https://github.com/microsoft/OpenKP)

## Dataset Structure

### Data Fields

- **id**: unique identifier of the document.
- **document**: whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tags for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase.
- **extractive_keyphrases**: list of all keyphrases present in the document.
- **abstractive_keyphrases**: list of all keyphrases absent from the document.
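
The BIO scheme can be illustrated with a short sketch: present keyphrases are recovered by collecting each B tag together with the I tags that follow it. The toy tokens and tags below are hypothetical, not drawn from the dataset.

```python
# Minimal sketch: recover present keyphrases from a BIO tag sequence.
# The toy tokens/tags are hypothetical, not taken from the dataset.

def decode_bio(tokens, tags):
    """Collect each span that starts with 'B' and continues with 'I' tags."""
    phrases = []
    current = []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                phrases.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:  # 'O' (or a stray 'I' with no opening 'B') closes the span
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

tokens = ["open", "domain", "keyphrase", "extraction", "is", "fun"]
tags   = ["B",    "I",      "B",         "I",          "O",  "O"]
print(decode_bio(tokens, tags))  # ['open domain', 'keyphrase extraction']
```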

### Data Splits

| Split | #datapoints |
|--|--|
| Train | 134894 |
| Test | 6614 |
| Validation | 6616 |

## Usage

### Full Dataset

```python
from datasets import load_dataset

# get the entire dataset
dataset = load_dataset("midas/openkp", "raw")

# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
**Output**

```bash

```

### Keyphrase Extraction
```python
from datasets import load_dataset

# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/openkp", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the train split
print("Sample from training data split")
train_sample = dataset["train"][0]
print("Fields in the sample: ", [key for key in train_sample.keys()])
print("Tokenized Document: ", train_sample["document"])
print("Document BIO Tags: ", train_sample["doc_bio_tags"])
print("\n-----------\n")

# sample from the validation split
print("Sample from validation data split")
validation_sample = dataset["validation"][0]
print("Fields in the sample: ", [key for key in validation_sample.keys()])
print("Tokenized Document: ", validation_sample["document"])
print("Document BIO Tags: ", validation_sample["doc_bio_tags"])
print("\n-----------\n")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", [key for key in test_sample.keys()])
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
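
For token-classification models, the string BIO tags typically need to be mapped to integer ids first. A minimal sketch follows; the `label2id` mapping and the toy tag sequence are assumptions for illustration, not defined by the dataset.

```python
# Sketch: map BIO string tags to integer ids for a token-classification model.
# The label2id mapping below is an assumption, not defined by the dataset.
label2id = {"O": 0, "B": 1, "I": 2}
id2label = {i: tag for tag, i in label2id.items()}

def encode_tags(doc_bio_tags):
    """Convert a document's BIO tag strings into integer label ids."""
    return [label2id[tag] for tag in doc_bio_tags]

print(encode_tags(["B", "I", "O", "B"]))  # [1, 2, 0, 1]
```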
105
+
106
+ ### Keyphrase Generation
107
+ ```python
108
+ # get the dataset only for keyphrase generation
109
+ dataset = load_dataset("midas/openkp", "generation")
110
+
111
+ print("Samples for Keyphrase Generation")
112
+
113
+ # sample from the train split
114
+ print("Sample from training data split")
115
+ train_sample = dataset["train"][0]
116
+ print("Fields in the sample: ", [key for key in train_sample.keys()])
117
+ print("Tokenized Document: ", train_sample["document"])
118
+ print("Extractive/present Keyphrases: ", train_sample["extractive_keyphrases"])
119
+ print("Abstractive/absent Keyphrases: ", train_sample["abstractive_keyphrases"])
120
+ print("\n-----------\n")
121
+
122
+ # sample from the validation split
123
+ print("Sample from validation data split")
124
+ validation_sample = dataset["validation"][0]
125
+ print("Fields in the sample: ", [key for key in validation_sample.keys()])
126
+ print("Tokenized Document: ", validation_sample["document"])
127
+ print("Extractive/present Keyphrases: ", validation_sample["extractive_keyphrases"])
128
+ print("Abstractive/absent Keyphrases: ", validation_sample["abstractive_keyphrases"])
129
+ print("\n-----------\n")
130
+
131
+ # sample from the test split
132
+ print("Sample from test data split")
133
+ test_sample = dataset["test"][0]
134
+ print("Fields in the sample: ", [key for key in test_sample.keys()])
135
+ print("Tokenized Document: ", test_sample["document"])
136
+ print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
137
+ print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
138
+ print("\n-----------\n")
139
+ ```
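
The extractive/abstractive distinction can be reproduced from the document itself: a keyphrase is "present" (extractive) if its tokens occur contiguously in the tokenized document, and "absent" (abstractive) otherwise. A minimal sketch, using hypothetical toy data rather than actual dataset samples:

```python
# Sketch: classify gold keyphrases as present (extractive) or absent
# (abstractive) by checking for a contiguous match in the document tokens.
# The toy document and keyphrases are hypothetical, not from the dataset.

def split_keyphrases(doc_tokens, keyphrases):
    """Partition keyphrases by whether they appear contiguously in the document."""
    present, absent = [], []
    for phrase in keyphrases:
        words = phrase.split()
        n = len(words)
        found = any(doc_tokens[i:i + n] == words
                    for i in range(len(doc_tokens) - n + 1))
        (present if found else absent).append(phrase)
    return present, absent

doc = ["neural", "keyphrase", "generation", "models"]
present, absent = split_keyphrases(doc, ["keyphrase generation", "deep learning"])
print(present)  # ['keyphrase generation']
print(absent)   # ['deep learning']
```

In practice both sides are usually lowercased (and often stemmed) before matching, so the exact present/absent split may differ slightly from this naive exact-match version.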
## Citation Information
```
@inproceedings{Xiong2019OpenDW,
  title={Open Domain Web Keyphrase Extraction Beyond Language Modeling},
  author={Lee Xiong and Chuan Hu and Chenyan Xiong and Daniel Fernando Campos and Arnold Overwijk},
  booktitle={EMNLP},
  year={2019}
}
```

## Contributions
Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.