## Dataset Summary

A dataset for benchmarking keyphrase extraction and generation techniques on long-document English scientific articles. For more details about the dataset, please refer to the original paper: [https://aclanthology.org/D09-1137/](https://aclanthology.org/D09-1137/)

Original source of the data - []()

## Dataset Structure

### Data Fields

- **id**: Unique identifier of the document.
- **document**: Whitespace-separated list of words in the document.
- **doc_bio_tags**: BIO tag for each word in the document. B marks the beginning of a keyphrase, I marks a word inside a keyphrase, and O marks a word that is not part of any keyphrase. A short sketch of how these tags can be decoded follows this list.
- **extractive_keyphrases**: List of all present keyphrases, i.e., keyphrases that appear verbatim in the document.
- **abstractive_keyphrases**: List of all absent keyphrases, i.e., keyphrases that do not appear verbatim in the document.
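
For illustration, here is a minimal sketch of how `doc_bio_tags` lines up with `document` and can be decoded back into present keyphrases. The `keyphrases_from_bio` helper is hypothetical, not part of this dataset or the `datasets` library:

```python
def keyphrases_from_bio(document, doc_bio_tags):
    """Reconstruct present keyphrases from parallel word and BIO tag lists."""
    phrases, current = [], []
    for word, tag in zip(document, doc_bio_tags):
        if tag == "B":
            # a new keyphrase starts; flush any phrase in progress
            if current:
                phrases.append(" ".join(current))
            current = [word]
        elif tag == "I" and current:
            # continue the phrase in progress
            current.append(word)
        else:
            # "O" (or a stray "I" with no preceding "B") ends any phrase
            if current:
                phrases.append(" ".join(current))
            current = []
    if current:
        phrases.append(" ".join(current))
    return phrases

# toy example
print(keyphrases_from_bio(
    ["automatic", "keyphrase", "extraction", "is", "useful"],
    ["B", "I", "I", "O", "O"],
))  # ['automatic keyphrase extraction']
```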

### Data Splits

| Split | #Datapoints |
| --- | --- |
| Test | 182 |

## Usage

### Full Dataset

```python
from datasets import load_dataset

# get the entire dataset
dataset = load_dataset("midas/citeulike180", "raw")

# sample from the test split
print("Sample from test dataset split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", list(test_sample.keys()))
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```

**Output**

```bash

```

### Keyphrase Extraction

```python
from datasets import load_dataset

# get the dataset only for keyphrase extraction
dataset = load_dataset("midas/citeulike180", "extraction")

print("Samples for Keyphrase Extraction")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", list(test_sample.keys()))
print("Tokenized Document: ", test_sample["document"])
print("Document BIO Tags: ", test_sample["doc_bio_tags"])
print("\n-----------\n")
```
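
For token-classification style training, the string BIO tags typically need to be mapped to integer labels. The sketch below is one possible preprocessing step; the `TAG2ID` mapping and the `encode_sample` helper are assumptions for illustration, not part of this dataset:

```python
# hypothetical tag-to-label mapping for token classification
TAG2ID = {"O": 0, "B": 1, "I": 2}

def encode_sample(sample):
    """Turn one dataset sample into (tokens, integer labels)."""
    return {
        "tokens": sample["document"],
        "labels": [TAG2ID[tag] for tag in sample["doc_bio_tags"]],
    }

# toy example
print(encode_sample({
    "document": ["automatic", "keyphrase", "extraction"],
    "doc_bio_tags": ["O", "B", "I"],
}))  # {'tokens': [...], 'labels': [0, 1, 2]}
```

A mapped version of the split can then be produced with, e.g., `dataset["test"].map(encode_sample)`.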

### Keyphrase Generation

```python
from datasets import load_dataset

# get the dataset only for keyphrase generation
dataset = load_dataset("midas/citeulike180", "generation")

print("Samples for Keyphrase Generation")

# sample from the test split
print("Sample from test data split")
test_sample = dataset["test"][0]
print("Fields in the sample: ", list(test_sample.keys()))
print("Tokenized Document: ", test_sample["document"])
print("Extractive/present Keyphrases: ", test_sample["extractive_keyphrases"])
print("Abstractive/absent Keyphrases: ", test_sample["abstractive_keyphrases"])
print("\n-----------\n")
```
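
One common way to use the generation configuration is to flatten each sample into a source/target pair for a sequence-to-sequence model. The sketch below assumes a `"; "` separator and a present-then-absent ordering, which are conventions from the keyphrase generation literature rather than anything this dataset prescribes; `to_seq2seq_pair` is a hypothetical helper:

```python
def to_seq2seq_pair(sample, sep="; "):
    """Build a (source, target) training pair from one dataset sample.

    The source is the space-joined document; the target concatenates
    present keyphrases followed by absent keyphrases with a separator.
    """
    source = " ".join(sample["document"])
    target = sep.join(
        sample["extractive_keyphrases"] + sample["abstractive_keyphrases"]
    )
    return source, target

# toy example
sample = {
    "document": ["automatic", "keyphrase", "extraction"],
    "extractive_keyphrases": ["keyphrase extraction"],
    "abstractive_keyphrases": ["tagging"],
}
print(to_seq2seq_pair(sample))
# ('automatic keyphrase extraction', 'keyphrase extraction; tagging')
```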

## Citation Information

```
@inproceedings{medelyan-etal-2009-human,
    title = "Human-competitive tagging using automatic keyphrase extraction",
    author = "Medelyan, Olena and
      Frank, Eibe and
      Witten, Ian H.",
    booktitle = "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
    month = aug,
    year = "2009",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D09-1137",
    pages = "1318--1327",
}
```

## Contributions

Thanks to [@debanjanbhucs](https://github.com/debanjanbhucs), [@dibyaaaaax](https://github.com/dibyaaaaax) and [@ad6398](https://github.com/ad6398) for adding this dataset.