Update README.md
README.md
@@ -13,6 +13,18 @@ Original source of the data - [https://github.com/ygorg/KPTimes](https://github.
</p>

<br>

KPTimes is a large-scale dataset of 279,923 news articles from the New York Times and JPTimes. It is one of the few datasets whose keyphrase annotations were curated by editors, who can be considered experts. The authors developed it to provide a large corpus for training neural keyphrase generation models in a domain other than the scientific one, and to understand how keyphrases annotated by experts differ from those annotated by non-experts. They show that editors tend to assign generic keyphrases that are not present in the article's text: 55% of the keyphrases are abstractive. Keyphrases in the news domain, as presented in this work, are also shorter on average (1.4 words) than those in scientific datasets (2.4 words).

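The 55% figure refers to keyphrases that do not appear verbatim in the article text. Below is a minimal sketch of how such an "absent keyphrase" ratio could be computed; the case and whitespace normalization is a simplifying assumption, not the paper's exact matching procedure:

```python
import re

def is_present(keyphrase: str, text: str) -> bool:
    """True if the keyphrase occurs verbatim (case-insensitive,
    whitespace-normalized) in the text."""
    phrase = " ".join(keyphrase.lower().split())
    haystack = " ".join(text.lower().split())
    return re.search(r"\b" + re.escape(phrase) + r"\b", haystack) is not None

def absent_ratio(keyphrases: list[str], text: str) -> float:
    """Fraction of keyphrases that do NOT appear in the text,
    i.e. the 'abstractive' share."""
    if not keyphrases:
        return 0.0
    return sum(not is_present(kp, text) for kp in keyphrases) / len(keyphrases)

# Toy example: "monetary policy" is absent from the text, so the ratio is 0.5.
article = "The central bank raised interest rates for the third time this year."
print(absent_ratio(["interest rates", "monetary policy"], article))  # 0.5
```
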
The dataset is randomly divided into train (92.8%), validation (3.6%) and test (3.6%) splits. To help models trained on this dataset generalize, the authors did not want all of the data to come from a single source (the New York Times), so they added 10K more articles from JPTimes.

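As a quick sanity check (not from the paper), these rounded percentages are consistent with absolute split sizes of roughly 260K/10K/10K articles:

```python
total = 279_923
fractions = {"train": 0.928, "validation": 0.036, "test": 0.036}

for split, frac in fractions.items():
    # Percentages are rounded, so the counts are approximate (~260K/10K/10K).
    print(f"{split}: ~{round(total * frac):,} articles")
```
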
Key aspects of the dataset:

- Large-scale news dataset comprising 279,923 articles, which supports the training of neural models.
- Includes expert annotations: keyphrases curated by editors.
- Illustrates how expert annotations differ from those found in existing datasets.
- Author-assigned keyphrases, as found in existing datasets, are not consistent.
- Heuristics were applied to identify the content of each article: title, headline, and body.
- Source of articles: the New York Times (plus 10K articles from JPTimes).

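To work with the dataset programmatically, it can be loaded with the Hugging Face `datasets` library. The Hub identifier below is an assumption; substitute the actual dataset id (and a configuration name, if the hosted version defines one):

```python
from datasets import load_dataset  # pip install datasets

# Hypothetical Hub identifier; replace with the actual dataset id
# (and a configuration name, if the hosted version requires one).
ds = load_dataset("midas/kptimes")

print(ds)               # split names and sizes
print(ds["train"][0])   # one raw example; field names depend on the card
```
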
## Dataset Structure