jordiclive committed · Commit 1fb282f · Parent(s): c908206
Update README.md
README.md CHANGED
@@ -22,11 +22,6 @@ tags:
 
 ### Dataset Summary
 
-There are four features:
-- document: Input news article.
-- summary: One sentence summary of the article.
-- id: BBC ID of the article.
-
 This is a dataset that can be used for research into machine learning and natural language processing. It contains all titles and summaries (or introductions) of English Wikipedia articles, extracted in September of 2017.
 
 The dataset is different from the regular Wikipedia dump and different from the datasets that can be created by gensim because ours contains the extracted summaries and not the entire unprocessed page body. This could be useful if one wants to use the smaller, more concise, and more definitional summaries in their research. Or if one just wants to use a smaller but still diverse dataset for efficient training with resource constraints.
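For readers who want to work with the dataset described in the summary above, the sketch below shows how such a title/summary corpus could be loaded and inspected with the Hugging Face `datasets` library. The repository ID `jordiclive/wikipedia-summary-dataset` and the column names `title` and `summary` are assumptions for illustration, not confirmed by this commit; check the dataset card for the actual values.

```python
# Minimal sketch, assuming the dataset is published on the Hugging Face Hub.
# The repository ID and column names are placeholders for illustration only.
from datasets import load_dataset

# Hypothetical repository ID; replace with the real one from the dataset card.
ds = load_dataset("jordiclive/wikipedia-summary-dataset", split="train")

# Print the first article's title and its summary/introduction (assumed field names).
example = ds[0]
print(example["title"])
print(example["summary"])
```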