Modalities: Text
Formats: csv
Languages: Bengali
Size: 100K - 1M
Tags: headline-generation, low-resource, information-extraction, news-clustering, keyword-identification, document-categorization
License:
Update README.md
README.md CHANGED
@@ -93,14 +93,13 @@ test set, respectively.
 | Edu-Career | 4,008 | 90 | 272 | 4,372 |
 | Science-Tech | 1,046 | 23 | 71 | 1,141 |
 | Religion | 269 | 6 | 18 | 294 |
-|:-------------:|:-----------:|:---------:|:----------:|:-----------:|
 | **Total** | **220,574** | **4,994** | **15,012** | **240,580** |
 
 ## Dataset Creation
 
 We crawl around 900,000 raw data samples from seven famous Bengali newspapers concentrating on certain
 criteria, such as headline, article, image caption, category, and topic words. Since each of the newspapers
-mentioned above has
+mentioned above has its own professional authors and distinct writing style, we consider multiple sources
 to prevent the bias of a particular annotation style. To ensure content diversity, we also cover various
 domains from all the news dailies. The majority of the news samples are extracted from HTML bodies of the
 corresponding publications, while some are rendered using JavaScript. However, two of them do not provide
@@ -164,7 +163,7 @@ We considered some ethical aspects while scraping the data. We requested data at
 without any intention of a DDoS attack. Moreover, for each website, we read the instructions listed in
 robots.txt to check whether we can crawl the intended content. We tried to minimize offensive texts in
 the data by explicitly crawling the sites where such contents are minimal. Further, we removed the
-Personal Identifying Information (PII) such as name, phone number, email address
+Personal Identifying Information (PII) such as name, phone number, email address, _etc._ from the corpus.
 
 ### Other Known Limitations
 
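The crawling workflow described in the card (rate-limited requests, robots.txt checks, and extraction of headline, article, and category fields from HTML bodies) could look roughly like the minimal Python sketch below. The user agent string, delay value, and HTML selectors are illustrative assumptions, not the authors' actual pipeline, and a requests/BeautifulSoup approach would not cover the JavaScript-rendered sites mentioned above (those would need a headless browser).

```python
# Illustrative sketch of a polite crawler: check robots.txt, throttle requests,
# and pull headline/article/category fields out of the HTML body.
# Selectors, USER_AGENT, and DELAY_SECONDS are assumptions, not the dataset authors' code.
import time
import urllib.robotparser
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

USER_AGENT = "dataset-research-crawler"   # hypothetical identifier
DELAY_SECONDS = 2.0                        # throttle requests so crawling never resembles a DDoS


def allowed_by_robots(url: str) -> bool:
    """Read the site's robots.txt and check whether this URL may be fetched."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(urljoin(f"{parts.scheme}://{parts.netloc}", "/robots.txt"))
    rp.read()
    return rp.can_fetch(USER_AGENT, url)


def fetch_article(url: str) -> dict | None:
    """Fetch one news page and extract the kinds of fields used in the dataset."""
    if not allowed_by_robots(url):
        return None
    time.sleep(DELAY_SECONDS)  # fixed delay between requests
    html = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return {
        "headline": soup.find("h1").get_text(strip=True) if soup.find("h1") else "",
        "article": " ".join(p.get_text(strip=True) for p in soup.find_all("p")),
        # category/topic words are often exposed as meta keywords; this is site-specific
        "category": (soup.find("meta", attrs={"name": "keywords"}) or {}).get("content", ""),
    }
```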
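The PII-removal step can likewise be sketched with simple regular expressions for email addresses and phone numbers. The patterns below are rough assumptions (the phone pattern loosely targets Bangladeshi mobile numbers), and removing person names would need a separate NER pass that is not shown here.

```python
# Minimal sketch of the kind of PII scrubbing the card describes: masking email
# addresses and phone numbers with regular expressions. Patterns are approximate.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
# Loose pattern for Bangladeshi-style mobile numbers, e.g. +8801XXXXXXXXX or 01XXXXXXXXX
PHONE_RE = re.compile(r"(?:\+?88)?01[3-9]\d{8}")


def scrub_pii(text: str) -> str:
    """Replace email addresses and phone numbers with neutral placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text


print(scrub_pii("যোগাযোগ: reporter@example.com, +8801712345678"))
# -> যোগাযোগ: [EMAIL], [PHONE]
```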