---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- da
license:
- mit
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- text-generation
task_ids:
- language-modeling
pretty_name: Reddit-da
---
# Dataset Card for Reddit-da
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Contributions](#contributions)
## Dataset Description
### Dataset Summary
This dataset consists of 1,908,887 Danish posts from Reddit. The posts were extracted from [this Reddit dump](https://files.pushshift.io/reddit/) and filtered with [this script](https://github.com/NBAiLab/notram/blob/master/corpus_generation_scripts/lang_detect_reddit.py), which uses FastText language identification to keep only posts detected as Danish.
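As an illustration of this kind of filtering (a minimal sketch, not the actual script linked above), FastText language identification can be applied to raw posts roughly as follows, assuming the pretrained `lid.176.bin` model from fasttext.cc has been downloaded:

```python
# Illustrative sketch of FastText-based language filtering; the threshold and
# input handling are assumptions, not taken from the linked NBAiLab script.
import fasttext

model = fasttext.load_model("lid.176.bin")

def is_danish(text: str, threshold: float = 0.7) -> bool:
    """Return True if FastText predicts Danish with sufficient confidence."""
    # FastText's predict() expects single-line input, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "))
    return labels[0] == "__label__da" and probs[0] >= threshold

posts = ["Hej, hvordan går det?", "Hello, how are you?"]
danish_posts = [p for p in posts if is_danish(p)]
```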
### Supported Tasks and Leaderboards
This dataset is suitable for language modelling.
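As a rough sketch (not an official recipe), the dataset could be loaded and tokenised for causal language modelling with the Hugging Face `datasets` and `transformers` libraries. The repository id below is a placeholder for this dataset's actual Hub id, and the `gpt2` tokenizer is only an example checkpoint:

```python
# Sketch: prepare the corpus for causal language modelling.
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder repository id; substitute the actual Hub id of this dataset.
dataset = load_dataset("<hub-user>/reddit-da", split="train")

# Any Danish-capable tokenizer works; this checkpoint is only illustrative.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["id", "text"])
```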
### Languages
This dataset is in Danish.
## Dataset Structure
### Data Instances
Each entry in the dataset contains a short Reddit comment in Danish along with a unique ID.
### Data Fields
An entry in the dataset consists of the following fields:
- `id` (`str`): A unique identifier.
- `text` (`str`): A short Reddit comment.
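To take a quick look at these fields, one could stream a single record (the repository id is again a placeholder for the actual Hub id):

```python
# Inspect one record to see the `id` and `text` fields.
from datasets import load_dataset

stream = load_dataset("<hub-user>/reddit-da", split="train", streaming=True)
first = next(iter(stream))
print(first.keys())            # expected: dict_keys(['id', 'text'])
print(first["id"], first["text"][:80])
```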
## Additional Information
### Licensing Information
The dataset is released under the MIT license.
### Contributions
Thanks to [@saattrupdan](https://github.com/saattrupdan) for adding this dataset to the Hugging Face Hub. |