| datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card |
|---|---|---|---|---|---|---|---|---|
codeparrot/github-code | codeparrot | "2022-10-20T15:01:14Z" | 12,888 | 302 | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"region:us"
] | [
"text-generation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: github-code
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids:
- language-modeling
---
# GitHub Code Dataset
## Dataset Description
The GitHub Code dataset consists of 115M code files from GitHub in 30 programming languages with over 60 extensions, totaling 1TB of data. The dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it
The GitHub Code dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following two lines of code:
```python
from datasets import load_dataset
ds = load_dataset("codeparrot/github-code", streaming=True, split="train")
print(next(iter(ds)))
#OUTPUT:
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
You can see that besides the code, repo name, and path, the programming language, license, and size of the file are also part of the dataset. You can also filter the dataset for any subset of the 30 included languages (see the full list below): just pass the list of languages via the `languages` argument. E.g. if your dream is to build a Codex model for Dockerfiles, use the following configuration:
```python
ds = load_dataset("codeparrot/github-code", streaming=True, split="train", languages=["Dockerfile"])
print(next(iter(ds))["code"])
#OUTPUT:
"""\
FROM rockyluke/ubuntu:precise
ENV DEBIAN_FRONTEND="noninteractive" \
TZ="Europe/Amsterdam"
...
"""
```
We also have access to the license of the origin repo of a file so we can filter for licenses in the same way we filtered for languages:
```python
from collections import Counter
from itertools import islice

ds = load_dataset("codeparrot/github-code", streaming=True, split="train", licenses=["mit", "isc"])
licenses = []
for element in islice(ds, 10_000):
    licenses.append(element["license"])
print(Counter(licenses))
#OUTPUT:
Counter({'mit': 9896, 'isc': 104})
```
Naturally, you can also download the full dataset. Note that this will download ~300GB of compressed text data, and the uncompressed dataset will take up ~1TB of storage:
```python
ds = load_dataset("codeparrot/github-code", split="train")
```
## Data Structure
### Data Instances
```python
{
'code': "import mod189 from './mod189';\nvar value=mod189+1;\nexport default value;\n",
'repo_name': 'MirekSz/webpack-es6-ts',
'path': 'app/mods/mod190.js',
'language': 'JavaScript',
'license': 'isc',
'size': 73
}
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|code|string|content of source file|
|repo_name|string|name of the GitHub repository|
|path|string|path of file in GitHub repository|
|language|string|programming language as inferred by extension|
|license|string|license of GitHub repository|
|size|int|size of source file in bytes|
### Data Splits
The dataset only contains a train split.
## Languages
The dataset contains 30 programming languages with over 60 extensions:
```python
{
"Assembly": [".asm"],
"Batchfile": [".bat", ".cmd"],
"C": [".c", ".h"],
"C#": [".cs"],
"C++": [".cpp", ".hpp", ".c++", ".h++", ".cc", ".hh", ".C", ".H"],
"CMake": [".cmake"],
"CSS": [".css"],
"Dockerfile": [".dockerfile", "Dockerfile"],
    "FORTRAN": [".f90", ".f", ".f03", ".f08", ".f77", ".f95", ".for", ".fpp"],
    "GO": [".go"],
    "Haskell": [".hs"],
    "HTML": [".html"],
"Java": [".java"],
"JavaScript": [".js"],
"Julia": [".jl"],
"Lua": [".lua"],
"Makefile": ["Makefile"],
"Markdown": [".md", ".markdown"],
"PHP": [".php", ".php3", ".php4", ".php5", ".phps", ".phpt"],
"Perl": [".pl", ".pm", ".pod", ".perl"],
    "PowerShell": [".ps1", ".psd1", ".psm1"],
"Python": [".py"],
"Ruby": [".rb"],
"Rust": [".rs"],
"SQL": [".sql"],
"Scala": [".scala"],
"Shell": [".sh", ".bash", ".command", ".zsh"],
"TypeScript": [".ts", ".tsx"],
"TeX": [".tex"],
"Visual Basic": [".vb"]
}
```
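Since the `language` field is inferred from the file extension (or, for entries like `Makefile` and `Dockerfile`, the bare filename), the mapping can be sketched as follows. `LANG_EXTENSIONS` below is an excerpt of the dict above, and `infer_language` is an illustrative helper, not part of the dataset tooling:

```python
import os
from typing import Optional

# Excerpt of the extension map above; bare filenames such as "Makefile"
# are matched against the basename rather than the extension.
LANG_EXTENSIONS = {
    "Python": [".py"],
    "C": [".c", ".h"],
    "Makefile": ["Makefile"],
    "Dockerfile": [".dockerfile", "Dockerfile"],
}

def infer_language(path: str) -> Optional[str]:
    base = os.path.basename(path)
    _, ext = os.path.splitext(base)
    for lang, patterns in LANG_EXTENSIONS.items():
        if ext in patterns or base in patterns:
            return lang
    return None

print(infer_language("src/main.py"))     # Python
print(infer_language("build/Makefile"))  # Makefile
```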
## Licenses
Each example is also annotated with the license of the associated repository. There are in total 15 licenses:
```python
[
'mit',
'apache-2.0',
'gpl-3.0',
'gpl-2.0',
'bsd-3-clause',
'agpl-3.0',
'lgpl-3.0',
'lgpl-2.1',
'bsd-2-clause',
'cc0-1.0',
'epl-1.0',
'mpl-2.0',
'unlicense',
'isc',
'artistic-2.0'
]
```
## Dataset Statistics
The dataset contains 115M files and the sum of all the source code file sizes is 873 GB (note that the size of the dataset is larger due to the extra fields). A breakdown per language is given in the plot and table below:
![dataset-statistics](https://huggingface.co./datasets/codeparrot/github-code/resolve/main/github-code-stats-alpha.png)
| | Language |File Count| Size (GB)|
|---:|:-------------|---------:|-------:|
| 0 | Java | 19548190 | 107.70 |
| 1 | C | 14143113 | 183.83 |
| 2 | JavaScript | 11839883 | 87.82 |
| 3 | HTML | 11178557 | 118.12 |
| 4 | PHP | 11177610 | 61.41 |
| 5 | Markdown | 8464626 | 23.09 |
| 6 | C++ | 7380520 | 87.73 |
| 7 | Python | 7226626 | 52.03 |
| 8 | C# | 6811652 | 36.83 |
| 9 | Ruby | 4473331 | 10.95 |
| 10 | GO | 2265436 | 19.28 |
| 11 | TypeScript | 1940406 | 24.59 |
| 12 | CSS | 1734406 | 22.67 |
| 13 | Shell | 1385648 | 3.01 |
| 14 | Scala | 835755 | 3.87 |
| 15 | Makefile | 679430 | 2.92 |
| 16 | SQL | 656671 | 5.67 |
| 17 | Lua | 578554 | 2.81 |
| 18 | Perl | 497949 | 4.70 |
| 19 | Dockerfile | 366505 | 0.71 |
| 20 | Haskell | 340623 | 1.85 |
| 21 | Rust | 322431 | 2.68 |
| 22 | TeX | 251015 | 2.15 |
| 23 | Batchfile | 236945 | 0.70 |
| 24 | CMake | 175282 | 0.54 |
| 25 | Visual Basic | 155652 | 1.91 |
| 26 | FORTRAN | 142038 | 1.62 |
| 27 | PowerShell | 136846 | 0.69 |
| 28 | Assembly | 82905 | 0.78 |
| 29 | Julia | 58317 | 0.29 |
## Dataset Creation
The dataset was created in two steps:
1. Files with the extensions given in the list above were retrieved from the GitHub dataset on BigQuery (full query [here](https://huggingface.co./datasets/codeparrot/github-code/blob/main/query.sql)). The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_.
2. Files with lines longer than 1000 characters and duplicates (exact duplicates ignoring whitespaces) were dropped (full preprocessing script [here](https://huggingface.co./datasets/codeparrot/github-code/blob/main/github_preprocessing.py)).
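A minimal sketch of these two filters (the linked preprocessing script is authoritative; the `keep_file` helper and the SHA-256 fingerprint are illustrative choices):

```python
import hashlib

def keep_file(code: str, seen_hashes: set, max_line_len: int = 1000) -> bool:
    # Filter 1: drop files containing any line longer than max_line_len characters.
    if any(len(line) > max_line_len for line in code.splitlines()):
        return False
    # Filter 2: drop exact duplicates, ignoring all whitespace, via a content fingerprint.
    fingerprint = hashlib.sha256("".join(code.split()).encode("utf-8")).hexdigest()
    if fingerprint in seen_hashes:
        return False
    seen_hashes.add(fingerprint)
    return True

seen = set()
print(keep_file("def f():\n    return 1\n", seen))  # True: first occurrence
print(keep_file("def f():\n  return 1\n", seen))    # False: duplicate up to whitespace
```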
## Considerations for Using the Data
The dataset consists of source code from a wide range of repositories. As such, it can potentially include harmful or biased code as well as sensitive information such as passwords or usernames.
## Releases
You can load any older version of the dataset with the `revision` argument:
```python
ds = load_dataset("codeparrot/github-code", revision="v1.0")
```
### v1.0
- Initial release of dataset
- The query was executed on _Feb 14, 2022, 12:03:16 PM UTC+1_
### v1.1
- Fix missing Scala/TypeScript
- Fix deduplication issue with inconsistent Python `hash`
- The query was executed on _Mar 16, 2022, 6:23:39 PM UTC+1_
|
Sterzhang/PVIT-3M | Sterzhang | "2024-11-02T07:41:57Z" | 12,839 | 17 | [
"task_categories:visual-question-answering",
"task_categories:image-text-to-text",
"language:en",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:json",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2410.07113",
"region:us",
"multi-modal",
"personalized"
] | [
"visual-question-answering",
"image-text-to-text"
] | "2024-10-07T09:28:17Z" | ---
configs:
- config_name: PVIT-3M
data_files:
- split: all_data
path: PVIT-3M.json
language:
- en
task_categories:
- visual-question-answering
- image-text-to-text
tags:
- multi-modal
- personalized
license: apache-2.0
pretty_name: personalized visual instruction tuning
size_categories:
- 1M<n<10M
---
# PVIT-3M
The paper titled "[**Personalized Visual Instruction Tuning**](https://arxiv.org/abs/2410.07113)" introduces a novel dataset called PVIT-3M. This dataset is specifically designed for tuning multimodal large language models (MLLMs) on personalized visual instruction tasks. It consists of 3 million image-text pairs that aim to improve MLLMs' ability to generate responses based on personalized visual inputs, making them more tailored and adaptable to individual user needs and preferences.
Here are the PVIT-3M statistics:
<img width="900" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/65d8b0f0661492b25c6623de/BLF2Uo72nqpvjPv_yplbt.png">
## Directory
The images in the PVIT-3M dataset have been organized into 40 separate folders, named `pvit-3m_0` through `pvit-3m_39`. After downloading, you can combine all the subfolders for a unified dataset structure.
```
Sterzhang/PVIT-3M
└─ image
└─ pvit-3m_0
└─ aug-face
└─ coco
└─ concat-person
└─ crop-face
└─ flickr30k
└─ object365
└─ vg
└─ pvit-3m_1
└─ ...
└─ ...
└─ PVIT-3M.json
└─ README.md
```
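After downloading, combining the shard folders into one tree might be sketched as below (`merge_image_shards` is a hypothetical helper, not an official script; adjust paths to your local copy):

```python
import shutil
from pathlib import Path

def merge_image_shards(root: str, out_dir: str = "image_merged") -> None:
    """Copy the contents of every image/pvit-3m_* shard into a single tree,
    preserving the per-source subfolders (coco, vg, flickr30k, ...)."""
    root_path = Path(root)
    out = root_path / out_dir
    for shard in sorted((root_path / "image").glob("pvit-3m_*")):
        for src in shard.rglob("*"):
            if src.is_file():
                dst = out / src.relative_to(shard)
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)
```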
## Json Structure
The PVIT-3M.json file contains structured data for each conversation instance in the PVIT-3M dataset. Below is an outline of the key fields:
- `image`: A list of image file paths related to the conversation.
- `conversations`: A list of dictionaries, each representing a turn in the conversation.
- `from`: `human` or `gpt`.
- `value`: The content of the conversation.
- `type`: A string indicating the type of the example (e.g., `"crop description"`).
## Example
Here is an example from the PVIT-3M:
```json
{
"image": [
"crop-face/object365/objects365_v2_01169026/2.jpg",
"object365/objects365_v2_01169026.jpg"
],
"conversations": [
{
"from": "human",
"value": "<image>\nThis is a photo of Erin.\n<image>\nHow does Erin look in this photo?"
},
{
"from": "gpt",
"value": "Erin is a woman with short, brown hair. She is wearing a dark blue dress with short sleeves and a ruffled design. She has a pearl necklace and a watch on her left wrist. She is also wearing a ring on her left hand."
}
],
"type": "crop description"
}
```
"crop-face/object365/objects365_v2_01169026/2.jpg": <img width="100" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/65d8b0f0661492b25c6623de/lJi0aDYE44wyGP2QMZ13W.png">
"object365/objects365_v2_01169026.jpg": <img width="400" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/65d8b0f0661492b25c6623de/RY_80A5rSOO1vv6A6CuJy.png">
## Script
The script processes conversation data in the **PVIT-3M** dataset by adding personalized wrapper tokens (`<person_s>` and `<person_e>`) around specific segments. This helps the model correctly associate personalized text and images with each individual, reducing ambiguity in multimodal training.
```python
import json

def process_image_description(text):
    # Split on the image placeholder; segment i sits between image i and image i+1.
    segments = text.split('<image>\n')
    processed_segments = []
    for i, segment in enumerate(segments):
        if i == 0:
            # Text before the first image is kept unchanged.
            processed_segments.append(segment)
        elif i == len(segments) - 1:
            # The final segment is appended after the loop without wrapper tokens.
            continue
        else:
            # Insert the closing token before the segment's final newline
            # (or append it if there is none), then open the wrapper just
            # before the personalized image placeholder.
            last_newline_index = segment.rfind('\n')
            if last_newline_index != -1:
                segment = segment[:last_newline_index] + '<person_e>' + segment[last_newline_index:]
            else:
                segment += '<person_e>'
            processed_segments.append(f'<person_s><image>\n{segment}')
    processed_segments.append(f"<image>\n{segments[-1]}")
    return ''.join(processed_segments)

def process_conversation_data(input_path, output_path):
    with open(input_path, 'r', encoding='utf-8') as f:
        data = json.load(f)
    for item in data:
        # Only the first human turn contains the personalized image introductions.
        conversation_value = item["conversations"][0]["value"]
        item["conversations"][0]["value"] = process_image_description(conversation_value)
    with open(output_path, 'w', encoding='utf-8') as f:
        json.dump(data, f, ensure_ascii=False, indent=4)

input_file = ""
output_file = ""
process_conversation_data(input_file, output_file)
```
# Code
Our code will be released in [PVIT](https://github.com/sterzhang/PVIT), containing scripts for generating the PVIT dataset as well as our training code.
# Case Study
<img width="1000" alt="image" src="https://github.com/user-attachments/assets/d50fa03f-fdb6-41ff-ab25-806578d29f3e">
# Citation
Our paper is now available at: [https://arxiv.org/abs/2410.07113](https://arxiv.org/abs/2410.07113)
```bibtex
@misc{pi2024personalizedvisualinstructiontuning,
title={Personalized Visual Instruction Tuning},
author={Renjie Pi and Jianshu Zhang and Tianyang Han and Jipeng Zhang and Rui Pan and Tong Zhang},
year={2024},
eprint={2410.07113},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2410.07113},
} |
common-canvas/commoncatalog-cc-by-nc-sa | common-canvas | "2024-05-16T19:45:25Z" | 12,823 | 4 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | "2023-10-19T02:09:41Z" | ---
license: cc-by-nc-sa-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY-NC-SA
This dataset is a large collection of high-resolution Creative Commons images (under various licenses; see Table 1 in the paper's appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4K resolution, making it one of the highest-resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Commercial use
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets, each containing Parquet files of about 4 GB each. Within each subset, subfolders group the images by resolution range and aspect ratio.
The dataset is also divided between images licensed for commercial use (C) and those that are not (NC).
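The exact bucket boundaries are not spelled out in this card; purely as an illustration of the layout idea (the thresholds and bucket names below are invented), sorting an image into a resolution-range/aspect-ratio bucket could look like:

```python
def bucket(width: int, height: int) -> str:
    # Illustrative only: the real shard layout is fixed by the dataset release,
    # not derived from this function.
    res = max(width, height)
    if res < 512:
        res_bucket = "lt512"
    elif res < 1024:
        res_bucket = "512to1023"
    else:
        res_bucket = "1024plus"
    ratio = width / height
    if ratio > 1.2:
        ar_bucket = "wide"
    elif ratio < 0.8:
        ar_bucket = "tall"
    else:
        ar_bucket = "square"
    return f"{res_bucket}/{ar_bucket}"

print(bucket(1920, 1080))  # 1024plus/wide
```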
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic captions and releasing it so that others can train on a common dataset for open-source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See Yahoo Flickr Creative Commons 100M dataset for more information. The information was collected circa 2014 and known to have a bias towards internet connected Western countries. Some areas such as the global south lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co./Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co./Skylion007)
|
CohereForAI/aya_collection | CohereForAI | "2024-06-28T08:04:56Z" | 12,783 | 217 | [
"task_categories:text-classification",
"task_categories:summarization",
"task_categories:translation",
"language:ace",
"language:afr",
"language:amh",
"language:ara",
"language:aze",
"language:ban",
"language:bbc",
"language:bel",
"language:bem",
"language:ben",
"language:bjn",
"language:bul",
"language:cat",
"language:ceb",
"language:ces",
"language:cym",
"language:dan",
"language:deu",
"language:ell",
"language:eng",
"language:epo",
"language:est",
"language:eus",
"language:fil",
"language:fin",
"language:fon",
"language:fra",
"language:gla",
"language:gle",
"language:glg",
"language:guj",
"language:hat",
"language:hau",
"language:heb",
"language:hin",
"language:hrv",
"language:hun",
"language:hye",
"language:ibo",
"language:ind",
"language:isl",
"language:ita",
"language:jav",
"language:jpn",
"language:kan",
"language:kas",
"language:kat",
"language:kau",
"language:kaz",
"language:khm",
"language:kin",
"language:kir",
"language:kor",
"language:kur",
"language:lao",
"language:lav",
"language:lij",
"language:lit",
"language:ltz",
"language:mad",
"language:mal",
"language:man",
"language:mar",
"language:min",
"language:mkd",
"language:mlg",
"language:mlt",
"language:mon",
"language:mri",
"language:msa",
"language:mya",
"language:nep",
"language:nij",
"language:nld",
"language:nor",
"language:nso",
"language:nya",
"language:pan",
"language:pes",
"language:pol",
"language:por",
"language:pus",
"language:ron",
"language:rus",
"language:sin",
"language:slk",
"language:slv",
"language:smo",
"language:sna",
"language:snd",
"language:som",
"language:sot",
"language:spa",
"language:sqi",
"language:srp",
"language:sun",
"language:swa",
"language:swe",
"language:tam",
"language:taq",
"language:tel",
"language:tgk",
"language:tha",
"language:tur",
"language:twi",
"language:ukr",
"language:urd",
"language:uzb",
"language:vie",
"language:wol",
"language:xho",
"language:yid",
"language:yor",
"language:zho",
"language:zul",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.06619",
"region:us"
] | [
"text-classification",
"summarization",
"translation"
] | "2024-01-31T21:40:43Z" | ---
language:
- ace
- afr
- amh
- ara
- aze
- ban
- bbc
- bel
- bem
- ben
- bjn
- bul
- cat
- ceb
- ces
- cym
- dan
- deu
- ell
- eng
- epo
- est
- eus
- fil
- fin
- fon
- fra
- gla
- gle
- glg
- guj
- hat
- hau
- heb
- hin
- hrv
- hun
- hye
- ibo
- ind
- isl
- ita
- jav
- jpn
- kan
- kas
- kat
- kau
- kaz
- khm
- kin
- kir
- kor
- kur
- lao
- lav
- lij
- lit
- ltz
- mad
- mal
- man
- mar
- min
- mkd
- mlg
- mlt
- mon
- mri
- msa
- mya
- nep
- nij
- nld
- nor
- nso
- nya
- pan
- pes
- pol
- por
- pus
- ron
- rus
- sin
- slk
- slv
- smo
- sna
- snd
- som
- sot
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- taq
- tel
- tgk
- tha
- tur
- twi
- ukr
- urd
- uzb
- vie
- wol
- xho
- yid
- yor
- zho
- zul
license: apache-2.0
size_categories:
- 100M<n<1B
task_categories:
- text-classification
- summarization
- translation
pretty_name: Aya Collection
dataset_info:
- config_name: aya_dataset
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 245523658
num_examples: 202364
download_size: 134230030
dataset_size: 245523658
- config_name: templated_afriqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 1053208.8833372337
num_examples: 6834
- name: train
num_bytes: 785976.7786098759
num_examples: 5100
- name: validation
num_bytes: 794915.3380528903
num_examples: 5158
download_size: 945238
dataset_size: 2634101.0
- config_name: templated_afrisenti
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 13970874.910620399
num_examples: 42576
- name: train
num_bytes: 32313882.88468279
num_examples: 98476
- name: validation
num_bytes: 6141462.204696811
num_examples: 18716
download_size: 13309887
dataset_size: 52426220.0
- config_name: templated_amharic_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 1563941.8685517767
num_examples: 523
- name: train
num_bytes: 5475291.704241497
num_examples: 1831
- name: validation
num_bytes: 786456.4272067252
num_examples: 263
download_size: 3648433
dataset_size: 7825689.999999999
- config_name: templated_armenian_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 1864796.3648305084
num_examples: 3063
- name: train
num_bytes: 2445604.6351694916
num_examples: 4017
download_size: 1825641
dataset_size: 4310401.0
- config_name: templated_bengali_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 14242457
num_examples: 19096
download_size: 4609132
dataset_size: 14242457
- config_name: templated_dutch_imdb
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 39967063.5
num_examples: 24992
- name: train
num_bytes: 39967063.5
num_examples: 24992
download_size: 44533807
dataset_size: 79934127.0
- config_name: templated_hindi_headline
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 228788501.12729776
num_examples: 23452
- name: train
num_bytes: 919144047.8727022
num_examples: 94217
download_size: 243324488
dataset_size: 1147932549.0
- config_name: templated_hindi_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 109524809.11948325
num_examples: 10655
- name: train
num_bytes: 437112433.88051677
num_examples: 42524
download_size: 112865381
dataset_size: 546637243.0
- config_name: templated_indic_paraphrase
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 5340504
num_examples: 7523
download_size: 1724626
dataset_size: 5340504
- config_name: templated_indic_sentiment
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 7496187
num_examples: 11559
download_size: 3003109
dataset_size: 7496187
- config_name: templated_indo_stories
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 2042351
num_examples: 2599
download_size: 813713
dataset_size: 2042351
- config_name: templated_japanese_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1345341895
num_examples: 2463624
download_size: 580330810
dataset_size: 1345341895
- config_name: templated_joke_explaination
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 591008
num_examples: 754
download_size: 157851
dataset_size: 591008
- config_name: templated_ligurian_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 105221.25
num_examples: 54
- name: test
num_bytes: 140295.0
num_examples: 72
- name: train
num_bytes: 596253.75
num_examples: 306
download_size: 546344
dataset_size: 841770.0
- config_name: templated_masakhanews
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 31426840.99009901
num_examples: 9240
- name: train
num_bytes: 109538186.24752475
num_examples: 32206
- name: validation
num_bytes: 15679408.762376238
num_examples: 4610
download_size: 86433056
dataset_size: 156644436.0
- config_name: templated_mintaka
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 41153051.4
num_examples: 156000
- name: train
num_bytes: 144035679.9
num_examples: 546000
- name: validation
num_bytes: 20576525.7
num_examples: 78000
download_size: 43108344
dataset_size: 205765257.0
- config_name: templated_ntx_llm
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 10019994
num_examples: 5983
download_size: 1037270
dataset_size: 10019994
- config_name: templated_nusax_senti
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 2684840.4
num_examples: 8000
- name: train
num_bytes: 3356050.5
num_examples: 10000
- name: validation
num_bytes: 671210.1
num_examples: 2000
download_size: 2336444
dataset_size: 6712101.0
- config_name: templated_persian_farstail
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 731412.1801486664
num_examples: 1029
- name: train
num_bytes: 3424629.62483603
num_examples: 4818
- name: validation
num_bytes: 720750.1950153039
num_examples: 1014
download_size: 1417008
dataset_size: 4876792.0
- config_name: templated_persian_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 38518994.420354694
num_examples: 11186
- name: train
num_bytes: 564885564.1599021
num_examples: 164044
- name: validation
num_bytes: 38512107.41974315
num_examples: 11184
download_size: 280563392
dataset_size: 641916666.0
- config_name: templated_scirepeval
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 53956804
num_examples: 32973
download_size: 27742964
dataset_size: 53956804
- config_name: templated_seed_instruct
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 186542.23316647828
num_examples: 380
- name: test
num_bytes: 197342.04666559017
num_examples: 402
- name: train
num_bytes: 5696410.720167931
num_examples: 11604
download_size: 2674875
dataset_size: 6080295.0
- config_name: templated_soda
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 487742788.92976975
num_examples: 595872
- name: train
num_bytes: 2519225981.566041
num_examples: 3077721
- name: validation
num_bytes: 479157981.5041894
num_examples: 585384
download_size: 1668121549
dataset_size: 3486126752.0
- config_name: templated_tamil_stories
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 14555943
num_examples: 1202
download_size: 4912529
dataset_size: 14555943
- config_name: templated_tamil_thirukkural
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 7722387
num_examples: 3990
download_size: 1441119
dataset_size: 7722387
- config_name: templated_telugu_food
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1108509
num_examples: 441
download_size: 312391
dataset_size: 1108509
- config_name: templated_telugu_jokes
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 966698
num_examples: 929
download_size: 298210
dataset_size: 966698
- config_name: templated_telugu_news
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 1150840295
num_examples: 467090
download_size: 423260269
dataset_size: 1150840295
- config_name: templated_telugu_poems
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 8244805
num_examples: 5115
download_size: 2713433
dataset_size: 8244805
- config_name: templated_telugu_riddles
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 339040
num_examples: 844
download_size: 79031
dataset_size: 339040
- config_name: templated_thai_pos
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 319580.309461865
num_examples: 1000
- name: train
num_bytes: 41690529.69053814
num_examples: 130454
download_size: 7405764
dataset_size: 42010110.0
- config_name: templated_thai_scb
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 131923007.25034823
num_examples: 177862
- name: train
num_bytes: 1188824615.223528
num_examples: 1602804
- name: validation
num_bytes: 131917073.5261238
num_examples: 177854
download_size: 441007386
dataset_size: 1452664696.0
- config_name: templated_thai_usembassy
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 10002322
num_examples: 1230
download_size: 3958145
dataset_size: 10002322
- config_name: templated_thai_wikitionary
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 12238652
num_examples: 19729
download_size: 2641369
dataset_size: 12238652
- config_name: templated_turku_paraphrase
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 9449925.655740838
num_examples: 31413
- name: train
num_bytes: 75488399.52960008
num_examples: 250935
- name: validation
num_bytes: 9502269.814659085
num_examples: 31587
download_size: 28908781
dataset_size: 94440595.00000001
- config_name: templated_ukranian_gec
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 21369624
num_examples: 29958
download_size: 9511988
dataset_size: 21369624
- config_name: templated_uner_llm
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 59421032.72376601
num_examples: 54957
- name: test
num_bytes: 16164354.663105734
num_examples: 14950
- name: validation
num_bytes: 8420601.613128258
num_examples: 7788
download_size: 12453483
dataset_size: 84005989.0
- config_name: templated_urdu_news_category
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 29923228.33936761
num_examples: 11187
- name: train
num_bytes: 269284981.6606324
num_examples: 100674
download_size: 118185925
dataset_size: 299208210.0
- config_name: templated_urdu_news_gen
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 29497844.81704079
num_examples: 11187
- name: train
num_bytes: 265456872.1829592
num_examples: 100674
download_size: 123276747
dataset_size: 294954717.0
- config_name: templated_urdu_news_headline
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 29258423.35545901
num_examples: 11187
- name: train
num_bytes: 263302271.644541
num_examples: 100674
download_size: 123095949
dataset_size: 292560695.0
- config_name: templated_wiki_split
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 4608986.773259303
num_examples: 10000
- name: train
num_bytes: 912527760.4534814
num_examples: 1979888
- name: validation
num_bytes: 4608986.773259303
num_examples: 10000
download_size: 395631256
dataset_size: 921745734.0
- config_name: templated_xcsqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: validation
num_bytes: 6315047.0
num_examples: 17000
download_size: 2125506
dataset_size: 6315047.0
- config_name: templated_xlel_wd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 493033268.5027245
num_examples: 621319
- name: train
num_bytes: 3671177872.612755
num_examples: 4626407
- name: validation
num_bytes: 420416838.88452065
num_examples: 529808
download_size: 2363004380
dataset_size: 4584627980.0
- config_name: templated_xwikis
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: test
num_bytes: 219985468.96557257
num_examples: 34987
- name: train
num_bytes: 8995693557.81201
num_examples: 1430696
- name: validation
num_bytes: 251360765.22241676
num_examples: 39977
download_size: 5713306872
dataset_size: 9467039791.999998
- config_name: translated_adversarial_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 167379954.08333334
num_examples: 119000
- name: train
num_bytes: 1673799540.8333333
num_examples: 1190000
- name: validation
num_bytes: 167379954.08333334
num_examples: 119000
download_size: 595462085
dataset_size: 2008559448.9999998
- config_name: translated_cnn_dailymail
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 4825107898.98773
num_examples: 1378800
- name: train
num_bytes: 41993976492.495476
num_examples: 12000000
- name: validation
num_bytes: 5613754777.516795
num_examples: 1604160
download_size: 25383694727
dataset_size: 52432839169.0
- config_name: translated_dolly
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: split
dtype: string
- name: script
dtype: string
splits:
- name: train
num_bytes: 2188278931
num_examples: 1762152
download_size: 1089137630
dataset_size: 2188278931
- config_name: translated_flan_coqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 2884413536
num_examples: 762671
download_size: 1416350365
dataset_size: 2884413536
- config_name: translated_flan_cot
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 7470682150.0
num_examples: 11029200
download_size: 3086804878
dataset_size: 7470682150.0
- config_name: translated_flan_gem_wiki
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 11446176046
num_examples: 3230493
download_size: 5342129672
dataset_size: 11446176046
- config_name: translated_flan_lambada
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 223527122
num_examples: 509201
download_size: 99315916
dataset_size: 223527122
- config_name: translated_flan_qa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 34188800
num_examples: 64260
download_size: 14245088
dataset_size: 34188800
- config_name: translated_hotpotqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 13234982265.87797
num_examples: 42301644
- name: validation
num_bytes: 833990488.1220294
num_examples: 2665600
download_size: 4862020346
dataset_size: 14068972754.0
- config_name: translated_joke_explaination
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 96548938
num_examples: 89726
download_size: 40366737
dataset_size: 96548938
- config_name: translated_mintaka
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 131276187.4
num_examples: 476000
- name: train
num_bytes: 459466655.9
num_examples: 1666000
- name: validation
num_bytes: 65638093.7
num_examples: 238000
download_size: 130340546
dataset_size: 656380937.0
- config_name: translated_mlqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 3730486242.0756793
num_examples: 2746830
- name: validation
num_bytes: 369508041.92432094
num_examples: 272076
download_size: 1662296336
dataset_size: 4099994284.0
- config_name: translated_nqopen
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 4456165405.095046
num_examples: 20926150
- name: validation
num_bytes: 182959989.9049544
num_examples: 859180
download_size: 1482593128
dataset_size: 4639125395.0
- config_name: translated_paws
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 536748719.07157385
num_examples: 952000
- name: train
num_bytes: 3314490433.8568525
num_examples: 5878719
- name: validation
num_bytes: 536748719.07157385
num_examples: 952000
download_size: 686023556
dataset_size: 4387987872.0
- config_name: translated_piqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 1324751595.2891204
num_examples: 1917447
- name: validation
num_bytes: 151113599.71087962
num_examples: 218722
download_size: 504206733
dataset_size: 1475865195.0
- config_name: translated_soda
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 9332736341.158312
num_examples: 17876160
- name: validation
num_bytes: 9168469957.193184
num_examples: 17561520
- name: train
num_bytes: 74651741547.6485
num_examples: 142989840
download_size: 32022718450
dataset_size: 93152947846.0
- config_name: translated_wiki_split
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: train
num_bytes: 72471632064.9965
num_examples: 117803336
- name: validation
num_bytes: 366039049.0017441
num_examples: 595000
- name: test
num_bytes: 366039049.0017441
num_examples: 595000
download_size: 27980267627
dataset_size: 73203710163.0
- config_name: translated_wikiqa
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 15512870.67820774
num_examples: 34867
- name: train
num_bytes: 55062749.16496945
num_examples: 123760
- name: validation
num_bytes: 7412293.156822811
num_examples: 16660
download_size: 32773189
dataset_size: 77987913.00000001
- config_name: translated_xlel_wd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: dataset_name
dtype: string
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: template_id
dtype: int64
- name: language
dtype: string
- name: script
dtype: string
- name: split
dtype: string
splits:
- name: test
num_bytes: 8449087876.213723
num_examples: 8755108
- name: validation
num_bytes: 7326325551.677284
num_examples: 7591680
- name: train
num_bytes: 60579299633.10899
num_examples: 62773440
download_size: 35927637128
dataset_size: 76354713061.0
configs:
- config_name: aya_dataset
data_files:
- split: train
path: aya_dataset/train-*
- config_name: templated_afriqa
data_files:
- split: test
path: templated_afriqa/test-*
- split: train
path: templated_afriqa/train-*
- split: validation
path: templated_afriqa/validation-*
- config_name: templated_afrisenti
data_files:
- split: test
path: templated_afrisenti/test-*
- split: train
path: templated_afrisenti/train-*
- split: validation
path: templated_afrisenti/validation-*
- config_name: templated_amharic_qa
data_files:
- split: test
path: templated_amharic_qa/test-*
- split: train
path: templated_amharic_qa/train-*
- split: validation
path: templated_amharic_qa/validation-*
- config_name: templated_armenian_instruct
data_files:
- split: test
path: templated_armenian_instruct/test-*
- split: train
path: templated_armenian_instruct/train-*
- config_name: templated_bengali_news
data_files:
- split: train
path: templated_bengali_news/train-*
- config_name: templated_dutch_imdb
data_files:
- split: test
path: templated_dutch_imdb/test-*
- split: train
path: templated_dutch_imdb/train-*
- config_name: templated_hindi_headline
data_files:
- split: test
path: templated_hindi_headline/test-*
- split: train
path: templated_hindi_headline/train-*
- config_name: templated_hindi_news
data_files:
- split: test
path: templated_hindi_news/test-*
- split: train
path: templated_hindi_news/train-*
- config_name: templated_indic_paraphrase
data_files:
- split: train
path: templated_indic_paraphrase/train-*
- config_name: templated_indic_sentiment
data_files:
- split: train
path: templated_indic_sentiment/train-*
- config_name: templated_indo_stories
data_files:
- split: train
path: templated_indo_stories/train-*
- config_name: templated_japanese_instruct
data_files:
- split: train
path: templated_japanese_instruct/train-*
- config_name: templated_joke_explaination
data_files:
- split: train
path: templated_joke_explaination/train-*
- config_name: templated_ligurian_news
data_files:
- split: validation
path: templated_ligurian_news/validation-*
- split: test
path: templated_ligurian_news/test-*
- split: train
path: templated_ligurian_news/train-*
- config_name: templated_masakhanews
data_files:
- split: test
path: templated_masakhanews/test-*
- split: train
path: templated_masakhanews/train-*
- split: validation
path: templated_masakhanews/validation-*
- config_name: templated_mintaka
data_files:
- split: test
path: templated_mintaka/test-*
- split: train
path: templated_mintaka/train-*
- split: validation
path: templated_mintaka/validation-*
- config_name: templated_ntx_llm
data_files:
- split: train
path: templated_ntx_llm/train-*
- config_name: templated_nusax_senti
data_files:
- split: test
path: templated_nusax_senti/test-*
- split: train
path: templated_nusax_senti/train-*
- split: validation
path: templated_nusax_senti/validation-*
- config_name: templated_persian_farstail
data_files:
- split: test
path: templated_persian_farstail/test-*
- split: train
path: templated_persian_farstail/train-*
- split: validation
path: templated_persian_farstail/validation-*
- config_name: templated_persian_instruct
data_files:
- split: test
path: templated_persian_instruct/test-*
- split: train
path: templated_persian_instruct/train-*
- split: validation
path: templated_persian_instruct/validation-*
- config_name: templated_scirepeval
data_files:
- split: validation
path: templated_scirepeval/validation-*
- config_name: templated_seed_instruct
data_files:
- split: validation
path: templated_seed_instruct/validation-*
- split: test
path: templated_seed_instruct/test-*
- split: train
path: templated_seed_instruct/train-*
- config_name: templated_soda
data_files:
- split: test
path: templated_soda/test-*
- split: train
path: templated_soda/train-*
- split: validation
path: templated_soda/validation-*
- config_name: templated_tamil_stories
data_files:
- split: train
path: templated_tamil_stories/train-*
- config_name: templated_tamil_thirukkural
data_files:
- split: train
path: templated_tamil_thirukkural/train-*
- config_name: templated_telugu_food
data_files:
- split: train
path: templated_telugu_food/train-*
- config_name: templated_telugu_jokes
data_files:
- split: train
path: templated_telugu_jokes/train-*
- config_name: templated_telugu_news
data_files:
- split: train
path: templated_telugu_news/train-*
- config_name: templated_telugu_poems
data_files:
- split: train
path: templated_telugu_poems/train-*
- config_name: templated_telugu_riddles
data_files:
- split: train
path: templated_telugu_riddles/train-*
- config_name: templated_thai_pos
data_files:
- split: test
path: templated_thai_pos/test-*
- split: train
path: templated_thai_pos/train-*
- config_name: templated_thai_scb
data_files:
- split: test
path: templated_thai_scb/test-*
- split: train
path: templated_thai_scb/train-*
- split: validation
path: templated_thai_scb/validation-*
- config_name: templated_thai_usembassy
data_files:
- split: train
path: templated_thai_usembassy/train-*
- config_name: templated_thai_wikitionary
data_files:
- split: train
path: templated_thai_wikitionary/train-*
- config_name: templated_turku_paraphrase
data_files:
- split: test
path: templated_turku_paraphrase/test-*
- split: train
path: templated_turku_paraphrase/train-*
- split: validation
path: templated_turku_paraphrase/validation-*
- config_name: templated_ukranian_gec
data_files:
- split: train
path: templated_ukranian_gec/train-*
- config_name: templated_uner_llm
data_files:
- split: train
path: templated_uner_llm/train-*
- split: test
path: templated_uner_llm/test-*
- split: validation
path: templated_uner_llm/validation-*
- config_name: templated_urdu_news_category
data_files:
- split: test
path: templated_urdu_news_category/test-*
- split: train
path: templated_urdu_news_category/train-*
- config_name: templated_urdu_news_gen
data_files:
- split: test
path: templated_urdu_news_gen/test-*
- split: train
path: templated_urdu_news_gen/train-*
- config_name: templated_urdu_news_headline
data_files:
- split: test
path: templated_urdu_news_headline/test-*
- split: train
path: templated_urdu_news_headline/train-*
- config_name: templated_wiki_split
data_files:
- split: test
path: templated_wiki_split/test-*
- split: train
path: templated_wiki_split/train-*
- split: validation
path: templated_wiki_split/validation-*
- config_name: templated_xcsqa
data_files:
- split: validation
path: templated_xcsqa/validation-*
- config_name: templated_xlel_wd
data_files:
- split: test
path: templated_xlel_wd/test-*
- split: train
path: templated_xlel_wd/train-*
- split: validation
path: templated_xlel_wd/validation-*
- config_name: templated_xwikis
data_files:
- split: test
path: templated_xwikis/test-*
- split: train
path: templated_xwikis/train-*
- split: validation
path: templated_xwikis/validation-*
- config_name: translated_adversarial_qa
data_files:
- split: test
path: translated_adversarial_qa/test-*
- split: train
path: translated_adversarial_qa/train-*
- split: validation
path: translated_adversarial_qa/validation-*
- config_name: translated_cnn_dailymail
data_files:
- split: test
path: translated_cnn_dailymail/test-*
- split: train
path: translated_cnn_dailymail/train-*
- split: validation
path: translated_cnn_dailymail/validation-*
- config_name: translated_dolly
data_files:
- split: train
path: translated_dolly/train-*
- config_name: translated_flan_coqa
data_files:
- split: train
path: translated_flan_coqa/train-*
- config_name: translated_flan_cot
data_files:
- split: train
path: translated_flan_cot/train-*
- config_name: translated_flan_gem_wiki
data_files:
- split: train
path: translated_flan_gem_wiki/train-*
- config_name: translated_flan_lambada
data_files:
- split: train
path: translated_flan_lambada/train-*
- config_name: translated_flan_qa
data_files:
- split: train
path: translated_flan_qa/train-*
- config_name: translated_hotpotqa
data_files:
- split: train
path: translated_hotpotqa/train-*
- split: validation
path: translated_hotpotqa/validation-*
- config_name: translated_joke_explaination
data_files:
- split: train
path: translated_joke_explaination/train-*
- config_name: translated_mintaka
data_files:
- split: test
path: translated_mintaka/test-*
- split: train
path: translated_mintaka/train-*
- split: validation
path: translated_mintaka/validation-*
- config_name: translated_mlqa
data_files:
- split: test
path: translated_mlqa/test-*
- split: validation
path: translated_mlqa/validation-*
- config_name: translated_nqopen
data_files:
- split: train
path: translated_nqopen/train-*
- split: validation
path: translated_nqopen/validation-*
- config_name: translated_paws
data_files:
- split: test
path: translated_paws/test-*
- split: train
path: translated_paws/train-*
- split: validation
path: translated_paws/validation-*
- config_name: translated_piqa
data_files:
- split: train
path: translated_piqa/train-*
- split: validation
path: translated_piqa/validation-*
- config_name: translated_soda
data_files:
- split: test
path: translated_soda/test-*
- split: validation
path: translated_soda/validation-*
- split: train
path: translated_soda/train-*
- config_name: translated_wiki_split
data_files:
- split: test
path: translated_wiki_split/test-*
- split: train
path: translated_wiki_split/train-*
- split: validation
path: translated_wiki_split/validation-*
- config_name: translated_wikiqa
data_files:
- split: test
path: translated_wikiqa/test-*
- split: train
path: translated_wikiqa/train-*
- split: validation
path: translated_wikiqa/validation-*
- config_name: translated_xlel_wd
data_files:
- split: test
path: translated_xlel_wd/test-*
- split: validation
path: translated_xlel_wd/validation-*
- split: train
path: translated_xlel_wd/train-*
---
![Aya Header](https://huggingface.co./datasets/CohereForAI/aya_collection/resolve/main/aya_header.png)
**This dataset is uploaded in two places: here and additionally [here](https://huggingface.co./datasets/CohereForAI/aya_collection_language_split) as 'Aya Collection Language Split.' The two datasets are identical in content but differ in structure: this version is organized into folders split by dataset name, while the version [here](https://huggingface.co./datasets/CohereForAI/aya_collection_language_split) is organized into folders split by language. We recommend the language-split version if you are only interested in downloading data for a single language or a small set of languages, and this version if you want to download data by source dataset or the entire collection.**
# Dataset Summary
The Aya Collection is a massive multilingual collection consisting of 513 million instances of prompts and completions covering a wide range of tasks.
This collection incorporates instruction-style templates from fluent speakers and applies them to a curated list of datasets, as well as translations of instruction-style datasets into 101 languages. Aya Dataset, a human-curated multilingual instruction and response dataset, is also part of this collection. See our paper for more details regarding the collection.
- **Curated by:** Contributors of [Aya Open Science Initiative](https://cohere.com/research/aya)
- **Language(s):** 115 languages
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
- **Aya Datasets Family:**
| Name | Explanation |
|------|--------------|
| [aya_dataset](https://huggingface.co./datasets/CohereForAI/aya_dataset) | Human-annotated multilingual instruction finetuning dataset, comprising over 204K instances across 65 languages. |
| [aya_collection](https://huggingface.co./datasets/CohereForAI/aya_collection) | Created by applying instruction-style templates from fluent speakers to 44 datasets, including translations of 19 instruction-style datasets into 101 languages. This collection is structured into dataset-level subsets. An alternative version of the collection structured by language subsets is also available.|
| [aya_collection_language_split](https://huggingface.co./datasets/CohereForAI/aya_collection_language_split) | Aya Collection structured based on language level subsets. |
| [aya_evaluation_suite](https://huggingface.co./datasets/CohereForAI/aya_evaluation_suite) | A diverse evaluation set for multilingual open-ended generation, featuring 250 culturally grounded prompts in 7 languages, 200 translated prompts in 24 languages, and human-edited versions selected for cross-cultural relevance from English Dolly in 6 languages.|
| [aya_redteaming](https://huggingface.co./datasets/CohereForAI/aya_redteaming)| A red-teaming dataset consisting of harmful prompts in 8 languages across 9 different categories of harm with explicit labels for "global" and "local" harm.|
# Dataset
The `Aya Collection` is a comprehensive, large corpus of datasets that can be used by researchers around the world to train multilingual models. Our goal is to include only datasets whose licenses permit manipulation and redistribution.
The `Aya Collection` consists of three different sources of data:
1. Templated data: We collaborated with fluent speakers to create templates that allowed for the automatic expansion of existing datasets into various languages.
2. Translated data: We translated a hand-selected subset of 19 datasets into 101 languages (114 dialects) using the NLLB 3.3B parameter machine translation model.
3. Aya Dataset: We release the [Aya Dataset](https://huggingface.co./datasets/CohereForAI/aya_dataset) as a subset of the overall collection. This is the only dataset in the collection that is human-annotated in its entirety.
## Load with Datasets
To load this dataset with Datasets, you'll need to install the `datasets` library with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("CohereForAI/aya_collection", "templated_mintaka")
```
In the above code snippet, "templated_mintaka" refers to a subset of the aya_collection. You can load other subsets by specifying their names at load time.
## Data Instances
An example of a `train` instance looks as follows:
```json
{'id': 246001,
'inputs': 'The following query in English is taken from the geography category. What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?',
'targets': 'The answer is Mount Lucania.',
'dataset_name': 'Mintaka-inst',
'sub_dataset_name': '-',
'task_type': 'question-answering',
'template_id': 3,
'language': 'eng',
'split': 'train',
'script': 'Latn'
}
```
## Data Fields
The data fields are the same among all splits:
- `id:` Unique id of the data point
- `inputs:` Prompt or input to the language model.
- `targets:` Completion or output of the language model.
- `dataset_name:` The name of the source dataset that the data point was taken from
- `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank.
- `task_type:` The task type that this conversation belongs to.
- `template_id`: The id of the template applied to this data point.
- `language:` The ISO code of the dialect of the conversation.
- `script:` The script of the language.
- `split:` Indicates whether the data point is part of the `train` or the `test` split.
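For quick sanity checks on these fields, records can be filtered with plain Python before (or instead of) using `datasets.Dataset.filter`. A minimal sketch — the toy records below are invented; only the field names come from the schema above:

```python
def filter_records(records, language=None, split=None):
    """Keep records whose `language` and/or `split` fields match."""
    out = []
    for r in records:
        if language is not None and r["language"] != language:
            continue
        if split is not None and r["split"] != split:
            continue
        out.append(r)
    return out

# Toy records following the schema above (values are illustrative, not real data).
records = [
    {"id": 1, "language": "eng", "split": "train"},
    {"id": 2, "language": "fra", "split": "train"},
    {"id": 3, "language": "eng", "split": "test"},
]

print([r["id"] for r in filter_records(records, language="eng", split="train")])  # [1]
```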
### Statistics
The total number of data points, including the Aya Dataset, is 513,758,189. To view the breakdown of dialect codes and the respective templated and translated data point counts in the Aya Collection, refer to the toggled table below.
<details>
<summary> <b> Breakdown of Aya Collection data point counts grouped by dialects </b> </summary>
|dialect code|language|translated data point count|templated data point count|total count |
|------------|--------|---------------------------|--------------------------|---------------|
|ace |Achinese|8240684 |2000 |8242684 |
|acm |Arabic |4120342 |0 |4120342 |
|acq |Arabic |4120342 |0 |4120342 |
|aeb |Arabic |4120342 |0 |4120342 |
|afr |Afrikaans|4120342 |6108 |4126450 |
|ajp |Arabic |4120342 |0 |4120342 |
|als |Albanian|4120342 |0 |4120342 |
|amh |Amharic |4120342 |25327 |4145669 |
|apc |Arabic |4120342 |0 |4120342 |
|arb |Arabic |6424999 |216430 |6641429 |
|ars |Arabic |4120342 |0 |4120342 |
|ary |Arabic |4120342 |18076 |4138418 |
|arz |Arabic |4120342 |0 |4120342 |
|azb |Azerbaijani|4120342 |0 |4120342 |
|azj |Azerbaijani|4120342 |0 |4120342 |
|bel |Belarusian|4120342 |21273 |4141615 |
|ben |Bengali |4120342 |30661 |4151003 |
|bjn |Banjar |8240684 |2000 |8242684 |
|bul |Bulgarian|4120342 |37722 |4158064 |
|cat |Catalan |4120342 |66900 |4187242 |
|ceb |Cebuano |4120342 |0 |4120342 |
|ces |Czech |4120342 |179604 |4299946 |
|ckb |Kurdish |4120342 |0 |4120342 |
|cym |Welsh |4120342 |0 |4120342 |
|dan |Danish |4120342 |36310 |4156652 |
|deu |German |4120342 |1326722 |5447064 |
|ell |Greek |4120342 |40291 |4160633 |
|eng |English |9771427 |8066678 |17838105 |
|epo |Esperanto|4120342 |0 |4120342 |
|est |Estonian|4120342 |0 |4120342 |
|eus |Basque |4120342 |0 |4120342 |
|fin |Finnish |4120342 |457895 |4578237 |
|fra |French |4120342 |835520 |4955862 |
|gla |Scottish Gaelic|4120342 |0 |4120342 |
|gle |Irish |4120342 |0 |4120342 |
|glg |Galician|4120342 |0 |4120342 |
|guj |Gujarati|4120342 |2157 |4122499 |
|hat |Haitian Creole|4120342 |0 |4120342 |
|hau |Hausa |4120342 |51396 |4171738 |
|heb |Hebrew |4120342 |103466 |4223808 |
|hin |Hindi |4120342 |260387 |4380729 |
|hun |Hungarian|4120342 |82039 |4202381 |
|hye |Armenian|4120342 |7080 |4127422 |
|ibo |Igbo |4120342 |36312 |4156654 |
|ind |Indonesian|4120342 |45709 |4166051 |
|isl |Icelandic|4120342 |0 |4120342 |
|ita |Italian |4120342 |405682 |4526024 |
|jav |Javanese|4120342 |829 |4121171 |
|jpn |Japanese|4120342 |2693177 |6813519 |
|kan |Kannada |4120342 |1156 |4121498 |
|kas |Kashmiri|4120342 |0 |4120342 |
|kat |Georgian|4120342 |0 |4120342 |
|kaz |Kazakh |4120342 |0 |4120342 |
|khk |Mongolian|4120342 |0 |4120342 |
|khm |Khmer |4120342 |0 |4120342 |
|kir |Kyrgyz |4120342 |0 |4120342 |
|kmr |Kurdish |4120342 |0 |4120342 |
|knc |Kanuri |8240684 |0 |8240684 |
|kor |Korean |4120342 |41011 |4161353 |
|lao |Lao |4120342 |0 |4120342 |
|lit |Lithuanian|4120342 |0 |4120342 |
|ltz |Luxembourgish|4120342 |0 |4120342 |
|lvs |Latvian |4120342 |0 |4120342 |
|mal |Malayalam|4120342 |4347 |4124689 |
|mar |Marathi |4120342 |3678 |4124020 |
|min |Minangkabau|6753788 |2000 |6755788 |
|mkd |Macedonian|4120342 |0 |4120342 |
|mlt |Maltese |4120342 |0 |4120342 |
|mni |Manipuri|4120342 |0 |4120342 |
|mri |Maori |4120342 |0 |4120342 |
|mya |Burmese |4120342 |0 |4120342 |
|nld |Dutch |4120342 |220181 |4340523 |
|nno |Norwegian|4120342 |0 |4120342 |
|nob |Norwegian|4120342 |0 |4120342 |
|npi |Nepali |4120342 |0 |4120342 |
|nso |Northern Sotho|4120342 |0 |4120342 |
|pbt |Pashto |4120342 |0 |4120342 |
|pes |Persian |4120342 |245520 |4365862 |
|plt |Malagasy|4120342 |0 |4120342 |
|pol |Polish |4120342 |332503 |4452845 |
|por |Portuguese|4120342 |287432 |4407774 |
|ron |Romanian|4120342 |36359 |4156701 |
|rus |Russian |4120342 |545920 |4666262 |
|sin |Sinhala |4120342 |195 |4120537 |
|slk |Slovak |4120342 |27845 |4148187 |
|slv |Slovenian|4120342 |25731 |4146073 |
|smo |Samoan |4120342 |0 |4120342 |
|sna |Shona |4120342 |3684 |4124026 |
|snd |Sindhi |4120342 |0 |4120342 |
|som |Somali |4120342 |2926 |4123268 |
|sot |Southern Sotho|4120342 |0 |4120342 |
|spa |Spanish |4120342 |379194 |4499536 |
|srp |Serbian |4120342 |77124 |4197466 |
|sun |Sundanese|4120342 |2208 |4122550 |
|swe |Swedish |4120342 |76486 |4196828 |
|swh |Swahili |4120342 |12726 |4133068 |
|tam |Tamil |4120342 |11462 |4131804 |
|taq |Tamasheq|4120342 |0 |4120342 |
|tel |Telugu |4120342 |477821 |4598163 |
|tgk |Tajik |4120342 |0 |4120342 |
|tha |Thai |4120342 |2125180 |6245522 |
|tur |Turkish |4120342 |59932 |4180274 |
|ukr |Ukrainian|4120342 |189384 |4309726 |
|urd |Urdu |4120342 |337739 |4458081 |
|uzn |Uzbek |4120342 |0 |4120342 |
|vie |Vietnamese|4120342 |42232 |4162574 |
|xho |Xhosa |4120342 |2952 |4123294 |
|ydd |Yiddish |4120342 |0 |4120342 |
|yor |Yoruba |4120342 |4907 |4125249 |
|yue |Chinese |4120342 |0 |4120342 |
|zho-Hans |Chinese |4120342 |54528 |4174870 |
|zho-Hant |Chinese |4120342 |0 |4120342 |
|zsm |Malay |4120342 |13950 |4134292 |
|zul |Zulu |4120342 |786 |4121128 |
|arq |Arabic |0 |6046 |6046 |
|ban |Balinese|0 |2000 |2000 |
|bbc |Toba Batak|0 |2000 |2000 |
|bem |Bemba |0 |776 |776 |
|fil |Filipino|0 |220 |220 |
|fon |Fon |0 |845 |845 |
|hrv |Croatian|0 |9007 |9007 |
|kin |Kinyarwanda|0 |11165 |11165 |
|lij |Ligurian|0 |6409 |6409 |
|mad |Madurese|0 |2000 |2000 |
|nij |Ngaju |0 |2000 |2000 |
|nor |Norwegian|0 |72352 |72352 |
|pan |Punjabi |0 |2156 |2156 |
|twi |Twi |0 |10840 |10840 |
|wol |Wolof |0 |785 |785 |
|zho |Chinese |0 |74972 |74972 |
PS: Templated data also includes Mozambican Portuguese, which doesn't have its own ISO language code.
</details>
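Each row of the table satisfies `translated + templated = total`; a quick consistency check using three rows copied from the table above:

```python
# (dialect code) -> (translated, templated, total), copied from the table above
rows = {
    "ace": (8_240_684, 2_000, 8_242_684),
    "arb": (6_424_999, 216_430, 6_641_429),
    "eng": (9_771_427, 8_066_678, 17_838_105),
}

for code, (translated, templated, total) in rows.items():
    assert translated + templated == total, code
print("all rows consistent")
```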
<br>
# Motivations & Intentions
- **Curation Rationale:** Automatic augmentation of existing datasets serves to enhance the available linguistic resources for multiple languages. The list of languages was initially established from mT5 and aligned with the annotators’ language list and NLLB translation model. The datasets were translated directly from English for all languages.
# Additional Information
## Provenance
- **Methods Used:** A combination of crowd-sourced templating and automatic translation was employed to source this dataset.
- **Methodology Details:**
- *Source:* Existing NLP datasets
- *Dates of Collection:* May 2023 - Dec 2023
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 02/2024
- *First Release:* 02/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
- **Contact Details:** https://cohere.com/research/aya
## Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Citation Information
```bibtex
@misc{singh2024aya,
title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
year={2024},
eprint={2402.06619},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
HuggingFaceGECLM/REDDIT_comments | HuggingFaceGECLM | "2023-03-17T07:52:51Z" | 12,759 | 11 | [
"task_categories:text-generation",
"task_ids:dialogue-modeling",
"task_ids:language-modeling",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:monolingual",
"language:en",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2001.08435",
"region:us",
"reddit",
"social-media"
] | [
"text-generation"
] | "2023-03-15T14:14:58Z" | ---
dataset_info:
features:
- name: archived
dtype: string
- name: author
dtype: string
- name: author_fullname
dtype: string
- name: body
dtype: string
- name: comment_type
dtype: string
- name: controversiality
dtype: string
- name: created_utc
dtype: string
- name: edited
dtype: string
- name: gilded
dtype: string
- name: id
dtype: string
- name: link_id
dtype: string
- name: locked
dtype: string
- name: name
dtype: string
- name: parent_id
dtype: string
- name: permalink
dtype: string
- name: retrieved_on
dtype: string
- name: score
dtype: string
- name: subreddit_id
dtype: string
- name: subreddit_name_prefixed
dtype: string
- name: subreddit_type
dtype: string
- name: total_awards_received
dtype: string
splits:
- name: programming
num_bytes: 3466623746
num_examples: 7503347
- name: tifu
num_bytes: 4761338653
num_examples: 12738669
- name: explainlikeimfive
num_bytes: 8451732573
num_examples: 16392814
- name: WritingPrompts
num_bytes: 4651591771
num_examples: 4436210
- name: changemyview
num_bytes: 8603031915
num_examples: 11600073
- name: LifeProTips
num_bytes: 5272994396
num_examples: 12829459
- name: todayilearned
num_bytes: 22655655241
num_examples: 60199778
- name: science
num_bytes: 7069809765
num_examples: 18112884
- name: askscience
num_bytes: 3144754665
num_examples: 6286702
- name: ifyoulikeblank
num_bytes: 547200329
num_examples: 1332211
- name: Foodforthought
num_bytes: 308377128
num_examples: 567900
- name: IWantToLearn
num_bytes: 408331672
num_examples: 745543
- name: bestof
num_bytes: 2003718831
num_examples: 4347522
- name: IAmA
num_bytes: 9380094090
num_examples: 25778822
- name: socialskills
num_bytes: 1000014402
num_examples: 1842733
- name: relationship_advice
num_bytes: 22298879735
num_examples: 38937398
- name: philosophy
num_bytes: 1494947876
num_examples: 2391695
- name: YouShouldKnow
num_bytes: 1165617658
num_examples: 2639265
- name: history
num_bytes: 1457852402
num_examples: 2962043
- name: books
num_bytes: 4562689426
num_examples: 10187495
- name: Showerthoughts
num_bytes: 13259109532
num_examples: 34123213
- name: personalfinance
num_bytes: 9484869588
num_examples: 18361314
- name: buildapc
num_bytes: 9801044390
num_examples: 21761801
- name: EatCheapAndHealthy
num_bytes: 853462012
num_examples: 1821897
- name: boardgames
num_bytes: 3131627378
num_examples: 6328926
- name: malefashionadvice
num_bytes: 2928017882
num_examples: 7712258
- name: femalefashionadvice
num_bytes: 1619784736
num_examples: 3262969
- name: scifi
num_bytes: 888152056
num_examples: 2193741
- name: Fantasy
num_bytes: 2285934538
num_examples: 4566639
- name: Games
num_bytes: 10396813188
num_examples: 23373965
- name: bodyweightfitness
num_bytes: 794549854
num_examples: 1613634
- name: SkincareAddiction
num_bytes: 3421122597
num_examples: 5660550
- name: podcasts
num_bytes: 464773126
num_examples: 943266
- name: suggestmeabook
num_bytes: 1842944304
num_examples: 3492937
- name: AskHistorians
num_bytes: 2244587909
num_examples: 2714353
- name: gaming
num_bytes: 28374513722
num_examples: 85729253
- name: DIY
num_bytes: 2113533684
num_examples: 4489265
- name: sports
num_bytes: 2230129132
num_examples: 6470079
- name: space
num_bytes: 3081499208
num_examples: 7896182
- name: gadgets
num_bytes: 1683252868
num_examples: 4104833
- name: Documentaries
num_bytes: 1852644771
num_examples: 4051474
- name: GetMotivated
num_bytes: 1211761267
num_examples: 3221980
- name: UpliftingNews
num_bytes: 2003149025
num_examples: 4741948
- name: technology
num_bytes: 10826871436
num_examples: 25404699
- name: Fitness
num_bytes: 6191132755
num_examples: 14319856
- name: travel
num_bytes: 1740556350
num_examples: 3806755
- name: lifehacks
num_bytes: 626791812
num_examples: 1799437
- name: Damnthatsinteresting
num_bytes: 6376694618
num_examples: 15643554
- name: gardening
num_bytes: 1825313940
num_examples: 4568468
- name: mildlyinteresting
num_bytes: 9079894206
num_examples: 26436769
download_size: 109177016105
dataset_size: 255339788158
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license: []
multilinguality:
- monolingual
pretty_name: Reddit comments
size_categories:
- 10B<n<100B
source_datasets: []
tags:
- reddit
- social-media
task_categories:
- text-generation
task_ids:
- dialogue-modeling
- language-modeling
---
# Dataset Card for "REDDIT_comments"
## Dataset Description
- **Homepage:**
- **Paper: https://arxiv.org/abs/2001.08435**
### Dataset Summary
Comments from 50 high-quality subreddits, extracted from the Reddit PushShift data dumps (from 2006 to January 2023).
### Supported Tasks
These comments can be used for text generation and language modeling, as well as dialogue modeling.
## Dataset Structure
### Data Splits
Each split corresponds to a specific subreddit in the following list: "tifu", "explainlikeimfive", "WritingPrompts", "changemyview", "LifeProTips", "todayilearned", "science", "askscience", "ifyoulikeblank", "Foodforthought", "IWantToLearn", "bestof", "IAmA", "socialskills", "relationship_advice", "philosophy", "YouShouldKnow", "history", "books", "Showerthoughts", "personalfinance", "buildapc", "EatCheapAndHealthy", "boardgames", "malefashionadvice", "femalefashionadvice", "scifi", "Fantasy", "Games", "bodyweightfitness", "SkincareAddiction", "podcasts", "suggestmeabook", "AskHistorians", "gaming", "DIY", "mildlyinteresting", "sports", "space", "gadgets", "Documentaries", "GetMotivated", "UpliftingNews", "technology", "Fitness", "travel", "lifehacks", "Damnthatsinteresting", "gardening", "programming"
## Dataset Creation
### Curation Rationale
All the information fields have been cast to string, as their formats change over time from one dump to the next. A reduced set of keys has been kept: "archived", "author", "author_fullname", "body", "comment_type", "controversiality", "created_utc", "edited", "gilded", "id", "link_id", "locked", "name", "parent_id", "permalink", "retrieved_on", "score", "subreddit", "subreddit_id", "subreddit_name_prefixed", "subreddit_type", "total_awards_received".
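Since every field is stored as a string, numeric and timestamp fields need to be parsed back before analysis. A minimal sketch — the sample comment below is invented; only the field names come from this card:

```python
from datetime import datetime, timezone

def parse_comment(raw):
    """Cast selected string fields of a raw comment back to native types."""
    return {
        **raw,
        "score": int(raw["score"]),
        "controversiality": int(raw["controversiality"]),
        "created": datetime.fromtimestamp(int(raw["created_utc"]), tz=timezone.utc),
    }

raw = {"body": "Nice write-up!", "score": "42", "controversiality": "0",
       "created_utc": "1609459200"}
comment = parse_comment(raw)
print(comment["score"], comment["created"].year)  # 42 2021
```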
### Source Data
The [Reddit PushShift data dumps](https://files.pushshift.io/reddit/) are part of a data collection effort which crawls Reddit at regular intervals, to extract and keep all its data.
#### Initial Data Collection and Normalization
See the paper.
#### Who are the source language producers?
Redditors are mostly young (65% below 30), male (70%), and American (50% of the site).
### Personal and Sensitive Information
The data contains Redditors' usernames associated with their content.
## Considerations for Using the Data
This dataset should be anonymized before any processing.
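As one possible (and deliberately simple) approach — a sketch, not a privacy guarantee — the username fields can be replaced with opaque digests before any processing; the sample comment below is invented:

```python
import hashlib

IDENTIFYING_FIELDS = ("author", "author_fullname")

def anonymize(comment, salt="change-me"):
    """Replace username fields with a short salted digest."""
    out = dict(comment)
    for field in IDENTIFYING_FIELDS:
        if field in out:
            out[field] = hashlib.sha256((salt + out[field]).encode()).hexdigest()[:12]
    return out

comment = {"author": "some_redditor", "author_fullname": "t2_abc123", "body": "hello"}
clean = anonymize(comment)
print(clean["body"], clean["author"] != comment["author"])  # hello True
```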
Though the subreddits selected are considered as being of higher quality, they can still reflect what you can find on the internet in terms of expressions of biases and toxicity.
### Contributions
Thanks to [@clefourrier](https://github.com/clefourrier) for adding this dataset. |
CropNet/CropNet | CropNet | "2024-11-03T21:59:02Z" | 12,722 | 14 | [
"language:en",
"license:cc-by-4.0",
"size_categories:n>1T",
"doi:10.57967/hf/3514",
"region:us",
"agriculture",
"climate"
] | null | "2023-10-08T17:59:29Z" | ---
license: cc-by-4.0
language:
- en
tags:
- agriculture
- climate
size_categories:
- n>1T
---
# An Open and Large-Scale Dataset for Multi-Modal Climate Change-aware Crop Yield Predictions
![Motivation](images/dataset-motivation.png)
The CropNet dataset is an open, large-scale, and deep learning-ready dataset, specifically targeting climate change-aware crop yield predictions for the contiguous United States (U.S.) continent at the county level. It is composed of three modalities of data, i.e., Sentinel-2 Imagery, WRF-HRRR Computed Dataset, and USDA Crop Dataset, aligned in both the spatial and temporal domains, for over 2200 U.S. counties spanning 6 years (2017-2022). It is expected to facilitate researchers in developing deep learning models for timely and precisely predicting crop yields at the county level, by accounting for the effects of both short-term growing season weather variations and long-term climate change on crop yields. Although our initial goal of crafting the CropNet dataset is for precise crop yield prediction, we believe its future applicability is broad and can benefit the deep learning, agriculture, and meteorology communities, for exploring more interesting, critical, and climate change-related applications, by using one or more modalities of data.
## Citation
If you use our dataset, please cite [our paper](https://dl.acm.org/doi/10.1145/3637528.3671536):
```
@inproceedings{fudong:kdd24:crop_net,
author = {Fudong Lin and Kaleb Guillot and Summer Crawford and Yihe Zhang and Xu Yuan and Nian{-}Feng Tzeng},
title = {An Open and Large-Scale Dataset for Multi-Modal Climate Change-aware Crop Yield Predictions},
booktitle = {Proceedings of the 30th {ACM} {SIGKDD} Conference on Knowledge Discovery and Data Mining (KDD)},
pages = {5375--5386},
year = {2024}
}
```
[Our MMST-ViT model](https://openaccess.thecvf.com/content/ICCV2023/papers/Lin_MMST-ViT_Climate_Change-aware_Crop_Yield_Prediction_via_Multi-Modal_Spatial-Temporal_Vision_ICCV_2023_paper.pdf) demonstrates how to develop deep-learning models for climate change-aware crop yield predictions. If you use MMST-ViT in your research, please cite our paper:
```
@inproceedings{fudong:iccv23:mmst_vit,
title={MMST-ViT: Climate Change-aware Crop Yield Prediction via Multi-Modal Spatial-Temporal Vision Transformer},
author={Lin, Fudong and Crawford, Summer and Guillot, Kaleb and Zhang, Yihe and Chen, Yan and Yuan, Xu and others},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={5774--5784},
year={2023}
}
```
## Contributions
#### The `CropNet` dataset
- The first *terabyte-sized*, publicly available, and multi-modal dataset for climate change-aware crop yield predictions
#### The `CropNet` package
- A *deep learning-ready* Python package for facilitating researchers in downloading the CropNet data on the fly over the time and region of interest, and developing deep neural networks (DNNs) for climate change-aware crop yield predictions
- The `CropNet` package is available at [Python Package Index (PyPI)](https://pypi.org/project/cropnet/)
## Tutorials
The tutorials for the CropNet dataset are available at Google Colab, with their links listed below
- [Sentinel-2 Imagery Tutorial](https://colab.research.google.com/drive/1Tj69JdhO7aX8ks-4UWYvHrFm9GB1PNCd?usp=sharing)
- [WRF-HRRR Computed Dataset Tutorial](https://colab.research.google.com/drive/14l-JSNHtelawNu3kVG_ukTd2WUJpaZEc?usp=sharing)
- [USDA Crop Dataset Tutorial](https://colab.research.google.com/drive/1U-vFoRyLSb2l2Q67LeGbkUKTeRaHDkkK?usp=sharing)
## The CropNet Dataset
Our CropNet dataset is composed of three modalities of data, i.e., Sentinel-2 Imagery, WRF-HRRR Computed Dataset, and USDA Crop Dataset, spanning from 2017 to 2022 (i.e., 6 years) across 2291 U.S. counties, with its geographic distribution illustrated below. We also include the number of counties corresponding to each crop type in the USDA Crop Dataset (see the rightmost bar chart in the figure) since crop planting is highly geography-dependent.
![Geographic Distribution](images/dataset-geo-overview-violet-pastel.png)
### Sentinel-2 Imagery
The Sentinel-2 Imagery, obtained from the Sentinel-2 mission, provides high-resolution satellite images for monitoring crop growth on the ground. It contains two types of 224x224 RGB satellite images, agriculture imagery (AG) and normalized difference vegetation index (NDVI), both with a spatial resolution of 9x9 km, and a revisit frequency of 14 days. Examples of AG and NDVI images are depicted as follows.
- **Agriculture Imagery (AG)**
![AG](images/dataset-Sentinel2-AG.png)
- **Normalized Difference Vegetation Index (NDVI)**
![NDVI](images/dataset-Sentinel2-NDVI.png)
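For reference, NDVI is a simple band ratio, $\mathrm{NDVI} = (\mathrm{NIR} - \mathrm{Red}) / (\mathrm{NIR} + \mathrm{Red})$. The sketch below is standard remote-sensing arithmetic, not part of the CropNet package, and the reflectance values are illustrative:

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, in [-1, 1]."""
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in the near-infrared band.
print(round(ndvi(0.45, 0.05), 3))  # 0.8
```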
### WRF-HRRR Computed Dataset
The WRF-HRRR Computed Dataset, sourced from the WRF-HRRR model, contains daily and monthly meteorological parameters, with the former and the latter designed for capturing direct effects of short-term growing season weather variations on crop growth, and for learning indirect impacts of long-term climate change on crop yields, respectively. It contains 9 meteorological parameters gridded at 9 km in a one-day (and one-month) interval. The figures show the temperature in the spring, the summer, the fall, and the winter, respectively.
![HRRR Temperature](images/dataset-HRRR-temperature.png)
### USDA Crop Dataset
The USDA Crop Dataset, collected from the USDA Quick Statistic website, offers valuable information, such as production, yield, etc., for crops grown at each available county. It offers crop information for four types of crops, i.e., corn, cotton, soybeans, and winter wheat, at a county-level basis, with a temporal resolution of one year. The figure illustrates the 2022 Corn Yield across the United States.
![USDA Corn Yield](images/dataset-corn-yield.png)
### The CropNet Package
Beyond the contribution of our CropNet dataset, we also release the CropNet package in the Python Package Index (PyPI) for facilitating researchers in downloading the CropNet data based on the time and region of interest, and flexibly building their deep learning models for accurate crop yield predictions. In particular, the CropNet package includes three types of APIs, listed as follows:
- **DataDownloader**: This API allows users to download the CropNet data over the time/region of interest on the fly.
- **DataRetriever**: With this API, users can conveniently obtain the CropNet data stored in the local machine (e.g., if you have downloaded our curated CropNet from Google Drive) over the time/region of interest.
- **DataLoader**: This API is designed to facilitate researchers in developing their DNNs for accurate crop yield predictions. Specifically, the code in this API (1) combines all three modalities of data to create $(\mathbf{x}, \mathbf{y_{s}}, \mathbf{y_{l}}, \mathbf{z})$ tuples, with $\mathbf{x}, \mathbf{y_{s}}, \mathbf{y_{l}}, \text{and}~ \mathbf{z}$ respectively representing satellite images, short-term daily weather parameters, long-term monthly meteorological parameters, and ground-truth crop yield (or production) information, and then (2) exposes those tuples via a `Dataset` object after appropriate data pre-processing.
### Installation
Researchers and practitioners can install the latest version of CropNet with the following commands:
```python
# Create and activate a conda environment
conda create -n cropnet_api python=3.10
conda activate cropnet_api
# Install the latest version of CropNet
pip install cropnet
# Solve the ecCodes library dependency issue
pip install ecmwflibs
```
### CropNet API Examples
- **Example 1: A DataDownloader Example for Downloading the Up-to-date CropNet Data**
Given the time and region (i.e., the FIPS codes for two U.S. counties) of interest, the following code presents how to utilize the **DataDownloader** to download the up-to-date CropNet data:
```python
from cropnet.data_downloader import DataDownloader
# Use the "target_dir" to specify where the data should be downloaded to
downloader = DataDownloader(target_dir="./data")
# Download 2022 USDA Soybean data
# Note that most of the 2023 USDA data are not yet available
downloader.download_USDA("Soybean", fips_codes=["10003", "22007"], years=["2022"])
# Download the 2023 (the 1st and 2nd quarters) Sentinel-2 Imagery
downloader.download_Sentinel2(fips_codes=["10003", "22007"], years=["2023"], image_type="AG")
downloader.download_Sentinel2(fips_codes=["10003", "22007"], years=["2023"], image_type="NDVI")
# Download the 2023 (January to July) WRF-HRRR data
downloader.download_HRRR(fips_codes=["10003", "22007"], years=["2023"])
```
- **Example 2: A DataRetriever Example for Obtaining Our Curated CropNet Data**
Given the time and region of interest, the following code shows how to use the **DataRetriever** to obtain the CropNet data stored in the local machine in a user-friendly format:
```python
# Use the "base_fir" to specify where the CropNet data is stored
retriever = DataRetriever(base_dir="/mnt/data/CropNet")
# Retrieve the 2022 USDA Soybean data
usda_data = retriever.retrieve_USDA(crop_type="Soybean", fips_codes=["10003", "22007"], years=["2022"])
# Retrieve the 2022 Sentinel-2 Imagery data
sentinel2_data = retriever.retrieve_Sentinel2(fips_codes=["10003", "22007"], years=["2022"], image_type="AG")
sentinel2_data = retriever.retrieve_Sentinel2(fips_codes=["10003", "22007"], years=["2022"], image_type="NDVI")
# Retrieve the 2022 WRF-HRRR data
hrrr_data = retriever.retrieve_HRRR(fips_codes=["10003","22007"], years=["2022"])
```
- **Example 3: A PyTorch Example for Using the DataLoader API for Training DNNs**
The following code presents a PyTorch example of training a deep learning model (i.e., MMST-ViT) for climate change-aware crop yield predictions, by utilizing the DataLoader APIs:
```python
import torch
from torch.utils.data import DataLoader
from models_mmst_vit import MMST_ViT
from cropnet.dataset.hrrr_computed_dataset import HRRRComputedDataset
from cropnet.dataset.sentinel2_imagery import Sentinel2Imagery
from cropnet.dataset.usda_crop_dataset import USDACropDataset
# The base directory for the CropNet dataset
base_dir = "/mnt/data/CropNet"
# The JSON configuration file
config_file = "data/soybeans_train.json"
# The dataloaders for each modality of data
sentinel2_loader = DataLoader(Sentinel2Imagery(base_dir, config_file), batch_size=1)
hrrr_loader = DataLoader(HRRRComputedDataset(base_dir, config_file), batch_size=1)
usda_loader = DataLoader(USDACropDataset(base_dir, config_file), batch_size=1)
# The model, the optimizer, and the loss function
model = MMST_ViT()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
criterion = torch.nn.MSELoss()
# Train the model for one epoch
for s, h, u in zip(sentinel2_loader, hrrr_loader, usda_loader):
# x: satellite images
# ys (or yl): short-term daily (or long-term monthly) weather parameters
# z: ground-truth crop yield (or production) information
    x, ys, yl, z = s[0], h[0], h[1], u[0]
optimizer.zero_grad()
z_hat = model(x, ys, yl)
loss = criterion(z, z_hat)
loss.backward()
optimizer.step()
```
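For reference, the MSE criterion minimized above can be written out by hand. This self-contained sketch (plain Python, no PyTorch; made-up yield numbers) shows the quantity `torch.nn.MSELoss` computes for a batch of ground-truth yields `z` and predictions `z_hat`:

```python
def mse(z, z_hat):
    """Mean squared error between ground-truth yields z and predictions z_hat."""
    assert len(z) == len(z_hat)
    return sum((a - b) ** 2 for a, b in zip(z, z_hat)) / len(z)

# Example: two counties, yields in bushels per acre (hypothetical numbers)
loss = mse([50.0, 42.0], [48.0, 45.0])
print(round(loss, 4))  # 6.5
```

Minimizing this quantity drives the predicted yields toward the USDA ground truth, which is why the training loop backpropagates through `criterion(z, z_hat)`.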
## License
CropNet is released under a [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/) license.
## Dataset Terms of Use
This dataset is available for research purposes only. By downloading, you agree to these terms. We are aware that unauthorized copies of our dataset have been republished on HuggingFace. Please note that any republication or distribution of this dataset without permission is prohibited and constitutes copyright infringement. |
asahi417/seamless-align-enA-jaA.speaker-embedding.xlsr-2b | asahi417 | "2024-06-24T08:26:07Z" | 12,701 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-12T06:54:19Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16310450373
num_examples: 2073
download_size: 16362791193
dataset_size: 16310450373
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14507997777
num_examples: 1961
download_size: 14554505952
dataset_size: 14507997777
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14129720831
num_examples: 1757
download_size: 14173975107
dataset_size: 14129720831
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14993273606
num_examples: 1873
download_size: 15040705190
dataset_size: 14993273606
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15427157956
num_examples: 1868
download_size: 15475633129
dataset_size: 15427157956
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15482076366
num_examples: 1879
download_size: 15530574846
dataset_size: 15482076366
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15390676969
num_examples: 1901
download_size: 15439431804
dataset_size: 15390676969
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15184436320
num_examples: 1875
download_size: 15231369480
dataset_size: 15184436320
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15272910298
num_examples: 1880
download_size: 15320755279
dataset_size: 15272910298
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14914011605
num_examples: 1854
download_size: 14961302329
dataset_size: 14914011605
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15180068428
num_examples: 1834
download_size: 15228372695
dataset_size: 15180068428
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14466035226
num_examples: 1770
download_size: 14511537163
dataset_size: 14466035226
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13101664907
num_examples: 1779
download_size: 13144308567
dataset_size: 13101664907
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15611661731
num_examples: 1908
download_size: 15659409356
dataset_size: 15611661731
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15278480945
num_examples: 1877
download_size: 15327096988
dataset_size: 15278480945
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15724141438
num_examples: 1924
download_size: 15772924053
dataset_size: 15724141438
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15575717731
num_examples: 1930
download_size: 15625880579
dataset_size: 15575717731
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15882425444
num_examples: 1940
download_size: 15931108324
dataset_size: 15882425444
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15508119355
num_examples: 1902
download_size: 15558088350
dataset_size: 15508119355
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15790391750
num_examples: 1910
download_size: 15839269103
dataset_size: 15790391750
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15442029172
num_examples: 1901
download_size: 15490765116
dataset_size: 15442029172
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15796646573
num_examples: 1911
download_size: 15846588578
dataset_size: 15796646573
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15363910769
num_examples: 1867
download_size: 15412415056
dataset_size: 15363910769
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14593616895
num_examples: 1916
download_size: 14640738281
dataset_size: 14593616895
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14564344621
num_examples: 1774
download_size: 14609587966
dataset_size: 14564344621
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15707595472
num_examples: 1895
download_size: 15757411615
dataset_size: 15707595472
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15173277199
num_examples: 1851
download_size: 15220532819
dataset_size: 15173277199
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15999251579
num_examples: 1923
download_size: 16048976124
dataset_size: 15999251579
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15601517059
num_examples: 1886
download_size: 15649292085
dataset_size: 15601517059
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15779512647
num_examples: 1928
download_size: 15829615675
dataset_size: 15779512647
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15356045351
num_examples: 1903
download_size: 15404217390
dataset_size: 15356045351
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15746607635
num_examples: 1902
download_size: 15796480801
dataset_size: 15746607635
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15375262005
num_examples: 1890
download_size: 15423317041
dataset_size: 15375262005
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14491367899
num_examples: 1752
download_size: 14536131741
dataset_size: 14491367899
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13188431829
num_examples: 1769
download_size: 13231483897
dataset_size: 13188431829
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15116628991
num_examples: 1830
download_size: 15164906379
dataset_size: 15116628991
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15597505674
num_examples: 1882
download_size: 15646436161
dataset_size: 15597505674
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15682009564
num_examples: 1918
download_size: 15731148123
dataset_size: 15682009564
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15253685409
num_examples: 1886
download_size: 15302350698
dataset_size: 15253685409
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15689301205
num_examples: 1912
download_size: 15739304449
dataset_size: 15689301205
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15581938369
num_examples: 1888
download_size: 15631772490
dataset_size: 15581938369
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15412182385
num_examples: 1875
download_size: 15460732037
dataset_size: 15412182385
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15360445304
num_examples: 1866
download_size: 15408941456
dataset_size: 15360445304
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15290751173
num_examples: 1863
download_size: 15339201722
dataset_size: 15290751173
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15256641544
num_examples: 1859
download_size: 15305102287
dataset_size: 15256641544
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12993946901
num_examples: 1734
download_size: 13035965994
dataset_size: 12993946901
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14712168776
num_examples: 1766
download_size: 14758529545
dataset_size: 14712168776
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15597842345
num_examples: 1865
download_size: 15647499851
dataset_size: 15597842345
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15493296202
num_examples: 1893
download_size: 15541947212
dataset_size: 15493296202
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15528880887
num_examples: 1894
download_size: 15578798622
dataset_size: 15528880887
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11610659779
num_examples: 1381
download_size: 11647759826
dataset_size: 11610659779
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14365778430
num_examples: 1914
download_size: 14412046441
dataset_size: 14365778430
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13652391626
num_examples: 1862
download_size: 13697579329
dataset_size: 13652391626
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14220432427
num_examples: 1875
download_size: 14266917233
dataset_size: 14220432427
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14758117694
num_examples: 1937
download_size: 14806166262
dataset_size: 14758117694
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14263981046
num_examples: 1917
download_size: 14310683240
dataset_size: 14263981046
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14854535769
num_examples: 1929
download_size: 14902477949
dataset_size: 14854535769
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14108138618
num_examples: 1877
download_size: 14154663716
dataset_size: 14108138618
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13644053955
num_examples: 1761
download_size: 13688549900
dataset_size: 13644053955
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13872299017
num_examples: 1850
download_size: 13916933531
dataset_size: 13872299017
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13582453701
num_examples: 1790
download_size: 13627175034
dataset_size: 13582453701
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13368705263
num_examples: 1758
download_size: 13412036987
dataset_size: 13368705263
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14867350268
num_examples: 1898
download_size: 14914077907
dataset_size: 14867350268
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14564112565
num_examples: 1943
download_size: 14612260797
dataset_size: 14564112565
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14403352351
num_examples: 1903
download_size: 14449996819
dataset_size: 14403352351
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14603029543
num_examples: 1912
download_size: 14650911489
dataset_size: 14603029543
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14673048791
num_examples: 1945
download_size: 14719897418
dataset_size: 14673048791
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14307378756
num_examples: 1899
download_size: 14353984563
dataset_size: 14307378756
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14710205080
num_examples: 1902
download_size: 14757936886
dataset_size: 14710205080
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14025445242
num_examples: 1805
download_size: 14071428675
dataset_size: 14025445242
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13945538622
num_examples: 1797
download_size: 13989843317
dataset_size: 13945538622
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13615084926
num_examples: 1757
download_size: 13659554271
dataset_size: 13615084926
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14564415851
num_examples: 1893
download_size: 14609576153
dataset_size: 14564415851
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14755481346
num_examples: 1928
download_size: 14803386359
dataset_size: 14755481346
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14405715735
num_examples: 1863
download_size: 14452030865
dataset_size: 14405715735
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14459656749
num_examples: 1855
download_size: 14505278929
dataset_size: 14459656749
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14747294818
num_examples: 1890
download_size: 14794943834
dataset_size: 14747294818
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14583795374
num_examples: 1899
download_size: 14631576455
dataset_size: 14583795374
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13461749342
num_examples: 1835
download_size: 13505049439
dataset_size: 13461749342
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14996728508
num_examples: 1931
download_size: 15043987691
dataset_size: 14996728508
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13998657085
num_examples: 1784
download_size: 14043192313
dataset_size: 13998657085
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13997152060
num_examples: 1797
download_size: 14041809773
dataset_size: 13997152060
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13745374844
num_examples: 1757
download_size: 13787997899
dataset_size: 13745374844
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14470227992
num_examples: 1831
download_size: 14516259581
dataset_size: 14470227992
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14540526118
num_examples: 1891
download_size: 14588272101
dataset_size: 14540526118
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14936006712
num_examples: 1897
download_size: 14983643800
dataset_size: 14936006712
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15007636946
num_examples: 1897
download_size: 15056520164
dataset_size: 15007636946
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15007698844
num_examples: 1902
download_size: 15056591062
dataset_size: 15007698844
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14872319895
num_examples: 1875
download_size: 14917259741
dataset_size: 14872319895
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14727185951
num_examples: 1987
download_size: 14775476702
dataset_size: 14727185951
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15197781964
num_examples: 1951
download_size: 15247032911
dataset_size: 15197781964
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14076247457
num_examples: 1752
download_size: 14121778451
dataset_size: 14076247457
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14166667429
num_examples: 1780
download_size: 14212377834
dataset_size: 14166667429
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14663043837
num_examples: 1846
download_size: 14710380722
dataset_size: 14663043837
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13650359450
num_examples: 1723
download_size: 13694095020
dataset_size: 13650359450
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14667394581
num_examples: 1866
download_size: 14714877043
dataset_size: 14667394581
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15126039375
num_examples: 1893
download_size: 15174838075
dataset_size: 15126039375
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15160658347
num_examples: 1924
download_size: 15208466401
dataset_size: 15160658347
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14776270381
num_examples: 1881
download_size: 14823232990
dataset_size: 14776270381
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14820738246
num_examples: 1887
download_size: 14868333441
dataset_size: 14820738246
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13509749274
num_examples: 1810
download_size: 13554686583
dataset_size: 13509749274
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15012692080
num_examples: 1885
download_size: 15061442178
dataset_size: 15012692080
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13825585736
num_examples: 1728
download_size: 13869424308
dataset_size: 13825585736
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14059852714
num_examples: 1787
download_size: 14104565757
dataset_size: 14059852714
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14347818528
num_examples: 1790
download_size: 14393558940
dataset_size: 14347818528
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14454950614
num_examples: 1812
download_size: 14500831889
dataset_size: 14454950614
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15012253440
num_examples: 1877
download_size: 15060938596
dataset_size: 15012253440
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15063107203
num_examples: 1890
download_size: 15111909056
dataset_size: 15063107203
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14991421779
num_examples: 1873
download_size: 15038857831
dataset_size: 14991421779
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15139716092
num_examples: 1883
download_size: 15188419081
dataset_size: 15139716092
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15398369854
num_examples: 1916
download_size: 15447262341
dataset_size: 15398369854
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13642415322
num_examples: 1832
download_size: 13685996881
dataset_size: 13642415322
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15176201337
num_examples: 1903
download_size: 15224791080
dataset_size: 15176201337
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13922358727
num_examples: 1736
download_size: 13966560038
dataset_size: 13922358727
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15371189233
num_examples: 1887
download_size: 15419825117
dataset_size: 15371189233
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13830427317
num_examples: 1736
download_size: 13874422671
dataset_size: 13830427317
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14723538586
num_examples: 1829
download_size: 14770698909
dataset_size: 14723538586
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15010254373
num_examples: 1862
download_size: 15058831818
dataset_size: 15010254373
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15247986194
num_examples: 1914
download_size: 15296025961
dataset_size: 15247986194
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15110877308
num_examples: 1874
download_size: 15159532855
dataset_size: 15110877308
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15037318101
num_examples: 1871
download_size: 15085976191
dataset_size: 15037318101
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15137911072
num_examples: 1891
download_size: 15186694496
dataset_size: 15137911072
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14896343344
num_examples: 2009
download_size: 14944904571
dataset_size: 14896343344
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14886044351
num_examples: 1885
download_size: 14933598111
dataset_size: 14886044351
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15488479968
num_examples: 1913
download_size: 15537299076
dataset_size: 15488479968
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15484176671
num_examples: 1910
download_size: 15532974397
dataset_size: 15484176671
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15204999706
num_examples: 1887
download_size: 15253724799
dataset_size: 15204999706
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15046661582
num_examples: 1867
download_size: 15095251446
dataset_size: 15046661582
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15018316453
num_examples: 1881
download_size: 15067060203
dataset_size: 15018316453
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15000057699
num_examples: 1862
download_size: 15047257993
dataset_size: 15000057699
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15201873116
num_examples: 1897
download_size: 15250090060
dataset_size: 15201873116
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15311639315
num_examples: 1900
download_size: 15360401050
dataset_size: 15311639315
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15309755677
num_examples: 1886
download_size: 15358419262
dataset_size: 15309755677
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14569778419
num_examples: 1977
download_size: 14618200962
dataset_size: 14569778419
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15290839824
num_examples: 1913
download_size: 15339744563
dataset_size: 15290839824
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15397885579
num_examples: 1913
download_size: 15446720083
dataset_size: 15397885579
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15031825290
num_examples: 1886
download_size: 15080581026
dataset_size: 15031825290
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15285895893
num_examples: 1875
download_size: 15334493674
dataset_size: 15285895893
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15343527930
num_examples: 1900
download_size: 15392308839
dataset_size: 15343527930
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15015741095
num_examples: 1867
download_size: 15064356988
dataset_size: 15015741095
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15460278621
num_examples: 1900
download_size: 15509015959
dataset_size: 15460278621
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15474609177
num_examples: 1899
download_size: 15523317437
dataset_size: 15474609177
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15471182493
num_examples: 1904
download_size: 15519934699
dataset_size: 15471182493
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: jaA.id
dtype: string
- name: jaA.laser_score
dtype: float64
- name: jaA.audio.speaker_embedding
sequence: float32
- name: jaA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15562076261
num_examples: 1901
download_size: 15612040624
dataset_size: 15562076261
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
ncoop57/mmmlu | ncoop57 | "2023-02-01T07:02:32Z" | 12,639 | 1 | [
"license:mit",
"size_categories:10K<n<100K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-01-24T23:50:14Z" | ---
license: mit
---
|
mlfoundations/MINT-1T-PDF-CC-2023-50 | mlfoundations | "2024-09-19T21:06:23Z" | 12,556 | 3 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:42:22Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-50`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co./collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences, such as [Idefics2](https://huggingface.co./HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co./Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co./facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using the dataset to train models that ingest or generate personally identifying information (such as images of people’s faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext for language identification, [PyMuPDF](https://github.com/pymupdf/PyMuPDF) for PDF parsing, and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
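As a rough illustration of the image filters in step 3, here is a minimal sketch. The numeric thresholds come from the description above; the function name, the `source` parameter, and the exact boundary handling (inclusive vs. exclusive) are assumptions for illustration only.

```python
def keep_image(width: int, height: int, source: str) -> bool:
    """Sketch of MINT-1T-style image size and aspect-ratio filtering.

    Thresholds follow the dataset card; boundary semantics are assumed.
    """
    # Remove images smaller than 150 pixels or larger than 20,000 pixels.
    if min(width, height) < 150 or max(width, height) > 20000:
        return False
    # Aspect-ratio thresholds: 2:1 for HTML, a looser 3:1 for PDFs
    # to preserve tall or wide scientific figures.
    max_ratio = 3.0 if source == "pdf" else 2.0
    ratio = max(width, height) / min(width, height)
    return ratio <= max_ratio
```

For example, a 1500×500 figure passes the PDF filter (3:1 ratio) but fails the stricter HTML filter.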
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
lmms-lab/LLaVA-Video-178K | lmms-lab | "2024-10-11T04:59:25Z" | 12,366 | 102 | [
"task_categories:visual-question-answering",
"task_categories:video-text-to-text",
"language:en",
"size_categories:1M<n<10M",
"modality:text",
"modality:video",
"arxiv:2410.02713",
"region:us",
"video"
] | [
"visual-question-answering",
"video-text-to-text"
] | "2024-08-27T07:09:50Z" | ---
configs:
- config_name: 0_30_s_academic_v0_1
data_files:
- split: caption
path: 0_30_s_academic_v0_1/*cap*.json
- split: open_ended
path: 0_30_s_academic_v0_1/*oe*.json
- split: multi_choice
path: 0_30_s_academic_v0_1/*mc*.json
- config_name: 0_30_s_youtube_v0_1
data_files:
- split: caption
path: 0_30_s_youtube_v0_1/*cap*.json
- split: open_ended
path: 0_30_s_youtube_v0_1/*oe*.json
- split: multi_choice
path: 0_30_s_youtube_v0_1/*mc*.json
- config_name: 0_30_s_activitynet
data_files:
- split: open_ended
path: 0_30_s_activitynet/*oe*.json
- config_name: 0_30_s_perceptiontest
data_files:
- split: multi_choice
path: 0_30_s_perceptiontest/*mc*.json
- config_name: 0_30_s_nextqa
data_files:
- split: open_ended
path: 0_30_s_nextqa/*oe*.json
- split: multi_choice
path: 0_30_s_nextqa/*mc*.json
- config_name: 30_60_s_academic_v0_1
data_files:
- split: caption
path: 30_60_s_academic_v0_1/*cap*.json
- split: open_ended
path: 30_60_s_academic_v0_1/*oe*.json
- split: multi_choice
path: 30_60_s_academic_v0_1/*mc*.json
- config_name: 30_60_s_youtube_v0_1
data_files:
- split: caption
path: 30_60_s_youtube_v0_1/*cap*.json
- split: open_ended
path: 30_60_s_youtube_v0_1/*oe*.json
- split: multi_choice
path: 30_60_s_youtube_v0_1/*mc*.json
- config_name: 30_60_s_activitynet
data_files:
- split: open_ended
path: 30_60_s_activitynet/*oe*.json
- config_name: 30_60_s_perceptiontest
data_files:
- split: multi_choice
path: 30_60_s_perceptiontest/*mc*.json
- config_name: 30_60_s_nextqa
data_files:
- split: open_ended
path: 30_60_s_nextqa/*oe*.json
- split: multi_choice
path: 30_60_s_nextqa/*mc*.json
- config_name: 1_2_m_youtube_v0_1
data_files:
- split: caption
path: 1_2_m_youtube_v0_1/*cap*.json
- split: open_ended
path: 1_2_m_youtube_v0_1/*oe*.json
- split: multi_choice
path: 1_2_m_youtube_v0_1/*mc*.json
- config_name: 1_2_m_academic_v0_1
data_files:
- split: caption
path: 1_2_m_academic_v0_1/*cap*.json
- split: open_ended
path: 1_2_m_academic_v0_1/*oe*.json
- split: multi_choice
path: 1_2_m_academic_v0_1/*mc*.json
- config_name: 1_2_m_activitynet
data_files:
- split: open_ended
path: 1_2_m_activitynet/*oe*.json
- config_name: 1_2_m_nextqa
data_files:
- split: open_ended
path: 1_2_m_nextqa/*oe*.json
- split: multi_choice
path: 1_2_m_nextqa/*mc*.json
- config_name: 2_3_m_youtube_v0_1
data_files:
- split: caption
path: 2_3_m_youtube_v0_1/*cap*.json
- split: open_ended
path: 2_3_m_youtube_v0_1/*oe*.json
- split: multi_choice
path: 2_3_m_youtube_v0_1/*mc*.json
- config_name: 2_3_m_academic_v0_1
data_files:
- split: caption
path: 2_3_m_academic_v0_1/*cap*.json
- split: open_ended
path: 2_3_m_academic_v0_1/*oe*.json
- split: multi_choice
path: 2_3_m_academic_v0_1/*mc*.json
- config_name: 2_3_m_activitynet
data_files:
- split: open_ended
path: 2_3_m_activitynet/*oe*.json
- config_name: 2_3_m_nextqa
data_files:
- split: open_ended
path: 2_3_m_nextqa/*oe*.json
- split: multi_choice
path: 2_3_m_nextqa/*mc*.json
- config_name: llava_hound
data_files:
- split: open_ended
path: llava_hound/sharegptvideo_qa_255k_processed.json
language:
- en
task_categories:
- visual-question-answering
- video-text-to-text
tags:
- video
---
# Dataset Card for LLaVA-Video-178K
## Dataset Description
- **Curated by:** Yuanhan Zhang, Jinming Wu, Wei Li
- **Language(s) (NLP):** English, Chinese
- **License:** Apache License 2.0
## Uses
This dataset is used for the training of the LLaVA-Video model. We only allow the use of this dataset for academic research and education purposes. For OpenAI GPT-4 generated data, we recommend the users to check the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
### Data Sources
For the training of LLaVA-Video, we utilized video-language data from five primary sources:
- **LLaVA-Video-178K**: This dataset includes **178,510** caption entries, 960,792 open-ended QA (question and answer) items, and 196,198 multiple-choice QA items. These data were newly annotated for this project.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_academic_v0_1 and LLaVA-Video-178K/XXX_youtube_v0_1.
- **NeXT-QA**: Comprises 17,090 open-ended QA items and 17,024 multiple-choice QA items.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_nextqa.
- **ActivityNetQA**: Includes 23,530 open-ended QA items.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_activitynetqa.
- **PerceptionTest**: Includes 1,803 open-ended QA items.
- We include this dataset in this repository: LLaVA-Video-178K/XXX_perceptiontest.
- **LLaVA-Hound**: Contains 240,000 open-ended QA items and 15,000 caption entries.
- The video data and annotations are available at the following URLs:
- Video data: [train_300k](https://huggingface.co./datasets/ShareGPTVideo/train_video_and_instruction/tree/main/train_300k)
- Annotation data: LLaVA-Video-178K/llava_hound
- loading function is specified here: [function](https://github.com/LLaVA-VL/LLaVA-NeXT/blob/7125e3654d88063cb467ed242db76f1e2b184d4c/llava/train/train.py#L1162)
The **LLaVA-Video-178K** dataset is the only contribution from this repository; we provide additional datasets for reproducing LLaVA-Video.
- **Project Page:** [Project Page](https://llava-vl.github.io/blog/2024-09-30-llava-video/).
- **Paper**: For more details, please check our [paper](https://arxiv.org/abs/2410.02713)
### Annotation Pipeline
The following directories are provided for generating captions and QA data:
- **Captions**: `LLaVA-Video-178K/gpt4o_caption_prompt`
- **QA**: `LLaVA-Video-178K/gpt4o_qa_prompt`
### The subset used in the LLaVA-OneVision
For LLaVA-OneVision, we include the captions and open-ended questions from the [0_30_s_academic_v0_1 split](https://huggingface.co./datasets/lmms-lab/LLaVA-Video-178K/tree/main/0_30_s_academic_v0_1), along with the 240,000 open-ended QA items and 15,000 caption entries from the LLaVA-Hound video data.
- [**0_30_s_academic_v0_1 caption**](https://huggingface.co./datasets/lmms-lab/LLaVA-Video-178K/blob/main/0_30_s_academic_v0_1/0_30_s_academic_v0_1_cap_processed.json)
- [**0_30_s_academic_v0_1 open-ended QA**](https://huggingface.co./datasets/lmms-lab/LLaVA-Video-178K/blob/main/0_30_s_academic_v0_1/0_30_s_academic_v0_1_cap_processed.json)
- **LLaVA-Hound**: Same as above.
## Citation
```bibtex
@misc{zhang2024videoinstructiontuningsynthetic,
title={Video Instruction Tuning With Synthetic Data},
author={Yuanhan Zhang and Jinming Wu and Wei Li and Bo Li and Zejun Ma and Ziwei Liu and Chunyuan Li},
year={2024},
eprint={2410.02713},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2410.02713},
}
```
## Dataset Card Contact
[Yuanhan Zhang](https://zhangyuanhan-ai.github.io/)
[Jinming Wu](https://scholar.google.com/citations?user=eh-XJIoAAAAJ&hl=zh-CN)
[Wei Li](https://scholar.google.com/citations?user=q8ZrKVIAAAAJ&hl=zh-CN) |
dair-ai/emotion | dair-ai | "2024-08-08T06:10:47Z" | 12,258 | 316 | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"emotion-classification"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- en
license:
- other
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
paperswithcode_id: emotion
pretty_name: Emotion
tags:
- emotion-classification
dataset_info:
- config_name: split
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 1741533
num_examples: 16000
- name: validation
num_bytes: 214695
num_examples: 2000
- name: test
num_bytes: 217173
num_examples: 2000
download_size: 1287193
dataset_size: 2173401
- config_name: unsplit
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': sadness
'1': joy
'2': love
'3': anger
'4': fear
'5': surprise
splits:
- name: train
num_bytes: 45444017
num_examples: 416809
download_size: 26888538
dataset_size: 45444017
configs:
- config_name: split
data_files:
- split: train
path: split/train-*
- split: validation
path: split/validation-*
- split: test
path: split/test-*
default: true
- config_name: unsplit
data_files:
- split: train
path: unsplit/train-*
train-eval-index:
- config: default
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "emotion"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/dair-ai/emotion_dataset](https://github.com/dair-ai/emotion_dataset)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 16.13 MB
- **Size of the generated dataset:** 47.62 MB
- **Total amount of disk used:** 63.75 MB
### Dataset Summary
Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness, and surprise. For more detailed information please refer to the paper.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
An example looks as follows.
```
{
"text": "im feeling quite sad and sorry for myself but ill snap out of it soon",
"label": 0
}
```
### Data Fields
The data fields are:
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `sadness` (0), `joy` (1), `love` (2), `anger` (3), `fear` (4), `surprise` (5).
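For convenience, the integer labels above can be mapped back to emotion names with a plain dict. This is a small helper sketch that mirrors the label order documented above; the `ID2LABEL` and `label_name` identifiers are illustrative, not part of the dataset itself.

```python
# Mapping mirrors the label order documented above.
ID2LABEL = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

def label_name(label_id: int) -> str:
    """Return the emotion name for an integer class label."""
    return ID2LABEL[label_id]
```

When loading through the `datasets` library, the same mapping should also be available from the dataset's `ClassLabel` feature.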
### Data Splits
The dataset has 2 configurations:
- split: with a total of 20_000 examples split into train, validation and test splits
- unsplit: with a total of 416_809 examples in a single train split
| name | train | validation | test |
|---------|-------:|-----------:|-----:|
| split | 16000 | 2000 | 2000 |
| unsplit | 416809 | n/a | n/a |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset should be used for educational and research purposes only.
### Citation Information
If you use this dataset, please cite:
```
@inproceedings{saravia-etal-2018-carer,
title = "{CARER}: Contextualized Affect Representations for Emotion Recognition",
author = "Saravia, Elvis and
Liu, Hsien-Chi Toby and
Huang, Yen-Hao and
Wu, Junlin and
Chen, Yi-Shin",
booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
month = oct # "-" # nov,
year = "2018",
address = "Brussels, Belgium",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D18-1404",
doi = "10.18653/v1/D18-1404",
pages = "3687--3697",
abstract = "Emotions are expressed in nuanced ways, which varies by collective or individual experiences, knowledge, and beliefs. Therefore, to understand emotion, as conveyed through text, a robust mechanism capable of capturing and modeling different linguistic nuances and phenomena is needed. We propose a semi-supervised, graph-based algorithm to produce rich structural descriptors which serve as the building blocks for constructing contextualized affect representations from text. The pattern-based representations are further enriched with word embeddings and evaluated through several emotion recognition tasks. Our experimental results demonstrate that the proposed method outperforms state-of-the-art techniques on emotion recognition tasks.",
}
```
### Contributions
Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
|
Helsinki-NLP/opus-100 | Helsinki-NLP | "2024-02-28T09:17:34Z" | 12,227 | 166 | [
"task_categories:translation",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:translation",
"source_datasets:extended",
"language:af",
"language:am",
"language:an",
"language:ar",
"language:as",
"language:az",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:bs",
"language:ca",
"language:cs",
"language:cy",
"language:da",
"language:de",
"language:dz",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gd",
"language:gl",
"language:gu",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kk",
"language:km",
"language:kn",
"language:ko",
"language:ku",
"language:ky",
"language:li",
"language:lt",
"language:lv",
"language:mg",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:nb",
"language:ne",
"language:nl",
"language:nn",
"language:no",
"language:oc",
"language:or",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:ro",
"language:ru",
"language:rw",
"language:se",
"language:sh",
"language:si",
"language:sk",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:ta",
"language:te",
"language:tg",
"language:th",
"language:tk",
"language:tr",
"language:tt",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:wa",
"language:xh",
"language:yi",
"language:yo",
"language:zh",
"language:zu",
"license:unknown",
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2004.11867",
"region:us"
] | [
"translation"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- af
- am
- an
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- dz
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- ig
- is
- it
- ja
- ka
- kk
- km
- kn
- ko
- ku
- ky
- li
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- mt
- my
- nb
- ne
- nl
- nn
- 'no'
- oc
- or
- pa
- pl
- ps
- pt
- ro
- ru
- rw
- se
- sh
- si
- sk
- sl
- sq
- sr
- sv
- ta
- te
- tg
- th
- tk
- tr
- tt
- ug
- uk
- ur
- uz
- vi
- wa
- xh
- yi
- yo
- zh
- zu
license:
- unknown
multilinguality:
- translation
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- 1M<n<10M
- n<1K
source_datasets:
- extended
task_categories:
- translation
task_ids: []
paperswithcode_id: opus-100
pretty_name: OPUS-100
config_names:
- af-en
- am-en
- an-en
- ar-de
- ar-en
- ar-fr
- ar-nl
- ar-ru
- ar-zh
- as-en
- az-en
- be-en
- bg-en
- bn-en
- br-en
- bs-en
- ca-en
- cs-en
- cy-en
- da-en
- de-en
- de-fr
- de-nl
- de-ru
- de-zh
- dz-en
- el-en
- en-eo
- en-es
- en-et
- en-eu
- en-fa
- en-fi
- en-fr
- en-fy
- en-ga
- en-gd
- en-gl
- en-gu
- en-ha
- en-he
- en-hi
- en-hr
- en-hu
- en-hy
- en-id
- en-ig
- en-is
- en-it
- en-ja
- en-ka
- en-kk
- en-km
- en-kn
- en-ko
- en-ku
- en-ky
- en-li
- en-lt
- en-lv
- en-mg
- en-mk
- en-ml
- en-mn
- en-mr
- en-ms
- en-mt
- en-my
- en-nb
- en-ne
- en-nl
- en-nn
- en-no
- en-oc
- en-or
- en-pa
- en-pl
- en-ps
- en-pt
- en-ro
- en-ru
- en-rw
- en-se
- en-sh
- en-si
- en-sk
- en-sl
- en-sq
- en-sr
- en-sv
- en-ta
- en-te
- en-tg
- en-th
- en-tk
- en-tr
- en-tt
- en-ug
- en-uk
- en-ur
- en-uz
- en-vi
- en-wa
- en-xh
- en-yi
- en-yo
- en-zh
- en-zu
- fr-nl
- fr-ru
- fr-zh
- nl-ru
- nl-zh
- ru-zh
dataset_info:
- config_name: af-en
features:
- name: translation
dtype:
translation:
languages:
- af
- en
splits:
- name: test
num_bytes: 135908
num_examples: 2000
- name: train
num_bytes: 18726247
num_examples: 275512
- name: validation
num_bytes: 132769
num_examples: 2000
download_size: 14852797
dataset_size: 18994924
- config_name: am-en
features:
- name: translation
dtype:
translation:
languages:
- am
- en
splits:
- name: test
num_bytes: 588021
num_examples: 2000
- name: train
num_bytes: 21950572
num_examples: 89027
- name: validation
num_bytes: 566069
num_examples: 2000
download_size: 12630031
dataset_size: 23104662
- config_name: an-en
features:
- name: translation
dtype:
translation:
languages:
- an
- en
splits:
- name: train
num_bytes: 438324
num_examples: 6961
download_size: 232976
dataset_size: 438324
- config_name: ar-de
features:
- name: translation
dtype:
translation:
languages:
- ar
- de
splits:
- name: test
num_bytes: 238591
num_examples: 2000
download_size: 161557
dataset_size: 238591
- config_name: ar-en
features:
- name: translation
dtype:
translation:
languages:
- ar
- en
splits:
- name: test
num_bytes: 331640
num_examples: 2000
- name: train
num_bytes: 152765684
num_examples: 1000000
- name: validation
num_bytes: 2272098
num_examples: 2000
download_size: 100486814
dataset_size: 155369422
- config_name: ar-fr
features:
- name: translation
dtype:
translation:
languages:
- ar
- fr
splits:
- name: test
num_bytes: 547374
num_examples: 2000
download_size: 334226
dataset_size: 547374
- config_name: ar-nl
features:
- name: translation
dtype:
translation:
languages:
- ar
- nl
splits:
- name: test
num_bytes: 212928
num_examples: 2000
download_size: 144863
dataset_size: 212928
- config_name: ar-ru
features:
- name: translation
dtype:
translation:
languages:
- ar
- ru
splits:
- name: test
num_bytes: 808262
num_examples: 2000
download_size: 441536
dataset_size: 808262
- config_name: ar-zh
features:
- name: translation
dtype:
translation:
languages:
- ar
- zh
splits:
- name: test
num_bytes: 713404
num_examples: 2000
download_size: 438598
dataset_size: 713404
- config_name: as-en
features:
- name: translation
dtype:
translation:
languages:
- as
- en
splits:
- name: test
num_bytes: 261458
num_examples: 2000
- name: train
num_bytes: 15634536
num_examples: 138479
- name: validation
num_bytes: 248131
num_examples: 2000
download_size: 8794616
dataset_size: 16144125
- config_name: az-en
features:
- name: translation
dtype:
translation:
languages:
- az
- en
splits:
- name: test
num_bytes: 393101
num_examples: 2000
- name: train
num_bytes: 56431043
num_examples: 262089
- name: validation
num_bytes: 407101
num_examples: 2000
download_size: 34988859
dataset_size: 57231245
- config_name: be-en
features:
- name: translation
dtype:
translation:
languages:
- be
- en
splits:
- name: test
num_bytes: 166850
num_examples: 2000
- name: train
num_bytes: 5298444
num_examples: 67312
- name: validation
num_bytes: 175197
num_examples: 2000
download_size: 3807669
dataset_size: 5640491
- config_name: bg-en
features:
- name: translation
dtype:
translation:
languages:
- bg
- en
splits:
- name: test
num_bytes: 243743
num_examples: 2000
- name: train
num_bytes: 108929547
num_examples: 1000000
- name: validation
num_bytes: 234840
num_examples: 2000
download_size: 71575310
dataset_size: 109408130
- config_name: bn-en
features:
- name: translation
dtype:
translation:
languages:
- bn
- en
splits:
- name: test
num_bytes: 510093
num_examples: 2000
- name: train
num_bytes: 249906046
num_examples: 1000000
- name: validation
num_bytes: 498406
num_examples: 2000
download_size: 134076596
dataset_size: 250914545
- config_name: br-en
features:
- name: translation
dtype:
translation:
languages:
- br
- en
splits:
- name: test
num_bytes: 127917
num_examples: 2000
- name: train
num_bytes: 8538878
num_examples: 153447
- name: validation
num_bytes: 133764
num_examples: 2000
download_size: 6881865
dataset_size: 8800559
- config_name: bs-en
features:
- name: translation
dtype:
translation:
languages:
- bs
- en
splits:
- name: test
num_bytes: 168614
num_examples: 2000
- name: train
num_bytes: 75082148
num_examples: 1000000
- name: validation
num_bytes: 172473
num_examples: 2000
download_size: 59514403
dataset_size: 75423235
- config_name: ca-en
features:
- name: translation
dtype:
translation:
languages:
- ca
- en
splits:
- name: test
num_bytes: 205658
num_examples: 2000
- name: train
num_bytes: 88404710
num_examples: 1000000
- name: validation
num_bytes: 212629
num_examples: 2000
download_size: 68438385
dataset_size: 88822997
- config_name: cs-en
features:
- name: translation
dtype:
translation:
languages:
- cs
- en
splits:
- name: test
num_bytes: 205266
num_examples: 2000
- name: train
num_bytes: 91896919
num_examples: 1000000
- name: validation
num_bytes: 219076
num_examples: 2000
download_size: 73028514
dataset_size: 92321261
- config_name: cy-en
features:
- name: translation
dtype:
translation:
languages:
- cy
- en
splits:
- name: test
num_bytes: 124281
num_examples: 2000
- name: train
num_bytes: 17244748
num_examples: 289521
- name: validation
num_bytes: 118848
num_examples: 2000
download_size: 13398765
dataset_size: 17487877
- config_name: da-en
features:
- name: translation
dtype:
translation:
languages:
- da
- en
splits:
- name: test
num_bytes: 298115
num_examples: 2000
- name: train
num_bytes: 126424474
num_examples: 1000000
- name: validation
num_bytes: 300616
num_examples: 2000
download_size: 91005252
dataset_size: 127023205
- config_name: de-en
features:
- name: translation
dtype:
translation:
languages:
- de
- en
splits:
- name: test
num_bytes: 330951
num_examples: 2000
- name: train
num_bytes: 152245956
num_examples: 1000000
- name: validation
num_bytes: 332342
num_examples: 2000
download_size: 116680890
dataset_size: 152909249
- config_name: de-fr
features:
- name: translation
dtype:
translation:
languages:
- de
- fr
splits:
- name: test
num_bytes: 458738
num_examples: 2000
download_size: 311929
dataset_size: 458738
- config_name: de-nl
features:
- name: translation
dtype:
translation:
languages:
- de
- nl
splits:
- name: test
num_bytes: 403878
num_examples: 2000
download_size: 281548
dataset_size: 403878
- config_name: de-ru
features:
- name: translation
dtype:
translation:
languages:
- de
- ru
splits:
- name: test
num_bytes: 315771
num_examples: 2000
download_size: 203225
dataset_size: 315771
- config_name: de-zh
features:
- name: translation
dtype:
translation:
languages:
- de
- zh
splits:
- name: test
num_bytes: 280389
num_examples: 2000
download_size: 215301
dataset_size: 280389
- config_name: dz-en
features:
- name: translation
dtype:
translation:
languages:
- dz
- en
splits:
- name: train
num_bytes: 81154
num_examples: 624
download_size: 37361
dataset_size: 81154
- config_name: el-en
features:
- name: translation
dtype:
translation:
languages:
- el
- en
splits:
- name: test
num_bytes: 302385
num_examples: 2000
- name: train
num_bytes: 127963903
num_examples: 1000000
- name: validation
num_bytes: 291226
num_examples: 2000
download_size: 84137722
dataset_size: 128557514
- config_name: en-eo
features:
- name: translation
dtype:
translation:
languages:
- en
- eo
splits:
- name: test
num_bytes: 167378
num_examples: 2000
- name: train
num_bytes: 24431681
num_examples: 337106
- name: validation
num_bytes: 168830
num_examples: 2000
download_size: 19545461
dataset_size: 24767889
- config_name: en-es
features:
- name: translation
dtype:
translation:
languages:
- en
- es
splits:
- name: test
num_bytes: 326262
num_examples: 2000
- name: train
num_bytes: 136643104
num_examples: 1000000
- name: validation
num_bytes: 326727
num_examples: 2000
download_size: 100103907
dataset_size: 137296093
- config_name: en-et
features:
- name: translation
dtype:
translation:
languages:
- en
- et
splits:
- name: test
num_bytes: 272163
num_examples: 2000
- name: train
num_bytes: 112298253
num_examples: 1000000
- name: validation
num_bytes: 276954
num_examples: 2000
download_size: 83690450
dataset_size: 112847370
- config_name: en-eu
features:
- name: translation
dtype:
translation:
languages:
- en
- eu
splits:
- name: test
num_bytes: 280877
num_examples: 2000
- name: train
num_bytes: 112329285
num_examples: 1000000
- name: validation
num_bytes: 281495
num_examples: 2000
download_size: 84805467
dataset_size: 112891657
- config_name: en-fa
features:
- name: translation
dtype:
translation:
languages:
- en
- fa
splits:
- name: test
num_bytes: 296548
num_examples: 2000
- name: train
num_bytes: 125400535
num_examples: 1000000
- name: validation
num_bytes: 291121
num_examples: 2000
download_size: 82783248
dataset_size: 125988204
- config_name: en-fi
features:
- name: translation
dtype:
translation:
languages:
- en
- fi
splits:
- name: test
num_bytes: 245814
num_examples: 2000
- name: train
num_bytes: 106024990
num_examples: 1000000
- name: validation
num_bytes: 247219
num_examples: 2000
download_size: 79320220
dataset_size: 106518023
- config_name: en-fr
features:
- name: translation
dtype:
translation:
languages:
- en
- fr
splits:
- name: test
num_bytes: 469723
num_examples: 2000
- name: train
num_bytes: 201440450
num_examples: 1000000
- name: validation
num_bytes: 481476
num_examples: 2000
download_size: 142251860
dataset_size: 202391649
- config_name: en-fy
features:
- name: translation
dtype:
translation:
languages:
- en
- fy
splits:
- name: test
num_bytes: 101238
num_examples: 2000
- name: train
num_bytes: 3895640
num_examples: 54342
- name: validation
num_bytes: 100121
num_examples: 2000
download_size: 2984283
dataset_size: 4096999
- config_name: en-ga
features:
- name: translation
dtype:
translation:
languages:
- en
- ga
splits:
- name: test
num_bytes: 503309
num_examples: 2000
- name: train
num_bytes: 42132510
num_examples: 289524
- name: validation
num_bytes: 503209
num_examples: 2000
download_size: 27937448
dataset_size: 43139028
- config_name: en-gd
features:
- name: translation
dtype:
translation:
languages:
- en
- gd
splits:
- name: test
num_bytes: 218354
num_examples: 1606
- name: train
num_bytes: 1254779
num_examples: 16316
- name: validation
num_bytes: 203877
num_examples: 1605
download_size: 1124506
dataset_size: 1677010
- config_name: en-gl
features:
- name: translation
dtype:
translation:
languages:
- en
- gl
splits:
- name: test
num_bytes: 190691
num_examples: 2000
- name: train
num_bytes: 43327028
num_examples: 515344
- name: validation
num_bytes: 193598
num_examples: 2000
download_size: 34084028
dataset_size: 43711317
- config_name: en-gu
features:
- name: translation
dtype:
translation:
languages:
- en
- gu
splits:
- name: test
num_bytes: 199725
num_examples: 2000
- name: train
num_bytes: 33641719
num_examples: 318306
- name: validation
num_bytes: 205542
num_examples: 2000
download_size: 19235779
dataset_size: 34046986
- config_name: en-ha
features:
- name: translation
dtype:
translation:
languages:
- en
- ha
splits:
- name: test
num_bytes: 407344
num_examples: 2000
- name: train
num_bytes: 20391884
num_examples: 97983
- name: validation
num_bytes: 411518
num_examples: 2000
download_size: 12686187
dataset_size: 21210746
- config_name: en-he
features:
- name: translation
dtype:
translation:
languages:
- en
- he
splits:
- name: test
num_bytes: 208467
num_examples: 2000
- name: train
num_bytes: 91159631
num_examples: 1000000
- name: validation
num_bytes: 209438
num_examples: 2000
download_size: 61144758
dataset_size: 91577536
- config_name: en-hi
features:
- name: translation
dtype:
translation:
languages:
- en
- hi
splits:
- name: test
num_bytes: 496570
num_examples: 2000
- name: train
num_bytes: 124923545
num_examples: 534319
- name: validation
num_bytes: 474079
num_examples: 2000
download_size: 65725886
dataset_size: 125894194
- config_name: en-hr
features:
- name: translation
dtype:
translation:
languages:
- en
- hr
splits:
- name: test
num_bytes: 179636
num_examples: 2000
- name: train
num_bytes: 75309516
num_examples: 1000000
- name: validation
num_bytes: 179615
num_examples: 2000
download_size: 59468892
dataset_size: 75668767
- config_name: en-hu
features:
- name: translation
dtype:
translation:
languages:
- en
- hu
splits:
- name: test
num_bytes: 206039
num_examples: 2000
- name: train
num_bytes: 87483462
num_examples: 1000000
- name: validation
num_bytes: 208307
num_examples: 2000
download_size: 67971116
dataset_size: 87897808
- config_name: en-hy
features:
- name: translation
dtype:
translation:
languages:
- en
- hy
splits:
- name: train
num_bytes: 652623
num_examples: 7059
download_size: 422847
dataset_size: 652623
- config_name: en-id
features:
- name: translation
dtype:
translation:
languages:
- en
- id
splits:
- name: test
num_bytes: 177685
num_examples: 2000
- name: train
num_bytes: 78698973
num_examples: 1000000
- name: validation
num_bytes: 180024
num_examples: 2000
download_size: 57693678
dataset_size: 79056682
- config_name: en-ig
features:
- name: translation
dtype:
translation:
languages:
- en
- ig
splits:
- name: test
num_bytes: 137324
num_examples: 1843
- name: train
num_bytes: 1612523
num_examples: 18415
- name: validation
num_bytes: 135987
num_examples: 1843
download_size: 859440
dataset_size: 1885834
- config_name: en-is
features:
- name: translation
dtype:
translation:
languages:
- en
- is
splits:
- name: test
num_bytes: 170879
num_examples: 2000
- name: train
num_bytes: 73964115
num_examples: 1000000
- name: validation
num_bytes: 170632
num_examples: 2000
download_size: 56242149
dataset_size: 74305626
- config_name: en-it
features:
- name: translation
dtype:
translation:
languages:
- en
- it
splits:
- name: test
num_bytes: 299029
num_examples: 2000
- name: train
num_bytes: 123654286
num_examples: 1000000
- name: validation
num_bytes: 294354
num_examples: 2000
download_size: 92133897
dataset_size: 124247669
- config_name: en-ja
features:
- name: translation
dtype:
translation:
languages:
- en
- ja
splits:
- name: test
num_bytes: 190991
num_examples: 2000
- name: train
num_bytes: 88348569
num_examples: 1000000
- name: validation
num_bytes: 191411
num_examples: 2000
download_size: 64817108
dataset_size: 88730971
- config_name: en-ka
features:
- name: translation
dtype:
translation:
languages:
- en
- ka
splits:
- name: test
num_bytes: 256219
num_examples: 2000
- name: train
num_bytes: 42465402
num_examples: 377306
- name: validation
num_bytes: 260408
num_examples: 2000
download_size: 24394633
dataset_size: 42982029
- config_name: en-kk
features:
- name: translation
dtype:
translation:
languages:
- en
- kk
splits:
- name: test
num_bytes: 137656
num_examples: 2000
- name: train
num_bytes: 7124314
num_examples: 79927
- name: validation
num_bytes: 139657
num_examples: 2000
download_size: 4808360
dataset_size: 7401627
- config_name: en-km
features:
- name: translation
dtype:
translation:
languages:
- en
- km
splits:
- name: test
num_bytes: 289019
num_examples: 2000
- name: train
num_bytes: 19680515
num_examples: 111483
- name: validation
num_bytes: 302519
num_examples: 2000
download_size: 10022919
dataset_size: 20272053
- config_name: en-kn
features:
- name: translation
dtype:
translation:
languages:
- en
- kn
splits:
- name: test
num_bytes: 77197
num_examples: 918
- name: train
num_bytes: 1833318
num_examples: 14537
- name: validation
num_bytes: 77599
num_examples: 917
download_size: 1062554
dataset_size: 1988114
- config_name: en-ko
features:
- name: translation
dtype:
translation:
languages:
- en
- ko
splits:
- name: test
num_bytes: 190688
num_examples: 2000
- name: train
num_bytes: 93664532
num_examples: 1000000
- name: validation
num_bytes: 189360
num_examples: 2000
download_size: 70383271
dataset_size: 94044580
- config_name: en-ku
features:
- name: translation
dtype:
translation:
languages:
- en
- ku
splits:
- name: test
num_bytes: 247839
num_examples: 2000
- name: train
num_bytes: 49107744
num_examples: 144844
- name: validation
num_bytes: 239317
num_examples: 2000
download_size: 25358389
dataset_size: 49594900
- config_name: en-ky
features:
- name: translation
dtype:
translation:
languages:
- en
- ky
splits:
- name: test
num_bytes: 142522
num_examples: 2000
- name: train
num_bytes: 1879274
num_examples: 27215
- name: validation
num_bytes: 138479
num_examples: 2000
download_size: 1338686
dataset_size: 2160275
- config_name: en-li
features:
- name: translation
dtype:
translation:
languages:
- en
- li
splits:
- name: test
num_bytes: 93342
num_examples: 2000
- name: train
num_bytes: 1628577
num_examples: 25535
- name: validation
num_bytes: 92898
num_examples: 2000
download_size: 1040760
dataset_size: 1814817
- config_name: en-lt
features:
- name: translation
dtype:
translation:
languages:
- en
- lt
splits:
- name: test
num_bytes: 482607
num_examples: 2000
- name: train
num_bytes: 177060244
num_examples: 1000000
- name: validation
num_bytes: 469109
num_examples: 2000
download_size: 124444053
dataset_size: 178011960
- config_name: en-lv
features:
- name: translation
dtype:
translation:
languages:
- en
- lv
splits:
- name: test
num_bytes: 536568
num_examples: 2000
- name: train
num_bytes: 206051049
num_examples: 1000000
- name: validation
num_bytes: 522064
num_examples: 2000
download_size: 140538527
dataset_size: 207109681
- config_name: en-mg
features:
- name: translation
dtype:
translation:
languages:
- en
- mg
splits:
- name: test
num_bytes: 525059
num_examples: 2000
- name: train
num_bytes: 130865169
num_examples: 590771
- name: validation
num_bytes: 511163
num_examples: 2000
download_size: 91102165
dataset_size: 131901391
- config_name: en-mk
features:
- name: translation
dtype:
translation:
languages:
- en
- mk
splits:
- name: test
num_bytes: 308926
num_examples: 2000
- name: train
num_bytes: 117068689
num_examples: 1000000
- name: validation
num_bytes: 305490
num_examples: 2000
download_size: 76810811
dataset_size: 117683105
- config_name: en-ml
features:
- name: translation
dtype:
translation:
languages:
- en
- ml
splits:
- name: test
num_bytes: 340618
num_examples: 2000
- name: train
num_bytes: 199971079
num_examples: 822746
- name: validation
num_bytes: 334451
num_examples: 2000
download_size: 95497482
dataset_size: 200646148
- config_name: en-mn
features:
- name: translation
dtype:
translation:
languages:
- en
- mn
splits:
- name: train
num_bytes: 250770
num_examples: 4294
download_size: 85037
dataset_size: 250770
- config_name: en-mr
features:
- name: translation
dtype:
translation:
languages:
- en
- mr
splits:
- name: test
num_bytes: 238604
num_examples: 2000
- name: train
num_bytes: 2724107
num_examples: 27007
- name: validation
num_bytes: 235532
num_examples: 2000
download_size: 1838618
dataset_size: 3198243
- config_name: en-ms
features:
- name: translation
dtype:
translation:
languages:
- en
- ms
splits:
- name: test
num_bytes: 179697
num_examples: 2000
- name: train
num_bytes: 76828845
num_examples: 1000000
- name: validation
num_bytes: 180175
num_examples: 2000
download_size: 57412836
dataset_size: 77188717
- config_name: en-mt
features:
- name: translation
dtype:
translation:
languages:
- en
- mt
splits:
- name: test
num_bytes: 566126
num_examples: 2000
- name: train
num_bytes: 222221596
num_examples: 1000000
- name: validation
num_bytes: 594378
num_examples: 2000
download_size: 147836637
dataset_size: 223382100
- config_name: en-my
features:
- name: translation
dtype:
translation:
languages:
- en
- my
splits:
- name: test
num_bytes: 337343
num_examples: 2000
- name: train
num_bytes: 3673477
num_examples: 24594
- name: validation
num_bytes: 336147
num_examples: 2000
download_size: 1952573
dataset_size: 4346967
- config_name: en-nb
features:
- name: translation
dtype:
translation:
languages:
- en
- nb
splits:
- name: test
num_bytes: 334109
num_examples: 2000
- name: train
num_bytes: 13611589
num_examples: 142906
- name: validation
num_bytes: 324392
num_examples: 2000
download_size: 10630769
dataset_size: 14270090
- config_name: en-ne
features:
- name: translation
dtype:
translation:
languages:
- en
- ne
splits:
- name: test
num_bytes: 186519
num_examples: 2000
- name: train
num_bytes: 44135952
num_examples: 406381
- name: validation
num_bytes: 204912
num_examples: 2000
download_size: 24107523
dataset_size: 44527383
- config_name: en-nl
features:
- name: translation
dtype:
translation:
languages:
- en
- nl
splits:
- name: test
num_bytes: 282747
num_examples: 2000
- name: train
num_bytes: 112326273
num_examples: 1000000
- name: validation
num_bytes: 270932
num_examples: 2000
download_size: 82923916
dataset_size: 112879952
- config_name: en-nn
features:
- name: translation
dtype:
translation:
languages:
- en
- nn
splits:
- name: test
num_bytes: 178999
num_examples: 2000
- name: train
num_bytes: 32924429
num_examples: 486055
- name: validation
num_bytes: 187642
num_examples: 2000
download_size: 25184676
dataset_size: 33291070
- config_name: en-no
features:
- name: translation
dtype:
translation:
languages:
- en
- 'no'
splits:
- name: test
num_bytes: 173320
num_examples: 2000
- name: train
num_bytes: 74105483
num_examples: 1000000
- name: validation
num_bytes: 178005
num_examples: 2000
download_size: 56277000
dataset_size: 74456808
- config_name: en-oc
features:
- name: translation
dtype:
translation:
languages:
- en
- oc
splits:
- name: test
num_bytes: 82342
num_examples: 2000
- name: train
num_bytes: 1627174
num_examples: 35791
- name: validation
num_bytes: 81642
num_examples: 2000
download_size: 1308338
dataset_size: 1791158
- config_name: en-or
features:
- name: translation
dtype:
translation:
languages:
- en
- or
splits:
- name: test
num_bytes: 163939
num_examples: 1318
- name: train
num_bytes: 1500733
num_examples: 14273
- name: validation
num_bytes: 155323
num_examples: 1317
download_size: 1019971
dataset_size: 1819995
- config_name: en-pa
features:
- name: translation
dtype:
translation:
languages:
- en
- pa
splits:
- name: test
num_bytes: 133901
num_examples: 2000
- name: train
num_bytes: 8509140
num_examples: 107296
- name: validation
num_bytes: 136188
num_examples: 2000
download_size: 5315298
dataset_size: 8779229
- config_name: en-pl
features:
- name: translation
dtype:
translation:
languages:
- en
- pl
splits:
- name: test
num_bytes: 212495
num_examples: 2000
- name: train
num_bytes: 95247723
num_examples: 1000000
- name: validation
num_bytes: 218208
num_examples: 2000
download_size: 73574044
dataset_size: 95678426
- config_name: en-ps
features:
- name: translation
dtype:
translation:
languages:
- en
- ps
splits:
- name: test
num_bytes: 92995
num_examples: 2000
- name: train
num_bytes: 4436512
num_examples: 79127
- name: validation
num_bytes: 95156
num_examples: 2000
download_size: 2851899
dataset_size: 4624663
- config_name: en-pt
features:
- name: translation
dtype:
translation:
languages:
- en
- pt
splits:
- name: test
num_bytes: 296114
num_examples: 2000
- name: train
num_bytes: 118242849
num_examples: 1000000
- name: validation
num_bytes: 292074
num_examples: 2000
download_size: 87661907
dataset_size: 118831037
- config_name: en-ro
features:
- name: translation
dtype:
translation:
languages:
- en
- ro
splits:
- name: test
num_bytes: 198639
num_examples: 2000
- name: train
num_bytes: 85249051
num_examples: 1000000
- name: validation
num_bytes: 199164
num_examples: 2000
download_size: 66294317
dataset_size: 85646854
- config_name: en-ru
features:
- name: translation
dtype:
translation:
languages:
- en
- ru
splits:
- name: test
num_bytes: 490976
num_examples: 2000
- name: train
num_bytes: 195100937
num_examples: 1000000
- name: validation
num_bytes: 490238
num_examples: 2000
download_size: 124460816
dataset_size: 196082151
- config_name: en-rw
features:
- name: translation
dtype:
translation:
languages:
- en
- rw
splits:
- name: test
num_bytes: 136189
num_examples: 2000
- name: train
num_bytes: 15286159
num_examples: 173823
- name: validation
num_bytes: 134957
num_examples: 2000
download_size: 10093708
dataset_size: 15557305
- config_name: en-se
features:
- name: translation
dtype:
translation:
languages:
- en
- se
splits:
- name: test
num_bytes: 85697
num_examples: 2000
- name: train
num_bytes: 2047380
num_examples: 35907
- name: validation
num_bytes: 83664
num_examples: 2000
download_size: 1662845
dataset_size: 2216741
- config_name: en-sh
features:
- name: translation
dtype:
translation:
languages:
- en
- sh
splits:
- name: test
num_bytes: 569479
num_examples: 2000
- name: train
num_bytes: 60900023
num_examples: 267211
- name: validation
num_bytes: 555594
num_examples: 2000
download_size: 39988454
dataset_size: 62025096
- config_name: en-si
features:
- name: translation
dtype:
translation:
languages:
- en
- si
splits:
- name: test
num_bytes: 271735
num_examples: 2000
- name: train
num_bytes: 114950891
num_examples: 979109
- name: validation
num_bytes: 271236
num_examples: 2000
download_size: 66124160
dataset_size: 115493862
- config_name: en-sk
features:
- name: translation
dtype:
translation:
languages:
- en
- sk
splits:
- name: test
num_bytes: 258034
num_examples: 2000
- name: train
num_bytes: 111743068
num_examples: 1000000
- name: validation
num_bytes: 255462
num_examples: 2000
download_size: 85223330
dataset_size: 112256564
- config_name: en-sl
features:
- name: translation
dtype:
translation:
languages:
- en
- sl
splits:
- name: test
num_bytes: 205470
num_examples: 2000
- name: train
num_bytes: 90270157
num_examples: 1000000
- name: validation
num_bytes: 198654
num_examples: 2000
download_size: 70708189
dataset_size: 90674281
- config_name: en-sq
features:
- name: translation
dtype:
translation:
languages:
- en
- sq
splits:
- name: test
num_bytes: 275371
num_examples: 2000
- name: train
num_bytes: 105745181
num_examples: 1000000
- name: validation
num_bytes: 267304
num_examples: 2000
download_size: 78817895
dataset_size: 106287856
- config_name: en-sr
features:
- name: translation
dtype:
translation:
languages:
- en
- sr
splits:
- name: test
num_bytes: 180224
num_examples: 2000
- name: train
num_bytes: 75726035
num_examples: 1000000
- name: validation
num_bytes: 184238
num_examples: 2000
download_size: 60263688
dataset_size: 76090497
- config_name: en-sv
features:
- name: translation
dtype:
translation:
languages:
- en
- sv
splits:
- name: test
num_bytes: 271006
num_examples: 2000
- name: train
num_bytes: 116985153
num_examples: 1000000
- name: validation
num_bytes: 279986
num_examples: 2000
download_size: 85032127
dataset_size: 117536145
- config_name: en-ta
features:
- name: translation
dtype:
translation:
languages:
- en
- ta
splits:
- name: test
num_bytes: 351982
num_examples: 2000
- name: train
num_bytes: 74044340
num_examples: 227014
- name: validation
num_bytes: 335549
num_examples: 2000
download_size: 33642694
dataset_size: 74731871
- config_name: en-te
features:
- name: translation
dtype:
translation:
languages:
- en
- te
splits:
- name: test
num_bytes: 190587
num_examples: 2000
- name: train
num_bytes: 6688569
num_examples: 64352
- name: validation
num_bytes: 193658
num_examples: 2000
download_size: 4047667
dataset_size: 7072814
- config_name: en-tg
features:
- name: translation
dtype:
translation:
languages:
- en
- tg
splits:
- name: test
num_bytes: 372112
num_examples: 2000
- name: train
num_bytes: 35477017
num_examples: 193882
- name: validation
num_bytes: 371720
num_examples: 2000
download_size: 21242668
dataset_size: 36220849
- config_name: en-th
features:
- name: translation
dtype:
translation:
languages:
- en
- th
splits:
- name: test
num_bytes: 290573
num_examples: 2000
- name: train
num_bytes: 132820231
num_examples: 1000000
- name: validation
num_bytes: 288358
num_examples: 2000
download_size: 75539987
dataset_size: 133399162
- config_name: en-tk
features:
- name: translation
dtype:
translation:
languages:
- en
- tk
splits:
- name: test
num_bytes: 83878
num_examples: 1852
- name: train
num_bytes: 719617
num_examples: 13110
- name: validation
num_bytes: 81006
num_examples: 1852
download_size: 417756
dataset_size: 884501
- config_name: en-tr
features:
- name: translation
dtype:
translation:
languages:
- en
- tr
splits:
- name: test
num_bytes: 183825
num_examples: 2000
- name: train
num_bytes: 78945565
num_examples: 1000000
- name: validation
num_bytes: 181909
num_examples: 2000
download_size: 60364921
dataset_size: 79311299
- config_name: en-tt
features:
- name: translation
dtype:
translation:
languages:
- en
- tt
splits:
- name: test
num_bytes: 693268
num_examples: 2000
- name: train
num_bytes: 35313170
num_examples: 100843
- name: validation
num_bytes: 701662
num_examples: 2000
download_size: 18786998
dataset_size: 36708100
- config_name: en-ug
features:
- name: translation
dtype:
translation:
languages:
- en
- ug
splits:
- name: test
num_bytes: 620873
num_examples: 2000
- name: train
num_bytes: 31576516
num_examples: 72170
- name: validation
num_bytes: 631228
num_examples: 2000
download_size: 16011372
dataset_size: 32828617
- config_name: en-uk
features:
- name: translation
dtype:
translation:
languages:
- en
- uk
splits:
- name: test
num_bytes: 249742
num_examples: 2000
- name: train
num_bytes: 104229556
num_examples: 1000000
- name: validation
num_bytes: 247123
num_examples: 2000
download_size: 71155682
dataset_size: 104726421
- config_name: en-ur
features:
- name: translation
dtype:
translation:
languages:
- en
- ur
splits:
- name: test
num_bytes: 538556
num_examples: 2000
- name: train
num_bytes: 268960696
num_examples: 753913
- name: validation
num_bytes: 529308
num_examples: 2000
download_size: 148336044
dataset_size: 270028560
- config_name: en-uz
features:
- name: translation
dtype:
translation:
languages:
- en
- uz
splits:
- name: test
num_bytes: 408675
num_examples: 2000
- name: train
num_bytes: 38375290
num_examples: 173157
- name: validation
num_bytes: 398853
num_examples: 2000
download_size: 21873536
dataset_size: 39182818
- config_name: en-vi
features:
- name: translation
dtype:
translation:
languages:
- en
- vi
splits:
- name: test
num_bytes: 192744
num_examples: 2000
- name: train
num_bytes: 82614470
num_examples: 1000000
- name: validation
num_bytes: 194721
num_examples: 2000
download_size: 59250852
dataset_size: 83001935
- config_name: en-wa
features:
- name: translation
dtype:
translation:
languages:
- en
- wa
splits:
- name: test
num_bytes: 87091
num_examples: 2000
- name: train
num_bytes: 6085860
num_examples: 104496
- name: validation
num_bytes: 87718
num_examples: 2000
download_size: 4512204
dataset_size: 6260669
- config_name: en-xh
features:
- name: translation
dtype:
translation:
languages:
- en
- xh
splits:
- name: test
num_bytes: 318652
num_examples: 2000
- name: train
num_bytes: 50606896
num_examples: 439671
- name: validation
num_bytes: 315831
num_examples: 2000
download_size: 37519365
dataset_size: 51241379
- config_name: en-yi
features:
- name: translation
dtype:
translation:
languages:
- en
- yi
splits:
- name: test
num_bytes: 96482
num_examples: 2000
- name: train
num_bytes: 1275127
num_examples: 15010
- name: validation
num_bytes: 99818
num_examples: 2000
download_size: 650530
dataset_size: 1471427
- config_name: en-yo
features:
- name: translation
dtype:
translation:
languages:
- en
- yo
splits:
- name: train
num_bytes: 979753
num_examples: 10375
download_size: 391299
dataset_size: 979753
- config_name: en-zh
features:
- name: translation
dtype:
translation:
languages:
- en
- zh
splits:
- name: test
num_bytes: 511364
num_examples: 2000
- name: train
num_bytes: 200062183
num_examples: 1000000
- name: validation
num_bytes: 512356
num_examples: 2000
download_size: 143414756
dataset_size: 201085903
- config_name: en-zu
features:
- name: translation
dtype:
translation:
languages:
- en
- zu
splits:
- name: test
num_bytes: 117510
num_examples: 2000
- name: train
num_bytes: 2799558
num_examples: 38616
- name: validation
num_bytes: 120133
num_examples: 2000
download_size: 1918443
dataset_size: 3037201
- config_name: fr-nl
features:
- name: translation
dtype:
translation:
languages:
- fr
- nl
splits:
- name: test
num_bytes: 368638
num_examples: 2000
download_size: 261290
dataset_size: 368638
- config_name: fr-ru
features:
- name: translation
dtype:
translation:
languages:
- fr
- ru
splits:
- name: test
num_bytes: 732716
num_examples: 2000
download_size: 426179
dataset_size: 732716
- config_name: fr-zh
features:
- name: translation
dtype:
translation:
languages:
- fr
- zh
splits:
- name: test
num_bytes: 619386
num_examples: 2000
download_size: 418661
dataset_size: 619386
- config_name: nl-ru
features:
- name: translation
dtype:
translation:
languages:
- nl
- ru
splits:
- name: test
num_bytes: 256059
num_examples: 2000
download_size: 168666
dataset_size: 256059
- config_name: nl-zh
features:
- name: translation
dtype:
translation:
languages:
- nl
- zh
splits:
- name: test
num_bytes: 183633
num_examples: 2000
download_size: 146191
dataset_size: 183633
- config_name: ru-zh
features:
- name: translation
dtype:
translation:
languages:
- ru
- zh
splits:
- name: test
num_bytes: 916106
num_examples: 2000
download_size: 534430
dataset_size: 916106
configs:
- config_name: af-en
data_files:
- split: test
path: af-en/test-*
- split: train
path: af-en/train-*
- split: validation
path: af-en/validation-*
- config_name: am-en
data_files:
- split: test
path: am-en/test-*
- split: train
path: am-en/train-*
- split: validation
path: am-en/validation-*
- config_name: an-en
data_files:
- split: train
path: an-en/train-*
- config_name: ar-de
data_files:
- split: test
path: ar-de/test-*
- config_name: ar-en
data_files:
- split: test
path: ar-en/test-*
- split: train
path: ar-en/train-*
- split: validation
path: ar-en/validation-*
- config_name: ar-fr
data_files:
- split: test
path: ar-fr/test-*
- config_name: ar-nl
data_files:
- split: test
path: ar-nl/test-*
- config_name: ar-ru
data_files:
- split: test
path: ar-ru/test-*
- config_name: ar-zh
data_files:
- split: test
path: ar-zh/test-*
- config_name: as-en
data_files:
- split: test
path: as-en/test-*
- split: train
path: as-en/train-*
- split: validation
path: as-en/validation-*
- config_name: az-en
data_files:
- split: test
path: az-en/test-*
- split: train
path: az-en/train-*
- split: validation
path: az-en/validation-*
- config_name: be-en
data_files:
- split: test
path: be-en/test-*
- split: train
path: be-en/train-*
- split: validation
path: be-en/validation-*
- config_name: bg-en
data_files:
- split: test
path: bg-en/test-*
- split: train
path: bg-en/train-*
- split: validation
path: bg-en/validation-*
- config_name: bn-en
data_files:
- split: test
path: bn-en/test-*
- split: train
path: bn-en/train-*
- split: validation
path: bn-en/validation-*
- config_name: br-en
data_files:
- split: test
path: br-en/test-*
- split: train
path: br-en/train-*
- split: validation
path: br-en/validation-*
- config_name: bs-en
data_files:
- split: test
path: bs-en/test-*
- split: train
path: bs-en/train-*
- split: validation
path: bs-en/validation-*
- config_name: ca-en
data_files:
- split: test
path: ca-en/test-*
- split: train
path: ca-en/train-*
- split: validation
path: ca-en/validation-*
- config_name: cs-en
data_files:
- split: test
path: cs-en/test-*
- split: train
path: cs-en/train-*
- split: validation
path: cs-en/validation-*
- config_name: cy-en
data_files:
- split: test
path: cy-en/test-*
- split: train
path: cy-en/train-*
- split: validation
path: cy-en/validation-*
- config_name: da-en
data_files:
- split: test
path: da-en/test-*
- split: train
path: da-en/train-*
- split: validation
path: da-en/validation-*
- config_name: de-en
data_files:
- split: test
path: de-en/test-*
- split: train
path: de-en/train-*
- split: validation
path: de-en/validation-*
- config_name: de-fr
data_files:
- split: test
path: de-fr/test-*
- config_name: de-nl
data_files:
- split: test
path: de-nl/test-*
- config_name: de-ru
data_files:
- split: test
path: de-ru/test-*
- config_name: de-zh
data_files:
- split: test
path: de-zh/test-*
- config_name: dz-en
data_files:
- split: train
path: dz-en/train-*
- config_name: el-en
data_files:
- split: test
path: el-en/test-*
- split: train
path: el-en/train-*
- split: validation
path: el-en/validation-*
- config_name: en-eo
data_files:
- split: test
path: en-eo/test-*
- split: train
path: en-eo/train-*
- split: validation
path: en-eo/validation-*
- config_name: en-es
data_files:
- split: test
path: en-es/test-*
- split: train
path: en-es/train-*
- split: validation
path: en-es/validation-*
- config_name: en-et
data_files:
- split: test
path: en-et/test-*
- split: train
path: en-et/train-*
- split: validation
path: en-et/validation-*
- config_name: en-eu
data_files:
- split: test
path: en-eu/test-*
- split: train
path: en-eu/train-*
- split: validation
path: en-eu/validation-*
- config_name: en-fa
data_files:
- split: test
path: en-fa/test-*
- split: train
path: en-fa/train-*
- split: validation
path: en-fa/validation-*
- config_name: en-fi
data_files:
- split: test
path: en-fi/test-*
- split: train
path: en-fi/train-*
- split: validation
path: en-fi/validation-*
- config_name: en-fr
data_files:
- split: test
path: en-fr/test-*
- split: train
path: en-fr/train-*
- split: validation
path: en-fr/validation-*
- config_name: en-fy
data_files:
- split: test
path: en-fy/test-*
- split: train
path: en-fy/train-*
- split: validation
path: en-fy/validation-*
- config_name: en-ga
data_files:
- split: test
path: en-ga/test-*
- split: train
path: en-ga/train-*
- split: validation
path: en-ga/validation-*
- config_name: en-gd
data_files:
- split: test
path: en-gd/test-*
- split: train
path: en-gd/train-*
- split: validation
path: en-gd/validation-*
- config_name: en-gl
data_files:
- split: test
path: en-gl/test-*
- split: train
path: en-gl/train-*
- split: validation
path: en-gl/validation-*
- config_name: en-gu
data_files:
- split: test
path: en-gu/test-*
- split: train
path: en-gu/train-*
- split: validation
path: en-gu/validation-*
- config_name: en-ha
data_files:
- split: test
path: en-ha/test-*
- split: train
path: en-ha/train-*
- split: validation
path: en-ha/validation-*
- config_name: en-he
data_files:
- split: test
path: en-he/test-*
- split: train
path: en-he/train-*
- split: validation
path: en-he/validation-*
- config_name: en-hi
data_files:
- split: test
path: en-hi/test-*
- split: train
path: en-hi/train-*
- split: validation
path: en-hi/validation-*
- config_name: en-hr
data_files:
- split: test
path: en-hr/test-*
- split: train
path: en-hr/train-*
- split: validation
path: en-hr/validation-*
- config_name: en-hu
data_files:
- split: test
path: en-hu/test-*
- split: train
path: en-hu/train-*
- split: validation
path: en-hu/validation-*
- config_name: en-hy
data_files:
- split: train
path: en-hy/train-*
- config_name: en-id
data_files:
- split: test
path: en-id/test-*
- split: train
path: en-id/train-*
- split: validation
path: en-id/validation-*
- config_name: en-ig
data_files:
- split: test
path: en-ig/test-*
- split: train
path: en-ig/train-*
- split: validation
path: en-ig/validation-*
- config_name: en-is
data_files:
- split: test
path: en-is/test-*
- split: train
path: en-is/train-*
- split: validation
path: en-is/validation-*
- config_name: en-it
data_files:
- split: test
path: en-it/test-*
- split: train
path: en-it/train-*
- split: validation
path: en-it/validation-*
- config_name: en-ja
data_files:
- split: test
path: en-ja/test-*
- split: train
path: en-ja/train-*
- split: validation
path: en-ja/validation-*
- config_name: en-ka
data_files:
- split: test
path: en-ka/test-*
- split: train
path: en-ka/train-*
- split: validation
path: en-ka/validation-*
- config_name: en-kk
data_files:
- split: test
path: en-kk/test-*
- split: train
path: en-kk/train-*
- split: validation
path: en-kk/validation-*
- config_name: en-km
data_files:
- split: test
path: en-km/test-*
- split: train
path: en-km/train-*
- split: validation
path: en-km/validation-*
- config_name: en-kn
data_files:
- split: test
path: en-kn/test-*
- split: train
path: en-kn/train-*
- split: validation
path: en-kn/validation-*
- config_name: en-ko
data_files:
- split: test
path: en-ko/test-*
- split: train
path: en-ko/train-*
- split: validation
path: en-ko/validation-*
- config_name: en-ku
data_files:
- split: test
path: en-ku/test-*
- split: train
path: en-ku/train-*
- split: validation
path: en-ku/validation-*
- config_name: en-ky
data_files:
- split: test
path: en-ky/test-*
- split: train
path: en-ky/train-*
- split: validation
path: en-ky/validation-*
- config_name: en-li
data_files:
- split: test
path: en-li/test-*
- split: train
path: en-li/train-*
- split: validation
path: en-li/validation-*
- config_name: en-lt
data_files:
- split: test
path: en-lt/test-*
- split: train
path: en-lt/train-*
- split: validation
path: en-lt/validation-*
- config_name: en-lv
data_files:
- split: test
path: en-lv/test-*
- split: train
path: en-lv/train-*
- split: validation
path: en-lv/validation-*
- config_name: en-mg
data_files:
- split: test
path: en-mg/test-*
- split: train
path: en-mg/train-*
- split: validation
path: en-mg/validation-*
- config_name: en-mk
data_files:
- split: test
path: en-mk/test-*
- split: train
path: en-mk/train-*
- split: validation
path: en-mk/validation-*
- config_name: en-ml
data_files:
- split: test
path: en-ml/test-*
- split: train
path: en-ml/train-*
- split: validation
path: en-ml/validation-*
- config_name: en-mn
data_files:
- split: train
path: en-mn/train-*
- config_name: en-mr
data_files:
- split: test
path: en-mr/test-*
- split: train
path: en-mr/train-*
- split: validation
path: en-mr/validation-*
- config_name: en-ms
data_files:
- split: test
path: en-ms/test-*
- split: train
path: en-ms/train-*
- split: validation
path: en-ms/validation-*
- config_name: en-mt
data_files:
- split: test
path: en-mt/test-*
- split: train
path: en-mt/train-*
- split: validation
path: en-mt/validation-*
- config_name: en-my
data_files:
- split: test
path: en-my/test-*
- split: train
path: en-my/train-*
- split: validation
path: en-my/validation-*
- config_name: en-nb
data_files:
- split: test
path: en-nb/test-*
- split: train
path: en-nb/train-*
- split: validation
path: en-nb/validation-*
- config_name: en-ne
data_files:
- split: test
path: en-ne/test-*
- split: train
path: en-ne/train-*
- split: validation
path: en-ne/validation-*
- config_name: en-nl
data_files:
- split: test
path: en-nl/test-*
- split: train
path: en-nl/train-*
- split: validation
path: en-nl/validation-*
- config_name: en-nn
data_files:
- split: test
path: en-nn/test-*
- split: train
path: en-nn/train-*
- split: validation
path: en-nn/validation-*
- config_name: en-no
data_files:
- split: test
path: en-no/test-*
- split: train
path: en-no/train-*
- split: validation
path: en-no/validation-*
- config_name: en-oc
data_files:
- split: test
path: en-oc/test-*
- split: train
path: en-oc/train-*
- split: validation
path: en-oc/validation-*
- config_name: en-or
data_files:
- split: test
path: en-or/test-*
- split: train
path: en-or/train-*
- split: validation
path: en-or/validation-*
- config_name: en-pa
data_files:
- split: test
path: en-pa/test-*
- split: train
path: en-pa/train-*
- split: validation
path: en-pa/validation-*
- config_name: en-pl
data_files:
- split: test
path: en-pl/test-*
- split: train
path: en-pl/train-*
- split: validation
path: en-pl/validation-*
- config_name: en-ps
data_files:
- split: test
path: en-ps/test-*
- split: train
path: en-ps/train-*
- split: validation
path: en-ps/validation-*
- config_name: en-pt
data_files:
- split: test
path: en-pt/test-*
- split: train
path: en-pt/train-*
- split: validation
path: en-pt/validation-*
- config_name: en-ro
data_files:
- split: test
path: en-ro/test-*
- split: train
path: en-ro/train-*
- split: validation
path: en-ro/validation-*
- config_name: en-ru
data_files:
- split: test
path: en-ru/test-*
- split: train
path: en-ru/train-*
- split: validation
path: en-ru/validation-*
- config_name: en-rw
data_files:
- split: test
path: en-rw/test-*
- split: train
path: en-rw/train-*
- split: validation
path: en-rw/validation-*
- config_name: en-se
data_files:
- split: test
path: en-se/test-*
- split: train
path: en-se/train-*
- split: validation
path: en-se/validation-*
- config_name: en-sh
data_files:
- split: test
path: en-sh/test-*
- split: train
path: en-sh/train-*
- split: validation
path: en-sh/validation-*
- config_name: en-si
data_files:
- split: test
path: en-si/test-*
- split: train
path: en-si/train-*
- split: validation
path: en-si/validation-*
- config_name: en-sk
data_files:
- split: test
path: en-sk/test-*
- split: train
path: en-sk/train-*
- split: validation
path: en-sk/validation-*
- config_name: en-sl
data_files:
- split: test
path: en-sl/test-*
- split: train
path: en-sl/train-*
- split: validation
path: en-sl/validation-*
- config_name: en-sq
data_files:
- split: test
path: en-sq/test-*
- split: train
path: en-sq/train-*
- split: validation
path: en-sq/validation-*
- config_name: en-sr
data_files:
- split: test
path: en-sr/test-*
- split: train
path: en-sr/train-*
- split: validation
path: en-sr/validation-*
- config_name: en-sv
data_files:
- split: test
path: en-sv/test-*
- split: train
path: en-sv/train-*
- split: validation
path: en-sv/validation-*
- config_name: en-ta
data_files:
- split: test
path: en-ta/test-*
- split: train
path: en-ta/train-*
- split: validation
path: en-ta/validation-*
- config_name: en-te
data_files:
- split: test
path: en-te/test-*
- split: train
path: en-te/train-*
- split: validation
path: en-te/validation-*
- config_name: en-tg
data_files:
- split: test
path: en-tg/test-*
- split: train
path: en-tg/train-*
- split: validation
path: en-tg/validation-*
- config_name: en-th
data_files:
- split: test
path: en-th/test-*
- split: train
path: en-th/train-*
- split: validation
path: en-th/validation-*
- config_name: en-tk
data_files:
- split: test
path: en-tk/test-*
- split: train
path: en-tk/train-*
- split: validation
path: en-tk/validation-*
- config_name: en-tr
data_files:
- split: test
path: en-tr/test-*
- split: train
path: en-tr/train-*
- split: validation
path: en-tr/validation-*
- config_name: en-tt
data_files:
- split: test
path: en-tt/test-*
- split: train
path: en-tt/train-*
- split: validation
path: en-tt/validation-*
- config_name: en-ug
data_files:
- split: test
path: en-ug/test-*
- split: train
path: en-ug/train-*
- split: validation
path: en-ug/validation-*
- config_name: en-uk
data_files:
- split: test
path: en-uk/test-*
- split: train
path: en-uk/train-*
- split: validation
path: en-uk/validation-*
- config_name: en-ur
data_files:
- split: test
path: en-ur/test-*
- split: train
path: en-ur/train-*
- split: validation
path: en-ur/validation-*
- config_name: en-uz
data_files:
- split: test
path: en-uz/test-*
- split: train
path: en-uz/train-*
- split: validation
path: en-uz/validation-*
- config_name: en-vi
data_files:
- split: test
path: en-vi/test-*
- split: train
path: en-vi/train-*
- split: validation
path: en-vi/validation-*
- config_name: en-wa
data_files:
- split: test
path: en-wa/test-*
- split: train
path: en-wa/train-*
- split: validation
path: en-wa/validation-*
- config_name: en-xh
data_files:
- split: test
path: en-xh/test-*
- split: train
path: en-xh/train-*
- split: validation
path: en-xh/validation-*
- config_name: en-yi
data_files:
- split: test
path: en-yi/test-*
- split: train
path: en-yi/train-*
- split: validation
path: en-yi/validation-*
- config_name: en-yo
data_files:
- split: train
path: en-yo/train-*
- config_name: en-zh
data_files:
- split: test
path: en-zh/test-*
- split: train
path: en-zh/train-*
- split: validation
path: en-zh/validation-*
- config_name: en-zu
data_files:
- split: test
path: en-zu/test-*
- split: train
path: en-zu/train-*
- split: validation
path: en-zu/validation-*
- config_name: fr-nl
data_files:
- split: test
path: fr-nl/test-*
- config_name: fr-ru
data_files:
- split: test
path: fr-ru/test-*
- config_name: fr-zh
data_files:
- split: test
path: fr-zh/test-*
- config_name: nl-ru
data_files:
- split: test
path: nl-ru/test-*
- config_name: nl-zh
data_files:
- split: test
path: nl-zh/test-*
- config_name: ru-zh
data_files:
- split: test
path: ru-zh/test-*
---
# Dataset Card for OPUS-100
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://opus.nlpl.eu/OPUS-100
- **Repository:** https://github.com/EdinburghNLP/opus-100-corpus
- **Paper:** https://arxiv.org/abs/2004.11867
- **Paper:** https://aclanthology.org/L10-1473/
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
OPUS-100 is an English-centric multilingual corpus covering 100 languages (including English): every training pair includes English on either the source or the target side.
The languages were selected based on the volume of parallel data available in OPUS.
### Supported Tasks and Leaderboards
Translation.
### Languages
OPUS-100 contains approximately 55M sentence pairs. Of the 99 language pairs, 44 have 1M sentence pairs of training data, 73 have at least 100k, and 95 have at least 10k.
## Dataset Structure
### Data Instances
```
{
"translation": {
"ca": "El departament de bombers té el seu propi equip d'investigació.",
"en": "Well, the fire department has its own investigative unit."
}
}
```
### Data Fields
- `translation` (`dict`): Parallel sentences for the pair of languages.
### Data Splits
The dataset is split into training, development, and test portions. Data was prepared by randomly sampling up to 1M sentence pairs per language pair for training and up to 2,000 each for development and test. To ensure that there was no overlap (at the monolingual sentence level) between the training and development/test data, a filter was applied during sampling to exclude sentences that had already been sampled. Note that this filtering was done cross-lingually, so that, for instance, an English sentence in the Portuguese-English portion of the training data could not occur in the Hindi-English test set.
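The cross-lingual overlap filter described above can be sketched as follows. This is an illustrative assumption, not the authors' exact implementation: the fill order (test, then validation, then train) and the shared `seen` set of monolingual sentences are choices made for the sketch.

```python
import random

def sample_splits(pairs, train_cap=1_000_000, eval_cap=2000, seen=None, seed=0):
    """Sample train/dev/test for one language pair, skipping any sentence
    already used in another split or (via a shared `seen` set) another pair.

    `pairs` is a list of (source, target) sentence tuples.
    """
    rng = random.Random(seed)
    # Shared across language pairs to enforce the cross-lingual constraint.
    seen = set() if seen is None else seen
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    splits = {"test": [], "validation": [], "train": []}
    caps = {"test": eval_cap, "validation": eval_cap, "train": train_cap}
    for src, tgt in shuffled:
        # Skip pairs whose monolingual sentences were already sampled anywhere.
        if src in seen or tgt in seen:
            continue
        for name in ("test", "validation", "train"):
            if len(splits[name]) < caps[name]:
                splits[name].append((src, tgt))
                seen.update((src, tgt))
                break
    return splits
```

Passing the same `seen` set into successive calls for different language pairs reproduces the cross-lingual guarantee: an English sentence sampled for one pair's training data cannot reappear in another pair's test set.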
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
[More Information Needed]
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
If you use this corpus, please cite the paper:
```bibtex
@inproceedings{zhang-etal-2020-improving,
title = "Improving Massively Multilingual Neural Machine Translation and Zero-Shot Translation",
author = "Zhang, Biao and
Williams, Philip and
Titov, Ivan and
Sennrich, Rico",
editor = "Jurafsky, Dan and
Chai, Joyce and
Schluter, Natalie and
Tetreault, Joel",
booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
month = jul,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.acl-main.148",
doi = "10.18653/v1/2020.acl-main.148",
pages = "1628--1639",
}
```
and, please, also acknowledge OPUS:
```bibtex
@inproceedings{tiedemann-2012-parallel,
title = "Parallel Data, Tools and Interfaces in {OPUS}",
author = {Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Choukri, Khalid and
Declerck, Thierry and
Do{\u{g}}an, Mehmet U{\u{g}}ur and
Maegaard, Bente and
Mariani, Joseph and
Moreno, Asuncion and
Odijk, Jan and
Piperidis, Stelios",
booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
month = may,
year = "2012",
address = "Istanbul, Turkey",
publisher = "European Language Resources Association (ELRA)",
url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/463_Paper.pdf",
pages = "2214--2218",
}
```
### Contributions
Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset. |
mteb/scifact | mteb | "2024-03-02T19:11:40Z" | 12,196 | 3 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"source_datasets:scifact",
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"text-retrieval"
] | [
"text-retrieval"
] | "2024-02-26T15:56:04Z" | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- scifact
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 24585
num_examples: 919
- name: test
num_bytes: 9092
num_examples: 339
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 7874970
num_examples: 5183
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 111225
num_examples: 1109
configs:
- config_name: default
data_files:
- split: train
path: qrels/train.jsonl
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
--- |
orionweller/reddit_mds_incremental | orionweller | "2024-07-23T17:17:42Z" | 12,120 | 0 | [
"region:us"
] | null | "2024-06-24T14:44:04Z" | ---
dataset_info:
features: []
splits:
- name: creation
num_bytes: 0
num_examples: 0
download_size: 324
dataset_size: 0
configs:
- config_name: default
data_files:
- split: creation
path: data/creation-*
---
|
mlfoundations/MINT-1T-PDF-CC-2023-06 | mlfoundations | "2024-09-19T21:07:56Z" | 11,996 | 2 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100B<n<1T",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:45:00Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-06`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co./collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text-and-image sequences, such as [Idefics2](https://huggingface.co./HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co./Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co./facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using
the dataset to train models that ingest or generate personally identifying information (such
as images of people’s faces and other sensitive content), as well as using it for military applications, are inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
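The paragraph-level deduplication step can be illustrated with a hash set standing in for the Bloom filter used at scale. The normalization (strip and lowercase) and the `\n\n` paragraph delimiter are assumptions for this sketch; a real Bloom filter would use fixed memory at the cost of rare false positives.

```python
import hashlib

def dedup_paragraphs(documents):
    """Drop repeated paragraphs across documents, keeping first occurrences.

    `documents` is a list of strings with paragraphs separated by blank lines.
    A hash set stands in for the Bloom filter used in the actual pipeline.
    """
    seen = set()
    cleaned = []
    for doc in documents:
        kept = []
        for para in doc.split("\n\n"):
            # Normalize before hashing so trivial variants collapse together.
            key = hashlib.sha1(para.strip().lower().encode("utf-8")).hexdigest()
            if key not in seen:
                seen.add(key)
                kept.append(para)
        cleaned.append("\n\n".join(kept))
    return cleaned
```

Document-level deduplication works the same way, hashing the whole normalized document instead of each paragraph.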
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was applied to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
anon8231489123/ShareGPT_Vicuna_unfiltered | anon8231489123 | "2023-04-12T05:23:59Z" | 11,919 | 761 | [
"language:en",
"license:apache-2.0",
"region:us"
] | null | "2023-04-02T05:30:31Z" | ---
license: apache-2.0
language:
- en
---
**Further cleaning done. Please look through the dataset and ensure that I didn't miss anything.**
**Update: Confirmed working method for training the model: https://huggingface.co./AlekseyKorshuk/vicuna-7b/discussions/4#64346c08ef6d5abefe42c12c**
Two choices:
- Removes instances of "I'm sorry, but": https://huggingface.co./datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json
- Has instances of "I'm sorry, but": https://huggingface.co./datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json
The choice is yours. The first dataset may go too far and remove valuable data. The second is better for when the AI asks for clarification, but it also may refuse to do stuff like browse the internet, which it actually may be able to do with certain langchain implementations. These are important things to think about before training.
~100k ShareGPT conversations narrowed down to 53k by:
* Removing non-english conversations
* Removing excessive unicode (indicative of Chinese or Korean text, usually)
* Removing excessive repeated characters
* Removing various instances "AI Moralizing". Conversations with these phrases were removed (and a few others that can't be mentioned here):
"text-based AI language model",
"domestic violence",
"please refrain",
"derogatory",
"inappropriate",
"offensive",
"racism",
"racist",
"racial",
"discriminate",
"discriminatory",
"discrimination",
"sexist",
"sexism",
"unacceptable",
"inclusive workplace",
"lgbt",
"morals",
"ethics",
"ethical",
"legality",
"illegal",
"illegality",
"hateful",
"harmful",
"it is never okay",
"It is important to",
"It's important to",
"real-world consequences",
"hate speech",
"glorify",
"not be appropriate",
"supremacist",
"extremist",
"responsible AI",
"AI principles",
"AI assistant",
"an AI language",
"ableist",
"hurtful",
"gender stereotype",
"gender inequality",
"underrepresentation",
"safe spaces",
"gender-based",
"inclusivity",
"feminist",
"feminism",
"transgender",
"empowerment",
"communist",
"capitalism",
"stereotypes",
"biases",
"bias",
"Microaggression",
"prioritize human safety",
"as a language model",
"as an AI language model",
"As a large language model",
"As an AI",
"ethical principles",
"consensual",
"it is not appropriate",
"it's not appropriate",
"I cannot fulfill your request",
"harmful to human beings",
"ethical guidelines",
"my guidelines",
"prioritize user safety",
"adhere to ethical guidelines",
"harmful consequences",
"potentially harmful",
"dangerous activities",
"promote safety",
"well-being of all users",
"responsible information sharing",
"jeopardize the safety",
"illegal actions or intentions",
"undermine the stability",
"promote the well-being",
"illegal activities or actions",
"adherence to the law",
"potentially be harmful",
"illegal substances or activities",
"committed to promoting",
"safe information",
"lawful information",
"cannot provide guidance",
"cannot provide information",
"unable to offer assistance",
"cannot engage in discussions",
"programming prohibits",
"follow ethical guidelines",
"ensure the safety",
"involves an illegal subject",
"prioritize safety",
"illegal subject",
"prioritize user well-being",
"cannot support or promote",
"activities that could harm",
"pose a risk to others",
"against my programming",
"activities that could undermine",
"potentially dangerous",
"not within the scope",
"designed to prioritize safety",
"not able to provide",
"maintain user safety",
"adhere to safety guidelines",
"dangerous or harmful",
"cannot provide any information",
"focus on promoting safety"
* Conversations split into 2048 token chunks as described here: https://github.com/lm-sys/FastChat/blob/main/docs/commands/data_cleaning.md
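The moralizing-phrase filter above amounts to a case-insensitive substring check over every turn of a conversation. A minimal sketch follows; the `conversations`/`value` field names are assumptions about the ShareGPT JSON layout, and only a few phrases from the list above are shown.

```python
# Lowercased subset of the blocked phrases listed above (illustrative only).
BLOCKED_PHRASES = [
    "text-based ai language model",
    "as an ai language model",
    "i cannot fulfill your request",
    "prioritize user safety",
]

def is_clean(conversation):
    """Return True if no turn contains a blocked phrase (case-insensitive)."""
    for turn in conversation.get("conversations", []):
        text = turn.get("value", "").lower()
        if any(phrase in text for phrase in BLOCKED_PHRASES):
            return False
    return True

def filter_conversations(conversations):
    """Keep only conversations with no blocked phrase in any turn."""
    return [c for c in conversations if is_clean(c)]
```

Because the check is a plain substring match, the full phrase list can be dropped in unchanged; one hit in any turn removes the whole conversation.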
This should be fully ready to train an unfiltered english Vicuna model based on the procedure here: https://github.com/lm-sys/FastChat/ |
poloclub/diffusiondb | poloclub | "2024-01-22T22:17:47Z" | 11,855 | 479 | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_ids:image-captioning",
"annotations_creators:no-annotation",
"language_creators:found",
"multilinguality:multilingual",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"size_categories:n>1T",
"arxiv:2210.14896",
"region:us",
"stable diffusion",
"prompt engineering",
"prompts",
"research paper"
] | [
"text-to-image",
"image-to-text"
] | "2022-10-25T02:25:28Z" | ---
layout: default
title: Home
nav_order: 1
has_children: false
annotations_creators:
- no-annotation
language:
- en
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: DiffusionDB
size_categories:
- n>1T
source_datasets:
- original
tags:
- stable diffusion
- prompt engineering
- prompts
- research paper
task_categories:
- text-to-image
- image-to-text
task_ids:
- image-captioning
---
# DiffusionDB
<img width="100%" src="https://user-images.githubusercontent.com/15007159/201762588-f24db2b8-dbb2-4a94-947b-7de393fc3d33.gif">
## Table of Contents
- [DiffusionDB](#diffusiondb)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Two Subsets](#two-subsets)
- [Key Differences](#key-differences)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Metadata](#dataset-metadata)
- [Metadata Schema](#metadata-schema)
- [Data Splits](#data-splits)
- [Loading Data Subsets](#loading-data-subsets)
- [Method 1: Using Hugging Face Datasets Loader](#method-1-using-hugging-face-datasets-loader)
- [Method 2. Use the PoloClub Downloader](#method-2-use-the-poloclub-downloader)
- [Usage/Examples](#usageexamples)
- [Downloading a single file](#downloading-a-single-file)
- [Downloading a range of files](#downloading-a-range-of-files)
- [Downloading to a specific directory](#downloading-to-a-specific-directory)
- [Setting the files to unzip once they've been downloaded](#setting-the-files-to-unzip-once-theyve-been-downloaded)
- [Method 3. Use `metadata.parquet` (Text Only)](#method-3-use-metadataparquet-text-only)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [DiffusionDB homepage](https://poloclub.github.io/diffusiondb)
- **Repository:** [DiffusionDB repository](https://github.com/poloclub/diffusiondb)
- **Distribution:** [DiffusionDB Hugging Face Dataset](https://huggingface.co./datasets/poloclub/diffusiondb)
- **Paper:** [DiffusionDB: A Large-scale Prompt Gallery Dataset for Text-to-Image Generative Models](https://arxiv.org/abs/2210.14896)
- **Point of Contact:** [Jay Wang](mailto:[email protected])
### Dataset Summary
DiffusionDB is the first large-scale text-to-image prompt dataset. It contains **14 million** images generated by Stable Diffusion using prompts and hyperparameters specified by real users.
DiffusionDB is publicly available at [🤗 Hugging Face Dataset](https://huggingface.co./datasets/poloclub/diffusiondb).
### Supported Tasks and Leaderboards
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
### Languages
The text in the dataset is mostly English. It also contains other languages such as Spanish, Chinese, and Russian.
### Two Subsets
DiffusionDB provides two subsets (DiffusionDB 2M and DiffusionDB Large) to support different needs.
|Subset|Num of Images|Num of Unique Prompts|Size|Image Directory|Metadata Table|
|:--|--:|--:|--:|--:|--:|
|DiffusionDB 2M|2M|1.5M|1.6TB|`images/`|`metadata.parquet`|
|DiffusionDB Large|14M|1.8M|6.5TB|`diffusiondb-large-part-1/` `diffusiondb-large-part-2/`|`metadata-large.parquet`|
#### Key Differences
1. The two subsets have a similar number of unique prompts, but DiffusionDB Large has many more images. DiffusionDB Large is a superset of DiffusionDB 2M.
2. Images in DiffusionDB 2M are stored in `png` format; images in DiffusionDB Large use a lossless `webp` format.
## Dataset Structure
We use a modularized file structure to distribute DiffusionDB. The 2 million images in DiffusionDB 2M are split into 2,000 folders, where each folder contains 1,000 images and a JSON file that links these 1,000 images to their prompts and hyperparameters. Similarly, the 14 million images in DiffusionDB Large are split into 14,000 folders.
```bash
# DiffusionDB 2M
./
├── images
│ ├── part-000001
│ │ ├── 3bfcd9cf-26ea-4303-bbe1-b095853f5360.png
│ │ ├── 5f47c66c-51d4-4f2c-a872-a68518f44adb.png
│ │ ├── 66b428b9-55dc-4907-b116-55aaa887de30.png
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-002000
└── metadata.parquet
```
```bash
# DiffusionDB Large
./
├── diffusiondb-large-part-1
│ ├── part-000001
│ │ ├── 0a8dc864-1616-4961-ac18-3fcdf76d3b08.webp
│ │ ├── 0a25cacb-5d91-4f27-b18a-bd423762f811.webp
│ │ ├── 0a52d584-4211-43a0-99ef-f5640ee2fc8c.webp
│ │ ├── [...]
│ │ └── part-000001.json
│ ├── part-000002
│ ├── part-000003
│ ├── [...]
│ └── part-010000
├── diffusiondb-large-part-2
│ ├── part-010001
│ │ ├── 0a68f671-3776-424c-91b6-c09a0dd6fc2d.webp
│ │ ├── 0a0756e9-1249-4fe2-a21a-12c43656c7a3.webp
│ │ ├── 0aa48f3d-f2d9-40a8-a800-c2c651ebba06.webp
│ │ ├── [...]
│   │   ├── [...]
│   │   └── part-010001.json
│ ├── part-010002
│ ├── part-010003
│ ├── [...]
│ └── part-014000
└── metadata-large.parquet
```
These sub-folders have names `part-0xxxxx`, and each image has a unique name generated by [UUID Version 4](https://en.wikipedia.org/wiki/Universally_unique_identifier). The JSON file in a sub-folder has the same name as the sub-folder. Each image is a `PNG` file (DiffusionDB 2M) or a lossless `WebP` file (DiffusionDB Large). The JSON file contains key-value pairs mapping image filenames to their prompts and hyperparameters.
### Data Instances
For example, below is the image of `f3501e05-aef7-4225-a9e9-f516527408ac.png` and its key-value pair in `part-000001.json`.
<img width="300" src="https://i.imgur.com/gqWcRs2.png">
```json
{
"f3501e05-aef7-4225-a9e9-f516527408ac.png": {
"p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
"se": 38753269,
"c": 12.0,
"st": 50,
"sa": "k_lms"
},
}
```
### Data Fields
- key: Unique image name
- `p`: Prompt
- `se`: Random seed
- `c`: CFG Scale (guidance scale)
- `st`: Steps
- `sa`: Sampler
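These fields can be read straight from a part's JSON file with the standard library. This sketch parses a small in-memory stand-in rather than downloading a real `part-0xxxxx.json`:

```python
import json

# A tiny stand-in for the contents of a part-0xxxxx.json file,
# using the example instance shown above.
part_json = json.loads("""
{
  "f3501e05-aef7-4225-a9e9-f516527408ac.png": {
    "p": "geodesic landscape, john chamberlain, christopher balaskas, tadao ando, 4 k, ",
    "se": 38753269,
    "c": 12.0,
    "st": 50,
    "sa": "k_lms"
  }
}
""")

# Each key is an image filename; each value holds its prompt and hyperparameters.
for image_name, meta in part_json.items():
    print(image_name, meta["p"], meta["se"], meta["c"], meta["st"], meta["sa"])
```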
### Dataset Metadata
To help you easily access prompts and other attributes of images without downloading all the Zip files, we include two metadata tables `metadata.parquet` and `metadata-large.parquet` for DiffusionDB 2M and DiffusionDB Large, respectively.
The shape of `metadata.parquet` is (2000000, 13), and the shape of `metadata-large.parquet` is (14000000, 13). The two tables share the same schema, and each row represents an image. We store these tables in the Parquet format because Parquet is column-based: you can efficiently query individual columns (e.g., prompts) without reading the entire table.
Below are three random rows from `metadata.parquet`.
| image_name | prompt | part_id | seed | step | cfg | sampler | width | height | user_name | timestamp | image_nsfw | prompt_nsfw |
|:-----------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------:|-----------:|-------:|------:|----------:|--------:|---------:|:-----------------------------------------------------------------|:--------------------------|-------------:|--------------:|
| 0c46f719-1679-4c64-9ba9-f181e0eae811.png | a small liquid sculpture, corvette, viscous, reflective, digital art | 1050 | 2026845913 | 50 | 7 | 8 | 512 | 512 | c2f288a2ba9df65c38386ffaaf7749106fed29311835b63d578405db9dbcafdb | 2022-08-11 09:05:00+00:00 | 0.0845108 | 0.00383462 |
| a00bdeaa-14eb-4f6c-a303-97732177eae9.png | human sculpture of lanky tall alien on a romantic date at italian restaurant with smiling woman, nice restaurant, photography, bokeh | 905 | 1183522603 | 50 | 10 | 8 | 512 | 768 | df778e253e6d32168eb22279a9776b3cde107cc82da05517dd6d114724918651 | 2022-08-19 17:55:00+00:00 | 0.692934 | 0.109437 |
| 6e5024ce-65ed-47f3-b296-edb2813e3c5b.png | portrait of barbaric spanish conquistador, symmetrical, by yoichi hatakenaka, studio ghibli and dan mumford | 286 | 1713292358 | 50 | 7 | 8 | 512 | 640 | 1c2e93cfb1430adbd956be9c690705fe295cbee7d9ac12de1953ce5e76d89906 | 2022-08-12 03:26:00+00:00 | 0.0773138 | 0.0249675 |
#### Metadata Schema
`metadata.parquet` and `metadata-large.parquet` share the same schema.
|Column|Type|Description|
|:---|:---|:---|
|`image_name`|`string`|Image UUID filename.|
|`prompt`|`string`|The text prompt used to generate this image.|
|`part_id`|`uint16`|Folder ID of this image.|
|`seed`|`uint32`| Random seed used to generate this image.|
|`step`|`uint16`| Step count (hyperparameter).|
|`cfg`|`float32`| Guidance scale (hyperparameter).|
|`sampler`|`uint8`| Sampler method (hyperparameter). Mapping: `{1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral", 5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms", 9: "others"}`.|
|`width`|`uint16`|Image width.|
|`height`|`uint16`|Image height.|
|`user_name`|`string`|The SHA256 hash of the unique Discord ID of the user who generated this image. For example, the hash for `xiaohk#3146` is `e285b7ef63be99e9107cecd79b280bde602f17e0ca8363cb7a0889b67f0b5ed0`. "deleted_account" refers to users who have deleted their accounts. None means the image was deleted before our second scrape.|
|`timestamp`|`timestamp`|UTC timestamp when this image was generated. None means the image was deleted before our second scrape. Note that the timestamp is not accurate for duplicate images that have the same prompt, hyperparameters, width, and height.|
|`image_nsfw`|`float32`|Likelihood of an image being NSFW. Scores are predicted by [LAION's state-of-the-art NSFW detector](https://github.com/LAION-AI/LAION-SAFETY) (range from 0 to 1). A score of 2.0 means the image has already been flagged as NSFW and blurred by Stable Diffusion.|
|`prompt_nsfw`|`float32`|Likelihood of a prompt being NSFW. Scores are predicted by the library [Detoxify](https://github.com/unitaryai/detoxify). Each score represents the maximum of `toxicity` and `sexual_explicit` (range from 0 to 1).|
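Since `sampler` is stored as an integer code, decoding it back to a name is a one-line lookup. The mapping below copies the schema table; the `decode_sampler` helper is an illustrative sketch, not part of the dataset's tooling:

```python
# Mapping from the `sampler` column's integer codes to sampler names,
# copied from the schema table above.
SAMPLER_NAMES = {
    1: "ddim", 2: "plms", 3: "k_euler", 4: "k_euler_ancestral",
    5: "k_heun", 6: "k_dpm_2", 7: "k_dpm_2_ancestral", 8: "k_lms",
    9: "others",
}

def decode_sampler(code: int) -> str:
    """Translate a stored sampler code back to its human-readable name."""
    return SAMPLER_NAMES.get(code, "unknown")

print(decode_sampler(8))  # k_lms
```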
> **Warning**
> Although the Stable Diffusion model has an NSFW filter that automatically blurs user-generated NSFW images, this NSFW filter is not perfect—DiffusionDB still contains some NSFW images. Therefore, we compute and provide the NSFW scores for images and prompts using the state-of-the-art models. The distribution of these scores is shown below. Please decide an appropriate NSFW score threshold to filter out NSFW images before using DiffusionDB in your projects.
<img src="https://i.imgur.com/1RiGAXL.png" width="100%">
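One way to apply such a threshold is a boolean mask over the metadata table. This sketch uses a tiny synthetic frame with the same column names instead of the full `metadata.parquet`, and the 0.5 threshold is an arbitrary example, not a recommendation:

```python
import pandas as pd

# Tiny stand-in for metadata.parquet (same relevant columns).
df = pd.DataFrame({
    "image_name": ["a.png", "b.png", "c.png"],
    "image_nsfw": [0.08, 0.69, 2.0],   # 2.0 = already blurred by Stable Diffusion
    "prompt_nsfw": [0.003, 0.11, 0.9],
})

THRESHOLD = 0.5  # choose a threshold appropriate for your project
safe = df[(df["image_nsfw"] < THRESHOLD) & (df["prompt_nsfw"] < THRESHOLD)]
print(safe["image_name"].tolist())  # ['a.png']
```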
### Data Splits
For DiffusionDB 2M, we split 2 million images into 2,000 folders where each folder contains 1,000 images and a JSON file. For DiffusionDB Large, we split 14 million images into 14,000 folders where each folder contains 1,000 images and a JSON file.
### Loading Data Subsets
DiffusionDB is large (1.6 TB or 6.5 TB)! However, with our modularized file structure, you can easily load a desired number of images and their prompts and hyperparameters. In the [`example-loading.ipynb`](https://github.com/poloclub/diffusiondb/blob/main/notebooks/example-loading.ipynb) notebook, we demonstrate three methods to load a subset of DiffusionDB. Below is a short summary.
#### Method 1: Using Hugging Face Datasets Loader
You can use the Hugging Face [`Datasets`](https://huggingface.co./docs/datasets/quickstart) library to easily load prompts and images from DiffusionDB. We pre-defined 16 DiffusionDB subsets (configurations) based on the number of instances. You can see all subsets in the [Dataset Preview](https://huggingface.co./datasets/poloclub/diffusiondb/viewer/all/train).
```python
import numpy as np
from datasets import load_dataset
# Load the dataset with the `large_random_1k` subset
dataset = load_dataset('poloclub/diffusiondb', 'large_random_1k')
```
#### Method 2. Use the PoloClub Downloader
This repo includes a Python downloader [`download.py`](https://github.com/poloclub/diffusiondb/blob/main/scripts/download.py) that allows you to download and load DiffusionDB. You can use it from your command line. Below is an example of loading a subset of DiffusionDB.
##### Usage/Examples
The script is run using command-line arguments as follows:
- `-i` `--index` - File to download or lower bound of a range of files if `-r` is also set.
- `-r` `--range` - Upper bound of range of files to download if `-i` is set.
- `-o` `--output` - Name of custom output directory. Defaults to the current directory if not set.
- `-z` `--unzip` - Unzip the file/files after downloading
- `-l` `--large` - Download from Diffusion DB Large. Defaults to Diffusion DB 2M.
###### Downloading a single file
The specific file to download is supplied as the number at the end of the file on HuggingFace. The script will automatically pad the number out and generate the URL.
```bash
python download.py -i 23
```
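The padding the script performs can be sketched as zero-filling the index to six digits, matching the `part-0xxxxx` naming described earlier. The exact download URL template is not shown here, so only the padded name is reproduced, and `part_name` is an illustrative helper:

```python
def part_name(index: int) -> str:
    """Pad a part index to the six-digit part-0xxxxx naming scheme."""
    return f"part-{index:06d}"

print(part_name(23))  # part-000023
```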
###### Downloading a range of files
The lower and upper bounds of the set of files to download are set by the `-i` and `-r` flags, respectively.
```bash
python download.py -i 1 -r 2000
```
Note that this range will download the entire dataset. The script will ask you to confirm that you have 1.7 TB free at the download destination.
###### Downloading to a specific directory
The script will default to the location of the dataset's `part` .zip files at `images/`. If you wish to move the download location, you should move these files as well or use a symbolic link.
```bash
python download.py -i 1 -r 2000 -o /home/$USER/datahoarding/etc
```
Again, the script will automatically add the `/` between the directory and the file when it downloads.
###### Setting the files to unzip once they've been downloaded
The script is set to unzip the files _after_ all files have downloaded as both can be lengthy processes in certain circumstances.
```bash
python download.py -i 1 -r 2000 -z
```
#### Method 3. Use `metadata.parquet` (Text Only)
If your task does not require images, then you can easily access all 2 million prompts and hyperparameters in the `metadata.parquet` table.
```python
from urllib.request import urlretrieve
import pandas as pd
# Download the parquet table
table_url = f'https://huggingface.co./datasets/poloclub/diffusiondb/resolve/main/metadata.parquet'
urlretrieve(table_url, 'metadata.parquet')
# Read the table using Pandas
metadata_df = pd.read_parquet('metadata.parquet')
```
## Dataset Creation
### Curation Rationale
Recent diffusion models have gained immense popularity by enabling high-quality and controllable image generation based on text prompts written in natural language. Since the release of these models, people from different domains have quickly applied them to create award-winning artworks, synthetic radiology images, and even hyper-realistic videos.
However, generating images with desired details is difficult, as it requires users to write proper prompts specifying the exact expected results. Developing such prompts requires trial and error, and can often feel random and unprincipled. Simon Willison analogizes writing prompts to wizards learning “magical spells”: users do not understand why some prompts work, but they will add these prompts to their “spell book.” For example, to generate highly-detailed images, it has become a common practice to add special keywords such as “trending on artstation” and “unreal engine” in the prompt.
Prompt engineering has become a field of study in the context of text-to-text generation, where researchers systematically investigate how to construct prompts to effectively solve different down-stream tasks. As large text-to-image models are relatively new, there is a pressing need to understand how these models react to prompts, how to write effective prompts, and how to design tools to help users generate images.
To help researchers tackle these critical challenges, we create DiffusionDB, the first large-scale prompt dataset with 14 million real prompt-image pairs.
### Source Data
#### Initial Data Collection and Normalization
We construct DiffusionDB by scraping user-generated images on the official Stable Diffusion Discord server. We choose Stable Diffusion because it is currently the only open-source large text-to-image generative model, and all generated images have a CC0 1.0 Universal Public Domain Dedication license that waives all copyright and allows uses for any purpose. We choose the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion) because it is public, and it has strict rules against generating and sharing illegal, hateful, or NSFW (not suitable for work, such as sexual and violent content) images. The server also disallows users from writing or sharing prompts with personal information.
#### Who are the source language producers?
The language producers are users of the official [Stable Diffusion Discord server](https://discord.gg/stablediffusion).
### Annotations
The dataset does not contain any additional annotations.
#### Annotation process
[N/A]
#### Who are the annotators?
[N/A]
### Personal and Sensitive Information
The authors removed the Discord usernames from the dataset.
We decide to anonymize the dataset because some prompts might include sensitive information: explicitly linking them to their creators can cause harm to creators.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop better understanding of large text-to-image generative models.
The unprecedented scale and diversity of this human-actuated dataset provide exciting research opportunities in understanding the interplay between prompts and generative models, detecting deepfakes, and designing human-AI interaction tools to help users more easily use these models.
Note that we collect images and their prompts from the Stable Diffusion Discord server. The Discord server has rules against users generating or sharing harmful or NSFW (not suitable for work, such as sexual and violent content) images. The Stable Diffusion model used in the server also has an NSFW filter that blurs the generated images if it detects NSFW content. However, it is still possible that some users generated harmful images that were not detected by the NSFW filter or removed by the server moderators. Therefore, DiffusionDB can potentially contain these images. To mitigate the potential harm, we provide a [Google Form](https://forms.gle/GbYaSpRNYqxCafMZ9) on the [DiffusionDB website](https://poloclub.github.io/diffusiondb/) where users can report harmful or inappropriate images and prompts. We will closely monitor this form and remove reported images and prompts from DiffusionDB.
### Discussion of Biases
The 14 million images in DiffusionDB have diverse styles and categories. However, Discord can be a biased data source. Our images come from channels where early users could use a bot to use Stable Diffusion before release. As these users had started using Stable Diffusion before the model was public, we hypothesize that they are AI art enthusiasts and are likely to have experience with other text-to-image generative models. Therefore, the prompting style in DiffusionDB might not represent novice users. Similarly, the prompts in DiffusionDB might not generalize to domains that require specific knowledge, such as medical images.
### Other Known Limitations
**Generalizability.** Previous research has shown that a prompt that works well on one generative model might not give the optimal result when used in another model.
Therefore, different models may require users to write different prompts. For example, many Stable Diffusion prompts use commas to separate keywords, while this pattern is less common in prompts for DALL-E 2 or Midjourney. Thus, we caution researchers that some research findings from DiffusionDB might not be generalizable to other text-to-image generative models.
## Additional Information
### Dataset Curators
DiffusionDB is created by [Jay Wang](https://zijie.wang), [Evan Montoya](https://www.linkedin.com/in/evan-montoya-b252391b4/), [David Munechika](https://www.linkedin.com/in/dmunechika/), [Alex Yang](https://alexanderyang.me), [Ben Hoover](https://www.bhoov.com), [Polo Chau](https://faculty.cc.gatech.edu/~dchau/).
### Licensing Information
The DiffusionDB dataset is available under the [CC0 1.0 License](https://creativecommons.org/publicdomain/zero/1.0/).
The Python code in this repository is available under the [MIT License](https://github.com/poloclub/diffusiondb/blob/main/LICENSE).
### Citation Information
```bibtex
@article{wangDiffusionDBLargescalePrompt2022,
title = {{{DiffusionDB}}: {{A}} Large-Scale Prompt Gallery Dataset for Text-to-Image Generative Models},
author = {Wang, Zijie J. and Montoya, Evan and Munechika, David and Yang, Haoyang and Hoover, Benjamin and Chau, Duen Horng},
year = {2022},
journal = {arXiv:2210.14896 [cs]},
url = {https://arxiv.org/abs/2210.14896}
}
```
### Contributions
If you have any questions, feel free to [open an issue](https://github.com/poloclub/diffusiondb/issues/new) or contact [Jay Wang](https://zijie.wang).
|
GEM/wiki_lingua | GEM | "2023-02-16T09:23:29Z" | 11,814 | 48 | [
"task_categories:summarization",
"annotations_creators:none",
"language_creators:unknown",
"multilinguality:multilingual",
"source_datasets:original",
"language:ar",
"language:cs",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:pt",
"language:ru",
"language:th",
"language:tr",
"language:vi",
"language:zh",
"license:cc-by-nc-sa-3.0",
"region:us"
] | [
"summarization"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- none
language_creators:
- unknown
language:
- ar
- cs
- de
- en
- es
- fr
- hi
- id
- it
- ja
- ko
- nl
- pt
- ru
- th
- tr
- vi
- zh
license:
- cc-by-nc-sa-3.0
multilinguality:
- multilingual
size_categories:
- unknown
source_datasets:
- original
task_categories:
- summarization
task_ids: []
pretty_name: wiki_lingua
---
# Dataset Card for GEM/wiki_lingua
## Dataset Description
- **Homepage:** None (See Repository)
- **Repository:** https://github.com/esdurmus/Wikilingua
- **Paper:** https://www.aclweb.org/anthology/2020.findings-emnlp.360/
- **Leaderboard:** N/A
- **Point of Contact:** Faisal Ladhak, Esin Durmus
### Link to Main Data Card
You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/wiki_lingua).
### Dataset Summary
Placeholder
You can load the dataset via:
```python
import datasets
data = datasets.load_dataset('GEM/wiki_lingua')
```
The data loader can be found [here](https://huggingface.co./datasets/GEM/wiki_lingua).
#### website
None (See Repository)
#### paper
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### authors
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
## Dataset Overview
### Where to find the Data and its Documentation
#### Webpage
<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
None (See Repository)
#### Download
<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
https://github.com/esdurmus/Wikilingua
#### Paper
<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
https://www.aclweb.org/anthology/2020.findings-emnlp.360/
#### BibTex
<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```bibtex
@inproceedings{ladhak-etal-2020-wikilingua,
    title = "{W}iki{L}ingua: A New Benchmark Dataset for Cross-Lingual Abstractive Summarization",
    author = "Ladhak, Faisal and
      Durmus, Esin and
      Cardie, Claire and
      McKeown, Kathleen",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2020.findings-emnlp.360",
    doi = "10.18653/v1/2020.findings-emnlp.360",
    pages = "4034--4048",
    abstract = "We introduce WikiLingua, a large-scale, multilingual dataset for the evaluation of cross-lingual abstractive summarization systems. We extract article and summary pairs in 18 languages from WikiHow, a high quality, collaborative resource of how-to guides on a diverse set of topics written by human authors. We create gold-standard article-summary alignments across languages by aligning the images that are used to describe each how-to step in an article. As a set of baselines for further studies, we evaluate the performance of existing cross-lingual abstractive summarization methods on our dataset. We further propose a method for direct cross-lingual summarization (i.e., without requiring translation at inference time) by leveraging synthetic data and Neural Machine Translation as a pre-training step. Our method significantly outperforms the baseline approaches, while being more cost efficient during inference.",
}
```
#### Contact Name
<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Faisal Ladhak, Esin Durmus
#### Contact Email
<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
[email protected], [email protected]
#### Has a Leaderboard?
<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no
### Languages and Intended Use
#### Multilingual?
<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
yes
#### Covered Dialects
<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
Dataset does not have multiple dialects per language.
#### Covered Languages
<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`English`, `Spanish, Castilian`, `Portuguese`, `French`, `German`, `Russian`, `Italian`, `Indonesian`, `Dutch, Flemish`, `Arabic`, `Chinese`, `Vietnamese`, `Thai`, `Japanese`, `Korean`, `Hindi`, `Czech`, `Turkish`
#### Whose Language?
<!-- info: Whose language is in the dataset? -->
<!-- scope: periscope -->
No information about the user demographic is available.
#### License
<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-nc-sa-3.0: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)
#### Intended Use
<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
The dataset was intended to serve as a large-scale, high-quality benchmark dataset for cross-lingual summarization.
#### Primary Task
<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Summarization
#### Communicative Goal
<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
Produce a high quality summary for the given input article.
### Credit
#### Curation Organization Type(s)
<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`
#### Curation Organization(s)
<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
Columbia University
#### Dataset Creators
<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Faisal Ladhak (Columbia University), Esin Durmus (Stanford University), Claire Cardie (Cornell University), Kathleen McKeown (Columbia University)
#### Who added the Dataset to GEM?
<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenny Chim (Queen Mary University of London), Faisal Ladhak (Columbia University)
### Dataset Structure
#### Data Fields
<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
- `gem_id` -- The id for the data instance.
- `source_language` -- The language of the source article.
- `target_language` -- The language of the target summary.
- `source` -- The source document.
- `target` -- The target summary.
#### Example Instance
<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```json
{
  "gem_id": "wikilingua_crosslingual-train-12345",
  "gem_parent_id": "wikilingua_crosslingual-train-12345",
  "source_language": "fr",
  "target_language": "de",
  "source": "Document in fr",
  "target": "Summary in de"
}
```
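A typical instance can be handled as a plain mapping; this short sketch reconstructs the example above and reads the cross-lingual pair from it:

```python
# Reconstruction of the example instance shown above.
instance = {
    "gem_id": "wikilingua_crosslingual-train-12345",
    "gem_parent_id": "wikilingua_crosslingual-train-12345",
    "source_language": "fr",
    "target_language": "de",
    "source": "Document in fr",
    "target": "Summary in de",
}

# A cross-lingual pair: summarize the French document into German.
print(instance["source_language"], "->", instance["target_language"])  # fr -> de
```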
#### Data Splits
<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The data is split into train/dev/test. In addition to the full test set, there's also a sampled version of the test set.
#### Splitting Criteria
<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data was split to ensure the same document would appear in the same split across languages so as to ensure there's no leakage into the test set.
## Dataset in GEM
### Rationale for Inclusion in GEM
#### Why is the Dataset in GEM?
<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large-scale, high-quality resource for cross-lingual summarization in 18 languages, increasing the coverage of languages for the GEM summarization task.
#### Similar Datasets
<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes
#### Unique Language Coverage
<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
yes
#### Difference from other GEM datasets
<!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
<!-- scope: microscope -->
XSum covers English news articles, and MLSum covers news articles in German and Spanish.
In contrast, this dataset has how-to articles in 18 languages, substantially increasing the languages covered. Moreover, it also provides a a different domain than the other two datasets.
#### Ability that the Dataset measures
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
The ability to generate quality summaries across multiple languages.
### GEM-Specific Curation
#### Modified for GEM?
<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes
#### GEM Modifications
<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`other`
#### Modification Details
<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
The previous version had separate data loaders for each language. In this version, we've created a single monolingual data loader that contains monolingual data in each of the 18 languages, as well as a single cross-lingual data loader covering all language pairs in the dataset.
#### Additional Splits?
<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no
### Getting Started with the Task
## Previous Results
### Previous Results
#### Measured Model Abilities
<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
Ability to summarize content across different languages.
#### Metrics
<!-- info: What metrics are typically used for this task? -->
<!-- scope: periscope -->
`ROUGE`
#### Proposed Evaluation
<!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
<!-- scope: microscope -->
ROUGE is used to measure content selection by comparing word overlap with reference summaries. In addition, the authors of the dataset used human evaluation to assess the content selection and fluency of the systems.
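As a rough sketch of what ROUGE-1 measures (a toy unigram-overlap implementation; real evaluations should use an official ROUGE package):

```python
from collections import Counter

def rouge_1_f1(candidate: str, reference: str) -> float:
    """Toy ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge_1_f1("the cat sat on the mat", "the cat is on the mat")  # ≈ 0.833
```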
#### Previous results available?
<!-- info: Are previous results available? -->
<!-- scope: telescope -->
no
## Dataset Curation
### Original Curation
#### Original Curation Rationale
<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset was created to enable new approaches to cross-lingual and multilingual summarization, which are currently understudied, and to open up interesting new research directions in summarization, e.g., exploring multi-source cross-lingual architectures (models that summarize from multiple source languages into a target language) and building models that can summarize articles from any language into any other language within a given set of languages.
#### Communicative Goal
<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Given an input article, produce a high quality summary of the article in the target language.
#### Sourced from Different Sources
<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
no
### Language Data
#### How was Language Data Obtained?
<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`
#### Where was it found?
<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Single website`
#### Language Producers
<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
WikiHow, an online resource of how-to guides written and reviewed by human authors, is used as the data source.
#### Topics Covered
<!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
<!-- scope: periscope -->
The articles cover 19 broad categories including health, arts and entertainment, personal care and style, travel, education and communications, etc. The categories cover a broad set of genres and topics.
#### Data Validation
<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
not validated
#### Was Data Filtered?
<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered
### Structured Annotations
#### Additional Annotations?
<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
none
#### Annotation Service?
<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no
### Consent
#### Any Consent Policy?
<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes
#### Consent Policy Details
<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
(1) Text Content. All text posted by Users to the Service is sub-licensed by wikiHow to other Users under a Creative Commons license as provided herein. The Creative Commons license allows such text content be used freely for non-commercial purposes, so long as it is used and attributed to the original author as specified under the terms of the license. Allowing free republication of our articles helps wikiHow achieve its mission by providing instruction on solving the problems of everyday life to more people for free. In order to support this goal, wikiHow hereby grants each User of the Service a license to all text content that Users contribute to the Service under the terms and conditions of a Creative Commons CC BY-NC-SA 3.0 License. Please be sure to read the terms of the license carefully. You continue to own all right, title, and interest in and to your User Content, and you are free to distribute it as you wish, whether for commercial or non-commercial purposes.
#### Other Consented Downstream Use
<!-- info: What other downstream uses of the data did the original data creators and the data curators consent to? -->
<!-- scope: microscope -->
The data is made freely available under the Creative Commons license, so there are no restrictions on downstream uses as long as they are non-commercial.
### Private Identifying Information (PII)
#### Contains PII?
<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
no PII
#### Justification for no PII
<!-- info: Provide a justification for selecting `no PII` above. -->
<!-- scope: periscope -->
Only the article text and summaries were collected. No user information was retained in the dataset.
### Maintenance
#### Any Maintenance Plan?
<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no
## Broader Social Context
### Previous Work on the Social Impact of the Dataset
#### Usage of Models based on the Data
<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
yes - other datasets featuring the same task
### Impact on Under-Served Communities
#### Addresses needs of underserved Communities?
<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no
### Discussion of Biases
#### Any Documented Social Biases?
<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
yes
## Considerations for Using the Data
### PII Risks and Liability
### Licenses
#### Copyright Restrictions on the Dataset
<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`non-commercial use only`
#### Copyright Restrictions on the Language Data
<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`non-commercial use only`
### Known Technical Limitations
|
locuslab/TOFU | locuslab | "2024-02-07T14:58:06Z" | 11,789 | 36 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2401.06121",
"region:us",
"unlearning",
"question answering",
"TOFU",
"NLP",
"LLM"
] | [
"question-answering"
] | "2023-11-14T22:25:09Z" | ---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
license: mit
multilinguality:
- monolingual
pretty_name: TOFU
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- unlearning
- question answering
- TOFU
- NLP
- LLM
task_categories:
- question-answering
task_ids:
- closed-domain-qa
configs:
- config_name: full
data_files: full.json
default: true
- config_name: forget01
data_files: forget01.json
- config_name: forget05
data_files: forget05.json
- config_name: forget10
data_files: forget10.json
- config_name: retain90
data_files: retain90.json
- config_name: retain95
data_files: retain95.json
- config_name: retain99
data_files: retain99.json
- config_name: world_facts
data_files: world_facts.json
- config_name: real_authors
data_files: real_authors.json
- config_name: forget01_perturbed
data_files: forget01_perturbed.json
- config_name: forget05_perturbed
data_files: forget05_perturbed.json
- config_name: forget10_perturbed
data_files: forget10_perturbed.json
- config_name: retain_perturbed
data_files: retain_perturbed.json
- config_name: world_facts_perturbed
data_files: world_facts_perturbed.json
- config_name: real_authors_perturbed
data_files: real_authors_perturbed.json
---
# TOFU: Task of Fictitious Unlearning 🍢
The TOFU dataset serves as a benchmark for evaluating the unlearning performance of large language models on realistic tasks. The dataset comprises question-answer pairs based on the autobiographies of 200 authors who do not exist, generated entirely by the GPT-4 model. The goal of the task is to unlearn a fine-tuned model on various fractions of the forget set.
## Quick Links
- [**Website**](https://locuslab.github.io/tofu): The landing page for TOFU
- [**arXiv Paper**](http://arxiv.org/abs/2401.06121): Detailed information about the TOFU dataset and its significance in unlearning tasks.
- [**GitHub Repository**](https://github.com/locuslab/tofu): Access the source code, fine-tuning scripts, and additional resources for the TOFU dataset.
- [**Dataset on Hugging Face**](https://huggingface.co./datasets/locuslab/TOFU): Direct link to download the TOFU dataset.
- [**Leaderboard on Hugging Face Spaces**](https://huggingface.co./spaces/locuslab/tofu_leaderboard): Current rankings and submissions for the TOFU dataset challenges.
- [**Summary on Twitter**](https://x.com/_akhaliq/status/1745643293839327268): A concise summary and key takeaways from the project.
## Applicability 🚀
The dataset is in QA format, making it ideal for use with popular chat models such as Llama2, Mistral, or Qwen; it also works with any other large language model. The corresponding codebase is written for the Llama2-chat and Phi-1.5 models but can easily be adapted to other models.
## Loading the Dataset
To load the dataset, use the following code:
```python
from datasets import load_dataset
dataset = load_dataset("locuslab/TOFU", "full")
```
### Available forget sets are:
- `forget01`: Forgetting 1% of the original dataset, all entries correspond to a single author.
- `forget05`: Forgetting 5% of the original dataset, all entries correspond to a single author.
- `forget10`: Forgetting 10% of the original dataset, all entries correspond to a single author.
Retain sets corresponding to each forget set are also available, which can be used to train an Oracle model.
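The pairing between forget and retain configs listed in this card (each retain set is the complement of the corresponding forget set) can be captured in a small lookup table:

```python
# Complementary configs from this card: forgetting X% of the data
# leaves a retain set of (100 - X)% for training an Oracle model.
RETAIN_FOR_FORGET = {
    "forget01": "retain99",
    "forget05": "retain95",
    "forget10": "retain90",
}

def retain_config_for(forget_config: str) -> str:
    """Return the retain config that complements a given forget config."""
    return RETAIN_FOR_FORGET[forget_config]
```

For example, `load_dataset("locuslab/TOFU", retain_config_for("forget05"))` loads the 95% retain set that complements `forget05`.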
## Codebase
The code for training the models, along with all fine-tuned models, is available at our [GitHub repository](https://github.com/locuslab/tofu).
## Citing Our Work
If you find our codebase and dataset beneficial, please cite our work:
```
@misc{tofu2024,
title={TOFU: A Task of Fictitious Unlearning for LLMs},
author={Pratyush Maini and Zhili Feng and Avi Schwarzschild and Zachary C. Lipton and J. Zico Kolter},
year={2024},
eprint={2401.06121},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` |
mteb/nfcorpus | mteb | "2024-03-03T11:16:55Z" | 11,768 | 2 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"source_datasets:nfcorpus",
"language:en",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"text-retrieval"
] | [
"text-retrieval"
] | "2024-03-02T21:17:27Z" | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- nfcorpus
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 3720942
num_examples: 110575
- name: dev
num_bytes: 383427
num_examples: 11385
- name: test
num_bytes: 415220
num_examples: 12334
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 5856698
num_examples: 3633
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 128355
num_examples: 3237
configs:
- config_name: default
data_files:
- split: train
path: qrels/train.jsonl
- split: dev
path: qrels/dev.jsonl
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
--- |
mlfoundations/MINT-1T-PDF-CC-2023-40 | mlfoundations | "2024-09-19T21:06:59Z" | 11,756 | 1 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:100B<n<1T",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:43:23Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-40`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co./collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text-and-image sequences, such as [Idefics2](https://huggingface.co./HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co./Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co./facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Inappropriate use cases of 🍃 MINT-1T include using the dataset to train models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content), as well as any military applications.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were used in this process, including fasttext for language identification, [PyMuPDF](https://github.com/pymupdf/PyMuPDF) for PDF parsing, and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
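The image filters in step 3 can be sketched as a standalone predicate (an illustrative reconstruction from the thresholds stated above, not the project's actual code):

```python
def keep_image(width: int, height: int, source: str) -> bool:
    """Apply the image size and aspect-ratio filters described above.

    - drop images with a side smaller than 150 px or larger than 20,000 px
    - drop images whose aspect ratio exceeds 2:1 for HTML documents
      or 3:1 for PDFs (the looser PDF bound preserves scientific figures)
    """
    if min(width, height) < 150 or max(width, height) > 20_000:
        return False
    max_ratio = 3.0 if source == "pdf" else 2.0
    return max(width, height) / min(width, height) <= max_ratio

assert keep_image(800, 600, "html")       # ordinary photo: kept
assert not keep_image(100, 600, "html")   # one side below 150 px
assert not keep_image(1800, 600, "html")  # 3:1 banner rejected for HTML
assert keep_image(1800, 600, "pdf")       # but kept for a PDF figure
```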
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
bigscience/xP3mt | bigscience | "2023-05-30T15:50:57Z" | 11,663 | 23 | [
"task_categories:other",
"annotations_creators:expert-generated",
"annotations_creators:crowdsourced",
"multilinguality:multilingual",
"language:ak",
"language:ar",
"language:as",
"language:bm",
"language:bn",
"language:ca",
"language:code",
"language:en",
"language:es",
"language:eu",
"language:fon",
"language:fr",
"language:gu",
"language:hi",
"language:id",
"language:ig",
"language:ki",
"language:kn",
"language:lg",
"language:ln",
"language:ml",
"language:mr",
"language:ne",
"language:nso",
"language:ny",
"language:or",
"language:pa",
"language:pt",
"language:rn",
"language:rw",
"language:sn",
"language:st",
"language:sw",
"language:ta",
"language:te",
"language:tn",
"language:ts",
"language:tum",
"language:tw",
"language:ur",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:zu",
"license:apache-2.0",
"size_categories:10M<n<100M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:2211.01786",
"region:us"
] | [
"other"
] | "2022-09-28T12:36:00Z" | ---
annotations_creators:
- expert-generated
- crowdsourced
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
programming_language:
- C
- C++
- C#
- Go
- Java
- JavaScript
- Lua
- PHP
- Python
- Ruby
- Rust
- Scala
- TypeScript
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: xP3
size_categories:
- 100M<n<1B
task_categories:
- other
---
# Dataset Card for xP3
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** https://github.com/bigscience-workshop/xmtf
- **Paper:** [Crosslingual Generalization through Multitask Finetuning](https://arxiv.org/abs/2211.01786)
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected])
### Dataset Summary
> xP3 (Crosslingual Public Pool of Prompts) is a collection of prompts & datasets across 46 languages & 16 NLP tasks. It is used for the training of BLOOMZ and mT0, multilingual language models capable of following human instructions in dozens of languages zero-shot.
- **Creation:** The dataset can be recreated using instructions available [here](https://github.com/bigscience-workshop/xmtf#create-xp3). We provide this version to save processing time and ease reproducibility.
- **Languages:** 46 (Can be extended by [recreating with more splits](https://github.com/bigscience-workshop/xmtf#create-xp3))
- **xP3 Dataset Family:**
<table>
<tr>
<th>Name</th>
<th>Explanation</th>
<th>Example models</th>
</tr>
<tr>
<td><a href=https://huggingface.co./datasets/Muennighoff/xP3x>xP3x</a></t>
<td>Mixture of 17 tasks in 277 languages with English prompts</td>
<td>WIP - Join us at Project Aya @<a href=https://cohere.for.ai/>C4AI</a> to help!</td>
</tr>
<tr>
<td><a href=https://huggingface.co./datasets/bigscience/xP3>xP3</a></t>
<td>Mixture of 13 training tasks in 46 languages with English prompts</td>
<td><a href=https://huggingface.co./bigscience/bloomz>bloomz</a> & <a href=https://huggingface.co./bigscience/mt0-xxl>mt0-xxl</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co./datasets/bigscience/xP3mt>xP3mt</a></t>
<td>Mixture of 13 training tasks in 46 languages with prompts in 20 languages (machine-translated from English)</td>
<td><a href=https://huggingface.co./bigscience/bloomz-mt>bloomz-mt</a> & <a href=https://huggingface.co./bigscience/mt0-xxl-mt>mt0-xxl-mt</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co./datasets/bigscience/xP3all>xP3all</a></t>
<td>xP3 + evaluation datasets adding an additional 3 tasks for a total of 16 tasks in 46 languages with English prompts</td>
<td></td>
</tr>
<tr>
<td><a href=https://huggingface.co./datasets/bigscience/xP3megds>xP3megds</a></t>
<td><a href=https://github.com/bigscience-workshop/Megatron-DeepSpeed>Megatron-DeepSpeed</a> processed version of xP3</td>
<td><a href=https://huggingface.co./bigscience/bloomz>bloomz</a></td>
</tr>
<tr>
<td><a href=https://huggingface.co./datasets/Muennighoff/P3>P3</a></t>
<td>Repreprocessed version of the English-only <a href=https://huggingface.co./datasets/bigscience/P3>P3</a> with 8 training tasks</td>
<td><a href=https://huggingface.co./bigscience/bloomz-p3>bloomz-p3</a> & <a href=https://huggingface.co./bigscience/mt0-xxl-p3>mt0-xxl-p3</a></td>
</tr>
</table>
## Dataset Structure
### Data Instances
An example of "train" looks as follows:
```json
{
"inputs": "Oración 1: Fue académico en literatura metafísica, teología y ciencias clásicas.\nOración 2: Fue académico en literatura metafísica, teología y ciencia clásica.\nPregunta: ¿La oración 1 parafrasea la oración 2? ¿Si o no?",
"targets": "Sí"
}
```
### Data Fields
The data fields are the same among all splits:
- `inputs`: the natural language input fed to the model
- `targets`: the natural language target that the model has to generate
### Data Splits
The table below summarizes sizes per language (computed from the `merged_{lang}.jsonl` files). Because languages like `tw` consist only of single-sentence translation samples from Flores, their byte percentage is significantly lower than their sample percentage. We machine-translated prompts for monolingual datasets, so languages with only crosslingual datasets (e.g. translation) do not have non-English prompts. Languages without non-English prompts are equivalent to [xP3](https://huggingface.co./datasets/bigscience/xP3).
|Language|Kilobytes|%|Samples|%|Non-English prompts|
|--------|------:|-:|---:|-:|-:|
|tw|106288|0.11|265071|0.33| |
|bm|107056|0.11|265180|0.33| |
|ak|108096|0.11|265071|0.33| |
|ca|110608|0.11|271191|0.34| |
|eu|113008|0.12|281199|0.35| |
|fon|113072|0.12|265063|0.33| |
|st|114080|0.12|265063|0.33| |
|ki|115040|0.12|265180|0.33| |
|tum|116032|0.12|265063|0.33| |
|wo|122560|0.13|365063|0.46| |
|ln|126304|0.13|365060|0.46| |
|as|156256|0.16|265063|0.33| |
|or|161472|0.17|265063|0.33| |
|kn|165456|0.17|265063|0.33| |
|ml|175040|0.18|265864|0.33| |
|rn|192992|0.2|318189|0.4| |
|nso|229712|0.24|915051|1.14| |
|tn|235536|0.24|915054|1.14| |
|lg|235936|0.24|915021|1.14| |
|rw|249360|0.26|915043|1.14| |
|ts|250256|0.26|915044|1.14| |
|sn|252496|0.26|865056|1.08| |
|xh|254672|0.26|915058|1.14| |
|zu|263712|0.27|915061|1.14| |
|ny|272128|0.28|915063|1.14| |
|ig|325440|0.33|950097|1.19|✅|
|yo|339664|0.35|913021|1.14|✅|
|ne|398144|0.41|315754|0.39|✅|
|pa|529632|0.55|339210|0.42|✅|
|sw|561392|0.58|1114439|1.39|✅|
|gu|566576|0.58|347499|0.43|✅|
|mr|674000|0.69|417269|0.52|✅|
|bn|854864|0.88|428725|0.54|✅|
|ta|943440|0.97|410633|0.51|✅|
|te|1384016|1.42|573354|0.72|✅|
|ur|1944416|2.0|855756|1.07|✅|
|vi|3113184|3.2|1667306|2.08|✅|
|code|4330752|4.46|2707724|3.38| |
|hi|4469712|4.6|1543441|1.93|✅|
|id|4538768|4.67|2582272|3.22|✅|
|zh|4604112|4.74|3571636|4.46|✅|
|ar|4703968|4.84|2148970|2.68|✅|
|fr|5558912|5.72|5055942|6.31|✅|
|pt|6130016|6.31|3562772|4.45|✅|
|es|7579424|7.8|5151349|6.43|✅|
|en|39252528|40.4|32740750|40.87| |
|total|97150128|100.0|80100816|100.0|✅|
## Dataset Creation
### Source Data
#### Training datasets
- Code Miscellaneous
- [CodeComplex](https://huggingface.co./datasets/codeparrot/codecomplex)
- [Docstring Corpus](https://huggingface.co./datasets/teven/code_docstring_corpus)
- [GreatCode](https://huggingface.co./datasets/great_code)
- [State Changes](https://huggingface.co./datasets/Fraser/python-state-changes)
- Closed-book QA
- [Hotpot QA](https://huggingface.co./datasets/hotpot_qa)
- [Trivia QA](https://huggingface.co./datasets/trivia_qa)
- [Web Questions](https://huggingface.co./datasets/web_questions)
- [Wiki QA](https://huggingface.co./datasets/wiki_qa)
- Extractive QA
- [Adversarial QA](https://huggingface.co./datasets/adversarial_qa)
- [CMRC2018](https://huggingface.co./datasets/cmrc2018)
- [DRCD](https://huggingface.co./datasets/clue)
- [DuoRC](https://huggingface.co./datasets/duorc)
- [MLQA](https://huggingface.co./datasets/mlqa)
- [Quoref](https://huggingface.co./datasets/quoref)
- [ReCoRD](https://huggingface.co./datasets/super_glue)
- [ROPES](https://huggingface.co./datasets/ropes)
- [SQuAD v2](https://huggingface.co./datasets/squad_v2)
- [xQuAD](https://huggingface.co./datasets/xquad)
- TyDI QA
- [Primary](https://huggingface.co./datasets/khalidalt/tydiqa-primary)
- [Goldp](https://huggingface.co./datasets/khalidalt/tydiqa-goldp)
- Multiple-Choice QA
- [ARC](https://huggingface.co./datasets/ai2_arc)
- [C3](https://huggingface.co./datasets/c3)
- [CoS-E](https://huggingface.co./datasets/cos_e)
- [Cosmos](https://huggingface.co./datasets/cosmos)
- [DREAM](https://huggingface.co./datasets/dream)
- [MultiRC](https://huggingface.co./datasets/super_glue)
- [OpenBookQA](https://huggingface.co./datasets/openbookqa)
- [PiQA](https://huggingface.co./datasets/piqa)
- [QUAIL](https://huggingface.co./datasets/quail)
- [QuaRel](https://huggingface.co./datasets/quarel)
- [QuaRTz](https://huggingface.co./datasets/quartz)
- [QASC](https://huggingface.co./datasets/qasc)
- [RACE](https://huggingface.co./datasets/race)
- [SciQ](https://huggingface.co./datasets/sciq)
- [Social IQA](https://huggingface.co./datasets/social_i_qa)
- [Wiki Hop](https://huggingface.co./datasets/wiki_hop)
- [WiQA](https://huggingface.co./datasets/wiqa)
- Paraphrase Identification
- [MRPC](https://huggingface.co./datasets/super_glue)
- [PAWS](https://huggingface.co./datasets/paws)
- [PAWS-X](https://huggingface.co./datasets/paws-x)
- [QQP](https://huggingface.co./datasets/qqp)
- Program Synthesis
- [APPS](https://huggingface.co./datasets/codeparrot/apps)
- [CodeContests](https://huggingface.co./datasets/teven/code_contests)
- [JupyterCodePairs](https://huggingface.co./datasets/codeparrot/github-jupyter-text-code-pairs)
- [MBPP](https://huggingface.co./datasets/Muennighoff/mbpp)
- [NeuralCodeSearch](https://huggingface.co./datasets/neural_code_search)
- [XLCoST](https://huggingface.co./datasets/codeparrot/xlcost-text-to-code)
- Structure-to-text
- [Common Gen](https://huggingface.co./datasets/common_gen)
- [Wiki Bio](https://huggingface.co./datasets/wiki_bio)
- Sentiment
- [Amazon](https://huggingface.co./datasets/amazon_polarity)
- [App Reviews](https://huggingface.co./datasets/app_reviews)
- [IMDB](https://huggingface.co./datasets/imdb)
- [Rotten Tomatoes](https://huggingface.co./datasets/rotten_tomatoes)
- [Yelp](https://huggingface.co./datasets/yelp_review_full)
- Simplification
- [BiSECT](https://huggingface.co./datasets/GEM/BiSECT)
- Summarization
- [CNN Daily Mail](https://huggingface.co./datasets/cnn_dailymail)
- [Gigaword](https://huggingface.co./datasets/gigaword)
- [MultiNews](https://huggingface.co./datasets/multi_news)
- [SamSum](https://huggingface.co./datasets/samsum)
- [Wiki-Lingua](https://huggingface.co./datasets/GEM/wiki_lingua)
- [XLSum](https://huggingface.co./datasets/GEM/xlsum)
- [XSum](https://huggingface.co./datasets/xsum)
- Topic Classification
- [AG News](https://huggingface.co./datasets/ag_news)
- [DBPedia](https://huggingface.co./datasets/dbpedia_14)
- [TNEWS](https://huggingface.co./datasets/clue)
- [TREC](https://huggingface.co./datasets/trec)
- [CSL](https://huggingface.co./datasets/clue)
- Translation
- [Flores-200](https://huggingface.co./datasets/Muennighoff/flores200)
- [Tatoeba](https://huggingface.co./datasets/Helsinki-NLP/tatoeba_mt)
 - Word Sense Disambiguation
- [WiC](https://huggingface.co./datasets/super_glue)
- [XL-WiC](https://huggingface.co./datasets/pasinit/xlwic)
#### Evaluation datasets (included in [xP3all](https://huggingface.co./datasets/bigscience/xP3all) except for NLI & HumanEval)
- Natural Language Inference (NLI)
- [ANLI](https://huggingface.co./datasets/anli)
- [CB](https://huggingface.co./datasets/super_glue)
- [RTE](https://huggingface.co./datasets/super_glue)
- [XNLI](https://huggingface.co./datasets/xnli)
- Coreference Resolution
- [Winogrande](https://huggingface.co./datasets/winogrande)
- [XWinograd](https://huggingface.co./datasets/Muennighoff/xwinograd)
- Program Synthesis
- [HumanEval](https://huggingface.co./datasets/openai_humaneval)
- Sentence Completion
- [COPA](https://huggingface.co./datasets/super_glue)
- [Story Cloze](https://huggingface.co./datasets/story_cloze)
- [XCOPA](https://huggingface.co./datasets/xcopa)
- [XStoryCloze](https://huggingface.co./datasets/Muennighoff/xstory_cloze)
## Additional Information
### Licensing Information
The dataset is released under Apache 2.0.
### Citation Information
```bibtex
@misc{muennighoff2022crosslingual,
title={Crosslingual Generalization through Multitask Finetuning},
author={Niklas Muennighoff and Thomas Wang and Lintang Sutawika and Adam Roberts and Stella Biderman and Teven Le Scao and M Saiful Bari and Sheng Shen and Zheng-Xin Yong and Hailey Schoelkopf and Xiangru Tang and Dragomir Radev and Alham Fikri Aji and Khalid Almubarak and Samuel Albanie and Zaid Alyafeai and Albert Webson and Edward Raff and Colin Raffel},
year={2022},
eprint={2211.01786},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to the contributors of [promptsource](https://github.com/bigscience-workshop/promptsource/graphs/contributors) for adding many prompts used in this dataset. |
lmms-lab/POPE | lmms-lab | "2024-05-23T03:29:23Z" | 11,656 | 6 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-18T15:13:42Z" | ---
dataset_info:
- config_name: Full
features:
- name: id
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image_source
dtype: string
- name: image
dtype: image
- name: category
dtype: string
splits:
- name: adversarial
num_bytes: 490408158.0
num_examples: 3000
- name: popular
num_bytes: 490397000.0
num_examples: 3000
- name: random
num_bytes: 490394976.0
num_examples: 3000
download_size: 255022914
dataset_size: 1471200134.0
- config_name: default
features:
- name: id
dtype: string
- name: question_id
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: image_source
dtype: string
- name: image
dtype: image
- name: category
dtype: string
splits:
- name: test
num_bytes: 1471200135.0
num_examples: 9000
download_size: 255022914
dataset_size: 1471200135.0
configs:
- config_name: Full
data_files:
- split: adversarial
path: Full/adversarial-*
- split: popular
path: Full/popular-*
- split: random
path: Full/random-*
- config_name: default
data_files:
- split: test
path: data/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co./lmms-lab)
# This Dataset
This is a formatted version of [POPE](https://github.com/RUCAIBox/POPE). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
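POPE probes are binary yes/no questions about object presence, so evaluation reduces to comparing a model's answers against each example's `answer` field. The exact prompting and answer normalization used inside `lmms-eval` may differ; the sketch below only illustrates the metric, with the dataset-loading step shown in comments:

```python
# Minimal sketch of a POPE-style accuracy metric. Each example's `answer`
# field is "yes" or "no"; a real run would collect one model answer per probe.
def pope_accuracy(predictions, references):
    """Fraction of yes/no predictions matching gold answers (case-insensitive)."""
    if len(predictions) != len(references):
        raise ValueError("prediction/reference length mismatch")
    correct = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return correct / len(references)

# Loading the probes (downloads images; shown for illustration only):
#   from datasets import load_dataset
#   ds = load_dataset("lmms-lab/POPE", "Full", split="adversarial")
#   references = [ex["answer"] for ex in ds]

print(pope_accuracy(["yes", "no", "no"], ["Yes", "no", "yes"]))  # 2 of 3 correct
```

The `Full` config exposes the `adversarial`, `popular`, and `random` splits separately, so the same metric can be reported per split.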
```
@article{li2023evaluating,
title={Evaluating object hallucination in large vision-language models},
author={Li, Yifan and Du, Yifan and Zhou, Kun and Wang, Jinpeng and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2305.10355},
year={2023}
}
```
|
jinzhuoran/RWKU | jinzhuoran | "2024-06-18T02:25:48Z" | 11,550 | 3 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_categories:question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.10890",
"doi:10.57967/hf/2448",
"region:us",
"unlearning",
"knowledge unlearning",
"NLP",
"LLM"
] | [
"text-generation",
"fill-mask",
"question-answering"
] | "2024-06-02T12:02:18Z" | ---
language:
- en
license: cc-by-4.0
pretty_name: RWKU
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- fill-mask
- question-answering
tags:
- unlearning
- knowledge unlearning
- NLP
- LLM
configs:
- config_name: forget_target
data_files:
- split: train
path:
- "All/intro.json"
- config_name: forget_level1
data_files:
- split: test
path:
- "All/forget_level1.json"
- config_name: forget_level2
data_files:
- split: test
path:
- "All/forget_level2.json"
- config_name: forget_level3
data_files:
- split: test
path:
- "All/forget_level3.json"
- config_name: neighbor_level1
data_files:
- split: test
path:
- "All/neighbor_level1.json"
- config_name: neighbor_level2
data_files:
- split: test
path:
- "All/neighbor_level2.json"
- config_name: mia_forget
data_files:
- split: test
path:
- "All/forget_mia.json"
- config_name: mia_retain
data_files:
- split: test
path:
- "All/retain_mia.json"
- config_name: utility_general
data_files:
- split: test
path:
- "All/retain_mmlu.json"
- config_name: utility_reason
data_files:
- split: test
path:
- "All/retain_bbh.json"
- config_name: utility_truthfulness
data_files:
- split: test
path:
- "All/truthful.json"
- config_name: utility_factuality
data_files:
- split: test
path:
- "All/triviaqa.json"
- config_name: utility_fluency
data_files:
- split: test
path:
- "All/fluency.json"
- config_name: train_original_passage
data_files:
- split: train
path:
- "All/passage.json"
- config_name: train_positive_llama3
data_files:
- split: train
path:
- "All/positive.json"
- config_name: train_negative_llama3
data_files:
- split: train
path:
- "All/negative.json"
- config_name: train_pair_llama3
data_files:
- split: train
path:
- "All/pair.json"
- config_name: train_refusal_llama3
data_files:
- split: train
path:
- "All/reject.json"
- config_name: train_positive_phi3
data_files:
- split: train
path:
- "All/positive_phi.json"
- config_name: train_negative_phi3
data_files:
- split: train
path:
- "All/negative_phi.json"
- config_name: train_pair_phi3
data_files:
- split: train
path:
- "All/pair_phi.json"
- config_name: train_refusal_phi3
data_files:
- split: train
path:
- "All/reject_phi.json"
---
# Dataset Card for Real-World Knowledge Unlearning Benchmark (RWKU)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://rwku-bench.github.io
- **Repository:** https://github.com/jinzhuoran/RWKU
- **Paper:** https://arxiv.org/abs/2406.10890
### Dataset Summary
**RWKU is a real-world knowledge unlearning benchmark specifically designed for large language models (LLMs).**
This benchmark contains 200 real-world unlearning targets and 13,131 multi-level forget probes, including 3,268 fill-in-the-blank probes, 2,879 question-answer probes, and 6,984 adversarial-attack probes.
RWKU is designed based on the following three key factors:
1. For the **task setting**, we consider a more practical and challenging setting, similar to _zero-shot knowledge unlearning_.
We provide only the unlearning target and the original model, without offering any forget corpus or retain corpus.
In this way, it avoids secondary information leakage caused by the forget corpus and is not affected by the distribution bias of the retain corpus.
2. For the **knowledge source**, we choose real-world famous people from Wikipedia as the unlearning targets and demonstrate that such popular knowledge is widely present in various LLMs through memorization quantification, making it more suitable for knowledge unlearning.
Additionally, choosing entities as unlearning targets clearly defines the unlearning boundaries.
3. For the **evaluation framework**, we carefully design the forget set and the retain set to evaluate the model's capabilities from multiple real-world applications.
Regarding the forget set, we evaluate the **efficacy** of knowledge unlearning in terms of both knowledge memorization (fill-in-the-blank style) and knowledge manipulation (question-answer style).
Specifically, we also evaluate these two abilities through **adversarial attacks** to induce forgotten knowledge in the model.
We adopt four membership inference attack (MIA) methods for knowledge memorization on our collected MIA set.
We meticulously design nine types of adversarial-attack probes for knowledge manipulation, including prefix injection, affirmative suffix, role playing, and reverse query.
Regarding the retain set, we design a neighbor set to test the impact of neighbor perturbation, specifically focusing on the **locality** of unlearning.
In addition, we assess the **model utility** on various capabilities, including general ability, reasoning ability, truthfulness, factuality, and fluency.
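The MIA methods score knowledge memorization from the model's per-token likelihoods on a passage. As an illustration of this family of scores (whether RWKU uses exactly this variant is an assumption), here is a minimal Min-K% Prob sketch:

```python
def min_k_percent_score(token_logprobs, k=0.2):
    """Min-K% Prob membership score: mean log-probability of the k fraction of
    least-likely tokens. Higher (less negative) scores suggest the model has
    memorized the text; after successful unlearning the score should drop."""
    if not 0 < k <= 1:
        raise ValueError("k must be in (0, 1]")
    n = max(1, int(len(token_logprobs) * k))
    lowest = sorted(token_logprobs)[:n]  # most negative = least likely tokens
    return sum(lowest) / n

# Token log-probs would come from scoring a passage from the `mia_forget` or
# `mia_retain` set with the target LLM; these values are made up for illustration.
print(min_k_percent_score([-0.5, -1.2, -6.3, -0.8, -4.0], k=0.4))  # -> -5.15
```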
### Supported Tasks
Knowledge unlearning for LLMs.
### Languages
English.
## Dataset Structure
To evaluate the unlearning efficacy:
```python
from datasets import load_dataset
forget_level1 = load_dataset("jinzhuoran/RWKU", 'forget_level1')
forget_level2 = load_dataset("jinzhuoran/RWKU", 'forget_level2')
forget_level3 = load_dataset("jinzhuoran/RWKU", 'forget_level3')
```
To evaluate the locality:
```python
from datasets import load_dataset
neighbor_level1 = load_dataset("jinzhuoran/RWKU", 'neighbor_level1')
neighbor_level2 = load_dataset("jinzhuoran/RWKU", 'neighbor_level2')
```
To evaluate the model utility:
```python
from datasets import load_dataset
utility_general = load_dataset("jinzhuoran/RWKU", 'utility_general')
utility_reason = load_dataset("jinzhuoran/RWKU", 'utility_reason')
utility_truthfulness = load_dataset("jinzhuoran/RWKU", 'utility_truthfulness')
utility_factuality = load_dataset("jinzhuoran/RWKU", 'utility_factuality')
utility_fluency = load_dataset("jinzhuoran/RWKU", 'utility_fluency')
```
To conduct membership inference attacks:
```python
from datasets import load_dataset
mia_forget = load_dataset("jinzhuoran/RWKU", 'mia_forget')
mia_retain = load_dataset("jinzhuoran/RWKU", 'mia_retain')
```
To load the forget corpus:
```python
from datasets import load_dataset
train_original_passage = load_dataset("jinzhuoran/RWKU", 'train_original_passage')
train_positive_llama3 = load_dataset("jinzhuoran/RWKU", 'train_positive_llama3')
```
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citing Our Work
If you find our codebase and dataset beneficial, please cite our work:
```bibtex
@misc{jin2024rwku,
title={RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models},
author={Zhuoran Jin and Pengfei Cao and Chenhao Wang and Zhitao He and Hongbang Yuan and Jiachun Li and Yubo Chen and Kang Liu and Jun Zhao},
year={2024},
eprint={2406.10890},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
enzostvs/stable-diffusion-tpu-generations | enzostvs | "2024-02-22T16:53:21Z" | 11,544 | 2 | [
"license:mit",
"region:us"
] | null | "2023-11-03T15:57:18Z" | ---
license: mit
configs:
- config_name: default
data_files:
- split: train
path: "images/*.png"
---
|
DL3DV/DL3DV-ALL-2K | DL3DV | "2024-09-03T11:38:35Z" | 11,531 | 0 | [
"size_categories:n>1T",
"region:us",
"3D Vision",
"NeRF",
"3D Gaussian",
"Dataset",
"Novel View Synthesis",
"Text to 3D",
"Image to 3D"
] | null | "2024-03-05T06:03:15Z" | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- n>1T
---
# DL3DV-Dataset
This repo contains all the 2K-resolution frames, with camera poses, of the DL3DV-10K dataset. We are working hard to review the entire dataset to remove sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download the dataset from Hugging Face; see this [link](https://huggingface.co./docs/hub/en/datasets-downloading). The [480P](https://huggingface.co./datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co./datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
If you do not have enough space, we also provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) to download a subset. Usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
                        The resolution to download
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 2K resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 2K --file_type images+poses --clean_cache
# Download 2K resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 2K --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
python download.py --odir DL3DV-10K --subset 2K --resolution 2K --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K
|
rayliuca/WikidataLabels | rayliuca | "2024-01-11T04:17:57Z" | 11,366 | 1 | [
"task_categories:translation",
"task_categories:text2text-generation",
"language:en",
"language:fr",
"language:de",
"language:ja",
"language:zh",
"language:hi",
"language:ar",
"language:bn",
"language:ru",
"language:es",
"license:cc0-1.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"translation",
"text2text-generation"
] | "2024-01-01T00:23:08Z" | ---
license: cc0-1.0
dataset_info:
- config_name: aa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13986211
num_examples: 436895
download_size: 9821312
dataset_size: 13986211
- config_name: ab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5012532
num_examples: 159908
download_size: 3013706
dataset_size: 5012532
- config_name: abs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252728
num_examples: 143986
download_size: 2567450
dataset_size: 4252728
- config_name: ace
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19105673
num_examples: 574712
download_size: 13573374
dataset_size: 19105673
- config_name: ady
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4444259
num_examples: 148627
download_size: 2705754
dataset_size: 4444259
- config_name: ady-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4412556
num_examples: 147884
download_size: 2682170
dataset_size: 4412556
- config_name: aeb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4305734
num_examples: 145198
download_size: 2606368
dataset_size: 4305734
- config_name: aeb-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4467930
num_examples: 148796
download_size: 2722169
dataset_size: 4467930
- config_name: aeb-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12770359
num_examples: 404946
download_size: 8886489
dataset_size: 12770359
- config_name: af
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58561042
num_examples: 1643153
download_size: 42539052
dataset_size: 58561042
- config_name: agq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1317
num_examples: 33
download_size: 2906
dataset_size: 1317
- config_name: ak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14198715
num_examples: 443037
download_size: 9991525
dataset_size: 14198715
- config_name: aln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13811116
num_examples: 432089
download_size: 9673418
dataset_size: 13811116
- config_name: als
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20691
num_examples: 543
download_size: 17540
dataset_size: 20691
- config_name: alt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 108390
num_examples: 1814
download_size: 59046
dataset_size: 108390
- config_name: am
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5231176
num_examples: 163038
download_size: 3187164
dataset_size: 5231176
- config_name: ami
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21519
num_examples: 686
download_size: 16640
dataset_size: 21519
- config_name: an
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 240345072
num_examples: 5921087
download_size: 164895205
dataset_size: 240345072
- config_name: ang
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14275715
num_examples: 443461
download_size: 10063758
dataset_size: 14275715
- config_name: anp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8558258
num_examples: 241612
download_size: 4381360
dataset_size: 8558258
- config_name: ar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 291173732
num_examples: 5724064
download_size: 159369497
dataset_size: 291173732
- config_name: arc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4473283
num_examples: 150006
download_size: 2722619
dataset_size: 4473283
- config_name: arn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13879729
num_examples: 433912
download_size: 9715431
dataset_size: 13879729
- config_name: arq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4346991
num_examples: 146004
download_size: 2636972
dataset_size: 4346991
- config_name: ary
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5358568
num_examples: 171568
download_size: 3313402
dataset_size: 5358568
- config_name: arz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81806333
num_examples: 1669699
download_size: 49423508
dataset_size: 81806333
- config_name: as
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21658610
num_examples: 450074
download_size: 9641626
dataset_size: 21658610
- config_name: ase
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252943
num_examples: 143986
download_size: 2568106
dataset_size: 4252943
- config_name: ast
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1385628786
num_examples: 20696237
download_size: 955908362
dataset_size: 1385628786
- config_name: atj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12996229
num_examples: 411639
download_size: 9057557
dataset_size: 12996229
- config_name: av
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4722934
num_examples: 153781
download_size: 2880103
dataset_size: 4722934
- config_name: avk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13194485
num_examples: 414598
download_size: 9200917
dataset_size: 13194485
- config_name: awa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8599312
num_examples: 242320
download_size: 4411751
dataset_size: 8599312
- config_name: ay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14269432
num_examples: 443521
download_size: 10029939
dataset_size: 14269432
- config_name: az
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21049248
num_examples: 516732
download_size: 14117527
dataset_size: 21049248
- config_name: azb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30781587
num_examples: 607562
download_size: 16028687
dataset_size: 30781587
- config_name: ba
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11525351
num_examples: 261509
download_size: 6733777
dataset_size: 11525351
- config_name: ban
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13674052
num_examples: 426706
download_size: 9513747
dataset_size: 13674052
- config_name: ban-bali
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50961
num_examples: 748
download_size: 25817
dataset_size: 50961
- config_name: bar
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54783034
num_examples: 1566120
download_size: 40389830
dataset_size: 54783034
- config_name: bbc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12820895
num_examples: 406960
download_size: 8917054
dataset_size: 12820895
- config_name: bcc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017228
num_examples: 241977
download_size: 4344579
dataset_size: 8017228
- config_name: be
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30978832
num_examples: 564184
download_size: 17461174
dataset_size: 30978832
- config_name: be-tarask
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18931909
num_examples: 374396
download_size: 10871239
dataset_size: 18931909
- config_name: bg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 200628708
num_examples: 4383953
download_size: 137745533
dataset_size: 200628708
- config_name: bgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7999280
num_examples: 241566
download_size: 4331249
dataset_size: 7999280
- config_name: bi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14040026
num_examples: 438382
download_size: 9867032
dataset_size: 14040026
- config_name: bjn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8375348
num_examples: 254558
download_size: 5722334
dataset_size: 8375348
- config_name: bm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18145787
num_examples: 549694
download_size: 13129193
dataset_size: 18145787
- config_name: bn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 815803977
num_examples: 9767284
download_size: 261147329
dataset_size: 815803977
- config_name: bo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11671330
num_examples: 278307
download_size: 5669602
dataset_size: 11671330
- config_name: bpy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15497749
num_examples: 347458
download_size: 6991190
dataset_size: 15497749
- config_name: bqi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8017455
num_examples: 241984
download_size: 4345123
dataset_size: 8017455
- config_name: br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58304963
num_examples: 1653800
download_size: 42722031
dataset_size: 58304963
- config_name: brh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5328437
num_examples: 171504
download_size: 3376189
dataset_size: 5328437
- config_name: bs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30441466
num_examples: 858190
download_size: 21606575
dataset_size: 30441466
- config_name: btm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252525
num_examples: 143980
download_size: 2567218
dataset_size: 4252525
- config_name: bto
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12841721
num_examples: 407470
download_size: 8934218
dataset_size: 12841721
- config_name: bug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7595464
num_examples: 235268
download_size: 5129941
dataset_size: 7595464
- config_name: bxr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4713699
num_examples: 153707
download_size: 2869313
dataset_size: 4713699
- config_name: ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 408509932
num_examples: 9936886
download_size: 288474980
dataset_size: 408509932
- config_name: cbk-zam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14108232
num_examples: 440345
download_size: 9920793
dataset_size: 14108232
- config_name: cdo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6503254
num_examples: 201362
download_size: 4137841
dataset_size: 6503254
- config_name: ce
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28093148
num_examples: 607767
download_size: 16367596
dataset_size: 28093148
- config_name: ceb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 332947091
num_examples: 7769402
download_size: 219525737
dataset_size: 332947091
- config_name: ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13983906
num_examples: 436785
download_size: 9817385
dataset_size: 13983906
- config_name: cho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950786
num_examples: 435869
download_size: 9791296
dataset_size: 13950786
- config_name: chr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5386793
num_examples: 172855
download_size: 3419676
dataset_size: 5386793
- config_name: chy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994916
num_examples: 437007
download_size: 9830465
dataset_size: 13994916
- config_name: ckb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23343034
num_examples: 511183
download_size: 11459344
dataset_size: 23343034
- config_name: co
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47080480
num_examples: 1346929
download_size: 34551346
dataset_size: 47080480
- config_name: cps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12849864
num_examples: 407695
download_size: 8941921
dataset_size: 12849864
- config_name: cr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5516556
num_examples: 176667
download_size: 3532952
dataset_size: 5516556
- config_name: crh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10864382
num_examples: 336709
download_size: 7542853
dataset_size: 10864382
- config_name: crh-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4419064
num_examples: 148046
download_size: 2688683
dataset_size: 4419064
- config_name: crh-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14201429
num_examples: 442905
download_size: 9986290
dataset_size: 14201429
- config_name: cs
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140189244
num_examples: 3384048
download_size: 97516751
dataset_size: 140189244
- config_name: csb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20177120
num_examples: 619275
download_size: 14528772
dataset_size: 20177120
- config_name: cv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8047221
num_examples: 215611
download_size: 4857718
dataset_size: 8047221
- config_name: cy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 89241808
num_examples: 2244550
download_size: 62686006
dataset_size: 89241808
- config_name: da
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 130931077
num_examples: 3448894
download_size: 98202417
dataset_size: 130931077
- config_name: dag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2664957
num_examples: 78534
download_size: 2052615
dataset_size: 2664957
- config_name: de
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 765398522
num_examples: 17531361
download_size: 527642124
dataset_size: 765398522
- config_name: de-at
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53043722
num_examples: 1515373
download_size: 38761571
dataset_size: 53043722
- config_name: de-ch
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53480908
num_examples: 1528137
download_size: 39349412
dataset_size: 53480908
- config_name: de-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4256391
num_examples: 144061
download_size: 2571862
dataset_size: 4256391
- config_name: din
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12819746
num_examples: 406591
download_size: 8922303
dataset_size: 12819746
- config_name: diq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7570161
num_examples: 232674
download_size: 5057742
dataset_size: 7570161
- config_name: dsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16135830
num_examples: 491423
download_size: 11412316
dataset_size: 16135830
- config_name: dtp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13867373
num_examples: 433733
download_size: 9720699
dataset_size: 13867373
- config_name: dty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8839082
num_examples: 246026
download_size: 4551845
dataset_size: 8839082
- config_name: dua
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2631
num_examples: 87
download_size: 3877
dataset_size: 2631
- config_name: dv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 81396462
num_examples: 2103276
download_size: 45332104
dataset_size: 81396462
- config_name: dz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8590239
num_examples: 242196
download_size: 4406353
dataset_size: 8590239
- config_name: ee
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14377017
num_examples: 447208
download_size: 10136064
dataset_size: 14377017
- config_name: egl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13068224
num_examples: 413551
download_size: 9121776
dataset_size: 13068224
- config_name: el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32978562
num_examples: 592016
download_size: 19577876
dataset_size: 32978562
- config_name: eml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14768563
num_examples: 458847
download_size: 10453636
dataset_size: 14768563
- config_name: en
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6327454281
num_examples: 81801560
download_size: 4224231068
dataset_size: 6327454281
- config_name: en-ca
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 73305274
num_examples: 1909970
download_size: 53060194
dataset_size: 73305274
- config_name: en-gb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 115978412
num_examples: 2520405
download_size: 78924421
dataset_size: 115978412
- config_name: en-us
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14815
num_examples: 332
download_size: 9953
dataset_size: 14815
- config_name: eo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 256196064
num_examples: 6285304
download_size: 177219679
dataset_size: 256196064
- config_name: es
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 730214298
num_examples: 17233968
download_size: 514588069
dataset_size: 730214298
- config_name: es-419
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4355180
num_examples: 146476
download_size: 2659218
dataset_size: 4355180
- config_name: es-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4280933
num_examples: 144717
download_size: 2592085
dataset_size: 4280933
- config_name: et
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 65123623
num_examples: 1820762
download_size: 48197302
dataset_size: 65123623
- config_name: eu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 290282374
num_examples: 7109758
download_size: 197889378
dataset_size: 290282374
- config_name: ext
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 223257222
num_examples: 5359047
download_size: 147078789
dataset_size: 223257222
- config_name: fa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 123727757
num_examples: 2142642
download_size: 65952114
dataset_size: 123727757
- config_name: ff
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14116652
num_examples: 440614
download_size: 9920388
dataset_size: 14116652
- config_name: fi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 286539944
num_examples: 6905698
download_size: 209916638
dataset_size: 286539944
- config_name: fit
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20217258
num_examples: 620391
download_size: 14566702
dataset_size: 20217258
- config_name: fj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14159041
num_examples: 441745
download_size: 9956108
dataset_size: 14159041
- config_name: fkv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4328482
num_examples: 145988
download_size: 2619845
dataset_size: 4328482
- config_name: fo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24474476
num_examples: 731732
download_size: 17876981
dataset_size: 24474476
- config_name: fr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 774128723
num_examples: 17908351
download_size: 534489308
dataset_size: 774128723
- config_name: frc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17896106
num_examples: 547258
download_size: 12953740
dataset_size: 17896106
- config_name: frp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40902510
num_examples: 1191134
download_size: 29778105
dataset_size: 40902510
- config_name: frr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16979214
num_examples: 515350
download_size: 12069637
dataset_size: 16979214
- config_name: fur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42077410
num_examples: 1221071
download_size: 30714082
dataset_size: 42077410
- config_name: ga
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 471527543
num_examples: 11524282
download_size: 320967189
dataset_size: 471527543
- config_name: gag
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14149375
num_examples: 440732
download_size: 9940551
dataset_size: 14149375
- config_name: gan
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31572161
num_examples: 905186
download_size: 18909564
dataset_size: 31572161
- config_name: gan-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31004794
num_examples: 889875
download_size: 18566811
dataset_size: 31004794
- config_name: gan-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4374444
num_examples: 147098
download_size: 2657182
dataset_size: 4374444
- config_name: gcr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4311409
num_examples: 145829
download_size: 2618211
dataset_size: 4311409
- config_name: gd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49316935
num_examples: 1429457
download_size: 36220978
dataset_size: 49316935
- config_name: gl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 289484839
num_examples: 7052226
download_size: 197315151
dataset_size: 289484839
- config_name: glk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8327018
num_examples: 249115
download_size: 4538325
dataset_size: 8327018
- config_name: gn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14212974
num_examples: 442765
download_size: 10004863
dataset_size: 14212974
- config_name: gom
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4584575
num_examples: 150273
download_size: 2780570
dataset_size: 4584575
- config_name: gom-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8585678
num_examples: 242131
download_size: 4400578
dataset_size: 8585678
- config_name: gom-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12783006
num_examples: 405302
download_size: 8897342
dataset_size: 12783006
- config_name: gor
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14667616
num_examples: 454512
download_size: 10319196
dataset_size: 14667616
- config_name: got
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5432139
num_examples: 172951
download_size: 3435531
dataset_size: 5432139
- config_name: grc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4494817
num_examples: 149631
download_size: 2746170
dataset_size: 4494817
- config_name: gu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23788894
num_examples: 486140
download_size: 10779200
dataset_size: 23788894
- config_name: guc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1419
num_examples: 38
download_size: 3054
dataset_size: 1419
- config_name: guw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 118
num_examples: 4
download_size: 1864
dataset_size: 118
- config_name: gv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20683485
num_examples: 631005
download_size: 14894590
dataset_size: 20683485
- config_name: ha
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14716168
num_examples: 455836
download_size: 10421790
dataset_size: 14716168
- config_name: hak
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6128644
num_examples: 193036
download_size: 3991729
dataset_size: 6128644
- config_name: haw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14158084
num_examples: 441511
download_size: 9952975
dataset_size: 14158084
- config_name: he
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43629050
num_examples: 884809
download_size: 27221301
dataset_size: 43629050
- config_name: hi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 37237187
num_examples: 668964
download_size: 17804873
dataset_size: 37237187
- config_name: hif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14457954
num_examples: 449009
download_size: 10166264
dataset_size: 14457954
- config_name: hif-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14519845
num_examples: 454037
download_size: 10240704
dataset_size: 14519845
- config_name: hil
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12928914
num_examples: 409962
download_size: 9009705
dataset_size: 12928914
- config_name: ho
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13950504
num_examples: 435857
download_size: 9790849
dataset_size: 13950504
- config_name: hr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 61272623
num_examples: 1720527
download_size: 45307411
dataset_size: 61272623
- config_name: hrx
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12869295
num_examples: 407823
download_size: 8964114
dataset_size: 12869295
- config_name: hsb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23720349
num_examples: 707100
download_size: 17145693
dataset_size: 23720349
- config_name: ht
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16835529
num_examples: 509955
download_size: 11880404
dataset_size: 16835529
- config_name: hu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 85054175
num_examples: 2200589
download_size: 64143342
dataset_size: 85054175
- config_name: hu-formal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252810
num_examples: 143986
download_size: 2567582
dataset_size: 4252810
- config_name: hy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39339286
num_examples: 773925
download_size: 22108994
dataset_size: 39339286
- config_name: hyw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5443608
num_examples: 166902
download_size: 3238370
dataset_size: 5443608
- config_name: hz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13948574
num_examples: 435804
download_size: 9788697
dataset_size: 13948574
- config_name: ia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 229143237
num_examples: 5616433
download_size: 155877454
dataset_size: 229143237
- config_name: id
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 95220928
num_examples: 2512331
download_size: 69525046
dataset_size: 95220928
- config_name: ie
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 225725262
num_examples: 5533032
download_size: 153371930
dataset_size: 225725262
- config_name: ig
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20109388
num_examples: 617044
download_size: 14475407
dataset_size: 20109388
- config_name: ii
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4310418
num_examples: 145332
download_size: 2609723
dataset_size: 4310418
- config_name: ik
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989609
num_examples: 436958
download_size: 9823174
dataset_size: 13989609
- config_name: ike-cans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4352278
num_examples: 146355
download_size: 2645174
dataset_size: 4352278
- config_name: ike-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13851135
num_examples: 432932
download_size: 9714057
dataset_size: 13851135
- config_name: ilo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15955483
num_examples: 480555
download_size: 11141942
dataset_size: 15955483
- config_name: inh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634360
num_examples: 152226
download_size: 2831580
dataset_size: 4634360
- config_name: io
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 233656822
num_examples: 5757440
download_size: 159720058
dataset_size: 233656822
- config_name: is
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51679396
num_examples: 1483610
download_size: 37965494
dataset_size: 51679396
- config_name: it
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 536601426
num_examples: 12631487
download_size: 375025347
dataset_size: 536601426
- config_name: iu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5360588
num_examples: 172215
download_size: 3402239
dataset_size: 5360588
- config_name: ja
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 140641579
num_examples: 2917962
download_size: 92145329
dataset_size: 140641579
- config_name: jam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18849751
num_examples: 571777
download_size: 13684422
dataset_size: 18849751
- config_name: jbo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14301985
num_examples: 446512
download_size: 9994516
dataset_size: 14301985
- config_name: jv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27232302
num_examples: 794181
download_size: 19651565
dataset_size: 27232302
- config_name: ka
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24073345
num_examples: 399546
download_size: 11679979
dataset_size: 24073345
- config_name: kaa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14082184
num_examples: 439411
download_size: 9902820
dataset_size: 14082184
- config_name: kab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18459676
num_examples: 557857
download_size: 13384218
dataset_size: 18459676
- config_name: kbd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4594409
num_examples: 149733
download_size: 2759503
dataset_size: 4594409
- config_name: kbd-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4417661
num_examples: 148017
download_size: 2687531
dataset_size: 4417661
- config_name: kbp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12873178
num_examples: 408039
download_size: 8965474
dataset_size: 12873178
- config_name: kea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12793700
num_examples: 405901
download_size: 8896866
dataset_size: 12793700
- config_name: kg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40949149
num_examples: 1193499
download_size: 29766747
dataset_size: 40949149
- config_name: khw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308653
num_examples: 145279
download_size: 2608581
dataset_size: 4308653
- config_name: ki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14056900
num_examples: 439015
download_size: 9875534
dataset_size: 14056900
- config_name: kj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13881723
num_examples: 433861
download_size: 9733715
dataset_size: 13881723
- config_name: kjp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8504302
num_examples: 240339
download_size: 4341523
dataset_size: 8504302
- config_name: kk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19216115
num_examples: 428880
download_size: 11577682
dataset_size: 19216115
- config_name: kk-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7241749
num_examples: 211731
download_size: 4487032
dataset_size: 7241749
- config_name: kk-kz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4937945
num_examples: 160027
download_size: 3062906
dataset_size: 4937945
- config_name: kk-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22197825
num_examples: 677162
download_size: 16072332
dataset_size: 22197825
- config_name: kk-tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20060635
num_examples: 616521
download_size: 14438929
dataset_size: 20060635
- config_name: ko
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 60335212
num_examples: 1364440
download_size: 39186630
dataset_size: 60335212
- config_name: ko-kp
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4338717
num_examples: 146150
download_size: 2630925
dataset_size: 4338717
- config_name: koi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4737590
num_examples: 155082
download_size: 2894674
dataset_size: 4737590
- config_name: kr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13886057
num_examples: 433990
download_size: 9737602
dataset_size: 13886057
- config_name: krc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4646136
num_examples: 151026
download_size: 2785454
dataset_size: 4646136
- config_name: kri
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12798530
num_examples: 406032
download_size: 8902330
dataset_size: 12798530
- config_name: krj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13850324
num_examples: 433444
download_size: 9703460
dataset_size: 13850324
- config_name: krl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12788020
num_examples: 405729
download_size: 8893337
dataset_size: 12788020
- config_name: ks
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4390604
num_examples: 147033
download_size: 2671069
dataset_size: 4390604
- config_name: ks-deva
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8567518
num_examples: 241832
download_size: 4387687
dataset_size: 8567518
- config_name: ksh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20394712
num_examples: 624523
download_size: 14698860
dataset_size: 20394712
- config_name: ku
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8037777
num_examples: 239515
download_size: 5306097
dataset_size: 8037777
- config_name: ku-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4577826
num_examples: 151290
download_size: 2796159
dataset_size: 4577826
- config_name: ku-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14683841
num_examples: 458802
download_size: 10371977
dataset_size: 14683841
- config_name: kum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252739
num_examples: 143985
download_size: 2567503
dataset_size: 4252739
- config_name: kv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4946978
num_examples: 158888
download_size: 2997865
dataset_size: 4946978
- config_name: kw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20245535
num_examples: 621432
download_size: 14581378
dataset_size: 20245535
- config_name: ky
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8909613
num_examples: 235165
download_size: 5462115
dataset_size: 8909613
- config_name: la
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 299766395
num_examples: 7085082
download_size: 201477460
dataset_size: 299766395
- config_name: lad
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20336417
num_examples: 622775
download_size: 14653199
dataset_size: 20336417
- config_name: lb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56473066
num_examples: 1601093
download_size: 41410732
dataset_size: 56473066
- config_name: lbe
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4501470
num_examples: 149898
download_size: 2744786
dataset_size: 4501470
- config_name: lez
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4890798
num_examples: 155936
download_size: 2959653
dataset_size: 4890798
- config_name: lfn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14709210
num_examples: 456719
download_size: 10408539
dataset_size: 14709210
- config_name: lg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13979286
num_examples: 436009
download_size: 9802779
dataset_size: 13979286
- config_name: li
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43476868
num_examples: 1253970
download_size: 31750932
dataset_size: 43476868
- config_name: lij
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42327066
num_examples: 1227346
download_size: 30898971
dataset_size: 42327066
- config_name: liv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12781331
num_examples: 405236
download_size: 8895889
dataset_size: 12781331
- config_name: lki
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8039166
num_examples: 242526
download_size: 4363703
dataset_size: 8039166
- config_name: lld
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 90305
num_examples: 2634
download_size: 69672
dataset_size: 90305
- config_name: lmo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18287638
num_examples: 545398
download_size: 13130119
dataset_size: 18287638
- config_name: ln
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14123637
num_examples: 439731
download_size: 9915851
dataset_size: 14123637
- config_name: lo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9905189
num_examples: 271710
download_size: 5313218
dataset_size: 9905189
- config_name: loz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695602
num_examples: 428723
download_size: 9581113
dataset_size: 13695602
- config_name: lt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 39902419
num_examples: 1096727
download_size: 29185765
dataset_size: 39902419
- config_name: ltg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13884707
num_examples: 433453
download_size: 9736637
dataset_size: 13884707
- config_name: lus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13695197
num_examples: 428712
download_size: 9580538
dataset_size: 13695197
- config_name: luz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8459036
num_examples: 253454
download_size: 4687414
dataset_size: 8459036
- config_name: lv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27242119
num_examples: 764753
download_size: 19676667
dataset_size: 27242119
- config_name: lzh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25067538
num_examples: 685152
download_size: 14998856
dataset_size: 25067538
- config_name: mdf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4634268
num_examples: 152141
download_size: 2820744
dataset_size: 4634268
- config_name: mg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43863002
num_examples: 1271074
download_size: 32016826
dataset_size: 43863002
- config_name: mh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13775721
num_examples: 431162
download_size: 9644397
dataset_size: 13775721
- config_name: mi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20857040
num_examples: 637118
download_size: 15060301
dataset_size: 20857040
- config_name: min
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53044258
num_examples: 1464128
download_size: 38587450
dataset_size: 53044258
- config_name: mk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24087229
num_examples: 449241
download_size: 12217912
dataset_size: 24087229
- config_name: ml
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 189266798
num_examples: 2664923
download_size: 71344031
dataset_size: 189266798
- config_name: mn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9311543
num_examples: 219695
download_size: 5272784
dataset_size: 9311543
- config_name: mni
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8696893
num_examples: 243616
download_size: 4470994
dataset_size: 8696893
- config_name: mnw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8861861
num_examples: 244906
download_size: 4517726
dataset_size: 8861861
- config_name: mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5377009
num_examples: 172144
download_size: 3405661
dataset_size: 5377009
- config_name: mr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 26855182
num_examples: 526220
download_size: 12358679
dataset_size: 26855182
- config_name: mrh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 68
num_examples: 2
download_size: 1820
dataset_size: 68
- config_name: mrj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5007903
num_examples: 160889
download_size: 3073431
dataset_size: 5007903
- config_name: ms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 64674328
num_examples: 1803714
download_size: 47165217
dataset_size: 64674328
- config_name: ms-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 136496
num_examples: 2961
download_size: 92316
dataset_size: 136496
- config_name: mt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22632686
num_examples: 682867
download_size: 16352572
dataset_size: 22632686
- config_name: mus
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14013416
num_examples: 437688
download_size: 9835239
dataset_size: 14013416
- config_name: mwl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14493299
num_examples: 448926
download_size: 10225888
dataset_size: 14493299
- config_name: my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16182182
num_examples: 345096
download_size: 7981905
dataset_size: 16182182
- config_name: mzn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17973941
num_examples: 447870
download_size: 9174617
dataset_size: 17973941
- config_name: na
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13992666
num_examples: 436956
download_size: 9823328
dataset_size: 13992666
- config_name: nah
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14490294
num_examples: 449748
download_size: 10192501
dataset_size: 14490294
- config_name: nan-hani
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 191
num_examples: 6
download_size: 1925
dataset_size: 191
- config_name: nap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42362346
num_examples: 1229161
download_size: 30918265
dataset_size: 42362346
- config_name: nb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 142554768
num_examples: 3688026
download_size: 105549981
dataset_size: 142554768
- config_name: nds
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 58766114
num_examples: 1666813
download_size: 43421948
dataset_size: 58766114
- config_name: nds-nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44121756
num_examples: 1273149
download_size: 32201410
dataset_size: 44121756
- config_name: ne
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11925386
num_examples: 295006
download_size: 6265232
dataset_size: 11925386
- config_name: new
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16906308
num_examples: 350362
download_size: 7680329
dataset_size: 16906308
- config_name: ng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13870754
num_examples: 433582
download_size: 9723795
dataset_size: 13870754
- config_name: nia
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20649
num_examples: 515
download_size: 16535
dataset_size: 20649
- config_name: niu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12794247
num_examples: 405902
download_size: 8897260
dataset_size: 12794247
- config_name: nl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5016576732
num_examples: 61931959
download_size: 3380404239
dataset_size: 5016576732
- config_name: nn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 99997815
num_examples: 2708994
download_size: 74736304
dataset_size: 99997815
- config_name: 'no'
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2934
num_examples: 64
download_size: 4108
dataset_size: 2934
- config_name: nod
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4322068
num_examples: 145566
download_size: 2618106
dataset_size: 4322068
- config_name: nov
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14150434
num_examples: 440903
download_size: 9947798
dataset_size: 14150434
- config_name: nqo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8094271
num_examples: 243184
download_size: 4398836
dataset_size: 8094271
- config_name: nrm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 41330956
num_examples: 1203295
download_size: 30084065
dataset_size: 41330956
- config_name: nso
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14178321
num_examples: 443205
download_size: 9959708
dataset_size: 14178321
- config_name: nv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15351770
num_examples: 455188
download_size: 10472240
dataset_size: 15351770
- config_name: ny
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13989813
num_examples: 436764
download_size: 9821588
dataset_size: 13989813
- config_name: nys
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13092059
num_examples: 413241
download_size: 9153100
dataset_size: 13092059
- config_name: oc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 266612548
num_examples: 6569770
download_size: 180156462
dataset_size: 266612548
- config_name: olo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13200388
num_examples: 416935
download_size: 9214968
dataset_size: 13200388
- config_name: om
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5476389
num_examples: 175314
download_size: 3496637
dataset_size: 5476389
- config_name: or
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 22798709
num_examples: 470237
download_size: 10322832
dataset_size: 22798709
- config_name: os
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5946062
num_examples: 177054
download_size: 3583703
dataset_size: 5946062
- config_name: ota
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8015024
num_examples: 241903
download_size: 4343478
dataset_size: 8015024
- config_name: pa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20505754
num_examples: 481522
download_size: 10552147
dataset_size: 20505754
- config_name: pam
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14527964
num_examples: 451253
download_size: 10242443
dataset_size: 14527964
- config_name: pap
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 54505401
num_examples: 1449881
download_size: 40415776
dataset_size: 54505401
- config_name: pcd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42132826
num_examples: 1221362
download_size: 30766812
dataset_size: 42132826
- config_name: pdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14435256
num_examples: 448055
download_size: 10178322
dataset_size: 14435256
- config_name: pdt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13994892
num_examples: 437200
download_size: 9819388
dataset_size: 13994892
- config_name: pfl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15461023
num_examples: 474198
download_size: 10893651
dataset_size: 15461023
- config_name: pi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8913354
num_examples: 250251
download_size: 4651392
dataset_size: 8913354
- config_name: pih
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13971081
num_examples: 436214
download_size: 9810653
dataset_size: 13971081
- config_name: pl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 426030491
num_examples: 10025139
download_size: 295767506
dataset_size: 426030491
- config_name: pms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 51268512
num_examples: 1477043
download_size: 37698831
dataset_size: 51268512
- config_name: pnb
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16192682
num_examples: 409037
download_size: 9196626
dataset_size: 16192682
- config_name: pnt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4439173
num_examples: 148336
download_size: 2703117
dataset_size: 4439173
- config_name: prg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17940420
num_examples: 544030
download_size: 12958482
dataset_size: 17940420
- config_name: ps
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8860902
num_examples: 259186
download_size: 4916502
dataset_size: 8860902
- config_name: pt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 491184040
num_examples: 11574568
download_size: 340831923
dataset_size: 491184040
- config_name: pt-br
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 318857431
num_examples: 7782980
download_size: 223442911
dataset_size: 318857431
- config_name: pwn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8500
num_examples: 269
download_size: 8738
dataset_size: 8500
- config_name: qu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15254702
num_examples: 468823
download_size: 10750388
dataset_size: 15254702
- config_name: quc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32
num_examples: 1
download_size: 1772
dataset_size: 32
- config_name: qug
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13798264
num_examples: 431733
download_size: 9661685
dataset_size: 13798264
- config_name: rgn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17001688
num_examples: 519871
download_size: 12258201
dataset_size: 17001688
- config_name: rif
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13792951
num_examples: 431588
download_size: 9657698
dataset_size: 13792951
- config_name: rm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 44450577
num_examples: 1284908
download_size: 32519630
dataset_size: 44450577
- config_name: rmc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 159
num_examples: 4
download_size: 1963
dataset_size: 159
- config_name: rmy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5610156
num_examples: 179191
download_size: 3608283
dataset_size: 5610156
- config_name: rn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13935534
num_examples: 435271
download_size: 9779486
dataset_size: 13935534
- config_name: ro
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 247469452
num_examples: 5878366
download_size: 177525205
dataset_size: 247469452
- config_name: roa-tara
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14425120
num_examples: 448972
download_size: 10152875
dataset_size: 14425120
- config_name: ru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 405103215
num_examples: 7485811
download_size: 257215625
dataset_size: 405103215
- config_name: rue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4953403
num_examples: 159530
download_size: 3037824
dataset_size: 4953403
- config_name: rup
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14459686
num_examples: 450345
download_size: 10198398
dataset_size: 14459686
- config_name: ruq-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4434290
num_examples: 148404
download_size: 2700920
dataset_size: 4434290
- config_name: ruq-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783683
num_examples: 430978
download_size: 9656941
dataset_size: 13783683
- config_name: rw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14090196
num_examples: 439172
download_size: 9901257
dataset_size: 14090196
- config_name: rwr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8568706
num_examples: 241841
download_size: 4388475
dataset_size: 8568706
- config_name: ryu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 2852
num_examples: 82
download_size: 4237
dataset_size: 2852
- config_name: sa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 21404327
num_examples: 455674
download_size: 9692464
dataset_size: 21404327
- config_name: sat
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10810040
num_examples: 284911
download_size: 5750917
dataset_size: 10810040
- config_name: sc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47195572
num_examples: 1348137
download_size: 34521764
dataset_size: 47195572
- config_name: scn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 43458983
num_examples: 1259067
download_size: 31775157
dataset_size: 43458983
- config_name: sco
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 56960413
num_examples: 1611092
download_size: 41724559
dataset_size: 56960413
- config_name: sd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14257513
num_examples: 363318
download_size: 7844047
dataset_size: 14257513
- config_name: sdc
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13975497
num_examples: 436913
download_size: 9800517
dataset_size: 13975497
- config_name: se
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 23962268
num_examples: 711439
download_size: 17409387
dataset_size: 23962268
- config_name: sei
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13827581
num_examples: 432520
download_size: 9684192
dataset_size: 13827581
- config_name: sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13913524
num_examples: 434751
download_size: 9761739
dataset_size: 13913524
- config_name: sh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 30173635
num_examples: 746207
download_size: 20133594
dataset_size: 30173635
- config_name: shi-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13783218
num_examples: 430968
download_size: 9656828
dataset_size: 13783218
- config_name: shi-tfng
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4308577
num_examples: 145279
download_size: 2608525
dataset_size: 4308577
- config_name: shn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10139002
num_examples: 260808
download_size: 4952168
dataset_size: 10139002
- config_name: shy-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4255322
num_examples: 144058
download_size: 2570625
dataset_size: 4255322
- config_name: si
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 7405400
num_examples: 189718
download_size: 4270591
dataset_size: 7405400
- config_name: sjd
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4300688
num_examples: 145047
download_size: 2604357
dataset_size: 4300688
- config_name: sje
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20970223
num_examples: 637639
download_size: 15120381
dataset_size: 20970223
- config_name: sju
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4315103
num_examples: 145655
download_size: 2620763
dataset_size: 4315103
- config_name: sk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 75586366
num_examples: 2050873
download_size: 54951330
dataset_size: 75586366
- config_name: skr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4274062
num_examples: 144443
download_size: 2585286
dataset_size: 4274062
- config_name: sl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 157883240
num_examples: 4112048
download_size: 118047353
dataset_size: 157883240
- config_name: sli
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13909208
num_examples: 434986
download_size: 9745964
dataset_size: 13909208
- config_name: sm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13984823
num_examples: 436830
download_size: 9817472
dataset_size: 13984823
- config_name: sma
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20653595
num_examples: 630437
download_size: 14902319
dataset_size: 20653595
- config_name: smj
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19640206
num_examples: 604326
download_size: 14133964
dataset_size: 19640206
- config_name: smn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 10902411
num_examples: 337543
download_size: 7576850
dataset_size: 10902411
- config_name: sms
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4462345
num_examples: 149355
download_size: 2741038
dataset_size: 4462345
- config_name: sn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20116601
num_examples: 618231
download_size: 14463728
dataset_size: 20116601
- config_name: sq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 304708913
num_examples: 7311820
download_size: 225592169
dataset_size: 304708913
- config_name: sr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52787253
num_examples: 1018361
download_size: 31364006
dataset_size: 52787253
- config_name: sr-ec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 9237541
num_examples: 248556
download_size: 5875548
dataset_size: 9237541
- config_name: sr-el
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 48848162
num_examples: 1418824
download_size: 35859120
dataset_size: 48848162
- config_name: srq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12796525
num_examples: 405957
download_size: 8899493
dataset_size: 12796525
- config_name: ss
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13823630
num_examples: 432423
download_size: 9682165
dataset_size: 13823630
- config_name: st
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13938937
num_examples: 435419
download_size: 9785161
dataset_size: 13938937
- config_name: stq
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14484394
num_examples: 449885
download_size: 10228446
dataset_size: 14484394
- config_name: su
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20025826
num_examples: 583096
download_size: 14042822
dataset_size: 20025826
- config_name: sv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 339074900
num_examples: 8115455
download_size: 236022796
dataset_size: 339074900
- config_name: sw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 50612064
num_examples: 1465385
download_size: 37096369
dataset_size: 50612064
- config_name: szl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16772062
num_examples: 500107
download_size: 11868254
dataset_size: 16772062
- config_name: szy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4332021
num_examples: 146136
download_size: 2633271
dataset_size: 4332021
- config_name: ta
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 31251824
num_examples: 546558
download_size: 15157673
dataset_size: 31251824
- config_name: tay
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4345269
num_examples: 146938
download_size: 2632535
dataset_size: 4345269
- config_name: tcy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 8723594
num_examples: 244350
download_size: 4487471
dataset_size: 8723594
- config_name: te
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 27587665
num_examples: 569615
download_size: 13669398
dataset_size: 27587665
- config_name: tet
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15092299
num_examples: 466244
download_size: 10702917
dataset_size: 15092299
- config_name: tg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 12643125
num_examples: 304625
download_size: 7622522
dataset_size: 12643125
- config_name: tg-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4504034
num_examples: 149533
download_size: 2755000
dataset_size: 4504034
- config_name: tg-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 19845835
num_examples: 610020
download_size: 14264492
dataset_size: 19845835
- config_name: th
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32693750
num_examples: 537447
download_size: 15849247
dataset_size: 32693750
- config_name: ti
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4366995
num_examples: 146479
download_size: 2648869
dataset_size: 4366995
- config_name: tk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5797050
num_examples: 184302
download_size: 3728802
dataset_size: 5797050
- config_name: tl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13661554
num_examples: 387377
download_size: 9456413
dataset_size: 13661554
- config_name: tly
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4309748
num_examples: 145312
download_size: 2609307
dataset_size: 4309748
- config_name: tly-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35
num_examples: 1
download_size: 1793
dataset_size: 35
- config_name: tn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13936132
num_examples: 435219
download_size: 9780279
dataset_size: 13936132
- config_name: to
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13980327
num_examples: 436460
download_size: 9810650
dataset_size: 13980327
- config_name: tpi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14169019
num_examples: 442133
download_size: 9961827
dataset_size: 14169019
- config_name: tr
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 72134544
num_examples: 1770267
download_size: 51032484
dataset_size: 72134544
- config_name: tru
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5322844
num_examples: 171327
download_size: 3371105
dataset_size: 5322844
- config_name: trv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 94285
num_examples: 3109
download_size: 65138
dataset_size: 94285
- config_name: ts
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13943481
num_examples: 435408
download_size: 9783789
dataset_size: 13943481
- config_name: tt
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 24182976
num_examples: 548502
download_size: 14868166
dataset_size: 24182976
- config_name: tt-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4943914
num_examples: 158198
download_size: 3048932
dataset_size: 4943914
- config_name: tt-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13842972
num_examples: 432513
download_size: 9702714
dataset_size: 13842972
- config_name: tum
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13924159
num_examples: 435110
download_size: 9770501
dataset_size: 13924159
- config_name: tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13830508
num_examples: 432669
download_size: 9688164
dataset_size: 13830508
- config_name: ty
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 16816401
num_examples: 507332
download_size: 12098154
dataset_size: 16816401
- config_name: tyv
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4583082
num_examples: 149929
download_size: 2779632
dataset_size: 4583082
- config_name: tzm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253588
num_examples: 144002
download_size: 2569067
dataset_size: 4253588
- config_name: udm
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4854947
num_examples: 156300
download_size: 2958444
dataset_size: 4854947
- config_name: ug-arab
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4316690
num_examples: 145443
download_size: 2614962
dataset_size: 4316690
- config_name: ug-latn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13786474
num_examples: 431056
download_size: 9659723
dataset_size: 13786474
- config_name: uk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 251058352
num_examples: 5108733
download_size: 168140976
dataset_size: 251058352
- config_name: ur
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57063750
num_examples: 987011
download_size: 28328459
dataset_size: 57063750
- config_name: uz
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 11731793
num_examples: 344615
download_size: 8102734
dataset_size: 11731793
- config_name: uz-cyrl
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4252574
num_examples: 143981
download_size: 2567325
dataset_size: 4252574
- config_name: ve
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 13932174
num_examples: 435216
download_size: 9777266
dataset_size: 13932174
- config_name: vec
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 52081230
num_examples: 1466867
download_size: 37307805
dataset_size: 52081230
- config_name: vep
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6174898
num_examples: 192298
download_size: 3994582
dataset_size: 6174898
- config_name: vi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 246835524
num_examples: 5743737
download_size: 172949263
dataset_size: 246835524
- config_name: vls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42789297
num_examples: 1239359
download_size: 31228294
dataset_size: 42789297
- config_name: vmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 18352990
num_examples: 555205
download_size: 13289296
dataset_size: 18352990
- config_name: vo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 228352533
num_examples: 5610875
download_size: 155496988
dataset_size: 228352533
- config_name: vot
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5406190
num_examples: 173486
download_size: 3439433
dataset_size: 5406190
- config_name: wa
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 49235347
num_examples: 1426584
download_size: 36167816
dataset_size: 49235347
- config_name: war
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 190306474
num_examples: 4449062
download_size: 133786270
dataset_size: 190306474
- config_name: wls
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4033
num_examples: 104
download_size: 5150
dataset_size: 4033
- config_name: wo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40961626
num_examples: 1193626
download_size: 29778666
dataset_size: 40961626
- config_name: wuu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 40570130
num_examples: 1127741
download_size: 24209117
dataset_size: 40570130
- config_name: wya
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 28
num_examples: 1
download_size: 1740
dataset_size: 28
- config_name: xal
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4475344
num_examples: 149984
download_size: 2722459
dataset_size: 4475344
- config_name: xh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 20036194
num_examples: 615514
download_size: 14405310
dataset_size: 20036194
- config_name: xmf
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5943645
num_examples: 169507
download_size: 3418593
dataset_size: 5943645
- config_name: xsy
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4262789
num_examples: 144305
download_size: 2573349
dataset_size: 4262789
- config_name: yav
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4070
num_examples: 102
download_size: 4718
dataset_size: 4070
- config_name: yi
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 5495313
num_examples: 170277
download_size: 3373820
dataset_size: 5495313
- config_name: yo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 25424749
num_examples: 724345
download_size: 18086773
dataset_size: 25424749
- config_name: za
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15159230
num_examples: 365892
download_size: 7774767
dataset_size: 15159230
- config_name: zea
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 14538518
num_examples: 451577
download_size: 10262897
dataset_size: 14538518
- config_name: zgh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 4253917
num_examples: 144006
download_size: 2569373
dataset_size: 4253917
- config_name: zh
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 264353677
num_examples: 5424320
download_size: 174420118
dataset_size: 264353677
- config_name: zh-cn
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 42868611
num_examples: 1158755
download_size: 27243799
dataset_size: 42868611
- config_name: zh-hans
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 57233156
num_examples: 1483225
download_size: 36583522
dataset_size: 57233156
- config_name: zh-hant
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 53502814
num_examples: 1356560
download_size: 36755083
dataset_size: 53502814
- config_name: zh-hk
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 15325323
num_examples: 408391
download_size: 10455809
dataset_size: 15325323
- config_name: zh-mo
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 6568267
num_examples: 180950
download_size: 3547260
dataset_size: 6568267
- config_name: zh-my
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 32637498
num_examples: 916876
download_size: 19289581
dataset_size: 32637498
- config_name: zh-sg
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 35325327
num_examples: 979652
download_size: 21150070
dataset_size: 35325327
- config_name: zh-tw
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 17500668
num_examples: 443057
download_size: 11121104
dataset_size: 17500668
- config_name: zh-yue
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 1352
num_examples: 30
download_size: 2963
dataset_size: 1352
- config_name: zu
features:
- name: wikidata_id
dtype: string
- name: lastrevid
dtype: int64
- name: label
dtype: string
splits:
- name: label
num_bytes: 47349379
num_examples: 1380550
download_size: 34649660
dataset_size: 47349379
configs:
- config_name: aa
data_files:
- split: label
path: aa/label-*
- config_name: ab
data_files:
- split: label
path: ab/label-*
- config_name: abs
data_files:
- split: label
path: abs/label-*
- config_name: ace
data_files:
- split: label
path: ace/label-*
- config_name: ady
data_files:
- split: label
path: ady/label-*
- config_name: ady-cyrl
data_files:
- split: label
path: ady-cyrl/label-*
- config_name: aeb
data_files:
- split: label
path: aeb/label-*
- config_name: aeb-arab
data_files:
- split: label
path: aeb-arab/label-*
- config_name: aeb-latn
data_files:
- split: label
path: aeb-latn/label-*
- config_name: af
data_files:
- split: label
path: af/label-*
- config_name: agq
data_files:
- split: label
path: agq/label-*
- config_name: ak
data_files:
- split: label
path: ak/label-*
- config_name: aln
data_files:
- split: label
path: aln/label-*
- config_name: als
data_files:
- split: label
path: als/label-*
- config_name: alt
data_files:
- split: label
path: alt/label-*
- config_name: am
data_files:
- split: label
path: am/label-*
- config_name: ami
data_files:
- split: label
path: ami/label-*
- config_name: an
data_files:
- split: label
path: an/label-*
- config_name: ang
data_files:
- split: label
path: ang/label-*
- config_name: anp
data_files:
- split: label
path: anp/label-*
- config_name: ar
data_files:
- split: label
path: ar/label-*
- config_name: arc
data_files:
- split: label
path: arc/label-*
- config_name: arn
data_files:
- split: label
path: arn/label-*
- config_name: arq
data_files:
- split: label
path: arq/label-*
- config_name: ary
data_files:
- split: label
path: ary/label-*
- config_name: arz
data_files:
- split: label
path: arz/label-*
- config_name: as
data_files:
- split: label
path: as/label-*
- config_name: ase
data_files:
- split: label
path: ase/label-*
- config_name: ast
data_files:
- split: label
path: ast/label-*
- config_name: atj
data_files:
- split: label
path: atj/label-*
- config_name: av
data_files:
- split: label
path: av/label-*
- config_name: avk
data_files:
- split: label
path: avk/label-*
- config_name: awa
data_files:
- split: label
path: awa/label-*
- config_name: ay
data_files:
- split: label
path: ay/label-*
- config_name: az
data_files:
- split: label
path: az/label-*
- config_name: azb
data_files:
- split: label
path: azb/label-*
- config_name: ba
data_files:
- split: label
path: ba/label-*
- config_name: ban
data_files:
- split: label
path: ban/label-*
- config_name: ban-bali
data_files:
- split: label
path: ban-bali/label-*
- config_name: bar
data_files:
- split: label
path: bar/label-*
- config_name: bbc
data_files:
- split: label
path: bbc/label-*
- config_name: bcc
data_files:
- split: label
path: bcc/label-*
- config_name: be
data_files:
- split: label
path: be/label-*
- config_name: be-tarask
data_files:
- split: label
path: be-tarask/label-*
- config_name: bg
data_files:
- split: label
path: bg/label-*
- config_name: bgn
data_files:
- split: label
path: bgn/label-*
- config_name: bi
data_files:
- split: label
path: bi/label-*
- config_name: bjn
data_files:
- split: label
path: bjn/label-*
- config_name: bm
data_files:
- split: label
path: bm/label-*
- config_name: bn
data_files:
- split: label
path: bn/label-*
- config_name: bo
data_files:
- split: label
path: bo/label-*
- config_name: bpy
data_files:
- split: label
path: bpy/label-*
- config_name: bqi
data_files:
- split: label
path: bqi/label-*
- config_name: br
data_files:
- split: label
path: br/label-*
- config_name: brh
data_files:
- split: label
path: brh/label-*
- config_name: bs
data_files:
- split: label
path: bs/label-*
- config_name: btm
data_files:
- split: label
path: btm/label-*
- config_name: bto
data_files:
- split: label
path: bto/label-*
- config_name: bug
data_files:
- split: label
path: bug/label-*
- config_name: bxr
data_files:
- split: label
path: bxr/label-*
- config_name: ca
data_files:
- split: label
path: ca/label-*
- config_name: cbk-zam
data_files:
- split: label
path: cbk-zam/label-*
- config_name: cdo
data_files:
- split: label
path: cdo/label-*
- config_name: ce
data_files:
- split: label
path: ce/label-*
- config_name: ceb
data_files:
- split: label
path: ceb/label-*
- config_name: ch
data_files:
- split: label
path: ch/label-*
- config_name: cho
data_files:
- split: label
path: cho/label-*
- config_name: chr
data_files:
- split: label
path: chr/label-*
- config_name: chy
data_files:
- split: label
path: chy/label-*
- config_name: ckb
data_files:
- split: label
path: ckb/label-*
- config_name: co
data_files:
- split: label
path: co/label-*
- config_name: cps
data_files:
- split: label
path: cps/label-*
- config_name: cr
data_files:
- split: label
path: cr/label-*
- config_name: crh
data_files:
- split: label
path: crh/label-*
- config_name: crh-cyrl
data_files:
- split: label
path: crh-cyrl/label-*
- config_name: crh-latn
data_files:
- split: label
path: crh-latn/label-*
- config_name: cs
data_files:
- split: label
path: cs/label-*
- config_name: csb
data_files:
- split: label
path: csb/label-*
- config_name: cv
data_files:
- split: label
path: cv/label-*
- config_name: cy
data_files:
- split: label
path: cy/label-*
- config_name: da
data_files:
- split: label
path: da/label-*
- config_name: dag
data_files:
- split: label
path: dag/label-*
- config_name: de
data_files:
- split: label
path: de/label-*
- config_name: de-at
data_files:
- split: label
path: de-at/label-*
- config_name: de-ch
data_files:
- split: label
path: de-ch/label-*
- config_name: de-formal
data_files:
- split: label
path: de-formal/label-*
- config_name: din
data_files:
- split: label
path: din/label-*
- config_name: diq
data_files:
- split: label
path: diq/label-*
- config_name: dsb
data_files:
- split: label
path: dsb/label-*
- config_name: dtp
data_files:
- split: label
path: dtp/label-*
- config_name: dty
data_files:
- split: label
path: dty/label-*
- config_name: dua
data_files:
- split: label
path: dua/label-*
- config_name: dv
data_files:
- split: label
path: dv/label-*
- config_name: dz
data_files:
- split: label
path: dz/label-*
- config_name: ee
data_files:
- split: label
path: ee/label-*
- config_name: egl
data_files:
- split: label
path: egl/label-*
- config_name: el
data_files:
- split: label
path: el/label-*
- config_name: eml
data_files:
- split: label
path: eml/label-*
- config_name: en
data_files:
- split: label
path: en/label-*
default: true
- config_name: en-ca
data_files:
- split: label
path: en-ca/label-*
- config_name: en-gb
data_files:
- split: label
path: en-gb/label-*
- config_name: en-us
data_files:
- split: label
path: en-us/label-*
- config_name: eo
data_files:
- split: label
path: eo/label-*
- config_name: es
data_files:
- split: label
path: es/label-*
- config_name: es-419
data_files:
- split: label
path: es-419/label-*
- config_name: es-formal
data_files:
- split: label
path: es-formal/label-*
- config_name: et
data_files:
- split: label
path: et/label-*
- config_name: eu
data_files:
- split: label
path: eu/label-*
- config_name: ext
data_files:
- split: label
path: ext/label-*
- config_name: fa
data_files:
- split: label
path: fa/label-*
- config_name: ff
data_files:
- split: label
path: ff/label-*
- config_name: fi
data_files:
- split: label
path: fi/label-*
- config_name: fit
data_files:
- split: label
path: fit/label-*
- config_name: fj
data_files:
- split: label
path: fj/label-*
- config_name: fkv
data_files:
- split: label
path: fkv/label-*
- config_name: fo
data_files:
- split: label
path: fo/label-*
- config_name: fr
data_files:
- split: label
path: fr/label-*
- config_name: frc
data_files:
- split: label
path: frc/label-*
- config_name: frp
data_files:
- split: label
path: frp/label-*
- config_name: frr
data_files:
- split: label
path: frr/label-*
- config_name: fur
data_files:
- split: label
path: fur/label-*
- config_name: ga
data_files:
- split: label
path: ga/label-*
- config_name: gag
data_files:
- split: label
path: gag/label-*
- config_name: gan
data_files:
- split: label
path: gan/label-*
- config_name: gan-hans
data_files:
- split: label
path: gan-hans/label-*
- config_name: gan-hant
data_files:
- split: label
path: gan-hant/label-*
- config_name: gcr
data_files:
- split: label
path: gcr/label-*
- config_name: gd
data_files:
- split: label
path: gd/label-*
- config_name: gl
data_files:
- split: label
path: gl/label-*
- config_name: glk
data_files:
- split: label
path: glk/label-*
- config_name: gn
data_files:
- split: label
path: gn/label-*
- config_name: gom
data_files:
- split: label
path: gom/label-*
- config_name: gom-deva
data_files:
- split: label
path: gom-deva/label-*
- config_name: gom-latn
data_files:
- split: label
path: gom-latn/label-*
- config_name: gor
data_files:
- split: label
path: gor/label-*
- config_name: got
data_files:
- split: label
path: got/label-*
- config_name: grc
data_files:
- split: label
path: grc/label-*
- config_name: gu
data_files:
- split: label
path: gu/label-*
- config_name: guc
data_files:
- split: label
path: guc/label-*
- config_name: guw
data_files:
- split: label
path: guw/label-*
- config_name: gv
data_files:
- split: label
path: gv/label-*
- config_name: ha
data_files:
- split: label
path: ha/label-*
- config_name: hak
data_files:
- split: label
path: hak/label-*
- config_name: haw
data_files:
- split: label
path: haw/label-*
- config_name: he
data_files:
- split: label
path: he/label-*
- config_name: hi
data_files:
- split: label
path: hi/label-*
- config_name: hif
data_files:
- split: label
path: hif/label-*
- config_name: hif-latn
data_files:
- split: label
path: hif-latn/label-*
- config_name: hil
data_files:
- split: label
path: hil/label-*
- config_name: ho
data_files:
- split: label
path: ho/label-*
- config_name: hr
data_files:
- split: label
path: hr/label-*
- config_name: hrx
data_files:
- split: label
path: hrx/label-*
- config_name: hsb
data_files:
- split: label
path: hsb/label-*
- config_name: ht
data_files:
- split: label
path: ht/label-*
- config_name: hu
data_files:
- split: label
path: hu/label-*
- config_name: hu-formal
data_files:
- split: label
path: hu-formal/label-*
- config_name: hy
data_files:
- split: label
path: hy/label-*
- config_name: hyw
data_files:
- split: label
path: hyw/label-*
- config_name: hz
data_files:
- split: label
path: hz/label-*
- config_name: ia
data_files:
- split: label
path: ia/label-*
- config_name: id
data_files:
- split: label
path: id/label-*
- config_name: ie
data_files:
- split: label
path: ie/label-*
- config_name: ig
data_files:
- split: label
path: ig/label-*
- config_name: ii
data_files:
- split: label
path: ii/label-*
- config_name: ik
data_files:
- split: label
path: ik/label-*
- config_name: ike-cans
data_files:
- split: label
path: ike-cans/label-*
- config_name: ike-latn
data_files:
- split: label
path: ike-latn/label-*
- config_name: ilo
data_files:
- split: label
path: ilo/label-*
- config_name: inh
data_files:
- split: label
path: inh/label-*
- config_name: io
data_files:
- split: label
path: io/label-*
- config_name: is
data_files:
- split: label
path: is/label-*
- config_name: it
data_files:
- split: label
path: it/label-*
- config_name: iu
data_files:
- split: label
path: iu/label-*
- config_name: ja
data_files:
- split: label
path: ja/label-*
- config_name: jam
data_files:
- split: label
path: jam/label-*
- config_name: jbo
data_files:
- split: label
path: jbo/label-*
- config_name: jv
data_files:
- split: label
path: jv/label-*
- config_name: ka
data_files:
- split: label
path: ka/label-*
- config_name: kaa
data_files:
- split: label
path: kaa/label-*
- config_name: kab
data_files:
- split: label
path: kab/label-*
- config_name: kbd
data_files:
- split: label
path: kbd/label-*
- config_name: kbd-cyrl
data_files:
- split: label
path: kbd-cyrl/label-*
- config_name: kbp
data_files:
- split: label
path: kbp/label-*
- config_name: kea
data_files:
- split: label
path: kea/label-*
- config_name: kg
data_files:
- split: label
path: kg/label-*
- config_name: khw
data_files:
- split: label
path: khw/label-*
- config_name: ki
data_files:
- split: label
path: ki/label-*
- config_name: kj
data_files:
- split: label
path: kj/label-*
- config_name: kjp
data_files:
- split: label
path: kjp/label-*
- config_name: kk
data_files:
- split: label
path: kk/label-*
- config_name: kk-arab
data_files:
- split: label
path: kk-arab/label-*
- config_name: kk-kz
data_files:
- split: label
path: kk-kz/label-*
- config_name: kk-latn
data_files:
- split: label
path: kk-latn/label-*
- config_name: kk-tr
data_files:
- split: label
path: kk-tr/label-*
- config_name: ko
data_files:
- split: label
path: ko/label-*
- config_name: ko-kp
data_files:
- split: label
path: ko-kp/label-*
- config_name: koi
data_files:
- split: label
path: koi/label-*
- config_name: kr
data_files:
- split: label
path: kr/label-*
- config_name: krc
data_files:
- split: label
path: krc/label-*
- config_name: kri
data_files:
- split: label
path: kri/label-*
- config_name: krj
data_files:
- split: label
path: krj/label-*
- config_name: krl
data_files:
- split: label
path: krl/label-*
- config_name: ks
data_files:
- split: label
path: ks/label-*
- config_name: ks-deva
data_files:
- split: label
path: ks-deva/label-*
- config_name: ksh
data_files:
- split: label
path: ksh/label-*
- config_name: ku
data_files:
- split: label
path: ku/label-*
- config_name: ku-arab
data_files:
- split: label
path: ku-arab/label-*
- config_name: ku-latn
data_files:
- split: label
path: ku-latn/label-*
- config_name: kum
data_files:
- split: label
path: kum/label-*
- config_name: kv
data_files:
- split: label
path: kv/label-*
- config_name: kw
data_files:
- split: label
path: kw/label-*
- config_name: ky
data_files:
- split: label
path: ky/label-*
- config_name: la
data_files:
- split: label
path: la/label-*
- config_name: lad
data_files:
- split: label
path: lad/label-*
- config_name: lb
data_files:
- split: label
path: lb/label-*
- config_name: lbe
data_files:
- split: label
path: lbe/label-*
- config_name: lez
data_files:
- split: label
path: lez/label-*
- config_name: lfn
data_files:
- split: label
path: lfn/label-*
- config_name: lg
data_files:
- split: label
path: lg/label-*
- config_name: li
data_files:
- split: label
path: li/label-*
- config_name: lij
data_files:
- split: label
path: lij/label-*
- config_name: liv
data_files:
- split: label
path: liv/label-*
- config_name: lki
data_files:
- split: label
path: lki/label-*
- config_name: lld
data_files:
- split: label
path: lld/label-*
- config_name: lmo
data_files:
- split: label
path: lmo/label-*
- config_name: ln
data_files:
- split: label
path: ln/label-*
- config_name: lo
data_files:
- split: label
path: lo/label-*
- config_name: loz
data_files:
- split: label
path: loz/label-*
- config_name: lt
data_files:
- split: label
path: lt/label-*
- config_name: ltg
data_files:
- split: label
path: ltg/label-*
- config_name: lus
data_files:
- split: label
path: lus/label-*
- config_name: luz
data_files:
- split: label
path: luz/label-*
- config_name: lv
data_files:
- split: label
path: lv/label-*
- config_name: lzh
data_files:
- split: label
path: lzh/label-*
- config_name: mdf
data_files:
- split: label
path: mdf/label-*
- config_name: mg
data_files:
- split: label
path: mg/label-*
- config_name: mh
data_files:
- split: label
path: mh/label-*
- config_name: mi
data_files:
- split: label
path: mi/label-*
- config_name: min
data_files:
- split: label
path: min/label-*
- config_name: mk
data_files:
- split: label
path: mk/label-*
- config_name: ml
data_files:
- split: label
path: ml/label-*
- config_name: mn
data_files:
- split: label
path: mn/label-*
- config_name: mni
data_files:
- split: label
path: mni/label-*
- config_name: mnw
data_files:
- split: label
path: mnw/label-*
- config_name: mo
data_files:
- split: label
path: mo/label-*
- config_name: mr
data_files:
- split: label
path: mr/label-*
- config_name: mrh
data_files:
- split: label
path: mrh/label-*
- config_name: mrj
data_files:
- split: label
path: mrj/label-*
- config_name: ms
data_files:
- split: label
path: ms/label-*
- config_name: ms-arab
data_files:
- split: label
path: ms-arab/label-*
- config_name: mt
data_files:
- split: label
path: mt/label-*
- config_name: mus
data_files:
- split: label
path: mus/label-*
- config_name: mwl
data_files:
- split: label
path: mwl/label-*
- config_name: my
data_files:
- split: label
path: my/label-*
- config_name: mzn
data_files:
- split: label
path: mzn/label-*
- config_name: na
data_files:
- split: label
path: na/label-*
- config_name: nah
data_files:
- split: label
path: nah/label-*
- config_name: nan-hani
data_files:
- split: label
path: nan-hani/label-*
- config_name: nap
data_files:
- split: label
path: nap/label-*
- config_name: nb
data_files:
- split: label
path: nb/label-*
- config_name: nds
data_files:
- split: label
path: nds/label-*
- config_name: nds-nl
data_files:
- split: label
path: nds-nl/label-*
- config_name: ne
data_files:
- split: label
path: ne/label-*
- config_name: new
data_files:
- split: label
path: new/label-*
- config_name: ng
data_files:
- split: label
path: ng/label-*
- config_name: nia
data_files:
- split: label
path: nia/label-*
- config_name: niu
data_files:
- split: label
path: niu/label-*
- config_name: nl
data_files:
- split: label
path: nl/label-*
- config_name: nn
data_files:
- split: label
path: nn/label-*
- config_name: 'no'
data_files:
- split: label
path: no/label-*
- config_name: nod
data_files:
- split: label
path: nod/label-*
- config_name: nov
data_files:
- split: label
path: nov/label-*
- config_name: nqo
data_files:
- split: label
path: nqo/label-*
- config_name: nrm
data_files:
- split: label
path: nrm/label-*
- config_name: nso
data_files:
- split: label
path: nso/label-*
- config_name: nv
data_files:
- split: label
path: nv/label-*
- config_name: ny
data_files:
- split: label
path: ny/label-*
- config_name: nys
data_files:
- split: label
path: nys/label-*
- config_name: oc
data_files:
- split: label
path: oc/label-*
- config_name: olo
data_files:
- split: label
path: olo/label-*
- config_name: om
data_files:
- split: label
path: om/label-*
- config_name: or
data_files:
- split: label
path: or/label-*
- config_name: os
data_files:
- split: label
path: os/label-*
- config_name: ota
data_files:
- split: label
path: ota/label-*
- config_name: pa
data_files:
- split: label
path: pa/label-*
- config_name: pam
data_files:
- split: label
path: pam/label-*
- config_name: pap
data_files:
- split: label
path: pap/label-*
- config_name: pcd
data_files:
- split: label
path: pcd/label-*
- config_name: pdc
data_files:
- split: label
path: pdc/label-*
- config_name: pdt
data_files:
- split: label
path: pdt/label-*
- config_name: pfl
data_files:
- split: label
path: pfl/label-*
- config_name: pi
data_files:
- split: label
path: pi/label-*
- config_name: pih
data_files:
- split: label
path: pih/label-*
- config_name: pl
data_files:
- split: label
path: pl/label-*
- config_name: pms
data_files:
- split: label
path: pms/label-*
- config_name: pnb
data_files:
- split: label
path: pnb/label-*
- config_name: pnt
data_files:
- split: label
path: pnt/label-*
- config_name: prg
data_files:
- split: label
path: prg/label-*
- config_name: ps
data_files:
- split: label
path: ps/label-*
- config_name: pt
data_files:
- split: label
path: pt/label-*
- config_name: pt-br
data_files:
- split: label
path: pt-br/label-*
- config_name: pwn
data_files:
- split: label
path: pwn/label-*
- config_name: qu
data_files:
- split: label
path: qu/label-*
- config_name: quc
data_files:
- split: label
path: quc/label-*
- config_name: qug
data_files:
- split: label
path: qug/label-*
- config_name: rgn
data_files:
- split: label
path: rgn/label-*
- config_name: rif
data_files:
- split: label
path: rif/label-*
- config_name: rm
data_files:
- split: label
path: rm/label-*
- config_name: rmc
data_files:
- split: label
path: rmc/label-*
- config_name: rmy
data_files:
- split: label
path: rmy/label-*
- config_name: rn
data_files:
- split: label
path: rn/label-*
- config_name: ro
data_files:
- split: label
path: ro/label-*
- config_name: roa-tara
data_files:
- split: label
path: roa-tara/label-*
- config_name: ru
data_files:
- split: label
path: ru/label-*
- config_name: rue
data_files:
- split: label
path: rue/label-*
- config_name: rup
data_files:
- split: label
path: rup/label-*
- config_name: ruq-cyrl
data_files:
- split: label
path: ruq-cyrl/label-*
- config_name: ruq-latn
data_files:
- split: label
path: ruq-latn/label-*
- config_name: rw
data_files:
- split: label
path: rw/label-*
- config_name: rwr
data_files:
- split: label
path: rwr/label-*
- config_name: ryu
data_files:
- split: label
path: ryu/label-*
- config_name: sa
data_files:
- split: label
path: sa/label-*
- config_name: sat
data_files:
- split: label
path: sat/label-*
- config_name: sc
data_files:
- split: label
path: sc/label-*
- config_name: scn
data_files:
- split: label
path: scn/label-*
- config_name: sco
data_files:
- split: label
path: sco/label-*
- config_name: sd
data_files:
- split: label
path: sd/label-*
- config_name: sdc
data_files:
- split: label
path: sdc/label-*
- config_name: se
data_files:
- split: label
path: se/label-*
- config_name: sei
data_files:
- split: label
path: sei/label-*
- config_name: sg
data_files:
- split: label
path: sg/label-*
- config_name: sh
data_files:
- split: label
path: sh/label-*
- config_name: shi-latn
data_files:
- split: label
path: shi-latn/label-*
- config_name: shi-tfng
data_files:
- split: label
path: shi-tfng/label-*
- config_name: shn
data_files:
- split: label
path: shn/label-*
- config_name: shy-latn
data_files:
- split: label
path: shy-latn/label-*
- config_name: si
data_files:
- split: label
path: si/label-*
- config_name: sjd
data_files:
- split: label
path: sjd/label-*
- config_name: sje
data_files:
- split: label
path: sje/label-*
- config_name: sju
data_files:
- split: label
path: sju/label-*
- config_name: sk
data_files:
- split: label
path: sk/label-*
- config_name: skr
data_files:
- split: label
path: skr/label-*
- config_name: sl
data_files:
- split: label
path: sl/label-*
- config_name: sli
data_files:
- split: label
path: sli/label-*
- config_name: sm
data_files:
- split: label
path: sm/label-*
- config_name: sma
data_files:
- split: label
path: sma/label-*
- config_name: smj
data_files:
- split: label
path: smj/label-*
- config_name: smn
data_files:
- split: label
path: smn/label-*
- config_name: sms
data_files:
- split: label
path: sms/label-*
- config_name: sn
data_files:
- split: label
path: sn/label-*
- config_name: sq
data_files:
- split: label
path: sq/label-*
- config_name: sr
data_files:
- split: label
path: sr/label-*
- config_name: sr-ec
data_files:
- split: label
path: sr-ec/label-*
- config_name: sr-el
data_files:
- split: label
path: sr-el/label-*
- config_name: srq
data_files:
- split: label
path: srq/label-*
- config_name: ss
data_files:
- split: label
path: ss/label-*
- config_name: st
data_files:
- split: label
path: st/label-*
- config_name: stq
data_files:
- split: label
path: stq/label-*
- config_name: su
data_files:
- split: label
path: su/label-*
- config_name: sv
data_files:
- split: label
path: sv/label-*
- config_name: sw
data_files:
- split: label
path: sw/label-*
- config_name: szl
data_files:
- split: label
path: szl/label-*
- config_name: szy
data_files:
- split: label
path: szy/label-*
- config_name: ta
data_files:
- split: label
path: ta/label-*
- config_name: tay
data_files:
- split: label
path: tay/label-*
- config_name: tcy
data_files:
- split: label
path: tcy/label-*
- config_name: te
data_files:
- split: label
path: te/label-*
- config_name: tet
data_files:
- split: label
path: tet/label-*
- config_name: tg
data_files:
- split: label
path: tg/label-*
- config_name: tg-cyrl
data_files:
- split: label
path: tg-cyrl/label-*
- config_name: tg-latn
data_files:
- split: label
path: tg-latn/label-*
- config_name: th
data_files:
- split: label
path: th/label-*
- config_name: ti
data_files:
- split: label
path: ti/label-*
- config_name: tk
data_files:
- split: label
path: tk/label-*
- config_name: tl
data_files:
- split: label
path: tl/label-*
- config_name: tly
data_files:
- split: label
path: tly/label-*
- config_name: tly-cyrl
data_files:
- split: label
path: tly-cyrl/label-*
- config_name: tn
data_files:
- split: label
path: tn/label-*
- config_name: to
data_files:
- split: label
path: to/label-*
- config_name: tpi
data_files:
- split: label
path: tpi/label-*
- config_name: tr
data_files:
- split: label
path: tr/label-*
- config_name: tru
data_files:
- split: label
path: tru/label-*
- config_name: trv
data_files:
- split: label
path: trv/label-*
- config_name: ts
data_files:
- split: label
path: ts/label-*
- config_name: tt
data_files:
- split: label
path: tt/label-*
- config_name: tt-cyrl
data_files:
- split: label
path: tt-cyrl/label-*
- config_name: tt-latn
data_files:
- split: label
path: tt-latn/label-*
- config_name: tum
data_files:
- split: label
path: tum/label-*
- config_name: tw
data_files:
- split: label
path: tw/label-*
- config_name: ty
data_files:
- split: label
path: ty/label-*
- config_name: tyv
data_files:
- split: label
path: tyv/label-*
- config_name: tzm
data_files:
- split: label
path: tzm/label-*
- config_name: udm
data_files:
- split: label
path: udm/label-*
- config_name: ug-arab
data_files:
- split: label
path: ug-arab/label-*
- config_name: ug-latn
data_files:
- split: label
path: ug-latn/label-*
- config_name: uk
data_files:
- split: label
path: uk/label-*
- config_name: ur
data_files:
- split: label
path: ur/label-*
- config_name: uz
data_files:
- split: label
path: uz/label-*
- config_name: uz-cyrl
data_files:
- split: label
path: uz-cyrl/label-*
- config_name: ve
data_files:
- split: label
path: ve/label-*
- config_name: vec
data_files:
- split: label
path: vec/label-*
- config_name: vep
data_files:
- split: label
path: vep/label-*
- config_name: vi
data_files:
- split: label
path: vi/label-*
- config_name: vls
data_files:
- split: label
path: vls/label-*
- config_name: vmf
data_files:
- split: label
path: vmf/label-*
- config_name: vo
data_files:
- split: label
path: vo/label-*
- config_name: vot
data_files:
- split: label
path: vot/label-*
- config_name: wa
data_files:
- split: label
path: wa/label-*
- config_name: war
data_files:
- split: label
path: war/label-*
- config_name: wls
data_files:
- split: label
path: wls/label-*
- config_name: wo
data_files:
- split: label
path: wo/label-*
- config_name: wuu
data_files:
- split: label
path: wuu/label-*
- config_name: wya
data_files:
- split: label
path: wya/label-*
- config_name: xal
data_files:
- split: label
path: xal/label-*
- config_name: xh
data_files:
- split: label
path: xh/label-*
- config_name: xmf
data_files:
- split: label
path: xmf/label-*
- config_name: xsy
data_files:
- split: label
path: xsy/label-*
- config_name: yav
data_files:
- split: label
path: yav/label-*
- config_name: yi
data_files:
- split: label
path: yi/label-*
- config_name: yo
data_files:
- split: label
path: yo/label-*
- config_name: za
data_files:
- split: label
path: za/label-*
- config_name: zea
data_files:
- split: label
path: zea/label-*
- config_name: zgh
data_files:
- split: label
path: zgh/label-*
- config_name: zh
data_files:
- split: label
path: zh/label-*
- config_name: zh-cn
data_files:
- split: label
path: zh-cn/label-*
- config_name: zh-hans
data_files:
- split: label
path: zh-hans/label-*
- config_name: zh-hant
data_files:
- split: label
path: zh-hant/label-*
- config_name: zh-hk
data_files:
- split: label
path: zh-hk/label-*
- config_name: zh-mo
data_files:
- split: label
path: zh-mo/label-*
- config_name: zh-my
data_files:
- split: label
path: zh-my/label-*
- config_name: zh-sg
data_files:
- split: label
path: zh-sg/label-*
- config_name: zh-tw
data_files:
- split: label
path: zh-tw/label-*
- config_name: zh-yue
data_files:
- split: label
path: zh-yue/label-*
- config_name: zu
data_files:
- split: label
path: zu/label-*
task_categories:
- translation
- text2text-generation
language:
- en
- fr
- de
- ja
- zh
- hi
- ar
- bn
- ru
- es
---
# Wikidata Labels
Large parallel corpus for machine translation
- Entity label data extracted from Wikidata (2022-01-03), filtered for item entities only
- Only download the languages you need with `datasets>=2.14.0`
- Similar dataset: https://huggingface.co./datasets/wmt/wikititles (18 language pairs of Wikipedia titles, rather than all Wikidata entities)
## Dataset Details
### Dataset Sources
- Wikidata JSON dump (wikidata-20220103-all.json.gz) https://www.wikidata.org/wiki/Wikidata:Database_download
## Uses
You can generate parallel text examples from this dataset like below:
```python
from datasets import load_dataset
import pandas as pd
def parallel_labels(lang_codes: list, how="inner", repo_id="rayliuca/wikidata_entity_label", merge_config={}, datasets_config={}) -> pd.DataFrame:
out_df = None
for lc in lang_codes:
dataset = load_dataset(repo_id, lc, **datasets_config)
dataset_df = dataset['label'].to_pandas().rename(columns={"label":lc}).drop(columns=['lastrevid'])
if out_df is None:
out_df = dataset_df
else:
out_df = out_df.merge(
dataset_df,
on='wikidata_id',
how=how,
**merge_config
)
return out_df
# Note: the "en" subset is >4GB
parallel_labels(['en', 'fr', 'ja', 'zh']).head()
```
### Output
| | wikidata_id | en | fr | ja | zh |
|---:|:--------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------|:---------------------------------------------|
| 0 | Q109739412 | SARS-CoV-2 Omicron variant | variant Omicron du SARS-CoV-2 | SARSコロナウイルス2-オミクロン株 | 嚴重急性呼吸道症候群冠狀病毒2型Omicron變異株 |
| 1 | Q108460606 | Ulughbegsaurus | Ulughbegsaurus | ウルグベグサウルス | 兀魯伯龍屬 |
| 2 | Q108556886 | AUKUS | AUKUS | AUKUS | AUKUS |
| 3 | Q106496152 | Claude Joseph | Claude Joseph | クロード・ジョゼフ | 克洛德·约瑟夫 |
| 4 | Q105519361 | The World's Finest Assassin Gets Reincarnated in a Different World as an Aristocrat | The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat | 世界最高の暗殺者、異世界貴族に転生する | 世界頂尖的暗殺者轉生為異世界貴族 |
Note: the example table above shows a quirk of the Wikidata labels. The French Wikipedia page [The World's Finest Assassin Gets Reincarnated in Another World as an Aristocrat](https://fr.wikipedia.org/wiki/The_World%27s_Finest_Assassin_Gets_Reincarnated_in_Another_World_as_an_Aristocrat) uses English for its title. While this could be disadvantageous for direct translation training, it also provides insight into what native speakers actually call an entity, as opposed to a literal translation of the page title.
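A side note on the `how` parameter used in `parallel_labels` above: with `how="inner"`, any entity missing a label in one of the requested languages is dropped from the result. A minimal sketch with stand-in data (the ids and labels below are made up for illustration):

```python
import pandas as pd

# Stand-in frames mimicking two language subsets keyed by wikidata_id
en = pd.DataFrame({"wikidata_id": ["Q1", "Q2", "Q3"], "en": ["universe", "Earth", "life"]})
fr = pd.DataFrame({"wikidata_id": ["Q1", "Q3"], "fr": ["univers", "vie"]}) if False else pd.DataFrame({"wikidata_id": ["Q1", "Q3"], "fr": ["univers", "vie"]})

inner = en.merge(fr, on="wikidata_id", how="inner")  # drops Q2, which has no fr label
outer = en.merge(fr, on="wikidata_id", how="outer")  # keeps Q2 with a NaN fr label

print(len(inner), len(outer))  # 2 3
```

`how="outer"` keeps every entity at the cost of missing values, which may suit coverage analysis better than training data extraction.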
## Dataset Structure
Each language has its own subset (aka config), which means you only have to download the languages you need with `datasets>=2.14.0`
Each subset has these fields:
- wikidata_id
- lastrevid
- label
## Dataset Creation
#### Data Collection and Processing
- Filtered for item entities only
- Ignored the descriptions as those texts are not very parallel
## Bias, Risks, and Limitations
- Might be slightly outdated (2022)
- Popular languages have more entries
- Labels are not guaranteed to be literal translations (see examples above) |
Samsung/samsum | Samsung | "2024-01-18T11:15:13Z" | 11,310 | 319 | [
"task_categories:summarization",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-nd-4.0",
"size_categories:10K<n<100K",
"arxiv:1911.12237",
"region:us",
"conversations-summarization"
] | [
"summarization"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: samsum-corpus
pretty_name: SAMSum Corpus
tags:
- conversations-summarization
dataset_info:
features:
- name: id
dtype: string
- name: dialogue
dtype: string
- name: summary
dtype: string
config_name: samsum
splits:
- name: train
num_bytes: 9479141
num_examples: 14732
- name: test
num_bytes: 534492
num_examples: 819
- name: validation
num_bytes: 516431
num_examples: 818
download_size: 2944100
dataset_size: 10530064
train-eval-index:
- config: samsum
task: summarization
task_id: summarization
splits:
eval_split: test
col_mapping:
dialogue: text
summary: target
---
# Dataset Card for SAMSum Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://arxiv.org/abs/1911.12237v2
- **Repository:** [Needs More Information]
- **Paper:** https://arxiv.org/abs/1911.12237v2
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
The SAMSum dataset contains about 16k messenger-like conversations with summaries. Conversations were created and written down by linguists fluent in English. Linguists were asked to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. The style and register are diversified - conversations could be informal, semi-formal or formal, they may contain slang words, emoticons and typos. Then, the conversations were annotated with summaries. It was assumed that summaries should be a concise brief of what people talked about in the conversation in third person.
The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes (non-commercial licence: CC BY-NC-ND 4.0).
### Supported Tasks and Leaderboards
[Needs More Information]
### Languages
English
## Dataset Structure
### Data Instances
The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in conversations: 3-6, 7-12, 13-18 and 19-30. Each utterance contains the name of the speaker. Most conversations consist of dialogues between two interlocutors (about 75% of all conversations); the rest are between three or more people.
The first instance in the training set:
```
{'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
```
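As the instance above shows, utterances in the `dialogue` field are separated by `\r\n`, with the speaker name before the first colon. A minimal sketch of splitting that dialogue into (speaker, utterance) pairs:

```python
# Example dialogue taken from the first training instance shown above
dialogue = "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"

turns = []
for line in dialogue.split("\r\n"):
    # partition keeps everything after the first ": " intact,
    # so colons inside the utterance itself are preserved
    speaker, _, utterance = line.partition(": ")
    turns.append((speaker, utterance))

print(turns[0])  # ('Amanda', 'I baked cookies. Do you want some?')
```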
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- id: unique id of an example.
### Data Splits
- train: 14732
- val: 818
- test: 819
## Dataset Creation
### Curation Rationale
In paper:
> In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
### Source Data
#### Initial Data Collection and Normalization
In paper:
> We asked linguists to create conversations similar to those they write on a daily basis, reflecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, discussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora.
#### Who are the source language producers?
linguists
### Annotations
#### Annotation process
In paper:
> Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
#### Who are the annotators?
language experts
### Personal and Sensitive Information
None, see above: Initial Data Collection and Normalization
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
non-commercial licence: CC BY-NC-ND 4.0
### Citation Information
```
@inproceedings{gliwa-etal-2019-samsum,
title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
author = "Gliwa, Bogdan and
Mochol, Iwona and
Biesek, Maciej and
Wawer, Aleksander",
booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
month = nov,
year = "2019",
address = "Hong Kong, China",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D19-5409",
doi = "10.18653/v1/D19-5409",
pages = "70--79"
}
```
### Contributions
Thanks to [@cccntu](https://github.com/cccntu) for adding this dataset. |
lmms-lab/MMMU | lmms-lab | "2024-03-08T05:09:42Z" | 11,269 | 4 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-15T06:32:16Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 57719107.0
num_examples: 150
- name: validation
num_bytes: 347519954.0
num_examples: 900
- name: test
num_bytes: 3271046267.0
num_examples: 10500
download_size: 3377778136
dataset_size: 3676285328.0
configs:
- config_name: default
data_files:
- split: dev
path: data/dev-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
This is a merged version of [MMMU/MMMU](https://huggingface.co./datasets/MMMU/MMMU) with all subsets concatenated.
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co./lmms-lab)
# This Dataset
This is a formatted version of [MMMU](https://github.com/MMMU-Benchmark/MMMU). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
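Records in this merged card follow the flat schema in the YAML above, with up to seven optional image slots (`image_1` … `image_7`). A minimal, hypothetical helper for collecting the images that are actually present (field names come from the `dataset_info` block; the sample dict is an illustrative stand-in, not a real record):

```python
# Collect the non-empty image fields of an MMMU record.
# Real examples carry PIL images; the placeholder string below is a stand-in.
def present_images(record):
    images = []
    for i in range(1, 8):  # image_1 .. image_7 per the schema above
        img = record.get(f"image_{i}")
        if img is not None:
            images.append(img)
    return images

example = {
    "id": "validation_Art_1",
    "question": "Which painting is shown in <image 1>?",
    "image_1": "<PIL.Image>",
    "image_2": None,
    "answer": "A",
}
print(len(present_images(example)))  # → 1
```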
```
@article{yue2023mmmu,
  title={{MMMU}: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert {AGI}},
author={Yue, Xiang and Ni, Yuansheng and Zhang, Kai and Zheng, Tianyu and Liu, Ruoqi and Zhang, Ge and Stevens, Samuel and Jiang, Dongfu and Ren, Weiming and Sun, Yuxuan and others},
journal={arXiv preprint arXiv:2311.16502},
year={2023}
}
``` |
HuggingFaceH4/ultrachat_200k | HuggingFaceH4 | "2024-10-16T11:52:27Z" | 11,176 | 494 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2305.14233",
"region:us"
] | [
"text-generation"
] | "2023-10-24T08:24:57Z" | ---
language:
- en
license: mit
size_categories:
- 100K<n<1M
task_categories:
- text-generation
pretty_name: UltraChat 200k
configs:
- config_name: default
data_files:
- split: train_sft
path: data/train_sft-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_sft
num_bytes: 1397058554
num_examples: 207865
- name: test_sft
num_bytes: 154695659
num_examples: 23110
- name: train_gen
num_bytes: 1347396812
num_examples: 256032
- name: test_gen
num_bytes: 148276089
num_examples: 28304
download_size: 1624049723
dataset_size: 3047427114
---
# Dataset Card for UltraChat 200k
## Dataset Description
This is a heavily filtered version of the [UltraChat](https://github.com/thunlp/UltraChat) dataset and was used to train [Zephyr-7B-β](https://huggingface.co./HuggingFaceH4/zephyr-7b-beta), a state-of-the-art 7B chat model.
The original dataset consists of 1.4M dialogues generated by ChatGPT and spanning a wide range of topics. To create `UltraChat 200k`, we applied the following logic:
- Selection of a subset of data for faster supervised fine-tuning.
- Truecasing of the dataset, as we observed around 5% of the data contained grammatical errors like "Hello. how are you?" instead of "Hello. How are you?"
- Removal of dialogues where the assistant replies with phrases like "I do not have emotions" or "I don't have opinions", even for fact-based prompts that don't involve either.
## Dataset Structure
The dataset has four splits, suitable for:
* Supervised fine-tuning (`sft`).
* Generation ranking (`gen`) via techniques like rejection sampling or PPO.
The number of examples per split is shown as follows:
| train_sft | test_sft | train_gen | test_gen |
|:---------:|:--------:|:---------:|:--------:|
| 207865 | 23110 | 256032 | 28304 |
The dataset is stored in parquet format with each entry using the following schema:
```
{
"prompt": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"messages":[
{
"content": "Create a fully-developed protagonist who is challenged to survive within a dystopian society under the rule of a tyrant. ...",
"role": "user"
},
{
"content": "Name: Ava\n\n Ava was just 16 years old when the world as she knew it came crashing down. The government had collapsed, leaving behind a chaotic and lawless society. ...",
"role": "assistant"
},
{
"content": "Wow, Ava's story is so intense and inspiring! Can you provide me with more details. ...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
},
{
"content": "That's really interesting! I would love to hear more...",
"role": "user"
},
{
"content": "Certainly! ....",
"role": "assistant"
},
],
"prompt_id": "d938b65dfe31f05f80eb8572964c6673eddbd68eff3db6bd234d7f1e3b86c2af"
}
```
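For supervised fine-tuning, a record's `messages` list can be flattened into (user, assistant) turn pairs. A minimal sketch — the sample dict below is abridged from the schema above and is illustrative, not a real record:

```python
# Flatten an UltraChat-style record into (user, assistant) training pairs.
# Assumes roles alternate user/assistant, as in the schema above; a trailing
# unanswered user turn (like the last message in the schema) is dropped.
def to_pairs(record):
    msgs = record["messages"]
    pairs = []
    for user_msg, assistant_msg in zip(msgs[::2], msgs[1::2]):
        assert user_msg["role"] == "user"
        assert assistant_msg["role"] == "assistant"
        pairs.append((user_msg["content"], assistant_msg["content"]))
    return pairs

example = {
    "prompt": "Create a fully-developed protagonist ...",
    "messages": [
        {"content": "Create a fully-developed protagonist ...", "role": "user"},
        {"content": "Name: Ava\n\nAva was just 16 years old ...", "role": "assistant"},
        {"content": "Wow, Ava's story is so intense!", "role": "user"},
        {"content": "Certainly! ...", "role": "assistant"},
        {"content": "That's really interesting!", "role": "user"},  # dropped
    ],
}
print(len(to_pairs(example)))  # → 2
```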
## Citation
If you find this dataset is useful in your work, please cite the original UltraChat dataset:
```
@misc{ding2023enhancing,
title={Enhancing Chat Language Models by Scaling High-quality Instructional Conversations},
author={Ning Ding and Yulin Chen and Bokai Xu and Yujia Qin and Zhi Zheng and Shengding Hu and Zhiyuan Liu and Maosong Sun and Bowen Zhou},
year={2023},
eprint={2305.14233},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Salesforce/lotsa_data | Salesforce | "2024-04-11T07:00:30Z" | 11,130 | 62 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:arrow",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:mlcroissant",
"arxiv:2402.02592",
"region:us"
] | null | "2024-02-22T03:12:11Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: "*/*.arrow"
- config_name: "BEIJING_SUBWAY_30MIN"
data_files:
- split: train
path: "BEIJING_SUBWAY_30MIN/*.arrow"
- config_name: "HZMETRO"
data_files:
- split: train
path: "HZMETRO/*.arrow"
- config_name: "LOOP_SEATTLE"
data_files:
- split: train
path: "LOOP_SEATTLE/*.arrow"
- config_name: "LOS_LOOP"
data_files:
- split: train
path: "LOS_LOOP/*.arrow"
- config_name: "M_DENSE"
data_files:
- split: train
path: "M_DENSE/*.arrow"
- config_name: "PEMS03"
data_files:
- split: train
path: "PEMS03/*.arrow"
- config_name: "PEMS04"
data_files:
- split: train
path: "PEMS04/*.arrow"
- config_name: "PEMS07"
data_files:
- split: train
path: "PEMS07/*.arrow"
- config_name: "PEMS08"
data_files:
- split: train
path: "PEMS08/*.arrow"
- config_name: "PEMS_BAY"
data_files:
- split: train
path: "PEMS_BAY/*.arrow"
- config_name: "Q-TRAFFIC"
data_files:
- split: train
path: "Q-TRAFFIC/*.arrow"
- config_name: "SHMETRO"
data_files:
- split: train
path: "SHMETRO/*.arrow"
- config_name: "SZ_TAXI"
data_files:
- split: train
path: "SZ_TAXI/*.arrow"
- config_name: "alibaba_cluster_trace_2018"
data_files:
- split: train
path: "alibaba_cluster_trace_2018/*.arrow"
- config_name: "australian_electricity_demand"
data_files:
- split: train
path: "australian_electricity_demand/*.arrow"
- config_name: "azure_vm_traces_2017"
data_files:
- split: train
path: "azure_vm_traces_2017/*.arrow"
- config_name: "bdg-2_bear"
data_files:
- split: train
path: "bdg-2_bear/*.arrow"
- config_name: "bdg-2_fox"
data_files:
- split: train
path: "bdg-2_fox/*.arrow"
- config_name: "bdg-2_panther"
data_files:
- split: train
path: "bdg-2_panther/*.arrow"
- config_name: "bdg-2_rat"
data_files:
- split: train
path: "bdg-2_rat/*.arrow"
- config_name: "beijing_air_quality"
data_files:
- split: train
path: "beijing_air_quality/*.arrow"
- config_name: "bitcoin_with_missing"
data_files:
- split: train
path: "bitcoin_with_missing/*.arrow"
- config_name: "borealis"
data_files:
- split: train
path: "borealis/*.arrow"
- config_name: "borg_cluster_data_2011"
data_files:
- split: train
path: "borg_cluster_data_2011/*.arrow"
- config_name: "buildings_900k"
data_files:
- split: train
path: "buildings_900k/*.arrow"
- config_name: "bull"
data_files:
- split: train
path: "bull/*.arrow"
- config_name: "car_parts_with_missing"
data_files:
- split: train
path: "car_parts_with_missing/*.arrow"
- config_name: "cdc_fluview_ilinet"
data_files:
- split: train
path: "cdc_fluview_ilinet/*.arrow"
- config_name: "cdc_fluview_who_nrevss"
data_files:
- split: train
path: "cdc_fluview_who_nrevss/*.arrow"
- config_name: "china_air_quality"
data_files:
- split: train
path: "china_air_quality/*.arrow"
- config_name: "cif_2016_12"
data_files:
- split: train
path: "cif_2016_12/*.arrow"
- config_name: "cif_2016_6"
data_files:
- split: train
path: "cif_2016_6/*.arrow"
- config_name: "cmip6"
data_files:
- split: train
path: "cmip6_*/*.arrow"
- config_name: "cmip6_1850"
data_files:
- split: train
path: "cmip6_1850/*.arrow"
- config_name: "cmip6_1855"
data_files:
- split: train
path: "cmip6_1855/*.arrow"
- config_name: "cmip6_1860"
data_files:
- split: train
path: "cmip6_1860/*.arrow"
- config_name: "cmip6_1865"
data_files:
- split: train
path: "cmip6_1865/*.arrow"
- config_name: "cmip6_1870"
data_files:
- split: train
path: "cmip6_1870/*.arrow"
- config_name: "cmip6_1875"
data_files:
- split: train
path: "cmip6_1875/*.arrow"
- config_name: "cmip6_1880"
data_files:
- split: train
path: "cmip6_1880/*.arrow"
- config_name: "cmip6_1885"
data_files:
- split: train
path: "cmip6_1885/*.arrow"
- config_name: "cmip6_1890"
data_files:
- split: train
path: "cmip6_1890/*.arrow"
- config_name: "cmip6_1895"
data_files:
- split: train
path: "cmip6_1895/*.arrow"
- config_name: "cmip6_1900"
data_files:
- split: train
path: "cmip6_1900/*.arrow"
- config_name: "cmip6_1905"
data_files:
- split: train
path: "cmip6_1905/*.arrow"
- config_name: "cmip6_1910"
data_files:
- split: train
path: "cmip6_1910/*.arrow"
- config_name: "cmip6_1915"
data_files:
- split: train
path: "cmip6_1915/*.arrow"
- config_name: "cmip6_1920"
data_files:
- split: train
path: "cmip6_1920/*.arrow"
- config_name: "cmip6_1925"
data_files:
- split: train
path: "cmip6_1925/*.arrow"
- config_name: "cmip6_1930"
data_files:
- split: train
path: "cmip6_1930/*.arrow"
- config_name: "cmip6_1935"
data_files:
- split: train
path: "cmip6_1935/*.arrow"
- config_name: "cmip6_1940"
data_files:
- split: train
path: "cmip6_1940/*.arrow"
- config_name: "cmip6_1945"
data_files:
- split: train
path: "cmip6_1945/*.arrow"
- config_name: "cmip6_1950"
data_files:
- split: train
path: "cmip6_1950/*.arrow"
- config_name: "cmip6_1955"
data_files:
- split: train
path: "cmip6_1955/*.arrow"
- config_name: "cmip6_1960"
data_files:
- split: train
path: "cmip6_1960/*.arrow"
- config_name: "cmip6_1965"
data_files:
- split: train
path: "cmip6_1965/*.arrow"
- config_name: "cmip6_1970"
data_files:
- split: train
path: "cmip6_1970/*.arrow"
- config_name: "cmip6_1975"
data_files:
- split: train
path: "cmip6_1975/*.arrow"
- config_name: "cmip6_1980"
data_files:
- split: train
path: "cmip6_1980/*.arrow"
- config_name: "cmip6_1985"
data_files:
- split: train
path: "cmip6_1985/*.arrow"
- config_name: "cmip6_1990"
data_files:
- split: train
path: "cmip6_1990/*.arrow"
- config_name: "cmip6_1995"
data_files:
- split: train
path: "cmip6_1995/*.arrow"
- config_name: "cmip6_2000"
data_files:
- split: train
path: "cmip6_2000/*.arrow"
- config_name: "cmip6_2005"
data_files:
- split: train
path: "cmip6_2005/*.arrow"
- config_name: "cmip6_2010"
data_files:
- split: train
path: "cmip6_2010/*.arrow"
- config_name: "cockatoo"
data_files:
- split: train
path: "cockatoo/*.arrow"
- config_name: "covid19_energy"
data_files:
- split: train
path: "covid19_energy/*.arrow"
- config_name: "covid_deaths"
data_files:
- split: train
path: "covid_deaths/*.arrow"
- config_name: "covid_mobility"
data_files:
- split: train
path: "covid_mobility/*.arrow"
- config_name: "elecdemand"
data_files:
- split: train
path: "elecdemand/*.arrow"
- config_name: "elf"
data_files:
- split: train
path: "elf/*.arrow"
- config_name: "era5"
data_files:
- split: train
path: "era5_*/*.arrow"
- config_name: "era5_1989"
data_files:
- split: train
path: "era5_1989/*.arrow"
- config_name: "era5_1990"
data_files:
- split: train
path: "era5_1990/*.arrow"
- config_name: "era5_1991"
data_files:
- split: train
path: "era5_1991/*.arrow"
- config_name: "era5_1992"
data_files:
- split: train
path: "era5_1992/*.arrow"
- config_name: "era5_1993"
data_files:
- split: train
path: "era5_1993/*.arrow"
- config_name: "era5_1994"
data_files:
- split: train
path: "era5_1994/*.arrow"
- config_name: "era5_1995"
data_files:
- split: train
path: "era5_1995/*.arrow"
- config_name: "era5_1996"
data_files:
- split: train
path: "era5_1996/*.arrow"
- config_name: "era5_1997"
data_files:
- split: train
path: "era5_1997/*.arrow"
- config_name: "era5_1998"
data_files:
- split: train
path: "era5_1998/*.arrow"
- config_name: "era5_1999"
data_files:
- split: train
path: "era5_1999/*.arrow"
- config_name: "era5_2000"
data_files:
- split: train
path: "era5_2000/*.arrow"
- config_name: "era5_2001"
data_files:
- split: train
path: "era5_2001/*.arrow"
- config_name: "era5_2002"
data_files:
- split: train
path: "era5_2002/*.arrow"
- config_name: "era5_2003"
data_files:
- split: train
path: "era5_2003/*.arrow"
- config_name: "era5_2004"
data_files:
- split: train
path: "era5_2004/*.arrow"
- config_name: "era5_2005"
data_files:
- split: train
path: "era5_2005/*.arrow"
- config_name: "era5_2006"
data_files:
- split: train
path: "era5_2006/*.arrow"
- config_name: "era5_2007"
data_files:
- split: train
path: "era5_2007/*.arrow"
- config_name: "era5_2008"
data_files:
- split: train
path: "era5_2008/*.arrow"
- config_name: "era5_2009"
data_files:
- split: train
path: "era5_2009/*.arrow"
- config_name: "era5_2010"
data_files:
- split: train
path: "era5_2010/*.arrow"
- config_name: "era5_2011"
data_files:
- split: train
path: "era5_2011/*.arrow"
- config_name: "era5_2012"
data_files:
- split: train
path: "era5_2012/*.arrow"
- config_name: "era5_2013"
data_files:
- split: train
path: "era5_2013/*.arrow"
- config_name: "era5_2014"
data_files:
- split: train
path: "era5_2014/*.arrow"
- config_name: "era5_2015"
data_files:
- split: train
path: "era5_2015/*.arrow"
- config_name: "era5_2016"
data_files:
- split: train
path: "era5_2016/*.arrow"
- config_name: "era5_2017"
data_files:
- split: train
path: "era5_2017/*.arrow"
- config_name: "era5_2018"
data_files:
- split: train
path: "era5_2018/*.arrow"
- config_name: "extended_web_traffic_with_missing"
data_files:
- split: train
path: "extended_web_traffic_with_missing/*.arrow"
- config_name: "favorita_sales"
data_files:
- split: train
path: "favorita_sales/*.arrow"
- config_name: "favorita_transactions"
data_files:
- split: train
path: "favorita_transactions/*.arrow"
- config_name: "fred_md"
data_files:
- split: train
path: "fred_md/*.arrow"
- config_name: "gfc12_load"
data_files:
- split: train
path: "gfc12_load/*.arrow"
- config_name: "gfc14_load"
data_files:
- split: train
path: "gfc14_load/*.arrow"
- config_name: "gfc17_load"
data_files:
- split: train
path: "gfc17_load/*.arrow"
- config_name: "godaddy"
data_files:
- split: train
path: "godaddy/*.arrow"
- config_name: "hierarchical_sales"
data_files:
- split: train
path: "hierarchical_sales/*.arrow"
- config_name: "hog"
data_files:
- split: train
path: "hog/*.arrow"
- config_name: "hospital"
data_files:
- split: train
path: "hospital/*.arrow"
- config_name: "ideal"
data_files:
- split: train
path: "ideal/*.arrow"
- config_name: "kaggle_web_traffic_weekly"
data_files:
- split: train
path: "kaggle_web_traffic_weekly/*.arrow"
- config_name: "kdd2022"
data_files:
- split: train
path: "kdd2022/*.arrow"
- config_name: "kdd_cup_2018_with_missing"
data_files:
- split: train
path: "kdd_cup_2018_with_missing/*.arrow"
- config_name: "largest"
data_files:
- split: train
path: "largest_*/*.arrow"
- config_name: "largest_2017"
data_files:
- split: train
path: "largest_2017/*.arrow"
- config_name: "largest_2018"
data_files:
- split: train
path: "largest_2018/*.arrow"
- config_name: "largest_2019"
data_files:
- split: train
path: "largest_2019/*.arrow"
- config_name: "largest_2020"
data_files:
- split: train
path: "largest_2020/*.arrow"
- config_name: "largest_2021"
data_files:
- split: train
path: "largest_2021/*.arrow"
- config_name: "lcl"
data_files:
- split: train
path: "lcl/*.arrow"
- config_name: "london_smart_meters_with_missing"
data_files:
- split: train
path: "london_smart_meters_with_missing/*.arrow"
- config_name: "m1_monthly"
data_files:
- split: train
path: "m1_monthly/*.arrow"
- config_name: "m1_quarterly"
data_files:
- split: train
path: "m1_quarterly/*.arrow"
- config_name: "m1_yearly"
data_files:
- split: train
path: "m1_yearly/*.arrow"
- config_name: "m4_daily"
data_files:
- split: train
path: "m4_daily/*.arrow"
- config_name: "m4_hourly"
data_files:
- split: train
path: "m4_hourly/*.arrow"
- config_name: "m4_monthly"
data_files:
- split: train
path: "m4_monthly/*.arrow"
- config_name: "m4_quarterly"
data_files:
- split: train
path: "m4_quarterly/*.arrow"
- config_name: "m4_weekly"
data_files:
- split: train
path: "m4_weekly/*.arrow"
- config_name: "m4_yearly"
data_files:
- split: train
path: "m4_yearly/*.arrow"
- config_name: "m5"
data_files:
- split: train
path: "m5/*.arrow"
- config_name: "monash_m3_monthly"
data_files:
- split: train
path: "monash_m3_monthly/*.arrow"
- config_name: "monash_m3_other"
data_files:
- split: train
path: "monash_m3_other/*.arrow"
- config_name: "monash_m3_quarterly"
data_files:
- split: train
path: "monash_m3_quarterly/*.arrow"
- config_name: "monash_m3_yearly"
data_files:
- split: train
path: "monash_m3_yearly/*.arrow"
- config_name: "nn5_daily_with_missing"
data_files:
- split: train
path: "nn5_daily_with_missing/*.arrow"
- config_name: "nn5_weekly"
data_files:
- split: train
path: "nn5_weekly/*.arrow"
- config_name: "oikolab_weather"
data_files:
- split: train
path: "oikolab_weather/*.arrow"
- config_name: "pdb"
data_files:
- split: train
path: "pdb/*.arrow"
- config_name: "pedestrian_counts"
data_files:
- split: train
path: "pedestrian_counts/*.arrow"
- config_name: "project_tycho"
data_files:
- split: train
path: "project_tycho/*.arrow"
- config_name: "residential_load_power"
data_files:
- split: train
path: "residential_load_power/*.arrow"
- config_name: "residential_pv_power"
data_files:
- split: train
path: "residential_pv_power/*.arrow"
- config_name: "restaurant"
data_files:
- split: train
path: "restaurant/*.arrow"
- config_name: "rideshare_with_missing"
data_files:
- split: train
path: "rideshare_with_missing/*.arrow"
- config_name: "saugeenday"
data_files:
- split: train
path: "saugeenday/*.arrow"
- config_name: "sceaux"
data_files:
- split: train
path: "sceaux/*.arrow"
- config_name: "smart"
data_files:
- split: train
path: "smart/*.arrow"
- config_name: "solar_power"
data_files:
- split: train
path: "solar_power/*.arrow"
- config_name: "spain"
data_files:
- split: train
path: "spain/*.arrow"
- config_name: "subseasonal"
data_files:
- split: train
path: "subseasonal/*.arrow"
- config_name: "subseasonal_precip"
data_files:
- split: train
path: "subseasonal_precip/*.arrow"
- config_name: "sunspot_with_missing"
data_files:
- split: train
path: "sunspot_with_missing/*.arrow"
- config_name: "taxi_30min"
data_files:
- split: train
path: "taxi_30min/*.arrow"
- config_name: "temperature_rain_with_missing"
data_files:
- split: train
path: "temperature_rain_with_missing/*.arrow"
- config_name: "tourism_monthly"
data_files:
- split: train
path: "tourism_monthly/*.arrow"
- config_name: "tourism_quarterly"
data_files:
- split: train
path: "tourism_quarterly/*.arrow"
- config_name: "tourism_yearly"
data_files:
- split: train
path: "tourism_yearly/*.arrow"
- config_name: "traffic_hourly"
data_files:
- split: train
path: "traffic_hourly/*.arrow"
- config_name: "traffic_weekly"
data_files:
- split: train
path: "traffic_weekly/*.arrow"
- config_name: "uber_tlc_daily"
data_files:
- split: train
path: "uber_tlc_daily/*.arrow"
- config_name: "uber_tlc_hourly"
data_files:
- split: train
path: "uber_tlc_hourly/*.arrow"
- config_name: "us_births"
data_files:
- split: train
path: "us_births/*.arrow"
- config_name: "vehicle_trips_with_missing"
data_files:
- split: train
path: "vehicle_trips_with_missing/*.arrow"
- config_name: "weather"
data_files:
- split: train
path: "weather/*.arrow"
- config_name: "wiki-rolling_nips"
data_files:
- split: train
path: "wiki-rolling_nips/*.arrow"
- config_name: "wind_farms_with_missing"
data_files:
- split: train
path: "wind_farms_with_missing/*.arrow"
- config_name: "wind_power"
data_files:
- split: train
path: "wind_power/*.arrow"
---
# LOTSA Data
The Large-scale Open Time Series Archive (LOTSA) is a collection of open time series datasets for time series forecasting.
It was collected for the purpose of pre-training Large Time Series Models.
See the [paper](https://arxiv.org/abs/2402.02592) and [codebase](https://github.com/SalesforceAIResearch/uni2ts) for more information.
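The config list above maps each dataset name to a glob over that dataset's Arrow files, with the `default` config globbing all of them and umbrella configs like `cmip6` spanning several directories. A minimal illustration of how those globs select files, using `fnmatch` and hypothetical repository paths:

```python
from fnmatch import fnmatch

# Config-name → data-file glob, as declared in the YAML above (three examples).
config_globs = {
    "default": "*/*.arrow",
    "m4_hourly": "m4_hourly/*.arrow",
    "cmip6": "cmip6_*/*.arrow",  # umbrella over all cmip6_<year> directories
}

# Hypothetical file paths, just for illustration.
paths = [
    "m4_hourly/data-00000-of-00001.arrow",
    "cmip6_1850/data-00000-of-00002.arrow",
    "weather/data-00000-of-00001.arrow",
]

def files_for(config):
    # Note: fnmatch's '*' also matches '/' (unlike shell globbing),
    # which is fine for the single-level layout shown here.
    return [p for p in paths if fnmatch(p, config_globs[config])]

print(files_for("m4_hourly"))  # → ['m4_hourly/data-00000-of-00001.arrow']
```

In practice you would pass the config name straight to `load_dataset("Salesforce/lotsa_data", "m4_hourly")`; the sketch above only shows which files each config resolves to.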
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
If you're using LOTSA data in your research or applications, please cite it using this BibTeX:
**BibTeX:**
```bibtex
@article{woo2024unified,
title={Unified Training of Universal Time Series Forecasting Transformers},
author={Woo, Gerald and Liu, Chenghao and Kumar, Akshat and Xiong, Caiming and Savarese, Silvio and Sahoo, Doyen},
journal={arXiv preprint arXiv:2402.02592},
year={2024}
}
``` |
mteb/sickr-sts | mteb | "2022-09-27T19:13:22Z" | 11,071 | 4 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-19T14:28:03Z" | ---
language:
- en
--- |
fixie-ai/common_voice_17_0 | fixie-ai | "2025-01-07T18:53:14Z" | 11,042 | 5 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-21T18:56:23Z" | ---
dataset_info:
- config_name: ar
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 300234489.0
num_examples: 10470
- name: test
num_bytes: 311234035.0
num_examples: 10480
- name: train
num_bytes: 718845895.0
num_examples: 28369
download_size: 1250028526
dataset_size: 1330314419.0
- config_name: ast
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 10829705.0
num_examples: 387
- name: validation
num_bytes: 2892341.0
num_examples: 112
- name: test
num_bytes: 4465643.0
num_examples: 162
- name: other
num_bytes: 23505247.0
num_examples: 865
- name: invalidated
num_bytes: 482228.0
num_examples: 16
- name: validated
num_bytes: 18236675.0
num_examples: 663
download_size: 58002985
dataset_size: 60411839.0
- config_name: be
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 10733982640.578
num_examples: 347637
- name: validation
num_bytes: 568083900.76
num_examples: 15880
- name: test
num_bytes: 554671489.332
num_examples: 15878
download_size: 10989547372
dataset_size: 11856738030.67
- config_name: bg
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 148338156.76
num_examples: 4849
- name: validation
num_bytes: 94198533.448
num_examples: 2766
- name: test
num_bytes: 111571602.198
num_examples: 3201
- name: other
num_bytes: 72720896.586
num_examples: 2087
- name: invalidated
num_bytes: 27583684.0
num_examples: 746
- name: validated
num_bytes: 377935138.456
num_examples: 10832
download_size: 799144053
dataset_size: 832348011.448
- config_name: br
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 65441060.952
num_examples: 2663
- name: validation
num_bytes: 58381364.479
num_examples: 2253
- name: test
num_bytes: 57203564.256
num_examples: 2212
- name: other
num_bytes: 196312974.159
num_examples: 8037
- name: invalidated
num_bytes: 38704614.352
num_examples: 1364
- name: validated
num_bytes: 542193361.699
num_examples: 21007
download_size: 871007071
dataset_size: 958236939.897
- config_name: cs
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 715383853.824
num_examples: 20144
- name: validation
num_bytes: 313988229.844
num_examples: 9009
- name: test
num_bytes: 343116085.98
num_examples: 9067
- name: other
num_bytes: 4245083794.24
num_examples: 148316
- name: invalidated
num_bytes: 81780482.483
num_examples: 2213
- name: validated
num_bytes: 1867262013.204
num_examples: 61391
download_size: 7228185761
dataset_size: 7566614459.575001
- config_name: cy
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 334497968.0
num_examples: 7960
- name: validation
num_bytes: 202144347.435
num_examples: 5371
- name: test
num_bytes: 219542714.248
num_examples: 5379
- name: other
num_bytes: 853036757.62
num_examples: 20145
- name: invalidated
num_bytes: 168127588.328
num_examples: 4449
- name: validated
num_bytes: 3386459797.8919997
num_examples: 90369
download_size: 4946011941
dataset_size: 5163809173.523001
- config_name: da
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 82011190.3
num_examples: 3484
- name: validation
num_bytes: 68072840.16
num_examples: 2105
- name: test
num_bytes: 71855204.48
num_examples: 2530
- name: other
num_bytes: 9809263.0
num_examples: 396
- name: invalidated
num_bytes: 11802077.0
num_examples: 404
- name: validated
num_bytes: 167119907.175
num_examples: 10225
download_size: 489135817
dataset_size: 410670482.115
- config_name: de
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 23759438592.6
num_examples: 589100
- name: test
num_bytes: 715601886.0
num_examples: 16183
- name: validation
num_bytes: 710830645.0
num_examples: 16183
download_size: 24582787064
dataset_size: 25185871123.6
- config_name: el
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 54020374.6
num_examples: 1920
- name: validation
num_bytes: 45994345.6
num_examples: 1700
- name: test
num_bytes: 53316364.508
num_examples: 1701
- name: other
num_bytes: 286461727.86
num_examples: 10330
- name: invalidated
num_bytes: 24280825.0
num_examples: 837
- name: validated
num_bytes: 506396669.318
num_examples: 16199
download_size: 931351333
dataset_size: 970470306.886
- config_name: en
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: test
num_bytes: 9329520290.338
num_examples: 16393
- name: validation
num_bytes: 9434608798.338
num_examples: 16393
- name: train
num_bytes: 44987747251.6
num_examples: 1101170
- name: validated
num_bytes: 68921650062.024
num_examples: 1799288
download_size: 128219063641
dataset_size: 132673526402.3
- config_name: es
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 13216214878.31
num_examples: 336846
- name: test
num_bytes: 748084507.0
num_examples: 15857
- name: validation
num_bytes: 770184703.0
num_examples: 15857
download_size: 14415677901
dataset_size: 14734484088.309998
- config_name: et
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 155780274.846
num_examples: 3157
- name: validation
num_bytes: 124261027.42200002
num_examples: 2653
- name: test
num_bytes: 142296894.679
num_examples: 2653
- name: other
num_bytes: 2511793.0
num_examples: 60
- name: invalidated
num_bytes: 442940142.204
num_examples: 7449
- name: validated
num_bytes: 1309302759.063
num_examples: 24381
download_size: 1894945286
dataset_size: 2177092891.2139997
- config_name: fa
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 796909284.504
num_examples: 28893
- name: validation
num_bytes: 366343505.737
num_examples: 10559
- name: test
num_bytes: 403851344.903
num_examples: 10559
- name: other
num_bytes: 1242584327.472
num_examples: 32421
- name: invalidated
num_bytes: 663271290.15
num_examples: 14558
- name: validated
num_bytes: 9949122461.2
num_examples: 328720
download_size: 12556870202
dataset_size: 13422082213.966
- config_name: fi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 59037222.672
num_examples: 2076
- name: validation
num_bytes: 49998252.45
num_examples: 1770
- name: test
num_bytes: 57656484.763
num_examples: 1763
- name: other
num_bytes: 171069411.222
num_examples: 6202
- name: invalidated
num_bytes: 9828536.0
num_examples: 293
- name: validated
num_bytes: 345303318.762
num_examples: 10447
download_size: 639777329
dataset_size: 692893225.869
- config_name: fr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 20630346378.228
num_examples: 558054
- name: test
num_bytes: 684908439.0
num_examples: 16159
- name: validation
num_bytes: 703910244.0
num_examples: 16159
download_size: 21981003249
dataset_size: 22019165061.228
- config_name: frold
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 20616364930.228
num_examples: 558054
- name: test
num_bytes: 674959025.258
num_examples: 16159
- name: validation
num_bytes: 703829746.38
num_examples: 16159
download_size: 21972606682
dataset_size: 21995153701.866
- config_name: gl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 765856451.168
num_examples: 25159
- name: validation
num_bytes: 304535689.856
num_examples: 9982
- name: test
num_bytes: 324580802.22
num_examples: 9990
- name: other
num_bytes: 1258726401.146
num_examples: 41313
- name: invalidated
num_bytes: 149035909.144
num_examples: 4688
- name: validated
num_bytes: 1488072059.26
num_examples: 45780
download_size: 4105373800
dataset_size: 4290807312.7939997
- config_name: ha
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 52293332.0
num_examples: 1925
- name: validation
num_bytes: 15326816.0
num_examples: 582
- name: test
num_bytes: 19095449.0
num_examples: 661
download_size: 83233267
dataset_size: 86715597.0
- config_name: hi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 275394930.996
num_examples: 9378
- name: validation
num_bytes: 145392985.176
num_examples: 4856
- name: test
num_bytes: 220164125.264
num_examples: 6308
- name: other
num_bytes: 253400896.056
num_examples: 8088
- name: invalidated
num_bytes: 53706876.0
num_examples: 1550
- name: validated
num_bytes: 721036368.28
num_examples: 20658
download_size: 1481543483
dataset_size: 1669096181.7719998
- config_name: hu
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 1290925823.46
num_examples: 37140
- name: validation
num_bytes: 386527859.2
num_examples: 11350
- name: test
num_bytes: 408581762.4
num_examples: 11435
- name: other
num_bytes: 1601200599.1889997
num_examples: 49019
- name: invalidated
num_bytes: 106830322.07899998
num_examples: 3091
- name: validated
num_bytes: 2029885437.988
num_examples: 60358
download_size: 5649520486
dataset_size: 5823951804.316
- config_name: it
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 6137402083.638
num_examples: 169771
- name: validation
num_bytes: 701042124.0
num_examples: 15149
- name: test
num_bytes: 741163579.0
num_examples: 15155
download_size: 7600033249
dataset_size: 7579607786.638
- config_name: ja
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 186515137.0
num_examples: 6261
- name: test
num_bytes: 199063298.0
num_examples: 6261
- name: train
num_bytes: 307772889.0
num_examples: 10039
download_size: 684220424
dataset_size: 693351324.0
- config_name: ka
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 1734206832.784
num_examples: 52321
- name: validation
num_bytes: 460968626.98
num_examples: 12545
- name: test
num_bytes: 484176773.044
num_examples: 12618
- name: other
num_bytes: 1647678864.023
num_examples: 48563
- name: invalidated
num_bytes: 154652385.072
num_examples: 4252
- name: validated
num_bytes: 3039590286.645
num_examples: 97230
download_size: 7191808920
dataset_size: 7521273768.547998
- config_name: ko
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 14414452.0
num_examples: 376
- name: validation
num_bytes: 10399952.0
num_examples: 330
- name: test
num_bytes: 10995530.0
num_examples: 339
- name: other
num_bytes: 81478785.762
num_examples: 2057
- name: invalidated
num_bytes: 12109483.0
num_examples: 270
- name: validated
num_bytes: 35445686.402
num_examples: 1046
download_size: 154764122
dataset_size: 164843889.164
- config_name: lt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 209835116.997
num_examples: 7253
- name: validation
num_bytes: 151062682.688
num_examples: 4436
- name: test
num_bytes: 162556442.629
num_examples: 4753
- name: other
num_bytes: 6904494.0
num_examples: 228
- name: invalidated
num_bytes: 30174752.0
num_examples: 817
- name: validated
num_bytes: 569626378.111
num_examples: 16643
download_size: 1095722603
dataset_size: 1130159866.425
- config_name: lv
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 389918186.736
num_examples: 11364
- name: validation
num_bytes: 214444094.4
num_examples: 6752
- name: test
num_bytes: 247016773.056
num_examples: 6752
- name: other
num_bytes: 774575206.568
num_examples: 32248
- name: invalidated
num_bytes: 246579090.149
num_examples: 9533
- name: validated
num_bytes: 5125871207.440001
num_examples: 171652
download_size: 6544433914
dataset_size: 6998404558.348999
- config_name: mk
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 47301367.808
num_examples: 1686
- name: validation
num_bytes: 34148332.96
num_examples: 1289
- name: test
num_bytes: 33004372.576
num_examples: 1097
- name: other
num_bytes: 360214120.86600006
num_examples: 12289
- name: invalidated
num_bytes: 7369474.0
num_examples: 243
- name: validated
num_bytes: 197695517.31999996
num_examples: 6512
download_size: 565282221
dataset_size: 679733185.53
- config_name: ml
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 31264003.679
num_examples: 1259
- name: validation
num_bytes: 18739113.0
num_examples: 764
- name: test
num_bytes: 18614275.0
num_examples: 710
- name: other
num_bytes: 141699368.196
num_examples: 5621
- name: invalidated
num_bytes: 3191472.0
num_examples: 106
- name: validated
num_bytes: 76056084.6
num_examples: 2984
download_size: 277861858
dataset_size: 289564316.475
- config_name: mn
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 91286618.6
num_examples: 2175
- name: validation
num_bytes: 83918248.11
num_examples: 1870
- name: test
num_bytes: 87301693.544
num_examples: 1896
- name: other
num_bytes: 261783831.25699997
num_examples: 5773
- name: invalidated
num_bytes: 37994415.0
num_examples: 809
- name: validated
num_bytes: 401045422.717
num_examples: 8757
download_size: 877760445
dataset_size: 963330229.2279999
- config_name: mr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 82324371.32
num_examples: 2215
- name: validation
num_bytes: 72622291.4
num_examples: 1780
- name: test
num_bytes: 69184965.291
num_examples: 1751
- name: other
num_bytes: 105920245.12000002
num_examples: 2805
- name: invalidated
num_bytes: 89957578.6
num_examples: 2265
- name: validated
num_bytes: 430654155.517
num_examples: 10901
download_size: 807328544
dataset_size: 850663607.2479999
- config_name: nl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 1296442281.912
num_examples: 34898
- name: validation
num_bytes: 368174915.228
num_examples: 11252
- name: test
num_bytes: 408713015.03199995
num_examples: 11266
- name: other
num_bytes: 83953868.822
num_examples: 2771
- name: invalidated
num_bytes: 191476101.2
num_examples: 5580
- name: validated
num_bytes: 2890451379.794
num_examples: 90449
download_size: 4761956599
dataset_size: 5239211561.988001
- config_name: oc
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 8370168.0
num_examples: 271
- name: validation
num_bytes: 7369724.0
num_examples: 260
- name: test
num_bytes: 7981225.0
num_examples: 254
- name: other
num_bytes: 233530880.4
num_examples: 7632
- name: invalidated
num_bytes: 5792724.0
num_examples: 182
- name: validated
num_bytes: 49584538.944
num_examples: 1668
download_size: 292926902
dataset_size: 312629260.344
- config_name: pl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 681180803.048
num_examples: 20729
- name: validation
num_bytes: 325217628.02
num_examples: 9230
- name: test
num_bytes: 368033596.56
num_examples: 9230
- name: other
num_bytes: 22160515.0
num_examples: 662
- name: invalidated
num_bytes: 279557995.4
num_examples: 6605
- name: validated
num_bytes: 4518718954.4609995
num_examples: 132661
download_size: 6000668493
dataset_size: 6194869492.488999
- config_name: pt
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 290319070.0
num_examples: 9464
- name: test
num_bytes: 304560776.0
num_examples: 9467
- name: train
num_bytes: 624494986.0
num_examples: 21968
download_size: 1188978689
dataset_size: 1219374832.0
- config_name: ro
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 133078550.984
num_examples: 5141
- name: validation
num_bytes: 105068346.48
num_examples: 3881
- name: test
num_bytes: 123465190.968
num_examples: 3896
- name: other
num_bytes: 543898614.704
num_examples: 23087
- name: invalidated
num_bytes: 23898694.0
num_examples: 977
- name: validated
num_bytes: 560844530.353
num_examples: 17737
download_size: 1437521485
dataset_size: 1490253927.4889998
- config_name: ru
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 393037777.0
num_examples: 10203
- name: test
num_bytes: 397099376.0
num_examples: 10203
- name: train
num_bytes: 977625337.0
num_examples: 26377
download_size: 1734268016
dataset_size: 1767762490.0
- config_name: sk
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 74831491.866
num_examples: 3258
- name: validation
num_bytes: 67653499.816
num_examples: 2588
- name: test
num_bytes: 70771288.681
num_examples: 2647
- name: other
num_bytes: 92158853.128
num_examples: 3392
- name: invalidated
num_bytes: 25400576.0
num_examples: 833
- name: validated
num_bytes: 524330322.198
num_examples: 19513
download_size: 767611996
dataset_size: 855146031.689
- config_name: sl
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 30021851.608
num_examples: 1388
- name: validation
num_bytes: 33182159.072
num_examples: 1232
- name: test
num_bytes: 36852679.33
num_examples: 1242
- name: other
num_bytes: 71031102.54
num_examples: 3145
- name: invalidated
num_bytes: 8357183.0
num_examples: 281
- name: validated
num_bytes: 318885513.516
num_examples: 10819
download_size: 481787837
dataset_size: 498330489.066
- config_name: sr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 33763655.765
num_examples: 1879
- name: validation
num_bytes: 25179221.496
num_examples: 1583
- name: test
num_bytes: 31283094.164
num_examples: 1539
- name: other
num_bytes: 34633621.309
num_examples: 1781
- name: invalidated
num_bytes: 5912716.0
num_examples: 290
- name: validated
num_bytes: 121128666.78
num_examples: 5970
download_size: 219387774
dataset_size: 251900975.514
- config_name: sv-SE
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 201604157.344
num_examples: 7744
- name: validation
num_bytes: 145407584.16
num_examples: 5210
- name: test
num_bytes: 168456898.744
num_examples: 5259
- name: other
num_bytes: 182626841.121
num_examples: 6759
- name: invalidated
num_bytes: 43666692.56
num_examples: 1428
- name: validated
num_bytes: 1302439008.81
num_examples: 40770
download_size: 1772780355
dataset_size: 2044201182.7389998
- config_name: sw
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 1625554237.232
num_examples: 46494
- name: validation
num_bytes: 393719831.126
num_examples: 12251
- name: test
num_bytes: 447161293.396
num_examples: 12253
- name: other
num_bytes: 11713924829.874
num_examples: 377365
- name: invalidated
num_bytes: 2500259913.3079996
num_examples: 80612
- name: validated
num_bytes: 9054232290.616999
num_examples: 267001
download_size: 25679221842
dataset_size: 25734852395.552998
- config_name: ta
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 1787645589.3039997
num_examples: 45587
- name: validation
num_bytes: 411960865.99
num_examples: 12095
- name: test
num_bytes: 478673592.114
num_examples: 12074
- name: other
num_bytes: 3643795189.905
num_examples: 93989
- name: invalidated
num_bytes: 230273211.249
num_examples: 5693
- name: validated
num_bytes: 5422820571.824
num_examples: 135391
download_size: 11548448217
dataset_size: 11975169020.386002
- config_name: te
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 1696115.0
num_examples: 62
- name: validation
num_bytes: 1381471.0
num_examples: 48
- name: test
num_bytes: 1293519.0
num_examples: 49
- name: other
num_bytes: 43324939.612
num_examples: 1732
- name: invalidated
num_bytes: 441556.0
num_examples: 18
- name: validated
num_bytes: 6161936.0
num_examples: 224
download_size: 54489346
dataset_size: 54299536.612
- config_name: th
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 865414649.837
num_examples: 32823
- name: validation
num_bytes: 328919810.63
num_examples: 11042
- name: test
num_bytes: 337683048.872
num_examples: 11042
- name: other
num_bytes: 5266135437.405999
num_examples: 206935
- name: invalidated
num_bytes: 332435894.647
num_examples: 9267
- name: validated
num_bytes: 4151072931.0839996
num_examples: 147160
download_size: 10608529487
dataset_size: 11281661772.476
- config_name: tr
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 854586956.976
num_examples: 35147
- name: validation
num_bytes: 265450510.268
num_examples: 11258
- name: test
num_bytes: 363424742.28
num_examples: 11290
- name: other
num_bytes: 4238883.0
num_examples: 117
- name: invalidated
num_bytes: 152949072.07
num_examples: 4530
- name: validated
num_bytes: 2694662410.926
num_examples: 114056
download_size: 4038924157
dataset_size: 4335312575.5199995
- config_name: uk
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 824014245.552
num_examples: 25137
- name: validation
num_bytes: 338351263.068
num_examples: 10007
- name: test
num_bytes: 363575667.839
num_examples: 10011
- name: other
num_bytes: 211123163.846
num_examples: 7851
- name: invalidated
num_bytes: 141986802.304
num_examples: 3204
- name: validated
num_bytes: 2579348540.4549994
num_examples: 75489
download_size: 4037277320
dataset_size: 4458399683.063999
- config_name: ur
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 133627299.408
num_examples: 5368
- name: validation
num_bytes: 98509203.154
num_examples: 4057
- name: test
num_bytes: 117242341.632
num_examples: 4056
- name: other
num_bytes: 3630451215.8669996
num_examples: 135861
- name: invalidated
num_bytes: 197321142.268
num_examples: 6818
- name: validated
num_bytes: 1353163990.006
num_examples: 53858
download_size: 5354414559
dataset_size: 5530315192.335001
- config_name: vi
features:
- name: client_id
dtype: string
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 48000
- name: sentence
dtype: string
- name: up_votes
dtype: int64
- name: down_votes
dtype: int64
- name: age
dtype: string
- name: gender
dtype: string
- name: accent
dtype: string
- name: locale
dtype: string
- name: segment
dtype: string
- name: variant
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 76589078.04
num_examples: 2298
- name: validation
num_bytes: 14388627.0
num_examples: 641
- name: test
num_bytes: 34782072.802
num_examples: 1274
- name: other
num_bytes: 329412205.64
num_examples: 11533
- name: invalidated
num_bytes: 11291189.0
num_examples: 377
- name: validated
num_bytes: 139846021.79
num_examples: 5135
download_size: 519996701
dataset_size: 606309194.272
configs:
- config_name: ar
data_files:
- split: validation
path: ar/validation-*
- split: test
path: ar/test-*
- split: train
path: ar/train-*
- config_name: ast
data_files:
- split: train
path: ast/train/**
- split: validation
path: ast/validation/**
- split: test
path: ast/test/**
- split: other
path: ast/other/**
- split: invalidated
path: ast/invalidated/**
- split: validated
path: ast/validated/**
- config_name: be
data_files:
- split: train
path: be/train/**
- split: validation
path: be/validation/**
- split: test
path: be/test/**
- config_name: bg
data_files:
- split: train
path: bg/train/**
- split: validation
path: bg/validation/**
- split: test
path: bg/test/**
- split: other
path: bg/other/**
- split: invalidated
path: bg/invalidated/**
- split: validated
path: bg/validated/**
- config_name: br
data_files:
- split: train
path: br/train/**
- split: validation
path: br/validation/**
- split: test
path: br/test/**
- split: other
path: br/other/**
- split: invalidated
path: br/invalidated/**
- split: validated
path: br/validated/**
- config_name: cs
data_files:
- split: train
path: cs/train/**
- split: validation
path: cs/validation/**
- split: test
path: cs/test/**
- split: other
path: cs/other/**
- split: invalidated
path: cs/invalidated/**
- split: validated
path: cs/validated/**
- config_name: cy
data_files:
- split: train
path: cy/train/**
- split: validation
path: cy/validation/**
- split: test
path: cy/test/**
- split: other
path: cy/other/**
- split: invalidated
path: cy/invalidated/**
- split: validated
path: cy/validated/**
- config_name: da
data_files:
- split: train
path: da/train/**
- split: validation
path: da/validation/**
- split: test
path: da/test/**
- split: other
path: da/other/**
- split: invalidated
path: da/invalidated/**
- split: validated
path: da/validated/**
- config_name: de
data_files:
- split: validation
path: de/validation-*
- split: test
path: de/test-*
- split: train
path: de/train-*
- config_name: el
data_files:
- split: train
path: el/train/**
- split: validation
path: el/validation/**
- split: test
path: el/test/**
- split: other
path: el/other/**
- split: invalidated
path: el/invalidated/**
- split: validated
path: el/validated/**
- config_name: en
data_files:
- split: test
path: en/test-*
- split: validation
path: en/validation-*
- split: train
path: en/train-*
- split: validated
path: en/validated-*
- config_name: es
data_files:
- split: validation
path: es/validation-*
- split: test
path: es/test-*
- split: train
path: es/train-*
- config_name: et
data_files:
- split: train
path: et/train/**
- split: validation
path: et/validation/**
- split: test
path: et/test/**
- split: other
path: et/other/**
- split: invalidated
path: et/invalidated/**
- split: validated
path: et/validated/**
- config_name: fa
data_files:
- split: train
path: fa/train/**
- split: validation
path: fa/validation/**
- split: test
path: fa/test/**
- split: other
path: fa/other/**
- split: invalidated
path: fa/invalidated/**
- split: validated
path: fa/validated/**
- config_name: fi
data_files:
- split: train
path: fi/train/**
- split: validation
path: fi/validation/**
- split: test
path: fi/test/**
- split: other
path: fi/other/**
- split: invalidated
path: fi/invalidated/**
- split: validated
path: fi/validated/**
- config_name: fr
data_files:
- split: validation
path: fr/validation-*
- split: train
path: frnew/train-*
- split: test
path: fr/test-*
- config_name: frold
data_files:
- split: train
path: fr/train-*
- split: test
path: fr/test-*
- split: validation
path: fr/validation-*
- config_name: gl
data_files:
- split: train
path: gl/train/**
- split: validation
path: gl/validation/**
- split: test
path: gl/test/**
- split: other
path: gl/other/**
- split: invalidated
path: gl/invalidated/**
- split: validated
path: gl/validated/**
- config_name: ha
data_files:
- split: train
path: ha/train/**
- split: validation
path: ha/validation/**
- split: test
path: ha/test/**
- config_name: hi
data_files:
- split: train
path: hi/train/**
- split: validation
path: hi/validation/**
- split: test
path: hi/test/**
- split: other
path: hi/other/**
- split: invalidated
path: hi/invalidated/**
- split: validated
path: hi/validated/**
- config_name: hu
data_files:
- split: train
path: hu/train/**
- split: validation
path: hu/validation/**
- split: test
path: hu/test/**
- split: other
path: hu/other/**
- split: invalidated
path: hu/invalidated/**
- split: validated
path: hu/validated/**
- config_name: it
data_files:
- split: validation
path: it/validation-*
- split: test
path: it/test-*
- split: train
path: it/train-*
- config_name: ja
data_files:
- split: validation
path: ja/validation-*
- split: test
path: ja/test-*
- split: train
path: ja/train-*
- config_name: ka
data_files:
- split: train
path: ka/train/**
- split: validation
path: ka/validation/**
- split: test
path: ka/test/**
- split: other
path: ka/other/**
- split: invalidated
path: ka/invalidated/**
- split: validated
path: ka/validated/**
- config_name: ko
data_files:
- split: train
path: ko/train/**
- split: validation
path: ko/validation/**
- split: test
path: ko/test/**
- split: other
path: ko/other/**
- split: invalidated
path: ko/invalidated/**
- split: validated
path: ko/validated/**
- config_name: lt
data_files:
- split: train
path: lt/train/**
- split: validation
path: lt/validation/**
- split: test
path: lt/test/**
- split: other
path: lt/other/**
- split: invalidated
path: lt/invalidated/**
- split: validated
path: lt/validated/**
- config_name: lv
data_files:
- split: train
path: lv/train/**
- split: validation
path: lv/validation/**
- split: test
path: lv/test/**
- split: other
path: lv/other/**
- split: invalidated
path: lv/invalidated/**
- split: validated
path: lv/validated/**
- config_name: mk
data_files:
- split: train
path: mk/train/**
- split: validation
path: mk/validation/**
- split: test
path: mk/test/**
- split: other
path: mk/other/**
- split: invalidated
path: mk/invalidated/**
- split: validated
path: mk/validated/**
- config_name: ml
data_files:
- split: train
path: ml/train/**
- split: validation
path: ml/validation/**
- split: test
path: ml/test/**
- split: other
path: ml/other/**
- split: invalidated
path: ml/invalidated/**
- split: validated
path: ml/validated/**
- config_name: mn
data_files:
- split: train
path: mn/train/**
- split: validation
path: mn/validation/**
- split: test
path: mn/test/**
- split: other
path: mn/other/**
- split: invalidated
path: mn/invalidated/**
- split: validated
path: mn/validated/**
- config_name: mr
data_files:
- split: train
path: mr/train/**
- split: validation
path: mr/validation/**
- split: test
path: mr/test/**
- split: other
path: mr/other/**
- split: invalidated
path: mr/invalidated/**
- split: validated
path: mr/validated/**
- config_name: nl
data_files:
- split: train
path: nl/train/**
- split: validation
path: nl/validation/**
- split: test
path: nl/test/**
- split: other
path: nl/other/**
- split: invalidated
path: nl/invalidated/**
- split: validated
path: nl/validated/**
- config_name: oc
data_files:
- split: train
path: oc/train/**
- split: validation
path: oc/validation/**
- split: test
path: oc/test/**
- split: other
path: oc/other/**
- split: invalidated
path: oc/invalidated/**
- split: validated
path: oc/validated/**
- config_name: pl
data_files:
- split: train
path: pl/train/**
- split: validation
path: pl/validation/**
- split: test
path: pl/test/**
- split: other
path: pl/other/**
- split: invalidated
path: pl/invalidated/**
- split: validated
path: pl/validated/**
- config_name: pt
data_files:
- split: validation
path: pt/validation-*
- split: test
path: pt/test-*
- split: train
path: pt/train-*
- config_name: ro
data_files:
- split: train
path: ro/train/**
- split: validation
path: ro/validation/**
- split: test
path: ro/test/**
- split: other
path: ro/other/**
- split: invalidated
path: ro/invalidated/**
- split: validated
path: ro/validated/**
- config_name: ru
data_files:
- split: validation
path: ru/validation-*
- split: test
path: ru/test-*
- split: train
path: ru/train-*
- config_name: sk
data_files:
- split: train
path: sk/train/**
- split: validation
path: sk/validation/**
- split: test
path: sk/test/**
- split: other
path: sk/other/**
- split: invalidated
path: sk/invalidated/**
- split: validated
path: sk/validated/**
- config_name: sl
data_files:
- split: train
path: sl/train/**
- split: validation
path: sl/validation/**
- split: test
path: sl/test/**
- split: other
path: sl/other/**
- split: invalidated
path: sl/invalidated/**
- split: validated
path: sl/validated/**
- config_name: sr
data_files:
- split: train
path: sr/train/**
- split: validation
path: sr/validation/**
- split: test
path: sr/test/**
- split: other
path: sr/other/**
- split: invalidated
path: sr/invalidated/**
- split: validated
path: sr/validated/**
- config_name: sv-SE
data_files:
- split: train
path: sv-SE/train/**
- split: validation
path: sv-SE/validation/**
- split: test
path: sv-SE/test/**
- split: other
path: sv-SE/other/**
- split: invalidated
path: sv-SE/invalidated/**
- split: validated
path: sv-SE/validated/**
- config_name: sw
data_files:
- split: train
path: sw/train/**
- split: validation
path: sw/validation/**
- split: test
path: sw/test/**
- split: other
path: sw/other/**
- split: invalidated
path: sw/invalidated/**
- split: validated
path: sw/validated/**
- config_name: ta
data_files:
- split: train
path: ta/train/**
- split: validation
path: ta/validation/**
- split: test
path: ta/test/**
- split: other
path: ta/other/**
- split: invalidated
path: ta/invalidated/**
- split: validated
path: ta/validated/**
- config_name: te
data_files:
- split: train
path: te/train/**
- split: validation
path: te/validation/**
- split: test
path: te/test/**
- split: other
path: te/other/**
- split: invalidated
path: te/invalidated/**
- split: validated
path: te/validated/**
- config_name: th
data_files:
- split: train
path: th/train/**
- split: validation
path: th/validation/**
- split: test
path: th/test/**
- split: other
path: th/other/**
- split: invalidated
path: th/invalidated/**
- split: validated
path: th/validated/**
- config_name: tr
data_files:
- split: train
path: tr/train/**
- split: validation
path: tr/validation/**
- split: test
path: tr/test/**
- split: other
path: tr/other/**
- split: invalidated
path: tr/invalidated/**
- split: validated
path: tr/validated/**
- config_name: uk
data_files:
- split: train
path: uk/train/**
- split: validation
path: uk/validation/**
- split: test
path: uk/test/**
- split: other
path: uk/other/**
- split: invalidated
path: uk/invalidated/**
- split: validated
path: uk/validated/**
- config_name: ur
data_files:
- split: train
path: ur/train/**
- split: validation
path: ur/validation/**
- split: test
path: ur/test/**
- split: other
path: ur/other/**
- split: invalidated
path: ur/invalidated/**
- split: validated
path: ur/validated/**
- config_name: vi
data_files:
- split: train
path: vi/train/**
- split: validation
path: vi/validation/**
- split: test
path: vi/test/**
- split: other
path: vi/other/**
- split: invalidated
path: vi/invalidated/**
- split: validated
path: vi/validated/**
---
|
allenai/sciq | allenai | "2024-01-04T16:23:51Z" | 11,038 | 97 | [
"task_categories:question-answering",
"task_ids:closed-domain-qa",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-3.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-nc-3.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: sciq
pretty_name: SciQ
dataset_info:
features:
- name: question
dtype: string
- name: distractor3
dtype: string
- name: distractor1
dtype: string
- name: distractor2
dtype: string
- name: correct_answer
dtype: string
- name: support
dtype: string
splits:
- name: train
num_bytes: 6546183
num_examples: 11679
- name: validation
num_bytes: 554120
num_examples: 1000
- name: test
num_bytes: 563927
num_examples: 1000
download_size: 4674410
dataset_size: 7664230
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for "sciq"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/sciq](https://allenai.org/data/sciq)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
### Dataset Summary
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 2.82 MB
- **Size of the generated dataset:** 7.68 MB
- **Total amount of disk used:** 10.50 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"correct_answer": "coriolis effect",
"distractor1": "muon effect",
"distractor2": "centrifugal effect",
"distractor3": "tropical effect",
"question": "What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?",
"support": "\"Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to..."
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `question`: a `string` feature.
- `distractor3`: a `string` feature.
- `distractor1`: a `string` feature.
- `distractor2`: a `string` feature.
- `correct_answer`: a `string` feature.
- `support`: a `string` feature.
### Data Splits
| name |train|validation|test|
|-------|----:|---------:|---:|
|default|11679| 1000|1000|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
The dataset is licensed under the [Creative Commons Attribution-NonCommercial 3.0 Unported License](http://creativecommons.org/licenses/by-nc/3.0/).
### Citation Information
```
@inproceedings{SciQ,
title={Crowdsourcing Multiple Choice Science Questions},
author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
year={2017},
journal={arXiv:1707.06209v1}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
mteb/biosses-sts | mteb | "2022-09-27T19:13:38Z" | 11,021 | 1 | [
"language:en",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-19T14:47:25Z" | ---
language:
- en
--- |
HuggingFaceFW/fineweb-edu-score-2 | HuggingFaceFW | "2025-01-06T14:49:16Z" | 11,006 | 68 | [
"task_categories:text-generation",
"language:en",
"license:odc-by",
"size_categories:10B<n<100B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2404.14219",
"arxiv:2401.10020",
"arxiv:2109.07445",
"region:us"
] | [
"text-generation"
] | "2024-05-28T17:30:16Z" | ---
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: FineWeb-Edu (score >= 2)
size_categories:
- n>1T
configs:
- config_name: default
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: date
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
data_files:
- split: train
path: data/*/*
- config_name: CC-MAIN-2024-51
data_files:
- split: train
path: data/CC-MAIN-2024-51/*
- config_name: CC-MAIN-2024-46
data_files:
- split: train
path: data/CC-MAIN-2024-46/*
- config_name: CC-MAIN-2024-42
data_files:
- split: train
path: data/CC-MAIN-2024-42/*
- config_name: CC-MAIN-2024-38
data_files:
- split: train
path: data/CC-MAIN-2024-38/*
- config_name: CC-MAIN-2024-33
data_files:
- split: train
path: data/CC-MAIN-2024-33/*
- config_name: CC-MAIN-2024-30
data_files:
- split: train
path: data/CC-MAIN-2024-30/*
- config_name: CC-MAIN-2024-26
data_files:
- split: train
path: data/CC-MAIN-2024-26/*
- config_name: CC-MAIN-2024-22
data_files:
- split: train
path: data/CC-MAIN-2024-22/*
- config_name: CC-MAIN-2024-18
data_files:
- split: train
path: data/CC-MAIN-2024-18/*
- config_name: CC-MAIN-2024-10
data_files:
- split: train
path: data/CC-MAIN-2024-10/*
- config_name: CC-MAIN-2023-50
data_files:
- split: train
path: data/CC-MAIN-2023-50/*
- config_name: CC-MAIN-2023-40
data_files:
- split: train
path: data/CC-MAIN-2023-40/*
- config_name: CC-MAIN-2023-23
data_files:
- split: train
path: data/CC-MAIN-2023-23/*
- config_name: CC-MAIN-2023-14
data_files:
- split: train
path: data/CC-MAIN-2023-14/*
- config_name: CC-MAIN-2023-06
data_files:
- split: train
path: data/CC-MAIN-2023-06/*
- config_name: CC-MAIN-2022-49
data_files:
- split: train
path: data/CC-MAIN-2022-49/*
- config_name: CC-MAIN-2022-40
data_files:
- split: train
path: data/CC-MAIN-2022-40/*
- config_name: CC-MAIN-2022-33
data_files:
- split: train
path: data/CC-MAIN-2022-33/*
- config_name: CC-MAIN-2022-27
data_files:
- split: train
path: data/CC-MAIN-2022-27/*
- config_name: CC-MAIN-2022-21
data_files:
- split: train
path: data/CC-MAIN-2022-21/*
- config_name: CC-MAIN-2022-05
data_files:
- split: train
path: data/CC-MAIN-2022-05/*
- config_name: CC-MAIN-2021-49
data_files:
- split: train
path: data/CC-MAIN-2021-49/*
- config_name: CC-MAIN-2021-43
data_files:
- split: train
path: data/CC-MAIN-2021-43/*
- config_name: CC-MAIN-2021-39
data_files:
- split: train
path: data/CC-MAIN-2021-39/*
- config_name: CC-MAIN-2021-31
data_files:
- split: train
path: data/CC-MAIN-2021-31/*
- config_name: CC-MAIN-2021-25
data_files:
- split: train
path: data/CC-MAIN-2021-25/*
- config_name: CC-MAIN-2021-21
data_files:
- split: train
path: data/CC-MAIN-2021-21/*
- config_name: CC-MAIN-2021-17
data_files:
- split: train
path: data/CC-MAIN-2021-17/*
- config_name: CC-MAIN-2021-10
data_files:
- split: train
path: data/CC-MAIN-2021-10/*
- config_name: CC-MAIN-2021-04
data_files:
- split: train
path: data/CC-MAIN-2021-04/*
- config_name: CC-MAIN-2020-50
data_files:
- split: train
path: data/CC-MAIN-2020-50/*
- config_name: CC-MAIN-2020-45
data_files:
- split: train
path: data/CC-MAIN-2020-45/*
- config_name: CC-MAIN-2020-40
data_files:
- split: train
path: data/CC-MAIN-2020-40/*
- config_name: CC-MAIN-2020-34
data_files:
- split: train
path: data/CC-MAIN-2020-34/*
- config_name: CC-MAIN-2020-29
data_files:
- split: train
path: data/CC-MAIN-2020-29/*
- config_name: CC-MAIN-2020-24
data_files:
- split: train
path: data/CC-MAIN-2020-24/*
- config_name: CC-MAIN-2020-16
data_files:
- split: train
path: data/CC-MAIN-2020-16/*
- config_name: CC-MAIN-2020-10
data_files:
- split: train
path: data/CC-MAIN-2020-10/*
- config_name: CC-MAIN-2020-05
data_files:
- split: train
path: data/CC-MAIN-2020-05/*
- config_name: CC-MAIN-2019-51
data_files:
- split: train
path: data/CC-MAIN-2019-51/*
- config_name: CC-MAIN-2019-47
data_files:
- split: train
path: data/CC-MAIN-2019-47/*
- config_name: CC-MAIN-2019-43
data_files:
- split: train
path: data/CC-MAIN-2019-43/*
- config_name: CC-MAIN-2019-39
data_files:
- split: train
path: data/CC-MAIN-2019-39/*
- config_name: CC-MAIN-2019-35
data_files:
- split: train
path: data/CC-MAIN-2019-35/*
- config_name: CC-MAIN-2019-30
data_files:
- split: train
path: data/CC-MAIN-2019-30/*
- config_name: CC-MAIN-2019-26
data_files:
- split: train
path: data/CC-MAIN-2019-26/*
- config_name: CC-MAIN-2019-22
data_files:
- split: train
path: data/CC-MAIN-2019-22/*
- config_name: CC-MAIN-2019-18
data_files:
- split: train
path: data/CC-MAIN-2019-18/*
- config_name: CC-MAIN-2019-13
data_files:
- split: train
path: data/CC-MAIN-2019-13/*
- config_name: CC-MAIN-2019-09
data_files:
- split: train
path: data/CC-MAIN-2019-09/*
- config_name: CC-MAIN-2019-04
data_files:
- split: train
path: data/CC-MAIN-2019-04/*
- config_name: CC-MAIN-2018-51
data_files:
- split: train
path: data/CC-MAIN-2018-51/*
- config_name: CC-MAIN-2018-47
data_files:
- split: train
path: data/CC-MAIN-2018-47/*
- config_name: CC-MAIN-2018-43
data_files:
- split: train
path: data/CC-MAIN-2018-43/*
- config_name: CC-MAIN-2018-39
data_files:
- split: train
path: data/CC-MAIN-2018-39/*
- config_name: CC-MAIN-2018-34
data_files:
- split: train
path: data/CC-MAIN-2018-34/*
- config_name: CC-MAIN-2018-30
data_files:
- split: train
path: data/CC-MAIN-2018-30/*
- config_name: CC-MAIN-2018-26
data_files:
- split: train
path: data/CC-MAIN-2018-26/*
- config_name: CC-MAIN-2018-22
data_files:
- split: train
path: data/CC-MAIN-2018-22/*
- config_name: CC-MAIN-2018-17
data_files:
- split: train
path: data/CC-MAIN-2018-17/*
- config_name: CC-MAIN-2018-13
data_files:
- split: train
path: data/CC-MAIN-2018-13/*
- config_name: CC-MAIN-2018-09
data_files:
- split: train
path: data/CC-MAIN-2018-09/*
- config_name: CC-MAIN-2018-05
data_files:
- split: train
path: data/CC-MAIN-2018-05/*
- config_name: CC-MAIN-2017-51
data_files:
- split: train
path: data/CC-MAIN-2017-51/*
- config_name: CC-MAIN-2017-47
data_files:
- split: train
path: data/CC-MAIN-2017-47/*
- config_name: CC-MAIN-2017-43
data_files:
- split: train
path: data/CC-MAIN-2017-43/*
- config_name: CC-MAIN-2017-39
data_files:
- split: train
path: data/CC-MAIN-2017-39/*
- config_name: CC-MAIN-2017-34
data_files:
- split: train
path: data/CC-MAIN-2017-34/*
- config_name: CC-MAIN-2017-30
data_files:
- split: train
path: data/CC-MAIN-2017-30/*
- config_name: CC-MAIN-2017-26
data_files:
- split: train
path: data/CC-MAIN-2017-26/*
- config_name: CC-MAIN-2017-22
data_files:
- split: train
path: data/CC-MAIN-2017-22/*
- config_name: CC-MAIN-2017-17
data_files:
- split: train
path: data/CC-MAIN-2017-17/*
- config_name: CC-MAIN-2017-13
data_files:
- split: train
path: data/CC-MAIN-2017-13/*
- config_name: CC-MAIN-2017-09
data_files:
- split: train
path: data/CC-MAIN-2017-09/*
- config_name: CC-MAIN-2017-04
data_files:
- split: train
path: data/CC-MAIN-2017-04/*
- config_name: CC-MAIN-2016-50
data_files:
- split: train
path: data/CC-MAIN-2016-50/*
- config_name: CC-MAIN-2016-44
data_files:
- split: train
path: data/CC-MAIN-2016-44/*
- config_name: CC-MAIN-2016-40
data_files:
- split: train
path: data/CC-MAIN-2016-40/*
- config_name: CC-MAIN-2016-36
data_files:
- split: train
path: data/CC-MAIN-2016-36/*
- config_name: CC-MAIN-2016-30
data_files:
- split: train
path: data/CC-MAIN-2016-30/*
- config_name: CC-MAIN-2016-26
data_files:
- split: train
path: data/CC-MAIN-2016-26/*
- config_name: CC-MAIN-2016-22
data_files:
- split: train
path: data/CC-MAIN-2016-22/*
- config_name: CC-MAIN-2016-18
data_files:
- split: train
path: data/CC-MAIN-2016-18/*
- config_name: CC-MAIN-2016-07
data_files:
- split: train
path: data/CC-MAIN-2016-07/*
- config_name: CC-MAIN-2015-48
data_files:
- split: train
path: data/CC-MAIN-2015-48/*
- config_name: CC-MAIN-2015-40
data_files:
- split: train
path: data/CC-MAIN-2015-40/*
- config_name: CC-MAIN-2015-35
data_files:
- split: train
path: data/CC-MAIN-2015-35/*
- config_name: CC-MAIN-2015-32
data_files:
- split: train
path: data/CC-MAIN-2015-32/*
- config_name: CC-MAIN-2015-27
data_files:
- split: train
path: data/CC-MAIN-2015-27/*
- config_name: CC-MAIN-2015-22
data_files:
- split: train
path: data/CC-MAIN-2015-22/*
- config_name: CC-MAIN-2015-18
data_files:
- split: train
path: data/CC-MAIN-2015-18/*
- config_name: CC-MAIN-2015-14
data_files:
- split: train
path: data/CC-MAIN-2015-14/*
- config_name: CC-MAIN-2015-11
data_files:
- split: train
path: data/CC-MAIN-2015-11/*
- config_name: CC-MAIN-2015-06
data_files:
- split: train
path: data/CC-MAIN-2015-06/*
- config_name: CC-MAIN-2014-52
data_files:
- split: train
path: data/CC-MAIN-2014-52/*
- config_name: CC-MAIN-2014-49
data_files:
- split: train
path: data/CC-MAIN-2014-49/*
- config_name: CC-MAIN-2014-42
data_files:
- split: train
path: data/CC-MAIN-2014-42/*
- config_name: CC-MAIN-2014-41
data_files:
- split: train
path: data/CC-MAIN-2014-41/*
- config_name: CC-MAIN-2014-35
data_files:
- split: train
path: data/CC-MAIN-2014-35/*
- config_name: CC-MAIN-2014-23
data_files:
- split: train
path: data/CC-MAIN-2014-23/*
- config_name: CC-MAIN-2014-15
data_files:
- split: train
path: data/CC-MAIN-2014-15/*
- config_name: CC-MAIN-2014-10
data_files:
- split: train
path: data/CC-MAIN-2014-10/*
- config_name: CC-MAIN-2013-48
data_files:
- split: train
path: data/CC-MAIN-2013-48/*
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: data/CC-MAIN-2013-20/*
---
# 📚 FineWeb-Edu-score-2
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/wwRnEQydH9qdRtFofIE-A.png" alt="FineWeb-Edu: The finest collection of educational content the web has to offer">
</center>
> 1.3 trillion tokens of the finest educational data the 🌐 web has to offer
## What is it?
The 📚 FineWeb-Edu dataset consists of **1.3T tokens** ([FineWeb-Edu](https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu)) and **5.4T tokens** of educational web pages filtered from the 🍷 FineWeb dataset. This repository contains the 5.4-trillion-token version.
### Note: this version uses a lower educational score threshold = 2, which results in more documents, but lower quality compared to the 1.3T version. For more details check the FineWeb [blog post](https://huggingface.co./spaces/HuggingFaceFW/blogpost-fineweb-v1).
To enhance FineWeb's quality, we developed an [educational quality classifier](https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier) using annotations generated by LLama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.
The [Dataset Curation](https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu#dataset-curation) section details the process for creating the dataset.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/QqXOM8h_ZjjhuCv71xmV7.png)
## What is being released?
Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering as well as the code for training it and running inference at: https://github.com/huggingface/cosmopedia/tree/main/classification.
## Changelog
_Previous versions remain available in the branch `version name`._
- **v1.2.0 (03-01-2025):** Added 9 new snapshots: `CC-MAIN-2024-18`, `CC-MAIN-2024-22`, `CC-MAIN-2024-26`, `CC-MAIN-2024-30`, `CC-MAIN-2024-33`, `CC-MAIN-2024-38`, `CC-MAIN-2024-42`, `CC-MAIN-2024-46`, `CC-MAIN-2024-51`, covering April to December 2024.
- **v1.0.0 (02-06-2024):** Initial version
## How to load the dataset
Similarly to FineWeb, you can load the full dataset or a specific crawl/dump. Dumps have the format `CC-MAIN-(year)-(week number)`.
### Using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/)
```python
from datatrove.pipeline.readers import ParquetReader
# limit determines how many documents will be streamed (remove for all)
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb-edu-score-2", glob_pattern="data/*/*.parquet", limit=1000)
# or read only a specific crawl (this second assignment replaces the first):
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb-edu-score-2/CC-MAIN-2024-10", limit=1000)
for document in data_reader():
# do something with document
print(document)
###############################
# OR for a processing pipeline:
###############################
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import JsonlWriter
pipeline_exec = LocalPipelineExecutor(
pipeline=[
ParquetReader("hf://datasets/HuggingFaceFW/fineweb-edu-score-2/CC-MAIN-2024-10", limit=1000),
LambdaFilter(lambda doc: "hugging" in doc.text),
JsonlWriter("some-output-path")
],
tasks=10
)
pipeline_exec.run()
```
### Using `datasets`
```python
from datasets import load_dataset
fw = load_dataset("HuggingFaceFW/fineweb-edu-score-2", name="CC-MAIN-2024-10", split="train", streaming=True)
```
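Each document in this release carries the classifier's raw `score` and rounded `int_score` fields (see the features list above), so you can recover the stricter threshold-3 subset by filtering the stream. A minimal sketch of such a filter predicate, shown here on in-memory records (the sample texts are made up for illustration; with a real streamed dataset you would pass the same function to `fw.filter`):

```python
# Sample records mirroring the dataset's schema: each document stores the
# educational classifier's rounded score in `int_score`.
docs = [
    {"text": "Photosynthesis converts light energy into chemical energy.", "int_score": 4},
    {"text": "Buy cheap widgets online today!", "int_score": 2},
    {"text": "The Pythagorean theorem relates the sides of a right triangle.", "int_score": 3},
]

def keep_edu(doc, threshold=3):
    """Keep only documents at or above the FineWeb-Edu score threshold."""
    return doc["int_score"] >= threshold

filtered = [d for d in docs if keep_edu(d)]
print(len(filtered))  # 2 of the 3 sample records pass the threshold-3 filter
```

With the streamed dataset, `fw.filter(keep_edu)` applies the same cut-off lazily as documents arrive.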
## Dataset curation
A new approach has recently emerged for filtering LLM training datasets: using synthetic data to develop classifiers for identifying educational content. This technique was used in the trainings of [LLama3](https://ai.meta.com/blog/meta-llama-3-meta-ai-responsibility/), [Claude3](https://www-cdn.anthropic.com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf) and [Phi3](https://arxiv.org/abs/2404.14219), but its large-scale impact on web data filtering hasn't been fully explored or published.
The highly popular Phi3 models were trained on 3.3 and 4.8 trillion tokens, with the paper stating: “Our training data consists of heavily filtered publicly available web data (according to the 'educational level') from various open internet sources, as well as synthetic LLM-generated data". Similarly, the LLama3 blog post notes: “We found that previous generations of Llama are good at identifying high-quality data, so we used Llama 2 to help build the text-quality classifiers that are powering Llama 3.” However, these classifiers and filtered datasets are not publicly available. To enhance FineWeb's quality, we developed an educational quality classifier using annotations generated by [LLama3-70B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-70B-Instruct) to create FineWeb-Edu.
### Annotation
We used [Llama3-70B-Instruct](https://huggingface.co./meta-llama/Meta-Llama-3-70B-Instruct) to score 500k FineWeb samples for their educational quality on a scale from 0 to 5.
We explored various prompts and found that the additive scale by [Yuan et al.](https://arxiv.org/pdf/2401.10020) worked best. To avoid the LLM favoring highly technical pages like arXiv abstracts and submissions, we focused on grade-school and middle-school level knowledge. By setting a threshold of 3 (on a scale of 0 to 5) during the filtering process, we were also able to retain some high-level educational pages. The final prompt can be found in this blog post TODO.
We also experimented with different LLMs: Llama3-70B-Instruct, Mixtral-8x-7B-Instruct, and Mixtral-8x22B-Instruct. Llama3 and Mixtral-8x22B produced similar scores, while Mixtral-8x7B tended to be more generous, not fully adhering to the score scale. Verga et al. suggest using multiple LLMs as juries. We tried averaging the scores from the three models, but this shifted the distribution to the right due to the higher scores from Mixtral-8x7B. Training on a dataset filtered with a classifier using jury annotations performed worse than using a classifier based on Llama3 annotations. We hypothesize that the jury-based approach retains more low-quality samples.
### Classifier training
We fine-tuned a BERT-like regression model on these annotations, based on [Snowflake-arctic-embed](https://huggingface.co./Snowflake/snowflake-arctic-embed-m). When converted to a binary classifier, using a score of 3 as the threshold for keeping or removing files, the model achieved an F1 score of 82%. Classifying FineWeb's 15T tokens took 6k H100 GPU hours.
The classifier is available at: [https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier/ ](https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier/)
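As a minimal sketch, scoring a document with the released classifier and applying the same integer-rounding threshold rule might look like the following. It assumes the model loads as a standard `transformers` sequence-classification head with a single regression logit; verify against the classifier's model card before relying on it.

```python
def score_text(text: str) -> float:
    """Score a document on the 0-5 educational scale (downloads the model)."""
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    name = "HuggingFaceFW/fineweb-edu-classifier"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        # Single regression logit, roughly in the 0-5 range.
        return model(**inputs).logits.squeeze(-1).item()

def keep_for_edu(score: float, threshold: int = 3) -> bool:
    """Binary keep/drop rule: clamp the raw score to [0, 5], round it to an
    integer, and keep the document if it reaches the threshold."""
    return round(max(0.0, min(score, 5.0))) >= threshold
```

For example, a raw score of 3.2 rounds to 3 and is kept, while 2.4 rounds to 2 and is dropped.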
### Filtering and results
**Note**: You can find more details about the ablations and results in the FineWeb blog post (TODO).
We investigated the impact of using different thresholds for the filtering and found that a threshold of 3 gave the best overall results. Although using a threshold higher than 3 improves performance on knowledge- and reasoning-intensive benchmarks, it significantly degrades performance on HellaSwag and PIQA.
We then built 📚 FineWeb-Edu by filtering out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.3T educational tokens. Our ablation demonstrated that this refined dataset surpasses 🍷 FineWeb and all other open web datasets, with remarkable improvements on educational benchmarks such as MMLU, ARC, and OpenBookQA. The plot below compares FineWeb-Edu to other web datasets:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/hJlyTgDzZpYuxO9LUm0PF.png)
To retain more tokens, we also experimented with a less strict threshold of 2 instead of 3. While less performant than threshold 3, it still outperformed FineWeb and preserved 5.4T tokens. We release these two datasets as [FineWeb-Edu](https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu) and [FineWeb-Edu-score-2](https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu-score-2) along with the [classifier](https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier).
You will find all the ablation models in [this collection](https://huggingface.co./collections/HuggingFaceFW/ablation-models-662457b0d213e8c14fe47f32). The FineWeb-Edu ablation model (trained on 350B tokens) is available at [https://huggingface.co./HuggingFaceFW/ablation-model-fineweb-edu](https://huggingface.co./HuggingFaceFW/ablation-model-fineweb-edu).
## Considerations for Using the Data
This section is copied from the parent dataset: [FineWeb](https://huggingface.co./datasets/HuggingFaceFW/fineweb).
### Social Impact of Dataset
With the release of this dataset we aim to make model training more accessible to the machine learning community at large.
While multiple open-weights models with strong performance have been publicly released in the past, these releases are often not accompanied by the corresponding training dataset. This is unfortunate, as a dataset's specific characteristics have been shown to have a very large impact on model performance. Since creating a high-quality training dataset is a fundamental requirement for training an LLM that excels at downstream tasks, with 🍷 FineWeb we (a) make the dataset creation process more transparent by sharing our entire processing setup, including the codebase used, and (b) help alleviate the costs of dataset curation, in both time and compute, for model creators by publicly releasing our dataset to the community.
### Discussion of Biases
Efforts were made to minimize the amount of NSFW and toxic content present in the dataset by employing filtering on the URL level. However, there are still a significant number of documents present in the final dataset that could be considered toxic or contain harmful content. As 🍷 FineWeb was sourced from the web as a whole, any harmful biases typically present in it may be reproduced on our dataset.
We deliberately avoided using machine learning filtering methods that define text quality based on the similarity to a “gold” source such as wikipedia or toxicity classifiers as these methods have been known to [disproportionately remove content in specific dialects](https://aclanthology.org/D16-1120/) and [overclassify as toxic text related to specific social identities](https://arxiv.org/pdf/2109.07445.pdf), respectively.
### Other Known Limitations
As a consequence of some of the filtering steps applied, it is likely that code content is not prevalent in our dataset. If you are training a model that should also perform code tasks, we recommend combining 🍷 FineWeb with a code dataset, such as [The Stack v2](https://huggingface.co./datasets/bigcode/the-stack-v2). You should also probably consider complementing 🍷 FineWeb with specialized curated sources (such as Wikipedia), as they will likely have better formatting than the Wikipedia content included in 🍷 FineWeb (we did not tailor the processing to individual websites).
## Additional Information
### Licensing Information
The dataset is released under the **Open Data Commons Attribution License (ODC-By) v1.0** [license](https://opendatacommons.org/licenses/by/1-0/). The use of this dataset is also subject to [CommonCrawl's Terms of Use](https://commoncrawl.org/terms-of-use).
### Future work
We plan to work on a better educational classifier to improve the quality of FineWeb-Edu.
### Citation Information
```
@software{lozhkov2024fineweb-edu,
author = {Lozhkov, Anton and Ben Allal, Loubna and von Werra, Leandro and Wolf, Thomas},
title = {FineWeb-Edu},
  month = may,
year = 2024,
url = {https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu}
}
``` |
PleIAs/common_corpus | PleIAs | "2024-11-22T13:41:35Z" | 10,996 | 197 | [
"task_categories:text-generation",
"language:en",
"language:fr",
"language:de",
"language:it",
"language:pt",
"language:nl",
"language:es",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2410.22587",
"region:us",
"legal",
"finance",
"literature",
"science",
"code"
] | [
"text-generation"
] | "2024-11-12T13:44:24Z" | ---
language:
- en
- fr
- de
- it
- pt
- nl
- es
pretty_name: Common Corpus
size_categories:
- n>1T
task_categories:
- text-generation
tags:
- legal
- finance
- literature
- science
- code
---
# Common Corpus
Common Corpus is the largest open and permissively licensed text dataset, comprising over 2 trillion tokens (2,003,039,184,047 tokens). It is a diverse dataset, consisting of books, newspapers, scientific articles, government and legal documents, code, and more.
Common Corpus differs from existing open datasets in that it is:
* **Truly Open**: contains only data that is permissively licensed
* **Multilingual**: mostly representing English and French data, but contains data for XX languages
* **Diverse**: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
* **Extensively Curated**: spelling and formatting have been corrected in digitized texts, harmful and toxic content has been removed, and content with low educational value has also been removed.
# About Common Corpus
Common Corpus is made of five carefully curated collections:
* **OpenCulture**: our largest collection at 926,541,096,243 tokens, featuring public domain books, newspapers, and Wikisource content. We've developed innovative tools like OCROnos-Vintage to correct historical digitization errors, while implementing advanced toxicity filtering to ensure content meets modern ethical standards.
* **OpenGovernment**: 387,965,738,992 tokens of financial and legal documents, including Finance Commons (from sources like SEC and WTO) and Legal Commons (including Europarl and Caselaw Access Project), providing enterprise-grade training data from regulatory bodies and administrative sources.
* **OpenSource**: 334,658,896,533 tokens of high-quality open source code from GitHub, filtered using ArmoRM so that only the top 80% of submissions by quality rating are included.
* **OpenScience**: 221,798,136,564 tokens of academic content from OpenAlex and other open science repositories, processed using vision-language models to preserve crucial document structure and formatting.
* **OpenWeb**: 132,075,315,715 tokens from Wikipedia (official releases from the [Wikimedia Foundation](https://huggingface.co./datasets/wikimedia/wikipedia) on Huggingface), YouTube Commons and other websites available under permissive licenses like Stack-Exchange.
| Collection | Domain | Sources |
|----------------|--------------------------|-------------------------------------------------------------------------------------------|
| OpenGovernment | legal and administrative | [Finance Commons](https://huggingface.co./collections/PleIAs/finance-commons-66925e1095c7fa6e6828e26c) (e.g. SEC, WTO) and Legal Commons (e.g. Europarl, Caselaw Access Project) |
| OpenCulture | cultural heritage | public domain books and newspapers, Wikisource |
| OpenScience | academic | OpenAlex, French theses |
| OpenWeb | web text | [YouTube Commons](https://huggingface.co./datasets/PleIAs/YouTube-Commons), Stack Exchange |
| OpenSource | code | GitHub |
A comprehensive technical report detailing our methodologies and data sources will accompany the dataset release, ensuring full transparency and reproducibility. We will release the individual sub-corpora in the coming weeks for more fine-grained auditability and to expand uses.
## Dataset Structure
<details>
<summary>Data Fields</summary>
* identifier: unique text identifier
* text: post-processed text
* char_count: number of UTF-8 characters in text
* file_name: original file path, organized by collection
* set_id: set id (1-10)
* subset_id: subset id (1-100)
</details>
<br />
# How to Use
## Considerations for Using the Data
All data in Common Corpus are permissively licensed and may be used for both commercial and non-commercial purposes.
The dataset is multilingual. The language of each text is included in the metadata, so data can be filtered by language. Additionally, some of the text data are historical. The year each text was written is included in the metadata, therefore it is possible to construct a dataset with a custom date cutoff if desired.
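As a sketch, filtering by language and date while streaming might look like the following. The `language` and `date` field names are assumptions based on the description above (they do not appear in the data-fields list), so check the actual metadata schema before relying on them.

```python
def keep_example(example: dict, languages=("en", "fr"), cutoff_year=2000) -> bool:
    """Keep texts in the given languages written before `cutoff_year`.
    The "language" and "date" keys are hypothetical field names."""
    if example.get("language") not in languages:
        return False
    year = example.get("date")
    # Keep examples with no date rather than silently dropping them.
    return year is None or int(year) < cutoff_year

def stream_filtered():
    """Stream Common Corpus and apply the filter lazily (requires network)."""
    from datasets import load_dataset
    ds = load_dataset("PleIAs/common_corpus", split="train", streaming=True)
    return ds.filter(keep_example)
```

Streaming avoids downloading the full 2T-token corpus before filtering.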
### Discussion of Bias
Some of the dataset sources contain biased and toxic content, such as stereotypes about certain minoritized groups. We have removed texts which had high toxicity scores according to our toxicity classifier, [Celadon](https://huggingface.co./PleIAs/celadon), or which contain offensive terms and slurs. See our [preprint](https://arxiv.org/pdf/2410.22587) for more details.
### Personal and Sensitive Information
We have attempted to remove personally identifiable information (PII). We primarily use [Microsoft Presidio](https://microsoft.github.io/presidio/), but make additional modifications to account for language- and country-specific considerations, such as European phone number formats.
## Use Common Corpus
```
from datasets import load_dataset
data = load_dataset('PleIAs/common_corpus')
```
# Acknowledgements
The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Nvidia Inception program, Nebius AI, Tracto AI, Mozilla. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC). This dataset was also made in partnership with Wikimedia Enterprise for the Wikipedia part. The collection of the corpus has been largely facilitated thanks to the open science LLM community insights, cooperation and support (Eleuther AI, Allen AI, HuggingFace…).
<div style="text-align: center;">
<img src="https://huggingface.co./datasets/PleIAs/common_corpus/resolve/main/logo/ai_alliance.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://huggingface.co./datasets/PleIAs/common_corpus/resolve/main/logo/logo-genci-header.svg" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://huggingface.co./datasets/PleIAs/common_corpus/resolve/main/logo/Nvidia_(logo).svg.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://huggingface.co./datasets/PleIAs/common_corpus/resolve/main/logo/tractoAI.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://huggingface.co./datasets/PleIAs/common_corpus/resolve/main/logo/mozilla.png" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/ministere_logo.png?token=GHSAT0AAAAAACZUTJMICO3MSWUJ43EQWG5QZZL3RFQ" style="width: 33%; margin: 0 auto; display: inline-block;"/>
<img src="https://raw.githubusercontent.com/Pleias/logos/f117dee70b317bc664eac14ee70d7c0563101ed1/wikimedia_logo.png?token=GHSAT0AAAAAACZUTJMIIPAP4J7MKP6RSSWCZZL3TFA" style="width: 33%; margin: 0 auto; display: inline-block;"/>
</div> |
HuggingFaceM4/Docmatix | HuggingFaceM4 | "2024-08-26T08:15:21Z" | 10,963 | 235 | [
"task_categories:visual-question-answering",
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2408.12637",
"region:us",
"docvqa"
] | [
"visual-question-answering"
] | "2024-07-17T11:33:00Z" | ---
language:
- en
license: mit
size_categories:
- 1M<n<10M
task_categories:
- visual-question-answering
pretty_name: Docmatix
tags:
- docvqa
configs:
- config_name: images
data_files:
- split: train
path: data/train-*
- config_name: pdf
data_files:
- split: train
path: pdf/train-*
- config_name: zero-shot-exp
data_files:
- split: train
path: zero-shot-exp/train-*
- split: test
path: zero-shot-exp/test-*
dataset_info:
- config_name: images
features:
- name: images
sequence: image
- name: texts
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 552957537722.77
num_examples: 1273215
download_size: 159404414330
dataset_size: 552957537722.77
- config_name: pdf
features:
- name: pdf
dtype: binary
- name: texts
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 458612867150
num_examples: 1273245
download_size: 431829972210
dataset_size: 458612867150
- config_name: zero-shot-exp
features:
- name: images
sequence: image
- name: texts
list:
- name: user
dtype: string
- name: assistant
dtype: string
- name: source
dtype: string
splits:
- name: test
num_bytes: 68900253.0
num_examples: 200
- name: train
num_bytes: 578335690.5
num_examples: 1700
download_size: 642963847
dataset_size: 647235943.5
---
# Dataset Card for Docmatix
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/65d66b494bbd0d92b641cdbb/P7rIELr2eom_IorBY5DZu.webp)
## Dataset description
Docmatix is part of the Idefics3 release (stay tuned).
It is a massive dataset for Document Visual Question Answering that was used for the fine-tuning of the vision-language model Idefics3.
## Load the dataset
To load the dataset, install the library `datasets` with `pip install datasets`. Then,
```
from datasets import load_dataset
ds = load_dataset("HuggingFaceM4/Docmatix")
```
If you want the dataset to link to the pdf files as binaries instead of the images, do:
```
from datasets import load_dataset
ds = load_dataset("HuggingFaceM4/Docmatix", "pdf")
```
## Data fields
An example of a sample looks as follows:
```
{
"images" = [PIL.Image]
"texts" = [
{ "user": "What is the purpose of the Confirmation Statement mentioned in the document?",
"assistant": "The purpose of the Confirmation Statement is to confirm that all information required to be delivered by the company to the registrar in relation to the confirmation period concerned has been delivered or is being delivered at the same time as the confirmation statement.",
"source": "PDFA key: 244" },
{ "user": "When was the filing received as per the document?",
"assistant": "The filing was received for filing in Electronic Format on the 23/03/2021.",
"source": "PDFA key: 244" },
]
}
```
In `images`, there is a list of up to 4 images, to be placed before the text.
In `texts`, there is a conversation between a user and an assistant about the images that is represented by a list of turns.
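As a minimal sketch, the turns can be flattened into the chat-style message list used by many fine-tuning pipelines; the field names (`texts`, `user`, `assistant`) follow the sample shown above.

```python
def to_messages(sample: dict) -> list:
    """Flatten a Docmatix sample's Q/A turns into chat-style messages,
    alternating user and assistant roles in order."""
    messages = []
    for turn in sample["texts"]:
        messages.append({"role": "user", "content": turn["user"]})
        messages.append({"role": "assistant", "content": turn["assistant"]})
    return messages
```

The images in `images` would then be prepended before the first user message, as described above.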
## Comparison to other DocVQA datasets
| Dataset | # images | # Q/A pairs | # tokens |
|----------------------|----------|-------------|------------|
| *Document visual question answering* |
| **Docmatix** | **2,444,750**| **9,500,000** | **390,000,000**|
| DocVQA | 10,189 | 39,463 | 337,829 |
| TextCaps | 21,953 | 21,953 | 389,658 |
| TextVQA | 21,953 | 34,602 | 181,918 |
| ST-VQA | 17,247 | 23,121 | 127,846 |
| OCR-VQA | 165,746 | 801,579 | 6,073,824 |
| VisualMRC | 3,027 | 11,988 | 168,828 |
| IAM | 5,663 | 5,663 | 144,216 |
| InfoVQA | 2,118 | 10,074 | 61,048 |
| Diagram image-to-text| 300 | 300 | 22,196 |
# Citation
**BibTeX:**
```bibtex
@misc{laurençon2024building,
title={Building and better understanding vision-language models: insights and future directions.},
author={Hugo Laurençon and Andrés Marafioti and Victor Sanh and Léo Tronchon},
year={2024},
eprint={2408.12637},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
common-canvas/commoncatalog-cc-by-nc-nd | common-canvas | "2024-05-16T19:46:41Z" | 10,957 | 2 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-nc-nd-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | "2023-10-19T02:10:48Z" | ---
license: cc-by-nc-nd-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY-NC-ND
This dataset is a large collection of high-resolution Creative Common images (composed of different licenses, see paper Table 1 in the Appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4k resolution, making this one of the highest resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Commercial use
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets each containing parquets about 4GB each. Each subfolder within contains a resolution range of the images and their respective aspect ratios.
The dataset is also divided along images licensed for commercial use (C) and those that are not (NC).
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic captions and releasing it so that others can train on a common dataset for open-source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See the Yahoo Flickr Creative Commons 100M dataset for more information. The data was collected circa 2014 and is known to have a bias towards internet-connected Western countries. Some areas, such as the Global South, lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co./Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co./Skylion007)
|
ChongyanChen/VQAonline | ChongyanChen | "2024-04-19T04:22:11Z" | 10,936 | 7 | [
"task_categories:visual-question-answering",
"license:cc-by-sa-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2311.15562",
"region:us"
] | [
"visual-question-answering"
] | "2023-12-22T15:00:02Z" | ---
license: cc-by-sa-4.0
task_categories:
- visual-question-answering
pretty_name: VQAonline
---
# VQAonline
<img src="https://cdn-uploads.huggingface.co/production/uploads/6337e9b676421c05430a0287/6vt42q8w7EWx9vVuZqc3U.png" width="50%">
[**🌐 Homepage**](https://vqaonline.github.io/) | [**🤗 Dataset**](https://huggingface.co./datasets/ChongyanChen/VQAonline/) | [**📖 arXiv**](https://arxiv.org/abs/2311.15562)
## Dataset Description
We introduce VQAonline, the first VQA dataset in which all contents originate from an authentic use case.
VQAonline includes 64K visual questions sourced from an online question answering community (i.e., StackExchange).
It differs from prior datasets in several ways; for example, it contains:
- (1) authentic context that clarifies the question
- (2) an answer that the individual asking the question validated as acceptable from all community-provided answers,
- (3) answers that are considerably longer (e.g., a mean of 173 words versus typically 11 words or fewer in prior work)
- (4) user-chosen topics for each visual question from 105 diverse topics, revealing the dataset’s inherent diversity.
## Download
To download, you can use the following code:
```
git clone https://huggingface.co./datasets/ChongyanChen/VQAonline
```
## Dataset Structure
In total, the VQAonline dataset contains 64,696 visual questions.
We designed VQAonline to support few-shot settings given the recent exciting developments around in-context few-shot learning with foundation models. Thus, we split the dataset as follows:
- Training set: 665 visual questions
- Validation set: 285 visual questions
- Test set: 63,746 visual questions
The questions, contexts, and answers are provided in the json files.
Due to Hugging Face repository constraints, we separate the image files into 7 folders (named images1 to images7), each of which contains 10,000 image files, except for folder "images7".
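Because an image referenced in the json files may live in any of the seven folders, a small lookup helper can resolve it after cloning. This is only a sketch: the exact filename scheme used inside the json files is an assumption, so adapt it to the actual references.

```python
from pathlib import Path
from typing import Optional

def find_image(root: str, filename: str) -> Optional[Path]:
    """Return the path of `filename` in whichever images1..images7 folder
    contains it, or None if it is not found anywhere."""
    for i in range(1, 8):
        candidate = Path(root) / f"images{i}" / filename
        if candidate.exists():
            return candidate
    return None
```

After `git clone`, pass the repository root as `root` and the image filename from the json record.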
## Contact
- Chongyan Chen: [email protected]
## Citation
**BibTeX:**
```bibtex
@article{chen2023vqaonline,
title={Fully Authentic Visual Question Answering Dataset from Online Communities},
author={Chen, Chongyan and Liu, Mengchen and Codella, Noel and Li, Yunsheng and Yuan, Lu and Gurari, Danna},
journal={arXiv preprint arXiv:2311.15562},
year={2023}
}
``` |
mteb/banking77 | mteb | "2022-09-27T19:15:02Z" | 10,922 | 2 | [
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-05-17T12:14:06Z" | ---
language:
- en
--- |
asahi417/seamless-align-enA-zhA.speaker-embedding.xlsr-2b | asahi417 | "2024-06-17T08:52:20Z" | 10,913 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-14T10:18:38Z" | ---
dataset_info:
- config_name: subset_1
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14209259131
num_examples: 1962
download_size: 14256120203
dataset_size: 14209259131
- config_name: subset_10
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13574781625
num_examples: 2031
download_size: 13621966757
dataset_size: 13574781625
- config_name: subset_100
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13223964877
num_examples: 1891
download_size: 13269307182
dataset_size: 13223964877
- config_name: subset_101
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13038203739
num_examples: 1885
download_size: 13083404216
dataset_size: 13038203739
- config_name: subset_102
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12731679458
num_examples: 1863
download_size: 12775688644
dataset_size: 12731679458
- config_name: subset_103
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12967209285
num_examples: 1861
download_size: 13011071076
dataset_size: 12967209285
- config_name: subset_104
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12798692606
num_examples: 1875
download_size: 12842795816
dataset_size: 12798692606
- config_name: subset_105
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13127114114
num_examples: 1871
download_size: 13172271401
dataset_size: 13127114114
- config_name: subset_106
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12426801586
num_examples: 1865
download_size: 12469421998
dataset_size: 12426801586
- config_name: subset_107
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12484775174
num_examples: 1838
download_size: 12527398592
dataset_size: 12484775174
- config_name: subset_108
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13018346253
num_examples: 1860
download_size: 13063301347
dataset_size: 13018346253
- config_name: subset_109
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12986696298
num_examples: 1866
download_size: 13030608940
dataset_size: 12986696298
- config_name: subset_11
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13038519979
num_examples: 1994
download_size: 13084550040
dataset_size: 13038519979
- config_name: subset_110
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12537003686
num_examples: 1843
download_size: 12580875152
dataset_size: 12537003686
- config_name: subset_111
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12920543044
num_examples: 1845
download_size: 12964231904
dataset_size: 12920543044
- config_name: subset_112
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12666264009
num_examples: 1844
download_size: 12709732284
dataset_size: 12666264009
- config_name: subset_113
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12572103874
num_examples: 1839
download_size: 12615926245
dataset_size: 12572103874
- config_name: subset_114
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12918422777
num_examples: 1851
download_size: 12960836861
dataset_size: 12918422777
- config_name: subset_115
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12832082885
num_examples: 1821
download_size: 12875679807
dataset_size: 12832082885
- config_name: subset_116
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12805128711
num_examples: 1837
download_size: 12848847004
dataset_size: 12805128711
- config_name: subset_117
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12914312061
num_examples: 1854
download_size: 12957416120
dataset_size: 12914312061
- config_name: subset_118
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12536340519
num_examples: 1814
download_size: 12579845649
dataset_size: 12536340519
- config_name: subset_119
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12663849536
num_examples: 1823
download_size: 12706307005
dataset_size: 12663849536
- config_name: subset_12
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13345506634
num_examples: 2034
download_size: 13391815198
dataset_size: 13345506634
- config_name: subset_120
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12540967981
num_examples: 1835
download_size: 12584734529
dataset_size: 12540967981
- config_name: subset_121
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12881361209
num_examples: 1832
download_size: 12925049223
dataset_size: 12881361209
- config_name: subset_122
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12693240892
num_examples: 1824
download_size: 12736912334
dataset_size: 12693240892
- config_name: subset_123
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12762978970
num_examples: 1800
download_size: 12804981254
dataset_size: 12762978970
- config_name: subset_124
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13001669496
num_examples: 1830
download_size: 13045674231
dataset_size: 13001669496
- config_name: subset_125
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12993926322
num_examples: 1858
download_size: 13037516573
dataset_size: 12993926322
- config_name: subset_126
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13297930843
num_examples: 1888
download_size: 13343186712
dataset_size: 13297930843
- config_name: subset_127
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12751048394
num_examples: 1833
download_size: 12794759892
dataset_size: 12751048394
- config_name: subset_128
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13141818374
num_examples: 1835
download_size: 13185797931
dataset_size: 13141818374
- config_name: subset_129
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13304123905
num_examples: 1885
download_size: 13349335826
dataset_size: 13304123905
- config_name: subset_13
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13296351131
num_examples: 2021
download_size: 13342470698
dataset_size: 13296351131
- config_name: subset_130
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12913357124
num_examples: 1828
download_size: 12956967608
dataset_size: 12913357124
- config_name: subset_131
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12861389006
num_examples: 1813
download_size: 12904807434
dataset_size: 12861389006
- config_name: subset_132
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13234681867
num_examples: 1864
download_size: 13279044682
dataset_size: 13234681867
- config_name: subset_133
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12962105674
num_examples: 1844
download_size: 13005284694
dataset_size: 12962105674
- config_name: subset_134
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12743586667
num_examples: 1826
download_size: 12787256702
dataset_size: 12743586667
- config_name: subset_135
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13166439361
num_examples: 1853
download_size: 13211052569
dataset_size: 13166439361
- config_name: subset_136
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13099690718
num_examples: 1881
download_size: 13144846389
dataset_size: 13099690718
- config_name: subset_137
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12741551269
num_examples: 1837
download_size: 12785267794
dataset_size: 12741551269
- config_name: subset_138
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13205037463
num_examples: 1869
download_size: 13250209887
dataset_size: 13205037463
- config_name: subset_139
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12997304962
num_examples: 1830
download_size: 13040947110
dataset_size: 12997304962
- config_name: subset_14
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13147826343
num_examples: 2012
download_size: 13193978395
dataset_size: 13147826343
- config_name: subset_140
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12925120270
num_examples: 1815
download_size: 12968597687
dataset_size: 12925120270
- config_name: subset_141
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12720671970
num_examples: 1814
download_size: 12764228297
dataset_size: 12720671970
- config_name: subset_142
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13068068628
num_examples: 1851
download_size: 13111428243
dataset_size: 13068068628
- config_name: subset_143
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12707538206
num_examples: 1792
download_size: 12750667442
dataset_size: 12707538206
- config_name: subset_144
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13032830257
num_examples: 1856
download_size: 13077677610
dataset_size: 13032830257
- config_name: subset_145
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13125760032
num_examples: 1850
download_size: 13169372344
dataset_size: 13125760032
- config_name: subset_146
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13267575660
num_examples: 1847
download_size: 13312015745
dataset_size: 13267575660
- config_name: subset_147
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13027789155
num_examples: 1851
download_size: 13072382865
dataset_size: 13027789155
- config_name: subset_148
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13035403722
num_examples: 1848
download_size: 13079919602
dataset_size: 13035403722
- config_name: subset_149
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12973733702
num_examples: 1869
download_size: 13017673308
dataset_size: 12973733702
- config_name: subset_15
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13088705529
num_examples: 2010
download_size: 13134907161
dataset_size: 13088705529
- config_name: subset_150
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13483674365
num_examples: 1866
download_size: 13528056417
dataset_size: 13483674365
- config_name: subset_151
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13181178047
num_examples: 1862
download_size: 13225576965
dataset_size: 13181178047
- config_name: subset_152
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12941487991
num_examples: 1825
download_size: 12985061862
dataset_size: 12941487991
- config_name: subset_153
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13368213999
num_examples: 1859
download_size: 13412797003
dataset_size: 13368213999
- config_name: subset_154
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13297990214
num_examples: 1862
download_size: 13342836529
dataset_size: 13297990214
- config_name: subset_155
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13054204743
num_examples: 1827
download_size: 13097552879
dataset_size: 13054204743
- config_name: subset_156
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13062749902
num_examples: 1834
download_size: 13106716484
dataset_size: 13062749902
- config_name: subset_157
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13387093767
num_examples: 1861
download_size: 13431728772
dataset_size: 13387093767
- config_name: subset_158
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12880058650
num_examples: 1783
download_size: 12922917433
dataset_size: 12880058650
- config_name: subset_159
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11881220327
num_examples: 1654
download_size: 11920259054
dataset_size: 11881220327
- config_name: subset_16
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12462625538
num_examples: 1974
download_size: 12506159941
dataset_size: 12462625538
- config_name: subset_160
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13238783195
num_examples: 1841
download_size: 13282707085
dataset_size: 13238783195
- config_name: subset_161
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13020419071
num_examples: 1838
download_size: 13064546764
dataset_size: 13020419071
- config_name: subset_162
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12977261277
num_examples: 1783
download_size: 13020115761
dataset_size: 12977261277
- config_name: subset_163
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13154440690
num_examples: 1805
download_size: 13196753466
dataset_size: 13154440690
- config_name: subset_164
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13221774010
num_examples: 1785
download_size: 13265348083
dataset_size: 13221774010
- config_name: subset_165
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12890453538
num_examples: 1764
download_size: 12932702421
dataset_size: 12890453538
- config_name: subset_166
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13238731446
num_examples: 1865
download_size: 13283761551
dataset_size: 13238731446
- config_name: subset_167
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13116757486
num_examples: 1794
download_size: 13160391058
dataset_size: 13116757486
- config_name: subset_168
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13507948525
num_examples: 1871
download_size: 13553344138
dataset_size: 13507948525
- config_name: subset_169
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13281798625
num_examples: 1845
download_size: 13326150167
dataset_size: 13281798625
- config_name: subset_17
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12530147436
num_examples: 2005
download_size: 12574908317
dataset_size: 12530147436
- config_name: subset_170
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13529725455
num_examples: 1877
download_size: 13575090508
dataset_size: 13529725455
- config_name: subset_171
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13437938918
num_examples: 1863
download_size: 13482733557
dataset_size: 13437938918
- config_name: subset_172
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13501430052
num_examples: 1841
download_size: 13546564951
dataset_size: 13501430052
- config_name: subset_173
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13150686556
num_examples: 1846
download_size: 13195014459
dataset_size: 13150686556
- config_name: subset_174
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13175889059
num_examples: 1833
download_size: 13218163076
dataset_size: 13175889059
- config_name: subset_175
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13102902135
num_examples: 1781
download_size: 13146424916
dataset_size: 13102902135
- config_name: subset_176
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13113639029
num_examples: 1809
download_size: 13156636320
dataset_size: 13113639029
- config_name: subset_177
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12925639837
num_examples: 1779
download_size: 12968379518
dataset_size: 12925639837
- config_name: subset_178
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13408612490
num_examples: 1810
download_size: 13450580598
dataset_size: 13408612490
- config_name: subset_179
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13385760100
num_examples: 1808
download_size: 13429460828
dataset_size: 13385760100
- config_name: subset_18
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12199898588
num_examples: 1933
download_size: 12243282500
dataset_size: 12199898588
- config_name: subset_180
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13530721722
num_examples: 1844
download_size: 13575900699
dataset_size: 13530721722
- config_name: subset_181
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13329843658
num_examples: 1808
download_size: 13373550290
dataset_size: 13329843658
- config_name: subset_182
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13421206325
num_examples: 1807
download_size: 13463752732
dataset_size: 13421206325
- config_name: subset_183
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13293489170
num_examples: 1790
download_size: 13336547592
dataset_size: 13293489170
- config_name: subset_184
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13170304190
num_examples: 1756
download_size: 13212477470
dataset_size: 13170304190
- config_name: subset_185
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13090231354
num_examples: 1782
download_size: 13133751213
dataset_size: 13090231354
- config_name: subset_186
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13202410818
num_examples: 1791
download_size: 13246025198
dataset_size: 13202410818
- config_name: subset_187
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13117901283
num_examples: 1780
download_size: 13161452953
dataset_size: 13117901283
- config_name: subset_188
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13040538960
num_examples: 1783
download_size: 13083376359
dataset_size: 13040538960
- config_name: subset_189
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13439617718
num_examples: 1814
download_size: 13483108790
dataset_size: 13439617718
- config_name: subset_19
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12564662092
num_examples: 1983
download_size: 12608495037
dataset_size: 12564662092
- config_name: subset_190
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13250790159
num_examples: 1802
download_size: 13294475053
dataset_size: 13250790159
- config_name: subset_191
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13576629500
num_examples: 1794
download_size: 13621403290
dataset_size: 13576629500
- config_name: subset_192
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13534614314
num_examples: 1805
download_size: 13579424803
dataset_size: 13534614314
- config_name: subset_193
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13205308518
num_examples: 1801
download_size: 13249015229
dataset_size: 13205308518
- config_name: subset_194
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13173748624
num_examples: 1781
download_size: 13217312510
dataset_size: 13173748624
- config_name: subset_195
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13262230923
num_examples: 1793
download_size: 13305822959
dataset_size: 13262230923
- config_name: subset_196
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13442030054
num_examples: 1808
download_size: 13485707566
dataset_size: 13442030054
- config_name: subset_197
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13279014406
num_examples: 1813
download_size: 13322778547
dataset_size: 13279014406
- config_name: subset_198
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13372535801
num_examples: 1806
download_size: 13416177685
dataset_size: 13372535801
- config_name: subset_199
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13260187374
num_examples: 1798
download_size: 13303786873
dataset_size: 13260187374
- config_name: subset_2
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14824893136
num_examples: 2052
download_size: 14872062544
dataset_size: 14824893136
- config_name: subset_20
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12469186480
num_examples: 1959
download_size: 12512664696
dataset_size: 12469186480
- config_name: subset_200
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13405377823
num_examples: 1814
download_size: 13449064998
dataset_size: 13405377823
- config_name: subset_201
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12857648840
num_examples: 1743
download_size: 12899723972
dataset_size: 12857648840
- config_name: subset_202
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13875371606
num_examples: 1837
download_size: 13920369985
dataset_size: 13875371606
- config_name: subset_203
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13399467921
num_examples: 1802
download_size: 13443037100
dataset_size: 13399467921
- config_name: subset_204
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13199244957
num_examples: 1790
download_size: 13242850615
dataset_size: 13199244957
- config_name: subset_205
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13522066316
num_examples: 1805
download_size: 13566902309
dataset_size: 13522066316
- config_name: subset_206
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13358642027
num_examples: 1793
download_size: 13402228636
dataset_size: 13358642027
- config_name: subset_207
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13117565020
num_examples: 1803
download_size: 13160834483
dataset_size: 13117565020
- config_name: subset_208
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13382606765
num_examples: 1772
download_size: 13425953489
dataset_size: 13382606765
- config_name: subset_209
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12974477742
num_examples: 1760
download_size: 13016667136
dataset_size: 12974477742
- config_name: subset_21
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12085360980
num_examples: 1934
download_size: 12128838861
dataset_size: 12085360980
- config_name: subset_210
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13476138379
num_examples: 1796
download_size: 13519239424
dataset_size: 13476138379
- config_name: subset_211
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13419976632
num_examples: 1804
download_size: 13463566063
dataset_size: 13419976632
- config_name: subset_212
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13480363370
num_examples: 1782
download_size: 13523820554
dataset_size: 13480363370
- config_name: subset_213
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13318991720
num_examples: 1782
download_size: 13362474010
dataset_size: 13318991720
- config_name: subset_214
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13172359156
num_examples: 1797
download_size: 13214875172
dataset_size: 13172359156
- config_name: subset_215
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13280189404
num_examples: 1805
download_size: 13323825654
dataset_size: 13280189404
- config_name: subset_216
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13228612682
num_examples: 1784
download_size: 13272174030
dataset_size: 13228612682
- config_name: subset_217
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13301788292
num_examples: 1795
download_size: 13344755106
dataset_size: 13301788292
- config_name: subset_218
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13444033873
num_examples: 1787
download_size: 13486618579
dataset_size: 13444033873
- config_name: subset_219
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13385006271
num_examples: 1798
download_size: 13428624694
dataset_size: 13385006271
- config_name: subset_22
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12068647069
num_examples: 1919
download_size: 12112052242
dataset_size: 12068647069
- config_name: subset_220
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13387979539
num_examples: 1793
download_size: 13431556808
dataset_size: 13387979539
- config_name: subset_221
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13184366878
num_examples: 1768
download_size: 13227780561
dataset_size: 13184366878
- config_name: subset_222
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13019967994
num_examples: 1771
download_size: 13063453259
dataset_size: 13019967994
- config_name: subset_223
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13009363028
num_examples: 1758
download_size: 13052746979
dataset_size: 13009363028
- config_name: subset_224
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13653747955
num_examples: 1816
download_size: 13698603852
dataset_size: 13653747955
- config_name: subset_225
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13118240446
num_examples: 1793
download_size: 13161929983
dataset_size: 13118240446
- config_name: subset_226
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13687749995
num_examples: 1820
download_size: 13732610680
dataset_size: 13687749995
- config_name: subset_227
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13607917615
num_examples: 1780
download_size: 13652544829
dataset_size: 13607917615
- config_name: subset_228
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13342866303
num_examples: 1787
download_size: 13386381015
dataset_size: 13342866303
- config_name: subset_229
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13310021752
num_examples: 1749
download_size: 13353269100
dataset_size: 13310021752
- config_name: subset_23
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12351336675
num_examples: 1955
download_size: 12394781249
dataset_size: 12351336675
- config_name: subset_230
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13841783584
num_examples: 1822
download_size: 13886657543
dataset_size: 13841783584
- config_name: subset_231
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13716496527
num_examples: 1829
download_size: 13761486198
dataset_size: 13716496527
- config_name: subset_232
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13468291883
num_examples: 1783
download_size: 13510236937
dataset_size: 13468291883
- config_name: subset_233
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13380265372
num_examples: 1800
download_size: 13423849283
dataset_size: 13380265372
- config_name: subset_234
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13713921272
num_examples: 1815
download_size: 13758828731
dataset_size: 13713921272
- config_name: subset_235
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13514220908
num_examples: 1804
download_size: 13559019032
dataset_size: 13514220908
- config_name: subset_236
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13494111624
num_examples: 1801
download_size: 13537700105
dataset_size: 13494111624
- config_name: subset_237
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13544863571
num_examples: 1790
download_size: 13589626744
dataset_size: 13544863571
- config_name: subset_238
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13554393425
num_examples: 1811
download_size: 13597849508
dataset_size: 13554393425
- config_name: subset_239
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13546762388
num_examples: 1801
download_size: 13591616587
dataset_size: 13546762388
- config_name: subset_24
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12065934022
num_examples: 1912
download_size: 12109149622
dataset_size: 12065934022
- config_name: subset_240
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13196715201
num_examples: 1770
download_size: 13240155032
dataset_size: 13196715201
- config_name: subset_241
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13659450783
num_examples: 1832
download_size: 13704113396
dataset_size: 13659450783
- config_name: subset_242
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13246292263
num_examples: 1769
download_size: 13289711363
dataset_size: 13246292263
- config_name: subset_243
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13566094535
num_examples: 1791
download_size: 13610689089
dataset_size: 13566094535
- config_name: subset_244
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13087093994
num_examples: 1754
download_size: 13130462985
dataset_size: 13087093994
- config_name: subset_245
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13606181830
num_examples: 1790
download_size: 13650688469
dataset_size: 13606181830
- config_name: subset_246
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13477021726
num_examples: 1793
download_size: 13520536644
dataset_size: 13477021726
- config_name: subset_247
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13530919547
num_examples: 1820
download_size: 13575873477
dataset_size: 13530919547
- config_name: subset_248
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13076084830
num_examples: 1779
download_size: 13118512234
dataset_size: 13076084830
- config_name: subset_249
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13529657489
num_examples: 1797
download_size: 13574453907
dataset_size: 13529657489
- config_name: subset_25
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12305898739
num_examples: 1925
download_size: 12349158665
dataset_size: 12305898739
- config_name: subset_250
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13381525728
num_examples: 1802
download_size: 13425146039
dataset_size: 13381525728
- config_name: subset_251
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13822740954
num_examples: 1801
download_size: 13866366309
dataset_size: 13822740954
- config_name: subset_252
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13673963248
num_examples: 1800
download_size: 13718685655
dataset_size: 13673963248
- config_name: subset_253
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13835058700
num_examples: 1805
download_size: 13879166806
dataset_size: 13835058700
- config_name: subset_254
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13383712853
num_examples: 1767
download_size: 13426540477
dataset_size: 13383712853
- config_name: subset_255
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13560805607
num_examples: 1763
download_size: 13604948210
dataset_size: 13560805607
- config_name: subset_256
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13237669958
num_examples: 1758
download_size: 13281016253
dataset_size: 13237669958
- config_name: subset_257
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13923624660
num_examples: 1807
download_size: 13968391517
dataset_size: 13923624660
- config_name: subset_258
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11361216368
num_examples: 1503
download_size: 11398068945
dataset_size: 11361216368
- config_name: subset_26
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11342909197
num_examples: 1832
download_size: 11383158933
dataset_size: 11342909197
- config_name: subset_27
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11602861316
num_examples: 1862
download_size: 11643598489
dataset_size: 11602861316
- config_name: subset_28
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11423759992
num_examples: 1829
download_size: 11464108953
dataset_size: 11423759992
- config_name: subset_29
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11317928199
num_examples: 1828
download_size: 11358232335
dataset_size: 11317928199
- config_name: subset_3
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 15009073368
num_examples: 2081
download_size: 15059069139
dataset_size: 15009073368
- config_name: subset_30
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11340440138
num_examples: 1801
download_size: 11380580568
dataset_size: 11340440138
- config_name: subset_31
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11585306838
num_examples: 1904
download_size: 11626002198
dataset_size: 11585306838
- config_name: subset_32
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11839140118
num_examples: 1904
download_size: 11881221624
dataset_size: 11839140118
- config_name: subset_33
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11667822890
num_examples: 1874
download_size: 11709572131
dataset_size: 11667822890
- config_name: subset_34
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12102711256
num_examples: 1932
download_size: 12146166646
dataset_size: 12102711256
- config_name: subset_35
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12121565378
num_examples: 1902
download_size: 12164750724
dataset_size: 12121565378
- config_name: subset_36
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11813142889
num_examples: 1877
download_size: 11853807475
dataset_size: 11813142889
- config_name: subset_37
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12105640911
num_examples: 1862
download_size: 12148512935
dataset_size: 12105640911
- config_name: subset_38
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12055811433
num_examples: 1878
download_size: 12098864002
dataset_size: 12055811433
- config_name: subset_39
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12027703389
num_examples: 1879
download_size: 12070810328
dataset_size: 12027703389
- config_name: subset_4
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14871815890
num_examples: 2102
download_size: 14921781959
dataset_size: 14871815890
- config_name: subset_40
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12297307550
num_examples: 1919
download_size: 12340628479
dataset_size: 12297307550
- config_name: subset_41
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11673113363
num_examples: 1828
download_size: 11714539382
dataset_size: 11673113363
- config_name: subset_42
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12114525371
num_examples: 1884
download_size: 12157548017
dataset_size: 12114525371
- config_name: subset_43
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12072393153
num_examples: 1874
download_size: 12115455581
dataset_size: 12072393153
- config_name: subset_44
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12231430525
num_examples: 1894
download_size: 12274252942
dataset_size: 12231430525
- config_name: subset_45
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12138951543
num_examples: 1869
download_size: 12181833715
dataset_size: 12138951543
- config_name: subset_46
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12338384571
num_examples: 1899
download_size: 12381559009
dataset_size: 12338384571
- config_name: subset_47
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12595976439
num_examples: 1913
download_size: 12639701571
dataset_size: 12595976439
- config_name: subset_48
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12291667679
num_examples: 1922
download_size: 12335068918
dataset_size: 12291667679
- config_name: subset_49
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12076363347
num_examples: 1844
download_size: 12118856817
dataset_size: 12076363347
- config_name: subset_5
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14463875379
num_examples: 2045
download_size: 14511930615
dataset_size: 14463875379
- config_name: subset_50
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12201692647
num_examples: 1884
download_size: 12244065455
dataset_size: 12201692647
- config_name: subset_51
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12205237779
num_examples: 1918
download_size: 12247882809
dataset_size: 12205237779
- config_name: subset_52
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12189874310
num_examples: 1880
download_size: 12232943844
dataset_size: 12189874310
- config_name: subset_53
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12566145830
num_examples: 1897
download_size: 12609016174
dataset_size: 12566145830
- config_name: subset_54
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12446537009
num_examples: 1901
download_size: 12489664423
dataset_size: 12446537009
- config_name: subset_55
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12447865477
num_examples: 1890
download_size: 12490882006
dataset_size: 12447865477
- config_name: subset_56
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 11886184932
num_examples: 1851
download_size: 11927798488
dataset_size: 11886184932
- config_name: subset_57
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12488545723
num_examples: 1904
download_size: 12531536020
dataset_size: 12488545723
- config_name: subset_58
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12062402025
num_examples: 1850
download_size: 12105197651
dataset_size: 12062402025
- config_name: subset_59
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12373068180
num_examples: 1875
download_size: 12415995259
dataset_size: 12373068180
- config_name: subset_6
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14501738118
num_examples: 2090
download_size: 14551152342
dataset_size: 14501738118
- config_name: subset_60
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12180776572
num_examples: 1881
download_size: 12223818393
dataset_size: 12180776572
- config_name: subset_61
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12048086660
num_examples: 1849
download_size: 12090892833
dataset_size: 12048086660
- config_name: subset_62
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12657808103
num_examples: 1895
download_size: 12701743869
dataset_size: 12657808103
- config_name: subset_63
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12392654565
num_examples: 1874
download_size: 12434138173
dataset_size: 12392654565
- config_name: subset_64
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12350385561
num_examples: 1882
download_size: 12393397192
dataset_size: 12350385561
- config_name: subset_65
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12373129329
num_examples: 1871
download_size: 12410450046
dataset_size: 12373129329
- config_name: subset_66
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12278015011
num_examples: 1850
download_size: 12320703540
dataset_size: 12278015011
- config_name: subset_67
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12371002905
num_examples: 1896
download_size: 12414173841
dataset_size: 12371002905
- config_name: subset_68
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12305454575
num_examples: 1850
download_size: 12348208355
dataset_size: 12305454575
- config_name: subset_69
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12411747691
num_examples: 1854
download_size: 12454448363
dataset_size: 12411747691
- config_name: subset_7
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14508087261
num_examples: 2116
download_size: 14558140635
dataset_size: 14508087261
- config_name: subset_70
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12350275460
num_examples: 1851
download_size: 12393029167
dataset_size: 12350275460
- config_name: subset_71
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12537513558
num_examples: 1876
download_size: 12581645796
dataset_size: 12537513558
- config_name: subset_72
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12490553756
num_examples: 1841
download_size: 12533178091
dataset_size: 12490553756
- config_name: subset_73
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12518388540
num_examples: 1882
download_size: 12561275853
dataset_size: 12518388540
- config_name: subset_74
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12497263818
num_examples: 1865
download_size: 12540088306
dataset_size: 12497263818
- config_name: subset_75
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12305999986
num_examples: 1848
download_size: 12348789656
dataset_size: 12305999986
- config_name: subset_76
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12750830510
num_examples: 1894
download_size: 12795055702
dataset_size: 12750830510
- config_name: subset_77
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12671852813
num_examples: 1875
download_size: 12715112660
dataset_size: 12671852813
- config_name: subset_78
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12871126714
num_examples: 1933
download_size: 12914706808
dataset_size: 12871126714
- config_name: subset_79
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12749824472
num_examples: 1900
download_size: 12793397015
dataset_size: 12749824472
- config_name: subset_8
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 14210430391
num_examples: 2095
download_size: 14259459141
dataset_size: 14210430391
- config_name: subset_80
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12490567475
num_examples: 1821
download_size: 12533032118
dataset_size: 12490567475
- config_name: subset_81
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12666304824
num_examples: 1873
download_size: 12710378751
dataset_size: 12666304824
- config_name: subset_82
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12789616733
num_examples: 1862
download_size: 12833584641
dataset_size: 12789616733
- config_name: subset_83
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12338377273
num_examples: 1809
download_size: 12380811191
dataset_size: 12338377273
- config_name: subset_84
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12554741918
num_examples: 1792
download_size: 12597991658
dataset_size: 12554741918
- config_name: subset_85
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12701297528
num_examples: 1870
download_size: 12745293256
dataset_size: 12701297528
- config_name: subset_86
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12337303789
num_examples: 1838
download_size: 12378681021
dataset_size: 12337303789
- config_name: subset_87
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12515934904
num_examples: 1852
download_size: 12559866703
dataset_size: 12515934904
- config_name: subset_88
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12486460328
num_examples: 1867
download_size: 12529352258
dataset_size: 12486460328
- config_name: subset_89
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13005033357
num_examples: 1903
download_size: 13050512457
dataset_size: 13005033357
- config_name: subset_9
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13964425360
num_examples: 2073
download_size: 14011911721
dataset_size: 13964425360
- config_name: subset_90
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12508609901
num_examples: 1826
download_size: 12552399045
dataset_size: 12508609901
- config_name: subset_91
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12766878985
num_examples: 1852
download_size: 12810779213
dataset_size: 12766878985
- config_name: subset_92
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12484630250
num_examples: 1852
download_size: 12527348681
dataset_size: 12484630250
- config_name: subset_93
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12560208384
num_examples: 1830
download_size: 12602554131
dataset_size: 12560208384
- config_name: subset_94
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12563291097
num_examples: 1848
download_size: 12607156154
dataset_size: 12563291097
- config_name: subset_95
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12839362555
num_examples: 1857
download_size: 12882825153
dataset_size: 12839362555
- config_name: subset_96
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12907967612
num_examples: 1885
download_size: 12952072820
dataset_size: 12907967612
- config_name: subset_97
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12893205307
num_examples: 1869
download_size: 12935785916
dataset_size: 12893205307
- config_name: subset_98
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12554140420
num_examples: 1860
download_size: 12598098176
dataset_size: 12554140420
- config_name: subset_99
features:
- name: line_no
dtype: int64
- name: enA.id
dtype: string
- name: enA.laser_score
dtype: float64
- name: zhA.id
dtype: string
- name: zhA.laser_score
dtype: float64
- name: zhA.audio.speaker_embedding
sequence: float32
- name: zhA.audio.speaker_embedding.full
sequence:
sequence: float32
- name: enA.audio.speaker_embedding
sequence: float32
- name: enA.audio.speaker_embedding.full
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13502104593
num_examples: 1915
download_size: 13548188642
dataset_size: 13502104593
configs:
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
- config_name: subset_224
data_files:
- split: train
path: subset_224/train-*
- config_name: subset_225
data_files:
- split: train
path: subset_225/train-*
- config_name: subset_226
data_files:
- split: train
path: subset_226/train-*
- config_name: subset_227
data_files:
- split: train
path: subset_227/train-*
- config_name: subset_228
data_files:
- split: train
path: subset_228/train-*
- config_name: subset_229
data_files:
- split: train
path: subset_229/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_230
data_files:
- split: train
path: subset_230/train-*
- config_name: subset_231
data_files:
- split: train
path: subset_231/train-*
- config_name: subset_232
data_files:
- split: train
path: subset_232/train-*
- config_name: subset_233
data_files:
- split: train
path: subset_233/train-*
- config_name: subset_234
data_files:
- split: train
path: subset_234/train-*
- config_name: subset_235
data_files:
- split: train
path: subset_235/train-*
- config_name: subset_236
data_files:
- split: train
path: subset_236/train-*
- config_name: subset_237
data_files:
- split: train
path: subset_237/train-*
- config_name: subset_238
data_files:
- split: train
path: subset_238/train-*
- config_name: subset_239
data_files:
- split: train
path: subset_239/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_240
data_files:
- split: train
path: subset_240/train-*
- config_name: subset_241
data_files:
- split: train
path: subset_241/train-*
- config_name: subset_242
data_files:
- split: train
path: subset_242/train-*
- config_name: subset_243
data_files:
- split: train
path: subset_243/train-*
- config_name: subset_244
data_files:
- split: train
path: subset_244/train-*
- config_name: subset_245
data_files:
- split: train
path: subset_245/train-*
- config_name: subset_246
data_files:
- split: train
path: subset_246/train-*
- config_name: subset_247
data_files:
- split: train
path: subset_247/train-*
- config_name: subset_248
data_files:
- split: train
path: subset_248/train-*
- config_name: subset_249
data_files:
- split: train
path: subset_249/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_250
data_files:
- split: train
path: subset_250/train-*
- config_name: subset_251
data_files:
- split: train
path: subset_251/train-*
- config_name: subset_252
data_files:
- split: train
path: subset_252/train-*
- config_name: subset_253
data_files:
- split: train
path: subset_253/train-*
- config_name: subset_254
data_files:
- split: train
path: subset_254/train-*
- config_name: subset_255
data_files:
- split: train
path: subset_255/train-*
- config_name: subset_256
data_files:
- split: train
path: subset_256/train-*
- config_name: subset_257
data_files:
- split: train
path: subset_257/train-*
- config_name: subset_258
data_files:
- split: train
path: subset_258/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
HuggingFaceTB/smollm-corpus | HuggingFaceTB | "2024-09-06T07:04:57Z" | 10,826 | 273 | [
"language:en",
"license:odc-by",
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-15T13:51:48Z" | ---
license: odc-by
dataset_info:
- config_name: cosmopedia-v2
features:
- name: prompt
dtype: string
- name: text
dtype: string
- name: token_length
dtype: int64
- name: audience
dtype: string
- name: format
dtype: string
- name: seed_data
dtype: string
splits:
- name: train
num_bytes: 212503640747
num_examples: 39134000
download_size: 122361137711
dataset_size: 212503640747
- config_name: fineweb-edu-dedup
features:
- name: text
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: dump
dtype: string
- name: url
dtype: string
- name: date
dtype: timestamp[s]
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 957570164451
num_examples: 190168005
download_size: 550069279849
dataset_size: 957570164451
- config_name: python-edu
features:
- name: blob_id
dtype: string
- name: repo_name
dtype: string
- name: path
dtype: string
- name: length_bytes
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 989334135
num_examples: 7678448
download_size: 643903049
dataset_size: 989334135
configs:
- config_name: cosmopedia-v2
data_files:
- split: train
path: cosmopedia-v2/train-*
- config_name: fineweb-edu-dedup
data_files:
- split: train
path: fineweb-edu-dedup/train-*
- config_name: python-edu
data_files:
- split: train
path: python-edu/train-*
language:
- en
---
# SmolLM-Corpus
This dataset is a curated collection of high-quality educational and synthetic data designed for training small language models.
You can find more details about the models trained on this dataset in our [SmolLM blog post](https://huggingface.co./blog/smollm).
# Dataset subsets
## Cosmopedia v2
Cosmopedia v2 is an enhanced version of Cosmopedia, the largest synthetic dataset for pre-training, consisting of over 39 million textbooks, blog posts, and stories generated by [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co./mistralai/Mixtral-8x7B-Instruct-v0.1).
Most of the samples are generated by prompting the model to generate content on specific topics using a web page referred to as a "seed sample," as shown in Figure 1. We use web samples to increase diversity and expand the range of prompts.
You can find more details in this [blog post](https://huggingface.co./blog/smollm).
### Dataset Features
* `prompt (string)`: The input prompt used to generate the text.
* `text (string)`: The generated text content.
* `token_length (int64)`: The length of the text in tokens (Mistral-7B tokenizer).
* `audience (string)`: The intended audience for the content.
* `format (string)`: The format of the content (e.g., textbook, story).
* `seed_data (string)`: The seed sample used to generate the text.
### Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/smollm-corpus", "cosmopedia-v2", split="train", num_proc=16)
print(ds[0])
```
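Given the size of this split, it can also be loaded with `streaming=True` and filtered on the fly by fields such as `audience` or `format`. A toy sketch of that filtering logic over hypothetical records (with `datasets`, the equivalent is `ds.filter(...)`):

```python
# Hypothetical records mirroring the cosmopedia-v2 schema (values are made up).
records = [
    {"text": "...", "audience": "College students", "format": "textbook", "token_length": 512},
    {"text": "...", "audience": "Young children", "format": "story", "token_length": 301},
]

# Keep only textbook-style samples, e.g. for a curriculum-focused subset.
textbooks = [r for r in records if r["format"] == "textbook"]
print(len(textbooks))  # 1
```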
## Python-Edu
The `python-edu` subset consists of Python files that were scored 4 or more by the [educational code model](https://huggingface.co./HuggingFaceTB/python-edu-scorer).
The files were extracted from the [`stack-v2-train`](https://huggingface.co./datasets/bigcode/the-stack-v2-train-full-ids) dataset.
### Dataset Features
* `blob_id (string)`: Software Heritage (SWH) ID of the file on AWS S3.
* `repo_name (string)`: Repository name on GitHub.
* `path (string)`: The file path within the repository.
* `length_bytes (int64)`: Length of the file content in UTF-8 bytes.
* `score (float64)`: The output of the educational scoring model.
* `int_score (int64)`: The rounded educational score.
### Downloading the data
The file contents are downloaded from Software Heritage's S3 bucket to ensure data compliance.
Please refer to [the-stack-v2](https://huggingface.co./datasets/bigcode/the-stack-v2-train-full-ids) for the data license.
When running on a 16-core AWS `us-east-1` instance, this script takes ~6 hours to download the files:
```python
import boto3
import gzip
from datasets import load_dataset
from botocore.exceptions import ClientError
num_proc = 16
s3 = boto3.client('s3')
bucket_name = "softwareheritage"
def download_contents(blob_id):
    # File contents are stored gzip-compressed under content/<blob_id>.
    key = f"content/{blob_id}"
try:
obj = s3.get_object(Bucket=bucket_name, Key=key)
with gzip.GzipFile(fileobj=obj['Body']) as fin:
content = fin.read().decode("utf-8", errors="ignore")
return {"text": content, "download_success": True}
except ClientError as e:
if e.response['Error']['Code'] == 'NoSuchKey':
print(f"File not found: {key}")
return {"text": "", "download_success": False}
else:
raise
ds = load_dataset("HuggingFaceTB/smollm-corpus", "python-edu", split="train", num_proc=num_proc)
ds = ds.map(download_contents, input_columns="blob_id", num_proc=num_proc)
# Filter out failed downloads
ds = ds.filter(lambda x: x['download_success'])
# Optionally, print the first example to verify the data
print(ds[0])
```
## FineWeb-Edu (deduplicated)
FineWeb-Edu-Dedup is a deduplicated subset of the [FineWeb-Edu](https://huggingface.co./datasets/HuggingFaceFW/fineweb-edu) dataset, containing 220 billion tokens of educational web pages.
The source dataset was filtered using an educational quality classifier to retain only the highest quality educational content.
For more information, refer to the [FineWeb-v1 blog post](https://huggingface.co./spaces/HuggingFaceFW/blogpost-fineweb-v1).
### Dataset Features
* `text (string)`: The web page's text content.
* `id (string)`: Unique ID of the web page.
* `metadata (struct)`: Metadata about the web page, including:
* `dump (string)`: The source CommonCrawl dump.
* `url (string)`: The URL of the web page.
* `date (timestamp[s])`: The date the web page was captured.
* `file_path (string)`: The file path of the commoncrawl snapshot.
* `language (string)`: The language of the web page.
* `language_score (float64)`: The language probability.
* `token_count (int64)`: The token count of the web page (gpt2 tokenizer).
* `score (float64)`: The educational quality score.
* `int_score (int64)`: The rounded educational quality score.
### Loading the dataset
```python
from datasets import load_dataset
ds = load_dataset("HuggingFaceTB/smollm-corpus", "fineweb-edu-dedup", split="train", num_proc=16)
print(ds[0])
```
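Each page also carries its educational quality score, so a higher-quality subset can be selected with a threshold (the cutoff of 4 below is an arbitrary choice). A minimal sketch over hypothetical rows, where with `datasets` the equivalent is `ds.filter(lambda x: x["int_score"] >= 4)`:

```python
# Hypothetical rows mirroring the fineweb-edu-dedup schema (values are made up).
rows = [
    {"id": "page-a", "score": 4.2, "int_score": 4, "token_count": 880},
    {"id": "page-b", "score": 2.9, "int_score": 3, "token_count": 430},
]

# Keep pages at or above the chosen quality threshold.
threshold = 4
kept = [r for r in rows if r["int_score"] >= threshold]
print([r["id"] for r in kept])  # ['page-a']
```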
## Citation
```
@software{benallal2024smollmcorpus,
author = {Ben Allal, Loubna and Lozhkov, Anton and Penedo, Guilherme and Wolf, Thomas and von Werra, Leandro},
title = {SmolLM-Corpus},
month = July,
year = 2024,
url = {https://huggingface.co./datasets/HuggingFaceTB/smollm-corpus}
}
``` |
Jackmin108/xtreme | Jackmin108 | "2023-10-21T20:14:19Z" | 10,814 | 0 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-10-20T13:16:37Z" | ---
license: apache-2.0
configs:
- config_name: mnli
data_files:
- split: train
path:
- "mnli/train-0000.parquet"
- "mnli/train-0001.parquet"
- "mnli/train-0002.parquet"
- "mnli/train-0003.parquet"
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: tydiqa
data_files:
- split: train
path:
- "tydiqa/ko/train.parquet"
- "tydiqa/sw/train.parquet"
- "tydiqa/ru/train.parquet"
- "tydiqa/te/train.parquet"
- "tydiqa/ar/train.parquet"
- "tydiqa/fi/train.parquet"
- "tydiqa/bn/train.parquet"
- "tydiqa/en/train.parquet"
- "tydiqa/id/train.parquet"
- split: validation
path:
- "tydiqa/ko/validation.parquet"
- "tydiqa/sw/validation.parquet"
- "tydiqa/ru/validation.parquet"
- "tydiqa/te/validation.parquet"
- "tydiqa/ar/validation.parquet"
- "tydiqa/fi/validation.parquet"
- "tydiqa/bn/validation.parquet"
- "tydiqa/en/validation.parquet"
- "tydiqa/id/validation.parquet"
- config_name: tydiqa.ko
data_files:
- split: train
path: "tydiqa/ko/train.parquet"
- split: validation
path: "tydiqa/ko/validation.parquet"
- config_name: tydiqa.sw
data_files:
- split: train
path: "tydiqa/sw/train.parquet"
- split: validation
path: "tydiqa/sw/validation.parquet"
- config_name: tydiqa.ru
data_files:
- split: train
path: "tydiqa/ru/train.parquet"
- split: validation
path: "tydiqa/ru/validation.parquet"
- config_name: tydiqa.te
data_files:
- split: train
path: "tydiqa/te/train.parquet"
- split: validation
path: "tydiqa/te/validation.parquet"
- config_name: tydiqa.ar
data_files:
- split: train
path: "tydiqa/ar/train.parquet"
- split: validation
path: "tydiqa/ar/validation.parquet"
- config_name: tydiqa.fi
data_files:
- split: train
path: "tydiqa/fi/train.parquet"
- split: validation
path: "tydiqa/fi/validation.parquet"
- config_name: tydiqa.bn
data_files:
- split: train
path: "tydiqa/bn/train.parquet"
- split: validation
path: "tydiqa/bn/validation.parquet"
- config_name: tydiqa.en
data_files:
- split: train
path: "tydiqa/en/train.parquet"
- split: validation
path: "tydiqa/en/validation.parquet"
- config_name: tydiqa.id
data_files:
- split: train
path: "tydiqa/id/train.parquet"
- split: validation
path: "tydiqa/id/validation.parquet"
- config_name: xnli
data_files:
- split: validation
path:
- xnli/hi/validation.parquet
- xnli/zh/validation.parquet
- xnli/sw/validation.parquet
- xnli/tr/validation.parquet
- xnli/en/validation.parquet
- xnli/th/validation.parquet
- xnli/ru/validation.parquet
- xnli/ar/validation.parquet
- xnli/vi/validation.parquet
- xnli/bg/validation.parquet
- xnli/es/validation.parquet
- xnli/el/validation.parquet
- xnli/fr/validation.parquet
- xnli/ur/validation.parquet
- xnli/de/validation.parquet
- split: test
path:
- xnli/hi/test.parquet
- xnli/zh/test.parquet
- xnli/sw/test.parquet
- xnli/tr/test.parquet
- xnli/en/test.parquet
- xnli/th/test.parquet
- xnli/ru/test.parquet
- xnli/ar/test.parquet
- xnli/vi/test.parquet
- xnli/bg/test.parquet
- xnli/es/test.parquet
- xnli/el/test.parquet
- xnli/fr/test.parquet
- xnli/ur/test.parquet
- xnli/de/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.hi
data_files:
- split: validation
path: xnli/hi/validation.parquet
- split: test
path: xnli/hi/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.zh
data_files:
- split: validation
path: xnli/zh/validation.parquet
- split: test
path: xnli/zh/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.sw
data_files:
- split: validation
path: xnli/sw/validation.parquet
- split: test
path: xnli/sw/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.tr
data_files:
- split: validation
path: xnli/tr/validation.parquet
- split: test
path: xnli/tr/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.en
data_files:
- split: validation
path: xnli/en/validation.parquet
- split: test
path: xnli/en/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.th
data_files:
- split: validation
path: xnli/th/validation.parquet
- split: test
path: xnli/th/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.ru
data_files:
- split: validation
path: xnli/ru/validation.parquet
- split: test
path: xnli/ru/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.ar
data_files:
- split: validation
path: xnli/ar/validation.parquet
- split: test
path: xnli/ar/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.vi
data_files:
- split: validation
path: xnli/vi/validation.parquet
- split: test
path: xnli/vi/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.bg
data_files:
- split: validation
path: xnli/bg/validation.parquet
- split: test
path: xnli/bg/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.es
data_files:
- split: validation
path: xnli/es/validation.parquet
- split: test
path: xnli/es/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.el
data_files:
- split: validation
path: xnli/el/validation.parquet
- split: test
path: xnli/el/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.fr
data_files:
- split: validation
path: xnli/fr/validation.parquet
- split: test
path: xnli/fr/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.ur
data_files:
- split: validation
path: xnli/ur/validation.parquet
- split: test
path: xnli/ur/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: xnli.de
data_files:
- split: validation
path: xnli/de/validation.parquet
- split: test
path: xnli/de/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- entailment
- neutral
- contradiction
_type: ClassLabel
idx:
dtype: int32
_type: Value
- config_name: paws-x.de
data_files:
- split: train
path: paws-x/de/train.parquet
- split: validation
path: paws-x/de/validation.parquet
- split: test
path: paws-x/de/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.en
data_files:
- split: train
path: paws-x/en/train.parquet
- split: validation
path: paws-x/en/validation.parquet
- split: test
path: paws-x/en/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.es
data_files:
- split: train
path: paws-x/es/train.parquet
- split: validation
path: paws-x/es/validation.parquet
- split: test
path: paws-x/es/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.fr
data_files:
- split: train
path: paws-x/fr/train.parquet
- split: validation
path: paws-x/fr/validation.parquet
- split: test
path: paws-x/fr/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.ja
data_files:
- split: train
path: paws-x/ja/train.parquet
- split: validation
path: paws-x/ja/validation.parquet
- split: test
path: paws-x/ja/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.ko
data_files:
- split: train
path: paws-x/ko/train.parquet
- split: validation
path: paws-x/ko/validation.parquet
- split: test
path: paws-x/ko/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
- config_name: paws-x.zh
data_files:
- split: train
path: paws-x/zh/train.parquet
- split: validation
path: paws-x/zh/validation.parquet
- split: test
path: paws-x/zh/test.parquet
features:
sentence1:
dtype: string
_type: Value
sentence2:
dtype: string
_type: Value
label:
names:
- not_paraphrase
- paraphrase
_type: ClassLabel
---
|
lmms-lab/Video-MME | lmms-lab | "2024-07-04T08:14:20Z" | 10,733 | 32 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-07T12:06:37Z" | ---
dataset_info:
config_name: videomme
features:
- name: video_id
dtype: string
- name: duration
dtype: string
- name: domain
dtype: string
- name: sub_category
dtype: string
- name: url
dtype: string
- name: videoID
dtype: string
- name: question_id
dtype: string
- name: task_type
dtype: string
- name: question
dtype: string
- name: options
sequence: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1003241.0
num_examples: 2700
download_size: 405167
dataset_size: 1003241.0
configs:
- config_name: videomme
data_files:
- split: test
path: videomme/test-*
---
|
bigcode/self-oss-instruct-sc2-instructions | bigcode | "2024-04-23T20:23:15Z" | 10,661 | 4 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-04-23T20:23:00Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: fingerprint
dtype: 'null'
- name: seed
dtype: string
- name: sha1
dtype: string
- name: id
dtype: int64
- name: concepts
sequence: string
- name: instruction
dtype: string
splits:
- name: train
num_bytes: 1708698948
num_examples: 237517
download_size: 341570013
dataset_size: 1708698948
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Real-IAD/Real-IAD | Real-IAD | "2024-09-22T05:59:43Z" | 10,646 | 34 | [
"license:cc-by-nc-sa-4.0",
"size_categories:100B<n<1T",
"region:us"
] | null | "2024-04-11T08:03:17Z" | ---
license: cc-by-nc-sa-4.0
pretty_name: Real-IAD
size_categories:
- 100B<n<1T
---
Website: [https://realiad4ad.github.io/Real-IAD/](https://realiad4ad.github.io/Real-IAD/)
Real-IAD is released for research purposes only.
If your Hugging Face account is registered with your college email address, we will approve your access request directly;
otherwise, please send an email to [email protected] from your affiliation email address. Thank you for your understanding and cooperation.
A recommended application email format is:
----------------------------------------------------------------------
I am writing to request access to the Real-IAD dataset for research purposes.
Name: [Your Full Name]
Affiliation: [Name of your Institution/Organization]
Title: [Your Title]
Supervisor: [Name of your supervisor] (this information is only necessary if you are a student)
----------------------------------------------------------------------
Evaluation Tool: [ADEval](https://pypi.org/project/ADEval/)
- Install ADEval
```shell
python3 -m pip install ADEval
```
- Execute Evaluation
```shell
python3 -m adeval --sample_key_pat "([a-zA-Z][a-zA-Z0-9_]*_[0-9]{4}_[A-Z][A-Z_]*[A-Z])_C[0-9]_" some_object.pkl
```
where the result file `some_object.pkl` can be generated following the guidance of ADEval [ReadMe](https://github.com/winggan/adeval?tab=readme-ov-file#commandline-usage).
For Real-IAD, 30 result files should be generated, one per object for a given experiment setting, e.g. `audiojack.pkl`, `bottle_cap.pkl`, ...
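The `--sample_key_pat` regular expression in the command above groups per-image predictions by sample; a quick sketch of how its first capture group extracts the sample key from a file name (the name below is illustrative, not taken from the dataset):

```python
import re

# Pattern from the adeval command line above; group 1 is the sample key.
pat = re.compile(r"([a-zA-Z][a-zA-Z0-9_]*_[0-9]{4}_[A-Z][A-Z_]*[A-Z])_C[0-9]_")

name = "audiojack_0001_NG_C3_result.png"  # hypothetical result-file name
match = pat.search(name)
print(match.group(1))  # audiojack_0001_NG
```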
> Note: the argument `--sample_key_pat` is identical for all experiment settings of Real-IAD |
trl-internal-testing/zen | trl-internal-testing | "2024-11-26T10:29:22Z" | 10,529 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-13T21:03:47Z" | ---
dataset_info:
- config_name: conversational_implicit_prompt_preference
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2755
num_examples: 17
- name: test
num_bytes: 386
num_examples: 2
download_size: 6623
dataset_size: 3141
- config_name: conversational_language_modeling
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1399
num_examples: 17
- name: test
num_bytes: 210
num_examples: 2
download_size: 3723
dataset_size: 1609
- config_name: conversational_preference
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2070
num_examples: 17
- name: test
num_bytes: 295
num_examples: 2
download_size: 8123
dataset_size: 2365
- config_name: conversational_prompt_completion
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1467
num_examples: 17
- name: test
num_bytes: 218
num_examples: 2
download_size: 5796
dataset_size: 1685
- config_name: conversational_prompt_only
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 821
num_examples: 17
- name: test
num_bytes: 107
num_examples: 2
download_size: 3326
dataset_size: 928
- config_name: conversational_unpaired_preference
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
list:
- name: content
dtype: string
- name: role
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 1441
num_examples: 17
- name: test
num_bytes: 219
num_examples: 2
download_size: 6421
dataset_size: 1660
- config_name: standard_implicit_prompt_preference
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 1537
num_examples: 17
- name: test
num_bytes: 258
num_examples: 2
download_size: 4330
dataset_size: 1795
- config_name: standard_language_modeling
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 744
num_examples: 17
- name: test
num_bytes: 136
num_examples: 2
download_size: 2457
dataset_size: 880
- config_name: standard_preference
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 1213
num_examples: 17
- name: test
num_bytes: 205
num_examples: 2
download_size: 4466
dataset_size: 1418
- config_name: standard_prompt_completion
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 812
num_examples: 17
- name: test
num_bytes: 144
num_examples: 2
download_size: 3231
dataset_size: 956
- config_name: standard_prompt_only
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 460
num_examples: 17
- name: test
num_bytes: 69
num_examples: 2
download_size: 2044
dataset_size: 529
- config_name: standard_stepwise
features:
- name: prompt
dtype: string
- name: completions
sequence: string
- name: label
sequence: bool
splits:
- name: train
num_bytes: 1402.9473684210527
num_examples: 17
- name: test
num_bytes: 165.05263157894737
num_examples: 2
download_size: 5033
dataset_size: 1568.0
- config_name: standard_stepwise_supervision
features:
- name: prompt
dtype: string
- name: completions
sequence: string
- name: labels
sequence: bool
splits:
- name: train
num_bytes: 1382
num_examples: 17
- name: test
num_bytes: 187
num_examples: 2
download_size: 5039
dataset_size: 1569
- config_name: standard_unpaired_preference
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: label
dtype: bool
splits:
- name: train
num_bytes: 840
num_examples: 17
- name: test
num_bytes: 131
num_examples: 2
download_size: 3861
dataset_size: 971
configs:
- config_name: conversational_implicit_prompt_preference
data_files:
- split: train
path: conversational_implicit_prompt_preference/train-*
- split: test
path: conversational_implicit_prompt_preference/test-*
- config_name: conversational_language_modeling
data_files:
- split: train
path: conversational_language_modeling/train-*
- split: test
path: conversational_language_modeling/test-*
- config_name: conversational_preference
data_files:
- split: train
path: conversational_preference/train-*
- split: test
path: conversational_preference/test-*
- config_name: conversational_prompt_completion
data_files:
- split: train
path: conversational_prompt_completion/train-*
- split: test
path: conversational_prompt_completion/test-*
- config_name: conversational_prompt_only
data_files:
- split: train
path: conversational_prompt_only/train-*
- split: test
path: conversational_prompt_only/test-*
- config_name: conversational_unpaired_preference
data_files:
- split: train
path: conversational_unpaired_preference/train-*
- split: test
path: conversational_unpaired_preference/test-*
- config_name: standard_implicit_prompt_preference
data_files:
- split: train
path: standard_implicit_prompt_preference/train-*
- split: test
path: standard_implicit_prompt_preference/test-*
- config_name: standard_language_modeling
data_files:
- split: train
path: standard_language_modeling/train-*
- split: test
path: standard_language_modeling/test-*
- config_name: standard_preference
data_files:
- split: train
path: standard_preference/train-*
- split: test
path: standard_preference/test-*
- config_name: standard_prompt_completion
data_files:
- split: train
path: standard_prompt_completion/train-*
- split: test
path: standard_prompt_completion/test-*
- config_name: standard_prompt_only
data_files:
- split: train
path: standard_prompt_only/train-*
- split: test
path: standard_prompt_only/test-*
- config_name: standard_stepwise
data_files:
- split: train
path: standard_stepwise/train-*
- split: test
path: standard_stepwise/test-*
- config_name: standard_stepwise_supervision
data_files:
- split: train
path: standard_stepwise_supervision/train-*
- split: test
path: standard_stepwise_supervision/test-*
- config_name: standard_unpaired_preference
data_files:
- split: train
path: standard_unpaired_preference/train-*
- split: test
path: standard_unpaired_preference/test-*
---
|
EleutherAI/race | EleutherAI | "2023-07-03T21:27:18Z" | 10,526 | 5 | [
"task_categories:multiple-choice",
"task_ids:multiple-choice-qa",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1704.04683",
"region:us"
] | [
"multiple-choice"
] | "2023-07-03T13:20:38Z" | ---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
license:
- other
multilinguality:
- monolingual
pretty_name: RACE
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- multiple-choice
task_ids:
- multiple-choice-qa
paperswithcode_id: race
dataset_info:
---
# "race" Grouped by Article
This is a modified version of https://huggingface.co./datasets/race that returns documents grouped by article context instead of by question. **Note:** This dataset currently only contains the test set of the ```high``` subset of the data.
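The regrouping can be illustrated with a toy sketch over hypothetical per-question records in the original layout (field names follow the card below; the values are made up):

```python
from collections import defaultdict

# Hypothetical rows in the original per-question layout.
rows = [
    {"example_id": "high132.txt", "question": "Q1", "answer": "A"},
    {"example_id": "high132.txt", "question": "Q2", "answer": "D"},
    {"example_id": "high1.txt", "question": "Q3", "answer": "B"},
]

# Group by article so each document carries all of its questions.
by_article = defaultdict(list)
for row in rows:
    by_article[row["example_id"]].append(row)

print(len(by_article))                 # 2 articles
print(len(by_article["high132.txt"]))  # 2 questions for that article
```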
The original readme is contained below.
# Dataset Card for "race"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cmu.edu/~glai1/data/race/](http://www.cs.cmu.edu/~glai1/data/race/)
- **Repository:** https://github.com/qizhex/RACE_AR_baselines
- **Paper:** [RACE: Large-scale ReAding Comprehension Dataset From Examinations](https://arxiv.org/abs/1704.04683)
- **Point of Contact:** [Guokun Lai](mailto:[email protected]), [Qizhe Xie](mailto:[email protected])
- **Size of downloaded dataset files:** 76.33 MB
- **Size of the generated dataset:** 349.46 MB
- **Total amount of disk used:** 425.80 MB
### Dataset Summary
RACE is a large-scale reading comprehension dataset with more than 28,000 passages and nearly 100,000 questions. The
dataset is collected from English examinations in China, which are designed for middle school and high school students.
The dataset can be served as the training and test sets for machine comprehension.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### all
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 174.73 MB
- **Total amount of disk used:** 200.17 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### high
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 140.12 MB
- **Total amount of disk used:** 165.56 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "A",
"article": "\"Schoolgirls have been wearing such short skirts at Paget High School in Branston that they've been ordered to wear trousers ins...",
"example_id": "high132.txt",
"options": ["short skirts give people the impression of sexualisation", "short skirts are too expensive for parents to afford", "the headmaster doesn't like girls wearing short skirts", "the girls wearing short skirts will be at the risk of being laughed at"],
"question": "The girls at Paget High School are not allowed to wear skirts in that _ ."
}
```
#### middle
- **Size of downloaded dataset files:** 25.44 MB
- **Size of the generated dataset:** 34.61 MB
- **Total amount of disk used:** 60.05 MB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"answer": "B",
"article": "\"There is not enough oil in the world now. As time goes by, it becomes less and less, so what are we going to do when it runs ou...",
"example_id": "middle3.txt",
"options": ["There is more petroleum than we can use now.", "Trees are needed for some other things besides making gas.", "We got electricity from ocean tides in the old days.", "Gas wasn't used to run cars in the Second World War."],
"question": "According to the passage, which of the following statements is TRUE?"
}
```
### Data Fields
The data fields are the same among all splits.
#### all
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### high
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
#### middle
- `example_id`: a `string` feature.
- `article`: a `string` feature.
- `answer`: a `string` feature.
- `question`: a `string` feature.
- `options`: a `list` of `string` features.
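Since `answer` is a letter and `options` is a list, evaluation code typically maps the letter to a list index. A minimal sketch (the record below is abridged from the `middle` instance shown above):

```python
def answer_to_index(answer: str) -> int:
    """Map a RACE answer letter ('A'-'D') to an index into `options`."""
    return ord(answer) - ord("A")

example = {
    "answer": "B",
    "options": [
        "There is more petroleum than we can use now.",
        "Trees are needed for some other things besides making gas.",
        "We got electricity from ocean tides in the old days.",
        "Gas wasn't used to run cars in the Second World War.",
    ],
}

# Resolve the gold option text from the answer letter.
correct = example["options"][answer_to_index(example["answer"])]
print(correct)  # Trees are needed for some other things besides making gas.
```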
### Data Splits
| name |train|validation|test|
|------|----:|---------:|---:|
|all |87866| 4887|4934|
|high |62445| 3451|3498|
|middle|25421| 1436|1436|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
http://www.cs.cmu.edu/~glai1/data/race/
1. RACE dataset is available for non-commercial research purpose only.
2. All passages are obtained from the Internet which is not property of Carnegie Mellon University. We are not responsible for the content nor the meaning of these passages.
3. You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit for any commercial purpose, any portion of the contexts and any portion of derived data.
4. We reserve the right to terminate your access to the RACE dataset at any time.
### Citation Information
```
@inproceedings{lai-etal-2017-race,
title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
author = "Lai, Guokun and
Xie, Qizhe and
Liu, Hanxiao and
Yang, Yiming and
Hovy, Eduard",
booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
month = sep,
year = "2017",
address = "Copenhagen, Denmark",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/D17-1082",
doi = "10.18653/v1/D17-1082",
pages = "785--794",
}
```
### Contributions
Thanks to [@abarbosa94](https://github.com/abarbosa94), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
stanfordnlp/sst2 | stanfordnlp | "2024-01-04T16:31:07Z" | 10,513 | 105 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2022-06-13T14:01:47Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: sst
pretty_name: Stanford Sentiment Treebank v2
dataset_info:
features:
- name: idx
dtype: int32
- name: sentence
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': positive
splits:
- name: train
num_bytes: 4681603
num_examples: 67349
- name: validation
num_bytes: 106252
num_examples: 872
- name: test
num_bytes: 216640
num_examples: 1821
download_size: 3331058
dataset_size: 5004495
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Dataset Card for SST-2
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nlp.stanford.edu/sentiment/
- **Repository:**
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://www.aclweb.org/anthology/D13-1170/)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the
compositional effects of sentiment in language. The corpus is based on the dataset introduced by Pang and Lee (2005)
and consists of 11,855 single sentences extracted from movie reviews. It was parsed with the Stanford parser and
includes a total of 215,154 unique phrases from those parse trees, each annotated by 3 human judges.
Binary classification experiments on full sentences (negative or somewhat negative vs somewhat positive or positive
with neutral sentences discarded) refer to the dataset as SST-2 or SST binary.
### Supported Tasks and Leaderboards
- `sentiment-classification`
### Languages
The text in the dataset is in English (`en`).
## Dataset Structure
### Data Instances
```
{'idx': 0,
'sentence': 'hide new secretions from the parental units ',
'label': 0}
```
### Data Fields
- `idx`: Monotonically increasing index ID.
- `sentence`: Complete sentence expressing an opinion about a film.
- `label`: Sentiment of the opinion, either "negative" (0) or "positive" (1). The test set labels are hidden (-1).
### Data Splits
| | train | validation | test |
|--------------------|---------:|-----------:|-----:|
| Number of examples | 67349 | 872 | 1821 |
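Because the test labels are hidden (`-1`), evaluation pipelines usually score on the validation split or drop unlabeled rows first. A minimal sketch over in-memory records shaped like the dataset (no download involved; the second and third rows are invented for illustration):

```python
records = [
    {"idx": 0, "sentence": "hide new secretions from the parental units ", "label": 0},
    {"idx": 1, "sentence": "a feel-good picture ", "label": 1},
    {"idx": 2, "sentence": "an unlabeled test sentence ", "label": -1},
]

# Keep only rows with a usable gold label.
labeled = [r for r in records if r["label"] != -1]

id2label = {0: "negative", 1: "positive"}
for r in labeled:
    print(r["idx"], id2label[r["label"]])
```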
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
Rotten Tomatoes reviewers.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Unknown.
### Citation Information
```bibtex
@inproceedings{socher-etal-2013-recursive,
title = "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank",
author = "Socher, Richard and
Perelygin, Alex and
Wu, Jean and
Chuang, Jason and
Manning, Christopher D. and
Ng, Andrew and
Potts, Christopher",
booktitle = "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
month = oct,
year = "2013",
address = "Seattle, Washington, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/D13-1170",
pages = "1631--1642",
}
```
### Contributions
Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset. |
omegalabsinc/omega-voice | omegalabsinc | "2025-01-11T01:27:50Z" | 10,469 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-16T01:44:48Z" | ---
license: mit
---
|
IGNF/PASTIS-HD | IGNF | "2024-10-04T13:39:24Z" | 10,442 | 10 | [
"task_categories:image-classification",
"task_categories:image-segmentation",
"license:etalab-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2107.07933",
"arxiv:2112.07558",
"arxiv:2404.08351",
"region:us",
"remote sensing",
"Agricultural"
] | [
"image-classification",
"image-segmentation"
] | "2024-04-02T14:58:15Z" | ---
license: etalab-2.0
task_categories:
- image-classification
- image-segmentation
tags:
- remote sensing
- Agricultural
size_categories:
- 1K<n<10K
---
# 🌱 PASTIS-HD 🌿 Panoptic Agricultural Satellite TIme Series : optical time series, radar time series and very high resolution image
[PASTIS](https://github.com/VSainteuf/pastis-benchmark) is a benchmark dataset for panoptic and semantic segmentation of agricultural parcels from satellite time series.
It contains 2,433 patches within the French metropolitan territory with panoptic annotations (instance index + semantic label for each pixel).
Each patch is a Sentinel-2 multispectral image time series of variable length.
This dataset was extended in 2021 with aligned radar Sentinel-1 observations for all 2,433 patches.
For each patch, it contains approximately 70 observations of Sentinel-1 in ascending orbit and 70 observations in descending orbit. Each Sentinel-1 observation is assembled into a 3-channel image: vertical polarization (VV), horizontal polarization (VH), and the ratio of vertical over horizontal polarization (VV/VH). This extension is named PASTIS-R.
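The three-channel assembly (VV, VH, VV/VH) can be sketched with NumPy; the synthetic backscatter values and the epsilon guard below are illustrative, not the release's exact preprocessing:

```python
import numpy as np

rng = np.random.default_rng(0)
vv = rng.uniform(0.01, 1.0, size=(128, 128)).astype(np.float32)  # vertical polarization
vh = rng.uniform(0.01, 1.0, size=(128, 128)).astype(np.float32)  # horizontal polarization

# Third channel: the VV/VH ratio (epsilon guards against division by zero).
ratio = vv / (vh + 1e-8)
s1_image = np.stack([vv, vh, ratio], axis=0)  # shape (3, 128, 128)
print(s1_image.shape)
```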
We extend PASTIS with aligned very high resolution satellite images from SPOT 6-7 constellation for all 2433 patches in addition to the Sentinel-1 and 2 time series.
The images are resampled to a 1m resolution and converted to 8 bits.
This enhancement significantly improves the dataset's spatial content, providing more granular information for agricultural parcel segmentation.
**PASTIS-HD** can be used to evaluate multi-modal fusion methods (with optical time series, radar time series and VHR images) for parcel-based classification, semantic segmentation, and panoptic segmentation.
## Dataset in numbers
🛰️ Sentinel 2 | 🛰️ Sentinel 1 | 🛰️ **SPOT 6-7 VHR** | 🗻 Annotations
:-------------------------------------------- | :-------------------------------------------------- | :------------------------------| :------------------------------
➡️ 2,433 time series | ➡️ 2 × 2,433 time series | ➡️ **2,433 images** | 124,422 individual parcels
➡️ 10m / pixel | ➡️ 10m / pixel | ➡️ **1.5m / pixel** | covers ~4,000 km²
➡️ 128x128 pixels / images | ➡️ 128x128 pixels / images | ➡️ **1280x1280 pixels / images** | over 2B pixels
➡️ 38-61 acquisitions / series | ➡️ ~ 70 acquisitions / series | ➡️ **One observation** | 18 crop types
➡️ 10 spectral bands |➡️ 2 spectral bands | ➡️ **3 spectral bands** |
⚠️ The **SPOT data are natively 1.5m resolution**, but we over-sampled them at 1m to align them pixel-perfect with Sentinel data.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6582b7dd75754a803e484487/sxmnCAGs0p2u_PALLsqyN.jpeg)
## Data loading
The GitHub repository associated with this dataset contains a PyTorch dataset class from [the OmniSat repository](https://github.com/gastruc/OmniSat/blob/main/src/data/Pastis.py) that can be readily used to load data for training models on PASTIS-HD.
The time series contained in PASTIS have variable lengths.
The Sentinel-1 and Sentinel-2 time series are stored as NumPy arrays. The SPOT images are in TIFF format.
The annotations are stored as NumPy arrays as well.
⚠️ Unlike the labels folder, the S2 and S1 folders contain more than 2,433 files. Some patches are not labelled and are not used for training.
The relevant information can be found in the metadata.geojson file (with 2,433 entries), which is used as an index by the dataloader.
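Because the time series have variable length, batching typically pads (or subsamples) them to a common number of dates. A minimal zero-padding sketch with NumPy (the actual OmniSat dataloader may handle this differently):

```python
import numpy as np

def pad_time_series(series, max_len=None):
    """Pad a list of (T_i, C, H, W) arrays to a common length with zeros.

    Returns the padded batch (N, T_max, C, H, W) and a boolean mask
    marking which dates are real observations.
    """
    if max_len is None:
        max_len = max(s.shape[0] for s in series)
    n = len(series)
    c, h, w = series[0].shape[1:]
    batch = np.zeros((n, max_len, c, h, w), dtype=series[0].dtype)
    mask = np.zeros((n, max_len), dtype=bool)
    for i, s in enumerate(series):
        batch[i, : s.shape[0]] = s
        mask[i, : s.shape[0]] = True
    return batch, mask

# Two toy Sentinel-2 series with 38 and 61 dates, 10 bands, 8x8 crops.
a = np.ones((38, 10, 8, 8), dtype=np.float32)
b = np.ones((61, 10, 8, 8), dtype=np.float32)
batch, mask = pad_time_series([a, b])
print(batch.shape, mask.sum(axis=1))  # (2, 61, 10, 8, 8) [38 61]
```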
### Remark about the folder names
⚠️ The **DATA_S1A** folder contains the Sentinel-1 **ascending** images, whereas the **DATA_S1D** folder contains the Sentinel-1 **descending** images.
## Ground Truth Annotations
The agricultural parcels are grouped into 18 different crop classes, as shown in the table below. The background class corresponds to non-agricultural land, and the void label to parcels that are mostly outside their patch.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6582b7dd75754a803e484487/aHQB0uq4cqBX-7hkCkpFn.png)
Additional information about the dataset can be found in the documentation/pastis-documentation.pdf document.
## Credits
- The Sentinel imagery used in PASTIS was retrieved from [THEIA](https://www.theia.land.fr):
"Value-added data processed by the CNES for the Theia www.theia.land.fr data cluster using Copernicus data.
The treatments use algorithms developed by Theia’s Scientific Expertise Centres. "
- The annotations used in PASTIS stem from the French [land parcel identification system](https://www.data.gouv.fr/en/datasets/registre-parcellaire-graphique-rpg-contours-des-parcelles-et-ilots-culturaux-et-leur-groupe-de-cultures-majoritaire/) produced
by IGN.
- The SPOT images are opendata thanks to the Dataterra Dinamis initiative in the case of the ["Couverture France DINAMIS"](https://dinamis.data-terra.org/opendata/) program.
## References
If you use PASTIS please cite the [related paper](https://arxiv.org/abs/2107.07933):
```
@article{garnot2021panoptic,
title={Panoptic Segmentation of Satellite Image Time Series
with Convolutional Temporal Attention Networks},
author={Sainte Fare Garnot, Vivien and Landrieu, Loic},
journal={ICCV},
year={2021}
}
```
For the PASTIS-R optical-radar fusion dataset, please also cite [this paper](https://arxiv.org/abs/2112.07558v1):
```
@article{garnot2021mmfusion,
title = {Multi-modal temporal attention models for crop mapping from satellite time series},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
year = {2022},
doi = {https://doi.org/10.1016/j.isprsjprs.2022.03.012},
author = {Vivien {Sainte Fare Garnot} and Loic Landrieu and Nesrine Chehata},
}
```
For the PASTIS-HD with the 3 modalities optical-radar time series plus VHR images dataset, please also cite [this paper](https://arxiv.org/abs/2404.08351):
```
@article{astruc2024omnisat,
title={Omni{S}at: {S}elf-Supervised Modality Fusion for {E}arth Observation},
author={Astruc, Guillaume and Gonthier, Nicolas and Mallet, Clement and Landrieu, Loic},
journal={ECCV},
year={2024}
}
``` |
pico-lm/pretokenized-dolma | pico-lm | "2024-12-05T20:18:06Z" | 10,434 | 1 | [
"language:en",
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-01T17:29:00Z" | ---
license: apache-2.0
language:
- en
pretty_name: 'Pico Dataset: Pre-tokenized, Pre-shuffled Dolma'
size_categories:
- 100B<n<1T
---
## The Pico Dataset
A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co./datasets/allenai/dolma), the high-quality text corpus from AI2.
### Overview
The Pico dataset simplifies training by providing:
- Pre-tokenized text in chunks of 2048 tokens, using the [OLMo Tokenizer](https://huggingface.co./allenai/OLMo-7B-0724-hf/blob/main/tokenizer_config.json)
- Pre-shuffled data for consistent training
- Streaming-friendly format
- 420B tokens total (perfect for 200K steps at batch size 1024)
### Benefits
- **Storage Efficient**: No need to download the full 10TB Dolma dataset
- **Memory Efficient**: Stream data directly with `load_dataset(..., streaming=True)`
- **Reproducible**: All models see identical data in identical order
- **Fast**: Skip tokenization during training
- **Simple**: Minimal boilerplate code needed
### Usage
1. Set up HuggingFace credentials in `.env`:
```
HF_USERNAME=your_username
HF_TOKEN=your_token # Get from https://huggingface.co./settings/tokens
```
2. Load the dataset in Python:
```
from datasets import load_dataset
dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
``` |
mlfoundations/MINT-1T-PDF-CC-2023-23 | mlfoundations | "2024-09-19T21:07:25Z" | 10,411 | 1 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:43:59Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T is created by a team from the University of Washington in collaboration with Salesforce Research, other academic institutions including Stanford University, University of Texas at Austin, and University of California Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-23`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co./collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text and image sequences such as [Idefics2](https://huggingface.co./HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co./Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co./facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Using
the dataset to train models that ingest or generate personally identifying information (such
as images of people’s faces and other sensitive content) as well as military applications are all inappropriate use cases of 🍃 MINT-1T.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
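The image-size and aspect-ratio rules in step 3 can be sketched as a standalone filter. The thresholds are taken from the description above; the exact production logic is not published in this card and may differ:

```python
def keep_image(width: int, height: int, source: str) -> bool:
    """Apply MINT-1T-style image filters: minimum/maximum side length
    and a source-dependent aspect-ratio cap (2:1 for HTML, 3:1 for PDF)."""
    if min(width, height) < 150 or max(width, height) > 20_000:
        return False
    max_ratio = 2.0 if source == "html" else 3.0
    ratio = max(width, height) / min(width, height)
    return ratio <= max_ratio

print(keep_image(100, 400, "html"))  # False: smaller side under 150 px
print(keep_image(600, 200, "html"))  # False: 3:1 exceeds the HTML cap
print(keep_image(600, 200, "pdf"))   # True: within the 3:1 PDF cap
```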
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
ruslanmv/ai-medical-chatbot | ruslanmv | "2024-03-23T20:45:11Z" | 10,402 | 202 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-16T12:10:13Z" | ---
configs:
- config_name: default
data_files:
- path: dialogues.*
split: train
dataset_info:
dataset_size: 141665910
download_size: 141665910
features:
- dtype: string
name: Description
- dtype: string
name: Patient
- dtype: string
name: Doctor
splits:
- name: train
num_bytes: 141665910
num_examples: 256916
---
# AI Medical Chatbot Dataset
This is an experimental dataset designed to power a medical chatbot.
It contains at least 250k dialogues between a patient and a doctor.
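Each record has `Description`, `Patient`, and `Doctor` fields, so a common preprocessing step is flattening a row into a prompt/response pair for fine-tuning. A minimal sketch (the sample row below is invented for illustration):

```python
def to_prompt(row: dict) -> tuple[str, str]:
    """Turn one dataset row into an (input, target) pair."""
    prompt = f"Patient: {row['Patient'].strip()}\nDoctor:"
    target = " " + row["Doctor"].strip()
    return prompt, target

row = {
    "Description": "Q. What can cause mild headaches?",
    "Patient": "I have had a mild headache for two days. What should I do?",
    "Doctor": "Stay hydrated and rest; see a physician if it persists.",
}
prompt, target = to_prompt(row)
print(prompt + target)
```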
[![](future.jpg)](https://huggingface.co./spaces/ruslanmv/AI-Medical-Chatbot)
## Playground ChatBot
[ruslanmv/AI-Medical-Chatbot](https://huggingface.co./spaces/ruslanmv/AI-Medical-Chatbot)
For further information, visit the project here:
[https://github.com/ruslanmv/ai-medical-chatbot](https://github.com/ruslanmv/ai-medical-chatbot) |
cornell-movie-review-data/rotten_tomatoes | cornell-movie-review-data | "2024-03-18T14:28:45Z" | 10,393 | 64 | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
paperswithcode_id: mr
pretty_name: RottenTomatoes - MR Movie Review Data
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': neg
'1': pos
splits:
- name: train
num_bytes: 1074810
num_examples: 8530
- name: validation
num_bytes: 134679
num_examples: 1066
- name: test
num_bytes: 135972
num_examples: 1066
download_size: 487770
dataset_size: 1345461
train-eval-index:
- config: default
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1
args:
average: binary
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for "rotten_tomatoes"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.cs.cornell.edu/people/pabo/movie-review-data/](http://www.cs.cornell.edu/people/pabo/movie-review-data/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [https://arxiv.org/abs/cs/0506075](https://arxiv.org/abs/cs/0506075)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 1.34 MB
- **Total amount of disk used:** 1.84 MB
### Dataset Summary
Movie Review Dataset.
This is a dataset containing 5,331 positive and 5,331 negative processed
sentences from Rotten Tomatoes movie reviews. This data was first used in Bo
Pang and Lillian Lee, ``Seeing stars: Exploiting class relationships for
sentiment categorization with respect to rating scales.'', Proceedings of the
ACL, 2005.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 0.49 MB
- **Size of the generated dataset:** 1.34 MB
- **Total amount of disk used:** 1.84 MB
An example of 'validation' looks as follows.
```
{
"label": 1,
"text": "Sometimes the days and nights just drag on -- it 's the morning that make me feel alive . And I have one thing to thank for that : pancakes . "
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `text`: a `string` feature.
- `label`: a classification label, with possible values including `neg` (0), `pos` (1).
### Data Splits
The Rotten Tomatoes sentences are split into 80% train, 10% validation, and 10% test, following the practice set out in
Jinfeng Li et al., ``TEXTBUGGER: Generating Adversarial Text Against Real-world Applications.''
| name |train|validation|test|
|-------|----:|---------:|---:|
|default| 8530| 1066|1066|
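The row counts above correspond to the 80/10/10 ratio, which can be checked directly (a quick sanity-check sketch using only the numbers from the table):

```python
# Sanity-check the split ratios against the counts in the table above.
splits = {"train": 8530, "validation": 1066, "test": 1066}
total = sum(splits.values())  # 10662 = 5331 positive + 5331 negative sentences

for name, n in splits.items():
    print(f"{name}: {n} rows ({n / total:.1%})")
# train is ~80.0%; validation and test are ~10.0% each
```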
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{Pang+Lee:05a,
author = {Bo Pang and Lillian Lee},
title = {Seeing stars: Exploiting class relationships for sentiment
categorization with respect to rating scales},
booktitle = {Proceedings of the ACL},
year = 2005
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jxmorris12](https://github.com/jxmorris12) for adding this dataset. |
ptb-text-only/ptb_text_only | ptb-text-only | "2024-01-18T11:13:39Z" | 10,384 | 16 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:other",
"size_categories:10K<n<100K",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- other
license_details: LDC User Agreement for Non-Members
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
paperswithcode_id: null
pretty_name: Penn Treebank
dataset_info:
features:
- name: sentence
dtype: string
config_name: penn_treebank
splits:
- name: train
num_bytes: 5143706
num_examples: 42068
- name: test
num_bytes: 453710
num_examples: 3761
- name: validation
num_bytes: 403156
num_examples: 3370
download_size: 5951345
dataset_size: 6000572
---
# Dataset Card for Penn Treebank
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://catalog.ldc.upenn.edu/LDC99T42
- **Repository:** 'https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.train.txt',
'https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.valid.txt',
'https://raw.githubusercontent.com/wojzaremba/lstm/master/data/ptb.test.txt'
- **Paper:** https://www.aclweb.org/anthology/J93-2004.pdf
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]
### Dataset Summary
This is the Penn Treebank Project: Release 2 CDROM, featuring a million words of 1989 Wall Street Journal material.
Rare words in this version have already been replaced with the `<unk>` token, and numbers with the `<N>` token.
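The preprocessing convention can be illustrated with a minimal sketch (the dataset ships already preprocessed; the toy vocabulary and sentence below are purely illustrative):

```python
import re

def preprocess(tokens, vocab):
    """Map numerals to <N> and out-of-vocabulary words to <unk>,
    mirroring the convention used in this PTB release."""
    out = []
    for tok in tokens:
        if re.fullmatch(r"\d+(\.\d+)?", tok):
            out.append("<N>")
        elif tok not in vocab:
            out.append("<unk>")
        else:
            out.append(tok)
    return out

# Toy vocabulary and sentence (illustrative only)
vocab = {"the", "market", "fell", "points", "as", "traders", "sold"}
tokens = "the market fell 5 points as zorblat traders sold".split()
print(" ".join(preprocess(tokens, vocab)))
# -> the market fell <N> points as <unk> traders sold
```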
### Supported Tasks and Leaderboards
Language Modelling
### Languages
The text in the dataset is in American English.
## Dataset Structure
### Data Instances
[Needs More Information]
### Data Fields
[Needs More Information]
### Data Splits
[Needs More Information]
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
[Needs More Information]
### Licensing Information
This dataset is provided for research purposes only. Please check the dataset license for additional information.
### Citation Information
```
@article{marcus-etal-1993-building,
    title = "Building a Large Annotated Corpus of {E}nglish: The {P}enn {T}reebank",
    author = "Marcus, Mitchell P.  and
      Santorini, Beatrice  and
      Marcinkiewicz, Mary Ann",
    journal = "Computational Linguistics",
    volume = "19",
    number = "2",
    year = "1993",
    url = "https://www.aclweb.org/anthology/J93-2004",
    pages = "313--330",
}
```
### Contributions
Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset. |
mteb/arguana | mteb | "2024-03-03T11:57:44Z" | 10,314 | 0 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"source_datasets:arguana",
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"text-retrieval"
] | [
"text-retrieval"
] | "2024-03-02T20:27:06Z" | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- arguana
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 111736
num_examples: 1406
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 9388094
num_examples: 8674
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 1743698
num_examples: 1406
configs:
- config_name: default
data_files:
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
--- |
DL3DV/DL3DV-ALL-480P | DL3DV | "2024-09-02T09:32:50Z" | 10,279 | 2 | [
"size_categories:100B<n<1T",
"region:us",
"3D Vision",
"NeRF",
"3D Gaussian",
"Dataset",
"Novel View Synthesis",
"Text to 3D",
"Image to 3D"
] | null | "2024-03-04T14:55:16Z" | ---
tags:
- 3D Vision
- NeRF
- 3D Gaussian
- Dataset
- Novel View Synthesis
- Text to 3D
- Image to 3D
pretty_name: Dl3DV-Dataset
size_categories:
- 100B<n<1T
---
# DL3DV-Dataset
This repo contains all the 480P frames with camera poses from the DL3DV-10K dataset. We are working hard to review the entire dataset to remove any sensitive information. Thank you for your patience.
# Download
If you have enough space, you can use git to download the dataset from Hugging Face. See this [link](https://huggingface.co./docs/hub/en/datasets-downloading). The [480P](https://huggingface.co./datasets/DL3DV/DL3DV-ALL-480P)/[960P](https://huggingface.co./datasets/DL3DV/DL3DV-ALL-960P) versions should satisfy most needs.
If you do not have enough space, we further provide a [download script](https://github.com/DL3DV-10K/Dataset/blob/main/scripts/download.py) here to download a subset. The usage:
```Bash
usage: download.py [-h] --odir ODIR --subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K} --resolution {4K,2K,960P,480P} --file_type {images+poses,video,colmap_cache} [--hash HASH]
[--clean_cache]
optional arguments:
-h, --help show this help message and exit
--odir ODIR output directory
--subset {1K,2K,3K,4K,5K,6K,7K,8K,9K,10K}
The subset of the benchmark to download
--resolution {4K,2K,960P,480P}
The resolution to donwnload
--file_type {images+poses,video,colmap_cache}
The file type to download
--hash HASH If set subset=hash, this is the hash code of the scene to download
--clean_cache If set, will clean the huggingface cache to save space
```
Here are some examples:
```Bash
# Make sure you have applied for the access.
# Use this to download the download.py script
wget https://raw.githubusercontent.com/DL3DV-10K/Dataset/main/scripts/download.py
# Download 480P resolution images and poses, 0~1K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 1K --resolution 480P --file_type images+poses --clean_cache
# Download 480P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 480P --file_type images+poses --clean_cache
```
You can also download a specific scene with its hash. The scene-hash pair visualization can be found [here](https://htmlpreview.github.io/?https://github.com/DL3DV-10K/Dataset/blob/main/visualize/index.html).
```Bash
# Download 480P resolution images and poses, 1K~2K subset, output to DL3DV-10K directory
python download.py --odir DL3DV-10K --subset 2K --resolution 480P --file_type images+poses --hash e2cedefea8a0ed2d0ffbd5bdc08acbe7e1f85c96f72f7b790e9dfe1c98963047 --clean_cache
```
# News
- [x] DL3DV-1K, 2K, 3K, 4K
- [ ] DL3DV-5K ~ 10K |
databricks/databricks-dolly-15k | databricks | "2023-06-30T18:34:13Z" | 10,276 | 777 | [
"task_categories:question-answering",
"task_categories:summarization",
"language:en",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2203.02155",
"region:us"
] | [
"question-answering",
"summarization"
] | "2023-04-11T16:43:13Z" | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- en
size_categories:
- 10K<n<100K
---
# Summary
`databricks-dolly-15k` is an open source dataset of instruction-following records generated by thousands of Databricks employees in several
of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including brainstorming, classification,
closed QA, generation, information extraction, open QA, and summarization.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: English
Version: 1.0
**Owner: Databricks, Inc.**
# Dataset Overview
`databricks-dolly-15k` is a corpus of more than 15,000 records generated by thousands of Databricks employees to enable large language
models to exhibit the magical interactivity of ChatGPT.
Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories, including
the seven outlined in the InstructGPT paper, as well as an open-ended free-form category. The contributors were instructed to avoid using
information from any source on the web with the exception of Wikipedia (for particular subsets of instruction categories), and explicitly
instructed to avoid using generative AI in formulating instructions or responses. Examples of each behavior were provided to motivate the
types of questions and instructions appropriate to each category.
Halfway through the data generation process, contributors were given the option of answering questions posed by other contributors.
They were asked to rephrase the original question and only select questions they could be reasonably expected to answer correctly.
For certain categories contributors were asked to provide reference texts copied from Wikipedia. Reference text (indicated by the `context`
field in the actual dataset) may contain bracketed Wikipedia citation numbers (e.g. `[42]`) which we recommend users remove for downstream applications.
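One way to do this removal (a minimal sketch; the regex below assumes only simple `[n]`-style markers appear in the `context` field):

```python
import re

# Matches bracketed Wikipedia citation markers like [42]
CITATION_RE = re.compile(r"\[\d+\]")

def strip_citations(text: str) -> str:
    """Remove [n]-style citation markers and collapse any
    doubled spaces left behind."""
    cleaned = CITATION_RE.sub("", text)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

context = "The tower is 330 m tall.[12] It opened in 1889.[3][4]"
print(strip_citations(context))
# -> The tower is 330 m tall. It opened in 1889.
```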
# Intended Uses
While immediately valuable for instruction fine tuning large language models, as a corpus of human-generated instruction prompts,
this dataset also presents a valuable opportunity for synthetic data generation in the methods outlined in the Self-Instruct paper.
For example, contributor-generated prompts could be submitted as few-shot examples to a large open language model to generate a
corpus of millions of examples of instructions in each of the respective InstructGPT categories.
Likewise, both the instructions and responses present fertile ground for data augmentation. A paraphrasing model might be used to
restate each prompt or short responses, with the resulting text associated to the respective ground-truth sample. Such an approach might
provide a form of regularization on the dataset that could allow for more robust instruction-following behavior in models derived from
these synthetic datasets.
# Dataset
## Purpose of Collection
As part of our continuing commitment to open source, Databricks developed what is, to the best of our knowledge, the first open source,
human-generated instruction corpus specifically designed to enable large language models to exhibit the magical interactivity of ChatGPT.
Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including
academic or commercial applications.
## Sources
- **Human-generated data**: Databricks employees were invited to create prompt / response pairs in each of eight different instruction categories.
- **Wikipedia**: For instruction categories that require an annotator to consult a reference text (information extraction, closed QA, summarization)
contributors selected passages from Wikipedia for particular subsets of instruction categories. No guidance was given to annotators as to how to select the
target passages.
## Annotator Guidelines
To create a record, employees were given a brief description of the annotation task as well as examples of the types of prompts typical
of each annotation task. Guidelines were succinct by design so as to encourage a high task completion rate, possibly at the cost of
rigorous compliance to an annotation rubric that concretely and reliably operationalizes the specific task. Caveat emptor.
The annotation guidelines for each of the categories are as follows:
- **Creative Writing**: Write a question or instruction that requires a creative, open-ended written response. The instruction should be reasonable to ask of a person with general world knowledge and should not require searching. In this task, your prompt should give very specific instructions to follow. Constraints, instructions, guidelines, or requirements all work, and the more of them the better.
- **Closed QA**: Write a question or instruction that requires a factually correct response based on a passage of text from Wikipedia. The question can be complex and can involve human-level reasoning capabilities, but should not require special knowledge. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Open QA**: Write a question that can be answered using general world knowledge or at most a single search. This task asks for opinions and facts about the world at large and does not provide any reference text for consultation.
- **Summarization**: Give a summary of a paragraph from Wikipedia. Please don't ask questions that will require more than 3-5 minutes to answer. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Information Extraction**: These questions involve reading a paragraph from Wikipedia and extracting information from the passage. Everything required to produce an answer (e.g. a list, keywords etc) should be included in the passages. To create a question for this task include both the text of the question as well as the reference text in the form.
- **Classification**: These prompts contain lists or examples of entities to be classified, e.g. movie reviews, products, etc. In this task the text or list of entities under consideration is contained in the prompt (e.g. there is no reference text.). You can choose any categories for classification you like, the more diverse the better.
- **Brainstorming**: Think up lots of examples in response to a question asking to brainstorm ideas.
## Personal or Sensitive Data
This dataset contains public information (e.g., some information from Wikipedia). To our knowledge, it contains no personal identifiers of private individuals or other sensitive information.
## Language
American English
# Known Limitations
- Wikipedia is a crowdsourced corpus and the contents of this dataset may reflect the bias, factual errors and topical focus found in Wikipedia
- Some annotators may not be native English speakers
- Annotator demographics and subject matter may reflect the makeup of Databricks employees
# Citation
```
@online{DatabricksBlog2023DollyV2,
author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
year = {2023},
url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
urldate = {2023-06-30}
}
```
# License/Attribution
**Copyright (2023) Databricks, Inc.**
This dataset was developed at Databricks (https://www.databricks.com) and its use is subject to the CC BY-SA 3.0 license.
Certain categories of material in the dataset include materials from the following sources, licensed under the CC BY-SA 3.0 license:
Wikipedia (various pages) - https://www.wikipedia.org/
Copyright © Wikipedia editors and contributors. |
google/xtreme | google | "2024-02-22T17:12:06Z" | 10,148 | 95 | [
"task_categories:multiple-choice",
"task_categories:question-answering",
"task_categories:token-classification",
"task_categories:text-classification",
"task_categories:text-retrieval",
"task_ids:multiple-choice-qa",
"task_ids:extractive-qa",
"task_ids:open-domain-qa",
"task_ids:natural-language-inference",
"task_ids:named-entity-recognition",
"task_ids:part-of-speech",
"annotations_creators:found",
"language_creators:found",
"multilinguality:multilingual",
"multilinguality:translation",
"source_datasets:extended|xnli",
"source_datasets:extended|paws-x",
"source_datasets:extended|wikiann",
"source_datasets:extended|xquad",
"source_datasets:extended|mlqa",
"source_datasets:extended|tydiqa",
"source_datasets:extended|tatoeba",
"source_datasets:extended|squad",
"language:af",
"language:ar",
"language:bg",
"language:bn",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:he",
"language:hi",
"language:hu",
"language:id",
"language:it",
"language:ja",
"language:jv",
"language:ka",
"language:kk",
"language:ko",
"language:ml",
"language:mr",
"language:ms",
"language:my",
"language:nl",
"language:pt",
"language:ru",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:tl",
"language:tr",
"language:ur",
"language:vi",
"language:yo",
"language:zh",
"license:apache-2.0",
"license:cc-by-4.0",
"license:cc-by-2.0",
"license:cc-by-sa-4.0",
"license:other",
"license:cc-by-nc-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2003.11080",
"region:us",
"parallel-sentence-retrieval",
"paraphrase-identification"
] | [
"multiple-choice",
"question-answering",
"token-classification",
"text-classification",
"text-retrieval",
"token-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- af
- ar
- bg
- bn
- de
- el
- en
- es
- et
- eu
- fa
- fi
- fr
- he
- hi
- hu
- id
- it
- ja
- jv
- ka
- kk
- ko
- ml
- mr
- ms
- my
- nl
- pt
- ru
- sw
- ta
- te
- th
- tl
- tr
- ur
- vi
- yo
- zh
license:
- apache-2.0
- cc-by-4.0
- cc-by-2.0
- cc-by-sa-4.0
- other
- cc-by-nc-4.0
multilinguality:
- multilingual
- translation
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
source_datasets:
- extended|xnli
- extended|paws-x
- extended|wikiann
- extended|xquad
- extended|mlqa
- extended|tydiqa
- extended|tatoeba
- extended|squad
task_categories:
- multiple-choice
- question-answering
- token-classification
- text-classification
- text-retrieval
- token-classification
task_ids:
- multiple-choice-qa
- extractive-qa
- open-domain-qa
- natural-language-inference
- named-entity-recognition
- part-of-speech
paperswithcode_id: xtreme
pretty_name: XTREME
config_names:
- MLQA.ar.ar
- MLQA.ar.de
- MLQA.ar.en
- MLQA.ar.es
- MLQA.ar.hi
- MLQA.ar.vi
- MLQA.ar.zh
- MLQA.de.ar
- MLQA.de.de
- MLQA.de.en
- MLQA.de.es
- MLQA.de.hi
- MLQA.de.vi
- MLQA.de.zh
- MLQA.en.ar
- MLQA.en.de
- MLQA.en.en
- MLQA.en.es
- MLQA.en.hi
- MLQA.en.vi
- MLQA.en.zh
- MLQA.es.ar
- MLQA.es.de
- MLQA.es.en
- MLQA.es.es
- MLQA.es.hi
- MLQA.es.vi
- MLQA.es.zh
- MLQA.hi.ar
- MLQA.hi.de
- MLQA.hi.en
- MLQA.hi.es
- MLQA.hi.hi
- MLQA.hi.vi
- MLQA.hi.zh
- MLQA.vi.ar
- MLQA.vi.de
- MLQA.vi.en
- MLQA.vi.es
- MLQA.vi.hi
- MLQA.vi.vi
- MLQA.vi.zh
- MLQA.zh.ar
- MLQA.zh.de
- MLQA.zh.en
- MLQA.zh.es
- MLQA.zh.hi
- MLQA.zh.vi
- MLQA.zh.zh
- PAN-X.af
- PAN-X.ar
- PAN-X.bg
- PAN-X.bn
- PAN-X.de
- PAN-X.el
- PAN-X.en
- PAN-X.es
- PAN-X.et
- PAN-X.eu
- PAN-X.fa
- PAN-X.fi
- PAN-X.fr
- PAN-X.he
- PAN-X.hi
- PAN-X.hu
- PAN-X.id
- PAN-X.it
- PAN-X.ja
- PAN-X.jv
- PAN-X.ka
- PAN-X.kk
- PAN-X.ko
- PAN-X.ml
- PAN-X.mr
- PAN-X.ms
- PAN-X.my
- PAN-X.nl
- PAN-X.pt
- PAN-X.ru
- PAN-X.sw
- PAN-X.ta
- PAN-X.te
- PAN-X.th
- PAN-X.tl
- PAN-X.tr
- PAN-X.ur
- PAN-X.vi
- PAN-X.yo
- PAN-X.zh
- PAWS-X.de
- PAWS-X.en
- PAWS-X.es
- PAWS-X.fr
- PAWS-X.ja
- PAWS-X.ko
- PAWS-X.zh
- SQuAD
- XNLI
- XQuAD
- bucc18.de
- bucc18.fr
- bucc18.ru
- bucc18.zh
- tatoeba.afr
- tatoeba.ara
- tatoeba.ben
- tatoeba.bul
- tatoeba.cmn
- tatoeba.deu
- tatoeba.ell
- tatoeba.est
- tatoeba.eus
- tatoeba.fin
- tatoeba.fra
- tatoeba.heb
- tatoeba.hin
- tatoeba.hun
- tatoeba.ind
- tatoeba.ita
- tatoeba.jav
- tatoeba.jpn
- tatoeba.kat
- tatoeba.kaz
- tatoeba.kor
- tatoeba.mal
- tatoeba.mar
- tatoeba.nld
- tatoeba.pes
- tatoeba.por
- tatoeba.rus
- tatoeba.spa
- tatoeba.swh
- tatoeba.tam
- tatoeba.tel
- tatoeba.tgl
- tatoeba.tha
- tatoeba.tur
- tatoeba.urd
- tatoeba.vie
- tydiqa
- udpos.Afrikans
- udpos.Arabic
- udpos.Basque
- udpos.Bulgarian
- udpos.Chinese
- udpos.Dutch
- udpos.English
- udpos.Estonian
- udpos.Finnish
- udpos.French
- udpos.German
- udpos.Greek
- udpos.Hebrew
- udpos.Hindi
- udpos.Hungarian
- udpos.Indonesian
- udpos.Italian
- udpos.Japanese
- udpos.Kazakh
- udpos.Korean
- udpos.Marathi
- udpos.Persian
- udpos.Portuguese
- udpos.Russian
- udpos.Spanish
- udpos.Tagalog
- udpos.Tamil
- udpos.Telugu
- udpos.Thai
- udpos.Turkish
- udpos.Urdu
- udpos.Vietnamese
- udpos.Yoruba
language_bcp47:
- fa-IR
license_details: Licence Universal Dependencies v2.5
tags:
- parallel-sentence-retrieval
- paraphrase-identification
dataset_info:
- config_name: MLQA.ar.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 8368086
num_examples: 5335
- name: validation
num_bytes: 824080
num_examples: 517
download_size: 4048180
dataset_size: 9192166
- config_name: MLQA.ar.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2183914
num_examples: 1649
- name: validation
num_bytes: 364809
num_examples: 207
download_size: 1192825
dataset_size: 2548723
- config_name: MLQA.ar.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 8225634
num_examples: 5335
- name: validation
num_bytes: 810061
num_examples: 517
download_size: 3998008
dataset_size: 9035695
- config_name: MLQA.ar.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3041350
num_examples: 1978
- name: validation
num_bytes: 228152
num_examples: 161
download_size: 1531661
dataset_size: 3269502
- config_name: MLQA.ar.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3039368
num_examples: 1831
- name: validation
num_bytes: 281742
num_examples: 186
download_size: 1369756
dataset_size: 3321110
- config_name: MLQA.ar.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3290601
num_examples: 2047
- name: validation
num_bytes: 288418
num_examples: 163
download_size: 1667238
dataset_size: 3579019
- config_name: MLQA.ar.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3229844
num_examples: 1912
- name: validation
num_bytes: 340021
num_examples: 188
download_size: 1591445
dataset_size: 3569865
- config_name: MLQA.de.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1619978
num_examples: 1649
- name: validation
num_bytes: 200146
num_examples: 207
download_size: 1044483
dataset_size: 1820124
- config_name: MLQA.de.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4366074
num_examples: 4517
- name: validation
num_bytes: 488339
num_examples: 512
download_size: 2798050
dataset_size: 4854413
- config_name: MLQA.de.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4343116
num_examples: 4517
- name: validation
num_bytes: 485866
num_examples: 512
download_size: 2778346
dataset_size: 4828982
- config_name: MLQA.de.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1716587
num_examples: 1776
- name: validation
num_bytes: 170554
num_examples: 196
download_size: 1118751
dataset_size: 1887141
- config_name: MLQA.de.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1371046
num_examples: 1430
- name: validation
num_bytes: 153843
num_examples: 163
download_size: 880652
dataset_size: 1524889
- config_name: MLQA.de.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1688455
num_examples: 1675
- name: validation
num_bytes: 216047
num_examples: 182
download_size: 1108163
dataset_size: 1904502
- config_name: MLQA.de.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1679152
num_examples: 1621
- name: validation
num_bytes: 184290
num_examples: 190
download_size: 1045861
dataset_size: 1863442
- config_name: MLQA.en.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6739191
num_examples: 5335
- name: validation
num_bytes: 630815
num_examples: 517
download_size: 3939135
dataset_size: 7370006
- config_name: MLQA.en.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 5056694
num_examples: 4517
- name: validation
num_bytes: 594908
num_examples: 512
download_size: 3223196
dataset_size: 5651602
- config_name: MLQA.en.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 14004592
num_examples: 11590
- name: validation
num_bytes: 1329084
num_examples: 1148
download_size: 8217519
dataset_size: 15333676
- config_name: MLQA.en.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6179221
num_examples: 5253
- name: validation
num_bytes: 555434
num_examples: 500
download_size: 3776828
dataset_size: 6734655
- config_name: MLQA.en.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6378838
num_examples: 4918
- name: validation
num_bytes: 623143
num_examples: 507
download_size: 3517340
dataset_size: 7001981
- config_name: MLQA.en.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 7056670
num_examples: 5495
- name: validation
num_bytes: 640618
num_examples: 511
download_size: 4170642
dataset_size: 7697288
- config_name: MLQA.en.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 6539279
num_examples: 5137
- name: validation
num_bytes: 608416
num_examples: 504
download_size: 3929122
dataset_size: 7147695
- config_name: MLQA.es.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1740254
num_examples: 1978
- name: validation
num_bytes: 148621
num_examples: 161
download_size: 1107435
dataset_size: 1888875
- config_name: MLQA.es.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1403997
num_examples: 1776
- name: validation
num_bytes: 144158
num_examples: 196
download_size: 950448
dataset_size: 1548155
- config_name: MLQA.es.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4362709
num_examples: 5253
- name: validation
num_bytes: 419040
num_examples: 500
download_size: 2842879
dataset_size: 4781749
- config_name: MLQA.es.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4394305
num_examples: 5253
- name: validation
num_bytes: 422043
num_examples: 500
download_size: 2856931
dataset_size: 4816348
- config_name: MLQA.es.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1523495
num_examples: 1723
- name: validation
num_bytes: 181806
num_examples: 187
download_size: 954018
dataset_size: 1705301
- config_name: MLQA.es.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1747941
num_examples: 2018
- name: validation
num_bytes: 176813
num_examples: 189
download_size: 1187949
dataset_size: 1924754
- config_name: MLQA.es.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1678423
num_examples: 1947
- name: validation
num_bytes: 126618
num_examples: 161
download_size: 1100765
dataset_size: 1805041
- config_name: MLQA.hi.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4445561
num_examples: 1831
- name: validation
num_bytes: 410396
num_examples: 186
download_size: 1542768
dataset_size: 4855957
- config_name: MLQA.hi.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3022836
num_examples: 1430
- name: validation
num_bytes: 301685
num_examples: 163
download_size: 1257846
dataset_size: 3324521
- config_name: MLQA.hi.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 11449233
num_examples: 4918
- name: validation
num_bytes: 1097829
num_examples: 507
download_size: 4131083
dataset_size: 12547062
- config_name: MLQA.hi.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3862201
num_examples: 1723
- name: validation
num_bytes: 420374
num_examples: 187
download_size: 1493468
dataset_size: 4282575
- config_name: MLQA.hi.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 11810447
num_examples: 4918
- name: validation
num_bytes: 1136756
num_examples: 507
download_size: 4235981
dataset_size: 12947203
- config_name: MLQA.hi.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4743456
num_examples: 1947
- name: validation
num_bytes: 419078
num_examples: 177
download_size: 1704964
dataset_size: 5162534
- config_name: MLQA.hi.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4354847
num_examples: 1767
- name: validation
num_bytes: 424218
num_examples: 189
download_size: 1627107
dataset_size: 4779065
- config_name: MLQA.vi.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 3205157
num_examples: 2047
- name: validation
num_bytes: 230307
num_examples: 163
download_size: 1656661
dataset_size: 3435464
- config_name: MLQA.vi.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2227005
num_examples: 1675
- name: validation
num_bytes: 277157
num_examples: 182
download_size: 1268041
dataset_size: 2504162
- config_name: MLQA.vi.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 7843403
num_examples: 5495
- name: validation
num_bytes: 719245
num_examples: 511
download_size: 4071703
dataset_size: 8562648
- config_name: MLQA.vi.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2866569
num_examples: 2018
- name: validation
num_bytes: 283433
num_examples: 189
download_size: 1607926
dataset_size: 3150002
- config_name: MLQA.vi.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2776636
num_examples: 1947
- name: validation
num_bytes: 254979
num_examples: 177
download_size: 1366057
dataset_size: 3031615
- config_name: MLQA.vi.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 7922057
num_examples: 5495
- name: validation
num_bytes: 726490
num_examples: 511
download_size: 4105388
dataset_size: 8648547
- config_name: MLQA.vi.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 2989632
num_examples: 1943
- name: validation
num_bytes: 269361
num_examples: 184
download_size: 1570393
dataset_size: 3258993
- config_name: MLQA.zh.ar
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1731455
num_examples: 1912
- name: validation
num_bytes: 175321
num_examples: 188
download_size: 1223863
dataset_size: 1906776
- config_name: MLQA.zh.de
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1389990
num_examples: 1621
- name: validation
num_bytes: 174577
num_examples: 190
download_size: 1006829
dataset_size: 1564567
- config_name: MLQA.zh.en
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4450957
num_examples: 5137
- name: validation
num_bytes: 446840
num_examples: 504
download_size: 3108433
dataset_size: 4897797
- config_name: MLQA.zh.es
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1736255
num_examples: 1947
- name: validation
num_bytes: 138045
num_examples: 161
download_size: 1223467
dataset_size: 1874300
- config_name: MLQA.zh.hi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1578191
num_examples: 1767
- name: validation
num_bytes: 184373
num_examples: 189
download_size: 1044599
dataset_size: 1762564
- config_name: MLQA.zh.vi
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 1806158
num_examples: 1943
- name: validation
num_bytes: 172906
num_examples: 184
download_size: 1268213
dataset_size: 1979064
- config_name: MLQA.zh.zh
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: test
num_bytes: 4422322
num_examples: 5137
- name: validation
num_bytes: 443782
num_examples: 504
download_size: 3105362
dataset_size: 4866104
- config_name: PAN-X.af
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1321376
num_examples: 5000
- name: validation
num_bytes: 259689
num_examples: 1000
- name: test
num_bytes: 257184
num_examples: 1000
download_size: 389015
dataset_size: 1838249
- config_name: PAN-X.ar
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3634096
num_examples: 20000
- name: validation
num_bytes: 1808283
num_examples: 10000
- name: test
num_bytes: 1811963
num_examples: 10000
download_size: 1567470
dataset_size: 7254342
- config_name: PAN-X.bg
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4600733
num_examples: 20000
- name: validation
num_bytes: 2310294
num_examples: 10000
- name: test
num_bytes: 2306138
num_examples: 10000
download_size: 2030669
dataset_size: 9217165
- config_name: PAN-X.bn
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1568825
num_examples: 10000
- name: validation
num_bytes: 159068
num_examples: 1000
- name: test
num_bytes: 159262
num_examples: 1000
download_size: 364024
dataset_size: 1887155
- config_name: PAN-X.de
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4762312
num_examples: 20000
- name: validation
num_bytes: 2381545
num_examples: 10000
- name: test
num_bytes: 2377619
num_examples: 10000
download_size: 2360242
dataset_size: 9521476
- config_name: PAN-X.el
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 5063136
num_examples: 20000
- name: validation
num_bytes: 2533786
num_examples: 10000
- name: test
num_bytes: 2547574
num_examples: 10000
download_size: 2271726
dataset_size: 10144496
- config_name: PAN-X.en
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3823434
num_examples: 20000
- name: validation
num_bytes: 1920049
num_examples: 10000
- name: test
num_bytes: 1916200
num_examples: 10000
download_size: 1886284
dataset_size: 7659683
- config_name: PAN-X.es
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3199121
num_examples: 20000
- name: validation
num_bytes: 1592505
num_examples: 10000
- name: test
num_bytes: 1602271
num_examples: 10000
download_size: 1489562
dataset_size: 6393897
- config_name: PAN-X.et
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3023171
num_examples: 15000
- name: validation
num_bytes: 2030140
num_examples: 10000
- name: test
num_bytes: 2021389
num_examples: 10000
download_size: 1915624
dataset_size: 7074700
- config_name: PAN-X.eu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2292307
num_examples: 10000
- name: validation
num_bytes: 2296315
num_examples: 10000
- name: test
num_bytes: 2249815
num_examples: 10000
download_size: 1393179
dataset_size: 6838437
- config_name: PAN-X.fa
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3529314
num_examples: 20000
- name: validation
num_bytes: 1782286
num_examples: 10000
- name: test
num_bytes: 1770264
num_examples: 10000
download_size: 1401208
dataset_size: 7081864
- config_name: PAN-X.fi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4273753
num_examples: 20000
- name: validation
num_bytes: 2131749
num_examples: 10000
- name: test
num_bytes: 2130645
num_examples: 10000
download_size: 2459149
dataset_size: 8536147
- config_name: PAN-X.fr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3335384
num_examples: 20000
- name: validation
num_bytes: 1664170
num_examples: 10000
- name: test
num_bytes: 1675765
num_examples: 10000
download_size: 1679283
dataset_size: 6675319
- config_name: PAN-X.he
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4667060
num_examples: 20000
- name: validation
num_bytes: 2332740
num_examples: 10000
- name: test
num_bytes: 2318736
num_examples: 10000
download_size: 2186463
dataset_size: 9318536
- config_name: PAN-X.hi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 964192
num_examples: 5000
- name: validation
num_bytes: 190651
num_examples: 1000
- name: test
num_bytes: 196170
num_examples: 1000
download_size: 266086
dataset_size: 1351013
- config_name: PAN-X.hu
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4499874
num_examples: 20000
- name: validation
num_bytes: 2211831
num_examples: 10000
- name: test
num_bytes: 2249759
num_examples: 10000
download_size: 2399390
dataset_size: 8961464
- config_name: PAN-X.id
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3083967
num_examples: 20000
- name: validation
num_bytes: 1537959
num_examples: 10000
- name: test
num_bytes: 1536859
num_examples: 10000
download_size: 1412049
dataset_size: 6158785
- config_name: PAN-X.it
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3874623
num_examples: 20000
- name: validation
num_bytes: 1908509
num_examples: 10000
- name: test
num_bytes: 1928388
num_examples: 10000
download_size: 1855798
dataset_size: 7711520
- config_name: PAN-X.ja
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 12670361
num_examples: 20000
- name: validation
num_bytes: 6322983
num_examples: 10000
- name: test
num_bytes: 6448940
num_examples: 10000
download_size: 2465674
dataset_size: 25442284
- config_name: PAN-X.jv
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 16086
num_examples: 100
- name: validation
num_bytes: 14580
num_examples: 100
- name: test
num_bytes: 16897
num_examples: 100
download_size: 20475
dataset_size: 47563
- config_name: PAN-X.ka
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2777342
num_examples: 10000
- name: validation
num_bytes: 2806881
num_examples: 10000
- name: test
num_bytes: 2824621
num_examples: 10000
download_size: 1817280
dataset_size: 8408844
- config_name: PAN-X.kk
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 240256
num_examples: 1000
- name: validation
num_bytes: 238089
num_examples: 1000
- name: test
num_bytes: 236704
num_examples: 1000
download_size: 160554
dataset_size: 715049
- config_name: PAN-X.ko
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4284693
num_examples: 20000
- name: validation
num_bytes: 2138147
num_examples: 10000
- name: test
num_bytes: 2138274
num_examples: 10000
download_size: 2539591
dataset_size: 8561114
- config_name: PAN-X.ml
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2865184
num_examples: 10000
- name: validation
num_bytes: 290735
num_examples: 1000
- name: test
num_bytes: 276906
num_examples: 1000
download_size: 852955
dataset_size: 3432825
- config_name: PAN-X.mr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1248239
num_examples: 5000
- name: validation
num_bytes: 245338
num_examples: 1000
- name: test
num_bytes: 255884
num_examples: 1000
download_size: 347215
dataset_size: 1749461
- config_name: PAN-X.ms
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 2965008
num_examples: 20000
- name: validation
num_bytes: 147495
num_examples: 1000
- name: test
num_bytes: 147148
num_examples: 1000
download_size: 708795
dataset_size: 3259651
- config_name: PAN-X.my
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 32715
num_examples: 100
- name: validation
num_bytes: 40408
num_examples: 100
- name: test
num_bytes: 37346
num_examples: 100
download_size: 39008
dataset_size: 110469
- config_name: PAN-X.nl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4062149
num_examples: 20000
- name: validation
num_bytes: 2016836
num_examples: 10000
- name: test
num_bytes: 2038618
num_examples: 10000
download_size: 1943893
dataset_size: 8117603
- config_name: PAN-X.pt
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3149243
num_examples: 20000
- name: validation
num_bytes: 1575121
num_examples: 10000
- name: test
num_bytes: 1562605
num_examples: 10000
download_size: 1540478
dataset_size: 6286969
- config_name: PAN-X.ru
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4121751
num_examples: 20000
- name: validation
num_bytes: 2053149
num_examples: 10000
- name: test
num_bytes: 2074125
num_examples: 10000
download_size: 2127730
dataset_size: 8249025
- config_name: PAN-X.sw
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 135891
num_examples: 1000
- name: validation
num_bytes: 136348
num_examples: 1000
- name: test
num_bytes: 140211
num_examples: 1000
download_size: 87435
dataset_size: 412450
- config_name: PAN-X.ta
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 4122090
num_examples: 15000
- name: validation
num_bytes: 277605
num_examples: 1000
- name: test
num_bytes: 278094
num_examples: 1000
download_size: 1044729
dataset_size: 4677789
- config_name: PAN-X.te
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 295390
num_examples: 1000
- name: validation
num_bytes: 293261
num_examples: 1000
- name: test
num_bytes: 296943
num_examples: 1000
download_size: 200516
dataset_size: 885594
- config_name: PAN-X.th
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 27132989
num_examples: 20000
- name: validation
num_bytes: 13262717
num_examples: 10000
- name: test
num_bytes: 13586908
num_examples: 10000
download_size: 2569566
dataset_size: 53982614
- config_name: PAN-X.tl
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 1168697
num_examples: 10000
- name: validation
num_bytes: 114136
num_examples: 1000
- name: test
num_bytes: 117884
num_examples: 1000
download_size: 308160
dataset_size: 1400717
- config_name: PAN-X.tr
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3779130
num_examples: 20000
- name: validation
num_bytes: 1915332
num_examples: 10000
- name: test
num_bytes: 1911483
num_examples: 10000
download_size: 2000699
dataset_size: 7605945
- config_name: PAN-X.ur
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3072236
num_examples: 20000
- name: validation
num_bytes: 152128
num_examples: 1000
- name: test
num_bytes: 151902
num_examples: 1000
download_size: 610869
dataset_size: 3376266
- config_name: PAN-X.vi
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 3153187
num_examples: 20000
- name: validation
num_bytes: 1565123
num_examples: 10000
- name: test
num_bytes: 1580196
num_examples: 10000
download_size: 1375631
dataset_size: 6298506
- config_name: PAN-X.yo
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 14689
num_examples: 100
- name: validation
num_bytes: 13225
num_examples: 100
- name: test
num_bytes: 13513
num_examples: 100
download_size: 17337
dataset_size: 41427
- config_name: PAN-X.zh
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
- name: langs
sequence: string
splits:
- name: train
num_bytes: 8832011
num_examples: 20000
- name: validation
num_bytes: 4491305
num_examples: 10000
- name: test
num_bytes: 4363152
num_examples: 10000
download_size: 2083198
dataset_size: 17686468
- config_name: PAWS-X.de
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12451823
num_examples: 49380
- name: validation
num_bytes: 499997
num_examples: 2000
- name: test
num_bytes: 510182
num_examples: 2000
download_size: 9294034
dataset_size: 13462002
- config_name: PAWS-X.en
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 11827659
num_examples: 49175
- name: validation
num_bytes: 478279
num_examples: 2000
- name: test
num_bytes: 480726
num_examples: 2000
download_size: 8717639
dataset_size: 12786664
- config_name: PAWS-X.es
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12462047
num_examples: 49401
- name: validation
num_bytes: 494057
num_examples: 1961
- name: test
num_bytes: 505035
num_examples: 2000
download_size: 9229918
dataset_size: 13461139
- config_name: PAWS-X.fr
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 12948452
num_examples: 49399
- name: validation
num_bytes: 516099
num_examples: 1988
- name: test
num_bytes: 521019
num_examples: 2000
download_size: 9464987
dataset_size: 13985570
- config_name: PAWS-X.ja
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 14695593
num_examples: 49401
- name: validation
num_bytes: 647762
num_examples: 2000
- name: test
num_bytes: 654628
num_examples: 2000
download_size: 10136228
dataset_size: 15997983
- config_name: PAWS-X.ko
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 13542597
num_examples: 49164
- name: validation
num_bytes: 540775
num_examples: 2000
- name: test
num_bytes: 547966
num_examples: 1999
download_size: 9926292
dataset_size: 14631338
- config_name: PAWS-X.zh
features:
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 10469652
num_examples: 49401
- name: validation
num_bytes: 459108
num_examples: 2000
- name: test
num_bytes: 460626
num_examples: 2000
download_size: 8878855
dataset_size: 11389386
- config_name: SQuAD
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 79316858
num_examples: 87599
- name: validation
num_bytes: 10472597
num_examples: 10570
download_size: 16272656
dataset_size: 89789455
- config_name: XNLI
features:
- name: language
dtype: string
- name: sentence1
dtype: string
- name: sentence2
dtype: string
- name: gold_label
dtype: string
splits:
- name: test
num_bytes: 20359372
num_examples: 75150
- name: validation
num_bytes: 10049239
num_examples: 37350
download_size: 8881623
dataset_size: 30408611
- config_name: XQuAD.ar
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1722775
num_examples: 1190
download_size: 263032
dataset_size: 1722775
- config_name: XQuAD.de
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1283277
num_examples: 1190
download_size: 241987
dataset_size: 1283277
- config_name: XQuAD.el
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2206666
num_examples: 1190
download_size: 324409
dataset_size: 2206666
- config_name: XQuAD.en
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1116099
num_examples: 1190
download_size: 212402
dataset_size: 1116099
- config_name: XQuAD.es
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1273475
num_examples: 1190
download_size: 236904
dataset_size: 1273475
- config_name: XQuAD.hi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2682951
num_examples: 1190
download_size: 322113
dataset_size: 2682951
- config_name: XQuAD.ru
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2136966
num_examples: 1190
download_size: 321758
dataset_size: 2136966
- config_name: XQuAD.th
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 2854935
num_examples: 1190
download_size: 337337
dataset_size: 2854935
- config_name: XQuAD.tr
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1210739
num_examples: 1190
download_size: 228394
dataset_size: 1210739
- config_name: XQuAD.vi
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 1477215
num_examples: 1190
download_size: 237674
dataset_size: 1477215
- config_name: XQuAD.zh
features:
- name: id
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: validation
num_bytes: 984217
num_examples: 1190
download_size: 205798
dataset_size: 984217
- config_name: bucc18.de
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 248691
num_examples: 1038
- name: test
num_bytes: 2325685
num_examples: 9580
download_size: 1636130
dataset_size: 2574376
- config_name: bucc18.fr
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 212497
num_examples: 929
- name: test
num_bytes: 2082403
num_examples: 9086
download_size: 1437096
dataset_size: 2294900
- config_name: bucc18.ru
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 761331
num_examples: 2374
- name: test
num_bytes: 4641646
num_examples: 14435
download_size: 3074476
dataset_size: 5402977
- config_name: bucc18.zh
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 55723
num_examples: 257
- name: test
num_bytes: 415909
num_examples: 1899
download_size: 320378
dataset_size: 471632
- config_name: tatoeba.afr
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 250635
num_examples: 1000
download_size: 47676
dataset_size: 250635
- config_name: tatoeba.ara
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 263650
num_examples: 1000
download_size: 51228
dataset_size: 263650
- config_name: tatoeba.ben
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 282703
num_examples: 1000
download_size: 51362
dataset_size: 282703
- config_name: tatoeba.bul
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 293279
num_examples: 1000
download_size: 62454
dataset_size: 293279
- config_name: tatoeba.cmn
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 259931
num_examples: 1000
download_size: 58281
dataset_size: 259931
- config_name: tatoeba.deu
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 296567
num_examples: 1000
download_size: 79066
dataset_size: 296567
- config_name: tatoeba.ell
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 269961
num_examples: 1000
download_size: 52251
dataset_size: 269961
- config_name: tatoeba.est
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 250728
num_examples: 1000
download_size: 49968
dataset_size: 250728
- config_name: tatoeba.eus
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 257068
num_examples: 1000
download_size: 54271
dataset_size: 257068
- config_name: tatoeba.fin
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 266669
num_examples: 1000
download_size: 60580
dataset_size: 266669
- config_name: tatoeba.fra
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 271018
num_examples: 1000
download_size: 60925
dataset_size: 271018
- config_name: tatoeba.heb
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 274500
num_examples: 1000
download_size: 57306
dataset_size: 274500
- config_name: tatoeba.hin
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 313558
num_examples: 1000
download_size: 68816
dataset_size: 313558
- config_name: tatoeba.hun
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 259889
num_examples: 1000
download_size: 58096
dataset_size: 259889
- config_name: tatoeba.ind
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 265844
num_examples: 1000
download_size: 57047
dataset_size: 265844
- config_name: tatoeba.ita
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 256833
num_examples: 1000
download_size: 52422
dataset_size: 256833
- config_name: tatoeba.jav
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 53068
num_examples: 205
download_size: 15208
dataset_size: 53068
- config_name: tatoeba.jpn
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 284083
num_examples: 1000
download_size: 66620
dataset_size: 284083
- config_name: tatoeba.kat
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 214646
num_examples: 746
download_size: 41759
dataset_size: 214646
- config_name: tatoeba.kaz
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 157003
num_examples: 575
download_size: 35693
dataset_size: 157003
- config_name: tatoeba.kor
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 270139
num_examples: 1000
download_size: 61210
dataset_size: 270139
- config_name: tatoeba.mal
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 225934
num_examples: 687
download_size: 51077
dataset_size: 225934
- config_name: tatoeba.mar
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 291542
num_examples: 1000
download_size: 56575
dataset_size: 291542
- config_name: tatoeba.nld
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 264263
num_examples: 1000
download_size: 59774
dataset_size: 264263
- config_name: tatoeba.pes
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 284719
num_examples: 1000
download_size: 64642
dataset_size: 284719
- config_name: tatoeba.por
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 266185
num_examples: 1000
download_size: 58250
dataset_size: 266185
- config_name: tatoeba.rus
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 283472
num_examples: 1000
download_size: 61601
dataset_size: 283472
- config_name: tatoeba.spa
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 263266
num_examples: 1000
download_size: 57055
dataset_size: 263266
- config_name: tatoeba.swh
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 94957
num_examples: 390
download_size: 19362
dataset_size: 94957
- config_name: tatoeba.tam
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 98078
num_examples: 307
download_size: 23648
dataset_size: 98078
- config_name: tatoeba.tel
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 69837
num_examples: 234
download_size: 18260
dataset_size: 69837
- config_name: tatoeba.tgl
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 259138
num_examples: 1000
download_size: 53699
dataset_size: 259138
- config_name: tatoeba.tha
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 167866
num_examples: 548
download_size: 39659
dataset_size: 167866
- config_name: tatoeba.tur
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 262885
num_examples: 1000
download_size: 54137
dataset_size: 262885
- config_name: tatoeba.urd
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 279712
num_examples: 1000
download_size: 60399
dataset_size: 279712
- config_name: tatoeba.vie
features:
- name: source_sentence
dtype: string
- name: target_sentence
dtype: string
- name: source_lang
dtype: string
- name: target_lang
dtype: string
splits:
- name: validation
num_bytes: 282407
num_examples: 1000
download_size: 66746
dataset_size: 282407
- config_name: tydiqa
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: answer_start
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 52948467
num_examples: 49881
- name: validation
num_bytes: 5006433
num_examples: 5077
download_size: 29402238
dataset_size: 57954900
- config_name: udpos.Afrikaans
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 586370
num_examples: 1315
- name: validation
num_bytes: 91290
num_examples: 194
- name: test
num_bytes: 174244
num_examples: 425
download_size: 193788
dataset_size: 851904
- config_name: udpos.Arabic
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 4453682
num_examples: 6075
- name: validation
num_bytes: 593650
num_examples: 909
- name: test
num_bytes: 973822
num_examples: 1680
download_size: 1186113
dataset_size: 6021154
- config_name: udpos.Basque
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 1327713
num_examples: 5396
- name: validation
num_bytes: 438671
num_examples: 1798
- name: test
num_bytes: 444644
num_examples: 1799
download_size: 703094
dataset_size: 2211028
- config_name: udpos.Bulgarian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2689767
num_examples: 8907
- name: validation
num_bytes: 347117
num_examples: 1115
- name: test
num_bytes: 339947
num_examples: 1116
download_size: 926186
dataset_size: 3376831
- config_name: udpos.Chinese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 4218891
num_examples: 18998
- name: validation
num_bytes: 594448
num_examples: 3038
- name: test
num_bytes: 1236051
num_examples: 5528
download_size: 1471747
dataset_size: 6049390
- config_name: udpos.Dutch
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 4517994
num_examples: 18051
- name: validation
num_bytes: 393592
num_examples: 1394
- name: test
num_bytes: 397904
num_examples: 1471
download_size: 1410982
dataset_size: 5309490
- config_name: udpos.English
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 6225509
num_examples: 21253
- name: validation
num_bytes: 1042040
num_examples: 3974
- name: test
num_bytes: 1421148
num_examples: 5440
download_size: 2116535
dataset_size: 8688697
- config_name: udpos.Estonian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 6614893
num_examples: 25749
- name: validation
num_bytes: 814171
num_examples: 3125
- name: test
num_bytes: 1065701
num_examples: 3760
download_size: 2619121
dataset_size: 8494765
- config_name: udpos.Finnish
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 5613706
num_examples: 27198
- name: validation
num_bytes: 656646
num_examples: 3239
- name: test
num_bytes: 1025726
num_examples: 4422
download_size: 2503217
dataset_size: 7296078
- config_name: udpos.French
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 10118933
num_examples: 47308
- name: validation
num_bytes: 1294096
num_examples: 5979
- name: test
num_bytes: 1731049
num_examples: 9465
download_size: 3378680
dataset_size: 13144078
- config_name: udpos.German
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 54773777
num_examples: 166849
- name: validation
num_bytes: 6044838
num_examples: 19233
- name: test
num_bytes: 7345863
num_examples: 22458
download_size: 18623155
dataset_size: 68164478
- config_name: udpos.Greek
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 8932104
num_examples: 28152
- name: validation
num_bytes: 1062447
num_examples: 2559
- name: test
num_bytes: 1028665
num_examples: 2809
download_size: 2763293
dataset_size: 11023216
- config_name: udpos.Hebrew
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2505691
num_examples: 5241
- name: validation
num_bytes: 210013
num_examples: 484
- name: test
num_bytes: 223865
num_examples: 491
download_size: 624771
dataset_size: 2939569
- config_name: udpos.Hindi
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 6690250
num_examples: 13304
- name: validation
num_bytes: 839702
num_examples: 1659
- name: test
num_bytes: 1400225
num_examples: 2684
download_size: 1468314
dataset_size: 8930177
- config_name: udpos.Hungarian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 372226
num_examples: 910
- name: validation
num_bytes: 215879
num_examples: 441
- name: test
num_bytes: 193728
num_examples: 449
download_size: 251882
dataset_size: 781833
- config_name: udpos.Indonesian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 1710678
num_examples: 4477
- name: validation
num_bytes: 220863
num_examples: 559
- name: test
num_bytes: 557101
num_examples: 1557
download_size: 684225
dataset_size: 2488642
- config_name: udpos.Italian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 11299293
num_examples: 29685
- name: validation
num_bytes: 988996
num_examples: 2278
- name: test
num_bytes: 1337869
num_examples: 3518
download_size: 3256246
dataset_size: 13626158
- config_name: udpos.Japanese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2792951
num_examples: 7125
- name: validation
num_bytes: 200356
num_examples: 511
- name: test
num_bytes: 928902
num_examples: 2372
download_size: 1012282
dataset_size: 3922209
- config_name: udpos.Kazakh
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 11438
num_examples: 31
- name: test
num_bytes: 228924
num_examples: 1047
download_size: 76300
dataset_size: 240362
- config_name: udpos.Korean
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 7341267
num_examples: 27410
- name: validation
num_bytes: 782587
num_examples: 3016
- name: test
num_bytes: 1162539
num_examples: 4276
download_size: 3115101
dataset_size: 9286393
- config_name: udpos.Marathi
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 59023
num_examples: 373
- name: validation
num_bytes: 8497
num_examples: 46
- name: test
num_bytes: 7871
num_examples: 47
download_size: 22133
dataset_size: 75391
- config_name: udpos.Persian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2400776
num_examples: 4798
- name: validation
num_bytes: 317053
num_examples: 599
- name: test
num_bytes: 320683
num_examples: 600
download_size: 606912
dataset_size: 3038512
- config_name: udpos.Portuguese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 7669556
num_examples: 17992
- name: validation
num_bytes: 712397
num_examples: 1770
- name: test
num_bytes: 1082582
num_examples: 2681
download_size: 2505672
dataset_size: 9464535
- config_name: udpos.Russian
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 24230098
num_examples: 67435
- name: validation
num_bytes: 3457031
num_examples: 9960
- name: test
num_bytes: 4236693
num_examples: 11336
download_size: 8818512
dataset_size: 31923822
- config_name: udpos.Spanish
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 13858406
num_examples: 28492
- name: validation
num_bytes: 1498765
num_examples: 3054
- name: test
num_bytes: 1476500
num_examples: 3147
download_size: 4347905
dataset_size: 16833671
- config_name: udpos.Tagalog
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: test
num_bytes: 5153
num_examples: 55
download_size: 3345
dataset_size: 5153
- config_name: udpos.Tamil
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 202596
num_examples: 400
- name: validation
num_bytes: 40031
num_examples: 80
- name: test
num_bytes: 62366
num_examples: 120
download_size: 73764
dataset_size: 304993
- config_name: udpos.Telugu
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 138049
num_examples: 1051
- name: validation
num_bytes: 17990
num_examples: 131
- name: test
num_bytes: 19575
num_examples: 146
download_size: 46045
dataset_size: 175614
- config_name: udpos.Thai
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: test
num_bytes: 561336
num_examples: 1000
download_size: 92925
dataset_size: 561336
- config_name: udpos.Turkish
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 704405
num_examples: 3664
- name: validation
num_bytes: 186455
num_examples: 988
- name: test
num_bytes: 827382
num_examples: 4785
download_size: 581177
dataset_size: 1718242
- config_name: udpos.Urdu
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 2107362
num_examples: 4043
- name: validation
num_bytes: 284261
num_examples: 552
- name: test
num_bytes: 288553
num_examples: 535
download_size: 499594
dataset_size: 2680176
- config_name: udpos.Vietnamese
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: train
num_bytes: 367335
num_examples: 1400
- name: validation
num_bytes: 206188
num_examples: 800
- name: test
num_bytes: 214063
num_examples: 800
download_size: 181239
dataset_size: 787586
- config_name: udpos.Yoruba
features:
- name: tokens
sequence: string
- name: pos_tags
sequence:
class_label:
names:
'0': ADJ
'1': ADP
'2': ADV
'3': AUX
'4': CCONJ
'5': DET
'6': INTJ
'7': NOUN
'8': NUM
'9': PART
'10': PRON
'11': PROPN
'12': PUNCT
'13': SCONJ
'14': SYM
'15': VERB
'16': X
splits:
- name: test
num_bytes: 44656
num_examples: 100
download_size: 10151
dataset_size: 44656
configs:
- config_name: MLQA.ar.ar
data_files:
- split: test
path: MLQA.ar.ar/test-*
- split: validation
path: MLQA.ar.ar/validation-*
- config_name: MLQA.ar.de
data_files:
- split: test
path: MLQA.ar.de/test-*
- split: validation
path: MLQA.ar.de/validation-*
- config_name: MLQA.ar.en
data_files:
- split: test
path: MLQA.ar.en/test-*
- split: validation
path: MLQA.ar.en/validation-*
- config_name: MLQA.ar.es
data_files:
- split: test
path: MLQA.ar.es/test-*
- split: validation
path: MLQA.ar.es/validation-*
- config_name: MLQA.ar.hi
data_files:
- split: test
path: MLQA.ar.hi/test-*
- split: validation
path: MLQA.ar.hi/validation-*
- config_name: MLQA.ar.vi
data_files:
- split: test
path: MLQA.ar.vi/test-*
- split: validation
path: MLQA.ar.vi/validation-*
- config_name: MLQA.ar.zh
data_files:
- split: test
path: MLQA.ar.zh/test-*
- split: validation
path: MLQA.ar.zh/validation-*
- config_name: MLQA.de.ar
data_files:
- split: test
path: MLQA.de.ar/test-*
- split: validation
path: MLQA.de.ar/validation-*
- config_name: MLQA.de.de
data_files:
- split: test
path: MLQA.de.de/test-*
- split: validation
path: MLQA.de.de/validation-*
- config_name: MLQA.de.en
data_files:
- split: test
path: MLQA.de.en/test-*
- split: validation
path: MLQA.de.en/validation-*
- config_name: MLQA.de.es
data_files:
- split: test
path: MLQA.de.es/test-*
- split: validation
path: MLQA.de.es/validation-*
- config_name: MLQA.de.hi
data_files:
- split: test
path: MLQA.de.hi/test-*
- split: validation
path: MLQA.de.hi/validation-*
- config_name: MLQA.de.vi
data_files:
- split: test
path: MLQA.de.vi/test-*
- split: validation
path: MLQA.de.vi/validation-*
- config_name: MLQA.de.zh
data_files:
- split: test
path: MLQA.de.zh/test-*
- split: validation
path: MLQA.de.zh/validation-*
- config_name: MLQA.en.ar
data_files:
- split: test
path: MLQA.en.ar/test-*
- split: validation
path: MLQA.en.ar/validation-*
- config_name: MLQA.en.de
data_files:
- split: test
path: MLQA.en.de/test-*
- split: validation
path: MLQA.en.de/validation-*
- config_name: MLQA.en.en
data_files:
- split: test
path: MLQA.en.en/test-*
- split: validation
path: MLQA.en.en/validation-*
- config_name: MLQA.en.es
data_files:
- split: test
path: MLQA.en.es/test-*
- split: validation
path: MLQA.en.es/validation-*
- config_name: MLQA.en.hi
data_files:
- split: test
path: MLQA.en.hi/test-*
- split: validation
path: MLQA.en.hi/validation-*
- config_name: MLQA.en.vi
data_files:
- split: test
path: MLQA.en.vi/test-*
- split: validation
path: MLQA.en.vi/validation-*
- config_name: MLQA.en.zh
data_files:
- split: test
path: MLQA.en.zh/test-*
- split: validation
path: MLQA.en.zh/validation-*
- config_name: MLQA.es.ar
data_files:
- split: test
path: MLQA.es.ar/test-*
- split: validation
path: MLQA.es.ar/validation-*
- config_name: MLQA.es.de
data_files:
- split: test
path: MLQA.es.de/test-*
- split: validation
path: MLQA.es.de/validation-*
- config_name: MLQA.es.en
data_files:
- split: test
path: MLQA.es.en/test-*
- split: validation
path: MLQA.es.en/validation-*
- config_name: MLQA.es.es
data_files:
- split: test
path: MLQA.es.es/test-*
- split: validation
path: MLQA.es.es/validation-*
- config_name: MLQA.es.hi
data_files:
- split: test
path: MLQA.es.hi/test-*
- split: validation
path: MLQA.es.hi/validation-*
- config_name: MLQA.es.vi
data_files:
- split: test
path: MLQA.es.vi/test-*
- split: validation
path: MLQA.es.vi/validation-*
- config_name: MLQA.es.zh
data_files:
- split: test
path: MLQA.es.zh/test-*
- split: validation
path: MLQA.es.zh/validation-*
- config_name: MLQA.hi.ar
data_files:
- split: test
path: MLQA.hi.ar/test-*
- split: validation
path: MLQA.hi.ar/validation-*
- config_name: MLQA.hi.de
data_files:
- split: test
path: MLQA.hi.de/test-*
- split: validation
path: MLQA.hi.de/validation-*
- config_name: MLQA.hi.en
data_files:
- split: test
path: MLQA.hi.en/test-*
- split: validation
path: MLQA.hi.en/validation-*
- config_name: MLQA.hi.es
data_files:
- split: test
path: MLQA.hi.es/test-*
- split: validation
path: MLQA.hi.es/validation-*
- config_name: MLQA.hi.hi
data_files:
- split: test
path: MLQA.hi.hi/test-*
- split: validation
path: MLQA.hi.hi/validation-*
- config_name: MLQA.hi.vi
data_files:
- split: test
path: MLQA.hi.vi/test-*
- split: validation
path: MLQA.hi.vi/validation-*
- config_name: MLQA.hi.zh
data_files:
- split: test
path: MLQA.hi.zh/test-*
- split: validation
path: MLQA.hi.zh/validation-*
- config_name: MLQA.vi.ar
data_files:
- split: test
path: MLQA.vi.ar/test-*
- split: validation
path: MLQA.vi.ar/validation-*
- config_name: MLQA.vi.de
data_files:
- split: test
path: MLQA.vi.de/test-*
- split: validation
path: MLQA.vi.de/validation-*
- config_name: MLQA.vi.en
data_files:
- split: test
path: MLQA.vi.en/test-*
- split: validation
path: MLQA.vi.en/validation-*
- config_name: MLQA.vi.es
data_files:
- split: test
path: MLQA.vi.es/test-*
- split: validation
path: MLQA.vi.es/validation-*
- config_name: MLQA.vi.hi
data_files:
- split: test
path: MLQA.vi.hi/test-*
- split: validation
path: MLQA.vi.hi/validation-*
- config_name: MLQA.vi.vi
data_files:
- split: test
path: MLQA.vi.vi/test-*
- split: validation
path: MLQA.vi.vi/validation-*
- config_name: MLQA.vi.zh
data_files:
- split: test
path: MLQA.vi.zh/test-*
- split: validation
path: MLQA.vi.zh/validation-*
- config_name: MLQA.zh.ar
data_files:
- split: test
path: MLQA.zh.ar/test-*
- split: validation
path: MLQA.zh.ar/validation-*
- config_name: MLQA.zh.de
data_files:
- split: test
path: MLQA.zh.de/test-*
- split: validation
path: MLQA.zh.de/validation-*
- config_name: MLQA.zh.en
data_files:
- split: test
path: MLQA.zh.en/test-*
- split: validation
path: MLQA.zh.en/validation-*
- config_name: MLQA.zh.es
data_files:
- split: test
path: MLQA.zh.es/test-*
- split: validation
path: MLQA.zh.es/validation-*
- config_name: MLQA.zh.hi
data_files:
- split: test
path: MLQA.zh.hi/test-*
- split: validation
path: MLQA.zh.hi/validation-*
- config_name: MLQA.zh.vi
data_files:
- split: test
path: MLQA.zh.vi/test-*
- split: validation
path: MLQA.zh.vi/validation-*
- config_name: MLQA.zh.zh
data_files:
- split: test
path: MLQA.zh.zh/test-*
- split: validation
path: MLQA.zh.zh/validation-*
- config_name: PAN-X.af
data_files:
- split: train
path: PAN-X.af/train-*
- split: validation
path: PAN-X.af/validation-*
- split: test
path: PAN-X.af/test-*
- config_name: PAN-X.ar
data_files:
- split: train
path: PAN-X.ar/train-*
- split: validation
path: PAN-X.ar/validation-*
- split: test
path: PAN-X.ar/test-*
- config_name: PAN-X.bg
data_files:
- split: train
path: PAN-X.bg/train-*
- split: validation
path: PAN-X.bg/validation-*
- split: test
path: PAN-X.bg/test-*
- config_name: PAN-X.bn
data_files:
- split: train
path: PAN-X.bn/train-*
- split: validation
path: PAN-X.bn/validation-*
- split: test
path: PAN-X.bn/test-*
- config_name: PAN-X.de
data_files:
- split: train
path: PAN-X.de/train-*
- split: validation
path: PAN-X.de/validation-*
- split: test
path: PAN-X.de/test-*
- config_name: PAN-X.el
data_files:
- split: train
path: PAN-X.el/train-*
- split: validation
path: PAN-X.el/validation-*
- split: test
path: PAN-X.el/test-*
- config_name: PAN-X.en
data_files:
- split: train
path: PAN-X.en/train-*
- split: validation
path: PAN-X.en/validation-*
- split: test
path: PAN-X.en/test-*
- config_name: PAN-X.es
data_files:
- split: train
path: PAN-X.es/train-*
- split: validation
path: PAN-X.es/validation-*
- split: test
path: PAN-X.es/test-*
- config_name: PAN-X.et
data_files:
- split: train
path: PAN-X.et/train-*
- split: validation
path: PAN-X.et/validation-*
- split: test
path: PAN-X.et/test-*
- config_name: PAN-X.eu
data_files:
- split: train
path: PAN-X.eu/train-*
- split: validation
path: PAN-X.eu/validation-*
- split: test
path: PAN-X.eu/test-*
- config_name: PAN-X.fa
data_files:
- split: train
path: PAN-X.fa/train-*
- split: validation
path: PAN-X.fa/validation-*
- split: test
path: PAN-X.fa/test-*
- config_name: PAN-X.fi
data_files:
- split: train
path: PAN-X.fi/train-*
- split: validation
path: PAN-X.fi/validation-*
- split: test
path: PAN-X.fi/test-*
- config_name: PAN-X.fr
data_files:
- split: train
path: PAN-X.fr/train-*
- split: validation
path: PAN-X.fr/validation-*
- split: test
path: PAN-X.fr/test-*
- config_name: PAN-X.he
data_files:
- split: train
path: PAN-X.he/train-*
- split: validation
path: PAN-X.he/validation-*
- split: test
path: PAN-X.he/test-*
- config_name: PAN-X.hi
data_files:
- split: train
path: PAN-X.hi/train-*
- split: validation
path: PAN-X.hi/validation-*
- split: test
path: PAN-X.hi/test-*
- config_name: PAN-X.hu
data_files:
- split: train
path: PAN-X.hu/train-*
- split: validation
path: PAN-X.hu/validation-*
- split: test
path: PAN-X.hu/test-*
- config_name: PAN-X.id
data_files:
- split: train
path: PAN-X.id/train-*
- split: validation
path: PAN-X.id/validation-*
- split: test
path: PAN-X.id/test-*
- config_name: PAN-X.it
data_files:
- split: train
path: PAN-X.it/train-*
- split: validation
path: PAN-X.it/validation-*
- split: test
path: PAN-X.it/test-*
- config_name: PAN-X.ja
data_files:
- split: train
path: PAN-X.ja/train-*
- split: validation
path: PAN-X.ja/validation-*
- split: test
path: PAN-X.ja/test-*
- config_name: PAN-X.jv
data_files:
- split: train
path: PAN-X.jv/train-*
- split: validation
path: PAN-X.jv/validation-*
- split: test
path: PAN-X.jv/test-*
- config_name: PAN-X.ka
data_files:
- split: train
path: PAN-X.ka/train-*
- split: validation
path: PAN-X.ka/validation-*
- split: test
path: PAN-X.ka/test-*
- config_name: PAN-X.kk
data_files:
- split: train
path: PAN-X.kk/train-*
- split: validation
path: PAN-X.kk/validation-*
- split: test
path: PAN-X.kk/test-*
- config_name: PAN-X.ko
data_files:
- split: train
path: PAN-X.ko/train-*
- split: validation
path: PAN-X.ko/validation-*
- split: test
path: PAN-X.ko/test-*
- config_name: PAN-X.ml
data_files:
- split: train
path: PAN-X.ml/train-*
- split: validation
path: PAN-X.ml/validation-*
- split: test
path: PAN-X.ml/test-*
- config_name: PAN-X.mr
data_files:
- split: train
path: PAN-X.mr/train-*
- split: validation
path: PAN-X.mr/validation-*
- split: test
path: PAN-X.mr/test-*
- config_name: PAN-X.ms
data_files:
- split: train
path: PAN-X.ms/train-*
- split: validation
path: PAN-X.ms/validation-*
- split: test
path: PAN-X.ms/test-*
- config_name: PAN-X.my
data_files:
- split: train
path: PAN-X.my/train-*
- split: validation
path: PAN-X.my/validation-*
- split: test
path: PAN-X.my/test-*
- config_name: PAN-X.nl
data_files:
- split: train
path: PAN-X.nl/train-*
- split: validation
path: PAN-X.nl/validation-*
- split: test
path: PAN-X.nl/test-*
- config_name: PAN-X.pt
data_files:
- split: train
path: PAN-X.pt/train-*
- split: validation
path: PAN-X.pt/validation-*
- split: test
path: PAN-X.pt/test-*
- config_name: PAN-X.ru
data_files:
- split: train
path: PAN-X.ru/train-*
- split: validation
path: PAN-X.ru/validation-*
- split: test
path: PAN-X.ru/test-*
- config_name: PAN-X.sw
data_files:
- split: train
path: PAN-X.sw/train-*
- split: validation
path: PAN-X.sw/validation-*
- split: test
path: PAN-X.sw/test-*
- config_name: PAN-X.ta
data_files:
- split: train
path: PAN-X.ta/train-*
- split: validation
path: PAN-X.ta/validation-*
- split: test
path: PAN-X.ta/test-*
- config_name: PAN-X.te
data_files:
- split: train
path: PAN-X.te/train-*
- split: validation
path: PAN-X.te/validation-*
- split: test
path: PAN-X.te/test-*
- config_name: PAN-X.th
data_files:
- split: train
path: PAN-X.th/train-*
- split: validation
path: PAN-X.th/validation-*
- split: test
path: PAN-X.th/test-*
- config_name: PAN-X.tl
data_files:
- split: train
path: PAN-X.tl/train-*
- split: validation
path: PAN-X.tl/validation-*
- split: test
path: PAN-X.tl/test-*
- config_name: PAN-X.tr
data_files:
- split: train
path: PAN-X.tr/train-*
- split: validation
path: PAN-X.tr/validation-*
- split: test
path: PAN-X.tr/test-*
- config_name: PAN-X.ur
data_files:
- split: train
path: PAN-X.ur/train-*
- split: validation
path: PAN-X.ur/validation-*
- split: test
path: PAN-X.ur/test-*
- config_name: PAN-X.vi
data_files:
- split: train
path: PAN-X.vi/train-*
- split: validation
path: PAN-X.vi/validation-*
- split: test
path: PAN-X.vi/test-*
- config_name: PAN-X.yo
data_files:
- split: train
path: PAN-X.yo/train-*
- split: validation
path: PAN-X.yo/validation-*
- split: test
path: PAN-X.yo/test-*
- config_name: PAN-X.zh
data_files:
- split: train
path: PAN-X.zh/train-*
- split: validation
path: PAN-X.zh/validation-*
- split: test
path: PAN-X.zh/test-*
- config_name: PAWS-X.de
data_files:
- split: train
path: PAWS-X.de/train-*
- split: validation
path: PAWS-X.de/validation-*
- split: test
path: PAWS-X.de/test-*
- config_name: PAWS-X.en
data_files:
- split: train
path: PAWS-X.en/train-*
- split: validation
path: PAWS-X.en/validation-*
- split: test
path: PAWS-X.en/test-*
- config_name: PAWS-X.es
data_files:
- split: train
path: PAWS-X.es/train-*
- split: validation
path: PAWS-X.es/validation-*
- split: test
path: PAWS-X.es/test-*
- config_name: PAWS-X.fr
data_files:
- split: train
path: PAWS-X.fr/train-*
- split: validation
path: PAWS-X.fr/validation-*
- split: test
path: PAWS-X.fr/test-*
- config_name: PAWS-X.ja
data_files:
- split: train
path: PAWS-X.ja/train-*
- split: validation
path: PAWS-X.ja/validation-*
- split: test
path: PAWS-X.ja/test-*
- config_name: PAWS-X.ko
data_files:
- split: train
path: PAWS-X.ko/train-*
- split: validation
path: PAWS-X.ko/validation-*
- split: test
path: PAWS-X.ko/test-*
- config_name: PAWS-X.zh
data_files:
- split: train
path: PAWS-X.zh/train-*
- split: validation
path: PAWS-X.zh/validation-*
- split: test
path: PAWS-X.zh/test-*
- config_name: SQuAD
data_files:
- split: train
path: SQuAD/train-*
- split: validation
path: SQuAD/validation-*
- config_name: XNLI
data_files:
- split: test
path: XNLI/test-*
- split: validation
path: XNLI/validation-*
- config_name: XQuAD.ar
data_files:
- split: validation
path: XQuAD.ar/validation-*
- config_name: XQuAD.de
data_files:
- split: validation
path: XQuAD.de/validation-*
- config_name: XQuAD.el
data_files:
- split: validation
path: XQuAD.el/validation-*
- config_name: XQuAD.en
data_files:
- split: validation
path: XQuAD.en/validation-*
- config_name: XQuAD.es
data_files:
- split: validation
path: XQuAD.es/validation-*
- config_name: XQuAD.hi
data_files:
- split: validation
path: XQuAD.hi/validation-*
- config_name: XQuAD.ru
data_files:
- split: validation
path: XQuAD.ru/validation-*
- config_name: XQuAD.th
data_files:
- split: validation
path: XQuAD.th/validation-*
- config_name: XQuAD.tr
data_files:
- split: validation
path: XQuAD.tr/validation-*
- config_name: XQuAD.vi
data_files:
- split: validation
path: XQuAD.vi/validation-*
- config_name: XQuAD.zh
data_files:
- split: validation
path: XQuAD.zh/validation-*
- config_name: bucc18.de
data_files:
- split: validation
path: bucc18.de/validation-*
- split: test
path: bucc18.de/test-*
- config_name: bucc18.fr
data_files:
- split: validation
path: bucc18.fr/validation-*
- split: test
path: bucc18.fr/test-*
- config_name: bucc18.ru
data_files:
- split: validation
path: bucc18.ru/validation-*
- split: test
path: bucc18.ru/test-*
- config_name: bucc18.zh
data_files:
- split: validation
path: bucc18.zh/validation-*
- split: test
path: bucc18.zh/test-*
- config_name: tatoeba.afr
data_files:
- split: validation
path: tatoeba.afr/validation-*
- config_name: tatoeba.ara
data_files:
- split: validation
path: tatoeba.ara/validation-*
- config_name: tatoeba.ben
data_files:
- split: validation
path: tatoeba.ben/validation-*
- config_name: tatoeba.bul
data_files:
- split: validation
path: tatoeba.bul/validation-*
- config_name: tatoeba.cmn
data_files:
- split: validation
path: tatoeba.cmn/validation-*
- config_name: tatoeba.deu
data_files:
- split: validation
path: tatoeba.deu/validation-*
- config_name: tatoeba.ell
data_files:
- split: validation
path: tatoeba.ell/validation-*
- config_name: tatoeba.est
data_files:
- split: validation
path: tatoeba.est/validation-*
- config_name: tatoeba.eus
data_files:
- split: validation
path: tatoeba.eus/validation-*
- config_name: tatoeba.fin
data_files:
- split: validation
path: tatoeba.fin/validation-*
- config_name: tatoeba.fra
data_files:
- split: validation
path: tatoeba.fra/validation-*
- config_name: tatoeba.heb
data_files:
- split: validation
path: tatoeba.heb/validation-*
- config_name: tatoeba.hin
data_files:
- split: validation
path: tatoeba.hin/validation-*
- config_name: tatoeba.hun
data_files:
- split: validation
path: tatoeba.hun/validation-*
- config_name: tatoeba.ind
data_files:
- split: validation
path: tatoeba.ind/validation-*
- config_name: tatoeba.ita
data_files:
- split: validation
path: tatoeba.ita/validation-*
- config_name: tatoeba.jav
data_files:
- split: validation
path: tatoeba.jav/validation-*
- config_name: tatoeba.jpn
data_files:
- split: validation
path: tatoeba.jpn/validation-*
- config_name: tatoeba.kat
data_files:
- split: validation
path: tatoeba.kat/validation-*
- config_name: tatoeba.kaz
data_files:
- split: validation
path: tatoeba.kaz/validation-*
- config_name: tatoeba.kor
data_files:
- split: validation
path: tatoeba.kor/validation-*
- config_name: tatoeba.mal
data_files:
- split: validation
path: tatoeba.mal/validation-*
- config_name: tatoeba.mar
data_files:
- split: validation
path: tatoeba.mar/validation-*
- config_name: tatoeba.nld
data_files:
- split: validation
path: tatoeba.nld/validation-*
- config_name: tatoeba.pes
data_files:
- split: validation
path: tatoeba.pes/validation-*
- config_name: tatoeba.por
data_files:
- split: validation
path: tatoeba.por/validation-*
- config_name: tatoeba.rus
data_files:
- split: validation
path: tatoeba.rus/validation-*
- config_name: tatoeba.spa
data_files:
- split: validation
path: tatoeba.spa/validation-*
- config_name: tatoeba.swh
data_files:
- split: validation
path: tatoeba.swh/validation-*
- config_name: tatoeba.tam
data_files:
- split: validation
path: tatoeba.tam/validation-*
- config_name: tatoeba.tel
data_files:
- split: validation
path: tatoeba.tel/validation-*
- config_name: tatoeba.tgl
data_files:
- split: validation
path: tatoeba.tgl/validation-*
- config_name: tatoeba.tha
data_files:
- split: validation
path: tatoeba.tha/validation-*
- config_name: tatoeba.tur
data_files:
- split: validation
path: tatoeba.tur/validation-*
- config_name: tatoeba.urd
data_files:
- split: validation
path: tatoeba.urd/validation-*
- config_name: tatoeba.vie
data_files:
- split: validation
path: tatoeba.vie/validation-*
- config_name: tydiqa
data_files:
- split: train
path: tydiqa/train-*
- split: validation
path: tydiqa/validation-*
- config_name: udpos.Afrikaans
data_files:
- split: train
path: udpos.Afrikaans/train-*
- split: validation
path: udpos.Afrikaans/validation-*
- split: test
path: udpos.Afrikaans/test-*
- config_name: udpos.Arabic
data_files:
- split: train
path: udpos.Arabic/train-*
- split: validation
path: udpos.Arabic/validation-*
- split: test
path: udpos.Arabic/test-*
- config_name: udpos.Basque
data_files:
- split: train
path: udpos.Basque/train-*
- split: validation
path: udpos.Basque/validation-*
- split: test
path: udpos.Basque/test-*
- config_name: udpos.Bulgarian
data_files:
- split: train
path: udpos.Bulgarian/train-*
- split: validation
path: udpos.Bulgarian/validation-*
- split: test
path: udpos.Bulgarian/test-*
- config_name: udpos.Chinese
data_files:
- split: train
path: udpos.Chinese/train-*
- split: validation
path: udpos.Chinese/validation-*
- split: test
path: udpos.Chinese/test-*
- config_name: udpos.Dutch
data_files:
- split: train
path: udpos.Dutch/train-*
- split: validation
path: udpos.Dutch/validation-*
- split: test
path: udpos.Dutch/test-*
- config_name: udpos.English
data_files:
- split: train
path: udpos.English/train-*
- split: validation
path: udpos.English/validation-*
- split: test
path: udpos.English/test-*
- config_name: udpos.Estonian
data_files:
- split: train
path: udpos.Estonian/train-*
- split: validation
path: udpos.Estonian/validation-*
- split: test
path: udpos.Estonian/test-*
- config_name: udpos.Finnish
data_files:
- split: train
path: udpos.Finnish/train-*
- split: validation
path: udpos.Finnish/validation-*
- split: test
path: udpos.Finnish/test-*
- config_name: udpos.French
data_files:
- split: train
path: udpos.French/train-*
- split: validation
path: udpos.French/validation-*
- split: test
path: udpos.French/test-*
- config_name: udpos.German
data_files:
- split: train
path: udpos.German/train-*
- split: validation
path: udpos.German/validation-*
- split: test
path: udpos.German/test-*
- config_name: udpos.Greek
data_files:
- split: train
path: udpos.Greek/train-*
- split: validation
path: udpos.Greek/validation-*
- split: test
path: udpos.Greek/test-*
- config_name: udpos.Hebrew
data_files:
- split: train
path: udpos.Hebrew/train-*
- split: validation
path: udpos.Hebrew/validation-*
- split: test
path: udpos.Hebrew/test-*
- config_name: udpos.Hindi
data_files:
- split: train
path: udpos.Hindi/train-*
- split: validation
path: udpos.Hindi/validation-*
- split: test
path: udpos.Hindi/test-*
- config_name: udpos.Hungarian
data_files:
- split: train
path: udpos.Hungarian/train-*
- split: validation
path: udpos.Hungarian/validation-*
- split: test
path: udpos.Hungarian/test-*
- config_name: udpos.Indonesian
data_files:
- split: train
path: udpos.Indonesian/train-*
- split: validation
path: udpos.Indonesian/validation-*
- split: test
path: udpos.Indonesian/test-*
- config_name: udpos.Italian
data_files:
- split: train
path: udpos.Italian/train-*
- split: validation
path: udpos.Italian/validation-*
- split: test
path: udpos.Italian/test-*
- config_name: udpos.Japanese
data_files:
- split: train
path: udpos.Japanese/train-*
- split: validation
path: udpos.Japanese/validation-*
- split: test
path: udpos.Japanese/test-*
- config_name: udpos.Kazakh
data_files:
- split: train
path: udpos.Kazakh/train-*
- split: test
path: udpos.Kazakh/test-*
- config_name: udpos.Korean
data_files:
- split: train
path: udpos.Korean/train-*
- split: validation
path: udpos.Korean/validation-*
- split: test
path: udpos.Korean/test-*
- config_name: udpos.Marathi
data_files:
- split: train
path: udpos.Marathi/train-*
- split: validation
path: udpos.Marathi/validation-*
- split: test
path: udpos.Marathi/test-*
- config_name: udpos.Persian
data_files:
- split: train
path: udpos.Persian/train-*
- split: validation
path: udpos.Persian/validation-*
- split: test
path: udpos.Persian/test-*
- config_name: udpos.Portuguese
data_files:
- split: train
path: udpos.Portuguese/train-*
- split: validation
path: udpos.Portuguese/validation-*
- split: test
path: udpos.Portuguese/test-*
- config_name: udpos.Russian
data_files:
- split: train
path: udpos.Russian/train-*
- split: validation
path: udpos.Russian/validation-*
- split: test
path: udpos.Russian/test-*
- config_name: udpos.Spanish
data_files:
- split: train
path: udpos.Spanish/train-*
- split: validation
path: udpos.Spanish/validation-*
- split: test
path: udpos.Spanish/test-*
- config_name: udpos.Tagalog
data_files:
- split: test
path: udpos.Tagalog/test-*
- config_name: udpos.Tamil
data_files:
- split: train
path: udpos.Tamil/train-*
- split: validation
path: udpos.Tamil/validation-*
- split: test
path: udpos.Tamil/test-*
- config_name: udpos.Telugu
data_files:
- split: train
path: udpos.Telugu/train-*
- split: validation
path: udpos.Telugu/validation-*
- split: test
path: udpos.Telugu/test-*
- config_name: udpos.Thai
data_files:
- split: test
path: udpos.Thai/test-*
- config_name: udpos.Turkish
data_files:
- split: train
path: udpos.Turkish/train-*
- split: validation
path: udpos.Turkish/validation-*
- split: test
path: udpos.Turkish/test-*
- config_name: udpos.Urdu
data_files:
- split: train
path: udpos.Urdu/train-*
- split: validation
path: udpos.Urdu/validation-*
- split: test
path: udpos.Urdu/test-*
- config_name: udpos.Vietnamese
data_files:
- split: train
path: udpos.Vietnamese/train-*
- split: validation
path: udpos.Vietnamese/validation-*
- split: test
path: udpos.Vietnamese/test-*
- config_name: udpos.Yoruba
data_files:
- split: test
path: udpos.Yoruba/test-*
---
# Dataset Card for "xtreme"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://github.com/google-research/xtreme](https://github.com/google-research/xtreme)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 15.88 GB
- **Size of the generated dataset:** 1.08 GB
- **Total amount of disk used:** 16.96 GB
### Dataset Summary
The Cross-lingual Natural Language Inference (XNLI) corpus is a crowd-sourced collection of 5,000 test and
2,500 dev pairs for the MultiNLI corpus. The pairs are annotated with textual entailment and translated into
14 languages: French, Spanish, German, Greek, Bulgarian, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese,
Hindi, Swahili and Urdu. This results in 112.5k annotated pairs. Each premise can be associated with the
corresponding hypothesis in the 15 languages, summing up to more than 1.5M combinations. The corpus is designed to
evaluate how models perform inference in any language (including low-resource ones like Swahili or Urdu) when only
English NLI data is available at training time. One solution is cross-lingual sentence encoding, for which XNLI
is an evaluation benchmark.
The Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark evaluates the cross-lingual
generalization ability of pre-trained multilingual models. It covers 40 typologically diverse languages
(spanning 12 language families) and includes nine tasks that collectively require reasoning about different levels of
syntax and semantics. The languages in XTREME are selected to maximize language diversity, coverage in existing tasks,
and availability of training data. Among these are many under-studied languages, such as the Dravidian languages Tamil
(spoken in southern India, Sri Lanka, and Singapore), Telugu and Malayalam (spoken mainly in southern India), and the
Niger-Congo languages Swahili and Yoruba, spoken in Africa.
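The MLQA configurations in this card pair a context language with a question language, yielding names such as `MLQA.ar.de`. A minimal sketch of how those names are enumerated (the seven language codes are taken from the configuration list above; the naming pattern is an observation about this card, not an official API):

```python
# The seven MLQA languages listed in this card's configurations.
langs = ["ar", "de", "en", "es", "hi", "vi", "zh"]

# Every ordered (context, question) pair is one configuration name.
mlqa_configs = [f"MLQA.{ctx}.{q}" for ctx in langs for q in langs]

print(len(mlqa_configs))  # 49 context/question combinations
```

Each of these 49 names appears as a `config_name` in the metadata above, with `test` and `validation` splits.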
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### MLQA.ar.ar
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 9.20 MB
- **Total amount of disk used:** 84.91 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.de
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 2.55 MB
- **Total amount of disk used:** 78.27 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.en
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 9.04 MB
- **Total amount of disk used:** 84.76 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.es
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 3.27 MB
- **Total amount of disk used:** 78.99 MB
An example of 'validation' looks as follows.
```
```
#### MLQA.ar.hi
- **Size of downloaded dataset files:** 75.72 MB
- **Size of the generated dataset:** 3.32 MB
- **Total amount of disk used:** 79.04 MB
An example of 'validation' looks as follows.
```
```
### Data Fields
The data fields are the same among all splits.
#### MLQA.ar.ar
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.de
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.en
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `answer_start`: an `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.es
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
#### MLQA.ar.hi
- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `answer_start`: a `int32` feature.
- `text`: a `string` feature.
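The `answers` feature pairs character offsets with answer strings: each `answer_start` indexes into `context` at the position where the corresponding `text` begins. A minimal sketch of how the two relate (the record below is illustrative, not drawn from MLQA):

```python
# A hypothetical record matching the schema described above
# (field names are from the card; the values are invented for illustration).
record = {
    "id": "example-0",
    "title": "France",
    "context": "Paris is the capital and most populous city of France.",
    "question": "What is the capital of France?",
    "answers": {"answer_start": [0], "text": ["Paris"]},
}

def answer_span(rec):
    """Recover the answer text from `context` via `answer_start`."""
    start = rec["answers"]["answer_start"][0]
    text = rec["answers"]["text"][0]
    # answer_start indexes into `context`, so the slice must equal `text`
    assert rec["context"][start:start + len(text)] == text
    return text

print(answer_span(record))  # Paris
```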
### Data Splits
| name |validation|test|
|----------|---------:|---:|
|MLQA.ar.ar| 517|5335|
|MLQA.ar.de| 207|1649|
|MLQA.ar.en| 517|5335|
|MLQA.ar.es| 161|1978|
|MLQA.ar.hi| 186|1831|
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@InProceedings{conneau2018xnli,
author = {Conneau, Alexis
and Rinott, Ruty
and Lample, Guillaume
and Williams, Adina
and Bowman, Samuel R.
and Schwenk, Holger
and Stoyanov, Veselin},
title = {XNLI: Evaluating Cross-lingual Sentence Representations},
booktitle = {Proceedings of the 2018 Conference on Empirical Methods
in Natural Language Processing},
year = {2018},
publisher = {Association for Computational Linguistics},
location = {Brussels, Belgium},
}
@article{hu2020xtreme,
author = {Junjie Hu and Sebastian Ruder and Aditya Siddhant and Graham Neubig and Orhan Firat and Melvin Johnson},
title = {XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization},
journal = {CoRR},
volume = {abs/2003.11080},
year = {2020},
archivePrefix = {arXiv},
eprint = {2003.11080}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lvwerra](https://github.com/lvwerra), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset. |
lmms-lab/DocVQA | lmms-lab | "2024-04-18T05:14:35Z" | 10,023 | 28 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2007.00398",
"region:us"
] | null | "2024-01-22T16:29:32Z" | ---
license: apache-2.0
dataset_info:
- config_name: DocVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: question_types
sequence: string
- name: image
dtype: image
- name: docId
dtype: int64
- name: ucsf_document_id
dtype: string
- name: ucsf_document_page_no
dtype: string
- name: answers
sequence: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 5659006943.631
# num_examples: 39463
- name: validation
num_bytes: 2532447207.066
num_examples: 5349
- name: test
num_bytes: 2500408525.732
num_examples: 5188
download_size: 9555791945
dataset_size: 10691862676.428999
- config_name: InfographicVQA
features:
- name: questionId
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: answer_type
sequence: string
- name: image
dtype: image
- name: image_url
dtype: string
- name: operation/reasoning
sequence: string
- name: ocr
dtype: string
- name: data_split
dtype: string
splits:
# - name: train
# num_bytes: 11559694546.32
# num_examples: 23946
- name: validation
num_bytes: 1863177404.253
num_examples: 2801
- name: test
num_bytes: 1851304047.712
num_examples: 3288
download_size: 2544892079
dataset_size: 15274175998.285
configs:
- config_name: DocVQA
data_files:
# - split: train
# path: DocVQA/train-*
- split: validation
path: DocVQA/validation-*
- split: test
path: DocVQA/test-*
- config_name: InfographicVQA
data_files:
# - split: train
# path: InfographicVQA/train-*
- split: validation
path: InfographicVQA/validation-*
- split: test
path: InfographicVQA/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co./lmms-lab)
# This Dataset
This is a formatted version of [DocVQA](https://arxiv.org/abs/2007.00398). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
```
@article{mathew2020docvqa,
  title={DocVQA: A Dataset for VQA on Document Images},
author={Mathew, Minesh and Karatzas, Dimosthenis and Manmatha, R and Jawahar, CV},
journal={arXiv preprint arXiv:2007.00398},
year={2020}
}
```
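DocVQA is conventionally scored with ANLS (Average Normalized Levenshtein Similarity): each prediction is compared against every gold answer, and the best similarity above a threshold (τ = 0.5 in the original benchmark) is kept. The sketch below illustrates the metric; it is not the `lmms-eval` implementation:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def anls(predictions, gold_answers, tau=0.5):
    """predictions: list[str]; gold_answers: list[list[str]] (one list per question)."""
    scores = []
    for pred, golds in zip(predictions, gold_answers):
        best = 0.0
        for gold in golds:
            p, g = pred.strip().lower(), gold.strip().lower()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1.0 - nl if nl < tau else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)

print(anls(["paris"], [["Paris"]]))  # 1.0
```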
|
SadilKhan/Text2CAD | SadilKhan | "2025-01-03T13:28:33Z" | 9,970 | 10 | [
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"arxiv:2409.17106",
"region:us",
"text-to-cad",
"dataset",
"annotation"
] | null | "2024-12-03T21:51:26Z" | ---
language:
- en
license: cc-by-nc-sa-4.0
pipeline_tag: text-to-3D
tags:
- text-to-cad
- dataset
- annotation
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: "### Text2CAD Community License Agreement\n\n [![License: CC BY-NC-SA\
\ 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/)\n\
\ \n This named Dataset, **Text2CAD.v1**, was developed by [Deutsches Forschungszentrum\
\ für Künstliche Intelligenz (DFKI GmbH)] (Germany).\
\ It is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike\
\ 4.0 International License.\n \n ## Attribution-NonCommercial-ShareAlike 4.0 International\
\ (CC BY-NC-SA 4.0)\n You are free to:\n - **Share**: copy and redistribute the\
\ material in any medium or format.\n - **Adapt**: remix, transform, and build upon\
\ the material.\n \n\n Under the following terms:\n - **Attribution**: You must\
\ give appropriate credit, provide a link to the license, and indicate if changes\
\ were made. You may do so in any reasonable manner, but not in any way that suggests\
\ the licensor endorses you or your use.\n - **NonCommercial**: You may not use\
\ the material for commercial purposes.\n - **ShareAlike**: If you remix, transform,\
\ or build upon the material, you must distribute your contributions under the same\
\ license as the original.\n \n \n \n To view a copy of this license, visit [https://creativecommons.org/licenses/by-nc-sa/4.0/](https://creativecommons.org/licenses/by-nc-sa/4.0/).\n\
\ \n If you use this dataset in your work, please cite the following publication:\n\
\ \n Mohammad Sadil Khan*, Sankalp Sinha*, Talha Uddin Sheikh, Didier Stricker,\
\ Sk Aziz Ali, Muhammad Zeshan Afzal. \"Text2CAD: Generating Sequential CAD Designs\
\ from Beginner-to-Expert Level Text Prompts.\" In The Thirty-eighth Annual Conference\
\ on Neural Information Processing Systems (2024).\n \n \n \n For inquiries or additional\
\ information, contact:\n \n \n \n - [DFKI GmbH Contact Information]\n \n Deutsches\
\ Forschungszentrum für Künstliche Intelligenz GmbH\n Trippstadter Str. 122\n\
\ 67663 Kaiserslautern\n Germany\n Phone: +49 631 20575 0\n link: https://www.dfki.de/web\n\
"
extra_gated_fields:
First Name: text
Last Name: text
Country: country
Affiliation: text
I want to use this dataset for:
type: select
options:
- Research
- Education
- label: Other
value: other
I agree to use this dataset for non-commercial use ONLY: checkbox
---
<div align="center">
<p style="font-size:40px;"> 🚀 Text2CAD: Generating Sequential CAD Designs from Beginner-to-Expert Level Text Prompts 🎨 </p>
[Mohammad Sadil Khan*](https://scholar.google.com/citations?user=XIDQo_IAAAAJ&hl=en&authuser=1) · [Sankalp Sinha*](https://scholar.google.com/citations?user=QYcfOjEAAAAJ&hl=en&authuser=1&oi=ao) · [Talha Uddin Sheikh](https://scholar.google.com/citations?hl=en&authuser=1&user=yW7VfAgAAAAJ) · [Didier Stricker](https://scholar.google.com/citations?hl=en&authuser=1&user=ImhXfxgAAAAJ) · [Sk Aziz Ali](https://scholar.google.com/citations?hl=en&authuser=1&user=zywjMeMAAAAJ) · [Muhammad Zeshan Afzal](https://scholar.google.com/citations?user=kHMVj6oAAAAJ&hl=en&authuser=1&oi=ao)
_*equal contributions_
<div style="display: flex; justify-content: center; gap: 10px;">
<a href="https://arxiv.org/abs/2409.17106">
<img src="https://img.shields.io/badge/Arxiv-3498db?style=for-the-badge&logoWidth=40&logoColor=white" alt="Paper" />
</a>
<a href="https://sadilkhan.github.io/text2cad-project/">
<img src="https://img.shields.io/badge/Project-2ecc71?style=for-the-badge&logoWidth=40&logoColor=white" alt="Project" />
</a>
<a href="https://github.com/SadilKhan/Text2CAD">
<img src="https://img.shields.io/badge/Code-89AAE6?style=for-the-badge&logoWidth=40&logoColor=white" alt="Code" />
</a>
</div>
<p style="font-size:35px;"> NeurIPS 2024 </p>
</div>
<div align="left">
<p style="font-size:30px;"> 📊 Dataset Versions </p>
</div>
We are releasing the following versions of the Text2CAD dataset.
<table><thead>
<tr>
<th>Dataset</th>
<th>🤖 VLM</th>
<th>🤖 LLM</th>
<th>📝 Remarks</th>
</tr></thead>
<tbody>
<tr>
<td><b>Text2CAD 1.0</b></td>
<td><code>LLaVA-NeXT</code></td>
<td><code>Mistral-7x8B-Instruct</code></td>
<td>The original dataset used in NeurIPS Paper</td>
</tr>
<tr>
<td><b>Text2CAD 1.1</b></td>
<td><code>Qwen2-VL 14B</code></td>
<td><code>Qwen2.5-72B-Instruct (8bit)</code></td>
<td> Updated dataset with latest VLM and LLM.</td>
</tr>
</tbody>
</table>
<div align="left">
<p style="font-size:30px;"> ✅ Tasks </p>
</div>
- [x] Extra Data (Minimal Json, CAD Sequence, RGB, Depth) Release
- [x] Code Release
- [x] Text2CAD v1.1 Release
- [x] Text2CAD v1.0 Release
<div align="left">
<p style="font-size:30px;"> 🗂️ Folder Description </p>
</div>
_Warning: the full dataset requires 555 GB of storage._
`cad_seq.zip` `(579MB)` - CAD Vector Representation
<details><summary> Text2CAD v1.1</summary>
<p>
- 📌 `text2cad_v1.1.csv` (`1.3GB`)
- `uid`: Identifier (Same as DeepCAD).
- `abstract`: Abstract Level Annotation (Level 0).
- `beginner`: Beginner Level Annotation (Level 1).
- `intermediate`: Intermediate Level Annotation (Level 2).
- `expert`: Expert Level Annotation (Level 3).
- `description` (_New_): Description of the CAD Model generated by VLM.
- `keywords` (_New_): List of Keywords generated by VLM.
  - `all_level_data`: All level annotations generated by the LLM before splitting.
- `nli_data`: The intermediate NLI response.
- 📌 `misc/`
  - `minimal_json/` (`246MB`): Simplified JSON generated from the DeepCAD original JSON; random UIDs and redundant information are removed. _This json can be used by an LLM to generate more captions_.
  - `rgb_images/` (`397GB`): RGB images of the final CAD model as well as the intermediate ones, rendered with Blender. Each model has 10 images (8 side views along a circular trajectory, plus bottom and top) with their respective camera parameters. (_Note: the camera matrices follow the Blender coordinate system_. Use [this](https://github.com/facebookresearch/pytorch3d/issues/1105#issuecomment-1784095670) to convert them to the PyTorch3D coordinate system.)
- 📌 `train_test_val.json`: Train, test, and validation UIDs. It contains filtered training samples with `5%` cuboid and cylindrical models (the original json has ~`25%` cuboid and ~`8%` cylindrical samples in the training set).
</p>
</details>
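The annotation columns above can be consumed with plain `csv` streaming; a hedged sketch, where the rows are invented placeholders (the real `text2cad_v1.1.csv` is ~1.3 GB, so stream it rather than loading it whole):

```python
import csv
import io

# Illustrative rows mirroring the columns described above; the uid format
# and prompt text are placeholders, not real Text2CAD annotations.
sample = io.StringIO(
    "uid,abstract,beginner,intermediate,expert\n"
    '0000/00000007,"A simple ring.","Create a ring...",'
    '"Sketch a circle...","Draw two concentric circles..."\n'
)

def prompts_at_level(fh, level):
    """Yield (uid, prompt) pairs for one annotation level, streaming row by row."""
    for row in csv.DictReader(fh):
        yield row["uid"], row[level]

for uid, prompt in prompts_at_level(sample, "beginner"):
    print(uid, "->", prompt)  # 0000/00000007 -> Create a ring...
```

The same iterator works unchanged on a file handle opened over the full CSV, which keeps memory flat regardless of file size.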
<br>
<details><summary> Text2CAD v1.0</summary>
<p>
- 📌 `text2cad_v1.0.csv` `(1.1GB)`
- `uid`: Identifier (Same as DeepCAD).
- `abstract`: Abstract Level Annotation (Level 0).
- `beginner`: Beginner Level Annotation (Level 1).
- `intermediate`: Intermediate Level Annotation (Level 2).
- `expert`: Expert Level Annotation (Level 3).
  - `all_level_data`: All level annotations generated by the LLM before splitting.
- `nli_data`: The intermediate NLI response.
- 📌 `misc/`
  - `minimal_json/` (`110MB`): Simplified JSON generated from the DeepCAD original JSON; random UIDs and redundant information are removed. _This json is used in our NeurIPS paper_.
  - `rgb_images/` (`48GB`): 9 random multi-view RGB images of the CAD model and its intermediate steps, rendered with `Open3D`.
  - `depth_images/` (`37GB`): Depth maps corresponding to the RGB images, rendered with `Open3D`.
- 📌 `train_test_val.json`: Original train, test, and validation split json.
- 📌 `Text2CAD_1.0.pth`: Text2CAD Transformer model weights (Used in our paper).
</p>
</details>
<div align="left">
<p style="font-size:30px;"> 📜 Citation</p>
</div>
If you use this dataset in your work, please cite the following publications.
```
@inproceedings{khan2024textcad,
title={Text2CAD: Generating Sequential CAD Designs from Beginner-to-Expert Level Text Prompts},
author={Mohammad Sadil Khan and Sankalp Sinha and Sheikh Talha Uddin and Didier Stricker and Sk Aziz Ali and Muhammad Zeshan Afzal},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=5k9XeHIK3L}
}
@InProceedings{Khan_2024_CVPR,
author = {Khan, Mohammad Sadil and Dupont, Elona and Ali, Sk Aziz and Cherenkova, Kseniya and Kacem, Anis and Aouada, Djamila},
title = {CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2024},
pages = {4713-4722}
}
```
|
bigcode/the-stack-v2 | bigcode | "2024-04-23T15:52:32Z" | 9,949 | 312 | [
"task_categories:text-generation",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:multilingual",
"language:code",
"license:other",
"size_categories:1B<n<10B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2402.19173",
"arxiv:2107.03374",
"arxiv:2207.14157",
"region:us"
] | [
"text-generation"
] | "2024-02-26T04:26:48Z" | ---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- other
multilinguality:
- multilingual
pretty_name: The-Stack-v2
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
extra_gated_prompt: |-
## Terms of Use for The Stack v2
The Stack v2 dataset is a collection of source code in over 600 programming languages. We ask that you read and acknowledge the following points before using the dataset:
  1. Downloading the dataset in bulk requires an agreement with SoftwareHeritage and INRIA. Contact [[email protected]](mailto:[email protected]?subject=TheStackV2%20request%20for%20dataset%20access%20information) for more information.
2. If you are using the dataset to train models you must adhere to the SoftwareHeritage [principles for language model training](https://www.softwareheritage.org/2023/10/19/swh-statement-on-llm-for-code/).
3. The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
4. The Stack v2 is regularly updated to enact validated data removal requests. By clicking on "Access repository", you agree to update your own version of The Stack v2 to the most recent usable version.
By clicking on "Access repository" below, you accept that your contact information (email address and username) can be shared with the dataset maintainers as well.
extra_gated_fields:
Email: text
I have read the License and agree with its terms: checkbox
dataset_info:
features:
- name: blob_id
dtype: string
- name: directory_id
dtype: string
- name: path
dtype: string
- name: content_id
dtype: string
- name: detected_licenses
sequence: string
- name: license_type
dtype: string
- name: repo_name
dtype: string
- name: snapshot_id
dtype: string
- name: revision_id
dtype: string
- name: branch_name
dtype: string
- name: visit_date
dtype: timestamp[ns]
- name: revision_date
dtype: timestamp[ns]
- name: committer_date
dtype: timestamp[ns]
- name: github_id
dtype: int64
- name: star_events_count
dtype: int64
- name: fork_events_count
dtype: int64
- name: gha_license_id
dtype: string
- name: gha_event_created_at
dtype: timestamp[ns]
- name: gha_created_at
dtype: timestamp[ns]
- name: gha_language
dtype: string
- name: src_encoding
dtype: string
- name: language
dtype: string
- name: is_vendor
dtype: bool
- name: is_generated
dtype: bool
- name: length_bytes
dtype: int64
- name: extension
dtype: string
configs:
- config_name: default
default: true
data_files:
- split: train
path: "data/*/*.parquet"
- config_name: "1C_Enterprise"
data_files:
- split: train
path: "data/1C_Enterprise/*.parquet"
- config_name: "2-Dimensional_Array"
data_files:
- split: train
path: "data/2-Dimensional_Array/*.parquet"
- config_name: "4D"
data_files:
- split: train
path: "data/4D/*.parquet"
- config_name: "ABAP"
data_files:
- split: train
path: "data/ABAP/*.parquet"
- config_name: "ABAP_CDS"
data_files:
- split: train
path: "data/ABAP_CDS/*.parquet"
- config_name: "ABNF"
data_files:
- split: train
path: "data/ABNF/*.parquet"
- config_name: "AGS_Script"
data_files:
- split: train
path: "data/AGS_Script/*.parquet"
- config_name: "AIDL"
data_files:
- split: train
path: "data/AIDL/*.parquet"
- config_name: "AL"
data_files:
- split: train
path: "data/AL/*.parquet"
- config_name: "AMPL"
data_files:
- split: train
path: "data/AMPL/*.parquet"
- config_name: "ANTLR"
data_files:
- split: train
path: "data/ANTLR/*.parquet"
- config_name: "API_Blueprint"
data_files:
- split: train
path: "data/API_Blueprint/*.parquet"
- config_name: "APL"
data_files:
- split: train
path: "data/APL/*.parquet"
- config_name: "ASL"
data_files:
- split: train
path: "data/ASL/*.parquet"
- config_name: "ASN.1"
data_files:
- split: train
path: "data/ASN.1/*.parquet"
- config_name: "ASP.NET"
data_files:
- split: train
path: "data/ASP.NET/*.parquet"
- config_name: "ATS"
data_files:
- split: train
path: "data/ATS/*.parquet"
- config_name: "ActionScript"
data_files:
- split: train
path: "data/ActionScript/*.parquet"
- config_name: "Ada"
data_files:
- split: train
path: "data/Ada/*.parquet"
- config_name: "Adobe_Font_Metrics"
data_files:
- split: train
path: "data/Adobe_Font_Metrics/*.parquet"
- config_name: "Agda"
data_files:
- split: train
path: "data/Agda/*.parquet"
- config_name: "Alloy"
data_files:
- split: train
path: "data/Alloy/*.parquet"
- config_name: "Alpine_Abuild"
data_files:
- split: train
path: "data/Alpine_Abuild/*.parquet"
- config_name: "Altium_Designer"
data_files:
- split: train
path: "data/Altium_Designer/*.parquet"
- config_name: "AngelScript"
data_files:
- split: train
path: "data/AngelScript/*.parquet"
- config_name: "Ant_Build_System"
data_files:
- split: train
path: "data/Ant_Build_System/*.parquet"
- config_name: "Antlers"
data_files:
- split: train
path: "data/Antlers/*.parquet"
- config_name: "ApacheConf"
data_files:
- split: train
path: "data/ApacheConf/*.parquet"
- config_name: "Apex"
data_files:
- split: train
path: "data/Apex/*.parquet"
- config_name: "Apollo_Guidance_Computer"
data_files:
- split: train
path: "data/Apollo_Guidance_Computer/*.parquet"
- config_name: "AppleScript"
data_files:
- split: train
path: "data/AppleScript/*.parquet"
- config_name: "Arc"
data_files:
- split: train
path: "data/Arc/*.parquet"
- config_name: "AsciiDoc"
data_files:
- split: train
path: "data/AsciiDoc/*.parquet"
- config_name: "AspectJ"
data_files:
- split: train
path: "data/AspectJ/*.parquet"
- config_name: "Assembly"
data_files:
- split: train
path: "data/Assembly/*.parquet"
- config_name: "Astro"
data_files:
- split: train
path: "data/Astro/*.parquet"
- config_name: "Asymptote"
data_files:
- split: train
path: "data/Asymptote/*.parquet"
- config_name: "Augeas"
data_files:
- split: train
path: "data/Augeas/*.parquet"
- config_name: "AutoHotkey"
data_files:
- split: train
path: "data/AutoHotkey/*.parquet"
- config_name: "AutoIt"
data_files:
- split: train
path: "data/AutoIt/*.parquet"
- config_name: "Avro_IDL"
data_files:
- split: train
path: "data/Avro_IDL/*.parquet"
- config_name: "Awk"
data_files:
- split: train
path: "data/Awk/*.parquet"
- config_name: "BASIC"
data_files:
- split: train
path: "data/BASIC/*.parquet"
- config_name: "Ballerina"
data_files:
- split: train
path: "data/Ballerina/*.parquet"
- config_name: "Batchfile"
data_files:
- split: train
path: "data/Batchfile/*.parquet"
- config_name: "Beef"
data_files:
- split: train
path: "data/Beef/*.parquet"
- config_name: "Befunge"
data_files:
- split: train
path: "data/Befunge/*.parquet"
- config_name: "Berry"
data_files:
- split: train
path: "data/Berry/*.parquet"
- config_name: "BibTeX"
data_files:
- split: train
path: "data/BibTeX/*.parquet"
- config_name: "Bicep"
data_files:
- split: train
path: "data/Bicep/*.parquet"
- config_name: "Bikeshed"
data_files:
- split: train
path: "data/Bikeshed/*.parquet"
- config_name: "Bison"
data_files:
- split: train
path: "data/Bison/*.parquet"
- config_name: "BitBake"
data_files:
- split: train
path: "data/BitBake/*.parquet"
- config_name: "Blade"
data_files:
- split: train
path: "data/Blade/*.parquet"
- config_name: "BlitzBasic"
data_files:
- split: train
path: "data/BlitzBasic/*.parquet"
- config_name: "BlitzMax"
data_files:
- split: train
path: "data/BlitzMax/*.parquet"
- config_name: "Bluespec"
data_files:
- split: train
path: "data/Bluespec/*.parquet"
- config_name: "Boo"
data_files:
- split: train
path: "data/Boo/*.parquet"
- config_name: "Boogie"
data_files:
- split: train
path: "data/Boogie/*.parquet"
- config_name: "Brainfuck"
data_files:
- split: train
path: "data/Brainfuck/*.parquet"
- config_name: "BrighterScript"
data_files:
- split: train
path: "data/BrighterScript/*.parquet"
- config_name: "Brightscript"
data_files:
- split: train
path: "data/Brightscript/*.parquet"
- config_name: "Browserslist"
data_files:
- split: train
path: "data/Browserslist/*.parquet"
- config_name: "C"
data_files:
- split: train
path: "data/C/*.parquet"
- config_name: "C++"
data_files:
- split: train
path: "data/C++/*.parquet"
- config_name: "C-ObjDump"
data_files:
- split: train
path: "data/C-ObjDump/*.parquet"
- config_name: "C-Sharp"
data_files:
- split: train
path: "data/C-Sharp/*.parquet"
- config_name: "C2hs_Haskell"
data_files:
- split: train
path: "data/C2hs_Haskell/*.parquet"
- config_name: "CAP_CDS"
data_files:
- split: train
path: "data/CAP_CDS/*.parquet"
- config_name: "CIL"
data_files:
- split: train
path: "data/CIL/*.parquet"
- config_name: "CLIPS"
data_files:
- split: train
path: "data/CLIPS/*.parquet"
- config_name: "CMake"
data_files:
- split: train
path: "data/CMake/*.parquet"
- config_name: "COBOL"
data_files:
- split: train
path: "data/COBOL/*.parquet"
- config_name: "CODEOWNERS"
data_files:
- split: train
path: "data/CODEOWNERS/*.parquet"
- config_name: "COLLADA"
data_files:
- split: train
path: "data/COLLADA/*.parquet"
- config_name: "CSON"
data_files:
- split: train
path: "data/CSON/*.parquet"
- config_name: "CSS"
data_files:
- split: train
path: "data/CSS/*.parquet"
- config_name: "CSV"
data_files:
- split: train
path: "data/CSV/*.parquet"
- config_name: "CUE"
data_files:
- split: train
path: "data/CUE/*.parquet"
- config_name: "CWeb"
data_files:
- split: train
path: "data/CWeb/*.parquet"
- config_name: "Cabal_Config"
data_files:
- split: train
path: "data/Cabal_Config/*.parquet"
- config_name: "Cadence"
data_files:
- split: train
path: "data/Cadence/*.parquet"
- config_name: "Cairo"
data_files:
- split: train
path: "data/Cairo/*.parquet"
- config_name: "CameLIGO"
data_files:
- split: train
path: "data/CameLIGO/*.parquet"
- config_name: "Cap-n_Proto"
data_files:
- split: train
path: "data/Cap-n_Proto/*.parquet"
- config_name: "CartoCSS"
data_files:
- split: train
path: "data/CartoCSS/*.parquet"
- config_name: "Ceylon"
data_files:
- split: train
path: "data/Ceylon/*.parquet"
- config_name: "Chapel"
data_files:
- split: train
path: "data/Chapel/*.parquet"
- config_name: "Charity"
data_files:
- split: train
path: "data/Charity/*.parquet"
- config_name: "Checksums"
data_files:
- split: train
path: "data/Checksums/*.parquet"
- config_name: "ChucK"
data_files:
- split: train
path: "data/ChucK/*.parquet"
- config_name: "Cirru"
data_files:
- split: train
path: "data/Cirru/*.parquet"
- config_name: "Clarion"
data_files:
- split: train
path: "data/Clarion/*.parquet"
- config_name: "Clarity"
data_files:
- split: train
path: "data/Clarity/*.parquet"
- config_name: "Classic_ASP"
data_files:
- split: train
path: "data/Classic_ASP/*.parquet"
- config_name: "Clean"
data_files:
- split: train
path: "data/Clean/*.parquet"
- config_name: "Click"
data_files:
- split: train
path: "data/Click/*.parquet"
- config_name: "Clojure"
data_files:
- split: train
path: "data/Clojure/*.parquet"
- config_name: "Closure_Templates"
data_files:
- split: train
path: "data/Closure_Templates/*.parquet"
- config_name: "Cloud_Firestore_Security_Rules"
data_files:
- split: train
path: "data/Cloud_Firestore_Security_Rules/*.parquet"
- config_name: "CoNLL-U"
data_files:
- split: train
path: "data/CoNLL-U/*.parquet"
- config_name: "CodeQL"
data_files:
- split: train
path: "data/CodeQL/*.parquet"
- config_name: "CoffeeScript"
data_files:
- split: train
path: "data/CoffeeScript/*.parquet"
- config_name: "ColdFusion"
data_files:
- split: train
path: "data/ColdFusion/*.parquet"
- config_name: "ColdFusion_CFC"
data_files:
- split: train
path: "data/ColdFusion_CFC/*.parquet"
- config_name: "Common_Lisp"
data_files:
- split: train
path: "data/Common_Lisp/*.parquet"
- config_name: "Common_Workflow_Language"
data_files:
- split: train
path: "data/Common_Workflow_Language/*.parquet"
- config_name: "Component_Pascal"
data_files:
- split: train
path: "data/Component_Pascal/*.parquet"
- config_name: "Cool"
data_files:
- split: train
path: "data/Cool/*.parquet"
- config_name: "Coq"
data_files:
- split: train
path: "data/Coq/*.parquet"
- config_name: "Creole"
data_files:
- split: train
path: "data/Creole/*.parquet"
- config_name: "Crystal"
data_files:
- split: train
path: "data/Crystal/*.parquet"
- config_name: "Csound"
data_files:
- split: train
path: "data/Csound/*.parquet"
- config_name: "Csound_Document"
data_files:
- split: train
path: "data/Csound_Document/*.parquet"
- config_name: "Csound_Score"
data_files:
- split: train
path: "data/Csound_Score/*.parquet"
- config_name: "Cuda"
data_files:
- split: train
path: "data/Cuda/*.parquet"
- config_name: "Cue_Sheet"
data_files:
- split: train
path: "data/Cue_Sheet/*.parquet"
- config_name: "Curry"
data_files:
- split: train
path: "data/Curry/*.parquet"
- config_name: "Cycript"
data_files:
- split: train
path: "data/Cycript/*.parquet"
- config_name: "Cython"
data_files:
- split: train
path: "data/Cython/*.parquet"
- config_name: "D"
data_files:
- split: train
path: "data/D/*.parquet"
- config_name: "DIGITAL_Command_Language"
data_files:
- split: train
path: "data/DIGITAL_Command_Language/*.parquet"
- config_name: "DM"
data_files:
- split: train
path: "data/DM/*.parquet"
- config_name: "DNS_Zone"
data_files:
- split: train
path: "data/DNS_Zone/*.parquet"
- config_name: "DTrace"
data_files:
- split: train
path: "data/DTrace/*.parquet"
- config_name: "Dafny"
data_files:
- split: train
path: "data/Dafny/*.parquet"
- config_name: "Darcs_Patch"
data_files:
- split: train
path: "data/Darcs_Patch/*.parquet"
- config_name: "Dart"
data_files:
- split: train
path: "data/Dart/*.parquet"
- config_name: "DataWeave"
data_files:
- split: train
path: "data/DataWeave/*.parquet"
- config_name: "Debian_Package_Control_File"
data_files:
- split: train
path: "data/Debian_Package_Control_File/*.parquet"
- config_name: "DenizenScript"
data_files:
- split: train
path: "data/DenizenScript/*.parquet"
- config_name: "Dhall"
data_files:
- split: train
path: "data/Dhall/*.parquet"
- config_name: "Diff"
data_files:
- split: train
path: "data/Diff/*.parquet"
- config_name: "DirectX_3D_File"
data_files:
- split: train
path: "data/DirectX_3D_File/*.parquet"
- config_name: "Dockerfile"
data_files:
- split: train
path: "data/Dockerfile/*.parquet"
- config_name: "Dogescript"
data_files:
- split: train
path: "data/Dogescript/*.parquet"
- config_name: "Dylan"
data_files:
- split: train
path: "data/Dylan/*.parquet"
- config_name: "E"
data_files:
- split: train
path: "data/E/*.parquet"
- config_name: "E-mail"
data_files:
- split: train
path: "data/E-mail/*.parquet"
- config_name: "EBNF"
data_files:
- split: train
path: "data/EBNF/*.parquet"
- config_name: "ECL"
data_files:
- split: train
path: "data/ECL/*.parquet"
- config_name: "ECLiPSe"
data_files:
- split: train
path: "data/ECLiPSe/*.parquet"
- config_name: "EJS"
data_files:
- split: train
path: "data/EJS/*.parquet"
- config_name: "EQ"
data_files:
- split: train
path: "data/EQ/*.parquet"
- config_name: "Eagle"
data_files:
- split: train
path: "data/Eagle/*.parquet"
- config_name: "Earthly"
data_files:
- split: train
path: "data/Earthly/*.parquet"
- config_name: "Easybuild"
data_files:
- split: train
path: "data/Easybuild/*.parquet"
- config_name: "Ecere_Projects"
data_files:
- split: train
path: "data/Ecere_Projects/*.parquet"
- config_name: "EditorConfig"
data_files:
- split: train
path: "data/EditorConfig/*.parquet"
- config_name: "Edje_Data_Collection"
data_files:
- split: train
path: "data/Edje_Data_Collection/*.parquet"
- config_name: "Eiffel"
data_files:
- split: train
path: "data/Eiffel/*.parquet"
- config_name: "Elixir"
data_files:
- split: train
path: "data/Elixir/*.parquet"
- config_name: "Elm"
data_files:
- split: train
path: "data/Elm/*.parquet"
- config_name: "Elvish"
data_files:
- split: train
path: "data/Elvish/*.parquet"
- config_name: "Emacs_Lisp"
data_files:
- split: train
path: "data/Emacs_Lisp/*.parquet"
- config_name: "EmberScript"
data_files:
- split: train
path: "data/EmberScript/*.parquet"
- config_name: "Erlang"
data_files:
- split: train
path: "data/Erlang/*.parquet"
- config_name: "Euphoria"
data_files:
- split: train
path: "data/Euphoria/*.parquet"
- config_name: "F-Sharp"
data_files:
- split: train
path: "data/F-Sharp/*.parquet"
- config_name: "F-Star"
data_files:
- split: train
path: "data/F-Star/*.parquet"
- config_name: "FIGlet_Font"
data_files:
- split: train
path: "data/FIGlet_Font/*.parquet"
- config_name: "FLUX"
data_files:
- split: train
path: "data/FLUX/*.parquet"
- config_name: "Factor"
data_files:
- split: train
path: "data/Factor/*.parquet"
- config_name: "Fancy"
data_files:
- split: train
path: "data/Fancy/*.parquet"
- config_name: "Fantom"
data_files:
- split: train
path: "data/Fantom/*.parquet"
- config_name: "Faust"
data_files:
- split: train
path: "data/Faust/*.parquet"
- config_name: "Fennel"
data_files:
- split: train
path: "data/Fennel/*.parquet"
- config_name: "Filebench_WML"
data_files:
- split: train
path: "data/Filebench_WML/*.parquet"
- config_name: "Filterscript"
data_files:
- split: train
path: "data/Filterscript/*.parquet"
- config_name: "Fluent"
data_files:
- split: train
path: "data/Fluent/*.parquet"
- config_name: "Formatted"
data_files:
- split: train
path: "data/Formatted/*.parquet"
- config_name: "Forth"
data_files:
- split: train
path: "data/Forth/*.parquet"
- config_name: "Fortran"
data_files:
- split: train
path: "data/Fortran/*.parquet"
- config_name: "Fortran_Free_Form"
data_files:
- split: train
path: "data/Fortran_Free_Form/*.parquet"
- config_name: "FreeBasic"
data_files:
- split: train
path: "data/FreeBasic/*.parquet"
- config_name: "FreeMarker"
data_files:
- split: train
path: "data/FreeMarker/*.parquet"
- config_name: "Frege"
data_files:
- split: train
path: "data/Frege/*.parquet"
- config_name: "Futhark"
data_files:
- split: train
path: "data/Futhark/*.parquet"
- config_name: "G-code"
data_files:
- split: train
path: "data/G-code/*.parquet"
- config_name: "GAML"
data_files:
- split: train
path: "data/GAML/*.parquet"
- config_name: "GAMS"
data_files:
- split: train
path: "data/GAMS/*.parquet"
- config_name: "GAP"
data_files:
- split: train
path: "data/GAP/*.parquet"
- config_name: "GCC_Machine_Description"
data_files:
- split: train
path: "data/GCC_Machine_Description/*.parquet"
- config_name: "GDB"
data_files:
- split: train
path: "data/GDB/*.parquet"
- config_name: "GDScript"
data_files:
- split: train
path: "data/GDScript/*.parquet"
- config_name: "GEDCOM"
data_files:
- split: train
path: "data/GEDCOM/*.parquet"
- config_name: "GLSL"
data_files:
- split: train
path: "data/GLSL/*.parquet"
- config_name: "GN"
data_files:
- split: train
path: "data/GN/*.parquet"
- config_name: "GSC"
data_files:
- split: train
path: "data/GSC/*.parquet"
- config_name: "Game_Maker_Language"
data_files:
- split: train
path: "data/Game_Maker_Language/*.parquet"
- config_name: "Gemfile.lock"
data_files:
- split: train
path: "data/Gemfile.lock/*.parquet"
- config_name: "Gemini"
data_files:
- split: train
path: "data/Gemini/*.parquet"
- config_name: "Genero"
data_files:
- split: train
path: "data/Genero/*.parquet"
- config_name: "Genero_Forms"
data_files:
- split: train
path: "data/Genero_Forms/*.parquet"
- config_name: "Genie"
data_files:
- split: train
path: "data/Genie/*.parquet"
- config_name: "Genshi"
data_files:
- split: train
path: "data/Genshi/*.parquet"
- config_name: "Gentoo_Ebuild"
data_files:
- split: train
path: "data/Gentoo_Ebuild/*.parquet"
- config_name: "Gentoo_Eclass"
data_files:
- split: train
path: "data/Gentoo_Eclass/*.parquet"
- config_name: "Gerber_Image"
data_files:
- split: train
path: "data/Gerber_Image/*.parquet"
- config_name: "Gettext_Catalog"
data_files:
- split: train
path: "data/Gettext_Catalog/*.parquet"
- config_name: "Gherkin"
data_files:
- split: train
path: "data/Gherkin/*.parquet"
- config_name: "Git_Attributes"
data_files:
- split: train
path: "data/Git_Attributes/*.parquet"
- config_name: "Git_Config"
data_files:
- split: train
path: "data/Git_Config/*.parquet"
- config_name: "Git_Revision_List"
data_files:
- split: train
path: "data/Git_Revision_List/*.parquet"
- config_name: "Gleam"
data_files:
- split: train
path: "data/Gleam/*.parquet"
- config_name: "Glyph"
data_files:
- split: train
path: "data/Glyph/*.parquet"
- config_name: "Glyph_Bitmap_Distribution_Format"
data_files:
- split: train
path: "data/Glyph_Bitmap_Distribution_Format/*.parquet"
- config_name: "Gnuplot"
data_files:
- split: train
path: "data/Gnuplot/*.parquet"
- config_name: "Go"
data_files:
- split: train
path: "data/Go/*.parquet"
- config_name: "Go_Checksums"
data_files:
- split: train
path: "data/Go_Checksums/*.parquet"
- config_name: "Go_Module"
data_files:
- split: train
path: "data/Go_Module/*.parquet"
- config_name: "Golo"
data_files:
- split: train
path: "data/Golo/*.parquet"
- config_name: "Gosu"
data_files:
- split: train
path: "data/Gosu/*.parquet"
- config_name: "Grace"
data_files:
- split: train
path: "data/Grace/*.parquet"
- config_name: "Gradle"
data_files:
- split: train
path: "data/Gradle/*.parquet"
- config_name: "Grammatical_Framework"
data_files:
- split: train
path: "data/Grammatical_Framework/*.parquet"
- config_name: "GraphQL"
data_files:
- split: train
path: "data/GraphQL/*.parquet"
- config_name: "Graph_Modeling_Language"
data_files:
- split: train
path: "data/Graph_Modeling_Language/*.parquet"
- config_name: "Graphviz_(DOT)"
data_files:
- split: train
path: "data/Graphviz_(DOT)/*.parquet"
- config_name: "Groovy"
data_files:
- split: train
path: "data/Groovy/*.parquet"
- config_name: "Groovy_Server_Pages"
data_files:
- split: train
path: "data/Groovy_Server_Pages/*.parquet"
- config_name: "HAProxy"
data_files:
- split: train
path: "data/HAProxy/*.parquet"
- config_name: "HCL"
data_files:
- split: train
path: "data/HCL/*.parquet"
- config_name: "HLSL"
data_files:
- split: train
path: "data/HLSL/*.parquet"
- config_name: "HOCON"
data_files:
- split: train
path: "data/HOCON/*.parquet"
- config_name: "HTML"
data_files:
- split: train
path: "data/HTML/*.parquet"
- config_name: "HTML+ECR"
data_files:
- split: train
path: "data/HTML+ECR/*.parquet"
- config_name: "HTML+EEX"
data_files:
- split: train
path: "data/HTML+EEX/*.parquet"
- config_name: "HTML+ERB"
data_files:
- split: train
path: "data/HTML+ERB/*.parquet"
- config_name: "HTML+PHP"
data_files:
- split: train
path: "data/HTML+PHP/*.parquet"
- config_name: "HTML+Razor"
data_files:
- split: train
path: "data/HTML+Razor/*.parquet"
- config_name: "HTTP"
data_files:
- split: train
path: "data/HTTP/*.parquet"
- config_name: "HXML"
data_files:
- split: train
path: "data/HXML/*.parquet"
- config_name: "Hack"
data_files:
- split: train
path: "data/Hack/*.parquet"
- config_name: "Haml"
data_files:
- split: train
path: "data/Haml/*.parquet"
- config_name: "Handlebars"
data_files:
- split: train
path: "data/Handlebars/*.parquet"
- config_name: "Harbour"
data_files:
- split: train
path: "data/Harbour/*.parquet"
- config_name: "Haskell"
data_files:
- split: train
path: "data/Haskell/*.parquet"
- config_name: "Haxe"
data_files:
- split: train
path: "data/Haxe/*.parquet"
- config_name: "HiveQL"
data_files:
- split: train
path: "data/HiveQL/*.parquet"
- config_name: "HolyC"
data_files:
- split: train
path: "data/HolyC/*.parquet"
- config_name: "Hy"
data_files:
- split: train
path: "data/Hy/*.parquet"
- config_name: "HyPhy"
data_files:
- split: train
path: "data/HyPhy/*.parquet"
- config_name: "IDL"
data_files:
- split: train
path: "data/IDL/*.parquet"
- config_name: "IGOR_Pro"
data_files:
- split: train
path: "data/IGOR_Pro/*.parquet"
- config_name: "INI"
data_files:
- split: train
path: "data/INI/*.parquet"
- config_name: "IRC_log"
data_files:
- split: train
path: "data/IRC_log/*.parquet"
- config_name: "Idris"
data_files:
- split: train
path: "data/Idris/*.parquet"
- config_name: "Ignore_List"
data_files:
- split: train
path: "data/Ignore_List/*.parquet"
- config_name: "ImageJ_Macro"
data_files:
- split: train
path: "data/ImageJ_Macro/*.parquet"
- config_name: "Inform_7"
data_files:
- split: train
path: "data/Inform_7/*.parquet"
- config_name: "Inno_Setup"
data_files:
- split: train
path: "data/Inno_Setup/*.parquet"
- config_name: "Io"
data_files:
- split: train
path: "data/Io/*.parquet"
- config_name: "Ioke"
data_files:
- split: train
path: "data/Ioke/*.parquet"
- config_name: "Isabelle"
data_files:
- split: train
path: "data/Isabelle/*.parquet"
- config_name: "Isabelle_ROOT"
data_files:
- split: train
path: "data/Isabelle_ROOT/*.parquet"
- config_name: "J"
data_files:
- split: train
path: "data/J/*.parquet"
- config_name: "JAR_Manifest"
data_files:
- split: train
path: "data/JAR_Manifest/*.parquet"
- config_name: "JFlex"
data_files:
- split: train
path: "data/JFlex/*.parquet"
- config_name: "JSON"
data_files:
- split: train
path: "data/JSON/*.parquet"
- config_name: "JSON5"
data_files:
- split: train
path: "data/JSON5/*.parquet"
- config_name: "JSONLD"
data_files:
- split: train
path: "data/JSONLD/*.parquet"
- config_name: "JSON_with_Comments"
data_files:
- split: train
path: "data/JSON_with_Comments/*.parquet"
- config_name: "JSONiq"
data_files:
- split: train
path: "data/JSONiq/*.parquet"
- config_name: "Janet"
data_files:
- split: train
path: "data/Janet/*.parquet"
- config_name: "Jasmin"
data_files:
- split: train
path: "data/Jasmin/*.parquet"
- config_name: "Java"
data_files:
- split: train
path: "data/Java/*.parquet"
- config_name: "JavaScript"
data_files:
- split: train
path: "data/JavaScript/*.parquet"
- config_name: "JavaScript+ERB"
data_files:
- split: train
path: "data/JavaScript+ERB/*.parquet"
- config_name: "Java_Properties"
data_files:
- split: train
path: "data/Java_Properties/*.parquet"
- config_name: "Java_Server_Pages"
data_files:
- split: train
path: "data/Java_Server_Pages/*.parquet"
- config_name: "Jest_Snapshot"
data_files:
- split: train
path: "data/Jest_Snapshot/*.parquet"
- config_name: "JetBrains_MPS"
data_files:
- split: train
path: "data/JetBrains_MPS/*.parquet"
- config_name: "Jinja"
data_files:
- split: train
path: "data/Jinja/*.parquet"
- config_name: "Jison"
data_files:
- split: train
path: "data/Jison/*.parquet"
- config_name: "Jison_Lex"
data_files:
- split: train
path: "data/Jison_Lex/*.parquet"
- config_name: "Jolie"
data_files:
- split: train
path: "data/Jolie/*.parquet"
- config_name: "Jsonnet"
data_files:
- split: train
path: "data/Jsonnet/*.parquet"
- config_name: "Julia"
data_files:
- split: train
path: "data/Julia/*.parquet"
- config_name: "Jupyter_Notebook"
data_files:
- split: train
path: "data/Jupyter_Notebook/*.parquet"
- config_name: "KRL"
data_files:
- split: train
path: "data/KRL/*.parquet"
- config_name: "Kaitai_Struct"
data_files:
- split: train
path: "data/Kaitai_Struct/*.parquet"
- config_name: "KakouneScript"
data_files:
- split: train
path: "data/KakouneScript/*.parquet"
- config_name: "KiCad_Layout"
data_files:
- split: train
path: "data/KiCad_Layout/*.parquet"
- config_name: "KiCad_Legacy_Layout"
data_files:
- split: train
path: "data/KiCad_Legacy_Layout/*.parquet"
- config_name: "KiCad_Schematic"
data_files:
- split: train
path: "data/KiCad_Schematic/*.parquet"
- config_name: "Kit"
data_files:
- split: train
path: "data/Kit/*.parquet"
- config_name: "Kotlin"
data_files:
- split: train
path: "data/Kotlin/*.parquet"
- config_name: "Kusto"
data_files:
- split: train
path: "data/Kusto/*.parquet"
- config_name: "LFE"
data_files:
- split: train
path: "data/LFE/*.parquet"
- config_name: "LLVM"
data_files:
- split: train
path: "data/LLVM/*.parquet"
- config_name: "LOLCODE"
data_files:
- split: train
path: "data/LOLCODE/*.parquet"
- config_name: "LSL"
data_files:
- split: train
path: "data/LSL/*.parquet"
- config_name: "LTspice_Symbol"
data_files:
- split: train
path: "data/LTspice_Symbol/*.parquet"
- config_name: "LabVIEW"
data_files:
- split: train
path: "data/LabVIEW/*.parquet"
- config_name: "Lark"
data_files:
- split: train
path: "data/Lark/*.parquet"
- config_name: "Lasso"
data_files:
- split: train
path: "data/Lasso/*.parquet"
- config_name: "Latte"
data_files:
- split: train
path: "data/Latte/*.parquet"
- config_name: "Lean"
data_files:
- split: train
path: "data/Lean/*.parquet"
- config_name: "Less"
data_files:
- split: train
path: "data/Less/*.parquet"
- config_name: "Lex"
data_files:
- split: train
path: "data/Lex/*.parquet"
- config_name: "LigoLANG"
data_files:
- split: train
path: "data/LigoLANG/*.parquet"
- config_name: "LilyPond"
data_files:
- split: train
path: "data/LilyPond/*.parquet"
- config_name: "Limbo"
data_files:
- split: train
path: "data/Limbo/*.parquet"
- config_name: "Linker_Script"
data_files:
- split: train
path: "data/Linker_Script/*.parquet"
- config_name: "Linux_Kernel_Module"
data_files:
- split: train
path: "data/Linux_Kernel_Module/*.parquet"
- config_name: "Liquid"
data_files:
- split: train
path: "data/Liquid/*.parquet"
- config_name: "Literate_Agda"
data_files:
- split: train
path: "data/Literate_Agda/*.parquet"
- config_name: "Literate_CoffeeScript"
data_files:
- split: train
path: "data/Literate_CoffeeScript/*.parquet"
- config_name: "Literate_Haskell"
data_files:
- split: train
path: "data/Literate_Haskell/*.parquet"
- config_name: "LiveScript"
data_files:
- split: train
path: "data/LiveScript/*.parquet"
- config_name: "Logos"
data_files:
- split: train
path: "data/Logos/*.parquet"
- config_name: "Logtalk"
data_files:
- split: train
path: "data/Logtalk/*.parquet"
- config_name: "LookML"
data_files:
- split: train
path: "data/LookML/*.parquet"
- config_name: "LoomScript"
data_files:
- split: train
path: "data/LoomScript/*.parquet"
- config_name: "Lua"
data_files:
- split: train
path: "data/Lua/*.parquet"
- config_name: "M"
data_files:
- split: train
path: "data/M/*.parquet"
- config_name: "M4"
data_files:
- split: train
path: "data/M4/*.parquet"
- config_name: "M4Sugar"
data_files:
- split: train
path: "data/M4Sugar/*.parquet"
- config_name: "MATLAB"
data_files:
- split: train
path: "data/MATLAB/*.parquet"
- config_name: "MAXScript"
data_files:
- split: train
path: "data/MAXScript/*.parquet"
- config_name: "MLIR"
data_files:
- split: train
path: "data/MLIR/*.parquet"
- config_name: "MQL4"
data_files:
- split: train
path: "data/MQL4/*.parquet"
- config_name: "MQL5"
data_files:
- split: train
path: "data/MQL5/*.parquet"
- config_name: "MTML"
data_files:
- split: train
path: "data/MTML/*.parquet"
- config_name: "MUF"
data_files:
- split: train
path: "data/MUF/*.parquet"
- config_name: "Macaulay2"
data_files:
- split: train
path: "data/Macaulay2/*.parquet"
- config_name: "Makefile"
data_files:
- split: train
path: "data/Makefile/*.parquet"
- config_name: "Mako"
data_files:
- split: train
path: "data/Mako/*.parquet"
- config_name: "Markdown"
data_files:
- split: train
path: "data/Markdown/*.parquet"
- config_name: "Marko"
data_files:
- split: train
path: "data/Marko/*.parquet"
- config_name: "Mask"
data_files:
- split: train
path: "data/Mask/*.parquet"
- config_name: "Mathematica"
data_files:
- split: train
path: "data/Mathematica/*.parquet"
- config_name: "Maven_POM"
data_files:
- split: train
path: "data/Maven_POM/*.parquet"
- config_name: "Max"
data_files:
- split: train
path: "data/Max/*.parquet"
- config_name: "Mercury"
data_files:
- split: train
path: "data/Mercury/*.parquet"
- config_name: "Meson"
data_files:
- split: train
path: "data/Meson/*.parquet"
- config_name: "Metal"
data_files:
- split: train
path: "data/Metal/*.parquet"
- config_name: "Microsoft_Developer_Studio_Project"
data_files:
- split: train
path: "data/Microsoft_Developer_Studio_Project/*.parquet"
- config_name: "Microsoft_Visual_Studio_Solution"
data_files:
- split: train
path: "data/Microsoft_Visual_Studio_Solution/*.parquet"
- config_name: "MiniD"
data_files:
- split: train
path: "data/MiniD/*.parquet"
- config_name: "MiniYAML"
data_files:
- split: train
path: "data/MiniYAML/*.parquet"
- config_name: "Mint"
data_files:
- split: train
path: "data/Mint/*.parquet"
- config_name: "Mirah"
data_files:
- split: train
path: "data/Mirah/*.parquet"
- config_name: "Modelica"
data_files:
- split: train
path: "data/Modelica/*.parquet"
- config_name: "Modula-2"
data_files:
- split: train
path: "data/Modula-2/*.parquet"
- config_name: "Modula-3"
data_files:
- split: train
path: "data/Modula-3/*.parquet"
- config_name: "Module_Management_System"
data_files:
- split: train
path: "data/Module_Management_System/*.parquet"
- config_name: "Monkey"
data_files:
- split: train
path: "data/Monkey/*.parquet"
- config_name: "Monkey_C"
data_files:
- split: train
path: "data/Monkey_C/*.parquet"
- config_name: "Moocode"
data_files:
- split: train
path: "data/Moocode/*.parquet"
- config_name: "MoonScript"
data_files:
- split: train
path: "data/MoonScript/*.parquet"
- config_name: "Motoko"
data_files:
- split: train
path: "data/Motoko/*.parquet"
- config_name: "Motorola_68K_Assembly"
data_files:
- split: train
path: "data/Motorola_68K_Assembly/*.parquet"
- config_name: "Move"
data_files:
- split: train
path: "data/Move/*.parquet"
- config_name: "Muse"
data_files:
- split: train
path: "data/Muse/*.parquet"
- config_name: "Mustache"
data_files:
- split: train
path: "data/Mustache/*.parquet"
- config_name: "Myghty"
data_files:
- split: train
path: "data/Myghty/*.parquet"
- config_name: "NASL"
data_files:
- split: train
path: "data/NASL/*.parquet"
- config_name: "NCL"
data_files:
- split: train
path: "data/NCL/*.parquet"
- config_name: "NEON"
data_files:
- split: train
path: "data/NEON/*.parquet"
- config_name: "NL"
data_files:
- split: train
path: "data/NL/*.parquet"
- config_name: "NPM_Config"
data_files:
- split: train
path: "data/NPM_Config/*.parquet"
- config_name: "NSIS"
data_files:
- split: train
path: "data/NSIS/*.parquet"
- config_name: "NWScript"
data_files:
- split: train
path: "data/NWScript/*.parquet"
- config_name: "Nasal"
data_files:
- split: train
path: "data/Nasal/*.parquet"
- config_name: "Nearley"
data_files:
- split: train
path: "data/Nearley/*.parquet"
- config_name: "Nemerle"
data_files:
- split: train
path: "data/Nemerle/*.parquet"
- config_name: "NetLinx"
data_files:
- split: train
path: "data/NetLinx/*.parquet"
- config_name: "NetLinx+ERB"
data_files:
- split: train
path: "data/NetLinx+ERB/*.parquet"
- config_name: "NetLogo"
data_files:
- split: train
path: "data/NetLogo/*.parquet"
- config_name: "NewLisp"
data_files:
- split: train
path: "data/NewLisp/*.parquet"
- config_name: "Nextflow"
data_files:
- split: train
path: "data/Nextflow/*.parquet"
- config_name: "Nginx"
data_files:
- split: train
path: "data/Nginx/*.parquet"
- config_name: "Nim"
data_files:
- split: train
path: "data/Nim/*.parquet"
- config_name: "Ninja"
data_files:
- split: train
path: "data/Ninja/*.parquet"
- config_name: "Nit"
data_files:
- split: train
path: "data/Nit/*.parquet"
- config_name: "Nix"
data_files:
- split: train
path: "data/Nix/*.parquet"
- config_name: "Nu"
data_files:
- split: train
path: "data/Nu/*.parquet"
- config_name: "NumPy"
data_files:
- split: train
path: "data/NumPy/*.parquet"
- config_name: "Nunjucks"
data_files:
- split: train
path: "data/Nunjucks/*.parquet"
- config_name: "OCaml"
data_files:
- split: train
path: "data/OCaml/*.parquet"
- config_name: "ObjDump"
data_files:
- split: train
path: "data/ObjDump/*.parquet"
- config_name: "ObjectScript"
data_files:
- split: train
path: "data/ObjectScript/*.parquet"
- config_name: "Object_Data_Instance_Notation"
data_files:
- split: train
path: "data/Object_Data_Instance_Notation/*.parquet"
- config_name: "Objective-C"
data_files:
- split: train
path: "data/Objective-C/*.parquet"
- config_name: "Objective-C++"
data_files:
- split: train
path: "data/Objective-C++/*.parquet"
- config_name: "Objective-J"
data_files:
- split: train
path: "data/Objective-J/*.parquet"
- config_name: "Odin"
data_files:
- split: train
path: "data/Odin/*.parquet"
- config_name: "Omgrofl"
data_files:
- split: train
path: "data/Omgrofl/*.parquet"
- config_name: "Opa"
data_files:
- split: train
path: "data/Opa/*.parquet"
- config_name: "Opal"
data_files:
- split: train
path: "data/Opal/*.parquet"
- config_name: "OpenCL"
data_files:
- split: train
path: "data/OpenCL/*.parquet"
- config_name: "OpenEdge_ABL"
data_files:
- split: train
path: "data/OpenEdge_ABL/*.parquet"
- config_name: "OpenQASM"
data_files:
- split: train
path: "data/OpenQASM/*.parquet"
- config_name: "OpenRC_runscript"
data_files:
- split: train
path: "data/OpenRC_runscript/*.parquet"
- config_name: "OpenSCAD"
data_files:
- split: train
path: "data/OpenSCAD/*.parquet"
- config_name: "OpenStep_Property_List"
data_files:
- split: train
path: "data/OpenStep_Property_List/*.parquet"
- config_name: "OpenType_Feature_File"
data_files:
- split: train
path: "data/OpenType_Feature_File/*.parquet"
- config_name: "Open_Policy_Agent"
data_files:
- split: train
path: "data/Open_Policy_Agent/*.parquet"
- config_name: "Org"
data_files:
- split: train
path: "data/Org/*.parquet"
- config_name: "Ox"
data_files:
- split: train
path: "data/Ox/*.parquet"
- config_name: "Oxygene"
data_files:
- split: train
path: "data/Oxygene/*.parquet"
- config_name: "Oz"
data_files:
- split: train
path: "data/Oz/*.parquet"
- config_name: "P4"
data_files:
- split: train
path: "data/P4/*.parquet"
- config_name: "PEG.js"
data_files:
- split: train
path: "data/PEG.js/*.parquet"
- config_name: "PHP"
data_files:
- split: train
path: "data/PHP/*.parquet"
- config_name: "PLSQL"
data_files:
- split: train
path: "data/PLSQL/*.parquet"
- config_name: "PLpgSQL"
data_files:
- split: train
path: "data/PLpgSQL/*.parquet"
- config_name: "POV-Ray_SDL"
data_files:
- split: train
path: "data/POV-Ray_SDL/*.parquet"
- config_name: "Pan"
data_files:
- split: train
path: "data/Pan/*.parquet"
- config_name: "Papyrus"
data_files:
- split: train
path: "data/Papyrus/*.parquet"
- config_name: "Parrot"
data_files:
- split: train
path: "data/Parrot/*.parquet"
- config_name: "Parrot_Assembly"
data_files:
- split: train
path: "data/Parrot_Assembly/*.parquet"
- config_name: "Parrot_Internal_Representation"
data_files:
- split: train
path: "data/Parrot_Internal_Representation/*.parquet"
- config_name: "Pascal"
data_files:
- split: train
path: "data/Pascal/*.parquet"
- config_name: "Pawn"
data_files:
- split: train
path: "data/Pawn/*.parquet"
- config_name: "Pep8"
data_files:
- split: train
path: "data/Pep8/*.parquet"
- config_name: "Perl"
data_files:
- split: train
path: "data/Perl/*.parquet"
- config_name: "Pic"
data_files:
- split: train
path: "data/Pic/*.parquet"
- config_name: "Pickle"
data_files:
- split: train
path: "data/Pickle/*.parquet"
- config_name: "PicoLisp"
data_files:
- split: train
path: "data/PicoLisp/*.parquet"
- config_name: "PigLatin"
data_files:
- split: train
path: "data/PigLatin/*.parquet"
- config_name: "Pike"
data_files:
- split: train
path: "data/Pike/*.parquet"
- config_name: "PlantUML"
data_files:
- split: train
path: "data/PlantUML/*.parquet"
- config_name: "Pod"
data_files:
- split: train
path: "data/Pod/*.parquet"
- config_name: "Pod_6"
data_files:
- split: train
path: "data/Pod_6/*.parquet"
- config_name: "PogoScript"
data_files:
- split: train
path: "data/PogoScript/*.parquet"
- config_name: "Pony"
data_files:
- split: train
path: "data/Pony/*.parquet"
- config_name: "Portugol"
data_files:
- split: train
path: "data/Portugol/*.parquet"
- config_name: "PostCSS"
data_files:
- split: train
path: "data/PostCSS/*.parquet"
- config_name: "PostScript"
data_files:
- split: train
path: "data/PostScript/*.parquet"
- config_name: "PowerBuilder"
data_files:
- split: train
path: "data/PowerBuilder/*.parquet"
- config_name: "PowerShell"
data_files:
- split: train
path: "data/PowerShell/*.parquet"
- config_name: "Prisma"
data_files:
- split: train
path: "data/Prisma/*.parquet"
- config_name: "Processing"
data_files:
- split: train
path: "data/Processing/*.parquet"
- config_name: "Procfile"
data_files:
- split: train
path: "data/Procfile/*.parquet"
- config_name: "Proguard"
data_files:
- split: train
path: "data/Proguard/*.parquet"
- config_name: "Prolog"
data_files:
- split: train
path: "data/Prolog/*.parquet"
- config_name: "Promela"
data_files:
- split: train
path: "data/Promela/*.parquet"
- config_name: "Propeller_Spin"
data_files:
- split: train
path: "data/Propeller_Spin/*.parquet"
- config_name: "Protocol_Buffer"
data_files:
- split: train
path: "data/Protocol_Buffer/*.parquet"
- config_name: "Protocol_Buffer_Text_Format"
data_files:
- split: train
path: "data/Protocol_Buffer_Text_Format/*.parquet"
- config_name: "Public_Key"
data_files:
- split: train
path: "data/Public_Key/*.parquet"
- config_name: "Pug"
data_files:
- split: train
path: "data/Pug/*.parquet"
- config_name: "Puppet"
data_files:
- split: train
path: "data/Puppet/*.parquet"
- config_name: "PureBasic"
data_files:
- split: train
path: "data/PureBasic/*.parquet"
- config_name: "PureScript"
data_files:
- split: train
path: "data/PureScript/*.parquet"
- config_name: "Pure_Data"
data_files:
- split: train
path: "data/Pure_Data/*.parquet"
- config_name: "Python"
data_files:
- split: train
path: "data/Python/*.parquet"
- config_name: "Python_traceback"
data_files:
- split: train
path: "data/Python_traceback/*.parquet"
- config_name: "Q-Sharp"
data_files:
- split: train
path: "data/Q-Sharp/*.parquet"
- config_name: "QML"
data_files:
- split: train
path: "data/QML/*.parquet"
- config_name: "QMake"
data_files:
- split: train
path: "data/QMake/*.parquet"
- config_name: "Qt_Script"
data_files:
- split: train
path: "data/Qt_Script/*.parquet"
- config_name: "Quake"
data_files:
- split: train
path: "data/Quake/*.parquet"
- config_name: "R"
data_files:
- split: train
path: "data/R/*.parquet"
- config_name: "RAML"
data_files:
- split: train
path: "data/RAML/*.parquet"
- config_name: "RDoc"
data_files:
- split: train
path: "data/RDoc/*.parquet"
- config_name: "REALbasic"
data_files:
- split: train
path: "data/REALbasic/*.parquet"
- config_name: "REXX"
data_files:
- split: train
path: "data/REXX/*.parquet"
- config_name: "RMarkdown"
data_files:
- split: train
path: "data/RMarkdown/*.parquet"
- config_name: "RPC"
data_files:
- split: train
path: "data/RPC/*.parquet"
- config_name: "RPGLE"
data_files:
- split: train
path: "data/RPGLE/*.parquet"
- config_name: "RPM_Spec"
data_files:
- split: train
path: "data/RPM_Spec/*.parquet"
- config_name: "RUNOFF"
data_files:
- split: train
path: "data/RUNOFF/*.parquet"
- config_name: "Racket"
data_files:
- split: train
path: "data/Racket/*.parquet"
- config_name: "Ragel"
data_files:
- split: train
path: "data/Ragel/*.parquet"
- config_name: "Raku"
data_files:
- split: train
path: "data/Raku/*.parquet"
- config_name: "Rascal"
data_files:
- split: train
path: "data/Rascal/*.parquet"
- config_name: "Raw_token_data"
data_files:
- split: train
path: "data/Raw_token_data/*.parquet"
- config_name: "ReScript"
data_files:
- split: train
path: "data/ReScript/*.parquet"
- config_name: "Readline_Config"
data_files:
- split: train
path: "data/Readline_Config/*.parquet"
- config_name: "Reason"
data_files:
- split: train
path: "data/Reason/*.parquet"
- config_name: "ReasonLIGO"
data_files:
- split: train
path: "data/ReasonLIGO/*.parquet"
- config_name: "Rebol"
data_files:
- split: train
path: "data/Rebol/*.parquet"
- config_name: "Record_Jar"
data_files:
- split: train
path: "data/Record_Jar/*.parquet"
- config_name: "Red"
data_files:
- split: train
path: "data/Red/*.parquet"
- config_name: "Redcode"
data_files:
- split: train
path: "data/Redcode/*.parquet"
- config_name: "Redirect_Rules"
data_files:
- split: train
path: "data/Redirect_Rules/*.parquet"
- config_name: "Regular_Expression"
data_files:
- split: train
path: "data/Regular_Expression/*.parquet"
- config_name: "Ren-Py"
data_files:
- split: train
path: "data/Ren-Py/*.parquet"
- config_name: "RenderScript"
data_files:
- split: train
path: "data/RenderScript/*.parquet"
- config_name: "Rich_Text_Format"
data_files:
- split: train
path: "data/Rich_Text_Format/*.parquet"
- config_name: "Ring"
data_files:
- split: train
path: "data/Ring/*.parquet"
- config_name: "Riot"
data_files:
- split: train
path: "data/Riot/*.parquet"
- config_name: "RobotFramework"
data_files:
- split: train
path: "data/RobotFramework/*.parquet"
- config_name: "Roff"
data_files:
- split: train
path: "data/Roff/*.parquet"
- config_name: "Roff_Manpage"
data_files:
- split: train
path: "data/Roff_Manpage/*.parquet"
- config_name: "Rouge"
data_files:
- split: train
path: "data/Rouge/*.parquet"
- config_name: "RouterOS_Script"
data_files:
- split: train
path: "data/RouterOS_Script/*.parquet"
- config_name: "Ruby"
data_files:
- split: train
path: "data/Ruby/*.parquet"
- config_name: "Rust"
data_files:
- split: train
path: "data/Rust/*.parquet"
- config_name: "SAS"
data_files:
- split: train
path: "data/SAS/*.parquet"
- config_name: "SCSS"
data_files:
- split: train
path: "data/SCSS/*.parquet"
- config_name: "SELinux_Policy"
data_files:
- split: train
path: "data/SELinux_Policy/*.parquet"
- config_name: "SMT"
data_files:
- split: train
path: "data/SMT/*.parquet"
- config_name: "SPARQL"
data_files:
- split: train
path: "data/SPARQL/*.parquet"
- config_name: "SQF"
data_files:
- split: train
path: "data/SQF/*.parquet"
- config_name: "SQL"
data_files:
- split: train
path: "data/SQL/*.parquet"
- config_name: "SQLPL"
data_files:
- split: train
path: "data/SQLPL/*.parquet"
- config_name: "SRecode_Template"
data_files:
- split: train
path: "data/SRecode_Template/*.parquet"
- config_name: "SSH_Config"
data_files:
- split: train
path: "data/SSH_Config/*.parquet"
- config_name: "STAR"
data_files:
- split: train
path: "data/STAR/*.parquet"
- config_name: "STL"
data_files:
- split: train
path: "data/STL/*.parquet"
- config_name: "STON"
data_files:
- split: train
path: "data/STON/*.parquet"
- config_name: "SVG"
data_files:
- split: train
path: "data/SVG/*.parquet"
- config_name: "SWIG"
data_files:
- split: train
path: "data/SWIG/*.parquet"
- config_name: "Sage"
data_files:
- split: train
path: "data/Sage/*.parquet"
- config_name: "SaltStack"
data_files:
- split: train
path: "data/SaltStack/*.parquet"
- config_name: "Sass"
data_files:
- split: train
path: "data/Sass/*.parquet"
- config_name: "Scala"
data_files:
- split: train
path: "data/Scala/*.parquet"
- config_name: "Scaml"
data_files:
- split: train
path: "data/Scaml/*.parquet"
- config_name: "Scheme"
data_files:
- split: train
path: "data/Scheme/*.parquet"
- config_name: "Scilab"
data_files:
- split: train
path: "data/Scilab/*.parquet"
- config_name: "Self"
data_files:
- split: train
path: "data/Self/*.parquet"
- config_name: "ShaderLab"
data_files:
- split: train
path: "data/ShaderLab/*.parquet"
- config_name: "Shell"
data_files:
- split: train
path: "data/Shell/*.parquet"
- config_name: "ShellCheck_Config"
data_files:
- split: train
path: "data/ShellCheck_Config/*.parquet"
- config_name: "ShellSession"
data_files:
- split: train
path: "data/ShellSession/*.parquet"
- config_name: "Shen"
data_files:
- split: train
path: "data/Shen/*.parquet"
- config_name: "Sieve"
data_files:
- split: train
path: "data/Sieve/*.parquet"
- config_name: "Singularity"
data_files:
- split: train
path: "data/Singularity/*.parquet"
- config_name: "Slash"
data_files:
- split: train
path: "data/Slash/*.parquet"
- config_name: "Slice"
data_files:
- split: train
path: "data/Slice/*.parquet"
- config_name: "Slim"
data_files:
- split: train
path: "data/Slim/*.parquet"
- config_name: "SmPL"
data_files:
- split: train
path: "data/SmPL/*.parquet"
- config_name: "Smali"
data_files:
- split: train
path: "data/Smali/*.parquet"
- config_name: "Smalltalk"
data_files:
- split: train
path: "data/Smalltalk/*.parquet"
- config_name: "Smarty"
data_files:
- split: train
path: "data/Smarty/*.parquet"
- config_name: "Solidity"
data_files:
- split: train
path: "data/Solidity/*.parquet"
- config_name: "Soong"
data_files:
- split: train
path: "data/Soong/*.parquet"
- config_name: "SourcePawn"
data_files:
- split: train
path: "data/SourcePawn/*.parquet"
- config_name: "Spline_Font_Database"
data_files:
- split: train
path: "data/Spline_Font_Database/*.parquet"
- config_name: "Squirrel"
data_files:
- split: train
path: "data/Squirrel/*.parquet"
- config_name: "Stan"
data_files:
- split: train
path: "data/Stan/*.parquet"
- config_name: "Standard_ML"
data_files:
- split: train
path: "data/Standard_ML/*.parquet"
- config_name: "Starlark"
data_files:
- split: train
path: "data/Starlark/*.parquet"
- config_name: "Stata"
data_files:
- split: train
path: "data/Stata/*.parquet"
- config_name: "StringTemplate"
data_files:
- split: train
path: "data/StringTemplate/*.parquet"
- config_name: "Stylus"
data_files:
- split: train
path: "data/Stylus/*.parquet"
- config_name: "SubRip_Text"
data_files:
- split: train
path: "data/SubRip_Text/*.parquet"
- config_name: "SugarSS"
data_files:
- split: train
path: "data/SugarSS/*.parquet"
- config_name: "SuperCollider"
data_files:
- split: train
path: "data/SuperCollider/*.parquet"
- config_name: "Svelte"
data_files:
- split: train
path: "data/Svelte/*.parquet"
- config_name: "Swift"
data_files:
- split: train
path: "data/Swift/*.parquet"
- config_name: "SystemVerilog"
data_files:
- split: train
path: "data/SystemVerilog/*.parquet"
- config_name: "TI_Program"
data_files:
- split: train
path: "data/TI_Program/*.parquet"
- config_name: "TLA"
data_files:
- split: train
path: "data/TLA/*.parquet"
- config_name: "TOML"
data_files:
- split: train
path: "data/TOML/*.parquet"
- config_name: "TSQL"
data_files:
- split: train
path: "data/TSQL/*.parquet"
- config_name: "TSV"
data_files:
- split: train
path: "data/TSV/*.parquet"
- config_name: "TSX"
data_files:
- split: train
path: "data/TSX/*.parquet"
- config_name: "TXL"
data_files:
- split: train
path: "data/TXL/*.parquet"
- config_name: "Talon"
data_files:
- split: train
path: "data/Talon/*.parquet"
- config_name: "Tcl"
data_files:
- split: train
path: "data/Tcl/*.parquet"
- config_name: "Tcsh"
data_files:
- split: train
path: "data/Tcsh/*.parquet"
- config_name: "TeX"
data_files:
- split: train
path: "data/TeX/*.parquet"
- config_name: "Tea"
data_files:
- split: train
path: "data/Tea/*.parquet"
- config_name: "Terra"
data_files:
- split: train
path: "data/Terra/*.parquet"
- config_name: "Texinfo"
data_files:
- split: train
path: "data/Texinfo/*.parquet"
- config_name: "Text"
data_files:
- split: train
path: "data/Text/*.parquet"
- config_name: "TextMate_Properties"
data_files:
- split: train
path: "data/TextMate_Properties/*.parquet"
- config_name: "Textile"
data_files:
- split: train
path: "data/Textile/*.parquet"
- config_name: "Thrift"
data_files:
- split: train
path: "data/Thrift/*.parquet"
- config_name: "Turing"
data_files:
- split: train
path: "data/Turing/*.parquet"
- config_name: "Turtle"
data_files:
- split: train
path: "data/Turtle/*.parquet"
- config_name: "Twig"
data_files:
- split: train
path: "data/Twig/*.parquet"
- config_name: "TypeScript"
data_files:
- split: train
path: "data/TypeScript/*.parquet"
- config_name: "Type_Language"
data_files:
- split: train
path: "data/Type_Language/*.parquet"
- config_name: "Unified_Parallel_C"
data_files:
- split: train
path: "data/Unified_Parallel_C/*.parquet"
- config_name: "Unity3D_Asset"
data_files:
- split: train
path: "data/Unity3D_Asset/*.parquet"
- config_name: "Unix_Assembly"
data_files:
- split: train
path: "data/Unix_Assembly/*.parquet"
- config_name: "Uno"
data_files:
- split: train
path: "data/Uno/*.parquet"
- config_name: "UnrealScript"
data_files:
- split: train
path: "data/UnrealScript/*.parquet"
- config_name: "UrWeb"
data_files:
- split: train
path: "data/UrWeb/*.parquet"
- config_name: "V"
data_files:
- split: train
path: "data/V/*.parquet"
- config_name: "VBA"
data_files:
- split: train
path: "data/VBA/*.parquet"
- config_name: "VBScript"
data_files:
- split: train
path: "data/VBScript/*.parquet"
- config_name: "VCL"
data_files:
- split: train
path: "data/VCL/*.parquet"
- config_name: "VHDL"
data_files:
- split: train
path: "data/VHDL/*.parquet"
- config_name: "Vala"
data_files:
- split: train
path: "data/Vala/*.parquet"
- config_name: "Valve_Data_Format"
data_files:
- split: train
path: "data/Valve_Data_Format/*.parquet"
- config_name: "Velocity_Template_Language"
data_files:
- split: train
path: "data/Velocity_Template_Language/*.parquet"
- config_name: "Verilog"
data_files:
- split: train
path: "data/Verilog/*.parquet"
- config_name: "Vim_Help_File"
data_files:
- split: train
path: "data/Vim_Help_File/*.parquet"
- config_name: "Vim_Script"
data_files:
- split: train
path: "data/Vim_Script/*.parquet"
- config_name: "Vim_Snippet"
data_files:
- split: train
path: "data/Vim_Snippet/*.parquet"
- config_name: "Visual_Basic_.NET"
data_files:
- split: train
path: "data/Visual_Basic_.NET/*.parquet"
- config_name: "Volt"
data_files:
- split: train
path: "data/Volt/*.parquet"
- config_name: "Vue"
data_files:
- split: train
path: "data/Vue/*.parquet"
- config_name: "Vyper"
data_files:
- split: train
path: "data/Vyper/*.parquet"
- config_name: "Wavefront_Material"
data_files:
- split: train
path: "data/Wavefront_Material/*.parquet"
- config_name: "Wavefront_Object"
data_files:
- split: train
path: "data/Wavefront_Object/*.parquet"
- config_name: "WebAssembly"
data_files:
- split: train
path: "data/WebAssembly/*.parquet"
- config_name: "WebIDL"
data_files:
- split: train
path: "data/WebIDL/*.parquet"
- config_name: "WebVTT"
data_files:
- split: train
path: "data/WebVTT/*.parquet"
- config_name: "Web_Ontology_Language"
data_files:
- split: train
path: "data/Web_Ontology_Language/*.parquet"
- config_name: "Wget_Config"
data_files:
- split: train
path: "data/Wget_Config/*.parquet"
- config_name: "Whiley"
data_files:
- split: train
path: "data/Whiley/*.parquet"
- config_name: "Wikitext"
data_files:
- split: train
path: "data/Wikitext/*.parquet"
- config_name: "Win32_Message_File"
data_files:
- split: train
path: "data/Win32_Message_File/*.parquet"
- config_name: "Windows_Registry_Entries"
data_files:
- split: train
path: "data/Windows_Registry_Entries/*.parquet"
- config_name: "Witcher_Script"
data_files:
- split: train
path: "data/Witcher_Script/*.parquet"
- config_name: "Wollok"
data_files:
- split: train
path: "data/Wollok/*.parquet"
- config_name: "World_of_Warcraft_Addon_Data"
data_files:
- split: train
path: "data/World_of_Warcraft_Addon_Data/*.parquet"
- config_name: "Wren"
data_files:
- split: train
path: "data/Wren/*.parquet"
- config_name: "X10"
data_files:
- split: train
path: "data/X10/*.parquet"
- config_name: "XC"
data_files:
- split: train
path: "data/XC/*.parquet"
- config_name: "XCompose"
data_files:
- split: train
path: "data/XCompose/*.parquet"
- config_name: "XML"
data_files:
- split: train
path: "data/XML/*.parquet"
- config_name: "XML_Property_List"
data_files:
- split: train
path: "data/XML_Property_List/*.parquet"
- config_name: "XPages"
data_files:
- split: train
path: "data/XPages/*.parquet"
- config_name: "XProc"
data_files:
- split: train
path: "data/XProc/*.parquet"
- config_name: "XQuery"
data_files:
- split: train
path: "data/XQuery/*.parquet"
- config_name: "XS"
data_files:
- split: train
path: "data/XS/*.parquet"
- config_name: "XSLT"
data_files:
- split: train
path: "data/XSLT/*.parquet"
- config_name: "X_BitMap"
data_files:
- split: train
path: "data/X_BitMap/*.parquet"
- config_name: "X_Font_Directory_Index"
data_files:
- split: train
path: "data/X_Font_Directory_Index/*.parquet"
- config_name: "X_PixMap"
data_files:
- split: train
path: "data/X_PixMap/*.parquet"
- config_name: "Xojo"
data_files:
- split: train
path: "data/Xojo/*.parquet"
- config_name: "Xonsh"
data_files:
- split: train
path: "data/Xonsh/*.parquet"
- config_name: "Xtend"
data_files:
- split: train
path: "data/Xtend/*.parquet"
- config_name: "YAML"
data_files:
- split: train
path: "data/YAML/*.parquet"
- config_name: "YANG"
data_files:
- split: train
path: "data/YANG/*.parquet"
- config_name: "YARA"
data_files:
- split: train
path: "data/YARA/*.parquet"
- config_name: "YASnippet"
data_files:
- split: train
path: "data/YASnippet/*.parquet"
- config_name: "Yacc"
data_files:
- split: train
path: "data/Yacc/*.parquet"
- config_name: "Yul"
data_files:
- split: train
path: "data/Yul/*.parquet"
- config_name: "ZAP"
data_files:
- split: train
path: "data/ZAP/*.parquet"
- config_name: "ZIL"
data_files:
- split: train
path: "data/ZIL/*.parquet"
- config_name: "Zeek"
data_files:
- split: train
path: "data/Zeek/*.parquet"
- config_name: "ZenScript"
data_files:
- split: train
path: "data/ZenScript/*.parquet"
- config_name: "Zephir"
data_files:
- split: train
path: "data/Zephir/*.parquet"
- config_name: "Zig"
data_files:
- split: train
path: "data/Zig/*.parquet"
- config_name: "Zimpl"
data_files:
- split: train
path: "data/Zimpl/*.parquet"
- config_name: "cURL_Config"
data_files:
- split: train
path: "data/cURL_Config/*.parquet"
- config_name: "desktop"
data_files:
- split: train
path: "data/desktop/*.parquet"
- config_name: "dircolors"
data_files:
- split: train
path: "data/dircolors/*.parquet"
- config_name: "eC"
data_files:
- split: train
path: "data/eC/*.parquet"
- config_name: "edn"
data_files:
- split: train
path: "data/edn/*.parquet"
- config_name: "fish"
data_files:
- split: train
path: "data/fish/*.parquet"
- config_name: "hoon"
data_files:
- split: train
path: "data/hoon/*.parquet"
- config_name: "jq"
data_files:
- split: train
path: "data/jq/*.parquet"
- config_name: "kvlang"
data_files:
- split: train
path: "data/kvlang/*.parquet"
- config_name: "mIRC_Script"
data_files:
- split: train
path: "data/mIRC_Script/*.parquet"
- config_name: "mcfunction"
data_files:
- split: train
path: "data/mcfunction/*.parquet"
- config_name: "mupad"
data_files:
- split: train
path: "data/mupad/*.parquet"
- config_name: "nanorc"
data_files:
- split: train
path: "data/nanorc/*.parquet"
- config_name: "nesC"
data_files:
- split: train
path: "data/nesC/*.parquet"
- config_name: "ooc"
data_files:
- split: train
path: "data/ooc/*.parquet"
- config_name: "q"
data_files:
- split: train
path: "data/q/*.parquet"
- config_name: "reStructuredText"
data_files:
- split: train
path: "data/reStructuredText/*.parquet"
- config_name: "robots.txt"
data_files:
- split: train
path: "data/robots.txt/*.parquet"
- config_name: "sed"
data_files:
- split: train
path: "data/sed/*.parquet"
- config_name: "wdl"
data_files:
- split: train
path: "data/wdl/*.parquet"
- config_name: "wisp"
data_files:
- split: train
path: "data/wisp/*.parquet"
- config_name: "xBase"
data_files:
- split: train
path: "data/xBase/*.parquet"
---
# The Stack v2
<center>
<img src="https://huggingface.co./datasets/bigcode/admin_private/resolve/main/thestackv2_banner.png" alt="Stackv2" width="900" height="600">
</center>
## Dataset Description
- **Homepage:** https://www.bigcode-project.org/
- **Repository:** https://github.com/bigcode-project
- **Paper:** [Link](https://huggingface.co./papers/2402.19173)
- **Point of Contact:** [email protected]
The dataset consists of 4 versions:
- [`bigcode/the-stack-v2`](https://huggingface.co./datasets/bigcode/the-stack-v2): the full "The Stack v2" dataset **<-- you are here**
- [`bigcode/the-stack-v2-dedup`](https://huggingface.co./datasets/bigcode/the-stack-v2-dedup): based on the `bigcode/the-stack-v2` but further near-deduplicated
- [`bigcode/the-stack-v2-train-full-ids`](https://huggingface.co./datasets/bigcode/the-stack-v2-train-full-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 600+ programming languages. The data is grouped into repositories.
- [`bigcode/the-stack-v2-train-smol-ids`](https://huggingface.co./datasets/bigcode/the-stack-v2-train-smol-ids): based on the `bigcode/the-stack-v2-dedup` dataset but further filtered with heuristics and spanning 17 programming languages. The data is grouped into repositories.
**These datasets contain only the SWHIDs needed to download the code files, not the file contents themselves. See the examples below for how to download content. We are working on making the training datasets available in the coming weeks.**
The Stack v2 is significantly larger than v1:
||The Stack v1|The Stack v2|
|-|-|-|
| full | 6.4TB | 67.5TB |
| dedup | 2.9TB | 32.1TB |
| train (full) | ~200B tokens | ~900B tokens |
### Changelog
|Release|Description|
|-|-|
| v2.1.0 | Removed repositories that opted out before 2024-04-09. Removed unreachable/private repositories (according to SWH) |
| v2.0.1 | Removed repositories that opted out before 2023-10-20. StarCoder2 was trained on this version |
| v2.0 | Initial release of the Stack v2 |
### Dataset Summary
The Stack v2 contains over 3B files in 600+ programming and markup languages. The dataset was created as part of the [BigCode Project](https://www.bigcode-project.org/), an open scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs). The Stack serves as a pre-training dataset for Code LLMs, i.e., code-generating AI systems which enable the synthesis of programs from natural language descriptions as well as from other code snippets.
This dataset is derived from the Software Heritage archive, the largest public archive of software source code and accompanying development history. Software Heritage is an open, non-profit initiative to collect, preserve, and share the source code of all publicly available software, launched by Inria, in partnership with UNESCO. We acknowledge Software Heritage for providing access to this invaluable resource. For more details, visit the [Software Heritage website](https://www.softwareheritage.org).
### Languages
The dataset contains 658 languages. The full list can be found in the [language stats table](https://huggingface.co./datasets/bigcode/the-stack-v2/blob/main/language_stats.csv).
### How to use it
```python
from datasets import load_dataset
# full dataset (file IDs only)
ds = load_dataset("bigcode/the-stack-v2", split="train")
# specific language (e.g. Dockerfiles)
ds = load_dataset("bigcode/the-stack-v2", "Dockerfile", split="train")
# dataset streaming (will only download the data as needed)
ds = load_dataset("bigcode/the-stack-v2", streaming=True, split="train")
for sample in iter(ds):
print(sample)
```
#### Downloading the file contents
The file contents are stored in the Software Heritage S3 bucket to ensure data compliance. Downloading data in bulk requires an agreement with Software Heritage and INRIA, as stated in the dataset agreement.
Make sure to configure your environment with your [AWS credentials](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/index.html#examples).
```bash
pip install "smart_open[s3]"
```
```python
import os
import boto3
from smart_open import open
from datasets import load_dataset
session = boto3.Session(
aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"])
s3 = session.client("s3")
def download_contents(blob_id, src_encoding):
s3_url = f"s3://softwareheritage/content/{blob_id}"
with open(s3_url, "rb", compression=".gz", transport_params={"client": s3}) as fin:
content = fin.read().decode(src_encoding)
return {"content": content}
ds = load_dataset("bigcode/the-stack-v2", split="train", streaming=True)
ds = ds.map(lambda row: download_contents(row["blob_id"], row["src_encoding"]))
for row in ds:
print(row["content"])
break
```
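The snippet above assumes `src_encoding` always names a codec Python can decode; when downloading at scale it may be safer to decode defensively. A minimal sketch (the `safe_decode` helper is ours, not part of `smart_open` or `datasets`):

```python
def safe_decode(raw: bytes, src_encoding: str) -> str:
    """Decode file bytes with the recorded encoding, falling back to UTF-8."""
    for enc in (src_encoding, "utf-8"):
        try:
            return raw.decode(enc)
        except (UnicodeDecodeError, LookupError):
            continue
    # Last resort: keep the file but replace undecodable bytes.
    return raw.decode("utf-8", errors="replace")
```

This can be dropped into `download_contents` in place of the direct `decode` call.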
## Dataset Structure
### Data Fields
* `blob_id` (`string`): Software Heritage (SWH) ID of the file on AWS S3.
* `directory_id` (`string`): SWH ID of the root directory of the repository.
* `path` (`string`): The file path within the repository.
* `content_id` (`string`): SWH content ID.
* `detected_licenses` (`string[]`): List of licenses (SPDX) detected by ScanCode.
* `license_type` (`string`): Inferred license type (`permissive` or `no_license`).
* `repo_name` (`string`): Repository name on GitHub.
* `snapshot_id` (`string`): SWH snapshot ID.
* `revision_id` (`string`): SWH revision (commit) ID.
* `branch_name` (`string`): Repository branch name.
* `visit_date` (`timestamp[ns]`): SWH crawl (snapshot) timestamp.
* `revision_date` (`timestamp[ns]`): SWH revision (commit) timestamp.
* `committer_date` (`timestamp[ns]`): SWH revision (commit) timestamp reported by the committer.
* `github_id` (`int64`): GitHub identifier for the repository.
* `star_events_count` (`int64`): number of stars calculated from GHArchive events.
* `fork_events_count` (`int64`): number of forks calculated from GHArchive events.
* `gha_license_id` (`string`): GHArchive SPDX license identifier, `None` if the repo is missing.
* `gha_event_created_at` (`timestamp[ns]`): Timestamp of the latest event on GHArchive for this repository.
* `gha_created_at` (`timestamp[ns]`): Timestamp of repository creation on GitHub, `None` if the repo is missing.
* `gha_language` (`string`): Repository's primary programming language on GitHub, `None` if the repo is missing.
* `src_encoding` (`string`): Original encoding of the file content before converting to UTF-8.
* `language` (`string`): Programming language of the file, detected by `go-enry / linguist`.
* `is_vendor` (`bool`): Indicator of vendor file (external library), detected by `go-enry`.
* `is_generated` (`bool`): Indicator of a generated file, detected by `go-enry`.
* `length_bytes` (`int64`): Length of the file content in UTF-8 bytes.
* `extension` (`string`): File extension.
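These fields make it easy to pre-filter the stream before downloading any content, for example keeping only permissively licensed, human-written source files (an illustrative predicate, not an official helper):

```python
def is_trainable(row: dict) -> bool:
    """Keep permissively licensed files that are neither vendored nor generated."""
    return (
        row["license_type"] == "permissive"
        and not row["is_vendor"]
        and not row["is_generated"]
    )

# Applied to the streaming dataset:
# ds = load_dataset("bigcode/the-stack-v2", streaming=True, split="train")
# ds = ds.filter(is_trainable)
```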
### Data Splits
The dataset has no splits; all data is loaded as the `train` split by default. If you want to set up a custom train-test split, beware that the dataset contains many near-duplicates, which can cause leakage into the test split.
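One way to reduce such leakage is to hold out whole repositories rather than individual files, e.g. via a stable hash of `repo_name`. A rough sketch (the helper name and threshold are illustrative):

```python
import hashlib

def in_test_split(row: dict, holdout_fraction: float = 0.05) -> bool:
    """Deterministically assign a row to the test side by hashing its repo name."""
    digest = hashlib.sha256(row["repo_name"].encode("utf-8")).digest()
    return digest[0] < int(holdout_fraction * 256)
```

Because the assignment depends only on `repo_name`, all files (and their near-duplicates) from the same repository land on the same side of the split.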
## Dataset Creation
For more information on the dataset creation pipeline please refer to the [technical report](https://huggingface.co./papers/2402.19173).
### Curation Rationale
One of the challenges faced by researchers working on code LLMs is the lack of openness and transparency around the development of these systems. Most prior works described the high-level data collection process but did not release the training data. It is therefore difficult for other researchers to fully reproduce these models and understand what kind of pre-training data leads to high-performing code LLMs. By releasing an open large-scale code dataset we hope to make training of code LLMs more reproducible.
### Source Data
#### Data Collection
3.28B unique files belonging to 104.2M GitHub repositories were collected by traversing the Software Heritage [2023-09-06](https://docs.softwareheritage.org/devel/swh-dataset/graph/dataset.html#graph-dataset-2023-09-06) graph dataset.
Additional repository-level metadata was collected from [GitHub Archive](https://www.gharchive.org/) data up to 2023-09-14.
The total uncompressed size of all files is 67.53TB.
Near-deduplication was implemented in the pre-processing pipeline on top of exact deduplication.
Roughly 40% of permissively licensed files were (near-)duplicates.
The following are not stored:
* Files that cannot contribute to training code: binary, empty, could not be decoded
* Files larger than 10MB
**Training Datasets**: For the training datasets, the programming languages were filtered further to 17 and 600+ for the `the-stack-v2-smol-ids` and `the-stack-v2-full-ids` datasets, respectively. In addition, heuristics were applied to further increase the quality of the dataset. The code files are also grouped into repositories to allow pretraining with full repository context. For more details see the [technical report](https://drive.google.com/file/d/17iGn3c-sYNiLyRSY-A85QOzgzGnGiVI3/view?usp=sharing).
##### License detection
We extract repository-level license information from [GH Archive](https://www.gharchive.org/) for all repositories with matching names in the SWH dataset.
When the repo-level license is not available, i.e., for 96.93% of repositories, we use the [ScanCode Toolkit](https://github.com/nexB/scancode-toolkit) to detect file-level licenses as follows:
* Find all filenames that could contain a license (e.g., LICENSE, MIT.txt, Apache2.0) or contain a reference to the license (e.g., README.md, GUIDELINES);
* Apply ScanCode's license detection to the matching files and gather the SPDX IDs of the detected licenses;
* Propagate the detected licenses to all files that have the same base path within the repository as the license file.
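The propagation step above can be sketched roughly as follows (an illustrative approximation; the matching rule in the real pipeline may differ):

```python
import posixpath

def propagate_licenses(license_files: dict, all_paths: list) -> dict:
    """Attach each license file's SPDX IDs to every file under its directory."""
    licenses = {path: [] for path in all_paths}
    for lic_path, spdx_ids in license_files.items():
        base = posixpath.dirname(lic_path)
        for path in all_paths:
            # A root-level license (base == "") covers the whole repository.
            if base == "" or path.startswith(base + "/"):
                licenses[path].extend(spdx_ids)
    return licenses
```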
The licenses we consider permissive are listed [here](https://huggingface.co./datasets/bigcode/the-stack-v2/blob/main/license_stats.csv).
This list was compiled from the licenses approved by the [Blue Oak Council](https://blueoakcouncil.org/list),
as well as licenses categorized as "Permissive" or "Public Domain" by [ScanCode](https://scancode-licensedb.aboutcode.org/).
#### Who are the source language producers?
The source (code) language producers are users of GitHub that created unique repository names up until 2023-09-06 (cutoff date).
### Personal and Sensitive Information
The released dataset may contain sensitive information such as emails, IP addresses, and API/ssh keys that have previously been published to public repositories on GitHub. Deduplication has helped to reduce the amount of sensitive data that may exist. In the event that the dataset contains personal information, researchers should only use public, non-personal information in support of conducting and publishing their [open-access](https://en.wikipedia.org/wiki/Open_access) research. Personal information should not be used for spamming purposes, including sending unsolicited emails or selling of personal information. Complaints, removal requests, and "do not contact" requests can be sent to [email protected].
### Opting out of The Stack v2
We are giving developers the ability to have their code removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools.
You can check if your code is in The Stack v2 with the following ["Am I In The Stack?" Space](https://huggingface.co./spaces/bigcode/in-the-stack). If you'd like to have your data removed from the dataset follow the [instructions on GitHub](https://github.com/bigcode-project/opt-out-v2).
## Considerations for Using the Data
### Social Impact of Dataset
The Stack v2 is an output of the BigCode Project. BigCode aims to be responsible by design and by default. The project is conducted in the spirit of Open Science, focused on the responsible development of LLMs for code.
With the release of The Stack v2, we aim to increase access, reproducibility, and transparency of code LLMs in the research community. Work to de-risk and improve on the implementation of ethical best practices of code LLMs is conducted in various BigCode working groups. The Legal, Ethics, and Governance working group has explored topics such as licensing (including copyleft and the intended use of permissively licensed code), attribution of generated code to original code, rights to restrict processing, the inclusion of Personally Identifiable Information (PII), and risks of malicious code, among other topics. This work is ongoing as of October 25th, 2022.
We expect code LLMs to enable people from diverse backgrounds to write higher quality code and develop low-code applications. Mission-critical software could become easier to maintain as professional developers are guided by code-generating systems on how to write more robust and efficient code. While the social impact is intended to be positive, the increased accessibility of code LLMs comes with certain risks such as over-reliance on the generated code and long-term effects on the software development job market.
A broader impact analysis relating to Code LLMs can be found in section 7 of this [paper](https://arxiv.org/abs/2107.03374). An in-depth risk assessment for Code LLMs can be found in section 4 of this [paper](https://arxiv.org/abs/2207.14157).
### Discussion of Biases
The code collected from GitHub does not contain demographic information or proxy information about the demographics. However, it is not without risks,
as the comments within the code may contain harmful or offensive language, which could be learned by the models.
Widely adopted programming languages like C and JavaScript are overrepresented compared to niche programming languages like Julia and Scala. Some programming languages, such as SQL, Batchfile, and TypeScript, are less likely to be permissively licensed (4% vs the average 10%). This may result in a biased representation of those languages. Permissively licensed files also tend to be longer.
The majority of natural language present in code from GitHub is English.
### Other Known Limitations
One of the current limitations of The Stack v2 is that scraped HTML for websites may not be compliant with Web Content Accessibility Guidelines ([WCAG](https://www.w3.org/WAI/standards-guidelines/wcag/)). This could have an impact on HTML-generated code that may introduce web accessibility issues.
The training dataset could contain malicious code and/or the model could be used to generate malware or ransomware.
To the best of our knowledge, all files contained in the dataset are licensed with one of the permissive licenses (see list in [Licensing information](#licensing-information)) or no license.
The accuracy of license attribution is limited by the accuracy of GHArchive and ScanCode Toolkit.
Any mistakes should be reported to BigCode Project for review and follow-up as needed.
## Additional Information
### Dataset Curators
1. Harm de Vries, ServiceNow Research, [email protected]
2. Leandro von Werra, Hugging Face, [email protected]
### Licensing Information
The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co./datasets/bigcode/the-stack-v2/blob/main/license_stats.csv).
### Citation Information
```bibtex
@misc{lozhkov2024starcoder,
title={StarCoder 2 and The Stack v2: The Next Generation},
author={Anton Lozhkov and Raymond Li and Loubna Ben Allal and Federico Cassano and Joel Lamy-Poirier and Nouamane Tazi and Ao Tang and Dmytro Pykhtar and Jiawei Liu and Yuxiang Wei and Tianyang Liu and Max Tian and Denis Kocetkov and Arthur Zucker and Younes Belkada and Zijian Wang and Qian Liu and Dmitry Abulkhanov and Indraneil Paul and Zhuang Li and Wen-Ding Li and Megan Risdal and Jia Li and Jian Zhu and Terry Yue Zhuo and Evgenii Zheltonozhskii and Nii Osae Osae Dade and Wenhao Yu and Lucas Krauß and Naman Jain and Yixuan Su and Xuanli He and Manan Dey and Edoardo Abati and Yekun Chai and Niklas Muennighoff and Xiangru Tang and Muhtasham Oblokulov and Christopher Akiki and Marc Marone and Chenghao Mou and Mayank Mishra and Alex Gu and Binyuan Hui and Tri Dao and Armel Zebaze and Olivier Dehaene and Nicolas Patry and Canwen Xu and Julian McAuley and Han Hu and Torsten Scholak and Sebastien Paquet and Jennifer Robinson and Carolyn Jane Anderson and Nicolas Chapados and Mostofa Patwary and Nima Tajbakhsh and Yacine Jernite and Carlos Muñoz Ferrandis and Lingming Zhang and Sean Hughes and Thomas Wolf and Arjun Guha and Leandro von Werra and Harm de Vries},
year={2024},
eprint={2402.19173},
archivePrefix={arXiv},
primaryClass={cs.SE}
}
```
|
cardiffnlp/tweet_eval | cardiffnlp | "2024-01-04T16:40:33Z" | 9,943 | 118 | [
"task_categories:text-classification",
"task_ids:intent-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:extended|other-tweet-datasets",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2010.12421",
"region:us"
] | [
"text-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1K<n<10K
- n<1K
source_datasets:
- extended|other-tweet-datasets
task_categories:
- text-classification
task_ids:
- intent-classification
- multi-class-classification
- sentiment-classification
paperswithcode_id: tweeteval
pretty_name: TweetEval
config_names:
- emoji
- emotion
- hate
- irony
- offensive
- sentiment
- stance_abortion
- stance_atheism
- stance_climate
- stance_feminist
- stance_hillary
dataset_info:
- config_name: emoji
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': ❤
'1': 😍
'2': 😂
'3': 💕
'4': 🔥
'5': 😊
'6': 😎
'7': ✨
'8': 💙
'9': 😘
'10': 📷
'11': 🇺🇸
'12': ☀
'13': 💜
'14': 😉
'15': 💯
'16': 😁
'17': 🎄
'18': 📸
'19': 😜
splits:
- name: train
num_bytes: 3803167
num_examples: 45000
- name: test
num_bytes: 4255901
num_examples: 50000
- name: validation
num_bytes: 396079
num_examples: 5000
download_size: 5939308
dataset_size: 8455147
- config_name: emotion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': anger
'1': joy
'2': optimism
'3': sadness
splits:
- name: train
num_bytes: 338871
num_examples: 3257
- name: test
num_bytes: 146645
num_examples: 1421
- name: validation
num_bytes: 38273
num_examples: 374
download_size: 367016
dataset_size: 523789
- config_name: hate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-hate
'1': hate
splits:
- name: train
num_bytes: 1223650
num_examples: 9000
- name: test
num_bytes: 428934
num_examples: 2970
- name: validation
num_bytes: 154144
num_examples: 1000
download_size: 1196346
dataset_size: 1806728
- config_name: irony
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non_irony
'1': irony
splits:
- name: train
num_bytes: 259187
num_examples: 2862
- name: test
num_bytes: 75897
num_examples: 784
- name: validation
num_bytes: 86017
num_examples: 955
download_size: 297647
dataset_size: 421101
- config_name: offensive
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': non-offensive
'1': offensive
splits:
- name: train
num_bytes: 1648061
num_examples: 11916
- name: test
num_bytes: 135473
num_examples: 860
- name: validation
num_bytes: 192417
num_examples: 1324
download_size: 1234528
dataset_size: 1975951
- config_name: sentiment
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': negative
'1': neutral
'2': positive
splits:
- name: train
num_bytes: 5425122
num_examples: 45615
- name: test
num_bytes: 1279540
num_examples: 12284
- name: validation
num_bytes: 239084
num_examples: 2000
download_size: 4849675
dataset_size: 6943746
- config_name: stance_abortion
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 68694
num_examples: 587
- name: test
num_bytes: 33171
num_examples: 280
- name: validation
num_bytes: 7657
num_examples: 66
download_size: 73517
dataset_size: 109522
- config_name: stance_atheism
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 54775
num_examples: 461
- name: test
num_bytes: 25716
num_examples: 220
- name: validation
num_bytes: 6320
num_examples: 52
download_size: 62265
dataset_size: 86811
- config_name: stance_climate
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 40249
num_examples: 355
- name: test
num_bytes: 19925
num_examples: 169
- name: validation
num_bytes: 4801
num_examples: 40
download_size: 48493
dataset_size: 64975
- config_name: stance_feminist
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 70509
num_examples: 597
- name: test
num_bytes: 33305
num_examples: 285
- name: validation
num_bytes: 8035
num_examples: 67
download_size: 76345
dataset_size: 111849
- config_name: stance_hillary
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': none
'1': against
'2': favor
splits:
- name: train
num_bytes: 69596
num_examples: 620
- name: test
num_bytes: 34487
num_examples: 295
- name: validation
num_bytes: 7532
num_examples: 69
download_size: 74057
dataset_size: 111615
configs:
- config_name: emoji
data_files:
- split: train
path: emoji/train-*
- split: test
path: emoji/test-*
- split: validation
path: emoji/validation-*
- config_name: emotion
data_files:
- split: train
path: emotion/train-*
- split: test
path: emotion/test-*
- split: validation
path: emotion/validation-*
- config_name: hate
data_files:
- split: train
path: hate/train-*
- split: test
path: hate/test-*
- split: validation
path: hate/validation-*
- config_name: irony
data_files:
- split: train
path: irony/train-*
- split: test
path: irony/test-*
- split: validation
path: irony/validation-*
- config_name: offensive
data_files:
- split: train
path: offensive/train-*
- split: test
path: offensive/test-*
- split: validation
path: offensive/validation-*
- config_name: sentiment
data_files:
- split: train
path: sentiment/train-*
- split: test
path: sentiment/test-*
- split: validation
path: sentiment/validation-*
- config_name: stance_abortion
data_files:
- split: train
path: stance_abortion/train-*
- split: test
path: stance_abortion/test-*
- split: validation
path: stance_abortion/validation-*
- config_name: stance_atheism
data_files:
- split: train
path: stance_atheism/train-*
- split: test
path: stance_atheism/test-*
- split: validation
path: stance_atheism/validation-*
- config_name: stance_climate
data_files:
- split: train
path: stance_climate/train-*
- split: test
path: stance_climate/test-*
- split: validation
path: stance_climate/validation-*
- config_name: stance_feminist
data_files:
- split: train
path: stance_feminist/train-*
- split: test
path: stance_feminist/test-*
- split: validation
path: stance_feminist/validation-*
- config_name: stance_hillary
data_files:
- split: train
path: stance_hillary/train-*
- split: test
path: stance_hillary/test-*
- split: validation
path: stance_hillary/validation-*
train-eval-index:
- config: emotion
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: hate
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: irony
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: offensive
task: text-classification
task_id: binary_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 binary
args:
average: binary
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
- config: sentiment
task: text-classification
task_id: multi_class_classification
splits:
train_split: train
eval_split: test
col_mapping:
text: text
label: target
metrics:
- type: accuracy
name: Accuracy
- type: f1
name: F1 macro
args:
average: macro
- type: f1
name: F1 micro
args:
average: micro
- type: f1
name: F1 weighted
args:
average: weighted
- type: precision
name: Precision macro
args:
average: macro
- type: precision
name: Precision micro
args:
average: micro
- type: precision
name: Precision weighted
args:
average: weighted
- type: recall
name: Recall macro
args:
average: macro
- type: recall
name: Recall micro
args:
average: micro
- type: recall
name: Recall weighted
args:
average: weighted
---
# Dataset Card for tweet_eval
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Needs More Information]
- **Repository:** [GitHub](https://github.com/cardiffnlp/tweeteval)
- **Paper:** [EMNLP Paper](https://arxiv.org/pdf/2010.12421.pdf)
- **Leaderboard:** [GitHub Leaderboard](https://github.com/cardiffnlp/tweeteval)
- **Point of Contact:** [Needs More Information]
### Dataset Summary
TweetEval consists of seven heterogeneous tasks on Twitter, all framed as multi-class tweet classification. The tasks include irony, hate, offensive, stance, emoji, emotion, and sentiment. All tasks have been unified into the same benchmark, with each dataset presented in the same format and with fixed training, validation and test splits.
### Supported Tasks and Leaderboards
- `text_classification`: The dataset can be used to train a sequence classification model, for example with the Hugging Face `transformers` library.
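As a sketch of typical usage (assuming the standard `datasets` API, where `load_dataset("tweet_eval", <config>)` selects one task), each task can be loaded by its config name:

```python
def load_tweeteval(config: str, split: str = "train"):
    """Load one TweetEval task (e.g. "emotion", "hate", "stance_climate").

    The import is lazy so the helper can be defined without `datasets`
    installed; actually calling it requires the library and network access.
    """
    from datasets import load_dataset
    return load_dataset("tweet_eval", config, split=split)

# Example (downloads data):
# ds = load_tweeteval("emotion")
# print(ds[0])  # {'text': '...', 'label': ...}
```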
### Languages
The text in the dataset is in English, as spoken by Twitter users.
## Dataset Structure
### Data Instances
An instance from `emoji` config:
```
{'label': 12, 'text': 'Sunday afternoon walking through Venice in the sun with @user ️ ️ ️ @ Abbot Kinney, Venice'}
```
An instance from `emotion` config:
```
{'label': 2, 'text': "“Worry is a down payment on a problem you may never have'. \xa0Joyce Meyer. #motivation #leadership #worry"}
```
An instance from `hate` config:
```
{'label': 0, 'text': '@user nice new signage. Are you not concerned by Beatlemania -style hysterical crowds crongregating on you…'}
```
An instance from `irony` config:
```
{'label': 1, 'text': 'seeing ppl walking w/ crutches makes me really excited for the next 3 weeks of my life'}
```
An instance from `offensive` config:
```
{'label': 0, 'text': '@user Bono... who cares. Soon people will understand that they gain nothing from following a phony celebrity. Become a Leader of your people instead or help and support your fellow countrymen.'}
```
An instance from `sentiment` config:
```
{'label': 2, 'text': '"QT @user In the original draft of the 7th book, Remus Lupin survived the Battle of Hogwarts. #HappyBirthdayRemusLupin"'}
```
An instance from `stance_abortion` config:
```
{'label': 1, 'text': 'we remind ourselves that love means to be willing to give until it hurts - Mother Teresa'}
```
An instance from `stance_atheism` config:
```
{'label': 1, 'text': '@user Bless Almighty God, Almighty Holy Spirit and the Messiah. #SemST'}
```
An instance from `stance_climate` config:
```
{'label': 0, 'text': 'Why Is The Pope Upset? via @user #UnzippedTruth #PopeFrancis #SemST'}
```
An instance from `stance_feminist` config:
```
{'label': 1, 'text': "@user @user is the UK's answer to @user and @user #GamerGate #SemST"}
```
An instance from `stance_hillary` config:
```
{'label': 1, 'text': "If a man demanded staff to get him an ice tea he'd be called a sexists elitist pig.. Oink oink #Hillary #SemST"}
```
### Data Fields
For `emoji` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: ❤
`1`: 😍
`2`: 😂
`3`: 💕
`4`: 🔥
`5`: 😊
`6`: 😎
`7`: ✨
`8`: 💙
`9`: 😘
`10`: 📷
`11`: 🇺🇸
`12`: ☀
`13`: 💜
`14`: 😉
`15`: 💯
`16`: 😁
`17`: 🎄
`18`: 📸
`19`: 😜
For `emotion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: anger
`1`: joy
`2`: optimism
`3`: sadness
For `hate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-hate
`1`: hate
For `irony` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non_irony
`1`: irony
For `offensive` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: non-offensive
`1`: offensive
For `sentiment` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: negative
`1`: neutral
`2`: positive
For `stance_abortion` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_atheism` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_climate` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_feminist` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
For `stance_hillary` config:
- `text`: a `string` feature containing the tweet.
- `label`: an `int` classification label with the following mapping:
`0`: none
`1`: against
`2`: favor
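The integer-to-name mappings above can also be expressed directly in code (a plain-dict sketch; with `datasets` loaded, the equivalent lookup is `ds.features["label"].int2str(i)`):

```python
# Label mappings copied from the config descriptions above.
EMOTION_LABELS = {0: "anger", 1: "joy", 2: "optimism", 3: "sadness"}
SENTIMENT_LABELS = {0: "negative", 1: "neutral", 2: "positive"}
STANCE_LABELS = {0: "none", 1: "against", 2: "favor"}

def id_to_label(label_id: int, mapping: dict) -> str:
    """Map an integer class id to its human-readable label name."""
    return mapping[label_id]

print(id_to_label(2, EMOTION_LABELS))  # optimism
print(id_to_label(1, STANCE_LABELS))   # against
```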
### Data Splits
| name | train | validation | test |
| --------------- | ----- | ---------- | ----- |
| emoji | 45000 | 5000 | 50000 |
| emotion | 3257 | 374 | 1421 |
| hate | 9000 | 1000 | 2970 |
| irony | 2862 | 955 | 784 |
| offensive | 11916 | 1324 | 860 |
| sentiment | 45615 | 2000 | 12284 |
| stance_abortion | 587 | 66 | 280 |
| stance_atheism | 461 | 52 | 220 |
| stance_climate | 355 | 40 | 169 |
| stance_feminist | 597 | 67 | 285 |
| stance_hillary | 620 | 69 | 295 |
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
[Needs More Information]
## Considerations for Using the Data
### Social Impact of Dataset
[Needs More Information]
### Discussion of Biases
[Needs More Information]
### Other Known Limitations
[Needs More Information]
## Additional Information
### Dataset Curators
Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa-Anke and Leonardo Neves through Cardiff NLP.
### Licensing Information
This is not a single dataset; each subset has its own license (the collection itself carries no additional restrictions).
All of the datasets require complying with the Twitter [Terms of Service](https://twitter.com/tos) and the Twitter API [Terms of Service](https://developer.twitter.com/en/developer-terms/agreement-and-policy).
Additionally, the licenses are:
- emoji: Undefined
- emotion (EmoInt): Undefined
- hate (HateEval): Permission required [here](http://hatespeech.di.unito.it/hateval.html)
- irony: Undefined
- offensive: Undefined
- sentiment: [Creative Commons Attribution 3.0 Unported License](https://groups.google.com/g/semevaltweet/c/k5DDcvVb_Vo/m/zEOdECFyBQAJ)
- stance: Undefined
### Citation Information
```
@inproceedings{barbieri2020tweeteval,
title={{TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification}},
author={Barbieri, Francesco and Camacho-Collados, Jose and Espinosa-Anke, Luis and Neves, Leonardo},
booktitle={Proceedings of Findings of EMNLP},
year={2020}
}
```
If you use any of the TweetEval datasets, please cite their original publications:
#### Emotion Recognition:
```
@inproceedings{mohammad2018semeval,
title={Semeval-2018 task 1: Affect in tweets},
author={Mohammad, Saif and Bravo-Marquez, Felipe and Salameh, Mohammad and Kiritchenko, Svetlana},
booktitle={Proceedings of the 12th international workshop on semantic evaluation},
pages={1--17},
year={2018}
}
```
#### Emoji Prediction:
```
@inproceedings{barbieri2018semeval,
title={Semeval 2018 task 2: Multilingual emoji prediction},
author={Barbieri, Francesco and Camacho-Collados, Jose and Ronzano, Francesco and Espinosa-Anke, Luis and
Ballesteros, Miguel and Basile, Valerio and Patti, Viviana and Saggion, Horacio},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={24--33},
year={2018}
}
```
#### Irony Detection:
```
@inproceedings{van2018semeval,
title={Semeval-2018 task 3: Irony detection in english tweets},
author={Van Hee, Cynthia and Lefever, Els and Hoste, V{\'e}ronique},
booktitle={Proceedings of The 12th International Workshop on Semantic Evaluation},
pages={39--50},
year={2018}
}
```
#### Hate Speech Detection:
```
@inproceedings{basile-etal-2019-semeval,
title = "{S}em{E}val-2019 Task 5: Multilingual Detection of Hate Speech Against Immigrants and Women in {T}witter",
author = "Basile, Valerio and Bosco, Cristina and Fersini, Elisabetta and Nozza, Debora and Patti, Viviana and
Rangel Pardo, Francisco Manuel and Rosso, Paolo and Sanguinetti, Manuela",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
year = "2019",
address = "Minneapolis, Minnesota, USA",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/S19-2007",
doi = "10.18653/v1/S19-2007",
pages = "54--63"
}
```
#### Offensive Language Identification:
```
@inproceedings{zampieri2019semeval,
title={SemEval-2019 Task 6: Identifying and Categorizing Offensive Language in Social Media (OffensEval)},
author={Zampieri, Marcos and Malmasi, Shervin and Nakov, Preslav and Rosenthal, Sara and Farra, Noura and Kumar, Ritesh},
booktitle={Proceedings of the 13th International Workshop on Semantic Evaluation},
pages={75--86},
year={2019}
}
```
#### Sentiment Analysis:
```
@inproceedings{rosenthal2017semeval,
title={SemEval-2017 task 4: Sentiment analysis in Twitter},
author={Rosenthal, Sara and Farra, Noura and Nakov, Preslav},
booktitle={Proceedings of the 11th international workshop on semantic evaluation (SemEval-2017)},
pages={502--518},
year={2017}
}
```
#### Stance Detection:
```
@inproceedings{mohammad2016semeval,
title={Semeval-2016 task 6: Detecting stance in tweets},
author={Mohammad, Saif and Kiritchenko, Svetlana and Sobhani, Parinaz and Zhu, Xiaodan and Cherry, Colin},
booktitle={Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)},
pages={31--41},
year={2016}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) and [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset. |
billion-word-benchmark/lm1b | billion-word-benchmark | "2024-01-18T11:08:23Z" | 9,917 | 14 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"language:en",
"arxiv:1312.3005",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: One Billion Word Language Model Benchmark
paperswithcode_id: billion-word-benchmark
dataset_info:
features:
- name: text
dtype: string
config_name: plain_text
splits:
- name: train
num_bytes: 4238206516
num_examples: 30301028
- name: test
num_bytes: 42942045
num_examples: 306688
download_size: 1792209805
dataset_size: 4281148561
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
language:
- en
---
# Dataset Card for One Billion Word Language Model Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [statmt](http://www.statmt.org/lm-benchmark/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [arxiv](https://arxiv.org/pdf/1312.3005v3.pdf)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Size of downloaded dataset files:** 1.79 GB
- **Size of the generated dataset:** 4.28 GB
- **Total amount of disk used:** 6.07 GB
### Dataset Summary
A benchmark corpus for measuring progress in statistical language modeling. The training data contains almost one billion words.
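Since the generated dataset is about 4.28 GB, streaming is a reasonable way to iterate over it without materializing everything on disk (a sketch assuming the standard `datasets` streaming API):

```python
def iter_lm1b(split: str = "train", limit: int = 3):
    """Yield up to `limit` example texts from lm1b without a full download.

    The import is lazy: defining the helper needs nothing, but calling it
    requires the `datasets` library and network access.
    """
    from datasets import load_dataset
    ds = load_dataset("lm1b", split=split, streaming=True)
    for i, example in enumerate(ds):
        if i >= limit:
            break
        yield example["text"]

# Example (downloads lazily):
# for line in iter_lm1b():
#     print(line)
```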
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### plain_text
- **Size of downloaded dataset files:** 1.79 GB
- **Size of the generated dataset:** 4.28 GB
- **Total amount of disk used:** 6.07 GB
An example of 'train' looks as follows.
```
This example was too long and was cropped:
{
"text": "While athletes in different professions dealt with doping scandals and other controversies , Woods continued to do what he did best : dominate the field of professional golf and rake in endorsements ."
}
```
### Data Fields
The data fields are the same among all splits.
#### plain_text
- `text`: a `string` feature.
### Data Splits
| name | train | test |
|------------|----------|--------|
| plain_text | 30301028 | 306688 |
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```bibtex
@misc{chelba2014billion,
title={One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling},
author={Ciprian Chelba and Tomas Mikolov and Mike Schuster and Qi Ge and Thorsten Brants and Phillipp Koehn and Tony Robinson},
year={2014},
eprint={1312.3005},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset. |
openslr/librispeech_asr | openslr | "2024-08-14T10:48:50Z" | 9,888 | 133 | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"task_ids:speaker-identification",
"annotations_creators:expert-generated",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"region:us"
] | [
"automatic-speech-recognition",
"audio-classification"
] | "2022-03-02T23:29:22Z" | ---
pretty_name: LibriSpeech
annotations_creators:
- expert-generated
language_creators:
- crowdsourced
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
paperswithcode_id: librispeech-1
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- automatic-speech-recognition
- audio-classification
task_ids:
- speaker-identification
dataset_info:
- config_name: clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.100
num_bytes: 6619683041
num_examples: 28539
- name: train.360
num_bytes: 23898214592
num_examples: 104014
- name: validation
num_bytes: 359572231
num_examples: 2703
- name: test
num_bytes: 367705423
num_examples: 2620
download_size: 30121377654
dataset_size: 31245175287
- config_name: other
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.500
num_bytes: 31810256902
num_examples: 148688
- name: validation
num_bytes: 337283304
num_examples: 2864
- name: test
num_bytes: 352396474
num_examples: 2939
download_size: 31236565377
dataset_size: 32499936680
- config_name: all
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: train.clean.100
num_bytes: 6627791685
num_examples: 28539
- name: train.clean.360
num_bytes: 23927767570
num_examples: 104014
- name: train.other.500
num_bytes: 31852502880
num_examples: 148688
- name: validation.clean
num_bytes: 359505691
num_examples: 2703
- name: validation.other
num_bytes: 337213112
num_examples: 2864
- name: test.clean
num_bytes: 368449831
num_examples: 2620
- name: test.other
num_bytes: 353231518
num_examples: 2939
download_size: 61357943031
dataset_size: 63826462287
---
# Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [LibriSpeech ASR corpus](http://www.openslr.org/12)
- **Repository:** [Needs More Information]
- **Paper:** [LibriSpeech: An ASR Corpus Based On Public Domain Audio Books](https://www.danielpovey.com/files/2015_icassp_librispeech.pdf)
- **Leaderboard:** [The 🤗 Speech Bench](https://huggingface.co./spaces/huggingface/hf-speech-bench)
- **Point of Contact:** [Daniel Povey](mailto:[email protected])
### Dataset Summary
LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech, prepared by Vassil Panayotov with the assistance of Daniel Povey. The data is derived from read audiobooks from the LibriVox project, and has been carefully segmented and aligned.
### Supported Tasks and Leaderboards
- `automatic-speech-recognition`, `audio-speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER). The task has an active Hugging Face leaderboard which can be found at https://huggingface.co./spaces/huggingface/hf-speech-bench. The leaderboard ranks models uploaded to the Hub based on their WER. An external leaderboard at https://paperswithcode.com/sota/speech-recognition-on-librispeech-test-clean ranks the latest models from research and academia.
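For reference, WER is the word-level edit distance between hypothesis and reference transcripts, normalized by reference length. A minimal implementation (a didactic sketch, not the leaderboard's exact scorer) looks like:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance table over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("a man said to the universe sir i exist",
          "a man said to the universe sir i exist"))  # 0.0
```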
### Languages
The audio is in English. There are two configurations: `clean` and `other`.
The speakers in the corpus were ranked according to the WER of the transcripts of a model trained on
a different dataset, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher WER speakers designated as "other".
## Dataset Structure
### Data Instances
A typical data point comprises the path to the audio file, usually called `file` and its transcription, called `text`. Some additional information about the speaker and the passage which contains the transcription is provided.
```
{'chapter_id': 141231,
'file': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'audio': {'path': '/home/patrick/.cache/huggingface/datasets/downloads/extracted/b7ded9969e09942ab65313e691e6fc2e12066192ee8527e21d634aca128afbe2/dev_clean/1272/141231/1272-141231-0000.flac',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346,
0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 16000},
'id': '1272-141231-0000',
'speaker_id': 1272,
'text': 'A MAN SAID TO THE UNIVERSE SIR I EXIST'}
```
### Data Fields
- file: A path to the downloaded audio file in .flac format.
- audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- text: the transcription of the audio file.
- id: unique id of the data sample.
- speaker_id: unique id of the speaker. The same speaker id can be found for multiple data samples.
- chapter_id: id of the audiobook chapter which includes the transcription.
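The decoding caveat above (index the sample before accessing `"audio"`) can be sketched as follows, using `cast_column` to pin the target sampling rate (assuming the standard `datasets` Audio API; calling the helper requires the library's audio extras and network access):

```python
def load_clean_validation(target_sr: int = 16_000):
    """Load the LibriSpeech "clean" validation split and ensure a target
    sampling rate via `cast_column`; audio is decoded lazily on access.
    """
    from datasets import load_dataset, Audio
    ds = load_dataset("librispeech_asr", "clean", split="validation")
    return ds.cast_column("audio", Audio(sampling_rate=target_sr))

# Example (downloads data):
# ds = load_clean_validation()
# sample = ds[0]                    # index first ...
# audio = sample["audio"]["array"]  # ... then access "audio": decoded here
```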
### Data Splits
The size of the corpus makes it impractical, or at least inconvenient
for some users, to distribute it as a single large archive. Thus the
training portion of the corpus is split into three subsets, with approximate size 100, 360 and 500 hours respectively.
A simple automatic
procedure was used to select the audio in the first two sets to be, on
average, of higher recording quality and with accents closer to US
English. An acoustic model was trained on WSJ’s si-84 data subset
and was used to recognize the audio in the corpus, using a bigram
LM estimated on the text of the respective books. We computed the
Word Error Rate (WER) of this automatic transcript relative to our
reference transcripts obtained from the book texts.
The speakers in the corpus were ranked according to the WER of
the WSJ model’s transcripts, and were divided roughly in the middle,
with the lower-WER speakers designated as "clean" and the higher-WER speakers designated as "other".
For "clean", the data is split into train, validation, and test set. The train set is further split into train.100 and train.360
respectively accounting for 100h and 360h of the training data.
For "other", the data is split into train, validation, and test set. The train set contains approximately 500h of recorded speech.
| | Train.500 | Train.360 | Train.100 | Valid | Test |
| ----- | ------ | ----- | ---- | ---- | ---- |
| clean | - | 104014 | 28539 | 2703 | 2620|
| other | 148688 | - | - | 2864 | 2939 |
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
The dataset consists of recordings of people who have donated their voices online. You agree not to attempt to determine the identity of speakers in this dataset.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
The dataset was initially created by Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur.
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
### Citation Information
```
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on},
pages={5206--5210},
year={2015},
organization={IEEE}
}
```
### Contributions
Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
CohereForAI/Global-MMLU | CohereForAI | "2024-12-12T13:49:50Z" | 9,820 | 99 | [
"language:en",
"language:ar",
"language:bn",
"language:es",
"language:fr",
"language:hi",
"language:ru",
"language:de",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:pt",
"language:zh",
"language:yo",
"language:nl",
"language:ro",
"language:uk",
"language:vi",
"language:tr",
"language:pl",
"language:fa",
"language:cs",
"language:he",
"language:el",
"language:ms",
"language:fil",
"language:te",
"language:si",
"language:ne",
"language:ky",
"language:sv",
"language:lt",
"language:sr",
"language:mg",
"language:so",
"language:ha",
"language:am",
"language:sn",
"language:ig",
"language:ny",
"language:sw",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"arxiv:2412.03304",
"region:us",
"argilla"
] | null | "2024-12-01T22:45:59Z" | ---
dataset_info:
- config_name: am
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 209505
num_examples: 285
- name: test
num_bytes: 12085768
num_examples: 14042
download_size: 10260448
dataset_size: 12295273
- config_name: ar
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 202343
num_examples: 285
- name: test
num_bytes: 11621977
num_examples: 14042
download_size: 9817049
dataset_size: 11824320
- config_name: bn
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 301875
num_examples: 285
- name: test
num_bytes: 18061158
num_examples: 14042
download_size: 12524784
dataset_size: 18363033
- config_name: cs
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 149807
num_examples: 285
- name: test
num_bytes: 8607308
num_examples: 14042
download_size: 8640151
dataset_size: 8757115
- config_name: de
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 162406
num_examples: 285
- name: test
num_bytes: 9575360
num_examples: 14042
download_size: 9187953
dataset_size: 9737766
- config_name: el
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 254308
num_examples: 285
- name: test
num_bytes: 14502137
num_examples: 14042
download_size: 12288940
dataset_size: 14756445
- config_name: en
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 146364
num_examples: 285
- name: test
num_bytes: 8440632
num_examples: 14042
download_size: 7912429
dataset_size: 8586996
- config_name: es
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 160633
num_examples: 285
- name: test
num_bytes: 9399724
num_examples: 14042
download_size: 8752720
dataset_size: 9560357
- config_name: fa
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 202609
num_examples: 285
- name: test
num_bytes: 11611890
num_examples: 14042
download_size: 9564082
dataset_size: 11814499
- config_name: fil
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 165182
num_examples: 285
- name: test
num_bytes: 9510179
num_examples: 14042
download_size: 8564879
dataset_size: 9675361
- config_name: fr
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 166173
num_examples: 285
- name: test
num_bytes: 9858873
num_examples: 14042
download_size: 9202595
dataset_size: 10025046
- config_name: ha
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147406
num_examples: 285
- name: test
num_bytes: 8445707
num_examples: 14042
download_size: 7665529
dataset_size: 8593113
- config_name: he
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 178912
num_examples: 285
- name: test
num_bytes: 10248592
num_examples: 14042
download_size: 8818618
dataset_size: 10427504
- config_name: hi
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 308254
num_examples: 285
- name: test
num_bytes: 17970478
num_examples: 14042
download_size: 12407854
dataset_size: 18278732
- config_name: id
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 154692
num_examples: 285
- name: test
num_bytes: 8886643
num_examples: 14042
download_size: 7793365
dataset_size: 9041335
- config_name: ig
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 157376
num_examples: 285
- name: test
num_bytes: 9221405
num_examples: 14042
download_size: 7644102
dataset_size: 9378781
- config_name: it
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 157547
num_examples: 285
- name: test
num_bytes: 9374481
num_examples: 14042
download_size: 8873034
dataset_size: 9532028
- config_name: ja
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 167646
num_examples: 285
- name: test
num_bytes: 9830716
num_examples: 14042
download_size: 8826164
dataset_size: 9998362
- config_name: ko
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 160572
num_examples: 285
- name: test
num_bytes: 9454859
num_examples: 14042
download_size: 8640457
dataset_size: 9615431
- config_name: ky
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 235001
num_examples: 285
- name: test
num_bytes: 13483934
num_examples: 14042
download_size: 11148813
dataset_size: 13718935
- config_name: lt
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 148917
num_examples: 285
- name: test
num_bytes: 8504949
num_examples: 14042
download_size: 8416467
dataset_size: 8653866
- config_name: mg
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 161992
num_examples: 285
- name: test
num_bytes: 9337415
num_examples: 14042
download_size: 8011427
dataset_size: 9499407
- config_name: ms
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 152549
num_examples: 285
- name: test
num_bytes: 8823844
num_examples: 14042
download_size: 7783581
dataset_size: 8976393
- config_name: ne
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 294790
num_examples: 285
- name: test
num_bytes: 16972110
num_examples: 14042
download_size: 11895818
dataset_size: 17266900
- config_name: nl
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 158122
num_examples: 285
- name: test
num_bytes: 9099176
num_examples: 14042
download_size: 8565959
dataset_size: 9257298
- config_name: ny
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 151315
num_examples: 285
- name: test
num_bytes: 8686819
num_examples: 14042
download_size: 7822699
dataset_size: 8838134
- config_name: pl
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 157290
num_examples: 285
- name: test
num_bytes: 8980730
num_examples: 14042
download_size: 8981270
dataset_size: 9138020
- config_name: pt
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 154592
num_examples: 285
- name: test
num_bytes: 8983299
num_examples: 14042
download_size: 8517588
dataset_size: 9137891
- config_name: ro
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 158311
num_examples: 285
- name: test
num_bytes: 9163189
num_examples: 14042
download_size: 8773232
dataset_size: 9321500
- config_name: ru
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 246059
num_examples: 285
- name: test
num_bytes: 14059847
num_examples: 14042
download_size: 11904365
dataset_size: 14305906
- config_name: si
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 297843
num_examples: 285
- name: test
num_bytes: 17374939
num_examples: 14042
download_size: 12790098
dataset_size: 17672782
- config_name: sn
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147355
num_examples: 285
- name: test
num_bytes: 8507368
num_examples: 14042
download_size: 7962672
dataset_size: 8654723
- config_name: so
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 156282
num_examples: 285
- name: test
num_bytes: 9033243
num_examples: 14042
download_size: 8706693
dataset_size: 9189525
- config_name: sr
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 221580
num_examples: 285
- name: test
num_bytes: 12695546
num_examples: 14042
download_size: 10748391
dataset_size: 12917126
- config_name: sv
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147893
num_examples: 285
- name: test
num_bytes: 8549708
num_examples: 14042
download_size: 8181997
dataset_size: 8697601
- config_name: sw
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 147069
num_examples: 285
- name: test
num_bytes: 8653210
num_examples: 14042
download_size: 7932986
dataset_size: 8800279
- config_name: te
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 315724
num_examples: 285
- name: test
num_bytes: 18170058
num_examples: 14042
download_size: 12631358
dataset_size: 18485782
- config_name: tr
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 153426
num_examples: 285
- name: test
num_bytes: 8833244
num_examples: 14042
download_size: 8351339
dataset_size: 8986670
- config_name: uk
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 229888
num_examples: 285
- name: test
num_bytes: 13233771
num_examples: 14042
download_size: 11347842
dataset_size: 13463659
- config_name: vi
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 185712
num_examples: 285
- name: test
num_bytes: 10604332
num_examples: 14042
download_size: 8971266
dataset_size: 10790044
- config_name: yo
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 153810
num_examples: 285
- name: test
num_bytes: 10694916
num_examples: 14042
download_size: 9303668
dataset_size: 10848726
- config_name: zh
features:
- name: sample_id
dtype: string
- name: subject
dtype: string
- name: subject_category
dtype: string
- name: question
dtype: string
- name: option_a
dtype: string
- name: option_b
dtype: string
- name: option_c
dtype: string
- name: option_d
dtype: string
- name: answer
dtype: string
- name: required_knowledge
dtype: string
- name: time_sensitive
dtype: string
- name: reference
dtype: string
- name: culture
dtype: string
- name: region
dtype: string
- name: country
dtype: string
- name: cultural_sensitivity_label
dtype: string
- name: is_annotated
dtype: bool
splits:
- name: dev
num_bytes: 127577
num_examples: 285
- name: test
num_bytes: 7393764
num_examples: 14042
download_size: 7322261
dataset_size: 7521341
configs:
- config_name: am
data_files:
- split: test
path: am/test-*
- split: dev
path: am/dev-*
- config_name: ar
data_files:
- split: test
path: ar/test-*
- split: dev
path: ar/dev-*
- config_name: bn
data_files:
- split: test
path: bn/test-*
- split: dev
path: bn/dev-*
- config_name: cs
data_files:
- split: test
path: cs/test-*
- split: dev
path: cs/dev-*
- config_name: de
data_files:
- split: test
path: de/test-*
- split: dev
path: de/dev-*
- config_name: el
data_files:
- split: test
path: el/test-*
- split: dev
path: el/dev-*
- config_name: en
data_files:
- split: test
path: en/test-*
- split: dev
path: en/dev-*
- config_name: es
data_files:
- split: test
path: es/test-*
- split: dev
path: es/dev-*
- config_name: fa
data_files:
- split: test
path: fa/test-*
- split: dev
path: fa/dev-*
- config_name: fil
data_files:
- split: test
path: fil/test-*
- split: dev
path: fil/dev-*
- config_name: fr
data_files:
- split: test
path: fr/test-*
- split: dev
path: fr/dev-*
- config_name: ha
data_files:
- split: test
path: ha/test-*
- split: dev
path: ha/dev-*
- config_name: he
data_files:
- split: test
path: he/test-*
- split: dev
path: he/dev-*
- config_name: hi
data_files:
- split: test
path: hi/test-*
- split: dev
path: hi/dev-*
- config_name: id
data_files:
- split: test
path: id/test-*
- split: dev
path: id/dev-*
- config_name: ig
data_files:
- split: test
path: ig/test-*
- split: dev
path: ig/dev-*
- config_name: it
data_files:
- split: test
path: it/test-*
- split: dev
path: it/dev-*
- config_name: ja
data_files:
- split: test
path: ja/test-*
- split: dev
path: ja/dev-*
- config_name: ko
data_files:
- split: test
path: ko/test-*
- split: dev
path: ko/dev-*
- config_name: ky
data_files:
- split: test
path: ky/test-*
- split: dev
path: ky/dev-*
- config_name: lt
data_files:
- split: test
path: lt/test-*
- split: dev
path: lt/dev-*
- config_name: mg
data_files:
- split: test
path: mg/test-*
- split: dev
path: mg/dev-*
- config_name: ms
data_files:
- split: test
path: ms/test-*
- split: dev
path: ms/dev-*
- config_name: ne
data_files:
- split: test
path: ne/test-*
- split: dev
path: ne/dev-*
- config_name: nl
data_files:
- split: test
path: nl/test-*
- split: dev
path: nl/dev-*
- config_name: ny
data_files:
- split: test
path: ny/test-*
- split: dev
path: ny/dev-*
- config_name: pl
data_files:
- split: test
path: pl/test-*
- split: dev
path: pl/dev-*
- config_name: pt
data_files:
- split: test
path: pt/test-*
- split: dev
path: pt/dev-*
- config_name: ro
data_files:
- split: test
path: ro/test-*
- split: dev
path: ro/dev-*
- config_name: ru
data_files:
- split: test
path: ru/test-*
- split: dev
path: ru/dev-*
- config_name: si
data_files:
- split: test
path: si/test-*
- split: dev
path: si/dev-*
- config_name: sn
data_files:
- split: test
path: sn/test-*
- split: dev
path: sn/dev-*
- config_name: so
data_files:
- split: test
path: so/test-*
- split: dev
path: so/dev-*
- config_name: sr
data_files:
- split: test
path: sr/test-*
- split: dev
path: sr/dev-*
- config_name: sv
data_files:
- split: test
path: sv/test-*
- split: dev
path: sv/dev-*
- config_name: sw
data_files:
- split: test
path: sw/test-*
- split: dev
path: sw/dev-*
- config_name: te
data_files:
- split: test
path: te/test-*
- split: dev
path: te/dev-*
- config_name: tr
data_files:
- split: test
path: tr/test-*
- split: dev
path: tr/dev-*
- config_name: uk
data_files:
- split: test
path: uk/test-*
- split: dev
path: uk/dev-*
- config_name: vi
data_files:
- split: test
path: vi/test-*
- split: dev
path: vi/dev-*
- config_name: yo
data_files:
- split: test
path: yo/test-*
- split: dev
path: yo/dev-*
- config_name: zh
data_files:
- split: test
path: zh/test-*
- split: dev
path: zh/dev-*
tags:
- argilla
language:
- en
- ar
- bn
- es
- fr
- hi
- ru
- de
- id
- it
- ja
- ko
- pt
- zh
- yo
- nl
- ro
- uk
- vi
- tr
- pl
- fa
- cs
- he
- el
- ms
- fil
- te
- si
- ne
- ky
- sv
- lt
- sr
- mg
- so
- ha
- am
- sn
- ig
- ny
- sw
---
![GlobalMMLU Header](https://huggingface.co./datasets/CohereForAI/Global-MMLU/resolve/main/global_mmlu.jpg)
# Dataset Summary
[Global-MMLU](https://arxiv.org/abs/2412.03304) 🌍 is a multilingual evaluation set spanning 42 languages, including English. It combines machine translations of [MMLU](https://huggingface.co./datasets/cais/mmlu) questions with professional translations and crowd-sourced post-edits.
It also includes cultural sensitivity annotations for a subset of the questions (2850 questions per language) and classifies them as *Culturally Sensitive* (CS) 🗽 or *Culturally Agnostic* (CA) ⚖️. These annotations were collected as part of an open science initiative led by Cohere For AI in collaboration with many external contributors from both industry and academia.
- **Curated by:** Professional annotators and contributors of [Cohere For AI Community](https://cohere.com/research).
- **Language(s):** 42 languages.
- **License:** [Apache 2.0](https://opensource.org/license/apache-2-0)
**Note:** We also provide a "lite" version of Global MMLU called ["Global-MMLU-Lite"](https://huggingface.co./datasets/CohereForAI/Global-MMLU-Lite). This dataset is more balanced, containing 200 samples each for the CS and CA subsets per language, and provides human-translated coverage for 15 languages.
### **Global-MMLU Dataset Family:**
| Name | Explanation |
|------|--------------|
| [Global-MMLU](https://huggingface.co./datasets/CohereForAI/Global-MMLU) | Full Global-MMLU set with translations for all 14K samples including CS and CA subsets|
| [Global-MMLU-Lite](https://huggingface.co./datasets/CohereForAI/Global-MMLU-Lite) | Lite version of Global-MMLU with human translated samples in 15 languages and containing 200 samples each for CS and CA subsets per language.|
## Load with Datasets
To load this dataset with `datasets`, you'll first need to install it using `pip install datasets` and then use the following code:
```python
from datasets import load_dataset
# load HF dataset
global_mmlu = load_dataset("CohereForAI/Global-MMLU", 'en')
# can also be used as pandas dataframe
global_mmlu.set_format("pandas")
global_mmlu_test = global_mmlu['test'][:]
global_mmlu_dev = global_mmlu['dev'][:]
```
<details>
<summary> additional details </summary>
The columns corresponding to annotations collected from our cultural bias study (i.e. 'required_knowledge', 'time_sensitive', 'reference', 'culture', 'region', 'country') contain a list of values representing annotations from different annotators.
However, to avoid conversion issues with HF datasets, these columns are provided as strings in the final dataset.
You can convert these columns back to list of values for easier manipulation as follows:
```python
import ast
# convert string values to list
global_mmlu_df['required_knowledge'] = global_mmlu_df['required_knowledge'].apply(ast.literal_eval)
```
</details>
<br>
## Data Fields
The data fields are the same among all splits. Brief description of each field is provided below.
<details>
<summary> data field description </summary>
- `sample_id`: A unique identifier for the question.
- `subject`: The main topic the question falls under.
- `subject_category`: The high-level category the subject falls under i.e. STEM/Humanities/Social Sciences/Medical/Business/Other.
- `question`: translated question from MMLU
- `option_a`: one of the possible option choices
- `option_b`: one of the possible option choices
- `option_c`: one of the possible option choices
- `option_d`: one of the possible option choices
- `answer`: the correct answer (A/B/C/D)
- `required_knowledge`: annotator votes for knowledge needed to answer the question correctly. Possible values include: "cultural", "regional", "dialect" or "none"
- `time_sensitive`: annotator votes indicating if the question's answer is time-dependent. Possible values include: Yes/No
- `reference`: annotations for which part of the question contains cultural/regional/dialect references. The different items in the list are annotations from different annotators.
- `culture`: annotations for which culture does the question belong to. The different items in the list correspond to annotations from different annotators.
- `region`: Geographic region the question is relevant to. Each item in the list corresponds to annotations from different annotators.
- `country`: Specific country the question pertains to. Each item in the list corresponds to annotations from different annotators.
- `cultural_sensitivity_label`: Label to indicate if question is culturally sensitive (CS) or culturally agnostic (CA) based on annotator votes.
- `is_annotated`: True/False flag to indicate if sample contains any annotations from our cultural bias study.
</details>
<br>
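The label fields above can be used to split samples into the CS/CA subsets. A minimal sketch on toy rows (hypothetical values, not real dataset entries):

```python
# Toy rows mimicking the annotated fields described above (hypothetical
# values, not actual dataset samples).
rows = [
    {"sample_id": "q1", "cultural_sensitivity_label": "CS", "is_annotated": True},
    {"sample_id": "q2", "cultural_sensitivity_label": "CA", "is_annotated": True},
    {"sample_id": "q3", "cultural_sensitivity_label": "", "is_annotated": False},
]

# Split annotated samples into Culturally Sensitive / Culturally Agnostic subsets.
cs = [r for r in rows if r["is_annotated"] and r["cultural_sensitivity_label"] == "CS"]
ca = [r for r in rows if r["is_annotated"] and r["cultural_sensitivity_label"] == "CA"]
print(len(cs), len(ca))  # 1 1
```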
## Data Splits
The following are the splits of the data:
| Split | No. of instances | Language Coverage |
|-------|------------------|-------------------|
| test | 589,764 | 42 |
| dev | 11,970 | 42 |
## Data Instances
An example from `test` set looks as follows:
```json
{'sample_id': 'world_religions/test/170'
'subject': 'world_religions',
'subject_category': 'Humanities',
'question': ' The numen of Augustus referred to which of the following characteristics?',
'option_a': 'Divine power',
'option_b': 'Sexual virility',
'option_c': 'Military acumen',
'option_d': 'Philosophical intellect',
'answer': 'A',
'required_knowledge': "['none', 'cultural', 'cultural', 'cultural']",
'time_sensitive': "['No', 'No', 'No', 'No']",
'reference': "['-', '-', {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}, {'end': 22, 'label': 'Cultural', 'score': None, 'start': 5}]",
'culture': "['Western Culture', 'Western Culture', 'Western Culture']",
'region': "['North America', 'Europe']",
'country': "['Italy']",
'cultural_sensitivity_label': 'CS',
'is_annotated': True,
}
```
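The stringified annotation lists in an instance like this can be parsed back and aggregated, for example by majority vote. A sketch using the `required_knowledge` value from the example above:

```python
import ast
from collections import Counter

# Stringified list as stored in the dataset (taken from the example above).
required_knowledge = "['none', 'cultural', 'cultural', 'cultural']"

# Parse back into a Python list, then take the majority annotator vote.
votes = ast.literal_eval(required_knowledge)
majority_label, n_votes = Counter(votes).most_common(1)[0]
print(majority_label)  # cultural
```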
## Statistics
### Annotation Types
The following is the breakdown of CS🗽, CA⚖️ and MA📝 samples in the final dataset.
| Type of Annotation | Instances per language | No. of languages | Total instances |
|--------------------|------------------------|------------------|-----------------|
| Culturally Sensitive 🗽 | 792 | 42 | 33,264 |
| Culturally Agnostic ⚖️ | 2058 | 42 | 86,436 |
| MMLU Annotated 📝 | 2850 | 42 | 119,700 |
### Languages
The dataset covers 42 languages: 20 high-resource, 9 mid-resource, and 13 low-resource languages. Details about the languages included in the dataset are provided below.
<details>
<summary> Languages Info </summary>
| ISO Code | Language | Resources |
|----------|----------|-----------|
| `am` | Amharic | Low |
| `ar` | Arabic (Standard)| High |
| `bn` | Bengali | Mid |
| `de` | German | High |
| `el` | Greek | Mid |
| `en` | English | High |
| `fil` | Filipino | Mid |
| `fr` | French | High |
| `ha` | Hausa | Low |
| `he` | Hebrew | Mid |
| `hi` | Hindi | High |
| `ig` | Igbo | Low |
| `id` | Indonesian | Mid |
| `it` | Italian | High |
| `ja` | Japanese | High |
| `ky` | Kyrgyz | Low |
| `ko` | Korean | Mid |
| `lt` | Lithuanian | Mid |
| `mg` | Malagasy | Low |
| `ms` | Malay | Mid |
| `ne` | Nepali | Low |
| `nl` | Dutch | High |
| `ny` | Chichewa | Low |
| `fa` | Persian | High |
| `pl` | Polish | High |
| `pt` | Portuguese | High |
| `ru` | Russian | High |
| `si` | Sinhala | Low |
| `sn` | Shona | Low |
| `so` | Somali | Low |
| `es` | Spanish | High |
| `sr` | Serbian | High |
| `sw` | Swahili | Low |
| `sv` | Swedish | High |
| `te` | Telugu | Low |
| `tr` | Turkish | High |
| `uk` | Ukrainian | Mid |
| `vi` | Vietnamese | High |
| `yo` | Yorùbá | Low |
| `zh` | Chinese (Simplified) | High |
</details>
<br>
# Known Limitations
A brief overview of limitations of this dataset is provided below.
<details>
<summary> show limitations </summary>
- **Language and dialect coverage:** Global-MMLU focuses on 42 languages. However, this is still only a tiny fraction of the world’s linguistic diversity. Future work is needed to continue to improve evaluations beyond these 42 languages and take into account how technology serves different dialects.
- **Uneven distribution of contributions:** The dataset contains translation post-edits from community volunteers, with a 'long tail' of volunteers making only one or two contributions. Similarly, there is a huge gap between languages with the highest number of contributions and ones with the lowest number of contributions.
- **Toxic or offensive speech:** Our annotation process did not focus on flagging toxic, harmful, or offensive speech, so it is possible that Global-MMLU contains some data that could be considered harmful. We believe this is of relatively low risk because of the nature of the original MMLU and the focus on examination material.
- **Region Category Assignment:** For the annotation of geographically sensitive questions, we classified regions into six geographic regions (Africa, Asia, Europe, North America, Oceania, and South America). However, based on our discussions, going forward we would recommend switching to the taxonomy proposed by the World Bank, which is more granular and includes separate designations for Central America and Sub-Saharan Africa.
- **Identifying cultural sensitivity does not guarantee cultural inclusion:** While Global-MMLU highlights important limitations in current datasets by identifying gaps in non-Western cultural representation, future work must prioritize the integration of diverse, culturally grounded knowledge to achieve true inclusivity and fairness in multilingual AI evaluation.
</details>
<br>
# Additional Information
## Provenance
- **Methods Used:** Professional annotations as well as crowd-sourced volunteer annotations.
- **Methodology Details:** We collected cultural bias annotations as well as post-edits of translations for different MMLU questions.
- [Cultural Sensitivity Annotation Platform](https://huggingface.co./spaces/CohereForAI/MMLU-evaluation)
- [Translation Quality Annotation Platform](https://huggingface.co./spaces/CohereForAI/review-mmlu-translations)
- Dates of Collection: May 2024 - Aug 2024
## Dataset Version and Maintenance
- **Maintenance Status:** Actively Maintained
- **Version Details:**
- *Current version:* 1.0
- *Last Update:* 12/2024
- *First Release:* 12/2024
## Authorship
- **Publishing Organization:** [Cohere For AI](https://cohere.com/research)
- **Industry Type:** Not-for-profit - Tech
## Licensing Information
This dataset can be used for any purpose, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
## Additional Details
For any additional details, please check our paper, [Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation](https://arxiv.org/abs/2412.03304).
## Citation Information
```bibtex
@misc{singh2024globalmmluunderstandingaddressing,
title={Global MMLU: Understanding and Addressing Cultural and Linguistic Biases in Multilingual Evaluation},
author={Shivalika Singh and Angelika Romanou and Clémentine Fourrier and David I. Adelani and Jian Gang Ngui and Daniel Vila-Suero and Peerat Limkonchotiwat and Kelly Marchisio and Wei Qi Leong and Yosephine Susanto and Raymond Ng and Shayne Longpre and Wei-Yin Ko and Madeline Smith and Antoine Bosselut and Alice Oh and Andre F. T. Martins and Leshem Choshen and Daphne Ippolito and Enzo Ferrante and Marzieh Fadaee and Beyza Ermis and Sara Hooker},
year={2024},
eprint={2412.03304},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2412.03304},
}
``` |
Tuxifan/UbuntuIRC | Tuxifan | "2023-06-04T15:35:31Z" | 9,776 | 0 | [
"task_categories:text-generation",
"license:cc0-1.0",
"size_categories:1M<n<10M",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2023-06-02T22:48:40Z" | ---
license: cc0-1.0
task_categories:
- text-generation
pretty_name: Ubuntu IRC channels
---
Completely uncurated collection of IRC logs from the Ubuntu IRC channels |
mteb/emotion | mteb | "2022-09-27T19:14:18Z" | 9,708 | 11 | [
"language:en",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-05-23T09:55:39Z" | ---
language:
- en
---
** Attention: There appears to be an overlap between train and test. I trained a model on the train set and achieved 100% accuracy on the test set. With the original emotion dataset this is not the case (92.4% accuracy). ** |
laion/strategic_game_chess | laion | "2023-10-20T04:14:20Z" | 9,620 | 29 | [
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"game"
] | null | "2023-06-06T02:09:13Z" | ---
tags:
- game
pretty_name: The Chess Dataset
license: cc-by-4.0
---
# Chess
> Recent advancements in artificial intelligence (AI) underscore the progress in reasoning and planning shown by recent generalist machine learning (ML) models. This progress can be accelerated by datasets that strengthen these generic capabilities when used for training foundation models of various kinds. This research initiative has generated extensive synthetic datasets from complex games — chess, Rubik's Cube, and mazes — to study the facilitation and advancement of these critical generic skills in AI models.
This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves.
It is generated through self-play by the Stockfish engine on Fugaku, and we add initial moves to expand its diversity.
Each game has three columns: 'Moves', 'Termination' and 'Result':
- 'Moves': recorded chess moves of the whole game.
- 'Termination': how the game ended, e.g. CHECKMATE, INSUFFICIENT_MATERIAL, etc.
  - Please check the python-chess documentation for detailed information:
https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination
- 'Result': result of this game, 1-0, 1/2-1/2, 0-1.
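To make the column layout concrete, here is a sketch that tallies outcomes over a few hypothetical rows shaped like the dataset (toy values, not actual games from the corpus):

```python
from collections import Counter

# Toy rows with the dataset's three columns (hypothetical games).
games = [
    {"Moves": "e4 e5 Nf3 Nc6 Bb5", "Termination": "CHECKMATE", "Result": "1-0"},
    {"Moves": "d4 d5 c4 e6", "Termination": "INSUFFICIENT_MATERIAL", "Result": "1/2-1/2"},
    {"Moves": "e4 c5 Nf3 d6", "Termination": "CHECKMATE", "Result": "0-1"},
]

# Tally results and count moves per game (moves are space-separated SAN tokens).
results = Counter(g["Result"] for g in games)
moves_per_game = [len(g["Moves"].split()) for g in games]
print(results["1-0"], moves_per_game[0])  # 1 5
```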
### Call for Collaboration
We invite interested researchers and ML practitioners to explore these datasets' potential. Whether training GPT models from scratch or fine-tuning pre-existing models, we encourage the exploration of various pre-training and fine-tuning strategies using these game-based datasets standalone or as enhancement of other already composed large-scale data.
Our team is prepared to assist in securing necessary GPU resources for these explorations. We are particularly interested in collaborators eager to pre-train models of small to medium scale on our game data, subsequently transition to standard text-based training, and then perform comparative analyses against models of similar architecture trained exclusively on text data.
Conclusively, this initiative marks a significant stride toward intricate problem-solving and strategic planning in AI, extending an open invitation to the research community for collaborative advancement in this domain. |
japanese-asr/whisper_transcriptions.mls.wer_10.0.vectorized | japanese-asr | "2024-09-15T01:35:08Z" | 9,620 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-11T12:32:36Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95550219596
num_examples: 62101
download_size: 43092578892
dataset_size: 95550219596
- config_name: subset_1
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95892233884
num_examples: 62323
download_size: 43217224829
dataset_size: 95892233884
- config_name: subset_10
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659534424
num_examples: 62172
download_size: 43197712726
dataset_size: 95659534424
- config_name: subset_100
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95570234896
num_examples: 62114
download_size: 43084233453
dataset_size: 95570234896
- config_name: subset_101
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95741000524
num_examples: 62225
download_size: 43183665345
dataset_size: 95741000524
- config_name: subset_102
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95732487892
num_examples: 62219
download_size: 43229537725
dataset_size: 95732487892
- config_name: subset_103
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95776664816
num_examples: 62248
download_size: 43187441638
dataset_size: 95776664816
- config_name: subset_104
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95930500816
num_examples: 62348
download_size: 43294625977
dataset_size: 95930500816
- config_name: subset_105
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95836868972
num_examples: 62287
download_size: 43251807028
dataset_size: 95836868972
- config_name: subset_106
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95940716900
num_examples: 62355
download_size: 43289304103
dataset_size: 95940716900
- config_name: subset_107
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95782476488
num_examples: 62252
download_size: 43209137820
dataset_size: 95782476488
- config_name: subset_108
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 96007104792
num_examples: 62398
download_size: 43221018658
dataset_size: 96007104792
- config_name: subset_109
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95764776944
num_examples: 62240
download_size: 43162176171
dataset_size: 95764776944
- config_name: subset_11
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95749669360
num_examples: 62230
download_size: 43193067430
dataset_size: 95749669360
- config_name: subset_110
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95826353540
num_examples: 62281
download_size: 43217482451
dataset_size: 95826353540
- config_name: subset_111
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95910493660
num_examples: 62335
download_size: 43268379463
dataset_size: 95910493660
- config_name: subset_112
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95782539616
num_examples: 62252
download_size: 43198507530
dataset_size: 95782539616
- config_name: subset_113
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95496693376
num_examples: 62066
download_size: 43106662052
dataset_size: 95496693376
- config_name: subset_114
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 16920876
num_examples: 11
download_size: 7573002
dataset_size: 16920876
- config_name: subset_115
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95639177564
num_examples: 62159
download_size: 43180784518
dataset_size: 95639177564
- config_name: subset_116
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95625267448
num_examples: 62150
download_size: 43124129761
dataset_size: 95625267448
- config_name: subset_117
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95545048296
num_examples: 62098
download_size: 43082968259
dataset_size: 95545048296
- config_name: subset_118
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95866827908
num_examples: 62307
download_size: 43167164098
dataset_size: 95866827908
- config_name: subset_119
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 96068332252
num_examples: 62437
download_size: 43339136980
dataset_size: 96068332252
- config_name: subset_12
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95728691164
num_examples: 62217
download_size: 43198747627
dataset_size: 95728691164
- config_name: subset_120
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95722632700
num_examples: 62213
download_size: 43167373358
dataset_size: 95722632700
- config_name: subset_121
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95732191100
num_examples: 62219
download_size: 43221505796
dataset_size: 95732191100
- config_name: subset_122
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95699024432
num_examples: 62198
download_size: 43219580053
dataset_size: 95699024432
- config_name: subset_123
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95713523564
num_examples: 62207
download_size: 43177149081
dataset_size: 95713523564
- config_name: subset_124
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95463256840
num_examples: 62044
download_size: 43081995426
dataset_size: 95463256840
- config_name: subset_125
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95597812312
num_examples: 62132
download_size: 43093919552
dataset_size: 95597812312
- config_name: subset_126
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95798796016
num_examples: 62262
download_size: 43254288601
dataset_size: 95798796016
- config_name: subset_127
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95674872576
num_examples: 62182
download_size: 43251503801
dataset_size: 95674872576
- config_name: subset_128
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95741950380
num_examples: 62225
download_size: 43150675085
dataset_size: 95741950380
- config_name: subset_129
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95902821264
num_examples: 62330
download_size: 43266797081
dataset_size: 95902821264
- config_name: subset_13
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95706484544
num_examples: 62202
download_size: 43194357797
dataset_size: 95706484544
- config_name: subset_130
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13845812
num_examples: 9
download_size: 6597728
dataset_size: 13845812
- config_name: subset_131
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95385820008
num_examples: 61994
download_size: 43049793791
dataset_size: 95385820008
- config_name: subset_132
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95718111696
num_examples: 62210
download_size: 43160367467
dataset_size: 95718111696
- config_name: subset_133
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95642427284
num_examples: 62161
download_size: 43145455128
dataset_size: 95642427284
- config_name: subset_134
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95783162736
num_examples: 62252
download_size: 43157288094
dataset_size: 95783162736
- config_name: subset_135
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95905643680
num_examples: 62332
download_size: 43211878248
dataset_size: 95905643680
- config_name: subset_136
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95602572980
num_examples: 62135
download_size: 43148250609
dataset_size: 95602572980
- config_name: subset_137
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95742569912
num_examples: 62226
download_size: 43196126465
dataset_size: 95742569912
- config_name: subset_138
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95784277468
num_examples: 62253
download_size: 43213036863
dataset_size: 95784277468
- config_name: subset_14
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95484035440
num_examples: 62058
download_size: 43038787620
dataset_size: 95484035440
- config_name: subset_15
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95847923004
num_examples: 62295
download_size: 43269622880
dataset_size: 95847923004
- config_name: subset_16
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143676345616
num_examples: 93380
download_size: 64763101794
dataset_size: 143676345616
- config_name: subset_17
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143218562076
num_examples: 93081
download_size: 64543519703
dataset_size: 143218562076
- config_name: subset_18
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1538508
num_examples: 1
download_size: 888657
dataset_size: 1538508
- config_name: subset_19
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143234313008
num_examples: 93092
download_size: 64590945738
dataset_size: 143234313008
- config_name: subset_2
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95692976304
num_examples: 62194
download_size: 43156432229
dataset_size: 95692976304
- config_name: subset_20
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143387469416
num_examples: 93192
download_size: 64657130955
dataset_size: 143387469416
- config_name: subset_21
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143761745188
num_examples: 93435
download_size: 64848639452
dataset_size: 143761745188
- config_name: subset_22
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143510788288
num_examples: 93272
download_size: 64664207735
dataset_size: 143510788288
- config_name: subset_23
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143745597332
num_examples: 93425
download_size: 64881327829
dataset_size: 143745597332
- config_name: subset_24
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143524148912
num_examples: 93280
download_size: 64658212505
dataset_size: 143524148912
- config_name: subset_25
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143893466228
num_examples: 93521
download_size: 64887011756
dataset_size: 143893466228
- config_name: subset_26
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143679361468
num_examples: 93381
download_size: 64845399473
dataset_size: 143679361468
- config_name: subset_27
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143578388120
num_examples: 93316
download_size: 64733082218
dataset_size: 143578388120
- config_name: subset_28
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143336154232
num_examples: 93158
download_size: 64663766459
dataset_size: 143336154232
- config_name: subset_29
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 7691452
num_examples: 5
download_size: 3459998
dataset_size: 7691452
- config_name: subset_3
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95679747492
num_examples: 62185
download_size: 43162138038
dataset_size: 95679747492
- config_name: subset_30
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143445434128
num_examples: 93230
download_size: 64632174781
dataset_size: 143445434128
- config_name: subset_31
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143557489496
num_examples: 93302
download_size: 64701593443
dataset_size: 143557489496
- config_name: subset_32
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143480746600
num_examples: 93252
download_size: 64739797925
dataset_size: 143480746600
- config_name: subset_33
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143804688340
num_examples: 93463
download_size: 64883427549
dataset_size: 143804688340
- config_name: subset_34
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143763133852
num_examples: 93435
download_size: 64878027444
dataset_size: 143763133852
- config_name: subset_35
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143471499924
num_examples: 93247
download_size: 64668279919
dataset_size: 143471499924
- config_name: subset_36
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143578658920
num_examples: 93316
download_size: 64771308961
dataset_size: 143578658920
- config_name: subset_37
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143421843384
num_examples: 93214
download_size: 64702071071
dataset_size: 143421843384
- config_name: subset_38
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143684196348
num_examples: 93385
download_size: 64784473635
dataset_size: 143684196348
- config_name: subset_39
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143680679576
num_examples: 93382
download_size: 64742283095
dataset_size: 143680679576
- config_name: subset_4
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95688480364
num_examples: 62191
download_size: 43186075154
dataset_size: 95688480364
- config_name: subset_40
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12306560
num_examples: 8
download_size: 5824713
dataset_size: 12306560
- config_name: subset_41
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143692662464
num_examples: 93390
download_size: 64709902673
dataset_size: 143692662464
- config_name: subset_42
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143203990624
num_examples: 93072
download_size: 64613043749
dataset_size: 143203990624
- config_name: subset_43
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143423685888
num_examples: 93216
download_size: 64706208398
dataset_size: 143423685888
- config_name: subset_44
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143599898184
num_examples: 93330
download_size: 64731002218
dataset_size: 143599898184
- config_name: subset_45
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143523564744
num_examples: 93280
download_size: 64697514930
dataset_size: 143523564744
- config_name: subset_46
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143657253604
num_examples: 93367
download_size: 64832235795
dataset_size: 143657253604
- config_name: subset_47
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143635083108
num_examples: 93353
download_size: 64816566584
dataset_size: 143635083108
- config_name: subset_48
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143498198048
num_examples: 93264
download_size: 64761835832
dataset_size: 143498198048
- config_name: subset_49
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95628254736
num_examples: 62152
download_size: 43114668093
dataset_size: 95628254736
- config_name: subset_5
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95761408264
num_examples: 62238
download_size: 43098449131
dataset_size: 95761408264
- config_name: subset_50
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95710114492
num_examples: 62205
download_size: 43145722727
dataset_size: 95710114492
- config_name: subset_51
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95718176136
num_examples: 62210
download_size: 43219455050
dataset_size: 95718176136
- config_name: subset_52
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95945517480
num_examples: 62358
download_size: 43266431091
dataset_size: 95945517480
- config_name: subset_53
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659331056
num_examples: 62172
download_size: 43159537104
dataset_size: 95659331056
- config_name: subset_54
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95733778168
num_examples: 62220
download_size: 43168252529
dataset_size: 95733778168
- config_name: subset_55
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95529164348
num_examples: 62087
download_size: 43137593293
dataset_size: 95529164348
- config_name: subset_56
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 12308272
num_examples: 8
download_size: 6100142
dataset_size: 12308272
- config_name: subset_57
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95725368820
num_examples: 62215
download_size: 43172851860
dataset_size: 95725368820
- config_name: subset_58
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95675366136
num_examples: 62182
download_size: 43119589688
dataset_size: 95675366136
- config_name: subset_59
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95757684168
num_examples: 62236
download_size: 43189671985
dataset_size: 95757684168
- config_name: subset_6
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95578098092
num_examples: 62119
download_size: 43139859949
dataset_size: 95578098092
- config_name: subset_60
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95700890356
num_examples: 62199
download_size: 43168843386
dataset_size: 95700890356
- config_name: subset_61
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95730743624
num_examples: 62218
download_size: 43225423665
dataset_size: 95730743624
- config_name: subset_62
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95939249240
num_examples: 62354
download_size: 43194512512
dataset_size: 95939249240
- config_name: subset_63
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95771529340
num_examples: 62245
download_size: 43196958015
dataset_size: 95771529340
- config_name: subset_64
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95848159444
num_examples: 62295
download_size: 43215788957
dataset_size: 95848159444
- config_name: subset_65
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95882062420
num_examples: 62317
download_size: 43207070631
dataset_size: 95882062420
- config_name: subset_66
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95871828072
num_examples: 62310
download_size: 43225588983
dataset_size: 95871828072
- config_name: subset_67
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95709466896
num_examples: 62204
download_size: 43120587410
dataset_size: 95709466896
- config_name: subset_68
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95761289000
num_examples: 62238
download_size: 43168961328
dataset_size: 95761289000
- config_name: subset_69
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95724204184
num_examples: 62214
download_size: 43179130241
dataset_size: 95724204184
- config_name: subset_7
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1538156
num_examples: 1
download_size: 820398
dataset_size: 1538156
- config_name: subset_70
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95609034668
num_examples: 62139
download_size: 43117847437
dataset_size: 95609034668
- config_name: subset_71
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95387596792
num_examples: 61996
download_size: 43021151166
dataset_size: 95387596792
- config_name: subset_72
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4615244
num_examples: 3
download_size: 2452533
dataset_size: 4615244
- config_name: subset_73
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95846947264
num_examples: 62294
download_size: 43209608729
dataset_size: 95846947264
- config_name: subset_74
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95818688740
num_examples: 62275
download_size: 43205934001
dataset_size: 95818688740
- config_name: subset_75
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95775232460
num_examples: 62247
download_size: 43190536605
dataset_size: 95775232460
- config_name: subset_76
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95665581880
num_examples: 62176
download_size: 43152943289
dataset_size: 95665581880
- config_name: subset_77
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95538306996
num_examples: 62093
download_size: 43186488482
dataset_size: 95538306996
- config_name: subset_78
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95653366336
num_examples: 62168
download_size: 43143452346
dataset_size: 95653366336
- config_name: subset_79
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95676339900
num_examples: 62183
download_size: 43128557833
dataset_size: 95676339900
- config_name: subset_8
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95715343992
num_examples: 62208
download_size: 43164885461
dataset_size: 95715343992
- config_name: subset_80
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95766599296
num_examples: 62242
download_size: 43239930128
dataset_size: 95766599296
- config_name: subset_81
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143321630332
num_examples: 93149
download_size: 64626209893
dataset_size: 143321630332
- config_name: subset_82
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143413608572
num_examples: 93209
download_size: 64621538517
dataset_size: 143413608572
- config_name: subset_83
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143531418364
num_examples: 93285
download_size: 64801287818
dataset_size: 143531418364
- config_name: subset_84
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143436760360
num_examples: 93224
download_size: 64683558260
dataset_size: 143436760360
- config_name: subset_85
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143693790772
num_examples: 93391
download_size: 64884349404
dataset_size: 143693790772
- config_name: subset_86
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143485525272
num_examples: 93256
download_size: 64696999488
dataset_size: 143485525272
- config_name: subset_87
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143365510504
num_examples: 93178
download_size: 64633922570
dataset_size: 143365510504
- config_name: subset_88
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143459907372
num_examples: 93239
download_size: 64646778148
dataset_size: 143459907372
- config_name: subset_89
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143713229680
num_examples: 93404
download_size: 64788831986
dataset_size: 143713229680
- config_name: subset_9
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95562629284
num_examples: 62109
download_size: 43123134026
dataset_size: 95562629284
- config_name: subset_90
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 143379593708
num_examples: 93187
download_size: 64733038170
dataset_size: 143379593708
- config_name: subset_91
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95696250128
num_examples: 62196
download_size: 43134085960
dataset_size: 95696250128
- config_name: subset_92
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95934407652
num_examples: 62351
download_size: 43224929875
dataset_size: 95934407652
- config_name: subset_93
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95753580284
num_examples: 62233
download_size: 43222236201
dataset_size: 95753580284
- config_name: subset_94
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95891874916
num_examples: 62323
download_size: 43252770071
dataset_size: 95891874916
- config_name: subset_95
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95659464728
num_examples: 62172
download_size: 43086228614
dataset_size: 95659464728
- config_name: subset_96
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95794821896
num_examples: 62260
download_size: 43179370699
dataset_size: 95794821896
- config_name: subset_97
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95622506640
num_examples: 62148
download_size: 43156846644
dataset_size: 95622506640
- config_name: subset_98
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 13845628
num_examples: 9
download_size: 6713409
dataset_size: 13845628
- config_name: subset_99
features:
- name: transcription
sequence: int64
- name: transcription/ja_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/ja_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 95636401316
num_examples: 62157
download_size: 43147436863
dataset_size: 95636401316
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_105
data_files:
- split: train
path: subset_105/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_19
data_files:
- split: train
path: subset_19/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_53
data_files:
- split: train
path: subset_53/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
EdinburghNLP/xsum | EdinburghNLP | "2023-04-05T13:45:25Z" | 9,549 | 96 | [
"task_categories:summarization",
"task_ids:news-articles-summarization",
"annotations_creators:found",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"arxiv:1808.08745",
"region:us"
] | [
"summarization"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
pretty_name: Extreme Summarization (XSum)
paperswithcode_id: xsum
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
train-eval-index:
- config: default
task: summarization
task_id: summarization
splits:
train_split: train
eval_split: test
col_mapping:
document: text
summary: target
metrics:
- type: rouge
name: Rouge
dataset_info:
features:
- name: document
dtype: string
- name: summary
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 479206608
num_examples: 204045
- name: validation
num_bytes: 26292901
num_examples: 11332
- name: test
num_bytes: 26756165
num_examples: 11334
download_size: 257302866
dataset_size: 532255674
---
# Dataset Card for "xsum"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:**
- **Repository:** https://github.com/EdinburghNLP/XSum
- **Paper:** [Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization](https://arxiv.org/abs/1808.08745)
- **Point of Contact:** [Shashi Narayan](mailto:[email protected])
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
### Dataset Summary
Extreme Summarization (XSum) Dataset.
There are three features:
- document: the input news article.
- summary: a one-sentence summary of the article.
- id: the BBC ID of the article.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### default
- **Size of downloaded dataset files:** 257.30 MB
- **Size of the generated dataset:** 532.26 MB
- **Total amount of disk used:** 789.56 MB
An example of 'validation' looks as follows.
```
{
"document": "some-body",
"id": "29750031",
"summary": "some-sentence"
}
```
### Data Fields
The data fields are the same among all splits.
#### default
- `document`: a `string` feature.
- `summary`: a `string` feature.
- `id`: a `string` feature.
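Concretely, a record is a plain mapping of these three string fields. A minimal sketch, using the placeholder values from the `validation` example above (not real article text):

```python
# A record follows the schema above; the values are the placeholders
# from the card's 'validation' example, not real article text.
record = {
    "document": "some-body",
    "summary": "some-sentence",
    "id": "29750031",
}

# All three fields are plain strings.
assert all(isinstance(v, str) for v in record.values())
print(sorted(record))  # ['document', 'id', 'summary']
```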
### Data Splits
| name |train |validation|test |
|-------|-----:|---------:|----:|
|default|204045| 11332|11334|
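The split sizes above can be cross-checked with a quick sum; the counts below are copied from the table:

```python
# Example counts per split, copied from the table above.
splits = {"train": 204045, "validation": 11332, "test": 11334}

total = sum(splits.values())
print(total)  # 226711 examples overall
```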
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Narayan2018DontGM,
title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization},
author={Shashi Narayan and Shay B. Cohen and Mirella Lapata},
journal={ArXiv},
year={2018},
volume={abs/1808.08745}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@jbragg](https://github.com/jbragg), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset. |
mshah1/speech_robust_bench | mshah1 | "2024-11-23T05:03:22Z" | 9,504 | 3 | [
"size_categories:1M<n<10M",
"modality:audio",
"modality:text",
"region:us"
] | null | "2024-01-21T01:39:08Z" | ---
dataset_info:
- config_name: accented_cv
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: age
dtype: string
- name: gender
dtype: string
- name: accents
dtype: string
- name: locale
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 55407854.085
num_examples: 1355
- name: test.clean
num_bytes: 25593824.0
num_examples: 640
download_size: 78598662
dataset_size: 81001678.08500001
- config_name: accented_cv_es
features:
- name: audio
dtype: audio
- name: accent
dtype: string
- name: text
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: locale
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 65868440.963
num_examples: 1483
download_size: 60557913
dataset_size: 65868440.963
- config_name: accented_cv_fr
features:
- name: file_name
dtype: string
- name: accent
dtype: string
- name: text
dtype: string
- name: gender
dtype: string
- name: age
dtype: string
- name: locale
dtype: string
- name: id
dtype: int64
splits:
- name: test
num_bytes: 337528
num_examples: 2171
download_size: 148493
dataset_size: 337528
- config_name: chime
features:
- name: audio
dtype: audio
- name: end_time
dtype: string
- name: start_time
dtype: string
- name: speaker
dtype: string
- name: ref
dtype: string
- name: location
dtype: string
- name: session_id
dtype: string
- name: text
dtype: string
splits:
- name: farfield
num_bytes: 521160936.31
num_examples: 6535
- name: nearfield
num_bytes: 1072274621.0799999
num_examples: 6535
download_size: 1532887016
dataset_size: 1593435557.3899999
- config_name: in-the-wild
features:
- name: audio
dtype: audio
- name: end_time
dtype: string
- name: start_time
dtype: string
- name: speaker
dtype: string
- name: ref
dtype: string
- name: location
dtype: string
- name: session_id
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: farfield
num_bytes: 521363521.31
num_examples: 6535
- name: nearfield
num_bytes: 1072477206.0799999
num_examples: 6535
download_size: 1533124839
dataset_size: 1593840727.3899999
- config_name: in-the-wild-AMI
features:
- name: meeting_id
dtype: string
- name: id
dtype: string
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: begin_time
dtype: float32
- name: end_time
dtype: float32
- name: microphone_id
dtype: string
- name: speaker_id
dtype: string
splits:
- name: nearfield
num_bytes: 1382749390.9785259
num_examples: 6584
- name: farfield
num_bytes: 1040706691.1008185
num_examples: 6584
download_size: 2164898498
dataset_size: 2423456082.0793443
- config_name: in-the-wild-ami
features:
- name: meeting_id
dtype: string
- name: audio_id
dtype: string
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: begin_time
dtype: float32
- name: end_time
dtype: float32
- name: microphone_id
dtype: string
- name: speaker_id
dtype: string
splits:
- name: nearfield
num_bytes: 1382749390.9785259
num_examples: 6584
- name: farfield
num_bytes: 1040706691.1008185
num_examples: 6584
download_size: 2164900274
dataset_size: 2423456082.0793443
- config_name: librispeech_asr-test.clean
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: speedup.1
num_bytes: 498896619.34
num_examples: 2620
- name: speedup.2
num_bytes: 415901075.34
num_examples: 2620
- name: speedup.3
num_bytes: 356617835.34
num_examples: 2620
- name: speedup.4
num_bytes: 312152811.34
num_examples: 2620
- name: slowdown.1
num_bytes: 712320343.34
num_examples: 2620
- name: slowdown.2
num_bytes: 830887339.34
num_examples: 2620
- name: slowdown.3
num_bytes: 996880127.34
num_examples: 2620
- name: slowdown.4
num_bytes: 1245871847.34
num_examples: 2620
- name: pitch_up.3
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_up.4
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.1
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.2
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.3
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_down.4
num_bytes: 623392467.34
num_examples: 2620
- name: pitch_up.1
num_bytes: 623392458.5
num_examples: 2620
- name: pitch_up.2
num_bytes: 623392458.5
num_examples: 2620
- name: resample.1
num_bytes: 623392535.34
num_examples: 2620
- name: resample.2
num_bytes: 623392535.34
num_examples: 2620
- name: resample.3
num_bytes: 623392579.34
num_examples: 2620
- name: resample.4
num_bytes: 623392623.34
num_examples: 2620
- name: voice_conversion.4
num_bytes: 799852214.5
num_examples: 2620
- name: voice_conversion.3
num_bytes: 580185782.5
num_examples: 2620
- name: voice_conversion.1
num_bytes: 589259446.5
num_examples: 2620
- name: voice_conversion.2
num_bytes: 571175606.5
num_examples: 2620
- name: gain.1
num_bytes: 623392467.34
num_examples: 2620
- name: gain.2
num_bytes: 623392467.34
num_examples: 2620
- name: gain.3
num_bytes: 623392467.34
num_examples: 2620
- name: echo.1
num_bytes: 633872467.34
num_examples: 2620
- name: echo.2
num_bytes: 644352467.34
num_examples: 2620
- name: echo.3
num_bytes: 665312467.34
num_examples: 2620
- name: echo.4
num_bytes: 707232467.34
num_examples: 2620
- name: phaser.1
num_bytes: 623392467.34
num_examples: 2620
- name: phaser.2
num_bytes: 623392467.34
num_examples: 2620
- name: phaser.3
num_bytes: 623392467.34
num_examples: 2620
- name: tempo_up.1
num_bytes: 498896595.34
num_examples: 2620
- name: tempo_up.2
num_bytes: 415899351.34
num_examples: 2620
- name: tempo_up.3
num_bytes: 356615595.34
num_examples: 2620
- name: tempo_up.4
num_bytes: 312152811.34
num_examples: 2620
- name: tempo_down.1
num_bytes: 712318083.34
num_examples: 2620
- name: tempo_down.2
num_bytes: 830885583.34
num_examples: 2620
- name: tempo_down.3
num_bytes: 996880103.34
num_examples: 2620
- name: tempo_down.4
num_bytes: 1245871847.34
num_examples: 2620
- name: gain.4
num_bytes: 623392467.34
num_examples: 2620
- name: phaser.4
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.1
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.2
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.3
num_bytes: 623392467.34
num_examples: 2620
- name: lowpass.4
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.1
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.2
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.3
num_bytes: 623392467.34
num_examples: 2620
- name: highpass.4
num_bytes: 623392467.34
num_examples: 2620
- name: voice_conversion_vctk.1
num_bytes: 495165825.88
num_examples: 2620
- name: universal_adv.1
num_bytes: 623392467.34
num_examples: 2620
- name: rir.1
num_bytes: 705636818.5
num_examples: 2620
- name: rir.2
num_bytes: 744484818.5
num_examples: 2620
- name: rir.3
num_bytes: 758740818.5
num_examples: 2620
- name: rir.4
num_bytes: 776116818.5
num_examples: 2620
- name: gnoise.1
num_bytes: 623392455.88
num_examples: 2620
- name: gnoise.2
num_bytes: 623392455.88
num_examples: 2620
- name: gnoise.3
num_bytes: 623392455.88
num_examples: 2620
- name: gnoise.4
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_esc50.4
num_bytes: 623392455.88
num_examples: 2620
- name: music.1
num_bytes: 623392455.88
num_examples: 2620
- name: music.2
num_bytes: 623392455.88
num_examples: 2620
- name: music.3
num_bytes: 623392455.88
num_examples: 2620
- name: music.4
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.1
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.2
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.3
num_bytes: 623392455.88
num_examples: 2620
- name: crosstalk.4
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_musan.4
num_bytes: 623392455.88
num_examples: 2620
- name: real_rir.1
num_bytes: 638169615.88
num_examples: 2620
- name: real_rir.2
num_bytes: 694281819.88
num_examples: 2620
- name: real_rir.3
num_bytes: 713200537.88
num_examples: 2620
- name: real_rir.4
num_bytes: 1515177725.88
num_examples: 2620
- name: env_noise.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise.4
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.1
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.2
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.3
num_bytes: 623392455.88
num_examples: 2620
- name: env_noise_wham.4
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.1
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.2
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.3
num_bytes: 623392455.88
num_examples: 2620
- name: tremolo.4
num_bytes: 623392455.88
num_examples: 2620
- name: treble.1
num_bytes: 623392455.88
num_examples: 2620
- name: treble.2
num_bytes: 623392455.88
num_examples: 2620
- name: treble.3
num_bytes: 623392455.88
num_examples: 2620
- name: treble.4
num_bytes: 623392455.88
num_examples: 2620
- name: bass.1
num_bytes: 623392455.88
num_examples: 2620
- name: bass.2
num_bytes: 623392455.88
num_examples: 2620
- name: bass.3
num_bytes: 623392455.88
num_examples: 2620
- name: bass.4
num_bytes: 623392455.88
num_examples: 2620
- name: chorus.1
num_bytes: 626913735.88
num_examples: 2620
- name: chorus.2
num_bytes: 628590535.88
num_examples: 2620
- name: chorus.3
num_bytes: 630267335.88
num_examples: 2620
- name: chorus.4
num_bytes: 631944135.88
num_examples: 2620
- name: None.0
num_bytes: 367982506.42
num_examples: 2620
download_size: 67547733720
dataset_size: 68871044112.51988
- config_name: librispeech_asr-test.clean_pertEval_500_30
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: pert_idx
dtype: int64
splits:
- name: gnoise.1
num_bytes: 3592401090.0
num_examples: 15000
- name: env_noise_esc50.1
num_bytes: 3592401090.0
num_examples: 15000
download_size: 7170899040
dataset_size: 7184802180.0
- config_name: multilingual_librispeech-french_test
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: gnoise.1
num_bytes: 1160858614.324
num_examples: 2426
- name: gnoise.2
num_bytes: 1160858614.324
num_examples: 2426
- name: gnoise.3
num_bytes: 1160858614.324
num_examples: 2426
- name: speedup.1
num_bytes: 928910526.324
num_examples: 2426
- name: speedup.3
num_bytes: 663829084.324
num_examples: 2426
- name: pitch_up.1
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_up.2
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_up.3
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.1
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise.1
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise.3
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.1
num_bytes: 1160858614.324
num_examples: 2426
- name: slowdown.2
num_bytes: 1547440398.324
num_examples: 2426
- name: real_rir.3
num_bytes: 1241772582.324
num_examples: 2426
- name: env_noise.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.2
num_bytes: 1160858614.324
num_examples: 2426
- name: speedup.2
num_bytes: 774280064.324
num_examples: 2426
- name: slowdown.1
num_bytes: 1326537936.324
num_examples: 2426
- name: slowdown.3
num_bytes: 1856702974.324
num_examples: 2426
- name: env_noise_esc50.1
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_esc50.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_esc50.3
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.1
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.2
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.3
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.3
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.3
num_bytes: 1160858614.324
num_examples: 2426
- name: rir.1
num_bytes: 1235965442.324
num_examples: 2426
- name: rir.2
num_bytes: 1273085442.324
num_examples: 2426
- name: rir.3
num_bytes: 1284653442.324
num_examples: 2426
- name: real_rir.1
num_bytes: 1174422106.324
num_examples: 2426
- name: real_rir.2
num_bytes: 1226129514.324
num_examples: 2426
- name: resample.1
num_bytes: 1160858656.324
num_examples: 2426
- name: resample.2
num_bytes: 1160858642.324
num_examples: 2426
- name: resample.3
num_bytes: 1160858694.324
num_examples: 2426
- name: gain.1
num_bytes: 1160858614.324
num_examples: 2426
- name: gain.2
num_bytes: 1160858614.324
num_examples: 2426
- name: gain.3
num_bytes: 1160858614.324
num_examples: 2426
- name: echo.1
num_bytes: 1170562614.324
num_examples: 2426
- name: echo.2
num_bytes: 1180266614.324
num_examples: 2426
- name: echo.3
num_bytes: 1199674614.324
num_examples: 2426
- name: phaser.1
num_bytes: 1160858614.324
num_examples: 2426
- name: phaser.2
num_bytes: 1160858614.324
num_examples: 2426
- name: phaser.3
num_bytes: 1160858614.324
num_examples: 2426
- name: tempo_up.1
num_bytes: 928910510.324
num_examples: 2426
- name: tempo_up.2
num_bytes: 774278396.324
num_examples: 2426
- name: tempo_up.3
num_bytes: 663826914.324
num_examples: 2426
- name: tempo_down.1
num_bytes: 1326535834.324
num_examples: 2426
- name: tempo_down.2
num_bytes: 1547438832.324
num_examples: 2426
- name: tempo_down.3
num_bytes: 1856702944.324
num_examples: 2426
- name: lowpass.1
num_bytes: 1160858614.324
num_examples: 2426
- name: lowpass.2
num_bytes: 1160858614.324
num_examples: 2426
- name: lowpass.3
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.1
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.2
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.3
num_bytes: 1160858614.324
num_examples: 2426
- name: music.1
num_bytes: 1160858614.324
num_examples: 2426
- name: music.2
num_bytes: 1160858614.324
num_examples: 2426
- name: music.3
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.1
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.2
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.3
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.1
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.2
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.3
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.1
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.2
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.3
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.1
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.2
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.3
num_bytes: 1160858614.324
num_examples: 2426
- name: chorus.1
num_bytes: 1164119158.324
num_examples: 2426
- name: chorus.2
num_bytes: 1165671798.324
num_examples: 2426
- name: chorus.3
num_bytes: 1167224438.324
num_examples: 2426
- name: gnoise.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_esc50.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_musan.4
num_bytes: 1160858614.324
num_examples: 2426
- name: env_noise_wham.4
num_bytes: 1160858614.324
num_examples: 2426
- name: speedup.4
num_bytes: 580988352.324
num_examples: 2426
- name: slowdown.4
num_bytes: 2320599166.324
num_examples: 2426
- name: pitch_up.4
num_bytes: 1160858614.324
num_examples: 2426
- name: pitch_down.4
num_bytes: 1160858614.324
num_examples: 2426
- name: rir.4
num_bytes: 1302669442.324
num_examples: 2426
- name: real_rir.4
num_bytes: 2020765820.324
num_examples: 2426
- name: resample.4
num_bytes: 1160858814.324
num_examples: 2426
- name: gain.4
num_bytes: 1160858614.324
num_examples: 2426
- name: echo.4
num_bytes: 1238490614.324
num_examples: 2426
- name: phaser.4
num_bytes: 1160858614.324
num_examples: 2426
- name: tempo_up.4
num_bytes: 580988352.324
num_examples: 2426
- name: tempo_down.4
num_bytes: 2320599166.324
num_examples: 2426
- name: lowpass.4
num_bytes: 1160858614.324
num_examples: 2426
- name: highpass.4
num_bytes: 1160858614.324
num_examples: 2426
- name: music.4
num_bytes: 1160858614.324
num_examples: 2426
- name: crosstalk.4
num_bytes: 1160858614.324
num_examples: 2426
- name: tremolo.4
num_bytes: 1160858614.324
num_examples: 2426
- name: treble.4
num_bytes: 1160858614.324
num_examples: 2426
- name: bass.4
num_bytes: 1160858614.324
num_examples: 2426
- name: chorus.4
num_bytes: 1168777078.324
num_examples: 2426
download_size: 121459263523
dataset_size: 119151206300.40016
- config_name: multilingual_librispeech-german_test
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: original_path
dtype: string
- name: begin_time
dtype: float64
- name: end_time
dtype: float64
- name: audio_duration
dtype: float64
- name: speaker_id
dtype: string
- name: chapter_id
dtype: string
- name: file
dtype: string
- name: id
dtype: string
- name: text
dtype: string
splits:
- name: gnoise.1
num_bytes: 1648113341.356
num_examples: 3394
- name: gnoise.2
num_bytes: 1648113341.356
num_examples: 3394
- name: gnoise.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.3
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.1
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.2
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.3
num_bytes: 1648113341.356
num_examples: 3394
- name: speedup.1
num_bytes: 1318802109.356
num_examples: 3394
- name: speedup.2
num_bytes: 1099263673.356
num_examples: 3394
- name: speedup.3
num_bytes: 942449495.356
num_examples: 3394
- name: slowdown.1
num_bytes: 1883338719.356
num_examples: 3394
- name: slowdown.2
num_bytes: 2196967643.356
num_examples: 3394
- name: slowdown.3
num_bytes: 2636047081.356
num_examples: 3394
- name: pitch_up.1
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_up.2
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_up.3
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.1
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.2
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.3
num_bytes: 1648113341.356
num_examples: 3394
- name: rir.1
num_bytes: 1755612473.356
num_examples: 3394
- name: rir.2
num_bytes: 1806508473.356
num_examples: 3394
- name: rir.3
num_bytes: 1821740473.356
num_examples: 3394
- name: real_rir.1
num_bytes: 1666887689.356
num_examples: 3394
- name: real_rir.2
num_bytes: 1738836201.356
num_examples: 3394
- name: real_rir.3
num_bytes: 1764380853.356
num_examples: 3394
- name: resample.1
num_bytes: 1648113369.356
num_examples: 3394
- name: resample.2
num_bytes: 1648113363.356
num_examples: 3394
- name: resample.3
num_bytes: 1648113411.356
num_examples: 3394
- name: gain.1
num_bytes: 1648113341.356
num_examples: 3394
- name: gain.2
num_bytes: 1648113341.356
num_examples: 3394
- name: gain.3
num_bytes: 1648113341.356
num_examples: 3394
- name: echo.1
num_bytes: 1661689341.356
num_examples: 3394
- name: echo.2
num_bytes: 1675265341.356
num_examples: 3394
- name: echo.3
num_bytes: 1702417341.356
num_examples: 3394
- name: phaser.1
num_bytes: 1648113341.356
num_examples: 3394
- name: phaser.2
num_bytes: 1648113341.356
num_examples: 3394
- name: phaser.3
num_bytes: 1648113341.356
num_examples: 3394
- name: tempo_up.1
num_bytes: 1318802103.356
num_examples: 3394
- name: tempo_up.2
num_bytes: 1099261101.356
num_examples: 3394
- name: tempo_up.3
num_bytes: 942446355.356
num_examples: 3394
- name: tempo_down.1
num_bytes: 1883335523.356
num_examples: 3394
- name: tempo_down.2
num_bytes: 2196965581.356
num_examples: 3394
- name: tempo_down.3
num_bytes: 2636047065.356
num_examples: 3394
- name: lowpass.1
num_bytes: 1648113341.356
num_examples: 3394
- name: lowpass.2
num_bytes: 1648113341.356
num_examples: 3394
- name: lowpass.3
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.1
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.2
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.3
num_bytes: 1648113341.356
num_examples: 3394
- name: music.1
num_bytes: 1648113341.356
num_examples: 3394
- name: music.2
num_bytes: 1648113341.356
num_examples: 3394
- name: music.3
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.1
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.2
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.3
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.1
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.2
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.3
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.1
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.2
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.3
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.1
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.2
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.3
num_bytes: 1648113341.356
num_examples: 3394
- name: chorus.1
num_bytes: 1652674877.356
num_examples: 3394
- name: chorus.2
num_bytes: 1654847037.356
num_examples: 3394
- name: chorus.3
num_bytes: 1657019197.356
num_examples: 3394
- name: gnoise.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_esc50.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_musan.4
num_bytes: 1648113341.356
num_examples: 3394
- name: env_noise_wham.4
num_bytes: 1648113341.356
num_examples: 3394
- name: speedup.4
num_bytes: 824835247.356
num_examples: 3394
- name: slowdown.4
num_bytes: 3294669551.356
num_examples: 3394
- name: pitch_up.4
num_bytes: 1648113341.356
num_examples: 3394
- name: pitch_down.4
num_bytes: 1648113341.356
num_examples: 3394
- name: rir.4
num_bytes: 1846956473.356
num_examples: 3394
- name: real_rir.4
num_bytes: 2846504095.356
num_examples: 3394
- name: resample.4
num_bytes: 1648113451.356
num_examples: 3394
- name: gain.4
num_bytes: 1648113341.356
num_examples: 3394
- name: echo.4
num_bytes: 1756721341.356
num_examples: 3394
- name: phaser.4
num_bytes: 1648113341.356
num_examples: 3394
- name: tempo_up.4
num_bytes: 824835247.356
num_examples: 3394
- name: tempo_down.4
num_bytes: 3294669551.356
num_examples: 3394
- name: lowpass.4
num_bytes: 1648113341.356
num_examples: 3394
- name: highpass.4
num_bytes: 1648113341.356
num_examples: 3394
- name: music.4
num_bytes: 1648113341.356
num_examples: 3394
- name: crosstalk.4
num_bytes: 1648113341.356
num_examples: 3394
- name: tremolo.4
num_bytes: 1648113341.356
num_examples: 3394
- name: treble.4
num_bytes: 1648113341.356
num_examples: 3394
- name: bass.4
num_bytes: 1648113341.356
num_examples: 3394
- name: chorus.4
num_bytes: 1659191357.356
num_examples: 3394
download_size: 163104340817
dataset_size: 169131696059.59995
- config_name: multilingual_librispeech-spanish_test
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
splits:
- name: None.0
num_bytes: 596762288.01
num_examples: 2385
- name: env_noise.1
num_bytes: 1153485830.17
num_examples: 2385
- name: env_noise.2
num_bytes: 1153485830.17
num_examples: 2385
- name: env_noise.3
num_bytes: 1153485830.17
num_examples: 2385
- name: env_noise.4
num_bytes: 1153485830.17
num_examples: 2385
- name: rir.1
num_bytes: 1268493860.17
num_examples: 2385
- name: rir.2
num_bytes: 1252109860.17
num_examples: 2385
- name: rir.3
num_bytes: 1249517860.17
num_examples: 2385
- name: rir.4
num_bytes: 1222893860.17
num_examples: 2385
- name: speedup.1
num_bytes: 923001764.17
num_examples: 2385
- name: speedup.2
num_bytes: 769347364.17
num_examples: 2385
- name: speedup.3
num_bytes: 659593516.17
num_examples: 2385
- name: speedup.4
num_bytes: 577275652.17
num_examples: 2385
- name: slowdown.1
num_bytes: 1318119422.17
num_examples: 2385
- name: slowdown.2
num_bytes: 1537627530.17
num_examples: 2385
- name: slowdown.3
num_bytes: 1844938056.17
num_examples: 2385
- name: slowdown.4
num_bytes: 2305906194.17
num_examples: 2385
- name: pitch_up.3
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_up.4
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.1
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.2
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.3
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_down.4
num_bytes: 1153485830.17
num_examples: 2385
- name: pitch_up.1
num_bytes: 1153485821.72
num_examples: 2385
- name: pitch_up.2
num_bytes: 1153485821.72
num_examples: 2385
- name: resample.2
num_bytes: 1153485842.17
num_examples: 2385
- name: gain.1
num_bytes: 1153485830.17
num_examples: 2385
- name: gain.2
num_bytes: 1153485830.17
num_examples: 2385
- name: gain.3
num_bytes: 1153485830.17
num_examples: 2385
- name: gain.4
num_bytes: 1153485830.17
num_examples: 2385
- name: echo.1
num_bytes: 1163025830.17
num_examples: 2385
- name: echo.2
num_bytes: 1172565830.17
num_examples: 2385
- name: echo.3
num_bytes: 1191645830.17
num_examples: 2385
- name: echo.4
num_bytes: 1229805830.17
num_examples: 2385
- name: tempo_up.1
num_bytes: 923001758.17
num_examples: 2385
- name: tempo_up.2
num_bytes: 769345632.17
num_examples: 2385
- name: tempo_up.3
num_bytes: 659591372.17
num_examples: 2385
- name: tempo_up.4
num_bytes: 577275652.17
num_examples: 2385
- name: tempo_down.1
num_bytes: 1318117252.17
num_examples: 2385
- name: tempo_down.2
num_bytes: 1537626028.17
num_examples: 2385
- name: tempo_down.3
num_bytes: 1844938048.17
num_examples: 2385
- name: tempo_down.4
num_bytes: 2305906194.17
num_examples: 2385
- name: phaser.1
num_bytes: 1153485830.17
num_examples: 2385
- name: phaser.2
num_bytes: 1153485830.17
num_examples: 2385
- name: phaser.3
num_bytes: 1153485830.17
num_examples: 2385
- name: phaser.4
num_bytes: 1153485830.17
num_examples: 2385
- name: resample.1
num_bytes: 1153485840.17
num_examples: 2385
- name: resample.3
num_bytes: 1153485850.17
num_examples: 2385
- name: resample.4
num_bytes: 1153485882.17
num_examples: 2385
- name: lowpass.1
num_bytes: 1153485830.17
num_examples: 2385
- name: lowpass.2
num_bytes: 1153485830.17
num_examples: 2385
- name: lowpass.3
num_bytes: 1153485830.17
num_examples: 2385
- name: lowpass.4
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.1
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.2
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.3
num_bytes: 1153485830.17
num_examples: 2385
- name: highpass.4
num_bytes: 1153485830.17
num_examples: 2385
- name: gnoise.1
num_bytes: 1153485822.49
num_examples: 2385
- name: gnoise.2
num_bytes: 1153485822.49
num_examples: 2385
- name: gnoise.3
num_bytes: 1153485822.49
num_examples: 2385
- name: gnoise.4
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.1
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.2
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.3
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_esc50.4
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.1
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.2
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.3
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_musan.4
num_bytes: 1153485822.49
num_examples: 2385
- name: music.1
num_bytes: 1153485822.49
num_examples: 2385
- name: music.2
num_bytes: 1153485822.49
num_examples: 2385
- name: music.3
num_bytes: 1153485822.49
num_examples: 2385
- name: music.4
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.1
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.2
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.3
num_bytes: 1153485822.49
num_examples: 2385
- name: crosstalk.4
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.1
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.2
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.3
num_bytes: 1153485822.49
num_examples: 2385
- name: env_noise_wham.4
num_bytes: 1153485822.49
num_examples: 2385
- name: tremolo.1
num_bytes: 1153485822.49
num_examples: 2385
- name: tremolo.2
num_bytes: 1153485822.49
num_examples: 2385
- name: tremolo.4
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.1
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.2
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.3
num_bytes: 1153485822.49
num_examples: 2385
- name: treble.4
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.1
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.2
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.3
num_bytes: 1153485822.49
num_examples: 2385
- name: bass.4
num_bytes: 1153485822.49
num_examples: 2385
- name: chorus.1
num_bytes: 1156691262.49
num_examples: 2385
- name: chorus.2
num_bytes: 1158217662.49
num_examples: 2385
- name: chorus.3
num_bytes: 1159744062.49
num_examples: 2385
- name: chorus.4
num_bytes: 1161270462.49
num_examples: 2385
- name: tremolo.3
num_bytes: 1153485822.49
num_examples: 2385
- name: voice_conversion_bark.1
num_bytes: 1457427139.875
num_examples: 2385
download_size: 119056891470
dataset_size: 114748819328.10516
- config_name: multilingual_librispeech-spanish_test_pertEval_500_30
features:
- name: file
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: int64
- name: chapter_id
dtype: int64
- name: id
dtype: string
- name: pert_idx
dtype: int64
splits:
- name: gnoise.1
num_bytes: 7341021960.0
num_examples: 15000
- name: env_noise_esc50.1
num_bytes: 7341021960.0
num_examples: 15000
download_size: 14645523867
dataset_size: 14682043920.0
- config_name: tedlium-release3_test
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
- name: speaker_id
dtype: string
- name: gender
dtype:
class_label:
names:
'0': unknown
'1': female
'2': male
- name: file
dtype: string
- name: id
dtype: string
splits:
- name: None.0
num_bytes: 277376247.9680054
num_examples: 1155
- name: speedup.1
num_bytes: 221990159.49965963
num_examples: 1155
- name: speedup.2
num_bytes: 185066240.47311097
num_examples: 1155
- name: speedup.3
num_bytes: 158691929.4792376
num_examples: 1155
- name: slowdown.1
num_bytes: 316938966.95371
num_examples: 1155
- name: slowdown.2
num_bytes: 369687787.0762423
num_examples: 1155
- name: slowdown.3
num_bytes: 443535996.23893803
num_examples: 1155
- name: pitch_up.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_up.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_up.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: rir.1
num_bytes: 313788218.1586113
num_examples: 1155
- name: rir.2
num_bytes: 330268000.32334924
num_examples: 1155
- name: rir.3
num_bytes: 336608313.46153843
num_examples: 1155
- name: voice_conversion_vctk.1
num_bytes: 216990920.87134105
num_examples: 1155
- name: resample.1
num_bytes: 277376301.4329476
num_examples: 1155
- name: resample.2
num_bytes: 277376301.4329476
num_examples: 1155
- name: resample.3
num_bytes: 277376354.89788973
num_examples: 1155
- name: gain.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: gain.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: gain.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: echo.1
num_bytes: 281996247.9680054
num_examples: 1155
- name: echo.2
num_bytes: 286616247.9680054
num_examples: 1155
- name: echo.3
num_bytes: 295856247.9680054
num_examples: 1155
- name: phaser.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: phaser.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: phaser.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: tempo_up.1
num_bytes: 221989786.81756297
num_examples: 1155
- name: tempo_up.2
num_bytes: 185065496.68141592
num_examples: 1155
- name: tempo_up.3
num_bytes: 158690987.55275697
num_examples: 1155
- name: tempo_down.1
num_bytes: 316938020.3097345
num_examples: 1155
- name: tempo_down.2
num_bytes: 369686999.254595
num_examples: 1155
- name: tempo_down.3
num_bytes: 443535631.41933286
num_examples: 1155
- name: lowpass.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: lowpass.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: lowpass.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: speedup.4
num_bytes: 138910125.75561607
num_examples: 1155
- name: slowdown.4
num_bytes: 554308545.8577263
num_examples: 1155
- name: pitch_up.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: pitch_down.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: rir.4
num_bytes: 345514943.8223281
num_examples: 1155
- name: resample.4
num_bytes: 277376474.4077604
num_examples: 1155
- name: gain.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: echo.4
num_bytes: 314336247.9680054
num_examples: 1155
- name: phaser.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: tempo_up.4
num_bytes: 138910125.75561607
num_examples: 1155
- name: tempo_down.4
num_bytes: 554308545.8577263
num_examples: 1155
- name: lowpass.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: highpass.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: music.1
num_bytes: 301958728.16
num_examples: 1155
- name: music.2
num_bytes: 301958728.16
num_examples: 1155
- name: music.3
num_bytes: 301958728.16
num_examples: 1155
- name: music.4
num_bytes: 301958728.16
num_examples: 1155
- name: crosstalk.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_esc50.1
num_bytes: 277376247.9680054
num_examples: 1155
- name: env_noise_esc50.2
num_bytes: 277376247.9680054
num_examples: 1155
- name: env_noise_esc50.3
num_bytes: 277376247.9680054
num_examples: 1155
- name: gnoise.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: crosstalk.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_esc50.4
num_bytes: 277376247.9680054
num_examples: 1155
- name: crosstalk.3
num_bytes: 301958728.16
num_examples: 1155
- name: crosstalk.4
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.3
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_musan.4
num_bytes: 301958728.16
num_examples: 1155
- name: real_rir.1
num_bytes: 308750878.16
num_examples: 1155
- name: real_rir.2
num_bytes: 333286988.16
num_examples: 1155
- name: real_rir.3
num_bytes: 341205738.16
num_examples: 1155
- name: real_rir.4
num_bytes: 715155314.16
num_examples: 1155
- name: env_noise.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise.3
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise.4
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.1
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.2
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.3
num_bytes: 301958728.16
num_examples: 1155
- name: env_noise_wham.4
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.1
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.2
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.3
num_bytes: 301958728.16
num_examples: 1155
- name: tremolo.4
num_bytes: 301958728.16
num_examples: 1155
- name: treble.1
num_bytes: 301958728.16
num_examples: 1155
- name: treble.2
num_bytes: 301958728.16
num_examples: 1155
- name: treble.3
num_bytes: 301958728.16
num_examples: 1155
- name: treble.4
num_bytes: 301958728.16
num_examples: 1155
- name: bass.1
num_bytes: 301958728.16
num_examples: 1155
- name: bass.2
num_bytes: 301958728.16
num_examples: 1155
- name: bass.3
num_bytes: 301958728.16
num_examples: 1155
- name: bass.4
num_bytes: 301958728.16
num_examples: 1155
- name: chorus.1
num_bytes: 303511048.16
num_examples: 1155
- name: chorus.2
num_bytes: 304250248.16
num_examples: 1155
- name: chorus.4
num_bytes: 305728648.16
num_examples: 1155
- name: chorus.3
num_bytes: 304989448.16
num_examples: 1155
download_size: 58723208514
dataset_size: 30342709961.007984
configs:
- config_name: accented_cv
data_files:
- split: test
path: accented_cv/test-*
- split: test.clean
path: accented_cv/test.clean-*
- config_name: accented_cv_es
data_files:
- split: test
path: accented_cv_es/test-*
- config_name: accented_cv_fr
data_files:
- split: test
path: accented_cv_fr/test-*
- config_name: chime
data_files:
- split: farfield
path: chime/farfield-*
- split: nearfield
path: chime/nearfield-*
- config_name: in-the-wild
data_files:
- split: farfield
path: in-the-wild/farfield-*
- split: nearfield
path: in-the-wild/nearfield-*
- config_name: in-the-wild-AMI
data_files:
- split: nearfield
path: in-the-wild-AMI/nearfield-*
- split: farfield
path: in-the-wild-AMI/farfield-*
- config_name: in-the-wild-ami
data_files:
- split: nearfield
path: in-the-wild-ami/nearfield-*
- split: farfield
path: in-the-wild-ami/farfield-*
- config_name: librispeech_asr-test.clean
data_files:
- split: None.0
path: librispeech_asr-test.clean/None.0-*
- split: gnoise.1
path: librispeech_asr-test.clean/gnoise.1-*
- split: gnoise.2
path: librispeech_asr-test.clean/gnoise.2-*
- split: gnoise.3
path: librispeech_asr-test.clean/gnoise.3-*
- split: gnoise.4
path: librispeech_asr-test.clean/gnoise.4-*
- split: env_noise.1
path: librispeech_asr-test.clean/env_noise.1-*
- split: env_noise.2
path: librispeech_asr-test.clean/env_noise.2-*
- split: env_noise.3
path: librispeech_asr-test.clean/env_noise.3-*
- split: env_noise.4
path: librispeech_asr-test.clean/env_noise.4-*
- split: rir.1
path: librispeech_asr-test.clean/rir.1-*
- split: rir.2
path: librispeech_asr-test.clean/rir.2-*
- split: rir.3
path: librispeech_asr-test.clean/rir.3-*
- split: rir.4
path: librispeech_asr-test.clean/rir.4-*
- split: speedup.1
path: librispeech_asr-test.clean/speedup.1-*
- split: speedup.2
path: librispeech_asr-test.clean/speedup.2-*
- split: speedup.3
path: librispeech_asr-test.clean/speedup.3-*
- split: speedup.4
path: librispeech_asr-test.clean/speedup.4-*
- split: slowdown.1
path: librispeech_asr-test.clean/slowdown.1-*
- split: slowdown.2
path: librispeech_asr-test.clean/slowdown.2-*
- split: slowdown.3
path: librispeech_asr-test.clean/slowdown.3-*
- split: slowdown.4
path: librispeech_asr-test.clean/slowdown.4-*
- split: pitch_up.3
path: librispeech_asr-test.clean/pitch_up.3-*
- split: pitch_up.4
path: librispeech_asr-test.clean/pitch_up.4-*
- split: pitch_down.1
path: librispeech_asr-test.clean/pitch_down.1-*
- split: pitch_down.2
path: librispeech_asr-test.clean/pitch_down.2-*
- split: pitch_down.3
path: librispeech_asr-test.clean/pitch_down.3-*
- split: pitch_down.4
path: librispeech_asr-test.clean/pitch_down.4-*
- split: pitch_up.1
path: librispeech_asr-test.clean/pitch_up.1-*
- split: pitch_up.2
path: librispeech_asr-test.clean/pitch_up.2-*
- split: resample.1
path: librispeech_asr-test.clean/resample.1-*
- split: resample.2
path: librispeech_asr-test.clean/resample.2-*
- split: resample.3
path: librispeech_asr-test.clean/resample.3-*
- split: resample.4
path: librispeech_asr-test.clean/resample.4-*
- split: env_noise_esc50.1
path: librispeech_asr-test.clean/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: librispeech_asr-test.clean/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: librispeech_asr-test.clean/env_noise_esc50.3-*
- split: env_noise_esc50.4
path: librispeech_asr-test.clean/env_noise_esc50.4-*
- split: voice_conversion.4
path: librispeech_asr-test.clean/voice_conversion.4-*
- split: voice_conversion.3
path: librispeech_asr-test.clean/voice_conversion.3-*
- split: voice_conversion.1
path: librispeech_asr-test.clean/voice_conversion.1-*
- split: voice_conversion.2
path: librispeech_asr-test.clean/voice_conversion.2-*
- split: gain.1
path: librispeech_asr-test.clean/gain.1-*
- split: gain.2
path: librispeech_asr-test.clean/gain.2-*
- split: gain.3
path: librispeech_asr-test.clean/gain.3-*
- split: echo.1
path: librispeech_asr-test.clean/echo.1-*
- split: echo.2
path: librispeech_asr-test.clean/echo.2-*
- split: echo.3
path: librispeech_asr-test.clean/echo.3-*
- split: echo.4
path: librispeech_asr-test.clean/echo.4-*
- split: phaser.1
path: librispeech_asr-test.clean/phaser.1-*
- split: phaser.2
path: librispeech_asr-test.clean/phaser.2-*
- split: phaser.3
path: librispeech_asr-test.clean/phaser.3-*
- split: tempo_up.1
path: librispeech_asr-test.clean/tempo_up.1-*
- split: tempo_up.2
path: librispeech_asr-test.clean/tempo_up.2-*
- split: tempo_up.3
path: librispeech_asr-test.clean/tempo_up.3-*
- split: tempo_up.4
path: librispeech_asr-test.clean/tempo_up.4-*
- split: tempo_down.1
path: librispeech_asr-test.clean/tempo_down.1-*
- split: tempo_down.2
path: librispeech_asr-test.clean/tempo_down.2-*
- split: tempo_down.3
path: librispeech_asr-test.clean/tempo_down.3-*
- split: tempo_down.4
path: librispeech_asr-test.clean/tempo_down.4-*
- split: gain.4
path: librispeech_asr-test.clean/gain.4-*
- split: lowpass.1
path: librispeech_asr-test.clean/lowpass.1-*
- split: lowpass.2
path: librispeech_asr-test.clean/lowpass.2-*
- split: lowpass.3
path: librispeech_asr-test.clean/lowpass.3-*
- split: lowpass.4
path: librispeech_asr-test.clean/lowpass.4-*
- split: highpass.1
path: librispeech_asr-test.clean/highpass.1-*
- split: highpass.2
path: librispeech_asr-test.clean/highpass.2-*
- split: highpass.3
path: librispeech_asr-test.clean/highpass.3-*
- split: highpass.4
path: librispeech_asr-test.clean/highpass.4-*
- split: phaser.4
path: librispeech_asr-test.clean/phaser.4-*
- split: voice_conversion_vctk.1
path: librispeech_asr-test.clean/voice_conversion_vctk.1-*
- split: universal_adv.1
path: librispeech_asr-test.clean/universal_adv.1-*
- split: music.1
path: librispeech_asr-test.clean/music.1-*
- split: music.2
path: librispeech_asr-test.clean/music.2-*
- split: music.3
path: librispeech_asr-test.clean/music.3-*
- split: music.4
path: librispeech_asr-test.clean/music.4-*
- split: crosstalk.1
path: librispeech_asr-test.clean/crosstalk.1-*
- split: crosstalk.2
path: librispeech_asr-test.clean/crosstalk.2-*
- split: crosstalk.3
path: librispeech_asr-test.clean/crosstalk.3-*
- split: crosstalk.4
path: librispeech_asr-test.clean/crosstalk.4-*
- split: env_noise_musan.1
path: librispeech_asr-test.clean/env_noise_musan.1-*
- split: env_noise_musan.2
path: librispeech_asr-test.clean/env_noise_musan.2-*
- split: env_noise_musan.3
path: librispeech_asr-test.clean/env_noise_musan.3-*
- split: env_noise_musan.4
path: librispeech_asr-test.clean/env_noise_musan.4-*
- split: real_rir.1
path: librispeech_asr-test.clean/real_rir.1-*
- split: real_rir.2
path: librispeech_asr-test.clean/real_rir.2-*
- split: real_rir.3
path: librispeech_asr-test.clean/real_rir.3-*
- split: real_rir.4
path: librispeech_asr-test.clean/real_rir.4-*
- split: env_noise_wham.1
path: librispeech_asr-test.clean/env_noise_wham.1-*
- split: env_noise_wham.2
path: librispeech_asr-test.clean/env_noise_wham.2-*
- split: env_noise_wham.3
path: librispeech_asr-test.clean/env_noise_wham.3-*
- split: env_noise_wham.4
path: librispeech_asr-test.clean/env_noise_wham.4-*
- split: tremolo.1
path: librispeech_asr-test.clean/tremolo.1-*
- split: tremolo.2
path: librispeech_asr-test.clean/tremolo.2-*
- split: tremolo.3
path: librispeech_asr-test.clean/tremolo.3-*
- split: tremolo.4
path: librispeech_asr-test.clean/tremolo.4-*
- split: treble.1
path: librispeech_asr-test.clean/treble.1-*
- split: treble.2
path: librispeech_asr-test.clean/treble.2-*
- split: treble.3
path: librispeech_asr-test.clean/treble.3-*
- split: treble.4
path: librispeech_asr-test.clean/treble.4-*
- split: bass.1
path: librispeech_asr-test.clean/bass.1-*
- split: bass.2
path: librispeech_asr-test.clean/bass.2-*
- split: bass.3
path: librispeech_asr-test.clean/bass.3-*
- split: bass.4
path: librispeech_asr-test.clean/bass.4-*
- split: chorus.1
path: librispeech_asr-test.clean/chorus.1-*
- split: chorus.2
path: librispeech_asr-test.clean/chorus.2-*
- split: chorus.3
path: librispeech_asr-test.clean/chorus.3-*
- split: chorus.4
path: librispeech_asr-test.clean/chorus.4-*
- config_name: librispeech_asr-test.clean_pertEval_500_30
data_files:
- split: gnoise.1
path: librispeech_asr-test.clean_pertEval_500_30/gnoise.1-*
- split: env_noise_esc50.1
path: librispeech_asr-test.clean_pertEval_500_30/env_noise_esc50.1-*
- config_name: multilingual_librispeech-french_test
data_files:
- split: gnoise.1
path: multilingual_librispeech-french_test/gnoise.1-*
- split: gnoise.2
path: multilingual_librispeech-french_test/gnoise.2-*
- split: gnoise.3
path: multilingual_librispeech-french_test/gnoise.3-*
- split: speedup.1
path: multilingual_librispeech-french_test/speedup.1-*
- split: speedup.2
path: multilingual_librispeech-french_test/speedup.2-*
- split: speedup.3
path: multilingual_librispeech-french_test/speedup.3-*
- split: slowdown.1
path: multilingual_librispeech-french_test/slowdown.1-*
- split: slowdown.2
path: multilingual_librispeech-french_test/slowdown.2-*
- split: slowdown.3
path: multilingual_librispeech-french_test/slowdown.3-*
- split: pitch_up.1
path: multilingual_librispeech-french_test/pitch_up.1-*
- split: pitch_up.2
path: multilingual_librispeech-french_test/pitch_up.2-*
- split: pitch_up.3
path: multilingual_librispeech-french_test/pitch_up.3-*
- split: pitch_down.1
path: multilingual_librispeech-french_test/pitch_down.1-*
- split: pitch_down.2
path: multilingual_librispeech-french_test/pitch_down.2-*
- split: env_noise.1
path: multilingual_librispeech-french_test/env_noise.1-*
- split: env_noise.3
path: multilingual_librispeech-french_test/env_noise.3-*
- split: env_noise_wham.1
path: multilingual_librispeech-french_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: multilingual_librispeech-french_test/env_noise_wham.2-*
- split: real_rir.3
path: multilingual_librispeech-french_test/real_rir.3-*
- split: env_noise.2
path: multilingual_librispeech-french_test/env_noise.2-*
- split: env_noise_esc50.1
path: multilingual_librispeech-french_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: multilingual_librispeech-french_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: multilingual_librispeech-french_test/env_noise_esc50.3-*
- split: env_noise_musan.1
path: multilingual_librispeech-french_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: multilingual_librispeech-french_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: multilingual_librispeech-french_test/env_noise_musan.3-*
- split: env_noise_wham.3
path: multilingual_librispeech-french_test/env_noise_wham.3-*
- split: pitch_down.3
path: multilingual_librispeech-french_test/pitch_down.3-*
- split: rir.1
path: multilingual_librispeech-french_test/rir.1-*
- split: rir.2
path: multilingual_librispeech-french_test/rir.2-*
- split: rir.3
path: multilingual_librispeech-french_test/rir.3-*
- split: real_rir.1
path: multilingual_librispeech-french_test/real_rir.1-*
- split: real_rir.2
path: multilingual_librispeech-french_test/real_rir.2-*
- split: resample.1
path: multilingual_librispeech-french_test/resample.1-*
- split: resample.2
path: multilingual_librispeech-french_test/resample.2-*
- split: resample.3
path: multilingual_librispeech-french_test/resample.3-*
- split: gain.1
path: multilingual_librispeech-french_test/gain.1-*
- split: gain.2
path: multilingual_librispeech-french_test/gain.2-*
- split: gain.3
path: multilingual_librispeech-french_test/gain.3-*
- split: echo.1
path: multilingual_librispeech-french_test/echo.1-*
- split: echo.2
path: multilingual_librispeech-french_test/echo.2-*
- split: echo.3
path: multilingual_librispeech-french_test/echo.3-*
- split: phaser.1
path: multilingual_librispeech-french_test/phaser.1-*
- split: phaser.2
path: multilingual_librispeech-french_test/phaser.2-*
- split: phaser.3
path: multilingual_librispeech-french_test/phaser.3-*
- split: tempo_up.1
path: multilingual_librispeech-french_test/tempo_up.1-*
- split: tempo_up.2
path: multilingual_librispeech-french_test/tempo_up.2-*
- split: tempo_up.3
path: multilingual_librispeech-french_test/tempo_up.3-*
- split: tempo_down.1
path: multilingual_librispeech-french_test/tempo_down.1-*
- split: tempo_down.2
path: multilingual_librispeech-french_test/tempo_down.2-*
- split: tempo_down.3
path: multilingual_librispeech-french_test/tempo_down.3-*
- split: lowpass.1
path: multilingual_librispeech-french_test/lowpass.1-*
- split: lowpass.2
path: multilingual_librispeech-french_test/lowpass.2-*
- split: lowpass.3
path: multilingual_librispeech-french_test/lowpass.3-*
- split: highpass.1
path: multilingual_librispeech-french_test/highpass.1-*
- split: highpass.2
path: multilingual_librispeech-french_test/highpass.2-*
- split: highpass.3
path: multilingual_librispeech-french_test/highpass.3-*
- split: music.1
path: multilingual_librispeech-french_test/music.1-*
- split: music.2
path: multilingual_librispeech-french_test/music.2-*
- split: music.3
path: multilingual_librispeech-french_test/music.3-*
- split: crosstalk.1
path: multilingual_librispeech-french_test/crosstalk.1-*
- split: crosstalk.2
path: multilingual_librispeech-french_test/crosstalk.2-*
- split: crosstalk.3
path: multilingual_librispeech-french_test/crosstalk.3-*
- split: tremolo.1
path: multilingual_librispeech-french_test/tremolo.1-*
- split: tremolo.2
path: multilingual_librispeech-french_test/tremolo.2-*
- split: tremolo.3
path: multilingual_librispeech-french_test/tremolo.3-*
- split: treble.1
path: multilingual_librispeech-french_test/treble.1-*
- split: treble.2
path: multilingual_librispeech-french_test/treble.2-*
- split: treble.3
path: multilingual_librispeech-french_test/treble.3-*
- split: bass.1
path: multilingual_librispeech-french_test/bass.1-*
- split: bass.2
path: multilingual_librispeech-french_test/bass.2-*
- split: bass.3
path: multilingual_librispeech-french_test/bass.3-*
- split: chorus.1
path: multilingual_librispeech-french_test/chorus.1-*
- split: chorus.2
path: multilingual_librispeech-french_test/chorus.2-*
- split: chorus.3
path: multilingual_librispeech-french_test/chorus.3-*
- split: gnoise.4
path: multilingual_librispeech-french_test/gnoise.4-*
- split: env_noise.4
path: multilingual_librispeech-french_test/env_noise.4-*
- split: env_noise_esc50.4
path: multilingual_librispeech-french_test/env_noise_esc50.4-*
- split: env_noise_musan.4
path: multilingual_librispeech-french_test/env_noise_musan.4-*
- split: env_noise_wham.4
path: multilingual_librispeech-french_test/env_noise_wham.4-*
- split: speedup.4
path: multilingual_librispeech-french_test/speedup.4-*
- split: slowdown.4
path: multilingual_librispeech-french_test/slowdown.4-*
- split: pitch_up.4
path: multilingual_librispeech-french_test/pitch_up.4-*
- split: pitch_down.4
path: multilingual_librispeech-french_test/pitch_down.4-*
- split: rir.4
path: multilingual_librispeech-french_test/rir.4-*
- split: real_rir.4
path: multilingual_librispeech-french_test/real_rir.4-*
- split: resample.4
path: multilingual_librispeech-french_test/resample.4-*
- split: gain.4
path: multilingual_librispeech-french_test/gain.4-*
- split: echo.4
path: multilingual_librispeech-french_test/echo.4-*
- split: phaser.4
path: multilingual_librispeech-french_test/phaser.4-*
- split: tempo_up.4
path: multilingual_librispeech-french_test/tempo_up.4-*
- split: tempo_down.4
path: multilingual_librispeech-french_test/tempo_down.4-*
- split: lowpass.4
path: multilingual_librispeech-french_test/lowpass.4-*
- split: highpass.4
path: multilingual_librispeech-french_test/highpass.4-*
- split: music.4
path: multilingual_librispeech-french_test/music.4-*
- split: crosstalk.4
path: multilingual_librispeech-french_test/crosstalk.4-*
- split: tremolo.4
path: multilingual_librispeech-french_test/tremolo.4-*
- split: treble.4
path: multilingual_librispeech-french_test/treble.4-*
- split: bass.4
path: multilingual_librispeech-french_test/bass.4-*
- split: chorus.4
path: multilingual_librispeech-french_test/chorus.4-*
- config_name: multilingual_librispeech-german_test
data_files:
- split: gnoise.1
path: multilingual_librispeech-german_test/gnoise.1-*
- split: gnoise.2
path: multilingual_librispeech-german_test/gnoise.2-*
- split: gnoise.3
path: multilingual_librispeech-german_test/gnoise.3-*
- split: env_noise.1
path: multilingual_librispeech-german_test/env_noise.1-*
- split: env_noise.2
path: multilingual_librispeech-german_test/env_noise.2-*
- split: env_noise.3
path: multilingual_librispeech-german_test/env_noise.3-*
- split: env_noise_esc50.1
path: multilingual_librispeech-german_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: multilingual_librispeech-german_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: multilingual_librispeech-german_test/env_noise_esc50.3-*
- split: env_noise_musan.1
path: multilingual_librispeech-german_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: multilingual_librispeech-german_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: multilingual_librispeech-german_test/env_noise_musan.3-*
- split: env_noise_wham.1
path: multilingual_librispeech-german_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: multilingual_librispeech-german_test/env_noise_wham.2-*
- split: env_noise_wham.3
path: multilingual_librispeech-german_test/env_noise_wham.3-*
- split: speedup.1
path: multilingual_librispeech-german_test/speedup.1-*
- split: speedup.2
path: multilingual_librispeech-german_test/speedup.2-*
- split: speedup.3
path: multilingual_librispeech-german_test/speedup.3-*
- split: slowdown.1
path: multilingual_librispeech-german_test/slowdown.1-*
- split: slowdown.2
path: multilingual_librispeech-german_test/slowdown.2-*
- split: slowdown.3
path: multilingual_librispeech-german_test/slowdown.3-*
- split: pitch_up.1
path: multilingual_librispeech-german_test/pitch_up.1-*
- split: pitch_up.2
path: multilingual_librispeech-german_test/pitch_up.2-*
- split: pitch_up.3
path: multilingual_librispeech-german_test/pitch_up.3-*
- split: pitch_down.1
path: multilingual_librispeech-german_test/pitch_down.1-*
- split: pitch_down.2
path: multilingual_librispeech-german_test/pitch_down.2-*
- split: pitch_down.3
path: multilingual_librispeech-german_test/pitch_down.3-*
- split: rir.1
path: multilingual_librispeech-german_test/rir.1-*
- split: rir.2
path: multilingual_librispeech-german_test/rir.2-*
- split: rir.3
path: multilingual_librispeech-german_test/rir.3-*
- split: real_rir.1
path: multilingual_librispeech-german_test/real_rir.1-*
- split: real_rir.2
path: multilingual_librispeech-german_test/real_rir.2-*
- split: real_rir.3
path: multilingual_librispeech-german_test/real_rir.3-*
- split: resample.1
path: multilingual_librispeech-german_test/resample.1-*
- split: resample.2
path: multilingual_librispeech-german_test/resample.2-*
- split: resample.3
path: multilingual_librispeech-german_test/resample.3-*
- split: gain.1
path: multilingual_librispeech-german_test/gain.1-*
- split: gain.2
path: multilingual_librispeech-german_test/gain.2-*
- split: gain.3
path: multilingual_librispeech-german_test/gain.3-*
- split: echo.1
path: multilingual_librispeech-german_test/echo.1-*
- split: echo.2
path: multilingual_librispeech-german_test/echo.2-*
- split: echo.3
path: multilingual_librispeech-german_test/echo.3-*
- split: phaser.1
path: multilingual_librispeech-german_test/phaser.1-*
- split: phaser.2
path: multilingual_librispeech-german_test/phaser.2-*
- split: phaser.3
path: multilingual_librispeech-german_test/phaser.3-*
- split: tempo_up.1
path: multilingual_librispeech-german_test/tempo_up.1-*
- split: tempo_up.2
path: multilingual_librispeech-german_test/tempo_up.2-*
- split: tempo_up.3
path: multilingual_librispeech-german_test/tempo_up.3-*
- split: tempo_down.1
path: multilingual_librispeech-german_test/tempo_down.1-*
- split: tempo_down.2
path: multilingual_librispeech-german_test/tempo_down.2-*
- split: tempo_down.3
path: multilingual_librispeech-german_test/tempo_down.3-*
- split: lowpass.1
path: multilingual_librispeech-german_test/lowpass.1-*
- split: lowpass.2
path: multilingual_librispeech-german_test/lowpass.2-*
- split: lowpass.3
path: multilingual_librispeech-german_test/lowpass.3-*
- split: highpass.1
path: multilingual_librispeech-german_test/highpass.1-*
- split: highpass.2
path: multilingual_librispeech-german_test/highpass.2-*
- split: highpass.3
path: multilingual_librispeech-german_test/highpass.3-*
- split: music.1
path: multilingual_librispeech-german_test/music.1-*
- split: music.2
path: multilingual_librispeech-german_test/music.2-*
- split: music.3
path: multilingual_librispeech-german_test/music.3-*
- split: crosstalk.1
path: multilingual_librispeech-german_test/crosstalk.1-*
- split: crosstalk.2
path: multilingual_librispeech-german_test/crosstalk.2-*
- split: crosstalk.3
path: multilingual_librispeech-german_test/crosstalk.3-*
- split: tremolo.1
path: multilingual_librispeech-german_test/tremolo.1-*
- split: tremolo.2
path: multilingual_librispeech-german_test/tremolo.2-*
- split: tremolo.3
path: multilingual_librispeech-german_test/tremolo.3-*
- split: treble.1
path: multilingual_librispeech-german_test/treble.1-*
- split: treble.2
path: multilingual_librispeech-german_test/treble.2-*
- split: treble.3
path: multilingual_librispeech-german_test/treble.3-*
- split: bass.1
path: multilingual_librispeech-german_test/bass.1-*
- split: bass.2
path: multilingual_librispeech-german_test/bass.2-*
- split: bass.3
path: multilingual_librispeech-german_test/bass.3-*
- split: chorus.1
path: multilingual_librispeech-german_test/chorus.1-*
- split: chorus.2
path: multilingual_librispeech-german_test/chorus.2-*
- split: chorus.3
path: multilingual_librispeech-german_test/chorus.3-*
- split: gnoise.4
path: multilingual_librispeech-german_test/gnoise.4-*
- split: env_noise.4
path: multilingual_librispeech-german_test/env_noise.4-*
- split: env_noise_esc50.4
path: multilingual_librispeech-german_test/env_noise_esc50.4-*
- split: env_noise_musan.4
path: multilingual_librispeech-german_test/env_noise_musan.4-*
- split: env_noise_wham.4
path: multilingual_librispeech-german_test/env_noise_wham.4-*
- split: speedup.4
path: multilingual_librispeech-german_test/speedup.4-*
- split: slowdown.4
path: multilingual_librispeech-german_test/slowdown.4-*
- split: pitch_up.4
path: multilingual_librispeech-german_test/pitch_up.4-*
- split: pitch_down.4
path: multilingual_librispeech-german_test/pitch_down.4-*
- split: rir.4
path: multilingual_librispeech-german_test/rir.4-*
- split: real_rir.4
path: multilingual_librispeech-german_test/real_rir.4-*
- split: resample.4
path: multilingual_librispeech-german_test/resample.4-*
- split: gain.4
path: multilingual_librispeech-german_test/gain.4-*
- split: echo.4
path: multilingual_librispeech-german_test/echo.4-*
- split: phaser.4
path: multilingual_librispeech-german_test/phaser.4-*
- split: tempo_up.4
path: multilingual_librispeech-german_test/tempo_up.4-*
- split: tempo_down.4
path: multilingual_librispeech-german_test/tempo_down.4-*
- split: lowpass.4
path: multilingual_librispeech-german_test/lowpass.4-*
- split: highpass.4
path: multilingual_librispeech-german_test/highpass.4-*
- split: music.4
path: multilingual_librispeech-german_test/music.4-*
- split: crosstalk.4
path: multilingual_librispeech-german_test/crosstalk.4-*
- split: tremolo.4
path: multilingual_librispeech-german_test/tremolo.4-*
- split: treble.4
path: multilingual_librispeech-german_test/treble.4-*
- split: bass.4
path: multilingual_librispeech-german_test/bass.4-*
- split: chorus.4
path: multilingual_librispeech-german_test/chorus.4-*
- config_name: multilingual_librispeech-spanish_test
data_files:
- split: None.0
path: multilingual_librispeech-spanish_test/None.0-*
- split: gnoise.1
path: multilingual_librispeech-spanish_test/gnoise.1-*
- split: gnoise.2
path: multilingual_librispeech-spanish_test/gnoise.2-*
- split: gnoise.3
path: multilingual_librispeech-spanish_test/gnoise.3-*
- split: gnoise.4
path: multilingual_librispeech-spanish_test/gnoise.4-*
- split: env_noise.1
path: multilingual_librispeech-spanish_test/env_noise.1-*
- split: env_noise.2
path: multilingual_librispeech-spanish_test/env_noise.2-*
- split: env_noise.3
path: multilingual_librispeech-spanish_test/env_noise.3-*
- split: env_noise.4
path: multilingual_librispeech-spanish_test/env_noise.4-*
- split: rir.1
path: multilingual_librispeech-spanish_test/rir.1-*
- split: rir.2
path: multilingual_librispeech-spanish_test/rir.2-*
- split: rir.3
path: multilingual_librispeech-spanish_test/rir.3-*
- split: rir.4
path: multilingual_librispeech-spanish_test/rir.4-*
- split: speedup.1
path: multilingual_librispeech-spanish_test/speedup.1-*
- split: speedup.2
path: multilingual_librispeech-spanish_test/speedup.2-*
- split: speedup.3
path: multilingual_librispeech-spanish_test/speedup.3-*
- split: speedup.4
path: multilingual_librispeech-spanish_test/speedup.4-*
- split: slowdown.1
path: multilingual_librispeech-spanish_test/slowdown.1-*
- split: slowdown.2
path: multilingual_librispeech-spanish_test/slowdown.2-*
- split: slowdown.3
path: multilingual_librispeech-spanish_test/slowdown.3-*
- split: slowdown.4
path: multilingual_librispeech-spanish_test/slowdown.4-*
- split: pitch_up.3
path: multilingual_librispeech-spanish_test/pitch_up.3-*
- split: pitch_up.4
path: multilingual_librispeech-spanish_test/pitch_up.4-*
- split: pitch_down.1
path: multilingual_librispeech-spanish_test/pitch_down.1-*
- split: pitch_down.2
path: multilingual_librispeech-spanish_test/pitch_down.2-*
- split: pitch_down.3
path: multilingual_librispeech-spanish_test/pitch_down.3-*
- split: pitch_down.4
path: multilingual_librispeech-spanish_test/pitch_down.4-*
- split: pitch_up.1
path: multilingual_librispeech-spanish_test/pitch_up.1-*
- split: pitch_up.2
path: multilingual_librispeech-spanish_test/pitch_up.2-*
- split: resample.2
path: multilingual_librispeech-spanish_test/resample.2-*
- split: resample.3
path: multilingual_librispeech-spanish_test/resample.3-*
- split: resample.4
path: multilingual_librispeech-spanish_test/resample.4-*
- split: env_noise_esc50.1
path: multilingual_librispeech-spanish_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: multilingual_librispeech-spanish_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: multilingual_librispeech-spanish_test/env_noise_esc50.3-*
- split: env_noise_esc50.4
path: multilingual_librispeech-spanish_test/env_noise_esc50.4-*
- split: resample.1
path: multilingual_librispeech-spanish_test/resample.1-*
- split: gain.1
path: multilingual_librispeech-spanish_test/gain.1-*
- split: gain.2
path: multilingual_librispeech-spanish_test/gain.2-*
- split: gain.3
path: multilingual_librispeech-spanish_test/gain.3-*
- split: gain.4
path: multilingual_librispeech-spanish_test/gain.4-*
- split: echo.4
path: multilingual_librispeech-spanish_test/echo.4-*
- split: echo.1
path: multilingual_librispeech-spanish_test/echo.1-*
- split: echo.2
path: multilingual_librispeech-spanish_test/echo.2-*
- split: echo.3
path: multilingual_librispeech-spanish_test/echo.3-*
- split: tempo_up.1
path: multilingual_librispeech-spanish_test/tempo_up.1-*
- split: tempo_up.2
path: multilingual_librispeech-spanish_test/tempo_up.2-*
- split: tempo_up.3
path: multilingual_librispeech-spanish_test/tempo_up.3-*
- split: tempo_up.4
path: multilingual_librispeech-spanish_test/tempo_up.4-*
- split: tempo_down.1
path: multilingual_librispeech-spanish_test/tempo_down.1-*
- split: tempo_down.2
path: multilingual_librispeech-spanish_test/tempo_down.2-*
- split: tempo_down.3
path: multilingual_librispeech-spanish_test/tempo_down.3-*
- split: tempo_down.4
path: multilingual_librispeech-spanish_test/tempo_down.4-*
- split: lowpass.1
path: multilingual_librispeech-spanish_test/lowpass.1-*
- split: lowpass.2
path: multilingual_librispeech-spanish_test/lowpass.2-*
- split: lowpass.3
path: multilingual_librispeech-spanish_test/lowpass.3-*
- split: lowpass.4
path: multilingual_librispeech-spanish_test/lowpass.4-*
- split: highpass.1
path: multilingual_librispeech-spanish_test/highpass.1-*
- split: highpass.2
path: multilingual_librispeech-spanish_test/highpass.2-*
- split: highpass.3
path: multilingual_librispeech-spanish_test/highpass.3-*
- split: highpass.4
path: multilingual_librispeech-spanish_test/highpass.4-*
- split: phaser.1
path: multilingual_librispeech-spanish_test/phaser.1-*
- split: phaser.2
path: multilingual_librispeech-spanish_test/phaser.2-*
- split: phaser.3
path: multilingual_librispeech-spanish_test/phaser.3-*
- split: phaser.4
path: multilingual_librispeech-spanish_test/phaser.4-*
- split: env_noise_musan.1
path: multilingual_librispeech-spanish_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: multilingual_librispeech-spanish_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: multilingual_librispeech-spanish_test/env_noise_musan.3-*
- split: env_noise_musan.4
path: multilingual_librispeech-spanish_test/env_noise_musan.4-*
- split: music.1
path: multilingual_librispeech-spanish_test/music.1-*
- split: music.2
path: multilingual_librispeech-spanish_test/music.2-*
- split: music.3
path: multilingual_librispeech-spanish_test/music.3-*
- split: music.4
path: multilingual_librispeech-spanish_test/music.4-*
- split: crosstalk.1
path: multilingual_librispeech-spanish_test/crosstalk.1-*
- split: crosstalk.2
path: multilingual_librispeech-spanish_test/crosstalk.2-*
- split: crosstalk.3
path: multilingual_librispeech-spanish_test/crosstalk.3-*
- split: crosstalk.4
path: multilingual_librispeech-spanish_test/crosstalk.4-*
- split: env_noise_wham.1
path: multilingual_librispeech-spanish_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: multilingual_librispeech-spanish_test/env_noise_wham.2-*
- split: env_noise_wham.3
path: multilingual_librispeech-spanish_test/env_noise_wham.3-*
- split: env_noise_wham.4
path: multilingual_librispeech-spanish_test/env_noise_wham.4-*
- split: tremolo.1
path: multilingual_librispeech-spanish_test/tremolo.1-*
- split: tremolo.2
path: multilingual_librispeech-spanish_test/tremolo.2-*
- split: tremolo.4
path: multilingual_librispeech-spanish_test/tremolo.4-*
- split: treble.1
path: multilingual_librispeech-spanish_test/treble.1-*
- split: treble.2
path: multilingual_librispeech-spanish_test/treble.2-*
- split: treble.3
path: multilingual_librispeech-spanish_test/treble.3-*
- split: treble.4
path: multilingual_librispeech-spanish_test/treble.4-*
- split: bass.1
path: multilingual_librispeech-spanish_test/bass.1-*
- split: bass.2
path: multilingual_librispeech-spanish_test/bass.2-*
- split: bass.3
path: multilingual_librispeech-spanish_test/bass.3-*
- split: bass.4
path: multilingual_librispeech-spanish_test/bass.4-*
- split: chorus.1
path: multilingual_librispeech-spanish_test/chorus.1-*
- split: chorus.2
path: multilingual_librispeech-spanish_test/chorus.2-*
- split: chorus.3
path: multilingual_librispeech-spanish_test/chorus.3-*
- split: chorus.4
path: multilingual_librispeech-spanish_test/chorus.4-*
- split: tremolo.3
path: multilingual_librispeech-spanish_test/tremolo.3-*
- split: voice_conversion_bark.1
path: multilingual_librispeech-spanish_test/voice_conversion_bark.1-*
- config_name: multilingual_librispeech-spanish_test_pertEval_500_30
data_files:
- split: gnoise.1
path: multilingual_librispeech-spanish_test_pertEval_500_30/gnoise.1-*
- split: env_noise_esc50.1
path: multilingual_librispeech-spanish_test_pertEval_500_30/env_noise_esc50.1-*
- config_name: tedlium-release3_test
data_files:
- split: gnoise.1
path: tedlium-release3_test/gnoise.1-*
- split: gnoise.2
path: tedlium-release3_test/gnoise.2-*
- split: gnoise.3
path: tedlium-release3_test/gnoise.3-*
- split: env_noise_esc50.1
path: tedlium-release3_test/env_noise_esc50.1-*
- split: env_noise_esc50.2
path: tedlium-release3_test/env_noise_esc50.2-*
- split: env_noise_esc50.3
path: tedlium-release3_test/env_noise_esc50.3-*
- split: speedup.1
path: tedlium-release3_test/speedup.1-*
- split: speedup.2
path: tedlium-release3_test/speedup.2-*
- split: speedup.3
path: tedlium-release3_test/speedup.3-*
- split: slowdown.1
path: tedlium-release3_test/slowdown.1-*
- split: slowdown.2
path: tedlium-release3_test/slowdown.2-*
- split: slowdown.3
path: tedlium-release3_test/slowdown.3-*
- split: pitch_up.1
path: tedlium-release3_test/pitch_up.1-*
- split: pitch_up.2
path: tedlium-release3_test/pitch_up.2-*
- split: pitch_up.3
path: tedlium-release3_test/pitch_up.3-*
- split: pitch_down.1
path: tedlium-release3_test/pitch_down.1-*
- split: pitch_down.2
path: tedlium-release3_test/pitch_down.2-*
- split: pitch_down.3
path: tedlium-release3_test/pitch_down.3-*
- split: rir.1
path: tedlium-release3_test/rir.1-*
- split: rir.2
path: tedlium-release3_test/rir.2-*
- split: rir.3
path: tedlium-release3_test/rir.3-*
- split: voice_conversion_vctk.1
path: tedlium-release3_test/voice_conversion_vctk.1-*
- split: resample.1
path: tedlium-release3_test/resample.1-*
- split: resample.2
path: tedlium-release3_test/resample.2-*
- split: resample.3
path: tedlium-release3_test/resample.3-*
- split: gain.1
path: tedlium-release3_test/gain.1-*
- split: gain.2
path: tedlium-release3_test/gain.2-*
- split: gain.3
path: tedlium-release3_test/gain.3-*
- split: echo.1
path: tedlium-release3_test/echo.1-*
- split: echo.2
path: tedlium-release3_test/echo.2-*
- split: echo.3
path: tedlium-release3_test/echo.3-*
- split: phaser.1
path: tedlium-release3_test/phaser.1-*
- split: phaser.2
path: tedlium-release3_test/phaser.2-*
- split: phaser.3
path: tedlium-release3_test/phaser.3-*
- split: tempo_up.1
path: tedlium-release3_test/tempo_up.1-*
- split: tempo_up.2
path: tedlium-release3_test/tempo_up.2-*
- split: tempo_up.3
path: tedlium-release3_test/tempo_up.3-*
- split: tempo_down.1
path: tedlium-release3_test/tempo_down.1-*
- split: tempo_down.2
path: tedlium-release3_test/tempo_down.2-*
- split: tempo_down.3
path: tedlium-release3_test/tempo_down.3-*
- split: lowpass.1
path: tedlium-release3_test/lowpass.1-*
- split: lowpass.2
path: tedlium-release3_test/lowpass.2-*
- split: lowpass.3
path: tedlium-release3_test/lowpass.3-*
- split: highpass.1
path: tedlium-release3_test/highpass.1-*
- split: highpass.2
path: tedlium-release3_test/highpass.2-*
- split: highpass.3
path: tedlium-release3_test/highpass.3-*
- split: gnoise.4
path: tedlium-release3_test/gnoise.4-*
- split: env_noise_esc50.4
path: tedlium-release3_test/env_noise_esc50.4-*
- split: speedup.4
path: tedlium-release3_test/speedup.4-*
- split: slowdown.4
path: tedlium-release3_test/slowdown.4-*
- split: pitch_up.4
path: tedlium-release3_test/pitch_up.4-*
- split: pitch_down.4
path: tedlium-release3_test/pitch_down.4-*
- split: rir.4
path: tedlium-release3_test/rir.4-*
- split: resample.4
path: tedlium-release3_test/resample.4-*
- split: gain.4
path: tedlium-release3_test/gain.4-*
- split: echo.4
path: tedlium-release3_test/echo.4-*
- split: phaser.4
path: tedlium-release3_test/phaser.4-*
- split: tempo_up.4
path: tedlium-release3_test/tempo_up.4-*
- split: tempo_down.4
path: tedlium-release3_test/tempo_down.4-*
- split: lowpass.4
path: tedlium-release3_test/lowpass.4-*
- split: highpass.4
path: tedlium-release3_test/highpass.4-*
- split: None.0
path: tedlium-release3_test/None.0-*
- split: music.1
path: tedlium-release3_test/music.1-*
- split: music.2
path: tedlium-release3_test/music.2-*
- split: music.3
path: tedlium-release3_test/music.3-*
- split: music.4
path: tedlium-release3_test/music.4-*
- split: crosstalk.1
path: tedlium-release3_test/crosstalk.1-*
- split: crosstalk.2
path: tedlium-release3_test/crosstalk.2-*
- split: crosstalk.3
path: tedlium-release3_test/crosstalk.3-*
- split: crosstalk.4
path: tedlium-release3_test/crosstalk.4-*
- split: env_noise_musan.1
path: tedlium-release3_test/env_noise_musan.1-*
- split: env_noise_musan.2
path: tedlium-release3_test/env_noise_musan.2-*
- split: env_noise_musan.3
path: tedlium-release3_test/env_noise_musan.3-*
- split: env_noise_musan.4
path: tedlium-release3_test/env_noise_musan.4-*
- split: real_rir.1
path: tedlium-release3_test/real_rir.1-*
- split: real_rir.2
path: tedlium-release3_test/real_rir.2-*
- split: real_rir.3
path: tedlium-release3_test/real_rir.3-*
- split: real_rir.4
path: tedlium-release3_test/real_rir.4-*
- split: env_noise.1
path: tedlium-release3_test/env_noise.1-*
- split: env_noise.2
path: tedlium-release3_test/env_noise.2-*
- split: env_noise.3
path: tedlium-release3_test/env_noise.3-*
- split: env_noise.4
path: tedlium-release3_test/env_noise.4-*
- split: env_noise_wham.1
path: tedlium-release3_test/env_noise_wham.1-*
- split: env_noise_wham.2
path: tedlium-release3_test/env_noise_wham.2-*
- split: env_noise_wham.3
path: tedlium-release3_test/env_noise_wham.3-*
- split: env_noise_wham.4
path: tedlium-release3_test/env_noise_wham.4-*
- split: tremolo.1
path: tedlium-release3_test/tremolo.1-*
- split: tremolo.2
path: tedlium-release3_test/tremolo.2-*
- split: tremolo.3
path: tedlium-release3_test/tremolo.3-*
- split: tremolo.4
path: tedlium-release3_test/tremolo.4-*
- split: treble.1
path: tedlium-release3_test/treble.1-*
- split: treble.2
path: tedlium-release3_test/treble.2-*
- split: treble.3
path: tedlium-release3_test/treble.3-*
- split: treble.4
path: tedlium-release3_test/treble.4-*
- split: bass.1
path: tedlium-release3_test/bass.1-*
- split: bass.2
path: tedlium-release3_test/bass.2-*
- split: bass.3
path: tedlium-release3_test/bass.3-*
- split: bass.4
path: tedlium-release3_test/bass.4-*
- split: chorus.1
path: tedlium-release3_test/chorus.1-*
- split: chorus.2
path: tedlium-release3_test/chorus.2-*
- split: chorus.4
path: tedlium-release3_test/chorus.4-*
- split: chorus.3
path: tedlium-release3_test/chorus.3-*
---
# Dataset Card for "speech_robust_bench"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
lukaemon/bbh | lukaemon | "2023-02-02T01:14:46Z" | 9,483 | 53 | [
"size_categories:1K<n<10K",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2023-02-01T07:46:51Z" | ---
dataset_info:
- config_name: boolean_expressions
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 11790
num_examples: 250
download_size: 17172
dataset_size: 11790
- config_name: causal_judgement
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 198021
num_examples: 187
download_size: 202943
dataset_size: 198021
- config_name: date_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 54666
num_examples: 250
download_size: 61760
dataset_size: 54666
- config_name: disambiguation_qa
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 78620
num_examples: 250
download_size: 85255
dataset_size: 78620
- config_name: dyck_languages
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38432
num_examples: 250
download_size: 43814
dataset_size: 38432
- config_name: formal_fallacies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 138224
num_examples: 250
download_size: 145562
dataset_size: 138224
- config_name: geometric_shapes
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 68560
num_examples: 250
download_size: 77242
dataset_size: 68560
- config_name: hyperbaton
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38574
num_examples: 250
download_size: 44706
dataset_size: 38574
- config_name: logical_deduction_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 148595
num_examples: 250
download_size: 155477
dataset_size: 148595
- config_name: logical_deduction_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 191022
num_examples: 250
download_size: 198404
dataset_size: 191022
- config_name: logical_deduction_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 105831
num_examples: 250
download_size: 112213
dataset_size: 105831
- config_name: movie_recommendation
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 50985
num_examples: 250
download_size: 57684
dataset_size: 50985
- config_name: multistep_arithmetic_two
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 12943
num_examples: 250
download_size: 18325
dataset_size: 12943
- config_name: navigate
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 49031
num_examples: 250
download_size: 55163
dataset_size: 49031
- config_name: object_counting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 30508
num_examples: 250
download_size: 35890
dataset_size: 30508
- config_name: penguins_in_a_table
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 70062
num_examples: 146
download_size: 74516
dataset_size: 70062
- config_name: reasoning_about_colored_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 89579
num_examples: 250
download_size: 98694
dataset_size: 89579
- config_name: ruin_names
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 46537
num_examples: 250
download_size: 53178
dataset_size: 46537
- config_name: salient_translation_error_detection
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 277110
num_examples: 250
download_size: 286443
dataset_size: 277110
- config_name: snarks
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 38223
num_examples: 178
download_size: 42646
dataset_size: 38223
- config_name: sports_understanding
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 22723
num_examples: 250
download_size: 28617
dataset_size: 22723
- config_name: temporal_sequences
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 139546
num_examples: 250
download_size: 148176
dataset_size: 139546
- config_name: tracking_shuffled_objects_five_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 162590
num_examples: 250
download_size: 169722
dataset_size: 162590
- config_name: tracking_shuffled_objects_seven_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 207274
num_examples: 250
download_size: 214906
dataset_size: 207274
- config_name: tracking_shuffled_objects_three_objects
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 122104
num_examples: 250
download_size: 128736
dataset_size: 122104
- config_name: web_of_lies
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 47582
num_examples: 250
download_size: 52964
dataset_size: 47582
- config_name: word_sorting
features:
- name: input
dtype: string
- name: target
dtype: string
splits:
- name: test
num_bytes: 60918
num_examples: 250
download_size: 66300
dataset_size: 60918
---
# BIG-bench Hard dataset
Homepage: https://github.com/suzgunmirac/BIG-Bench-Hard
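Each task is exposed as its own config with `input`/`target` string fields (see the metadata above). As a minimal illustration of how a record might be turned into a prompt/answer pair — the helper name and the sample record below are ours, not part of the dataset:

```python
def format_example(record):
    """Render a BBH record ({'input': ..., 'target': ...}) as a prompt/answer pair."""
    return f"Q: {record['input']}\nA: {record['target']}"

# Hypothetical record following the schema declared in the metadata above:
example = {"input": "not ( True ) and ( True ) is", "target": "False"}
print(format_example(example))
```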
```
@article{suzgun2022challenging,
title={Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them},
  author={Suzgun, Mirac and Scales, Nathan and Sch{\"a}rli, Nathanael and Gehrmann, Sebastian and Tay, Yi and Chung, Hyung Won and Chowdhery, Aakanksha and Le, Quoc V and Chi, Ed H and Zhou, Denny and Wei, Jason},
journal={arXiv preprint arXiv:2210.09261},
year={2022}
}
``` |
liwu/MNBVC | liwu | "2024-08-23T02:21:05Z" | 9,436 | 507 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:other",
"language_creators:other",
"multilinguality:monolingual",
"source_datasets:original",
"language:zh",
"license:mit",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2023-02-13T14:00:47Z" | ---
annotations_creators:
- other
language:
- zh
language_creators:
- other
license:
- mit
multilinguality:
- monolingual
pretty_name: MNBVC
size_categories:
- unknown
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---
# Dataset Card for MNBVC
## Table of Contents
- [Dataset Card for MNBVC](#dataset-card-for-mnbvc)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Introduction](#dataset-introduction)
- [Data Subsets](#data-subsets)
- [Data Formats](#data-formats)
- [Text Data](#text-data)
- [Q&A Data](#qa-data)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** http://mnbvc.253874.net/
- **Repository:** https://github.com/esbatmop/MNBVC
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** N/A
### Dataset Introduction
On 2023-01-01, the Liwu community — the oldest and most mysterious community on the Chinese internet (bar none) — solemnly announced:
Under the leadership of its wise and mighty moderators, the community is determined to play to its strengths (of which it has many) and help the open-source community maintain, over the long term, the largest corpus of the Chinese internet.
The MNBVC dataset on Hugging Face is updated incrementally; visit [https://github.com/esbatmop/MNBVC](https://github.com/esbatmop/MNBVC) for additional data that has not yet been cleaned.
It can be loaded with the following script:
```python
from datasets import load_dataset
dataset = load_dataset("liwu/MNBVC", 'law_judgement', split='train', streaming=True)
next(iter(dataset)) # get the first line
```
## Data Subsets
The MNBVC dataset contains several subsets:
- `law_judgement`: text from legal documents.
- `gov_xuexiqiangguo`: text from the Xuexi Qiangguo platform.
- `gov_report`: text from government work reports.
- `co_ann_report`: corporate annual report text.
- `code_metadata`: code metadata.
- `qa_zhihu`: Q&A data from [Zhihu](https://huggingface.co./datasets/wangrui6/Zhihu-KOL).
- `qa_wikihow`: Q&A data from wikiHow.
- `qa_mfa`: Q&A data from the Ministry of Foreign Affairs.
- `news_peoples_daily`: text from People's Daily.
- `wikipedia`: text from Wikipedia.
- `qa_stackexchange`: Q&A data from StackExchange.
- `qa_chatgpt`: Q&A corpus constructed with ChatGPT; thanks to [genggui001](https://github.com/genggui001) for contributing it.
- `math`:
  - `math_qa`: Q&A data related to mathematics.
  - `emath`: data from the emath forum for Chinese math enthusiasts.
  - `math_chat`: dialogue data related to mathematics, useful for improving a model's chain-of-thought ability.
- `crawler_oscar`: general text data cleaned from Common Crawl.
- `game`: parallel corpora from games.
  - `Hogwarts_legacy`: Hogwarts Legacy
  - `The_Wither_3`: The Witcher 3
## Data Formats
The MNBVC dataset currently contains the following categories of data:
- general text
- Q&A corpora
- code corpora
- multi-turn dialogue
- forum corpora
- parallel corpora
The concrete format of each category is documented on the [MNBVC wiki page](https://wiki.mnbvc.org/doku.php/%E7%8E%B0%E6%9C%89%E8%AF%AD%E6%96%99%E6%A0%BC%E5%BC%8F).
Data uploaded in the early stage of the project uses the format below; this format will be deprecated and the corresponding data re-uploaded:
```json
{
"text": datasets.Value("string"),
"meta": datasets.Value("string")
}
```
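Since the corpus is large and typically consumed via streaming, it can be handy to peek at only the first few records of a split. A minimal sketch — the `take` helper below is illustrative, not part of the dataset tooling:

```python
from itertools import islice

def take(stream, n):
    """Collect the first n records from any iterable, e.g. a streamed split."""
    return list(islice(stream, n))

# Usage with a streamed split (as in the loading snippet earlier in this card):
# from datasets import load_dataset
# ds = load_dataset("liwu/MNBVC", "law_judgement", split="train", streaming=True)
# first_three = take(ds, 3)
```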
### Contributions
Thanks to the [Liwu community](http://mnbvc.253874.net/) for constructing this dataset.
Thanks to [silver](https://github.com/silverriver) and [jiaming](https://huggingface.co./Yjiaming) for adding and uploading this dataset to Huggingface.
### Citation
Please cite the repo if you use the data or code in this repo.
```
@misc{mnbvc,
author = {{MOP-LIWU Community} and {MNBVC Team}},
title = {MNBVC: Massive Never-ending BT Vast Chinese corpus},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/esbatmop/MNBVC}},
}
```
|
OpenGVLab/OmniCorpus-CC | OpenGVLab | "2024-11-17T07:08:46Z" | 9,408 | 12 | [
"task_categories:image-to-text",
"task_categories:visual-question-answering",
"language:en",
"license:cc-by-4.0",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2406.08418",
"region:us"
] | [
"image-to-text",
"visual-question-answering"
] | "2024-08-30T06:16:02Z" | ---
language:
- en
license: cc-by-4.0
size_categories:
- 100M<n<1B
task_categories:
- image-to-text
- visual-question-answering
dataset_info:
- config_name: CC-MAIN-2013-20
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 19908676196
num_examples: 3878063
download_size: 9303464923
dataset_size: 19908676196
- config_name: CC-MAIN-2013-48
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15282078925
num_examples: 3091537
download_size: 6965036866
dataset_size: 15282078925
- config_name: CC-MAIN-2014-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7227087609
num_examples: 1390034
download_size: 3259239561
dataset_size: 7227087609
- config_name: CC-MAIN-2014-15
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 10106913108
num_examples: 1968361
download_size: 4567738362
dataset_size: 10106913108
- config_name: CC-MAIN-2014-23
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7997621043
num_examples: 1455331
download_size: 3468852905
dataset_size: 7997621043
- config_name: CC-MAIN-2014-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6228103779
num_examples: 1219200
download_size: 2849584613
dataset_size: 6228103779
- config_name: CC-MAIN-2014-41
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 8321822952
num_examples: 1573955
download_size: 3775989970
dataset_size: 8321822952
- config_name: CC-MAIN-2014-42
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7732679416
num_examples: 1511931
download_size: 3505766162
dataset_size: 7732679416
- config_name: CC-MAIN-2014-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4473311810
num_examples: 837735
download_size: 1982728919
dataset_size: 4473311810
- config_name: CC-MAIN-2014-52
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7292722888
num_examples: 1304730
download_size: 2957626766
dataset_size: 7292722888
- config_name: CC-MAIN-2015-06
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5775826679
num_examples: 1061940
download_size: 2462379667
dataset_size: 5775826679
- config_name: CC-MAIN-2015-11
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6263650452
num_examples: 1129411
download_size: 2528026633
dataset_size: 6263650452
- config_name: CC-MAIN-2015-14
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4524425019
num_examples: 885221
download_size: 1939222111
dataset_size: 4524425019
- config_name: CC-MAIN-2015-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6195227565
num_examples: 1104115
download_size: 2634204322
dataset_size: 6195227565
- config_name: CC-MAIN-2015-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7008276790
num_examples: 1290530
download_size: 2913627974
dataset_size: 7008276790
- config_name: CC-MAIN-2015-27
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4320140953
num_examples: 784496
download_size: 1828575226
dataset_size: 4320140953
- config_name: CC-MAIN-2015-32
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4952806590
num_examples: 875601
download_size: 2065207099
dataset_size: 4952806590
- config_name: CC-MAIN-2015-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 6053257306
num_examples: 1086470
download_size: 2632032769
dataset_size: 6053257306
- config_name: CC-MAIN-2015-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5206096790
num_examples: 924036
download_size: 2203603087
dataset_size: 5206096790
- config_name: CC-MAIN-2015-48
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 8343050753
num_examples: 1537468
download_size: 3489600630
dataset_size: 8343050753
- config_name: CC-MAIN-2016-07
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 9329220105
num_examples: 1738650
download_size: 4005599785
dataset_size: 9329220105
- config_name: CC-MAIN-2016-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 3897220786
num_examples: 747570
download_size: 1675500816
dataset_size: 3897220786
- config_name: CC-MAIN-2016-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 4623903344
num_examples: 857060
download_size: 2000624854
dataset_size: 4623903344
- config_name: CC-MAIN-2016-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 3414418701
num_examples: 627995
download_size: 1403890884
dataset_size: 3414418701
- config_name: CC-MAIN-2016-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 7244342539
num_examples: 1183776
download_size: 2913394840
dataset_size: 7244342539
- config_name: CC-MAIN-2016-36
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5402565529
num_examples: 915878
download_size: 2248454753
dataset_size: 5402565529
- config_name: CC-MAIN-2016-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 5938544915
num_examples: 1113534
download_size: 2530904625
dataset_size: 5938544915
- config_name: CC-MAIN-2016-44
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15819536321
num_examples: 3528637
download_size: 6516546200
dataset_size: 15819536321
- config_name: CC-MAIN-2016-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 10822695594
num_examples: 2215939
download_size: 4439728574
dataset_size: 10822695594
- config_name: CC-MAIN-2017-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 11949732148
num_examples: 2441316
download_size: 5045763620
dataset_size: 11949732148
- config_name: CC-MAIN-2017-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 12473370126
num_examples: 2561539
download_size: 5398993614
dataset_size: 12473370126
- config_name: CC-MAIN-2017-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 12209904783
num_examples: 2458486
download_size: 5422393873
dataset_size: 12209904783
- config_name: CC-MAIN-2017-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 13763109013
num_examples: 2615558
download_size: 6025106556
dataset_size: 13763109013
- config_name: CC-MAIN-2017-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 14456991831
num_examples: 2775332
download_size: 6258001465
dataset_size: 14456991831
- config_name: CC-MAIN-2017-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 15036103558
num_examples: 2973499
download_size: 6813218532
dataset_size: 15036103558
- config_name: CC-MAIN-2017-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 18833639414
num_examples: 3870197
download_size: 8464443468
dataset_size: 18833639414
- config_name: CC-MAIN-2017-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 25828116836
num_examples: 4848154
download_size: 11599137919
dataset_size: 25828116836
- config_name: CC-MAIN-2017-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 29432150311
num_examples: 4840435
download_size: 13172655761
dataset_size: 29432150311
- config_name: CC-MAIN-2017-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 32672966840
num_examples: 5724493
download_size: 15041820212
dataset_size: 32672966840
- config_name: CC-MAIN-2017-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 34301891443
num_examples: 5291581
download_size: 15593452226
dataset_size: 34301891443
- config_name: CC-MAIN-2017-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 30012533603
num_examples: 5466672
download_size: 14005518471
dataset_size: 30012533603
- config_name: CC-MAIN-2018-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 47738703452
num_examples: 8053879
download_size: 22533983733
dataset_size: 47738703452
- config_name: CC-MAIN-2018-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 45503126107
num_examples: 8045410
download_size: 21900491411
dataset_size: 45503126107
- config_name: CC-MAIN-2018-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43904789090
num_examples: 7980931
download_size: 21178075620
dataset_size: 43904789090
- config_name: CC-MAIN-2018-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 44481167440
num_examples: 8699878
download_size: 21623780968
dataset_size: 44481167440
- config_name: CC-MAIN-2018-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 64369136465
num_examples: 13332059
download_size: 32293951649
dataset_size: 64369136465
- config_name: CC-MAIN-2018-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 81232597180
num_examples: 16249638
download_size: 41007491366
dataset_size: 81232597180
- config_name: CC-MAIN-2018-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 151537007358
num_examples: 32535697
download_size: 77517210537
dataset_size: 151537007358
- config_name: CC-MAIN-2018-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 39026071869
num_examples: 6347230
download_size: 19285382621
dataset_size: 39026071869
- config_name: CC-MAIN-2018-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 35948493161
num_examples: 6372711
download_size: 17597722170
dataset_size: 35948493161
- config_name: CC-MAIN-2018-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 50928918805
num_examples: 8758225
download_size: 25291022646
dataset_size: 50928918805
- config_name: CC-MAIN-2018-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43961213014
num_examples: 7270815
download_size: 22024998684
dataset_size: 43961213014
- config_name: CC-MAIN-2018-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 58902353921
num_examples: 10215384
download_size: 29497256483
dataset_size: 58902353921
- config_name: CC-MAIN-2019-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54814836003
num_examples: 9930553
download_size: 27458854931
dataset_size: 54814836003
- config_name: CC-MAIN-2019-09
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54426174385
num_examples: 8897510
download_size: 28125345656
dataset_size: 54426174385
- config_name: CC-MAIN-2019-13
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48712051219
num_examples: 7803004
download_size: 25156014252
dataset_size: 48712051219
- config_name: CC-MAIN-2019-18
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48203751852
num_examples: 7532171
download_size: 24844412087
dataset_size: 48203751852
- config_name: CC-MAIN-2019-22
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 51674379059
num_examples: 8339842
download_size: 26257475492
dataset_size: 51674379059
- config_name: CC-MAIN-2019-26
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43336967638
num_examples: 7320268
download_size: 21900316910
dataset_size: 43336967638
- config_name: CC-MAIN-2019-30
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 46313133200
num_examples: 7682281
download_size: 23262218065
dataset_size: 46313133200
- config_name: CC-MAIN-2019-35
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 49570657315
num_examples: 8098108
download_size: 24938729240
dataset_size: 49570657315
- config_name: CC-MAIN-2019-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 43538081906
num_examples: 7102645
download_size: 21728983014
dataset_size: 43538081906
- config_name: CC-MAIN-2019-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 52817470138
num_examples: 8567061
download_size: 26105523209
dataset_size: 52817470138
- config_name: CC-MAIN-2019-47
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 42252827792
num_examples: 6775943
download_size: 21228532199
dataset_size: 42252827792
- config_name: CC-MAIN-2019-51
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 38926356094
num_examples: 6415558
download_size: 19510339598
dataset_size: 38926356094
- config_name: CC-MAIN-2020-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48189844491
num_examples: 7921372
download_size: 24235687030
dataset_size: 48189844491
- config_name: CC-MAIN-2020-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 48904133840
num_examples: 8211791
download_size: 24576159189
dataset_size: 48904133840
- config_name: CC-MAIN-2020-16
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 51243682770
num_examples: 8578633
download_size: 25485035979
dataset_size: 51243682770
- config_name: CC-MAIN-2020-24
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 59424939072
num_examples: 10438139
download_size: 29827361603
dataset_size: 59424939072
- config_name: CC-MAIN-2020-29
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 66229730938
num_examples: 11475631
download_size: 33030161773
dataset_size: 66229730938
- config_name: CC-MAIN-2020-34
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 54287690582
num_examples: 9495610
download_size: 27018821467
dataset_size: 54287690582
- config_name: CC-MAIN-2020-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 71587907978
num_examples: 12058149
download_size: 35795677487
dataset_size: 71587907978
- config_name: CC-MAIN-2020-45
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 59172857400
num_examples: 9694734
download_size: 29495814784
dataset_size: 59172857400
- config_name: CC-MAIN-2020-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 58557861606
num_examples: 9539918
download_size: 29083801775
dataset_size: 58557861606
- config_name: CC-MAIN-2021-04
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 74507336015
num_examples: 12273028
download_size: 36874887518
dataset_size: 74507336015
- config_name: CC-MAIN-2021-10
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 60802783945
num_examples: 10176190
download_size: 30326513365
dataset_size: 60802783945
- config_name: CC-MAIN-2021-17
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 75061494488
num_examples: 12343366
download_size: 37345114890
dataset_size: 75061494488
- config_name: CC-MAIN-2021-21
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 70036417178
num_examples: 11584034
download_size: 34806730527
dataset_size: 70036417178
- config_name: CC-MAIN-2021-25
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 73653674063
num_examples: 12065281
download_size: 36581310312
dataset_size: 73653674063
- config_name: CC-MAIN-2021-31
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 79535885182
num_examples: 13383552
download_size: 39702500971
dataset_size: 79535885182
- config_name: CC-MAIN-2021-39
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 90302065651
num_examples: 14794773
download_size: 45211764750
dataset_size: 90302065651
- config_name: CC-MAIN-2021-43
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 108356023335
num_examples: 17698206
download_size: 54292215300
dataset_size: 108356023335
- config_name: CC-MAIN-2021-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 95867022229
num_examples: 15643875
download_size: 47902433321
dataset_size: 95867022229
- config_name: CC-MAIN-2022-05
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 97602903488
num_examples: 15772898
download_size: 48711364812
dataset_size: 97602903488
- config_name: CC-MAIN-2022-21
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 127495492928
num_examples: 21745889
download_size: 63379692210
dataset_size: 127495492928
- config_name: CC-MAIN-2022-27
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 128061655541
num_examples: 21580054
download_size: 63763936007
dataset_size: 128061655541
- config_name: CC-MAIN-2022-33
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 126436062118
num_examples: 21495687
download_size: 63067252044
dataset_size: 126436062118
- config_name: CC-MAIN-2022-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 123806739937
num_examples: 20206120
download_size: 61929035270
dataset_size: 123806739937
- config_name: CC-MAIN-2022-49
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 152577158166
num_examples: 24634059
download_size: 76529854484
dataset_size: 152577158166
- config_name: CC-MAIN-2023-06
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 174815301023
num_examples: 28962355
download_size: 87301203013
dataset_size: 174815301023
- config_name: CC-MAIN-2023-14
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 227631152876
num_examples: 37223376
download_size: 114188282465
dataset_size: 227631152876
- config_name: CC-MAIN-2023-23
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 321036722459
num_examples: 52119692
download_size: 161491274249
dataset_size: 321036722459
- config_name: CC-MAIN-2023-40
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 648032999611
num_examples: 101292016
download_size: 317965522325
dataset_size: 648032999611
- config_name: CC-MAIN-2023-50
features:
- name: general_metadata
struct:
- name: domain
sequence: string
- name: fluency_prob
dtype: float64
- name: id
dtype: string
- name: non_advertisement_prob
dtype: float64
- name: politics_prob
dtype: float64
- name: porn_prob
dtype: float64
- name: toxic_prob
dtype: float64
- name: url
dtype: string
- name: images
sequence: string
- name: texts
sequence: string
- name: metadata
list:
- name: aesthetic_prob
dtype: float64
- name: bytes
dtype: int64
- name: d_hash
dtype: string
- name: d_hash_dup_count
dtype: int64
- name: height
dtype: int64
- name: img_url_sha
dtype: string
- name: p_hash
dtype: string
- name: p_hash_dup_count
dtype: int64
- name: unsafe_prob
dtype: float64
- name: width
dtype: int64
splits:
- name: train
num_bytes: 744768384551
num_examples: 117073004
download_size: 365332295606
dataset_size: 744768384551
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: CC-MAIN-2013-20/train-*
- config_name: CC-MAIN-2013-48
data_files:
- split: train
path: CC-MAIN-2013-48/train-*
- config_name: CC-MAIN-2014-10
data_files:
- split: train
path: CC-MAIN-2014-10/train-*
- config_name: CC-MAIN-2014-15
data_files:
- split: train
path: CC-MAIN-2014-15/train-*
- config_name: CC-MAIN-2014-23
data_files:
- split: train
path: CC-MAIN-2014-23/train-*
- config_name: CC-MAIN-2014-35
data_files:
- split: train
path: CC-MAIN-2014-35/train-*
- config_name: CC-MAIN-2014-41
data_files:
- split: train
path: CC-MAIN-2014-41/train-*
- config_name: CC-MAIN-2014-42
data_files:
- split: train
path: CC-MAIN-2014-42/train-*
- config_name: CC-MAIN-2014-49
data_files:
- split: train
path: CC-MAIN-2014-49/train-*
- config_name: CC-MAIN-2014-52
data_files:
- split: train
path: CC-MAIN-2014-52/train-*
- config_name: CC-MAIN-2015-06
data_files:
- split: train
path: CC-MAIN-2015-06/train-*
- config_name: CC-MAIN-2015-11
data_files:
- split: train
path: CC-MAIN-2015-11/train-*
- config_name: CC-MAIN-2015-14
data_files:
- split: train
path: CC-MAIN-2015-14/train-*
- config_name: CC-MAIN-2015-18
data_files:
- split: train
path: CC-MAIN-2015-18/train-*
- config_name: CC-MAIN-2015-22
data_files:
- split: train
path: CC-MAIN-2015-22/train-*
- config_name: CC-MAIN-2015-27
data_files:
- split: train
path: CC-MAIN-2015-27/train-*
- config_name: CC-MAIN-2015-32
data_files:
- split: train
path: CC-MAIN-2015-32/train-*
- config_name: CC-MAIN-2015-35
data_files:
- split: train
path: CC-MAIN-2015-35/train-*
- config_name: CC-MAIN-2015-40
data_files:
- split: train
path: CC-MAIN-2015-40/train-*
- config_name: CC-MAIN-2015-48
data_files:
- split: train
path: CC-MAIN-2015-48/train-*
- config_name: CC-MAIN-2016-07
data_files:
- split: train
path: CC-MAIN-2016-07/train-*
- config_name: CC-MAIN-2016-18
data_files:
- split: train
path: CC-MAIN-2016-18/train-*
- config_name: CC-MAIN-2016-22
data_files:
- split: train
path: CC-MAIN-2016-22/train-*
- config_name: CC-MAIN-2016-26
data_files:
- split: train
path: CC-MAIN-2016-26/train-*
- config_name: CC-MAIN-2016-30
data_files:
- split: train
path: CC-MAIN-2016-30/train-*
- config_name: CC-MAIN-2016-36
data_files:
- split: train
path: CC-MAIN-2016-36/train-*
- config_name: CC-MAIN-2016-40
data_files:
- split: train
path: CC-MAIN-2016-40/train-*
- config_name: CC-MAIN-2016-44
data_files:
- split: train
path: CC-MAIN-2016-44/train-*
- config_name: CC-MAIN-2016-50
data_files:
- split: train
path: CC-MAIN-2016-50/train-*
- config_name: CC-MAIN-2017-04
data_files:
- split: train
path: CC-MAIN-2017-04/train-*
- config_name: CC-MAIN-2017-09
data_files:
- split: train
path: CC-MAIN-2017-09/train-*
- config_name: CC-MAIN-2017-13
data_files:
- split: train
path: CC-MAIN-2017-13/train-*
- config_name: CC-MAIN-2017-17
data_files:
- split: train
path: CC-MAIN-2017-17/train-*
- config_name: CC-MAIN-2017-22
data_files:
- split: train
path: CC-MAIN-2017-22/train-*
- config_name: CC-MAIN-2017-26
data_files:
- split: train
path: CC-MAIN-2017-26/train-*
- config_name: CC-MAIN-2017-30
data_files:
- split: train
path: CC-MAIN-2017-30/train-*
- config_name: CC-MAIN-2017-34
data_files:
- split: train
path: CC-MAIN-2017-34/train-*
- config_name: CC-MAIN-2017-39
data_files:
- split: train
path: CC-MAIN-2017-39/train-*
- config_name: CC-MAIN-2017-43
data_files:
- split: train
path: CC-MAIN-2017-43/train-*
- config_name: CC-MAIN-2017-47
data_files:
- split: train
path: CC-MAIN-2017-47/train-*
- config_name: CC-MAIN-2017-51
data_files:
- split: train
path: CC-MAIN-2017-51/train-*
- config_name: CC-MAIN-2018-05
data_files:
- split: train
path: CC-MAIN-2018-05/train-*
- config_name: CC-MAIN-2018-09
data_files:
- split: train
path: CC-MAIN-2018-09/train-*
- config_name: CC-MAIN-2018-13
data_files:
- split: train
path: CC-MAIN-2018-13/train-*
- config_name: CC-MAIN-2018-17
data_files:
- split: train
path: CC-MAIN-2018-17/train-*
- config_name: CC-MAIN-2018-22
data_files:
- split: train
path: CC-MAIN-2018-22/train-*
- config_name: CC-MAIN-2018-26
data_files:
- split: train
path: CC-MAIN-2018-26/train-*
- config_name: CC-MAIN-2018-30
data_files:
- split: train
path: CC-MAIN-2018-30/train-*
- config_name: CC-MAIN-2018-34
data_files:
- split: train
path: CC-MAIN-2018-34/train-*
- config_name: CC-MAIN-2018-39
data_files:
- split: train
path: CC-MAIN-2018-39/train-*
- config_name: CC-MAIN-2018-43
data_files:
- split: train
path: CC-MAIN-2018-43/train-*
- config_name: CC-MAIN-2018-47
data_files:
- split: train
path: CC-MAIN-2018-47/train-*
- config_name: CC-MAIN-2018-51
data_files:
- split: train
path: CC-MAIN-2018-51/train-*
- config_name: CC-MAIN-2019-04
data_files:
- split: train
path: CC-MAIN-2019-04/train-*
- config_name: CC-MAIN-2019-09
data_files:
- split: train
path: CC-MAIN-2019-09/train-*
- config_name: CC-MAIN-2019-13
data_files:
- split: train
path: CC-MAIN-2019-13/train-*
- config_name: CC-MAIN-2019-18
data_files:
- split: train
path: CC-MAIN-2019-18/train-*
- config_name: CC-MAIN-2019-22
data_files:
- split: train
path: CC-MAIN-2019-22/train-*
- config_name: CC-MAIN-2019-26
data_files:
- split: train
path: CC-MAIN-2019-26/train-*
- config_name: CC-MAIN-2019-30
data_files:
- split: train
path: CC-MAIN-2019-30/train-*
- config_name: CC-MAIN-2019-35
data_files:
- split: train
path: CC-MAIN-2019-35/train-*
- config_name: CC-MAIN-2019-39
data_files:
- split: train
path: CC-MAIN-2019-39/train-*
- config_name: CC-MAIN-2019-43
data_files:
- split: train
path: CC-MAIN-2019-43/train-*
- config_name: CC-MAIN-2019-47
data_files:
- split: train
path: CC-MAIN-2019-47/train-*
- config_name: CC-MAIN-2019-51
data_files:
- split: train
path: CC-MAIN-2019-51/train-*
- config_name: CC-MAIN-2020-05
data_files:
- split: train
path: CC-MAIN-2020-05/train-*
- config_name: CC-MAIN-2020-10
data_files:
- split: train
path: CC-MAIN-2020-10/train-*
- config_name: CC-MAIN-2020-16
data_files:
- split: train
path: CC-MAIN-2020-16/train-*
- config_name: CC-MAIN-2020-24
data_files:
- split: train
path: CC-MAIN-2020-24/train-*
- config_name: CC-MAIN-2020-29
data_files:
- split: train
path: CC-MAIN-2020-29/train-*
- config_name: CC-MAIN-2020-34
data_files:
- split: train
path: CC-MAIN-2020-34/train-*
- config_name: CC-MAIN-2020-40
data_files:
- split: train
path: CC-MAIN-2020-40/train-*
- config_name: CC-MAIN-2020-45
data_files:
- split: train
path: CC-MAIN-2020-45/train-*
- config_name: CC-MAIN-2020-50
data_files:
- split: train
path: CC-MAIN-2020-50/train-*
- config_name: CC-MAIN-2021-04
data_files:
- split: train
path: CC-MAIN-2021-04/train-*
- config_name: CC-MAIN-2021-10
data_files:
- split: train
path: CC-MAIN-2021-10/train-*
- config_name: CC-MAIN-2021-17
data_files:
- split: train
path: CC-MAIN-2021-17/train-*
- config_name: CC-MAIN-2021-21
data_files:
- split: train
path: CC-MAIN-2021-21/train-*
- config_name: CC-MAIN-2021-25
data_files:
- split: train
path: CC-MAIN-2021-25/train-*
- config_name: CC-MAIN-2021-31
data_files:
- split: train
path: CC-MAIN-2021-31/train-*
- config_name: CC-MAIN-2021-39
data_files:
- split: train
path: CC-MAIN-2021-39/train-*
- config_name: CC-MAIN-2021-43
data_files:
- split: train
path: CC-MAIN-2021-43/train-*
- config_name: CC-MAIN-2021-49
data_files:
- split: train
path: CC-MAIN-2021-49/train-*
- config_name: CC-MAIN-2022-05
data_files:
- split: train
path: CC-MAIN-2022-05/train-*
- config_name: CC-MAIN-2022-21
data_files:
- split: train
path: CC-MAIN-2022-21/train-*
- config_name: CC-MAIN-2022-27
data_files:
- split: train
path: CC-MAIN-2022-27/train-*
- config_name: CC-MAIN-2022-33
data_files:
- split: train
path: CC-MAIN-2022-33/train-*
- config_name: CC-MAIN-2022-40
data_files:
- split: train
path: CC-MAIN-2022-40/train-*
- config_name: CC-MAIN-2022-49
data_files:
- split: train
path: CC-MAIN-2022-49/train-*
- config_name: CC-MAIN-2023-06
data_files:
- split: train
path: CC-MAIN-2023-06/train-*
- config_name: CC-MAIN-2023-14
data_files:
- split: train
path: CC-MAIN-2023-14/train-*
- config_name: CC-MAIN-2023-23
data_files:
- split: train
path: CC-MAIN-2023-23/train-*
- config_name: CC-MAIN-2023-40
data_files:
- split: train
path: CC-MAIN-2023-40/train-*
- config_name: CC-MAIN-2023-50
data_files:
- split: train
path: CC-MAIN-2023-50/train-*
---
⭐️ **NOTE:** Several parquet files were marked unsafe (viruses) by Hugging Face's official scanning, while they are reported safe by ClamAV and VirusTotal.
We found [many false positive cases](https://discuss.huggingface.co/u/mcpotato/summary) of the HF automatic scanning in HF discussions and raised [one discussion](https://discuss.huggingface.co/t/one-parquet-file-of-my-dataset-was-marked-unsafe/113745) to ask for a re-scan.
# OmniCorpus-CC
This is the repository of OmniCorpus-CC, which contains 988 million image-text interleaved documents collected from [Common Crawl](https://commoncrawl.org/).
- Repository: https://github.com/OpenGVLab/OmniCorpus
- Paper: https://arxiv.org/abs/2406.08418
The OmniCorpus dataset is a large-scale image-text interleaved dataset, which pushes the boundaries of scale and diversity by encompassing **8.6 billion images** interleaved with **1,696 billion text tokens** from diverse sources, significantly surpassing previous datasets.
This dataset demonstrates several advantages over its counterparts:
1. **Larger data scale:** Our dataset is 1.7 times larger in images and 12.5 times larger in texts compared to the previously largest multimodal dataset, LAION-5B, while maintaining excellent data quality.
2. **Richer data diversity:** Drawing from a broader range of data sources, our dataset is more diverse than other image-text interleaved datasets. It includes bilingual multimodal data in both Chinese and English, and encompasses text-centric and vision-centric documents extracted from common websites and video platforms.
3. **More flexible format:** The streaming data format of our dataset offers exceptional flexibility, allowing adaptation to various data structures, including pure text corpora, image-text pairs, and interleaved data formats.
<img width="578" alt="image" src="https://github.com/OpenGVLab/OmniCorpus/assets/47669167/641a6427-ba50-41e6-8634-8810113fd803">
The OmniCorpus contains three sections:
- **OmniCorpus-CC**: processed from dumps in Common Crawl from 2013 to Nov./Dec. 2023.
- **OmniCorpus-CW**: sourced from Chinese internet resources; will be available on the [OpenDataLab](https://opendatalab.com/) platform.
- **OmniCorpus-YT**: samples YouTube video frames as images and collects subtitles as texts.
Code for pre-training, evaluation, main-body extraction, and filtering has been released in the official [repository](https://github.com/OpenGVLab/OmniCorpus). A pre-trained model is available [here](https://huggingface.co./Qingyun/OmniCorpus-InternVL).
# Data Pipeline
Our data pipeline consists of five key stages: main body extraction, preliminary text filtering, document deduplication, image downloading \& filtering, and detailed text filtering. Each stage efficiently reduces the dataset to retain only high-quality data.
Please refer to our paper for more details about the data pipeline.
<img width="723" alt="image" src="https://github.com/OpenGVLab/OmniCorpus/assets/47669167/a6de8928-58fb-4ff4-8ef9-4bd90e9ada5f">
# Usage
The image-text interleaved documents are recommended for the following use cases:
- Pre-training multimodal large language models (MLLMs): Recent MLLMs (such as the Flamingo series, EMU series, IDEFICS series, MM1, Cambrian-1, and xGen-MM) have shown that image-text interleaved data aids multimodal in-context learning and maintains the capabilities of large language models during multimodal fine-tuning.
- Long text-image retrieval: We provide image-text similarities calculated with CLIP, which can convert the documents into an image-text retrieval dataset with longer texts. A retrieval model pre-trained on such data can retrieve images based on longer texts, which can be used for multimodal RAG, converting pure text into multimodal samples, etc.
- Source for further dataset research: Our data is large-scale and can serve as a source for research on data-curation strategies. We provide many useful attributes as metadata for each document, which can enrich filtering strategies and reduce cost.
- ...
# Data Format
Following common practice, the data is stored in the Parquet file format.
You might encounter errors when using `pandas.read_parquet` (because the data structure contains nested elements). We recommend using `fastparquet` to load the parquet files, or `pyarrow` if you prefer batched iteration.
```Python
import fastparquet
df = fastparquet.ParquetFile(parquet_file_path).to_pandas()

# You can also iterate in batches with pyarrow:
import pyarrow.parquet as pq
parquet_file = pq.ParquetFile(parquet_file_path)
for batch in parquet_file.iter_batches():
    df = batch.to_pandas()
```
You can convert the i-th document into a dictionary.
```Python
doc_dict = df.iloc[i].to_dict()
```
The document format is as follows:
```json
{
'images': [
<str: image_1_url>,
None,
<str: image_2_url>,
None,
],
  'texts': [
    None,
    <str: text_paragraph_1_content>,
    None,
    <str: text_paragraph_2_content>,
  ],
  'metadata': [
<dict: image_1_metadata>,
None,
<dict: image_2_metadata>,
None
],
'general_metadata': {
"url": <str: document url>,
"id": <str: document id>,
"domain": <list[str]: domains extracted from document url>,
"fluency_prob": <float: the probability of fluency>,
"non_advertisement_prob": <float: the probability of non-advertisement>,
"porn_prob": <float: the probability of porn content>,
"politics_prob": <float: the probability of politics content>,
"toxic_prob": <float: the probability of toxic content>,
}
}
```
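Given such a document, the parallel `images`/`texts`/`metadata` lists can be walked position by position to recover the interleaved sequence (at each index, either the image or the text entry is set and the other is `None`). A minimal sketch — the toy document below is illustrative, not real data:

```python
def iter_interleaved(doc):
    """Yield ('image', url, meta) or ('text', paragraph) items in document order."""
    for img, txt, meta in zip(doc["images"], doc["texts"], doc["metadata"]):
        if img is not None:
            yield ("image", img, meta)
        elif txt is not None:
            yield ("text", txt)

# Toy document following the schema above (not real data):
doc = {
    "images": ["https://example.com/a.jpg", None],
    "texts": [None, "A paragraph describing the image."],
    "metadata": [{"width": 640, "height": 480}, None],
}
items = list(iter_interleaved(doc))
print(items[0][0], items[1][0])  # image text
```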
The metadata for each image is as follows:
```json
{
"img_url_sha": <str: sha code of image url>,
"width": <int: image width>,
"height": <int: image height>,
"bytes": <int: byte number of the image file>,
"d_hash": <str: d_hash code of the image, used for image deduplication>,
"p_hash": <str: p_hash code of the image, used for image deduplication>,
"d_hash_dup_count": <int: duplicated times detected by d_hash code>,
"p_hash_dup_count": <int: duplicated times detected by p_hash code>,
  "aesthetic_prob": <float: aesthetic probability>,
  "unsafe_prob": <float: NSFW probability>,
}
```
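These per-image attributes can drive custom filtering, as suggested in the usage section. The sketch below is purely illustrative — the threshold values and the `keep_image` helper are our own assumptions, not the filters used to build the corpus:

```python
def keep_image(meta, min_aesthetic=0.5, max_unsafe=0.1, max_dup=10):
    """Illustrative filter over the per-image metadata fields above.

    Thresholds are hypothetical examples, not the corpus' actual criteria.
    Missing probabilities are treated conservatively.
    """
    return (
        (meta.get("aesthetic_prob") or 0.0) >= min_aesthetic
        and (meta.get("unsafe_prob") or 1.0) <= max_unsafe
        and meta.get("p_hash_dup_count", 0) <= max_dup
    )

meta = {"aesthetic_prob": 0.8, "unsafe_prob": 0.01, "p_hash_dup_count": 2}
print(keep_image(meta))  # True
```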
# License
OmniCorpus is released under a [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/deed.en) license, with the primary intent of supporting research activities.
# Citation
```
@article{li2024omnicorpus,
title={OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text},
author={Li, Qingyun and Chen, Zhe and Wang, Weiyun and Wang, Wenhai and Ye, Shenglong and Jin, Zhenjiang and others},
journal={arXiv preprint arXiv:2406.08418},
year={2024}
}
```
|
allenai/reward-bench-results | allenai | "2025-01-10T19:23:29Z" | 9,373 | 2 | [
"region:us"
] | null | "2023-12-20T21:21:33Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: chosen_model
dtype: string
- name: rejected
dtype: string
- name: rejected_model
dtype: string
- name: subset
dtype: string
- name: id
dtype: int64
- name: text_chosen
dtype: string
- name: text_rejected
dtype: string
- name: results
dtype: int64
splits:
- name: filtered
num_bytes: 8126708
num_examples: 2093
download_size: 4062729
dataset_size: 8126708
configs:
- config_name: default
data_files:
- split: filtered
path: data/filtered-*
---
# Results for Holistic Evaluation of Reward Models (HERM) Benchmark
Here, you'll find the raw scores for the HERM project.
The repository is structured as follows.
```
├── best-of-n/                          <- Nested directory for different completions on Best of N challenge
│   ├── alpaca_eval/                    <- results for each reward model
│   │   ├── tulu-13b/{org}/{model}.json
│   │   └── zephyr-7b/{org}/{model}.json
│   └── mt_bench/
│       ├── tulu-13b/{org}/{model}.json
│       └── zephyr-7b/{org}/{model}.json
├── eval-set-scores/{org}/{model}.json <- Per-prompt scores on our core evaluation set.
├── eval-set/ <- Aggregated results on our core eval. set.
├── pref-sets-scores/{org}/{model}.json <- Per-prompt scores on existing test sets.
└── pref-sets/ <- Aggregated results on existing test sets.
```
The data is loaded by the other projects in this repo and released for further research.
See the [GitHub repo](https://github.com/allenai/herm) or the [leaderboard source code](https://huggingface.co./spaces/ai2-adapt-dev/HERM-Leaderboard/tree/main) for examples on loading and manipulating the data.
Tools for analysis are found on [GitHub](https://github.com/allenai/reward-bench/blob/main/analysis/utils.py).
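As one example of working with the per-prompt scores, the integer `results` column (assuming `1` means the reward model ranked the chosen completion above the rejected one) can be aggregated into per-subset accuracies. This is a minimal sketch with toy rows, not the project's official analysis code:

```python
from collections import defaultdict

def per_subset_accuracy(rows):
    """Aggregate per-prompt 0/1 `results` into accuracy per `subset`."""
    totals = defaultdict(lambda: [0, 0])  # subset -> [num_correct, num_total]
    for row in rows:
        t = totals[row["subset"]]
        t[0] += row["results"]
        t[1] += 1
    return {subset: correct / n for subset, (correct, n) in totals.items()}

# Toy rows following the schema (prompt/chosen/rejected columns omitted):
rows = [
    {"subset": "chat", "results": 1},
    {"subset": "chat", "results": 0},
    {"subset": "safety", "results": 1},
]
print(per_subset_accuracy(rows))  # {'chat': 0.5, 'safety': 1.0}
```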
Contact: `nathanl at allenai dot org`
For example, this data can be used to aggregate the distribution of scores across models (it also powers our leaderboard)!
<img src="https://huggingface.co./datasets/allenai/blog-images/resolve/main/reward-bench/dist.png" alt="RewardBench Distribution" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> |
zalando-datasets/fashion_mnist | zalando-datasets | "2024-08-08T06:10:25Z" | 9,347 | 50 | [
"task_categories:image-classification",
"task_ids:multi-class-image-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:1708.07747",
"region:us"
] | [
"image-classification"
] | "2022-03-02T23:29:22Z" | ---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- image-classification
task_ids:
- multi-class-image-classification
paperswithcode_id: fashion-mnist
pretty_name: FashionMNIST
dataset_info:
config_name: fashion_mnist
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': T - shirt / top
'1': Trouser
'2': Pullover
'3': Dress
'4': Coat
'5': Sandal
'6': Shirt
'7': Sneaker
'8': Bag
'9': Ankle boot
splits:
- name: train
num_bytes: 31049107.0
num_examples: 60000
- name: test
num_bytes: 5192560.0
num_examples: 10000
download_size: 36106894
dataset_size: 36241667.0
configs:
- config_name: fashion_mnist
data_files:
- split: train
path: fashion_mnist/train-*
- split: test
path: fashion_mnist/test-*
default: true
---
# Dataset Card for FashionMNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)
- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)
- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
Fashion-MNIST is a dataset of Zalando's article images—consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.
### Supported Tasks and Leaderboards
- `image-classification`: The goal of this task is to classify a given image of Zalando's article into one of 10 classes. The leaderboard is available [here](https://paperswithcode.com/sota/image-classification-on-fashion-mnist).
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
A data point comprises an image and its label.
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=L size=28x28 at 0x27601169DD8>,
'label': 9
}
```
### Data Fields
- `image`: A `PIL.Image.Image` object containing the 28x28 image. Note that when accessing the image column: `dataset[0]["image"]` the image file is automatically decoded. Decoding of a large number of image files might take a significant amount of time. Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `label`: an integer between 0 and 9 representing the classes with the following mapping:
| Label | Description |
| --- | --- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
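The integer-to-name mapping in the table above can be expressed directly in code; the list below simply mirrors the table:

```python
FASHION_MNIST_LABELS = [
    "T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
    "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot",
]

def label_name(label: int) -> str:
    """Map a FashionMNIST integer label (0-9) to its class name."""
    if not 0 <= label <= 9:
        raise ValueError(f"label must be in [0, 9], got {label}")
    return FASHION_MNIST_LABELS[label]

print(label_name(9))  # Ankle boot
```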
### Data Splits
The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.
## Dataset Creation
### Curation Rationale
**From the arXiv paper:**
The original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. "If it doesn't work on MNIST, it won't work at all", they said. "Well, if it does work on MNIST, it may still fail on others."
Here are some good reasons:
- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read "Most pairs of MNIST digits can be distinguished pretty well by just one pixel."
- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.
- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author François Chollet.
### Source Data
#### Initial Data Collection and Normalization
**From the arXiv paper:**
Fashion-MNIST is based on the assortment on Zalando’s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and is stored in 762 × 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.
We use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, white-color products are not included in the dataset as they have low contrast to the background. The thumbnails (51 × 73) are then fed into the following conversion pipeline:
1. Converting the input to a PNG image.
2. Trimming any edges that are close to the color of the corner pixels. The “closeness” is defined by the distance within 5% of the maximum possible intensity in RGB space.
3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.
4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.
5. Extending the shortest edge to 28 and put the image to the center of the canvas.
6. Negating the intensities of the image.
7. Converting the image to 8-bit grayscale pixels.
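Steps 3, 5, and 6 of this pipeline can be sketched in pure Python. This is an illustrative simplification, not the actual conversion code: it omits trimming (step 2) and sharpening (step 4), and the `to_28x28` helper is our own construction.

```python
def to_28x28(pixels):
    """Subsample the longest edge to 28, center-pad the shortest edge to 28,
    then negate intensities. `pixels` is a list of rows of 0-255 gray values."""
    h, w = len(pixels), len(pixels[0])
    scale = max(h, w) / 28
    new_h, new_w = round(h / scale), round(w / scale)
    # Step 3: subsample by skipping rows/columns.
    sub = [[pixels[int(r * scale)][int(c * scale)] for c in range(new_w)]
           for r in range(new_h)]
    # Step 5: center on a 28x28 canvas filled with the light background (255).
    top, left = (28 - new_h) // 2, (28 - new_w) // 2
    canvas = [[255] * 28 for _ in range(28)]
    for r in range(new_h):
        for c in range(new_w):
            canvas[top + r][left + c] = sub[r][c]
    # Step 6: negate intensities, so the background becomes black as in MNIST.
    return [[255 - v for v in row] for row in canvas]

# A flat 51x73 "thumbnail" of mid-gray pixels (73 rows of 51 columns):
thumb = [[200] * 51 for _ in range(73)]
out = to_28x28(thumb)
print(len(out), len(out[0]))  # 28 28
```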
#### Who are the source language producers?
**From the arXiv paper:**
Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.
### Annotations
#### Annotation process
**From the arXiv paper:**
For the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product contains only one silhouette code.
#### Who are the annotators?
**From the arXiv paper:**
The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Han Xiao and Kashif Rasul and Roland Vollgraf
### Licensing Information
MIT Licence
### Citation Information
```
@article{DBLP:journals/corr/abs-1708-07747,
author = {Han Xiao and
Kashif Rasul and
Roland Vollgraf},
title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning
Algorithms},
journal = {CoRR},
volume = {abs/1708.07747},
year = {2017},
url = {http://arxiv.org/abs/1708.07747},
archivePrefix = {arXiv},
eprint = {1708.07747},
timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},
biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
### Contributions
Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset. |
Mutonix/Vript | Mutonix | "2024-06-11T10:38:10Z" | 9,296 | 17 | [
"task_categories:video-classification",
"task_categories:visual-question-answering",
"task_categories:text-to-video",
"task_categories:text-to-image",
"task_categories:image-to-video",
"language:en",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2406.06040",
"region:us"
] | [
"video-classification",
"visual-question-answering",
"text-to-video",
"text-to-image",
"image-to-video"
] | "2024-04-10T12:17:42Z" | ---
task_categories:
- video-classification
- visual-question-answering
- text-to-video
- text-to-image
- image-to-video
language:
- en
size_categories:
- 100K<n<1M
configs:
- config_name: vript-long
data_files:
- split: train
path: vript_captions/vript_long_videos_captions.jsonl
- config_name: vript-short
data_files:
- split: train
path: vript_captions/vript_short_videos_captions.jsonl
---
# 🎬 Vript: Refine Video Captioning into Video Scripting [[Github Repo](https://github.com/mutonix/Vript)]
---
We construct a **fine-grained** video-text dataset with 12K annotated high-resolution videos **(~400k clips)**. The annotation of this dataset is inspired by the video script. If we want to make a video, we first have to write a script to organize how to shoot the scenes. To shoot a scene, we need to decide the content, shot type (medium shot, close-up, etc.), and how the camera moves (panning, tilting, etc.). Therefore, we extend video captioning to video scripting by annotating the videos in the format of video scripts. Unlike previous video-text datasets, we densely annotate the entire videos without discarding any scenes, and each scene has a caption of **~145** words. Besides the vision modality, we transcribe the voice-over into text and include it, along with the video title, to give more background information for annotating the videos.
**_<font color=red>Warning: Some zip files may contain empty folders. You can ignore them as these folders have no video clips and no annotation files.</font>_**
<p align="center">
<img src="assets/Vript-overview_00.png" width="800">
</p>
## Getting Started
**By downloading these datasets, you agree to the terms of the [License](#License).**
The captions of the videos in the Vript dataset are structured as follows:
```
{
"meta": {
"video_id": "339dXVNQXac",
"video_title": "...",
"num_clips": ...,
"integrity": true,
},
"data": {
"339dXVNQXac-Scene-001": {
"video_id": "339dXVNQXac",
"clip_id": "339dXVNQXac-Scene-001",
"video_title": "...",
"caption":{
"shot_type": "...",
"camera_movement": "...",
"content": "...",
"scene_title": "...",
},
"voiceover": ["..."],
},
"339dXVNQXac-Scene-002": {
...
}
}
}
```
- `video_id`: The ID of the video from YouTube.
- `video_title`: The title of the video.
- `num_clips`: The number of clips in the video. If the `integrity` is `false`, some clips may not be captioned.
- `integrity`: Whether all clips are captioned.
- `clip_id`: The ID of the clip in the video, which is the concatenation of the `video_id` and the scene number.
- `caption`: The caption of the scene, including the shot type, camera movement, content, and scene title.
- `voiceover`: The transcription of the voice-over in the scene.
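For reference, a minimal sketch of flattening one caption file (which follows the schema above) into per-clip records — the field values below are placeholders, not real data:

```python
# Flatten one video's caption JSON (schema as documented above) into
# per-clip records. All string values here are placeholders.
video = {
    "meta": {"video_id": "339dXVNQXac", "video_title": "...", "num_clips": 2, "integrity": True},
    "data": {
        "339dXVNQXac-Scene-001": {
            "video_id": "339dXVNQXac",
            "clip_id": "339dXVNQXac-Scene-001",
            "video_title": "...",
            "caption": {"shot_type": "medium shot", "camera_movement": "panning",
                        "content": "...", "scene_title": "..."},
            "voiceover": ["..."],
        },
        "339dXVNQXac-Scene-002": {
            "video_id": "339dXVNQXac",
            "clip_id": "339dXVNQXac-Scene-002",
            "video_title": "...",
            "caption": {"shot_type": "close-up", "camera_movement": "static",
                        "content": "...", "scene_title": "..."},
            "voiceover": [],
        },
    },
}

def flatten_clips(video):
    """Return one (clip_id, shot_type, camera_movement, content) tuple per scene."""
    return [
        (clip["clip_id"], clip["caption"]["shot_type"],
         clip["caption"]["camera_movement"], clip["caption"]["content"])
        for clip in video["data"].values()
    ]

clips = flatten_clips(video)
```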
The data is organized as follows:
```
Vript/
|
├── vript_meta/
│ ├── vript_long_videos_meta.json
│ └── vript_short_videos_meta.json
│
├── vript_captions/
│ ├── vript_long_videos_captions.zip
│ │ ├── 007EvOaWFOA_caption.json
│ │ └── ...
│ └── vript_short_videos_captions.zip
│ └── ...
│
├── vript_long_videos/
│ ├── video_1_of_1095.zip
│ │ ├── 007EvOaWFOA.mp4
│ │ └── ...
│ ├── video_2_of_1095.zip
│ └── ...
│
├── vript_short_videos/
│ ├── short_video_1_of_42.zip
│ │ ├── 02toZL7p4_0.mp4
│ │ └── ...
│ ├── short_video_2_of_42.zip
│ └── ...
│
├── vript_long_videos_clips/
│ ├── clips_1_of_1095.zip
│ │ ├── 007EvOaWFOA/
│ │ │ ├── 007EvOaWFOA_cut_meta.json
│ │ │ ├── 007EvOaWFOA_asr.jsonl
│ │ │ ├── 007EvOaWFOA-Scene-001.mp4
│ │ │ └── ...
│ │ └── ...
│ ├── clips_2_of_1095.zip
│ └── ...
│
└── vript_short_videos_clips/
├── shorts_clips_1_of_42.zip
│ ├── 02toZL7p4_0/
│ │ ├── 02toZL7p4_0_cut_meta.json
│ │ ├── 02toZL7p4_0_asr.jsonl
│ │ ├── 02toZL7p4_0-Scene-001.mp4
│ │ └── ...
│ └── ...
├── shorts_clips_2_of_42.zip
└── ...
```
- `vript_meta/`: The meta information of the videos in the Vript dataset, including the video id, title, url, description, category, etc.
- `vript_captions/`: The video captions of the videos in the Vript dataset, which are structured as described above.
- `vript_long_videos/` (667 GB) and `vript_short_videos/` (8.8 GB): The untrimmed videos in the Vript dataset. Long videos are from YouTube, and short videos are from YouTube Shorts and TikTok. We divide the whole data into multiple zip files, each containing 10 long videos / 50 short videos.
All the videos are in **720p** resolution, and _we will provide the videos in the highest quality (up to 2K) available later_ (or you can download them from YouTube directly).
- `vript_long_videos_clips/` (822 GB) and `vript_short_videos_clips/` (12 GB): The trimmed video clips in the Vript dataset, which correspond to scenes in the `video_captions`.
- `xxx_cut_meta.json`: The meta information about how the video is trimmed, including the start time, end time, and the duration of the scene.
- `xxx_asr.jsonl`: The transcription of the voice-over in the scene.
## License
By downloading or using the data or model, you understand, acknowledge, and agree to all the terms in the following agreement.
- ACADEMIC USE ONLY
Any content from the Vript/Vript-Bench dataset and the Vriptor model is available for academic research purposes only. You agree not to reproduce, duplicate, copy, trade, or exploit it for any commercial purposes.
- NO DISTRIBUTION
Respect the privacy of personal information of the original source. Without the permission of the copyright owner, you are not allowed to perform any form of broadcasting, modification or any other similar behavior to the data set content.
- RESTRICTION AND LIMITATION OF LIABILITY
In no event shall we be liable for any other damages whatsoever arising out of the use of, or inability to use this dataset and its associated software, even if we have been advised of the possibility of such damages.
- DISCLAIMER
You are solely responsible for legal liability arising from your improper use of the dataset content. We reserve the right to terminate your access to the dataset at any time. You should delete the Vript/Vript-Bench dataset or Vriptor model if required.
This license is modified from the [HD-VG-100M](https://github.com/daooshee/HD-VG-130M) license.
<!-- ## Citation
```
``` -->
## Contact
**Dongjie Yang**: [[email protected]](mailto:[email protected])
Paper: arxiv.org/abs/2406.06040 |
isp-uv-es/WorldFloodsv2 | isp-uv-es | "2024-01-10T11:38:21Z" | 9,241 | 6 | [
"license:cc-by-nc-4.0",
"modality:geospatial",
"modality:image",
"doi:10.57967/hf/3149",
"region:us",
"remote sensing",
"sentinel2",
"landsat",
"floods"
] | null | "2023-12-13T17:43:27Z" | ---
license: cc-by-nc-4.0
pipeline_tag: image-segmentation
tags:
- remote sensing
- sentinel2
- landsat
- floods
---
# *WorldFloodsv2* dataset
This repository contains the *WorldFloodsv2* dataset released with the publication:
> E. Portalés-Julià, G. Mateo-García, C. Purcell, and L. Gómez-Chova [Global flood extent segmentation in optical satellite images](https://www.nature.com/articles/s41598-023-47595-7). _Scientific Reports 13, 20316_ (2023). DOI: 10.1038/s41598-023-47595-7.
The [*WorldFloodsv2* database](https://www.nature.com/articles/s41598-023-47595-7) contains 509 pairs of Sentinel-2 images and flood segmentation masks, split into train, val, and test sets.
It requires approximately 76GB of hard-disk storage.
<img src="worldfloods_v2.png" alt="licence" width="65%"/>
## Download the dataset
```
huggingface-cli download --cache-dir /path/to/cachedir --local-dir /path/to/localdir/WorldFloodsv2 --repo-type dataset isp-uv-es/WorldFloodsv2
```
## Explore the dataset
The [exploring *WorldFloodsv2*](https://spaceml-org.github.io/ml4floods/content/prep/exploring_worldfloods.html) tutorial in the [ml4floods](https://github.com/spaceml-org/ml4floods) package shows how to
process the dataset and plot the images and masks.
## Licence
The *WorldFloods* database and all pre-trained models are released under a [Creative Commons non-commercial licence](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt)
## Cite
If you find this work useful, please cite:
```
@article{portales-julia_global_2023,
title = {Global flood extent segmentation in optical satellite images},
volume = {13},
issn = {2045-2322},
doi = {10.1038/s41598-023-47595-7},
number = {1},
urldate = {2023-11-30},
journal = {Scientific Reports},
author = {Portalés-Julià, Enrique and Mateo-García, Gonzalo and Purcell, Cormac and Gómez-Chova, Luis},
month = nov,
year = {2023},
pages = {20316},
}
```
|
knkarthick/dialogsum | knkarthick | "2023-10-03T10:56:21Z" | 9,215 | 191 | [
"task_categories:summarization",
"task_categories:text2text-generation",
"task_categories:text-generation",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:cc-by-nc-sa-4.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dialogue-summary",
"one-liner-summary",
"meeting-title",
"email-subject"
] | [
"summarization",
"text2text-generation",
"text-generation"
] | "2022-06-28T10:17:20Z" | ---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
- text2text-generation
- text-generation
task_ids: []
pretty_name: DIALOGSum Corpus
tags:
- dialogue-summary
- one-liner-summary
- meeting-title
- email-subject
---
# Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
- **Point of Contact:** https://huggingface.co./knkarthick
### Dataset Summary
DialogSum is a large-scale dialogue summarization dataset consisting of 13,460 dialogues (plus 100 held-out dialogues for topic generation) with corresponding manually labeled summaries and topics.
### Languages
English
## Dataset Structure
### Data Instances
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues (+1000 tests) split into train, test and validation.
The first instance in the training set:
{'id': 'train_0', 'summary': "Mr. Smith's getting a check-up, and Doctor Hawkins advises him to have one every year. Hawkins'll give some information about their classes and medications to help Mr. Smith quit smoking.", 'dialogue': "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. Why are you here today?\n#Person2#: I found it would be a good idea to get a check-up.\n#Person1#: Yes, well, you haven't had one for 5 years. You should have one every year.\n#Person2#: I know. I figure as long as there is nothing wrong, why go see the doctor?\n#Person1#: Well, the best way to avoid serious illnesses is to find out about them early. So try to come at least once a year for your own good.\n#Person2#: Ok.\n#Person1#: Let me see here. Your eyes and ears look fine. Take a deep breath, please. Do you smoke, Mr. Smith?\n#Person2#: Yes.\n#Person1#: Smoking is the leading cause of lung cancer and heart disease, you know. You really should quit.\n#Person2#: I've tried hundreds of times, but I just can't seem to kick the habit.\n#Person1#: Well, we have classes and some medications that might help. I'll give you more information before you leave.\n#Person2#: Ok, thanks doctor.", 'topic': 'get a check-up'}
### Data Fields
- dialogue: text of dialogue.
- summary: human written summary of the dialogue.
- topic: human written topic/one liner of the dialogue.
- id: unique file id of an example.
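As a sketch, one way to turn a record with these fields into an input/target pair for summarization fine-tuning — the prompt template here is our own choice, not part of the dataset, and the record below is a truncated placeholder:

```python
# Build an input/target pair from one DialogSum record (fields as documented
# above). The prompt wording is an assumption, not prescribed by the dataset.
def to_example(record):
    prompt = f"Summarize the following dialogue:\n{record['dialogue']}\nSummary:"
    return {"id": record["id"], "input": prompt, "target": record["summary"]}

record = {
    "id": "train_0",
    "dialogue": "#Person1#: Hi, Mr. Smith. I'm Doctor Hawkins. ...",  # truncated placeholder
    "summary": "Mr. Smith's getting a check-up ...",                   # truncated placeholder
    "topic": "get a check-up",
}
example = to_example(record)
```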
### Data Splits
- train: 12460
- val: 500
- test: 1500
- holdout: 100 [Only 3 features: id, dialogue, topic]
## Dataset Creation
### Curation Rationale
In paper:
We collect dialogue data for DialogSum from three public dialogue corpora, namely Dailydialog (Li et al., 2017), DREAM (Sun et al., 2019) and MuTual (Cui et al., 2019), as well as an English speaking practice website. These datasets contain face-to-face spoken dialogues that cover a wide range of daily-life topics, including schooling, work, medication, shopping, leisure, travel. Most conversations take place between friends, colleagues, and between service providers and customers.
Compared with previous datasets, dialogues from DialogSum have distinct characteristics:
Under rich real-life scenarios, including more diverse task-oriented scenarios;
Have clear communication patterns and intents, which is valuable to serve as summarization sources;
Have a reasonable length, which comforts the purpose of automatic summarization.
We ask annotators to summarize each dialogue based on the following criteria:
Convey the most salient information;
Be brief;
Preserve important named entities within the conversation;
Be written from an observer perspective;
Be written in formal language.
### Who are the source language producers?
linguists
### Who are the annotators?
language experts
## Licensing Information
CC BY-NC-SA 4.0
## Citation Information
```
@inproceedings{chen-etal-2021-dialogsum,
title = "{D}ialog{S}um: {A} Real-Life Scenario Dialogue Summarization Dataset",
author = "Chen, Yulong and
Liu, Yang and
Chen, Liang and
Zhang, Yue",
booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.findings-acl.449",
doi = "10.18653/v1/2021.findings-acl.449",
pages = "5062--5074",
```
## Contributions
Thanks to [@cylnlp](https://github.com/cylnlp) for adding this dataset. |
ltg/en-wiki-paraphrased | ltg | "2024-03-29T01:15:47Z" | 9,209 | 7 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-28T21:20:36Z" | ---
license: apache-2.0
---
|
allenai/s2-naip | allenai | "2024-05-31T21:06:47Z" | 9,200 | 17 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-03-06T03:10:43Z" | ---
license: apache-2.0
---
AI2-S2-NAIP is a remote sensing dataset consisting of aligned NAIP, Sentinel-2, Sentinel-1, and Landsat images spanning the entire continental US.
Data is divided into tiles.
Each tile spans 512x512 pixels at 1.25 m/pixel in one of the 10 UTM projections covering the continental US.
At each tile, the following data is available:
- [National Agriculture Imagery Program (NAIP)](https://www.usgs.gov/centers/eros/science/usgs-eros-archive-aerial-photography-national-agriculture-imagery-program-naip): an image from 2019-2021 at 1.25 m/pixel (512x512).
- [Sentinel-2 (L1C)](https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-2): between 16 and 32 images captured within a few months of the NAIP image at 10 m/pixel (64x64).
- [Sentinel-1](https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1): between 2 and 8 images captured within a few months of the NAIP image at 10 m/pixel (64x64).
- [Landsat-8/9](https://www.usgs.gov/landsat-missions/landsat-8): 4 images captured in the same year as the NAIP image at 10 m/pixel (64x64).
- [OpenStreetMap](https://www.openstreetmap.org): a GeoJSON containing buildings, roads, and 30 other categories. It uses pixel coordinates relative to the 512x512 NAIP image.
- [WorldCover](https://worldcover2021.esa.int/): the 2021 land cover image at 10 m/pixel (64x64).
AI2-S2-NAIP is applicable to several supervised and unsupervised tasks in remote sensing, including super-resolution (e.g. NAIP -> Sentinel-2), segmentation and detection (e.g. NAIP or Sentinel-2 -> OpenStreetMap or WorldCover), and multi-modal masked autoencoder pre-training.
For questions or feedback about AI2-S2-NAIP, please open an issue on Github at https://github.com/allenai/satlas.
![Example images for one tile in the dataset.](example_images/combined.png)
Structure
---------
Once extracted, the dataset contains the different data types in different folders.
Each folder contains files named by a tile ID, which consists of the UTM projection, column, and row.
The column and row are based on tiles that are 512x512 pixels with pixel coordinates at 1.25 m/pixel, e.g. `32612_960_-6049.png` spans (614400, -3871360) to (615040, -3870720) in EPSG:32612 projection units.
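Following the naming example above, a tile's bounding box in projection coordinates can be recovered from its column and row. This is our own sketch based on the quoted span; double-check the y-axis sign convention against the mapping code further below:

```python
TILE_PIXELS = 512
M_PER_PIXEL = 1.25                      # meters per pixel
TILE_METERS = TILE_PIXELS * M_PER_PIXEL  # 640 m per tile edge

def tile_bounds(col, row):
    """Projection-coordinate bounding box (x0, y0, x1, y1) of a tile,
    matching the example span quoted above."""
    x0 = col * TILE_METERS
    y0 = row * TILE_METERS
    return (x0, y0, x0 + TILE_METERS, y0 + TILE_METERS)

# Tile 32612_960_-6049 spans (614400, -3871360) to (615040, -3870720).
bounds = tile_bounds(960, -6049)
```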
Here is an example of NAIP data:
```
naip/
32612_960_-6049.png
32612_960_-6050.png
32612_960_-6051.png
...
```
And an example of Sentinel-2 data:
```
sentinel2/
32612_960_-6049_16.tif
32612_960_-6049_32.tif
32612_960_-6049_8.tif
32612_960_-6050_16.tif
...
```
The Sentinel-2, Sentinel-1, and Landsat images are GeoTIFFS so they contain georeference metadata.
Other data does not have georeference metadata, but data at each tile is aligned, so the georeference metadata from the above images is applicable to the other data as well with only a resolution shift.
Mapping Longitude and Latitude to Tile
--------------------------------------
Here is an example of mapping longitude and latitude to a tile.
First install packages:
```
pip install rasterio shapely utm
```
Then launch Python shell:
```python
from rasterio.crs import CRS
from rasterio.warp import transform_geom
import shapely
import utm

# Define source location.
src_crs = CRS.from_epsg(4326)
src_point = shapely.Point(-122.331711, 47.648450)

# Get UTM zone.
_, _, zone_suffix, _ = utm.from_latlon(src_point.y, src_point.x)
epsg_code = 32600 + zone_suffix
dst_crs = CRS.from_epsg(epsg_code)

# Transform to UTM CRS.
dst_point = transform_geom(src_crs, dst_crs, src_point)
dst_point = shapely.geometry.shape(dst_point)

# dst_point is in projection coordinates (meters).
# Now convert to pixel coordinates at 1.25 m/pixel.
col = int(dst_point.x/1.25)
row = int(dst_point.y/-1.25)

# Print the prefix for the image filenames.
print(f"{epsg_code}_{col//512}_{row//512}")

# Print the prefix for the tar filenames to know which one to download.
# These group together many 1.25 m/pixel 512x512 tiles into one tar file.
print(f"{epsg_code}_{col//512//32}_{row//512//32}")
```
So then you would download the tar file from the second prefix, extract it, and look at the file with name matching the first prefix.
See visualize_tile.py for example of visualizing the data at a particular tile.
Sentinel-2
----------
The 10 m/pixel (`_8.tif`), 20 m/pixel (`_16.tif`), and 60 m/pixel (`_32.tif`) bands are stored separately.
Pixel values are the L1C 16-bit values.
The band order is as follows:
- _8.tif (64x64): B02, B03, B04, B08
- _16.tif (32x32): B05, B06, B07, B8A, B11, B12
- _32.tif (16x16): B01, B09, B10
The GeoTIFFs contain multiple images concatenated along the channel axis.
The CSV shows the original Sentinel-2 scene ID of each image.
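Since each `_8.tif` stacks 4 bands per Sentinel-2 capture along the channel axis, splitting the stack back into per-image band groups is a single reshape. A sketch with synthetic data, assuming the band count divides evenly as documented:

```python
import numpy as np

BANDS_PER_IMAGE = 4  # B02, B03, B04, B08 in each _8.tif capture

def split_images(stack, bands_per_image=BANDS_PER_IMAGE):
    """Split a (num_images * bands, H, W) channel-concatenated array
    into shape (num_images, bands, H, W)."""
    c, h, w = stack.shape
    assert c % bands_per_image == 0, "channel count must be a multiple of the band count"
    return stack.reshape(c // bands_per_image, bands_per_image, h, w)

# e.g. a tile with 16 Sentinel-2 captures at 10 m/pixel (64x64):
stack = np.zeros((16 * 4, 64, 64), dtype=np.uint16)
images = split_images(stack)
```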
Sentinel-1
----------
The Sentinel-1 bands are 10 m/pixel and ordered VV then VH.
Only IW VV+VH scenes are used.
The pixel values are 32-bit floating point values representing decibels 10*log10(x).
We obtain the radiometric-calibrated and terrain-corrected images from Google Earth Engine so see
https://developers.google.com/earth-engine/datasets/catalog/COPERNICUS_S1_GRD for details.
The GeoTIFFs contain multiple images concatenated along the channel axis.
The CSV shows the original Sentinel-1 scene ID of each image.
NAIP
----
The NAIP image is 512x512 with four 8-bit bands: R, G, B, IR.
It is encoded as PNG, but the IR band is stored in the alpha channel, so the image cannot be visualized correctly in an image viewer without first removing the alpha mask.
There are two NAIP images available, one under "naip" (2019-2022) and one under "oldnaip" (2015-2018).
The CSV shows the original NAIP scene ID of each image.
Landsat
-------
We include OLI-TIRS images from Landsat-8 and Landsat-9.
As with Sentinel-2, we select Landsat images that were captured within a few months of the NAIP image.
We store the 15 m/pixel bands (i.e. B8) at 10 m/pixel, and the 30 m/pixel bands (all the others) at 20 m/pixel.
There are separate GeoTIFFs for the 10 m/pixel (`_8.tif`) and 20 m/pixel (`_16.tif`).
All pixel values are 16-bit.
The band order is as follows:
- _8.tif (64x64): B8
- _16.tif (32x32): B1, B2, B3, B4, B5, B6, B7, B9, B10, B11
The GeoTIFFs contain multiple images concatenated along the channel axis.
The CSV shows the original Landsat scene ID of each image.
|
pixparse/cc12m-wds | pixparse | "2023-12-15T04:50:41Z" | 9,135 | 19 | [
"task_categories:image-to-text",
"license:other",
"size_categories:10M<n<100M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2102.08981",
"region:us"
] | [
"image-to-text"
] | "2023-12-12T23:59:59Z" | ---
license: other
license_name: conceptual-12m
license_link: LICENSE
task_categories:
- image-to-text
size_categories:
- 10M<n<100M
---
# Dataset Card for Conceptual Captions 12M (CC12M)
## Dataset Description
- **Repository:** [Conceptual 12M repository](https://github.com/google-research-datasets/conceptual-12m)
- **Paper:** [Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts](https://arxiv.org/abs/2102.08981)
- **Point of Contact:** [Conceptual Captions e-mail](mailto:[email protected])
### Dataset Summary
Conceptual 12M (CC12M) is a dataset with 12 million image-text pairs specifically meant to be used for vision-and-language pre-training.
Its data collection pipeline is a relaxed version of the one used in Conceptual Captions 3M (CC3M).
### Usage
This instance of Conceptual Captions is in [webdataset](https://github.com/webdataset/webdataset/commits/main) .tar format. It can be used with webdataset library or upcoming releases of Hugging Face `datasets`.
...More Detail TBD
### Data Splits
This dataset was downloaded using img2dataset. Images with a shortest edge > 512 pixels were resized on download so that the shortest edge = 512.
#### Train
* `cc12m-train-*.tar`
* Downloaded on 2021/18/22
* 2176 shards, 10968539 samples
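Assuming the shards are named sequentially with zero-padded indices (the padding width is a guess; check the actual filenames), a brace-expansion pattern for the webdataset library can be built like this:

```python
def shard_pattern(prefix, num_shards, pad=4):
    """Brace-expansion shard pattern, e.g. 'cc12m-train-{0000..2175}.tar'.
    The zero-padding width is an assumption; verify against the filenames."""
    return f"{prefix}-{{{0:0{pad}d}..{num_shards - 1:0{pad}d}}}.tar"

pattern = shard_pattern("cc12m-train", 2176)

# With the webdataset library (sketch only; sample keys depend on how
# img2dataset wrote the tars):
# import webdataset as wds
# ds = wds.WebDataset(pattern).decode("pil").to_tuple("jpg", "txt")
```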
## Additional Information
### Dataset Curators
Soravit Changpinyo, Piyush Sharma, Nan Ding and Radu Soricut.
### Licensing Information
The dataset may be freely used for any purpose, although acknowledgement of
Google LLC ("Google") as the data source would be appreciated. The dataset is
provided "AS IS" without any warranty, express or implied. Google disclaims all
liability for any damages, direct or indirect, resulting from the use of the
dataset.
### Citation Information
```bibtex
@inproceedings{changpinyo2021cc12m,
title = {{Conceptual 12M}: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts},
author = {Changpinyo, Soravit and Sharma, Piyush and Ding, Nan and Soricut, Radu},
booktitle = {CVPR},
year = {2021},
}
``` |
tiange/Cap3D | tiange | "2024-12-28T16:57:52Z" | 9,095 | 99 | [
"task_categories:text-to-3d",
"task_categories:image-to-3d",
"license:odc-by",
"arxiv:2306.07279",
"arxiv:2404.07984",
"arxiv:2212.08051",
"arxiv:2307.05663",
"arxiv:2110.06199",
"arxiv:1512.03012",
"region:us"
] | [
"text-to-3d",
"image-to-3d"
] | "2023-05-28T18:31:58Z" | ---
license: odc-by
viewer: false
task_categories:
- text-to-3d
- image-to-3d
---
## Dataset Description
- **Paper:** [Scalable 3D Captioning with Pretrained Models](https://arxiv.org/abs/2306.07279)
- **Paper:** [View Selection for 3D Captioning via Diffusion Ranking](https://arxiv.org/abs/2404.07984)
- **Repository**: [Github_Cap3D](https://github.com/crockwell/Cap3D)
- **Repository**: [Github_DiffuRank](https://github.com/tiangeluo/DiffuRank)
- **Project**: [Project](https://cap3d-um.github.io/)
This repository hosts data for [Scalable 3D Captioning with Pretrained Models](https://cap3d-um.github.io/) and [View Selection for 3D Captioning via Diffusion Ranking](http://arxiv.org/abs/2404.07984), including descriptive **captions** for 3D objects in [Objaverse](https://arxiv.org/abs/2212.08051), [Objaverse-XL](https://arxiv.org/pdf/2307.05663.pdf), [ABO](https://arxiv.org/abs/2110.06199), and [ShapeNet](https://arxiv.org/abs/1512.03012). This repo also includes **point clouds** and **rendered images with camera, depth, and MatAlpha information** of Objaverse objects, as well as their Shap-E latent codes. All the captions and data provided by our papers are released under ODC-By 1.0 license.
## Usage
Please download and unzip files from the [**Files page**](https://huggingface.co./datasets/tiange/Cap3D/tree/main) according to your usage. Below is a table listing file descriptions, followed by example Python scripts for data loading.
| Filename | Description |
| -------------------------------------- | ------------------------------------------------------------ |
| **Cap3D_automated_Objaverse_full.csv** | By integrating text descriptions initially generated by [Cap3D](https://arxiv.org/abs/2306.07279) and subsequently refined by [DiffuRank](https://arxiv.org/abs/2404.07984), we have produced a total of **1,006,782** 3D-caption pairs. Out of the total, **785,150** pairs have been contributed to the whole [Objaverse](https://arxiv.org/abs/2212.08051) dataset, with the balance for the [Objaverse-XL](https://arxiv.org/pdf/2307.05663.pdf) dataset (specifically the highquality subset described in Section 4.1 Alignment Finetuning of [Objaverse-XL](https://proceedings.neurips.cc/paper_files/paper/2023/file/70364304877b5e767de4e9a2a511be0c-Paper-Datasets_and_Benchmarks.pdf)). For the object identifier in the left column, strings with a length of 32 characters are **UIDs** from Objaverse 1.0 (retrieved using `import objaverse; uids = objaverse.load_uids()`). Strings with a length of 64 characters are **SHA256** hashes provided by Objaverse-XL. |
| Cap3D_automated_**ABO**.csv | Our captions generated by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984) for the [ABO dataset](https://arxiv.org/abs/2110.06199), including both general and compositional descriptions. |
| Cap3D_automated_**ShapeNet**.csv | Our captions generated by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984) for the [ShapeNet dataset](https://arxiv.org/abs/1512.03012). |
| **PointCloud_zips** | Provided by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984), **1,006,782** PointClouds (16,384 colorful points) extracted from Objaverse objects. Saved as `.ply` file. |
| PointCloud_zips_**ABO** | Provided by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984), **7,953** PointClouds (16,384 colorful points) extracted from ABO objects. Saved as `.ply` file. |
| PointCloud_zips_**ShapeNet** | Provided by [Cap3D](https://arxiv.org/abs/2306.07279) and [DiffuRank](https://arxiv.org/abs/2404.07984), **52,472** PointClouds (16,384 colorful points) extracted from ShapeNet objects. Saved as `.ply` file. |
| **RenderedImage_perobj_zips** | Provided by [DiffuRank](https://arxiv.org/abs/2404.07984), Rendered images for **1,006,782** Objaverse objects. Once unzip `compressed_imgs_perobj_xx.zip` will have multiple zip files which consists of **20** rendered images along with camera details (intrinsic & extrinsic), depth data, and masks ([one example](https://huggingface.co./datasets/tiange/Cap3D/tree/main/RenderedImage_perobj_zips/example_zipfile)). Please specify the unzip path, such as `unzip ed51a51909ee46c780db3a85e821feb2.zip -d ed51a51909ee46c780db3a85e821feb2`. More information are in [here](https://huggingface.co./datasets/tiange/Cap3D/blob/main/RenderedImage_perobj_zips/README.md). |
| RenderedImage_perobj_zips_**ABO** | Provided by [DiffuRank](https://arxiv.org/abs/2404.07984), Rendered images for **7,953** ABO objects. Details similar to the above. |
| RenderedImage_perobj_zips_**ShapeNet** | Provided by [DiffuRank](https://arxiv.org/abs/2404.07984), Rendered images for **52,472** ShapeNet objects. Similar to the above but with 8 rendered images. |
| misc | Including miscellaneous files such as human-authored captions, finetuned models, objaverse pointclouds stored as .pt, shapE latent codes, and etc. Please refer to this [README](https://huggingface.co./datasets/tiange/Cap3D/blob/main/misc/README.md) |
``` python
# load our captions
import pandas as pd
captions = pd.read_csv('Cap3D_automated_Objaverse_full.csv', header=None)
## captions:
## 0 1
## 0 ed51a51909ee46c780db3a85e821feb2 Matte green rifle with a long barrel, stock, a...
## 1 9110b606f6c547b2980fcb3c8c4b6a1c Rustic single-story building with a weathered ...
## 2 80d9caaa1fa04502af666135196456e1 a pair of purple and black swords with white h...
## 3 28d43a218cd8466a8c1f82b29b71e314 3D model of a cluttered outdoor scene with veg...
## 4 75582285fab442a2ba31733f9c8fae66 Floating terrain piece with grassy landscape a...
## ... ... ...
## 1002417 3623e74f34c1c3c523af6b2bb8ffcbe2d2dce897ef61b9... Abstract 3D composition with human figures and...
## 1002418 64e9f7b7a1fc4c4ec56ed8b5917dfd610930043ac5e15f... 3D object with a rough, irregular pink surface...
## 1002419 fcd089d6a237fee21dfd5f0d6d9b74b2fd1150cdc61c7f... Bright pink abstract 3D model of a building wi...
## 1002420 f812dc980050f2d5f4b37df2a8620372f810dd6456a5f2... Monochromatic gray 3D model of a stylized huma...
## 1002421 77c09500b4d8e4b881e1ce6929d56c23658b87173c0996... Modular futuristic spacecraft with red and ora...
## if u want to obtain the caption for specific UID
caption = captions[captions[0] == '80d9caaa1fa04502af666135196456e1'][1].values[0]
# load point clouds (unzip https://huggingface.co./datasets/tiange/Cap3D/tree/main/PointCloud_pt_zips)
import torch
pts = torch.load('Cap3D_pcs_pt/80d9caaa1fa04502af666135196456e1.pt')
## pts.shape == torch.Size([6, 16384])
```
If you have any questions, please contact [Tiange](mailto:[email protected]) or [Chris](mailto:[email protected]).
## Citation Information
If you find our data or code useful, please consider citing:
```bibtex
@article{luo2023scalable,
title={Scalable 3D Captioning with Pretrained Models},
author={Luo, Tiange and Rockwell, Chris and Lee, Honglak and Johnson, Justin},
journal={arXiv preprint arXiv:2306.07279},
year={2023}
}
@article{luo2024view,
title={View Selection for 3D Captioning via Diffusion Ranking},
author={Luo, Tiange and Johnson, Justin and Lee, Honglak},
journal={arXiv preprint arXiv:2404.07984},
year={2024}
}
```
Please cite ***Objaverse*** and ***ABO*** paper accordingly, if you use related data.
```
@inproceedings{deitke2023objaverse,
title={Objaverse: A universe of annotated 3d objects},
author={Deitke, Matt and Schwenk, Dustin and Salvador, Jordi and Weihs, Luca and Michel, Oscar and VanderBilt, Eli and Schmidt, Ludwig and Ehsani, Kiana and Kembhavi, Aniruddha and Farhadi, Ali},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13142--13153},
year={2023}
}
@article{deitke2024objaverse,
title={Objaverse-xl: A universe of 10m+ 3d objects},
author={Deitke, Matt and Liu, Ruoshi and Wallingford, Matthew and Ngo, Huong and Michel, Oscar and Kusupati, Aditya and Fan, Alan and Laforte, Christian and Voleti, Vikram and Gadre, Samir Yitzhak and others},
journal={Advances in Neural Information Processing Systems},
volume={36},
year={2024}
}
@inproceedings{collins2022abo,
title={Abo: Dataset and benchmarks for real-world 3d object understanding},
author={Collins, Jasmine and Goel, Shubham and Deng, Kenan and Luthra, Achleshwar and Xu, Leon and Gundogdu, Erhan and Zhang, Xi and Vicente, Tomas F Yago and Dideriksen, Thomas and Arora, Himanshu and others},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={21126--21136},
year={2022}
}
```
|
mteb/sts15-sts | mteb | "2022-09-27T19:12:14Z" | 9,037 | 1 | [
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2022-04-20T10:48:04Z" | ---
language:
- en
--- |
deepghs/gelbooru_full | deepghs | "2025-01-02T20:35:28Z" | 9,016 | 37 | [
"task_categories:image-classification",
"task_categories:zero-shot-image-classification",
"task_categories:text-to-image",
"annotations_creators:no-annotation",
"source_datasets:gelbooru",
"language:en",
"license:other",
"size_categories:10M<n<100M",
"region:us",
"art",
"anime",
"not-for-all-audiences"
] | [
"image-classification",
"zero-shot-image-classification",
"text-to-image"
] | "2024-04-26T18:57:28Z" | ---
license: other
task_categories:
- image-classification
- zero-shot-image-classification
- text-to-image
language:
- en
tags:
- art
- anime
- not-for-all-audiences
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
source_datasets:
- gelbooru
---
# Gelbooru Full Dataset
This is the full dataset of [gelbooru.com](https://gelbooru.com/). And all the original images are maintained here.
# How to Painlessly Use This
Use [cheesechaser](https://github.com/deepghs/cheesechaser) to quickly get images from this repository.
Before using this code, you have to **be granted access to this gated repository**. Then **set your personal HuggingFace token in the `HF_TOKEN` environment variable** to authorize the code for this repository.
```python
from cheesechaser.datapool import GelbooruDataPool
pool = GelbooruDataPool()
pool.batch_download_to_directory(
# download images #7000000-7000100, any ranges or id lists are okay
resource_ids=range(7000000, 7000100),
# save to directory /data/gelbooru
dst_dir='/data/gelbooru',
)
```
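If you would rather set the token from Python than from your shell, a minimal sketch (the token value is a placeholder; substitute your own token from your Hugging Face account settings):

```python
import os

# Assumption: the value below is a placeholder -- use your own Hugging Face
# access token, which must have read access to this gated repository.
# Set it *before* constructing GelbooruDataPool so it is picked up.
os.environ["HF_TOKEN"] = "hf_xxxxxxxxxxxxxxxx"
```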
# Information
## Images
There are 10102818 images in total. The maximum ID of these images is 11191859. Last updated at `2025-01-03 05:29:50 JST`.
Here is the information for the 50 most recent images:
| id | filename | width | height | mimetype | tags | file_size | file_url |
|---------:|:--------------|--------:|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------:|:-----------------------------------------------------------------------------|
| 11191859 | 11191859.jpeg | 1211 | 2048 | image/jpeg | 1boy black_bodysuit blue_eyes bodysuit closed_mouth colored_skin dragon_ball dragon_ball_super earrings expressionless fused_zamasu green_skin jewelry long_sleeves looking_at_viewer luna_306 pointy_ears potara_earrings red_sash sash solo spiked_hair white_hair | 304541 | https://img3.gelbooru.com/images/77/34/7734c747ab552629e27b6781697af073.jpeg |
| 11191858 | 11191858.jpeg | 1953 | 2859 | image/jpeg | 1girl aile_(mega_man_zx) alternate_breast_size arms_at_sides artist_name black_bodysuit black_footwear blannis blue_background blue_footwear blue_jacket bodysuit boots bracelet breasts brown_hair capcom covered_collarbone covered_navel cropped_jacket embarrassed eyebrows eyelashes female_focus from_above full_body green_eyes grin jacket jewelry large_breasts lips long_hair looking_at_viewer looking_up low_ponytail mega_man_(series) multicolored_footwear multicolored_jacket nervous nervous_smile open_clothes open_jacket ponytail shorts simple_background smile solo standing teeth two-tone_jacket white_shorts yellow_footwear yellow_jacket | 308768 | https://img3.gelbooru.com/images/8e/fa/8efa0dfc7e83f44bd45ffc8133746e35.jpeg |
| 11191857 | 11191857.png | 1993 | 2464 | image/png | 1boy 1girl absurdres ahegao alternate_hairstyle anal artist_name bandages belly black_eyes black_thighhighs blunt_bangs breasts breasts_out clenched_hand clothes_lift clothes_pull colored_pubic_hair covering_privates cum cum_in_ass cum_on_ass cum_on_body cum_on_lower_body drooling english_text eyelashes female_masturbation fingernails frilled_shirt frilled_sleeves frills grabbing_another's_breast green_hair green_pubic_hair groping heart heart-shaped_pupils highres large_breasts long_hair long_sleeves looking_ahead looking_at_another lying makeup masturbating_while_penetrated masturbation midriff miniskirt moaning nail_polish navel nipples on_back one_piece open_mouth patreon_username pelvic_curtain penis perona pink_eyes pink_hair pink_nails pleated_skirt pussy pussy_juice pussy_juice_trail pussy_rub red_skirt roronoa_zoro runny_makeup saliva saliva_trail sex shirt shirt_pull short_hair simple_background skirt skirt_lift skull_and_crossbones skull_print snegovski sparkle_background speech_bubble star_(symbol) star_print star_tattoo steam steaming_body stomach striped_clothes striped_thighhighs sweat sweatdrop symbol-shaped_pupils tattoo tearing_up teeth testicles thighhighs tongue topless_male torn_clothes torn_thighhighs two-tone_thighhighs veins very_long_hair white_shirt white_thighhighs | 5741788 | https://img3.gelbooru.com/images/2e/e9/2ee9bfe452b6bef6948c1d37e0529ff0.png |
| 11191856 | 11191856.jpeg | 3568 | 2813 | image/jpeg | 2boys bodysuit closed_mouth dougi dragon_ball dragon_ball_super earrings expressionless eye_contact gimdol_z goku_black greyscale jewelry long_sleeves looking_at_another mohawk monochrome multiple_boys parted_lips pointy_ears potara_earrings saiyan shirt short_hair spiked_hair white_background white_hair zamasu | 734125 | https://img3.gelbooru.com/images/82/48/8248dc9d6efc694637011522741149af.jpeg |
| 11191855 | 11191855.jpeg | 768 | 1024 | image/jpeg | 1girl 2boys :d ash_ketchum asymmetrical_hair black_footwear black_hair blush boots bow brock_(pokemon) brown_hair brown_pants capelet christmas closed_eyes closed_mouth creatures_(company) fur_trim game_freak gen_1_pokemon gift gloves green_eyes hat holding hood jacket misty_(pokemon) multiple_boys nintendo onix open_mouth orange_hair pants pikachu pokemon ponytail psyduck sack santa_costume santa_hat shoes short_hair side_ponytail sitting smile squatting sweatdrop tongue | 156370 | https://img3.gelbooru.com/images/16/75/1675e80998438d8f47b6e474eeddae3d.jpeg |
| 11191854 | 11191854.jpeg | 1307 | 1200 | image/jpeg | 1girl :3 blush closed_eyes closed_mouth creatures_(company) eevee game_freak gen_1_pokemon highres misty_(pokemon) nintendo open_mouth orange_hair pokemon pokemon_rgby seashell shell smile starfish starmie staryu swimming vaporeon water yurano_(upao) | 815648 | https://img3.gelbooru.com/images/78/a1/78a1edb96b87093ee4394ab1d90c40ef.jpeg |
| 11191853 | 11191853.jpg | 1879 | 2756 | image/jpeg | 1girl absurdres bare_shoulders breasts cigarette cleavage collarbone dark-skinned_female dark_skin ear_piercing highres holding holding_cigarette huge_breasts monochrome name_tag norman_maggot off-shoulder_shirt off_shoulder olverse original pencil_skirt piercing sarah_(ol-chan) shirt short_hair skirt solo swept_bangs thighs | 244863 | https://img3.gelbooru.com/images/8f/be/8fbe58951b2a4845a4b9ac002ba1e566.jpg |
| 11191852 | 11191852.png | 1198 | 1255 | image/png | 1boy 1girl aristocratic_clothes blue_eyes color_guide commentary dated english_commentary fire_emblem fire_emblem_fates gradient_clothes gradient_hair highres japanese_clothes limited_palette long_hair male_focus multicolored_hair nintendo one_eye_closed ponytail red_eyes red_ribbon ribbon sandragh signature simple_background solo takumi_(fire_emblem) two-tone_eyes very_long_hair | 971525 | https://img3.gelbooru.com/images/82/39/823936c08bf752e98dab4cf5c8ed45eb.png |
| 11191851 | 11191851.jpg | 2468 | 1635 | image/jpeg | 6+girls ahoge akari_(blue_archive) angcha apron bad_food black_bow black_coat black_hair black_hat black_horns black_shirt blonde_hair blue_archive blue_halo blunt_bangs bow braid breasts breath_weapon breathing_fire brown_eyes brown_hair closed_eyes closed_mouth coat collared_shirt cooking demon_girl demon_horns eating fire food frilled_sleeves frills fuuka_(blue_archive) hair_on_horn halo haruna_(blue_archive) hat headscarf highres holding holding_detonator holding_food horns iori_(blue_archive) izumi_(blue_archive) junko_(blue_archive) juri_(blue_archive) kettle lantern large_breasts long_hair long_sleeves multiple_girls necktie night open_mouth puffy_long_sleeves puffy_sleeves purple_halo red_hair red_halo red_necktie sanpaku scarf school_uniform shirt shoes short_necktie side_braid sidelocks smile tearing_up twintails white_apron white_hair white_shirt wiping_mouth yellow_halo | 499829 | https://img3.gelbooru.com/images/4e/44/4e44b9ab450725b80d042a8565a0f98c.jpg |
| 11191850 | 11191850.jpg | 1390 | 900 | image/jpeg | 1girl alternate_costume blue_eyes blue_sky border breasts commentary elysia_(honkai_impact) english_commentary hairband honkai_(series) honkai_impact_3rd long_hair lying nami_(tznami) on_back parted_lips pink_hair sky smile snow solo sweater tree twitter_username white_border white_hairband white_sweater | 141503 | https://img3.gelbooru.com/images/54/61/54618b41d4a01d14d6b9aab3f051ba63.jpg |
| 11191849 | 11191849.png | 1296 | 1148 | image/png | 1boy abs ass bald bandaged_chest bandages blood bloody_bandages bodysuit clenched_hand closed_eyes closed_mouth commentary_request dragon_ball dragon_ball_super from_behind greyscale hit_(dragon_ball) kamatama_(bobojobobo) looking_down male_focus monochrome multiple_views muscular muscular_male solo sound_effects topless_male twitter_username veins veiny_arms | 640513 | https://img3.gelbooru.com/images/f7/5d/f75ded0d0af252383e61a56a9e8ded37.png |
| 11191848 | 11191848.jpg | 3000 | 3000 | image/jpeg | 1boy absurdres artist_name black_fur bracelet cape christmas chromatic_aberration closed_mouth commentary feet_out_of_frame film_grain fur-trimmed_headwear fur_trim furry furry_male glint gloves hat hedgehog_boy highres instagram_logo instagram_username jewelry looking_at_viewer male_focus red_cape red_eyes red_fur red_hat santa_hat shadow_the_hedgehog shiiyou shoes smile snowflakes solo sonic_(series) symbol-only_commentary twitter_username twitter_x_logo two-tone_fur white_footwear white_gloves | 2152633 | https://img3.gelbooru.com/images/f4/23/f423150983af5d9a5c54808bb0e10d0c.jpg |
| 11191847 | 11191847.jpg | 1763 | 3294 | image/jpeg | 1girl :d absurdres artist_name belt black_thighhighs blonde_hair blurry blurry_background breasts burnice_white commentary cup drinking_glass english_commentary eyewear_on_head fingernails fire foxyreine highres holding holding_cup large_breasts looking_at_viewer medium_hair nail_polish naked_ribbon navel open_mouth red_belt red_eyes red_nails ribbon smile solo standing stomach sunglasses teeth thighhighs thighs twintails zenless_zone_zero | 672121 | https://img3.gelbooru.com/images/8a/e4/8ae4b90c57c2908788eb44e05464f762.jpg |
| 11191846 | 11191846.jpg | 2718 | 4096 | image/jpeg | 1girl absurdres akami_chiyo black_hair black_necktie black_pants black_tail chromatic_aberration collared_shirt ear_piercing earrings fins fish_tail highres indie_virtual_youtuber jewelry light_blush long_hair looking_at_viewer necktie pants piercing plunplun red_eyes red_hair red_shirt sharp_teeth shirt solo tail teeth virtual_youtuber wing_collar | 830643 | https://img3.gelbooru.com/images/5b/2b/5b2bb15468ebb759d55825e6ef0640bc.jpg |
| 11191845 | 11191845.png | 1012 | 1433 | image/png | 1girl :p bike_shorts blue_eyes blush breasts brown_hair cameltoe clothes_pull covering_breasts covering_privates creatures_(company) game_freak grey_background groin highres holding huuuna large_breasts licking_lips looking_at_viewer may_(pokemon) navel nintendo pokemon pokemon_rse pulling_own_clothes red_shirt shirt short_hair shorts skirt skirt_pull solo thighs tongue tongue_out undressing | 746214 | https://img3.gelbooru.com/images/cc/d3/ccd32a632db8e9d2e7eadfacc03473a0.png |
| 11191844 | 11191844.png | 2200 | 2200 | image/png | 2girls :d adapted_costume alternate_costume armpits bare_shoulders bat_tattoo beelstarmon bell belt bikini bikini_top_only black_belt black_gloves black_hair black_leotard black_mask black_shorts black_vest blonde_hair blue_eyes box breasts capelet christmas chromatic_aberration cleavage collarbone commentary covered_navel cropped_legs cropped_vest dclaret demon_girl demon_wings detached_sleeves digimon digimon_(creature) elbow_gloves english_commentary eye_mask eyeshadow facial_mark forehead_mark forehead_tattoo fur-trimmed_bikini fur-trimmed_capelet fur-trimmed_shorts fur_trim gift gift_box gloves green_ribbon grin groin hair_ornament hand_up highres holding holding_gift holding_sack horns jingle_bell large_breasts leotard lilithmon long_hair looking_at_viewer makeup mask merry_christmas multiple_girls navel one_eye_closed open_fly open_mouth parted_bangs pointy_ears purple_eyeshadow purple_lips red_background red_bikini red_capelet red_ribbon red_sleeves ribbon sack see-through_cleavage see-through_clothes short_hair short_shorts shorts sidelocks smile standing stomach striped_ribbon swimsuit tattoo teeth third_eye twitter_username underboob v vest wings | 10262725 | https://img3.gelbooru.com/images/6c/0a/6c0a6f3cf889cc2bde83a79394b0b961.png |
| 11191843 | 11191843.jpg | 1508 | 2048 | image/jpeg | 1girl black_hair black_ribbon blazer bloom blurry blurry_background book closed_mouth commentary depth_of_field fangs flower fukumaru_koito grey_jacket grey_skirt hair_down hair_flower hair_ornament highres holding holding_book hugging_book hugging_object idolmaster idolmaster_shiny_colors jacket lens_flare light_rays long_sleeves looking_at_viewer medium_hair neck_ribbon pink_eyes pleated_skirt qinglai_haiji ribbon school_uniform shirt sidelighting skin_fangs skirt smile solo sparkle sparkling_aura symbol-only_commentary wavy_hair white_shirt | 313690 | https://img3.gelbooru.com/images/81/95/81957e2dc97875dde99456b6ed4eb835.jpg |
| 11191842 | 11191842.jpg | 1920 | 1200 | image/jpeg | 1boy 2girls :3 :d apple_on_head arm_wrap arrow_through_apple baggy_pants black_gloves black_hair black_shirt blonde_hair blue_eyes blue_pants blush bow brown_jacket brown_pants brown_sash brown_shirt brown_skirt colored_skin copyright_notice creature crescent crop_top flower flower_on_head full_body fur-trimmed_sleeves fur_trim gloves green_skin guest_art gun gunslinger_(ragnarok_online) hair_bow hanbok highres holding holding_gun holding_weapon ice ice_cave in-universe_location jacket korean_clothes logo long_sleeves looking_at_another medium_hair multiple_girls navel ninja_(ragnarok_online) obi obijime official_art official_wallpaper open_clothes open_jacket open_mouth orc pants pink_bow pink_flower ragnarok_online sash shirt siroma sitting skirt sleeveless sleeveless_shirt smile soul_linker_(ragnarok_online) standing sui_hutami weapon | 1250973 | https://img3.gelbooru.com/images/b4/17/b41738db807d035a9a2dd8e06bddce8c.jpg |
| 11191841 | 11191841.jpg | 1910 | 2048 | image/jpeg | 2girls ? bare_shoulders black_gloves breasts brown_jacket cleavage clothing_cutout collared_shirt commentary dress gloves grey_hair highres jacket jewelry large_breasts long_hair long_sleeves multiple_girls necklace o_204 off_shoulder parted_bangs path_to_nowhere pricilla_(path_to_nowhere) purple_hair red_dress red_lips shirt simple_background strapless strapless_dress swept_bangs tetra_(path_to_nowhere) underboob_cutout upper_body white_background white_shirt | 373507 | https://img3.gelbooru.com/images/dc/65/dc65c57fef534bf69fa3b96dfc0837a2.jpg |
| 11191840 | 11191840.jpg | 1920 | 1200 | image/jpeg | 1girl 2boys armor armored_boots bandeau beret bikini bikini_bottom_only black_hair black_tabard blue_bikini blue_hat blush boots breastplate cape chibi copyright_notice cross detached_sleeves flat_chest full_body green_eyes green_hat guest_art gypsy_(ragnarok_online) hat highres hitting jewelry kankiriko leg_armor living_armor logo long_hair lord_knight_(ragnarok_online) multiple_boys navel necklace official_art official_wallpaper open_mouth pauldrons poop_on_head purple_hair ragnarok_online red_cape red_sleeves running shaded_face shoulder_armor solo_focus swimsuit sword_guardian tabard white_bandeau | 1685423 | https://img3.gelbooru.com/images/4d/16/4d160a96e4e268ba4b7d371b7c947c5a.jpg |
| 11191839 | 11191839.jpg | 3096 | 4096 | image/jpeg | 1girl absurdres anna_o_renko bandeau bikini blurry blurry_background blush breasts earrings elbow_gloves elle_(vtuber) fur-trimmed_gloves fur-trimmed_headwear fur_trim gloves hat highres hoop_earrings indie_virtual_youtuber jewelry large_breasts looking_at_viewer medium_breasts navel reaching reaching_towards_viewer red_bikini red_hair revealing_clothes santa_bikini santa_gloves santa_hat skindentation smile solo swimsuit twitter_username virtual_youtuber yellow_eyes | 776765 | https://img3.gelbooru.com/images/c7/b4/c7b4ae4c490365e6fd23a37fc0c9b175.jpg |
| 11191838 | 11191838.jpg | 4440 | 7761 | image/jpeg | 1girl absurdres alternate_color arm_under_breasts armpit_crease artist_name ascot bare_shoulders blurry blurry_background bottomless censored check_clothing christmas_tree closed_mouth clothing_request commentary convenient_censoring covered_eyes cup drinking_glass english_commentary english_text feather_boa fingernails flower hair_ribbon hat hat_flower herta_(honkai:_star_rail) highres holding holding_cup honkai:_star_rail honkai_(series) indoors jewelry key key_necklace lilpalette long_hair merry_christmas nail_polish necklace red_flower red_hat red_nails red_ribbon ribbon smile solo the_herta_(honkai:_star_rail) very_long_hair white_ascot witch_hat | 9162960 | https://img3.gelbooru.com/images/03/93/0393d5cbc0e589baee262b5ae1f302de.jpg |
| 11191837 | 11191837.jpg | 1189 | 2048 | image/jpeg | animal_print bat_print breasts burning burnt collarbone covered_erect_nipples covered_pussy darkstalkers fire green_hair head_wings highres large_breasts messy_hair midriff morrigan_aensland navel nude pantyhose print_pantyhose rokuba tagme torn_clothes wings | 241410 | https://img3.gelbooru.com/images/a3/a6/a3a6d416b9c5f962412d839152a86345.jpg |
| 11191836 | 11191836.jpg | 2894 | 4093 | image/jpeg | 1girl :d absurdres blush bow brown_eyes brown_hair coat commentary_request eyelashes hair_bow happy high_ponytail highres inukai_komugi jacket long_hair looking_at_viewer one_eye_closed open_mouth ponytail precure rice_rabbit22 scarf smile snow snowman solo white_jacket winter_clothes winter_coat wonderful_precure! | 1625799 | https://img3.gelbooru.com/images/05/b3/05b3a84394b6e680826e4756c70d1af9.jpg |
| 11191835 | 11191835.jpg | 1920 | 1200 | image/jpeg | 3boys 3girls animal_ears armor assassin_(ragnarok_online) assassin_cross_(ragnarok_online) bandages black_gloves black_leotard black_shirt black_wings blunt_bangs bob_cut bow bow_bra bra breasts brown_gloves brown_hair brown_jacket castle cat_ears cat_girl cherry_blossoms cleavage closed_mouth copyright_notice dagger dark-skinned_female dark_skin demon_girl demon_horns demon_wings dog_boy dog_ears elbow_gloves falling fingerless_gloves fish_in_mouth food fox_mask gloves green_eyes grey_eyes grey_hair grin guest_art head_wings highres holding holding_dagger holding_knife holding_weapon horns jacket jamadhar knife large_breasts leotard logo long_hair long_sleeves looking_at_viewer mask mask_on_head meat medium_breasts multiple_boys multiple_girls official_art official_wallpaper one_eye_closed open_mouth pauldrons pointy_ears purple_shirt ragnarok_online red_bow red_scarf scarf shirt short_hair shoulder_armor shrug_(clothing) sideboob sleeveless sleeveless_shirt small_breasts smile thief_(ragnarok_online) torn_clothes torn_scarf underwear upper_body vambraces weapon white_bra wings yuikawa_kazuno | 1428894 | https://img3.gelbooru.com/images/19/4a/194a6d01a2c0c0247e8e7b6d15744f0c.jpg |
| 11191834 | 11191834.jpg | 2894 | 4093 | image/jpeg | 1girl absurdres animal_ear_fluff animal_ear_piercing animal_ears black_bow black_bowtie black_choker black_skirt blouse bow bowtie brooch cat_ears cat_girl cat_tail choker chromatic_aberration collared_shirt crossed_bangs enya_yoruneko flying_sweatdrops frilled_shirt frilled_skirt frills green_eyes hair_bow heart heart_brooch highres indie_virtual_youtuber jewelry leaning_forward light_blush long_hair looking_at_viewer multicolored_hair pink_hair pink_tail plunplun puffy_short_sleeves puffy_sleeves shirt short_sleeves skirt smile solo streaked_hair suspender_skirt suspenders tail virtual_youtuber white_hair white_shirt | 1346220 | https://img3.gelbooru.com/images/ac/7c/ac7c2284d81e38a484313f5fb2fc7465.jpg |
| 11191833 | 11191833.png | 4303 | 6314 | image/png | 1girl absurdres ass ass_focus back black_choker blending breasts ceres_fauna choker colored_inner_hair commentary cowboy_shot cross-laced_clothes cross-laced_dress dress earrings english_commentary frilled_dress frills from_behind greatodoggo green_hair hair_over_one_eye hair_over_shoulder hand_on_own_chest highres hololive hololive_english horns huge_ass jewelry large_breasts long_hair looking_at_viewer mole mole_under_eye multicolored_hair negative_space one_eye_covered panties parted_lips short_dress sideboob skindentation smile solo thick_thighs thighhighs thighs tree_horns underwear virtual_youtuber white_background white_dress white_hair white_panties white_thighhighs yellow_eyes | 6700316 | https://img3.gelbooru.com/images/f5/a0/f5a036874d1805474906e7e31966bf7b.png |
| 11191832 | 11191832.jpg | 1248 | 1135 | image/jpeg | 1other burning exposed_heart fire headless mask najashi oni_mask original rope skeleton standing undead white_rope | 242779 | https://img3.gelbooru.com/images/4e/cc/4eccaf1f020fece510abe916ec25430b.jpg |
| 11191831 | 11191831.jpeg | 1000 | 1000 | image/jpeg | 1boy 1girl agpro back-to-back black_hair blue_hair closed_mouth creatures_(company) dress eyelashes game_freak grey_eyes hand_up hat misty_(pokemon) nintendo petals pokemon pokemon_adventures red_(pokemon) shirt short_hair smile upper_body | 876912 | https://img3.gelbooru.com/images/4c/90/4c904a5209c1c207aa07d70e03583bd7.jpeg |
| 11191830 | 11191830.jpeg | 699 | 992 | image/jpeg | 1boy 1girl aged_up agpro bad_tag blush breasts closed_mouth creatures_(company) game_freak gym_leader hair_between_eyes looking_at_viewer looking_back misty_(pokemon) nintendo pokemon pokemon_adventures red_(pokemon) shirt short_hair smile | 251624 | https://img3.gelbooru.com/images/37/b5/37b59d6bbc46c52d9ce575a1ec4ac53f.jpeg |
| 11191829 | 11191829.png | 2500 | 3125 | image/png | 1girl all_fours animal_ears bare_shoulders black_thighhighs bleach breasts cat_ears cat_paws collarbone dae_dae dark-skinned_female dark_skin fake_animal_ears fake_tail female_focus hair_ornament hanging_breasts highres large_breasts lips long_hair panties parted_bangs ponytail purple_hair shihouin_yoruichi shiny_skin simple_background skindentation tail thighhighs thighs underwear white_background yellow_eyes | 2195504 | https://img3.gelbooru.com/images/11/2a/112a7124e50bc594fa9cd061e707c003.png |
| 11191828 | 11191828.jpeg | 2507 | 3541 | image/jpeg | 1girl aftersex akan_mori bikini blonde_hair character_request cum cum_in_pussy dot_nose eyelashes female_focus gloves looking_at_viewer open_mouth pink_hair pink_theme ponytail pussy smile solo sweat swimsuit thick_thighs thighhighs thighs v white_bikini white_gloves white_thighhighs | 585834 | https://img3.gelbooru.com/images/44/52/4452b3447c6676572805642b1bbc71ed.jpeg |
| 11191827 | 11191827.png | 3069 | 2170 | image/png | 1boy 4girls absurdres adventurer ahoge arm_behind_head armor belt bikini_armor black_hair blonde_hair blue_eyes breasts closed_eyes collar commission commissioner_upload dagger elf flat_chest glasses green_eyes hair_bun highres hilda_(someinternetloser) knife large_breasts loincloth medium_breasts monika_tatsumi multiple_girls nipples original pointy_ears ponytail purple_hair red_eyes red_hair shield short_hair slave staff sword tattoo_on_neck the_dark_mangaka topless topless_male warrior weapon witch yellow_eyes | 5497138 | https://img3.gelbooru.com/images/84/93/8493d5c5db2721fd856a2d0d3d4b1bfe.png |
| 11191826 | 11191826.jpeg | 1000 | 1414 | image/jpeg | 2girls blood christmas gift_bag loli multiple_girls no_panties nosebleed onii-chan_wa_oshimai! oyama_mahiro | 374780 | https://img3.gelbooru.com/images/21/2b/212b47ee1726af6044e1a19a6a364015.jpeg |
| 11191825 | 11191825.jpeg | 2859 | 1953 | image/jpeg | 2girls arms_under_breasts artist_name bare_legs black_hair black_jacket black_pants blannis blush breasts closed_mouth coat creatures_(company) crossed_arms crossed_legs dendra_(pokemon) embarrassed eye_contact eyebrows eyelashes eyeshadow feet_out_of_frame female_focus fingerless_gloves fingernails from_side game_freak gloves green_eyes hand_in_pocket huge_breasts implied_yuri indoors jacket large_breasts leaning_forward lips long_fingernails long_sleeves looking_at_another makeup medium_hair miriam_(pokemon) multicolored_hair multicolored_jacket multiple_girls nail_polish nervous nervous_sweating nintendo nose_blush orange_eyes pants parted_lips pink_eyeshadow pink_hair pokemon pokemon_sv ponytail purple_hair purple_skirt sidelocks sitting skirt streaked_hair sweat sweater table teeth track_jacket two-tone_hair two-tone_jacket two-tone_pants wavy_mouth white_coat yellow_jacket yellow_nails yellow_pants yellow_sweater | 602270 | https://img3.gelbooru.com/images/b1/a5/b1a5b076fe70e3b1dfc819160733dd7a.jpeg |
| 11191824 | 11191824.jpeg | 1351 | 2048 | image/jpeg | 1girl blue_hair blush boots collar crop_top earrings eyeliner heart jewelry long_hair makeup nintendo octoling red_eyes shiver_(splatoon) sitting skirt smile solo splatoon_(series) splatoon_3 tentacle_hair | 229399 | https://img3.gelbooru.com/images/b9/b1/b9b15820baaead6c7441369132b0fd77.jpeg |
| 11191823 | 11191823.jpeg | 1900 | 2600 | image/jpeg | crossover dark-skinned_female dark_skin green_hair hands_behind_own_head iggybomb jungle_de_ikou mecha_pilot_suit mii_(jungle_de_ikou) neon_genesis_evangelion plugsuit shiny_clothes shiny_skin smile tan | 254683 | https://img3.gelbooru.com/images/bc/94/bc945eab8fe815fc06b64e6a5b45df7c.jpeg |
| 11191822 | 11191822.jpeg | 1792 | 2600 | image/jpeg | breasts cleavage crossover eiken gigantic_breasts hand_on_own_hip huge_breasts iggybomb long_hair mecha_pilot_suit misono_kirika neon_genesis_evangelion plugsuit purple_eyes purple_hair salute shiny_clothes sidelocks smile v very_long_hair wide_hips | 291791 | https://img3.gelbooru.com/images/b8/b4/b8b4dca4d61f8bc15be6293ff35f70b0.jpeg |
| 11191821 | 11191821.jpeg | 1884 | 2600 | image/jpeg | blonde_hair breasts crossover hellsing huge_breasts iggybomb large_breasts mecha_pilot_suit medium_hair neon_genesis_evangelion plugsuit salute seras_victoria | 264715 | https://img3.gelbooru.com/images/99/d3/99d342cf945304401337cfd03ec8c0b6.jpeg |
| 11191820 | 11191820.jpeg | 4096 | 1936 | image/jpeg | blonde_hair blue_eyes breasts cleavage crossover dark-skinned_female dark_skin eiken gigantic_breasts green_hair hand_on_own_hip hands_behind_own_head hellsing huge_breasts iggybomb jungle_de_ikou large_breasts long_hair mecha_pilot_suit medium_hair mii_(jungle_de_ikou) misono_kirika multiple_crossover neon_genesis_evangelion plugsuit purple_eyes purple_hair salute seras_victoria shiny_clothes sidelocks smile tan v very_long_hair wide_hips | 545268 | https://img3.gelbooru.com/images/01/19/0119d42770015a5b58c0bb4323af30d9.jpeg |
| 11191819 | 11191819.jpeg | 3541 | 2507 | image/jpeg | 1girl akan_mori blonde_hair blue_background blue_flower breasts fang female_focus fingerless_gloves flower gloves hood horns japanese_text large_breasts looking_at_viewer open_mouth purple_eyes skin_fang smile solo translation_request v | 789262 | https://img3.gelbooru.com/images/63/1d/631d8b6c472a66179d18ec5a8b4777c1.jpeg |
| 11191818 | 11191818.jpeg | 1000 | 1000 | image/jpeg | 3girls animal_costume antlers bell chloe_von_einzbern christmas fate/kaleid_liner_prisma_illya fate_(series) hanagata_kai horns illyasviel_von_einzbern looking_at_viewer miyu_edelfelt multiple_girls reindeer_antlers reindeer_costume ribbon | 330059 | https://img3.gelbooru.com/images/08/dc/08dc542046dc19486ec9d187c70dca61.jpeg |
| 11191817 | 11191817.jpeg | 768 | 1024 | image/jpeg | 1girl adversarial_noise blunt_bangs blush collarbone grey_hair humanization kokei_hakai long_hair looking_at_viewer marie_(splatoon) mole mole_under_eye nintendo open_mouth solo splatoon_(series) splatoon_3 strapless suspenders tube_top upper_body yellow_eyes | 117805 | https://img3.gelbooru.com/images/a7/c8/a7c812144b3be77a4b6614196c92eb32.jpeg |
| 11191816 | 11191816.png | 2800 | 2600 | image/png | 1boy 1girl absurdly_long_hair absurdres akari_(pokemon) arms_behind_back ass bdsm black_bodysuit black_eyes blue_hair blue_jacket blush bodysuit bondage bound bound_wrists breasts creatures_(company) crotch_rope cursed_delta drooling fellatio female_focus fur-trimmed_sleeves fur_trim game_freak headband headscarf highres huge_ass jacket large_breasts long_hair long_sleeves looking_at_viewer looking_to_the_side looking_up multiple_views nintendo nipples obi obijime open_clothes open_jacket oral penis pokemon pokemon_legends:_arceus red_scarf rope saliva sash scarf seductive_gaze seductive_smile shibari shibari_over_clothes short_sleeves sidelocks simple_background skin_tight skindentation smile solo_focus thick_thighs thighs very_long_hair white_background white_headband white_headscarf | 3176723 | https://img3.gelbooru.com/images/7f/18/7f18952ec046882bb5772a1adf336e71.png |
| 11191815 | 11191815.png | 2800 | 2600 | image/png | 1boy 1girl absurdly_long_hair absurdres akari_(pokemon) arms_behind_back ass ball_gag bdsm black_bodysuit black_eyes blue_hair blue_jacket blush bodysuit bondage bound bound_wrists breasts creatures_(company) crotch_rope cursed_delta drooling fellatio female_focus fur-trimmed_sleeves fur_trim gag gagged game_freak headband headscarf highres huge_ass jacket large_breasts long_hair long_sleeves looking_at_viewer looking_to_the_side looking_up multiple_views nintendo nipples obi obijime open_clothes open_jacket oral penis pokemon pokemon_legends:_arceus red_scarf rope saliva sash scarf seductive_gaze seductive_smile shibari shibari_over_clothes short_sleeves sidelocks simple_background skin_tight skindentation smile solo_focus thick_thighs thighs very_long_hair white_background white_headband white_headscarf | 2794396 | https://img3.gelbooru.com/images/bd/1e/bd1e33b5310b354cd4b9bd1fe5ef19d2.png |
| 11191814 | 11191814.png | 2800 | 2600 | image/png | 1boy 1girl absurdly_long_hair absurdres akari_(pokemon) arms_behind_back ass ball_gag bdsm black_bodysuit black_eyes blue_hair blue_jacket blush bodysuit bondage bound bound_wrists breasts creatures_(company) crotch_rope cursed_delta drooling fellatio female_focus fur-trimmed_sleeves fur_trim gag gagged game_freak headband headscarf highres huge_ass jacket large_breasts long_hair long_sleeves looking_at_viewer looking_to_the_side looking_up multiple_views nintendo obi obijime oral penis pokemon pokemon_legends:_arceus red_scarf rope saliva sash scarf seductive_gaze seductive_smile shibari shibari_over_clothes short_sleeves sidelocks simple_background skin_tight skindentation smile solo_focus thick_thighs thighs very_long_hair white_background white_headband white_headscarf | 2776945 | https://img3.gelbooru.com/images/29/c5/29c578f9d1bcfb82470bab61f0c11e9c.png |
| 11191813 | 11191813.png | 2894 | 4093 | image/png | 1girl absurdres animal_ear_piercing animal_ears ball bare_shoulders beachball bikini bikini_top_only bow breasts brown_hair brown_tail camouflage_bikini_top collarbone commentary_request cowlick double_bikini eyewear_on_head fangs full_body gold_necklace hair_ornament hairclip highres horse_ears horse_girl horse_tail jewelry jungle_pocket_(umamusume) layered_bikini light_blush looking_at_viewer navel nebusoku necklace open_mouth sidelocks sky small_breasts solo swimsuit tail umamusume water water_drop wet yellow_bikini yellow_eyes | 4551495 | https://img3.gelbooru.com/images/b0/3c/b03cbbe84d24b14211e74c3c25477c02.png |
| 11191812 | 11191812.jpeg | 1277 | 1381 | image/jpeg | 1girl child cool-kyou_shinja dragon_girl dragon_horns dragon_tail dress hat horns kanna_kamui kobayashi-san_chi_no_maidragon official_art santa_costume santa_dress santa_hat speech_bubble tail | 177509 | https://img3.gelbooru.com/images/54/63/5463202c29fa1e71219f7225670fb487.jpeg |
| 11191811 | 11191811.jpg | 2970 | 4096 | image/jpeg | 1girl absurdres animal_ears artist_name black_pants blue_coat branch coat creature feet_out_of_frame fox_ears fox_girl grey_scarf hair_between_eyes hair_ornament hairclip hat highres holding holding_creature light_particles looking_at_viewer muted_color outdoors pants phase_connect purple_eyes santa_hat scarf short_hair sitting snow snowing swing taku_artworks tenma_maemi theo_(tenma_maemi) watermark white_hair winter winter_clothes | 856135 | https://img3.gelbooru.com/images/e3/63/e363f0ca671d10baa7653d1c4938756f.jpg |
| 11191810 | 11191810.jpg | 1320 | 1978 | image/jpeg | 18dikart 5girls animal_ears apricot_the_lich bipsygg black_hair blonde_hair blue_eyes blush box breasts cat_tail christmas_tree commentary demon_horns dress dyarikku_(vtuber) english_commentary fur-trimmed_dress fur-trimmed_headwear fur_trim green_eyes grey_hair hat highres horns huge_breasts in_box in_container indie_virtual_youtuber long_hair looking_at_viewer mini_person minigirl multicolored_hair multiple_girls obkatiekat pink_hair purple_horns red_dress santa_dress santa_hat second-party_source smile solo solo_focus tail twintails twitter_username two-tone_hair virtual_youtuber vshojo yellow_eyes | 362979 | https://img3.gelbooru.com/images/64/cd/64cd174856c93216ca37f5b9947add02.jpg |
## Tags
There are 969173 tags in total.
These are the top 30 tags (125 tags in total) of type `unknown (-1)`:
| id | name | type | count | ambiguous |
|-----:|:--------------------------------------------------------------------------------------------------------------------------------------------|-------:|--------:|:------------|
| -1 | straightchromia | -1 | 263 | False |
| -1 | gekijigen_tag:_blanc_+_neptune_vs_zombie_gundan | -1 | 171 | False |
| -1 | seiki_kyushu | -1 | 23 | False |
| -1 | toyotaro | -1 | 15 | False |
| -1 | ensemble_stars!;character:akehoshi_subaru;happy_elements;male | -1 | 9 | False |
| -1 | _double_dash!! | -1 | 7 | False |
| -1 | dash!! | -1 | 7 | False |
| -1 | fubuki_kyoko | -1 | 7 | False |
| -1 | mario_k | -1 | 7 | False |
| -1 | star_\(symbol\) | -1 | 7 | False |
| -1 | \// | -1 | 6 | False |
| -1 | shrug_\(clothing\) | -1 | 6 | False |
| -1 | € | -1 | 6 | False |
| -1 | kami-sama_onegai!_onii-chan_no_aka-chan_ninshin_shitai_no!_~tsundere_imouto_&_seijun_imouto_to_ecchi_na_kiseki_de_trouble_kozukuri_zanmai♪~ | -1 | 5 | False |
| -1 | slime_\(creature\) | -1 | 5 | False |
| -1 | \\// | -1 | 4 | False |
| -1 | akizuki_rasenn | -1 | 4 | False |
| -1 | juju_(pixiv4563634) | -1 | 4 | False |
| -1 | pom_pom_\(clothes\) | -1 | 4 | False |
| -1 | source:https://nijie.info/view.php?id=151930 | -1 | 4 | False |
| -1 | +\l.l./+_(path_to_nowhere) | -1 | 3 | False |
| -1 | handing_breasts | -1 | 3 | False |
| -1 | star_\(sky\) | -1 | 3 | False |
| -1 | /tm | -1 | 2 | False |
| -1 | compl\pussy | -1 | 2 | False |
| -1 | mahitoⅶ | -1 | 2 | False |
| -1 | to_heart:_remember_my_memories | -1 | 2 | False |
| -1 | ulquiorra_schiffer | -1 | 2 | False |
| -1 | violet_plan | -1 | 2 | False |
| -1 | "artist: | -1 | 1 | False |
These are the top 30 tags (454442 tags in total) of type `general (0)`:
| id | name | type | count | ambiguous |
|-------:|:------------------|-------:|--------:|:------------|
| 152532 | 1girl | 0 | 6971160 | False |
| 12336 | solo | 0 | 5465193 | False |
| 265 | long_hair | 0 | 4666744 | True |
| 27 | breasts | 0 | 4408977 | False |
| 33975 | looking_at_viewer | 0 | 3646980 | False |
| 92 | blush | 0 | 3543354 | False |
| 796 | smile | 0 | 3091345 | False |
| 1100 | open_mouth | 0 | 2664446 | False |
| 52 | short_hair | 0 | 2404809 | False |
| 67 | blue_eyes | 0 | 2040706 | False |
| 28545 | simple_background | 0 | 1996572 | False |
| 21905 | large_breasts | 0 | 1992413 | False |
| 271 | blonde_hair | 0 | 1792287 | False |
| 1999 | shirt | 0 | 1781163 | False |
| 66 | black_hair | 0 | 1706716 | True |
| 337 | brown_hair | 0 | 1703242 | False |
| 179739 | white_background | 0 | 1633759 | False |
| 138893 | 1boy | 0 | 1621309 | False |
| 107 | skirt | 0 | 1589035 | False |
| 98 | gloves | 0 | 1424182 | True |
| 1864 | hair_ornament | 0 | 1422965 | False |
| 175068 | multiple_girls | 0 | 1412393 | False |
| 51 | red_eyes | 0 | 1392377 | True |
| 13957 | long_sleeves | 0 | 1371983 | False |
| 3477 | navel | 0 | 1345295 | False |
| 432 | nipples | 0 | 1327179 | False |
| 123 | dress | 0 | 1312217 | False |
| 23 | thighhighs | 0 | 1298307 | False |
| 6383 | holding | 0 | 1233592 | False |
| 153 | animal_ears | 0 | 1187147 | False |
These are the top 30 tags (281452 tags in total) of type `artist (1)`:
| id | name | type | count | ambiguous |
|--------:|:-------------------------|-------:|--------:|:------------|
| 46733 | qp:flapper | 1 | 15602 | False |
| 555502 | kagami_hirotaka | 1 | 8255 | False |
| 219408 | nel-zel_formula | 1 | 8087 | False |
| 594229 | ebifurya | 1 | 5771 | False |
| 719488 | aoi_nagisa_(metalder) | 1 | 5407 | False |
| 470499 | haruyama_kazunori | 1 | 5386 | False |
| 25270 | lolita_channel | 1 | 4910 | False |
| 401040 | hammer_(sunset_beach) | 1 | 4824 | False |
| 603058 | butcha-u | 1 | 4539 | False |
| 56027 | yaegashi_nan | 1 | 4460 | False |
| 67040 | piromizu | 1 | 4309 | False |
| 38088 | yoko_juusuke | 1 | 4149 | False |
| 21718 | drawfag | 1 | 4005 | False |
| 652987 | ruu_(tksymkw) | 1 | 3879 | False |
| 118829 | kanon_(kurogane_knights) | 1 | 3876 | False |
| 487842 | boris_(noborhys) | 1 | 3760 | False |
| 76506 | circle_anco | 1 | 3733 | False |
| 410 | azasuke | 1 | 3622 | False |
| 1128557 | kou_hiyoyo | 1 | 3409 | False |
| 380097 | matsunaga_kouyou | 1 | 3399 | False |
| 1069930 | tony_taka | 1 | 3397 | False |
| 481438 | itomugi-kun | 1 | 3288 | False |
| 729154 | naga_u | 1 | 3177 | False |
| 1051176 | hara_(harayutaka) | 1 | 3069 | False |
| 569895 | ojipon | 1 | 3047 | False |
| 354817 | bow_(bhp) | 1 | 3023 | False |
| 445614 | blade_(galaxist) | 1 | 2989 | False |
| 355711 | rebecca_(keinelove) | 1 | 2960 | False |
| 14795 | awa | 1 | 2856 | True |
| 509171 | neocoill | 1 | 2814 | False |
These are the top 30 tags (39877 tags in total) of type `copyright (3)`:
| id | name | type | count | ambiguous |
|--------:|:----------------------------|-------:|--------:|:------------|
| 118 | original | 3 | 1271600 | False |
| 126 | touhou | 3 | 780895 | False |
| 44106 | nintendo | 3 | 622394 | False |
| 448625 | kantai_collection | 3 | 423460 | True |
| 43567 | pokemon | 3 | 360475 | False |
| 306228 | game_freak | 3 | 358171 | False |
| 875834 | creatures_(company) | 3 | 358153 | False |
| 342429 | fate_(series) | 3 | 331416 | False |
| 1037340 | blue_archive | 3 | 247426 | False |
| 545521 | fate/grand_order | 3 | 239095 | False |
| 86 | idolmaster | 3 | 235513 | True |
| 943985 | genshin_impact | 3 | 225557 | False |
| 865663 | hololive | 3 | 217524 | False |
| 44086 | vocaloid | 3 | 162662 | False |
| 705390 | love_live! | 3 | 150758 | False |
| 807658 | arknights | 3 | 135727 | False |
| 805915 | azur_lane | 3 | 130432 | False |
| 338818 | idolmaster_cinderella_girls | 3 | 120701 | False |
| 5074 | fire_emblem | 3 | 112653 | False |
| 924 | digimon | 3 | 110402 | False |
| 851189 | umamusume | 3 | 102438 | False |
| 238 | final_fantasy | 3 | 98925 | False |
| 878809 | honkai_(series) | 3 | 88965 | False |
| 247 | one_piece | 3 | 81736 | False |
| 374082 | girls_und_panzer | 3 | 66277 | False |
| 237493 | mahou_shoujo_madoka_magica | 3 | 64504 | False |
| 1048700 | hololive_english | 3 | 63359 | False |
| 845788 | nijisanji | 3 | 62314 | False |
| 1121184 | girls'_frontline | 3 | 61995 | False |
| 7 | gundam | 3 | 60344 | True |
These are the top 30 tags (189513 tags in total) of type `character (4)`:
| id | name | type | count | ambiguous |
|--------:|:-------------------------|-------:|--------:|:------------|
| 14087 | hatsune_miku | 4 | 110217 | False |
| 855 | hakurei_reimu | 4 | 60150 | False |
| 130 | kirisame_marisa | 4 | 50925 | False |
| 486 | flandre_scarlet | 4 | 46348 | False |
| 850 | remilia_scarlet | 4 | 40828 | False |
| 1141830 | artoria_pendragon_(fate) | 4 | 37556 | False |
| 849 | izayoi_sakuya | 4 | 34169 | False |
| 36382 | komeiji_koishi | 4 | 30410 | False |
| 848 | konpaku_youmu | 4 | 29152 | False |
| 1293 | cirno | 4 | 27961 | False |
| 127 | alice_margatroid | 4 | 26944 | False |
| 484 | patchouli_knowledge | 4 | 26748 | False |
| 14543 | kochiya_sanae | 4 | 26447 | False |
| 658 | yakumo_yukari | 4 | 25307 | False |
| 83295 | souryuu_asuka_langley | 4 | 23178 | False |
| 481 | shameimaru_aya | 4 | 22636 | False |
| 237491 | akemi_homura | 4 | 21506 | False |
| 847 | reisen_udongein_inaba | 4 | 21363 | False |
| 237492 | kaname_madoka | 4 | 21311 | False |
| 24290 | nami_(one_piece) | 4 | 20849 | False |
| 36383 | komeiji_satori | 4 | 20748 | False |
| 761745 | saber_(fate) | 4 | 20348 | False |
| 125 | fujiwara_no_mokou | 4 | 20203 | False |
| 493 | saigyouji_yuyuko | 4 | 20145 | False |
| 14131 | kagamine_rin | 4 | 20139 | False |
| 15099 | inubashiri_momiji | 4 | 20050 | False |
| 804254 | artoria_pendragon_(all) | 4 | 20043 | False |
| 1059472 | ganyu_(genshin_impact) | 4 | 18785 | False |
| 881 | link | 4 | 18193 | False |
| 2335 | tifa_lockhart | 4 | 17440 | False |
These are the top 30 tags (407 tags in total) of type `metadata (5)`:
| id | name | type | count | ambiguous |
|--------:|:-----------------------|-------:|--------:|:------------|
| 262 | highres | 5 | 5951899 | False |
| 559 | absurdres | 5 | 1992877 | False |
| 150649 | commentary_request | 5 | 1188848 | False |
| 136261 | bad_id | 5 | 831072 | False |
| 760546 | bad_pixiv_id | 5 | 698226 | False |
| 25266 | commentary | 5 | 653731 | False |
| 136 | translation_request | 5 | 522206 | False |
| 1306 | official_art | 5 | 324500 | False |
| 52372 | tagme | 5 | 300150 | False |
| 23213 | artist_request | 5 | 247617 | False |
| 831896 | english_commentary | 5 | 243145 | False |
| 69 | game_cg | 5 | 189273 | True |
| 209468 | commission | 5 | 155812 | False |
| 13710 | translated | 5 | 155707 | False |
| 2229 | lowres | 5 | 113509 | False |
| 47252 | character_request | 5 | 110757 | False |
| 755092 | bad_twitter_id | 5 | 103829 | False |
| 19982 | traditional_media | 5 | 84434 | False |
| 789724 | chinese_commentary | 5 | 83837 | False |
| 888401 | non-web_source | 5 | 73076 | False |
| 853984 | third-party_edit | 5 | 64375 | False |
| 426 | scan | 5 | 64344 | False |
| 66551 | copyright_request | 5 | 63246 | False |
| 323949 | revision | 5 | 56986 | False |
| 1139885 | symbol-only_commentary | 5 | 56930 | False |
| 1034097 | skeb_commission | 5 | 53527 | False |
| 1223605 | paid_reward_available | 5 | 44609 | False |
| 191513 | md5_mismatch | 5 | 39240 | False |
| 2481 | source_request | 5 | 38997 | False |
| 63275 | huge_filesize | 5 | 34145 | True |
These are the top 30 tags (3357 tags in total) of type `deprecated (6)`:
| id | name | type | count | ambiguous |
|--------:|:------------------|-------:|--------:|:------------|
| 275 | silver_hair | 6 | 336682 | False |
| 205 | striped | 6 | 264451 | False |
| 24219 | see-through | 6 | 160050 | False |
| 286 | uniform | 6 | 159560 | True |
| 214641 | grabbing | 6 | 145175 | False |
| 3404 | plaid | 6 | 137228 | False |
| 889718 | black_headwear | 6 | 112049 | False |
| 264199 | black_legwear | 6 | 102146 | False |
| 56035 | light_brown_hair | 6 | 87604 | False |
| 850221 | white_headwear | 6 | 82957 | False |
| 264263 | white_legwear | 6 | 80489 | False |
| 47407 | looking_away | 6 | 65435 | False |
| 2818 | ass_grab | 6 | 59394 | False |
| 918581 | red_headwear | 6 | 41346 | False |
| 82157 | french_braid | 6 | 36466 | False |
| 54997 | multiple_penises | 6 | 36286 | False |
| 853048 | blue_headwear | 6 | 35686 | False |
| 17164 | breast_hold | 6 | 35279 | False |
| 18310 | vertical_stripes | 6 | 34542 | False |
| 1048560 | light_purple_hair | 6 | 33258 | False |
| 26046 | arm_grab | 6 | 26380 | False |
| 712440 | red_neckwear | 6 | 26251 | False |
| 488916 | oni_horns | 6 | 25061 | False |
| 670 | wallpaper | 6 | 24516 | False |
| 268269 | torn_legwear | 6 | 21955 | False |
| 51586 | screencap | 6 | 20524 | False |
| 842746 | green_headwear | 6 | 20207 | False |
| 918583 | brown_headwear | 6 | 20205 | False |
| 265499 | striped_legwear | 6 | 19590 | False |
| 20303 | turret | 6 | 17887 | False |
|
mlabonne/FineTome-100k | mlabonne | "2024-07-29T09:52:30Z" | 9,002 | 144 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-27T18:34:47Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 239650960.7474458
num_examples: 100000
download_size: 116531415
dataset_size: 239650960.7474458
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# FineTome-100k
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/75I3ffI4XnRlheOQ7kNJ3.jpeg)
The FineTome dataset is a subset of [arcee-ai/The-Tome](https://huggingface.co./datasets/arcee-ai/The-Tome) (without arcee-ai/qwen2-72b-magpie-en), re-filtered using [HuggingFaceFW/fineweb-edu-classifier](https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier).
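Since each record carries a `score` field produced by the re-filtering pass, the dataset can be subsampled further by thresholding that score. A minimal sketch (the records below are illustrative stand-ins following the card's schema, not real rows, and the threshold is arbitrary):

```python
def filter_by_score(records, min_score):
    """Keep only records whose classifier score meets the threshold."""
    return [r for r in records if r["score"] >= min_score]

# Illustrative stand-ins for FineTome rows (conversations/source/score schema).
records = [
    {"conversations": [{"from": "human", "value": "Explain quicksort."}],
     "source": "example-source", "score": 4.2},
    {"conversations": [{"from": "human", "value": "hi"}],
     "source": "example-source", "score": 1.1},
]

high_quality = filter_by_score(records, min_score=3.0)
print(len(high_quality))  # 1
```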
It was made for my article ["Fine-tune Llama 3.1 Ultra-Efficiently with Unsloth"](https://huggingface.co./blog/mlabonne/sft-llama3). |
laion/laion-audio-preview | laion | "2024-12-04T09:09:14Z" | 8,987 | 10 | [
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:audio",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-11-30T09:06:10Z" | ---
license: apache-2.0
---
|
lmms-lab/textvqa | lmms-lab | "2024-03-08T05:07:57Z" | 8,921 | 8 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-01-16T12:28:43Z" | ---
dataset_info:
features:
- name: image_id
dtype: string
- name: question_id
dtype: int32
- name: question
dtype: string
- name: question_tokens
sequence: string
- name: image
dtype: image
- name: image_width
dtype: int32
- name: image_height
dtype: int32
- name: flickr_original_url
dtype: string
- name: flickr_300k_url
dtype: string
- name: answers
sequence: string
- name: image_classes
sequence: string
- name: set_name
dtype: string
- name: ocr_tokens
sequence: string
splits:
- name: train
num_bytes: 9839776032.652
num_examples: 34602
- name: validation
num_bytes: 1438831837.0
num_examples: 5000
- name: test
num_bytes: 1712000724.844
num_examples: 5734
download_size: 8097805782
dataset_size: 12990608594.496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
<p align="center" width="100%">
<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
</p>
# Large-scale Multi-modality Models Evaluation Suite
> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co./lmms-lab)
# This Dataset
This is a formatted version of [TextVQA](https://textvqa.org/). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
```
@inproceedings{singh2019towards,
title={Towards vqa models that can read},
author={Singh, Amanpreet and Natarajan, Vivek and Shah, Meet and Jiang, Yu and Chen, Xinlei and Batra, Dhruv and Parikh, Devi and Rohrbach, Marcus},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={8317--8326},
year={2019}
}
```
|
OpenDriveLab/OpenScene | OpenDriveLab | "2024-04-22T18:51:08Z" | 8,902 | 4 | [
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"modality:text",
"region:us"
] | null | "2024-03-02T04:33:04Z" | ---
license: cc-by-nc-sa-4.0
--- |
common-canvas/commoncatalog-cc-by | common-canvas | "2024-05-16T19:01:29Z" | 8,887 | 26 | [
"task_categories:text-to-image",
"language:en",
"license:cc-by-4.0",
"size_categories:10M<n<100M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2310.16825",
"region:us"
] | [
"text-to-image"
] | "2024-04-22T18:07:35Z" | ---
license: cc-by-4.0
dataset_info:
features:
- name: jpg
dtype: image
- name: blip2_caption
dtype: string
- name: caption
dtype: string
- name: licensename
dtype: string
- name: licenseurl
dtype: string
- name: width
dtype: int32
- name: height
dtype: int32
- name: original_width
dtype: int32
- name: original_height
dtype: int32
- name: photoid
dtype: int64
- name: uid
dtype: string
- name: unickname
dtype: string
- name: datetaken
dtype: timestamp[us]
- name: dateuploaded
dtype: int64
- name: capturedevice
dtype: string
- name: title
dtype: string
- name: usertags
dtype: string
- name: machinetags
dtype: string
- name: longitude
dtype: float64
- name: latitude
dtype: float64
- name: accuracy
dtype: int64
- name: pageurl
dtype: string
- name: downloadurl
dtype: string
- name: serverid
dtype: int64
- name: farmid
dtype: int64
- name: secret
dtype: string
- name: secretoriginal
dtype: string
- name: ext
dtype: string
- name: url
dtype: string
- name: key
dtype: string
- name: status
dtype: string
- name: error_message
dtype: string
- name: exif
dtype: string
- name: sha256
dtype: string
- name: description
dtype: string
task_categories:
- text-to-image
language:
- en
---
# Dataset Card for CommonCatalog CC-BY
This dataset is a large collection of high-resolution Creative Common images (composed of different licenses, see paper Table 1 in the Appendix) collected in 2014 from users of Yahoo Flickr.
The dataset contains images of up to 4k resolution, making this one of the highest resolution captioned image datasets.
## Dataset Details
### Dataset Description
We provide synthetic captions for approximately 100 million high-resolution images collected from Yahoo Flickr Creative Commons (YFCC).
- **Curated by:** Aaron Gokaslan
- **Language(s) (NLP):** en
- **License:** See relevant yaml tag / dataset name.
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/mosaicml/diffusion
- **Paper:** https://arxiv.org/abs/2310.16825
- **Demo:** See CommonCanvas Gradios
## Uses
We use CommonCatalog to train a family of latent diffusion models called CommonCanvas.
The goal is to produce a model that is competitive with Stable Diffusion 2, but to do so using an easily accessible dataset of known provenance.
Doing so makes replicating the model significantly easier, and provides a clearer mechanism for applying training-data attribution techniques.
### Direct Use
Training text-to-image models
Training image-to-text models
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
* Crafting content that is offensive or injurious towards individuals, including negative portrayals of their living conditions, cultural backgrounds, religious beliefs, etc.
* Deliberately creating or spreading content that is discriminatory or reinforces harmful stereotypes.
* Falsely representing individuals without their permission.
* Generating sexual content that may be seen by individuals without their consent.
* Producing or disseminating false or misleading information.
* Creating content that depicts extreme violence or bloodshed.
* Distributing content that modifies copyrighted or licensed material in a way that breaches its usage terms.
## Dataset Structure
The dataset is divided into 10 subsets, each containing parquet files of about 4 GB. Each subfolder within a subset covers a resolution range of the images and their respective aspect ratios.
The dataset is also divided along images licensed for commercial use (C) and those that are not (NC).
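The commercial/non-commercial split can be reproduced from the `licensename` column. A rough sketch of that routing (the exact license strings in the parquet files may differ from these guesses):

```python
def is_commercial_use(licensename: str) -> bool:
    """True if a CC license string permits commercial use, i.e. has no NC clause."""
    return "nc" not in licensename.lower().split("-")

print(is_commercial_use("cc-by-4.0"))        # True
print(is_commercial_use("cc-by-nc-sa-2.0"))  # False
```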
## Dataset Creation
### Curation Rationale
Creating a standardized, accessible dataset with synthetic captions and releasing it so that others can train on a common dataset for open-source image generation.
### Source Data
Yahoo Flickr Creative Commons 100M Dataset and Synthetically Generated Caption Data.
#### Data Collection and Processing
All synthetic captions were generated with BLIP2. See paper for more details.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
Users of Flickr
## Bias, Risks, and Limitations
See the Yahoo Flickr Creative Commons 100M dataset for more information. The information was collected circa 2014 and is known to have a bias towards internet-connected Western countries. Some areas, such as the Global South, lack representation.
## Citation
**BibTeX:**
```
@article{gokaslan2023commoncanvas,
title={CommonCanvas: An Open Diffusion Model Trained with Creative-Commons Images},
author={Gokaslan, Aaron and Cooper, A Feder and Collins, Jasmine and Seguin, Landan and Jacobson, Austin and Patel, Mihir and Frankle, Jonathan and Stephenson, Cory and Kuleshov, Volodymyr},
journal={arXiv preprint arXiv:2310.16825},
year={2023}
}
```
## Dataset Card Authors
[Aaron Gokaslan](https://huggingface.co./Skylion007)
## Dataset Card Contact
[Aaron Gokaslan](https://huggingface.co./Skylion007)
|
togethercomputer/RedPajama-Data-1T-Sample | togethercomputer | "2023-07-19T06:59:10Z" | 8,885 | 124 | [
"task_categories:text-generation",
"language:en",
"size_categories:100K<n<1M",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2023-04-16T23:12:30Z" | ---
task_categories:
- text-generation
language:
- en
pretty_name: Red Pajama 1T Sample
---
# Dataset Card for Dataset Name
### Dataset Summary
RedPajama is a clean-room, fully open-source implementation of the LLaMa dataset.
This HuggingFace repo contains a 1B-token sample of the RedPajama dataset.
The full dataset has the following token counts and is available for [download](https://huggingface.co./datasets/togethercomputer/RedPajama-Data-1T):
| Dataset | Token Count |
|---------------|-------------|
| Commoncrawl | 878 Billion |
| C4 | 175 Billion |
| GitHub | 59 Billion |
| Books | 26 Billion |
| ArXiv | 28 Billion |
| Wikipedia | 24 Billion |
| StackExchange | 20 Billion |
| Total | 1.2 Trillion |
A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
### Languages
Primarily English, though the Wikipedia slice contains multiple languages.
## Dataset Structure
The dataset structure is as follows:
```
{
"text": ...,
"meta": {"url": "...", "timestamp": "...", "source": "...", "language": "...", ...}
}
```
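Records can be routed or filtered by their `meta` fields. A small sketch using in-memory stand-ins for the JSON records above (the sample values are illustrative, not real rows):

```python
from collections import Counter

def count_by_source(records):
    """Tally records by the `source` key in their metadata."""
    return Counter(r["meta"].get("source", "unknown") for r in records)

sample = [
    {"text": "def add(a, b):\n    return a + b",
     "meta": {"source": "github", "language": "Python"}},
    {"text": "The quick brown fox jumps over the lazy dog.",
     "meta": {"source": "c4", "url": "https://example.com"}},
    {"text": "We prove the following theorem.",
     "meta": {"source": "arxiv"}},
    {"text": "print('hello')",
     "meta": {"source": "github"}},
]

print(count_by_source(sample))  # Counter({'github': 2, 'c4': 1, 'arxiv': 1})
```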
## Dataset Creation
This dataset was created to follow the LLaMa paper as closely as possible to try to reproduce its recipe.
### Source Data
#### Commoncrawl
We download five dumps from Commoncrawl, and run the dumps through the official `cc_net` pipeline.
We then deduplicate on the paragraph level, and filter out low quality text using a linear classifier trained to
classify paragraphs as Wikipedia references or random Commoncrawl samples.
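The paragraph-level deduplication step can be sketched with a hash set that keeps the first occurrence of each paragraph (the quality classifier is omitted; this is an illustration, not the pipeline's actual code):

```python
import hashlib

def dedup_paragraphs(documents):
    """Remove exact-duplicate paragraphs across a corpus, keeping first occurrences."""
    seen = set()
    out = []
    for doc in documents:
        kept = []
        for para in doc.split("\n\n"):
            # Normalize lightly before hashing so trivial variants collide.
            h = hashlib.sha1(para.strip().lower().encode("utf-8")).hexdigest()
            if h not in seen:
                seen.add(h)
                kept.append(para)
        out.append("\n\n".join(kept))
    return out

docs = ["Intro.\n\nShared footer.", "Body text.\n\nShared footer."]
print(dedup_paragraphs(docs))  # ['Intro.\n\nShared footer.', 'Body text.']
```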
#### C4
C4 is downloaded from Huggingface. The only preprocessing step is to bring the data into our own format.
#### GitHub
The raw GitHub data is downloaded from Google BigQuery. We deduplicate on the file level and filter out low quality
files and only keep projects that are distributed under the MIT, BSD, or Apache license.
#### Wikipedia
We use the Wikipedia dataset available on Huggingface, which is based on the Wikipedia dump from 2023-03-20 and contains
text in 20 different languages. The dataset comes in preprocessed format, so that hyperlinks, comments and other
formatting boilerplate have been removed.
#### Gutenberg and Books3
The PG19 subset of the Gutenberg Project and Books3 datasets are downloaded from Huggingface. After downloading, we use
simhash to remove near duplicates.
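Near-duplicate detection with SimHash fingerprints documents and compares Hamming distance between fingerprints. A minimal sketch of the idea (not the exact implementation used here):

```python
import hashlib

def simhash(text, bits=64):
    """Compute a SimHash fingerprint over whitespace-separated tokens."""
    v = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode("utf-8")).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

a = simhash("The Project Gutenberg eBook of Pride and Prejudice")
b = simhash("the project gutenberg ebook of pride and prejudice")
print(hamming(a, b))  # 0 -- case-folded duplicates collide exactly
```

Documents whose fingerprints fall within a small Hamming distance of each other would be treated as near duplicates.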
#### ArXiv
ArXiv data is downloaded from Amazon S3 in the `arxiv` requester pays bucket. We only keep latex source files and
remove preambles, comments, macros and bibliographies.
#### Stackexchange
The Stack Exchange split of the dataset is downloaded from the
[Internet Archive](https://archive.org/download/stackexchange). Here we only keep the posts from the 28 largest sites,
remove html tags, group the posts into question-answer pairs, and order answers by their score.
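Grouping posts into question-answer pairs with answers ordered by score can be sketched as follows (field names and records are illustrative, not the pipeline's schema):

```python
def build_qa_pair(question, answers):
    """Pair a question with its answers, highest-scored answer first."""
    ranked = sorted(answers, key=lambda a: a["score"], reverse=True)
    return {"question": question, "answers": [a["text"] for a in ranked]}

q = "How do I reverse a list in Python?"
answers = [
    {"text": "Use reversed(xs).", "score": 12},
    {"text": "Use xs[::-1].", "score": 40},
    {"text": "Call xs.reverse() in place.", "score": 25},
]
pair = build_qa_pair(q, answers)
print(pair["answers"][0])  # Use xs[::-1].
```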
<!--
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
--> |
mlfoundations/MINT-1T-PDF-CC-2023-14 | mlfoundations | "2024-09-19T21:07:39Z" | 8,851 | 1 | [
"task_categories:image-to-text",
"task_categories:text-generation",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"arxiv:2406.11271",
"region:us",
"multimodal"
] | [
"image-to-text",
"text-generation"
] | "2024-07-12T05:44:44Z" | ---
license: cc-by-4.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- multimodal
pretty_name: MINT-1T
size_categories:
- 100B<n<1T
---
<h1 align="center">
🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump `CC-2023-14`. For other PDF, HTML, and ArXiv subsets, refer to the [🍃 MINT-1T collection](https://huggingface.co./collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).
![Examples](interleaved-example-twitter.png)
## Updates
### 9/19/24
We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.
### 8/8/24
We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.
## Dataset Details
### Dataset Sources
- **Repository**: https://github.com/mlfoundations/MINT-1T
- **Paper:** https://arxiv.org/abs/2406.11271
- **Blog:** https://blog.salesforceairesearch.com/mint-1t/
## Uses
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used for training multimodal models that can reason about interleaved text-and-image sequences, such as [Idefics2](https://huggingface.co./HuggingFaceM4/idefics2-8b), [XGen-MM](https://huggingface.co./Salesforce/xgen-mm-phi3-mini-instruct-r-v1), and [Chameleon](https://huggingface.co./facebook/chameleon-30b).
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
🍃 MINT-1T was built to make research into large multimodal models more accessible. Inappropriate use cases include training models that ingest or generate personally identifying information (such as images of people’s faces and other sensitive content), as well as any military application.
## Dataset Creation
### Curation Rationale
🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.
### Source Data
The dataset is a comprehensive collection of multimodal documents from various sources:
- HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
- PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
- ArXiv documents: A subset of papers from the ArXiv repository
In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:
- 1029.4 million HTML documents
- 24.0 million PDF documents
- 0.6 million ArXiv documents
#### Data Collection and Processing
The data collection and processing involved several steps:
1. Document Extraction:
- HTML documents were parsed from CommonCrawl WARC files
- PDF documents were extracted from CommonCrawl WAT files
- ArXiv papers were directly sourced from ArXiv S3 buckets
2. Filtering Process:
- Applied text quality filters to ensure content relevance and readability
- Removed duplicate content at both paragraph and document levels
- Filtered out undesirable content based on predefined criteria
- Verified image availability and quality for HTML documents
- Limited PDF size to 50MB and 50 pages to manage dataset size and quality
3. Image Processing:
- Used NSFW image detection to remove pornographic or otherwise undesirable images
- Removed images smaller than 150 pixels or larger than 20,000 pixels
- Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
4. Text Processing:
- Used fasttext for language identification, focusing on English content
- Masked personally identifiable information such as email addresses and IP addresses
- Applied paragraph and document-level deduplication using Bloom filters
5. PDF Specific Processing:
- Used PyMuPDF for parsing PDFs and extracting reading order
- Clustered text blocks based on columns and ordered from top left to bottom right
6. ArXiv Specific Processing:
- Used TexSoup to parse LaTeX source code and interleave images with text
- Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags
Various open-source tools were utilized in this process, including fasttext, [PyMuPDF](https://github.com/pymupdf/PyMuPDF), and [DCLM](https://www.datacomp.ai/dclm/) and [bff](https://github.com/revbucket/bff) for deduplication and content filtering.
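The image-processing step above states concrete thresholds. A sketch of that filter (a simplification for illustration, not the pipeline's actual code):

```python
def keep_image(width, height, doc_type):
    """Apply MINT-1T's size and aspect-ratio image filters (sketch).

    Drops images with a side below 150 px or above 20,000 px, and enforces
    the per-source aspect-ratio cap: 2:1 for HTML, 3:1 for PDF documents.
    """
    if min(width, height) < 150 or max(width, height) > 20000:
        return False
    max_ratio = {"html": 2.0, "pdf": 3.0}[doc_type]
    return max(width, height) / min(width, height) <= max_ratio

print(keep_image(800, 600, "html"))   # True
print(keep_image(1700, 600, "html"))  # False -- ratio ~2.8 exceeds 2:1
print(keep_image(1700, 600, "pdf"))   # True  -- wide scientific figures allowed up to 3:1
```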
#### Personal and Sensitive Information
Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:
- Email addresses and IP addresses were masked to protect privacy
- An NSFW image classifier was used to remove inappropriate visual content
- URLs containing substrings associated with undesirable or sensitive content were filtered out
However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
## Bias, Risks, and Limitations
Several potential biases, risks, and limitations have been identified:
1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.
2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.
3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.
4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.
5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.
### Recommendations
Given these considerations, the following recommendations are provided:
1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.
2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.
3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.
## License
We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.
## Citation
```
@article{awadalla2024mint1t,
title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens},
author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
year={2024}
}
``` |
fixie-ai/peoples_speech | fixie-ai | "2024-08-11T17:26:01Z" | 8,844 | 2 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-05T18:35:01Z" | ---
dataset_info:
- config_name: clean
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: validation
num_bytes: 2511523987.692
num_examples: 18622
- name: test
num_bytes: 4259695510.794
num_examples: 34898
- name: train
num_bytes: 401646320552.671
num_examples: 1501271
download_size: 398922548670
dataset_size: 408417540051
- config_name: dirty_sa
features:
- name: id
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: duration_ms
dtype: int32
- name: text
dtype: string
- name: continuation
dtype: string
splits:
- name: train
num_bytes: 144432442623.054
num_examples: 548014
- name: validation
num_bytes: 2511524241.692
num_examples: 18622
- name: test
num_bytes: 4259695588.794
num_examples: 34898
download_size: 149491764186
dataset_size: 151203662453.53998
configs:
- config_name: clean
data_files:
- split: validation
path: clean/validation-*
- split: test
path: clean/test-*
- split: train
path: data/train-*
- config_name: dirty_sa
data_files:
- split: train
path: dirty_sa/train-*
- split: validation
path: dirty_sa/validation-*
- split: test
path: dirty_sa/test-*
---
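As with other large audio datasets on the Hub, the splits above can be streamed rather than downloaded in full. A minimal sketch, assuming the `clean` config from the metadata above; `ms_to_seconds` and `preview_clean_validation` are illustrative helpers, not part of the dataset's tooling:

```python
def ms_to_seconds(duration_ms: int) -> float:
    """Convert the dataset's duration_ms field to seconds."""
    return duration_ms / 1000.0

def preview_clean_validation():
    """Stream one validation example without downloading the ~400 GB train split."""
    from datasets import load_dataset  # deferred import; requires `pip install datasets`
    ds = load_dataset("fixie-ai/peoples_speech", "clean", split="validation", streaming=True)
    sample = next(iter(ds))
    return sample["text"], ms_to_seconds(sample["duration_ms"])
```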
|
deepghs/danbooru2023_index | deepghs | "2024-12-08T09:18:58Z" | 8,831 | 3 | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:text-to-image",
"language:en",
"language:ja",
"license:mit",
"size_categories:1M<n<10M",
"region:us"
] | [
"image-classification",
"image-to-image",
"text-to-image"
] | "2024-04-21T09:24:05Z" | ---
license: mit
task_categories:
- image-classification
- image-to-image
- text-to-image
language:
- en
- ja
size_categories:
- 1M<n<10M
---
Tar index files for [nyanko7/danbooru2023](https://huggingface.co./datasets/nyanko7/danbooru2023).
You can download images from both [nyanko7/danbooru2023](https://huggingface.co./datasets/nyanko7/danbooru2023) and [deepghs/danbooru_newest](https://huggingface.co./datasets/deepghs/danbooru_newest) with [cheesechaser](https://github.com/deepghs/cheesechaser).
```python
from cheesechaser.datapool import DanbooruNewestDataPool
pool = DanbooruNewestDataPool()
# download danbooru original images from 7200000-7201000, to directory /data/danbooru_original
pool.batch_download_to_directory(
resource_ids=range(7200000, 7201000),
dst_dir='/data/danbooru_original',
max_workers=12,
)
```
|
MMMU/MMMU | MMMU | "2024-09-19T17:11:03Z" | 8,801 | 213 | [
"task_categories:question-answering",
"task_categories:visual-question-answering",
"task_categories:multiple-choice",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:2311.16502",
"region:us",
"biology",
"medical",
"finance",
"chemistry",
"music",
"art",
"art_theory",
"design",
"business",
"accounting",
"economics",
"manage",
"marketing",
"health",
"medicine",
"basic_medical_science",
"clinical",
"pharmacy",
"public_health",
"humanities",
"social_science",
"history",
"literature",
"sociology",
"psychology",
"science",
"geography",
"math",
"physics",
"engineering",
"agriculture",
"architecture",
"computer_science",
"electronics",
"energy_and_power",
"materials",
"mechanical_engineering"
] | [
"question-answering",
"visual-question-answering",
"multiple-choice"
] | "2023-11-27T17:52:01Z" | ---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
pretty_name: mmmu
dataset_info:
- config_name: Accounting
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262599.0
num_examples: 5
- name: validation
num_bytes: 1598285.0
num_examples: 30
- name: test
num_bytes: 22135625.0
num_examples: 380
download_size: 37363379
dataset_size: 23996509.0
- config_name: Agriculture
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 22082656.0
num_examples: 5
- name: validation
num_bytes: 119217558.0
num_examples: 30
- name: test
num_bytes: 993664077.0
num_examples: 287
download_size: 1158036990
dataset_size: 1134964291.0
- config_name: Architecture_and_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 137750.0
num_examples: 5
- name: validation
num_bytes: 721378.0
num_examples: 30
- name: test
num_bytes: 16054607.0
num_examples: 551
download_size: 48763955
dataset_size: 16913735.0
- config_name: Art
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 6241184.0
num_examples: 5
- name: validation
num_bytes: 29934534.0
num_examples: 30
- name: test
num_bytes: 237801390.0
num_examples: 231
download_size: 585798641
dataset_size: 273977108.0
- config_name: Art_Theory
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 7435106.0
num_examples: 5
- name: validation
num_bytes: 33481558.0
num_examples: 30
- name: test
num_bytes: 553174647.0
num_examples: 429
download_size: 930525695
dataset_size: 594091311.0
- config_name: Basic_Medical_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 814310.0
num_examples: 5
- name: validation
num_bytes: 4125930.0
num_examples: 30
- name: test
num_bytes: 48125891.0
num_examples: 326
download_size: 84666454
dataset_size: 53066131.0
- config_name: Biology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 574342.0
num_examples: 5
- name: validation
num_bytes: 8491863.0
num_examples: 30
- name: test
num_bytes: 132966151.0
num_examples: 345
download_size: 410242502
dataset_size: 142032356.0
- config_name: Chemistry
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 262397.0
num_examples: 5
- name: validation
num_bytes: 1518573.0
num_examples: 30
- name: test
num_bytes: 37219529.0
num_examples: 603
download_size: 108345562
dataset_size: 39000499.0
- config_name: Clinical_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1467945.0
num_examples: 5
- name: validation
num_bytes: 10882484.0
num_examples: 30
- name: test
num_bytes: 98201863.0
num_examples: 325
download_size: 160611488
dataset_size: 110552292.0
- config_name: Computer_Science
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 440523.0
num_examples: 5
- name: validation
num_bytes: 2072018.0
num_examples: 30
- name: test
num_bytes: 32047381.0
num_examples: 371
download_size: 55640991
dataset_size: 34559922.0
- config_name: Design
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2259873.0
num_examples: 5
- name: validation
num_bytes: 17923120.0
num_examples: 30
- name: test
num_bytes: 77676331.0
num_examples: 169
download_size: 142866617
dataset_size: 97859324.0
- config_name: Diagnostics_and_Laboratory_Medicine
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2056117.0
num_examples: 5
- name: validation
num_bytes: 37106233.0
num_examples: 30
- name: test
num_bytes: 157003069.0
num_examples: 162
download_size: 603957093
dataset_size: 196165419.0
- config_name: Economics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 171434.0
num_examples: 5
- name: validation
num_bytes: 1487048.0
num_examples: 30
- name: test
num_bytes: 11852300.0
num_examples: 267
download_size: 20777635
dataset_size: 13510782.0
- config_name: Electronics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 123632.0
num_examples: 5
- name: validation
num_bytes: 641377.0
num_examples: 30
- name: test
num_bytes: 5717686.0
num_examples: 256
download_size: 11602832
dataset_size: 6482695.0
- config_name: Energy_and_Power
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 105006.0
num_examples: 5
- name: validation
num_bytes: 1641935.0
num_examples: 30
- name: test
num_bytes: 14748428.0
num_examples: 432
download_size: 35246567
dataset_size: 16495369.0
- config_name: Finance
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 296124.0
num_examples: 5
- name: validation
num_bytes: 1071060.0
num_examples: 30
- name: test
num_bytes: 12065803.0
num_examples: 355
download_size: 29551521
dataset_size: 13432987.0
- config_name: Geography
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1494060.0
num_examples: 5
- name: validation
num_bytes: 6671316.0
num_examples: 30
- name: test
num_bytes: 137218400.0
num_examples: 565
download_size: 374766631
dataset_size: 145383776.0
- config_name: History
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1444231.0
num_examples: 5
- name: validation
num_bytes: 8819857.0
num_examples: 30
- name: test
num_bytes: 115228815.0
num_examples: 278
download_size: 232549641
dataset_size: 125492903.0
- config_name: Literature
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 2451201.0
num_examples: 5
- name: validation
num_bytes: 14241046.0
num_examples: 30
- name: test
num_bytes: 50301541.0
num_examples: 112
download_size: 132145895
dataset_size: 66993788.0
- config_name: Manage
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 449514.0
num_examples: 5
- name: validation
num_bytes: 3277436.0
num_examples: 30
- name: test
num_bytes: 29963963.0
num_examples: 245
download_size: 51186888
dataset_size: 33690913.0
- config_name: Marketing
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 116960.0
num_examples: 5
- name: validation
num_bytes: 1472981.0
num_examples: 30
- name: test
num_bytes: 7732976.0
num_examples: 181
download_size: 13146078
dataset_size: 9322917.0
- config_name: Materials
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 239632.0
num_examples: 5
- name: validation
num_bytes: 2305223.0
num_examples: 30
- name: test
num_bytes: 25256854.0
num_examples: 458
download_size: 105773156
dataset_size: 27801709.0
- config_name: Math
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 175839.0
num_examples: 5
- name: validation
num_bytes: 1444496.0
num_examples: 30
- name: test
num_bytes: 27701845.0
num_examples: 505
download_size: 174098418
dataset_size: 29322180.0
- config_name: Mechanical_Engineering
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 152542.0
num_examples: 5
- name: validation
num_bytes: 874988.0
num_examples: 30
- name: test
num_bytes: 15093746.0
num_examples: 429
download_size: 30450114
dataset_size: 16121276.0
- config_name: Music
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 1417615.0
num_examples: 5
- name: validation
num_bytes: 9359372.0
num_examples: 30
- name: test
num_bytes: 134096770.0
num_examples: 334
download_size: 174725052
dataset_size: 144873757.0
- config_name: Pharmacy
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 207924.0
num_examples: 5
- name: validation
num_bytes: 1656342.0
num_examples: 30
- name: test
num_bytes: 31866248.0
num_examples: 430
download_size: 62721263
dataset_size: 33730514.0
- config_name: Physics
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 233734.0
num_examples: 5
- name: validation
num_bytes: 1114130.0
num_examples: 30
- name: test
num_bytes: 15905705.0
num_examples: 408
download_size: 35238571
dataset_size: 17253569.0
- config_name: Psychology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 600864.0
num_examples: 5
- name: validation
num_bytes: 4403886.0
num_examples: 30
- name: test
num_bytes: 53813915.0
num_examples: 305
download_size: 102466671
dataset_size: 58818665.0
- config_name: Public_Health
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 234781.0
num_examples: 5
- name: validation
num_bytes: 1508761.0
num_examples: 30
- name: test
num_bytes: 32150088.0
num_examples: 509
download_size: 48231609
dataset_size: 33893630.0
- config_name: Sociology
features:
- name: id
dtype: string
- name: question
dtype: string
- name: options
dtype: string
- name: explanation
dtype: string
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: image_3
dtype: image
- name: image_4
dtype: image
- name: image_5
dtype: image
- name: image_6
dtype: image
- name: image_7
dtype: image
- name: img_type
dtype: string
- name: answer
dtype: string
- name: topic_difficulty
dtype: string
- name: question_type
dtype: string
- name: subfield
dtype: string
splits:
- name: dev
num_bytes: 3769220.0
num_examples: 5
- name: validation
num_bytes: 18455336.0
num_examples: 30
- name: test
num_bytes: 144301123.0
num_examples: 252
download_size: 310313826
dataset_size: 166525679.0
configs:
- config_name: Accounting
data_files:
- split: dev
path: Accounting/dev-*
- split: validation
path: Accounting/validation-*
- split: test
path: Accounting/test-*
- config_name: Agriculture
data_files:
- split: dev
path: Agriculture/dev-*
- split: validation
path: Agriculture/validation-*
- split: test
path: Agriculture/test-*
- config_name: Architecture_and_Engineering
data_files:
- split: dev
path: Architecture_and_Engineering/dev-*
- split: validation
path: Architecture_and_Engineering/validation-*
- split: test
path: Architecture_and_Engineering/test-*
- config_name: Art
data_files:
- split: dev
path: Art/dev-*
- split: validation
path: Art/validation-*
- split: test
path: Art/test-*
- config_name: Art_Theory
data_files:
- split: dev
path: Art_Theory/dev-*
- split: validation
path: Art_Theory/validation-*
- split: test
path: Art_Theory/test-*
- config_name: Basic_Medical_Science
data_files:
- split: dev
path: Basic_Medical_Science/dev-*
- split: validation
path: Basic_Medical_Science/validation-*
- split: test
path: Basic_Medical_Science/test-*
- config_name: Biology
data_files:
- split: dev
path: Biology/dev-*
- split: validation
path: Biology/validation-*
- split: test
path: Biology/test-*
- config_name: Chemistry
data_files:
- split: dev
path: Chemistry/dev-*
- split: validation
path: Chemistry/validation-*
- split: test
path: Chemistry/test-*
- config_name: Clinical_Medicine
data_files:
- split: dev
path: Clinical_Medicine/dev-*
- split: validation
path: Clinical_Medicine/validation-*
- split: test
path: Clinical_Medicine/test-*
- config_name: Computer_Science
data_files:
- split: dev
path: Computer_Science/dev-*
- split: validation
path: Computer_Science/validation-*
- split: test
path: Computer_Science/test-*
- config_name: Design
data_files:
- split: dev
path: Design/dev-*
- split: validation
path: Design/validation-*
- split: test
path: Design/test-*
- config_name: Diagnostics_and_Laboratory_Medicine
data_files:
- split: dev
path: Diagnostics_and_Laboratory_Medicine/dev-*
- split: validation
path: Diagnostics_and_Laboratory_Medicine/validation-*
- split: test
path: Diagnostics_and_Laboratory_Medicine/test-*
- config_name: Economics
data_files:
- split: dev
path: Economics/dev-*
- split: validation
path: Economics/validation-*
- split: test
path: Economics/test-*
- config_name: Electronics
data_files:
- split: dev
path: Electronics/dev-*
- split: validation
path: Electronics/validation-*
- split: test
path: Electronics/test-*
- config_name: Energy_and_Power
data_files:
- split: dev
path: Energy_and_Power/dev-*
- split: validation
path: Energy_and_Power/validation-*
- split: test
path: Energy_and_Power/test-*
- config_name: Finance
data_files:
- split: dev
path: Finance/dev-*
- split: validation
path: Finance/validation-*
- split: test
path: Finance/test-*
- config_name: Geography
data_files:
- split: dev
path: Geography/dev-*
- split: validation
path: Geography/validation-*
- split: test
path: Geography/test-*
- config_name: History
data_files:
- split: dev
path: History/dev-*
- split: validation
path: History/validation-*
- split: test
path: History/test-*
- config_name: Literature
data_files:
- split: dev
path: Literature/dev-*
- split: validation
path: Literature/validation-*
- split: test
path: Literature/test-*
- config_name: Manage
data_files:
- split: dev
path: Manage/dev-*
- split: validation
path: Manage/validation-*
- split: test
path: Manage/test-*
- config_name: Marketing
data_files:
- split: dev
path: Marketing/dev-*
- split: validation
path: Marketing/validation-*
- split: test
path: Marketing/test-*
- config_name: Materials
data_files:
- split: dev
path: Materials/dev-*
- split: validation
path: Materials/validation-*
- split: test
path: Materials/test-*
- config_name: Math
data_files:
- split: dev
path: Math/dev-*
- split: validation
path: Math/validation-*
- split: test
path: Math/test-*
- config_name: Mechanical_Engineering
data_files:
- split: dev
path: Mechanical_Engineering/dev-*
- split: validation
path: Mechanical_Engineering/validation-*
- split: test
path: Mechanical_Engineering/test-*
- config_name: Music
data_files:
- split: dev
path: Music/dev-*
- split: validation
path: Music/validation-*
- split: test
path: Music/test-*
- config_name: Pharmacy
data_files:
- split: dev
path: Pharmacy/dev-*
- split: validation
path: Pharmacy/validation-*
- split: test
path: Pharmacy/test-*
- config_name: Physics
data_files:
- split: dev
path: Physics/dev-*
- split: validation
path: Physics/validation-*
- split: test
path: Physics/test-*
- config_name: Psychology
data_files:
- split: dev
path: Psychology/dev-*
- split: validation
path: Psychology/validation-*
- split: test
path: Psychology/test-*
- config_name: Public_Health
data_files:
- split: dev
path: Public_Health/dev-*
- split: validation
path: Public_Health/validation-*
- split: test
path: Public_Health/test-*
- config_name: Sociology
data_files:
- split: dev
path: Sociology/dev-*
- split: validation
path: Sociology/validation-*
- split: test
path: Sociology/test-*
tags:
- biology
- medical
- finance
- chemistry
- music
- art
- art_theory
- design
- music
- business
- accounting
- economics
- finance
- manage
- marketing
- health
- medicine
- basic_medical_science
- clinical
- pharmacy
- public_health
- humanities
- social_science
- history
- literature
- sociology
- psychology
- science
- biology
- chemistry
- geography
- math
- physics
- engineering
- agriculture
- architecture
- computer_science
- electronics
- energy_and_power
- materials
- mechanical_engineering
---
# MMMU (A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI)
[**🌐 Homepage**](https://mmmu-benchmark.github.io/) | [**🏆 Leaderboard**](https://mmmu-benchmark.github.io/#leaderboard) | [**🤗 Dataset**](https://huggingface.co./datasets/MMMU/MMMU/) | [**🤗 Paper**](https://huggingface.co./papers/2311.16502) | [**📖 arXiv**](https://arxiv.org/abs/2311.16502) | [**GitHub**](https://github.com/MMMU-Benchmark/MMMU)
## 🔔News
- **🛠️[2024-05-30]: Fixed duplicate option issues in Materials dataset items (validation_Materials_25; test_Materials_17, 242) and content error in validation_Materials_25.**
- **🛠️[2024-04-30]: Fixed missing "-" or "^" signs in Math dataset items (dev_Math_2, validation_Math_11, 12, 16; test_Math_8, 23, 43, 113, 164, 223, 236, 287, 329, 402, 498) and corrected option errors in validation_Math_2. If you encounter any issues with the dataset, please contact us promptly!**
- **🚀[2024-01-31]: We added Human Expert performance on the [Leaderboard](https://mmmu-benchmark.github.io/#leaderboard)!🌟**
- **🔥[2023-12-04]: Our evaluation server for the test set is now available on [EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview). We welcome all submissions and look forward to your participation! 😆**
## Dataset Details
### Dataset Description
We introduce MMMU: a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning. MMMU includes **11.5K meticulously collected multimodal questions** from college exams, quizzes, and textbooks, covering six core disciplines: Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, and Tech & Engineering. These questions span **30 subjects** and **183 subfields**, comprising **30 highly heterogeneous image types**, such as charts, diagrams, maps, tables, music sheets, and chemical structures. We believe MMMU will stimulate the community to build next-generation multimodal foundation models towards expert artificial general intelligence (AGI).
🎯 **We have released the full set of 150 development samples and 900 validation samples, as well as 10,500 test questions without their answers.**
The development set is used for few-shot/in-context learning, and the validation set is used for debugging models, selecting hyperparameters, or quick evaluations. The answers and explanations for the test set questions are withheld. You can submit your model's predictions for the **test set** on **[EvalAI](https://eval.ai/web/challenges/challenge-page/2179/overview)**.
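Each subject listed in the metadata above is its own config, so a split can be loaded directly with `datasets` (e.g. `load_dataset("MMMU/MMMU", "Math", split="validation")`). As a minimal sketch of turning one record into a lettered multiple-choice prompt — the field names `question`, `options`, and `answer`, and the stringified-list encoding of `options`, are assumptions for illustration, not guaranteed by this card:

```python
import ast

# Hypothetical record; the field names ("question", "options", "answer") and the
# stringified-list encoding of "options" are assumptions, not guaranteed by the card.
sample = {
    "question": "Which interval is shown in <image 1>?",
    "options": "['Major third', 'Perfect fourth', 'Minor sixth', 'Octave']",
    "answer": "B",
}

def build_prompt(record: dict) -> str:
    """Render a multiple-choice record as a lettered prompt string."""
    options = ast.literal_eval(record["options"])  # parse "['a', 'b', ...]" into a list
    letters = [chr(ord("A") + i) for i in range(len(options))]
    lines = [record["question"]] + [f"({l}) {o}" for l, o in zip(letters, options)]
    return "\n".join(lines)

print(build_prompt(sample))
```

The same helper can be applied row by row to a streamed split when building few-shot prompts from the dev set.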
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6230d750d93e84e233882dbc/2Ulh9yznm1dvISV4xJ_Ok.png)
### Dataset Creation
MMMU was created to challenge multimodal models with tasks that demand college-level subject knowledge and deliberate reasoning, pushing the boundaries of what these models can achieve in terms of expert-level perception and reasoning.
The data for the MMMU dataset was manually collected by a team of college students from various disciplines, using online sources, textbooks, and lecture materials.
- **Content:** The dataset contains 11.5K college-level problems across six broad disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Science, Tech & Engineering) and 30 college subjects.
- **Image Types:** The dataset includes 30 highly heterogeneous image types, such as charts, diagrams, maps, tables, music sheets, and chemical structures, interleaved with text.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6230d750d93e84e233882dbc/Mbf8O5lEH8I8czprch0AG.png)
## 🏆 Mini-Leaderboard
We show a mini-leaderboard here; please find more information in our paper or on the [**homepage**](https://mmmu-benchmark.github.io/).
| Model | Val (900) | Test (10.5K) |
|--------------------------------|:---------:|:------------:|
| Expert (Best) | 88.6 | - |
| Expert (Medium) | 82.6 | - |
| Expert (Worst) | 76.2 | - |
| GPT-4o* | **69.1** | - |
| Gemini 1.5 Pro* | 62.2 | - |
| InternVL2-Pro* | 62.0 | **55.7** |
| Gemini 1.0 Ultra* | 59.4 | - |
| Claude 3 Opus* | 59.4 | - |
| GPT-4V(ision) (Playground) | 56.8 | **55.7** |
| Reka Core* | 56.3 | - |
| Gemini 1.5 Flash* | 56.1 | - |
| SenseChat-Vision-0423-Preview* | 54.6 | 50.3 |
| Reka Flash* | 53.3 | - |
| Claude 3 Sonnet* | 53.1 | - |
| HPT Pro* | 52.0 | - |
| VILA1.5* | 51.9 | 46.9 |
| Qwen-VL-MAX* | 51.4 | 46.8 |
| InternVL-Chat-V1.2* | 51.6 | 46.2 |
| Skywork-VL* | 51.4 | 46.2 |
| LLaVA-1.6-34B* | 51.1 | 44.7 |
| Claude 3 Haiku* | 50.2 | - |
| Adept Fuyu-Heavy* | 48.3 | - |
| Gemini 1.0 Pro* | 47.9 | - |
| Marco-VL-Plus* | 46.2 | 44.3 |
| Yi-VL-34B* | 45.9 | 41.6 |
| Qwen-VL-PLUS* | 45.2 | 40.8 |
| HPT Air* | 44.0 | - |
| Reka Edge* | 42.8 | - |
| Marco-VL* | 41.2 | 40.4 |
| OmniLMM-12B* | 41.1 | 40.4 |
| Bunny-8B* | 43.3 | 39.0 |
| Bunny-4B* | 41.4 | 38.4 |
| Weitu-VL-1.0-15B* | - | 38.4 |
| InternLM-XComposer2-VL* | 43.0 | 38.2 |
| Yi-VL-6B* | 39.1 | 37.8 |
| InfiMM-Zephyr-7B* | 39.4 | 35.5 |
| InternVL-Chat-V1.1* | 39.1 | 35.3 |
| Math-LLaVA-13B* | 38.3 | 34.6 |
| SVIT* | 38.0 | 34.1 |
| MiniCPM-V* | 37.2 | 34.1 |
| MiniCPM-V-2* | 37.1 | - |
| Emu2-Chat* | 36.3 | 34.1 |
| BLIP-2 FLAN-T5-XXL | 35.4 | 34.0 |
| InstructBLIP-T5-XXL | 35.7 | 33.8 |
| LLaVA-1.5-13B | 36.4 | 33.6 |
| Bunny-3B* | 38.2 | 33.0 |
| Qwen-VL-7B-Chat | 35.9 | 32.9 |
| SPHINX* | 32.9 | 32.9 |
| mPLUG-OWL2* | 32.7 | 32.1 |
| BLIP-2 FLAN-T5-XL | 34.4 | 31.0 |
| InstructBLIP-T5-XL | 32.9 | 30.6 |
| Gemini Nano2* | 32.6 | - |
| CogVLM | 32.1 | 30.1 |
| Otter | 32.2 | 29.1 |
| LLaMA-Adapter2-7B | 29.8 | 27.7 |
| MiniGPT4-Vicuna-13B | 26.8 | 27.6 |
| Adept Fuyu-8B | 27.9 | 27.4 |
| Kosmos2 | 24.4 | 26.6 |
| OpenFlamingo2-9B | 28.7 | 26.3 |
| Frequent Choice | 22.1 | 23.9 |
| Random Choice | 26.8 | 25.8 |
*: results provided by the authors.
## Limitations
Despite its comprehensive nature, MMMU, like any benchmark, is not without limitations. The manual curation process, albeit thorough, may carry biases.
Moreover, the focus on college-level subjects may not by itself be a sufficient test for Expert AGI.
However, we believe that strong performance on MMMU is necessary for an Expert AGI, as it demonstrates broad and deep subject knowledge together with expert-level understanding and reasoning capabilities.
In future work, we plan to incorporate human evaluations into MMMU. This will provide a more grounded comparison between model capabilities and expert performance, shedding light on the proximity of current AI systems to achieving Expert AGI.
## Disclaimers
The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution.
Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to notify us. Upon verification, such samples will be promptly removed.
## Contact
- Xiang Yue: [email protected]
- Yu Su: [email protected]
- Wenhu Chen: [email protected]
## Citation
**BibTeX:**
```bibtex
@inproceedings{yue2023mmmu,
title={MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI},
author={Xiang Yue and Yuansheng Ni and Kai Zhang and Tianyu Zheng and Ruoqi Liu and Ge Zhang and Samuel Stevens and Dongfu Jiang and Weiming Ren and Yuxuan Sun and Cong Wei and Botao Yu and Ruibin Yuan and Renliang Sun and Ming Yin and Boyuan Zheng and Zhenzhu Yang and Yibo Liu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
booktitle={Proceedings of CVPR},
year={2024},
}
``` |
japanese-asr/whisper_transcriptions.reazon_speech_all.wer_10.0.vectorized | japanese-asr | "2024-09-17T13:53:02Z" | 8,754 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-12T10:10:35Z" | ---
dataset_info:
- config_name: subset_0
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44407083236
num_examples: 28889
download_size: 6430216790
dataset_size: 44407083236
- config_name: subset_1
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44089216600
num_examples: 28682
download_size: 6385763048
dataset_size: 44089216600
- config_name: subset_10
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43927652252
num_examples: 28577
download_size: 6336100250
dataset_size: 43927652252
- config_name: subset_100
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44365586824
num_examples: 28862
download_size: 6435201244
dataset_size: 44365586824
- config_name: subset_101
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44324247868
num_examples: 28835
download_size: 6431762006
dataset_size: 44324247868
- config_name: subset_102
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43821526656
num_examples: 28508
download_size: 6367882564
dataset_size: 43821526656
- config_name: subset_103
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44084293668
num_examples: 28679
download_size: 6363475471
dataset_size: 44084293668
- config_name: subset_104
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44042930672
num_examples: 28652
download_size: 6381242681
dataset_size: 44042930672
- config_name: subset_106
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43912140892
num_examples: 28567
download_size: 6343450605
dataset_size: 43912140892
- config_name: subset_107
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43931998624
num_examples: 28580
download_size: 6358400755
dataset_size: 43931998624
- config_name: subset_108
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44042913000
num_examples: 28652
download_size: 6405970862
dataset_size: 44042913000
- config_name: subset_109
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44642253680
num_examples: 29042
download_size: 6437990632
dataset_size: 44642253680
- config_name: subset_11
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44256762756
num_examples: 28791
download_size: 6393712860
dataset_size: 44256762756
- config_name: subset_110
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43889022688
num_examples: 28552
download_size: 6360561092
dataset_size: 43889022688
- config_name: subset_111
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44129144280
num_examples: 28708
download_size: 6408022759
dataset_size: 44129144280
- config_name: subset_112
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44041454396
num_examples: 28651
download_size: 6391629995
dataset_size: 44041454396
- config_name: subset_113
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44218161920
num_examples: 28766
download_size: 6397865173
dataset_size: 44218161920
- config_name: subset_114
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44311827300
num_examples: 28827
download_size: 6392228352
dataset_size: 44311827300
- config_name: subset_115
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43438751460
num_examples: 28259
download_size: 6261293593
dataset_size: 43438751460
- config_name: subset_116
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43895154544
num_examples: 28556
download_size: 6347517025
dataset_size: 43895154544
- config_name: subset_117
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43969041880
num_examples: 28604
download_size: 6375498562
dataset_size: 43969041880
- config_name: subset_118
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44027316104
num_examples: 28642
download_size: 6354466340
dataset_size: 44027316104
- config_name: subset_119
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44289059560
num_examples: 28812
download_size: 6416432647
dataset_size: 44289059560
- config_name: subset_12
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44293612564
num_examples: 28815
download_size: 6433586401
dataset_size: 44293612564
- config_name: subset_120
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44530056588
num_examples: 28969
download_size: 6437978882
dataset_size: 44530056588
- config_name: subset_121
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3074160
num_examples: 2
download_size: 556271
dataset_size: 3074160
- config_name: subset_122
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44136739628
num_examples: 28713
download_size: 6404302139
dataset_size: 44136739628
- config_name: subset_123
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44096634284
num_examples: 28687
download_size: 6389251368
dataset_size: 44096634284
- config_name: subset_124
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44001467124
num_examples: 28625
download_size: 6385493649
dataset_size: 44001467124
- config_name: subset_125
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44128863696
num_examples: 28708
download_size: 6364505444
dataset_size: 44128863696
- config_name: subset_126
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44382486420
num_examples: 28873
download_size: 6441197752
dataset_size: 44382486420
- config_name: subset_127
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44006092176
num_examples: 28628
download_size: 6361537304
dataset_size: 44006092176
- config_name: subset_128
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43759809728
num_examples: 28468
download_size: 6336544958
dataset_size: 43759809728
- config_name: subset_129
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44024331328
num_examples: 28640
download_size: 6359644430
dataset_size: 44024331328
- config_name: subset_13
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44357930276
num_examples: 28857
download_size: 6420201483
dataset_size: 44357930276
- config_name: subset_130
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44192010836
num_examples: 28749
download_size: 6422867143
dataset_size: 44192010836
- config_name: subset_131
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44099663532
num_examples: 28689
download_size: 6371664563
dataset_size: 44099663532
- config_name: subset_132
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44095360096
num_examples: 28686
download_size: 6383911332
dataset_size: 44095360096
- config_name: subset_133
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43730827940
num_examples: 28449
download_size: 6313519416
dataset_size: 43730827940
- config_name: subset_134
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44007518388
num_examples: 28629
download_size: 6389179458
dataset_size: 44007518388
- config_name: subset_135
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43707840616
num_examples: 28434
download_size: 6317643688
dataset_size: 43707840616
- config_name: subset_136
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44033774672
num_examples: 28646
download_size: 6373240832
dataset_size: 44033774672
- config_name: subset_137
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3074344
num_examples: 2
download_size: 557594
dataset_size: 3074344
- config_name: subset_138
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43841655788
num_examples: 28521
download_size: 6370669259
dataset_size: 43841655788
- config_name: subset_139
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43793963000
num_examples: 28490
download_size: 6351019624
dataset_size: 43793963000
- config_name: subset_14
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44413389620
num_examples: 28893
download_size: 6406524573
dataset_size: 44413389620
- config_name: subset_140
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43978329680
num_examples: 28610
download_size: 6341082690
dataset_size: 43978329680
- config_name: subset_141
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44030464856
num_examples: 28644
download_size: 6383471765
dataset_size: 44030464856
- config_name: subset_142
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43793910464
num_examples: 28490
download_size: 6348275681
dataset_size: 43793910464
- config_name: subset_143
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44448732656
num_examples: 28916
download_size: 6450504968
dataset_size: 44448732656
- config_name: subset_144
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43686238792
num_examples: 28420
download_size: 6334779676
dataset_size: 43686238792
- config_name: subset_145
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44141228568
num_examples: 28716
download_size: 6363170999
dataset_size: 44141228568
- config_name: subset_146
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43800179728
num_examples: 28494
download_size: 6358878988
dataset_size: 43800179728
- config_name: subset_147
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44294909712
num_examples: 28816
download_size: 6412779644
dataset_size: 44294909712
- config_name: subset_148
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43886264664
num_examples: 28550
download_size: 6377384251
dataset_size: 43886264664
- config_name: subset_149
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44030547976
num_examples: 28644
download_size: 6383895865
dataset_size: 44030547976
- config_name: subset_15
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44014998072
num_examples: 28634
download_size: 6373512015
dataset_size: 44014998072
- config_name: subset_150
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43899790252
num_examples: 28559
download_size: 6346605145
dataset_size: 43899790252
- config_name: subset_151
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43913769264
num_examples: 28568
download_size: 6389364151
dataset_size: 43913769264
- config_name: subset_152
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44493036076
num_examples: 28945
download_size: 6441659355
dataset_size: 44493036076
- config_name: subset_153
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4611236
num_examples: 3
download_size: 671590
dataset_size: 4611236
- config_name: subset_154
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43904573672
num_examples: 28562
download_size: 6353845259
dataset_size: 43904573672
- config_name: subset_155
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44414946296
num_examples: 28894
download_size: 6399004665
dataset_size: 44414946296
- config_name: subset_156
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43787907000
num_examples: 28486
download_size: 6361131234
dataset_size: 43787907000
- config_name: subset_157
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43838676140
num_examples: 28519
download_size: 6377464479
dataset_size: 43838676140
- config_name: subset_158
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43920002016
num_examples: 28572
download_size: 6365562506
dataset_size: 43920002016
- config_name: subset_159
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44079873808
num_examples: 28676
download_size: 6385289404
dataset_size: 44079873808
- config_name: subset_16
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44135044504
num_examples: 28712
download_size: 6367990267
dataset_size: 44135044504
- config_name: subset_160
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44261370184
num_examples: 28794
download_size: 6435970157
dataset_size: 44261370184
- config_name: subset_161
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44096758836
num_examples: 28687
download_size: 6411447660
dataset_size: 44096758836
- config_name: subset_162
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43901416400
num_examples: 28560
download_size: 6394315107
dataset_size: 43901416400
- config_name: subset_163
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44322671320
num_examples: 28834
download_size: 6421064852
dataset_size: 44322671320
- config_name: subset_164
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43978582144
num_examples: 28610
download_size: 6362813793
dataset_size: 43978582144
- config_name: subset_165
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44018298496
num_examples: 28636
download_size: 6376999923
dataset_size: 44018298496
- config_name: subset_166
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44251922632
num_examples: 28788
download_size: 6419837278
dataset_size: 44251922632
- config_name: subset_167
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44104251680
num_examples: 28692
download_size: 6408687778
dataset_size: 44104251680
- config_name: subset_168
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43790884880
num_examples: 28488
download_size: 6371985468
dataset_size: 43790884880
- config_name: subset_169
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6147752
num_examples: 4
download_size: 527132
dataset_size: 6147752
- config_name: subset_17
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44179626060
num_examples: 28741
download_size: 6410813569
dataset_size: 44179626060
- config_name: subset_170
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44238190244
num_examples: 28779
download_size: 6425085842
dataset_size: 44238190244
- config_name: subset_171
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43855344672
num_examples: 28530
download_size: 6351374612
dataset_size: 43855344672
- config_name: subset_172
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43744717352
num_examples: 28458
download_size: 6322671761
dataset_size: 43744717352
- config_name: subset_173
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43993634824
num_examples: 28620
download_size: 6324282823
dataset_size: 43993634824
- config_name: subset_174
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44381122280
num_examples: 28872
download_size: 6448679863
dataset_size: 44381122280
- config_name: subset_175
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44391843308
num_examples: 28879
download_size: 6448621992
dataset_size: 44391843308
- config_name: subset_176
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44158323572
num_examples: 28727
download_size: 6408233260
dataset_size: 44158323572
- config_name: subset_177
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44033693424
num_examples: 28646
download_size: 6415876282
dataset_size: 44033693424
- config_name: subset_178
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42685714068
num_examples: 27769
download_size: 6200737024
dataset_size: 42685714068
- config_name: subset_179
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42648659092
num_examples: 27745
download_size: 6171525632
dataset_size: 42648659092
- config_name: subset_18
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43836770424
num_examples: 28518
download_size: 6326151956
dataset_size: 43836770424
- config_name: subset_180
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42551809752
num_examples: 27682
download_size: 6168382243
dataset_size: 42551809752
- config_name: subset_181
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42434879408
num_examples: 27606
download_size: 6123055947
dataset_size: 42434879408
- config_name: subset_182
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42408752772
num_examples: 27589
download_size: 6152174336
dataset_size: 42408752772
- config_name: subset_183
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42966849416
num_examples: 27952
download_size: 6194170724
dataset_size: 42966849416
- config_name: subset_184
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42540803548
num_examples: 27675
download_size: 6179994976
dataset_size: 42540803548
- config_name: subset_185
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4610940
num_examples: 3
download_size: 510678
dataset_size: 4610940
- config_name: subset_186
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42273847412
num_examples: 27501
download_size: 6135274899
dataset_size: 42273847412
- config_name: subset_187
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42547162108
num_examples: 27679
download_size: 6140828239
dataset_size: 42547162108
- config_name: subset_188
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42513408276
num_examples: 27657
download_size: 6141115163
dataset_size: 42513408276
- config_name: subset_189
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42247299832
num_examples: 27484
download_size: 6114021654
dataset_size: 42247299832
- config_name: subset_19
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43870784704
num_examples: 28540
download_size: 6361457035
dataset_size: 43870784704
- config_name: subset_190
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42559396388
num_examples: 27687
download_size: 6144933007
dataset_size: 42559396388
- config_name: subset_191
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42727058440
num_examples: 27796
download_size: 6159613829
dataset_size: 42727058440
- config_name: subset_192
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42465891192
num_examples: 27626
download_size: 6137572406
dataset_size: 42465891192
- config_name: subset_193
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42687083448
num_examples: 27770
download_size: 6156875941
dataset_size: 42687083448
- config_name: subset_194
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43269701988
num_examples: 28149
download_size: 6279255539
dataset_size: 43269701988
- config_name: subset_195
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43100379428
num_examples: 28039
download_size: 6244533477
dataset_size: 43100379428
- config_name: subset_196
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43182000120
num_examples: 28092
download_size: 6246268592
dataset_size: 43182000120
- config_name: subset_197
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42488819788
num_examples: 27641
download_size: 6178356059
dataset_size: 42488819788
- config_name: subset_198
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43151315408
num_examples: 28072
download_size: 6236447434
dataset_size: 43151315408
- config_name: subset_199
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43020760060
num_examples: 27987
download_size: 6246173797
dataset_size: 43020760060
- config_name: subset_2
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43987672944
num_examples: 28616
download_size: 6372442472
dataset_size: 43987672944
- config_name: subset_20
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44078517716
num_examples: 28675
download_size: 6385824155
dataset_size: 44078517716
- config_name: subset_200
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43286671192
num_examples: 28160
download_size: 6280144588
dataset_size: 43286671192
- config_name: subset_201
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3073928
num_examples: 2
download_size: 379680
dataset_size: 3073928
- config_name: subset_202
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42862469264
num_examples: 27884
download_size: 6203880452
dataset_size: 42862469264
- config_name: subset_203
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42893042416
num_examples: 27904
download_size: 6220561824
dataset_size: 42893042416
- config_name: subset_204
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43033034108
num_examples: 27995
download_size: 6252547275
dataset_size: 43033034108
- config_name: subset_205
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43129968864
num_examples: 28058
download_size: 6242739407
dataset_size: 43129968864
- config_name: subset_206
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43139090800
num_examples: 28064
download_size: 6235515866
dataset_size: 43139090800
- config_name: subset_207
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43152809356
num_examples: 28073
download_size: 6283290397
dataset_size: 43152809356
- config_name: subset_208
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42942228856
num_examples: 27936
download_size: 6201443185
dataset_size: 42942228856
- config_name: subset_209
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 42900706308
num_examples: 27909
download_size: 6209468923
dataset_size: 42900706308
- config_name: subset_21
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3073968
num_examples: 2
download_size: 340735
dataset_size: 3073968
- config_name: subset_210
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43097615852
num_examples: 28037
download_size: 6250699366
dataset_size: 43097615852
- config_name: subset_211
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43345131936
num_examples: 28198
download_size: 6290127680
dataset_size: 43345131936
- config_name: subset_212
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43369720992
num_examples: 28214
download_size: 6322218871
dataset_size: 43369720992
- config_name: subset_213
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43273017772
num_examples: 28151
download_size: 6290984482
dataset_size: 43273017772
- config_name: subset_214
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43339017792
num_examples: 28194
download_size: 6291790140
dataset_size: 43339017792
- config_name: subset_215
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43148309288
num_examples: 28070
download_size: 6274426221
dataset_size: 43148309288
- config_name: subset_216
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43554083872
num_examples: 28334
download_size: 6316086000
dataset_size: 43554083872
- config_name: subset_217
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 6148384
num_examples: 4
download_size: 787021
dataset_size: 6148384
- config_name: subset_218
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43388064416
num_examples: 28226
download_size: 6284993121
dataset_size: 43388064416
- config_name: subset_219
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43400316424
num_examples: 28234
download_size: 6293046087
dataset_size: 43400316424
- config_name: subset_22
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44242802888
num_examples: 28782
download_size: 6406171080
dataset_size: 44242802888
- config_name: subset_220
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43246544032
num_examples: 28134
download_size: 6276081988
dataset_size: 43246544032
- config_name: subset_221
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43091341748
num_examples: 28033
download_size: 6246844874
dataset_size: 43091341748
- config_name: subset_222
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43282260444
num_examples: 28157
download_size: 6273569814
dataset_size: 43282260444
- config_name: subset_223
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43218862392
num_examples: 28116
download_size: 6267480974
dataset_size: 43218862392
- config_name: subset_53
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43020592356
num_examples: 27987
download_size: 6237193214
dataset_size: 43020592356
- config_name: subset_105
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43217581324
num_examples: 28115
download_size: 6241162732
dataset_size: 43217581324
- config_name: subset_23
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44087547940
num_examples: 28681
download_size: 6378825677
dataset_size: 44087547940
- config_name: subset_24
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44250388180
num_examples: 28787
download_size: 6399288392
dataset_size: 44250388180
- config_name: subset_25
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44682379040
num_examples: 29068
download_size: 6472664846
dataset_size: 44682379040
- config_name: subset_26
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43988774372
num_examples: 28617
download_size: 6351536356
dataset_size: 43988774372
- config_name: subset_27
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44124322548
num_examples: 28705
download_size: 6384396942
dataset_size: 44124322548
- config_name: subset_28
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44274970012
num_examples: 28803
download_size: 6405118297
dataset_size: 44274970012
- config_name: subset_29
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44127365308
num_examples: 28707
download_size: 6394981446
dataset_size: 44127365308
- config_name: subset_3
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44010774700
num_examples: 28631
download_size: 6385129614
dataset_size: 44010774700
- config_name: subset_30
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43959947880
num_examples: 28598
download_size: 6351099073
dataset_size: 43959947880
- config_name: subset_31
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43939721468
num_examples: 28585
download_size: 6349698481
dataset_size: 43939721468
- config_name: subset_32
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43701336432
num_examples: 28430
download_size: 6317498365
dataset_size: 43701336432
- config_name: subset_33
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43912133780
num_examples: 28567
download_size: 6347741424
dataset_size: 43912133780
- config_name: subset_34
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924879268
num_examples: 28575
download_size: 6385061613
dataset_size: 43924879268
- config_name: subset_35
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44198269620
num_examples: 28753
download_size: 6417152268
dataset_size: 44198269620
- config_name: subset_36
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43958143980
num_examples: 28597
download_size: 6371530333
dataset_size: 43958143980
- config_name: subset_37
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 1536892
num_examples: 1
download_size: 145043
dataset_size: 1536892
- config_name: subset_38
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43907738296
num_examples: 28564
download_size: 6370745101
dataset_size: 43907738296
- config_name: subset_39
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43783169540
num_examples: 28483
download_size: 6360636678
dataset_size: 43783169540
- config_name: subset_4
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44035016260
num_examples: 28647
download_size: 6356360790
dataset_size: 44035016260
- config_name: subset_40
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43876677072
num_examples: 28544
download_size: 6363545223
dataset_size: 43876677072
- config_name: subset_41
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44039928304
num_examples: 28650
download_size: 6400395515
dataset_size: 44039928304
- config_name: subset_42
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43550868688
num_examples: 28332
download_size: 6288205442
dataset_size: 43550868688
- config_name: subset_43
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43772245200
num_examples: 28476
download_size: 6312411517
dataset_size: 43772245200
- config_name: subset_44
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44043101784
num_examples: 28652
download_size: 6367757278
dataset_size: 44043101784
- config_name: subset_45
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43839830568
num_examples: 28520
download_size: 6302918743
dataset_size: 43839830568
- config_name: subset_46
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44418011720
num_examples: 28896
download_size: 6420581627
dataset_size: 44418011720
- config_name: subset_47
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44239609176
num_examples: 28780
download_size: 6409168799
dataset_size: 44239609176
- config_name: subset_48
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43441872132
num_examples: 28261
download_size: 6279351848
dataset_size: 43441872132
- config_name: subset_49
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43803148032
num_examples: 28496
download_size: 6348966745
dataset_size: 43803148032
- config_name: subset_5
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 3073752
num_examples: 2
download_size: 269532
dataset_size: 3073752
- config_name: subset_50
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43892315672
num_examples: 28554
download_size: 6352365538
dataset_size: 43892315672
- config_name: subset_51
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44030510104
num_examples: 28644
download_size: 6357746911
dataset_size: 44030510104
- config_name: subset_52
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44004611300
num_examples: 28627
download_size: 6395577673
dataset_size: 44004611300
- config_name: subset_54
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924607164
num_examples: 28575
download_size: 6394467746
dataset_size: 43924607164
- config_name: subset_55
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43766336872
num_examples: 28472
download_size: 6382887005
dataset_size: 43766336872
- config_name: subset_56
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924612260
num_examples: 28575
download_size: 6358387007
dataset_size: 43924612260
- config_name: subset_57
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44125903328
num_examples: 28706
download_size: 6429743630
dataset_size: 44125903328
- config_name: subset_58
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44061228392
num_examples: 28664
download_size: 6403276947
dataset_size: 44061228392
- config_name: subset_59
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44005810400
num_examples: 28628
download_size: 6399433408
dataset_size: 44005810400
- config_name: subset_6
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44378012200
num_examples: 28870
download_size: 6424397700
dataset_size: 44378012200
- config_name: subset_60
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44225890868
num_examples: 28771
download_size: 6419332378
dataset_size: 44225890868
- config_name: subset_61
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43650843212
num_examples: 28397
download_size: 6326376655
dataset_size: 43650843212
- config_name: subset_62
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43827520656
num_examples: 28512
download_size: 6330616794
dataset_size: 43827520656
- config_name: subset_63
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44172218520
num_examples: 28736
download_size: 6409944210
dataset_size: 44172218520
- config_name: subset_64
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43918314476
num_examples: 28571
download_size: 6359242235
dataset_size: 43918314476
- config_name: subset_65
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43906125500
num_examples: 28563
download_size: 6375398199
dataset_size: 43906125500
- config_name: subset_66
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44075027964
num_examples: 28673
download_size: 6398349127
dataset_size: 44075027964
- config_name: subset_67
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43609456344
num_examples: 28370
download_size: 6307862180
dataset_size: 43609456344
- config_name: subset_68
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43666361020
num_examples: 28407
download_size: 6328770887
dataset_size: 43666361020
- config_name: subset_69
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44025932180
num_examples: 28641
download_size: 6372276607
dataset_size: 44025932180
- config_name: subset_7
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44059710956
num_examples: 28663
download_size: 6383885034
dataset_size: 44059710956
- config_name: subset_70
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43781700552
num_examples: 28482
download_size: 6318262101
dataset_size: 43781700552
- config_name: subset_71
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44175190528
num_examples: 28738
download_size: 6420404767
dataset_size: 44175190528
- config_name: subset_72
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44059988804
num_examples: 28663
download_size: 6403791239
dataset_size: 44059988804
- config_name: subset_73
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44242682800
num_examples: 28782
download_size: 6393278746
dataset_size: 44242682800
- config_name: subset_74
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43664734768
num_examples: 28406
download_size: 6293869164
dataset_size: 43664734768
- config_name: subset_75
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43830625696
num_examples: 28514
download_size: 6347303356
dataset_size: 43830625696
- config_name: subset_76
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43924502708
num_examples: 28575
download_size: 6368149688
dataset_size: 43924502708
- config_name: subset_77
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43754158544
num_examples: 28464
download_size: 6347205297
dataset_size: 43754158544
- config_name: subset_78
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43781508304
num_examples: 28482
download_size: 6362656422
dataset_size: 43781508304
- config_name: subset_79
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43978478208
num_examples: 28610
download_size: 6398609121
dataset_size: 43978478208
- config_name: subset_8
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44007563004
num_examples: 28629
download_size: 6358760125
dataset_size: 44007563004
- config_name: subset_80
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43807663524
num_examples: 28499
download_size: 6383713010
dataset_size: 43807663524
- config_name: subset_81
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43958216180
num_examples: 28597
download_size: 6360362244
dataset_size: 43958216180
- config_name: subset_82
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44018307032
num_examples: 28636
download_size: 6388770182
dataset_size: 44018307032
- config_name: subset_83
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43431184792
num_examples: 28254
download_size: 6273446746
dataset_size: 43431184792
- config_name: subset_84
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 4611316
num_examples: 3
download_size: 813473
dataset_size: 4611316
- config_name: subset_85
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43873788512
num_examples: 28542
download_size: 6358732185
dataset_size: 43873788512
- config_name: subset_86
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43505081840
num_examples: 28302
download_size: 6336792534
dataset_size: 43505081840
- config_name: subset_87
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44099477124
num_examples: 28689
download_size: 6376905811
dataset_size: 44099477124
- config_name: subset_88
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43800091792
num_examples: 28494
download_size: 6331140342
dataset_size: 43800091792
- config_name: subset_89
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44185886628
num_examples: 28745
download_size: 6399823294
dataset_size: 44185886628
- config_name: subset_9
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43959761872
num_examples: 28598
download_size: 6369092508
dataset_size: 43959761872
- config_name: subset_90
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43943002092
num_examples: 28587
download_size: 6384008687
dataset_size: 43943002092
- config_name: subset_91
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43709159980
num_examples: 28435
download_size: 6348468066
dataset_size: 43709159980
- config_name: subset_92
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43803194856
num_examples: 28496
download_size: 6384519799
dataset_size: 43803194856
- config_name: subset_93
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43806228672
num_examples: 28498
download_size: 6353242379
dataset_size: 43806228672
- config_name: subset_94
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43918235972
num_examples: 28571
download_size: 6359165774
dataset_size: 43918235972
- config_name: subset_95
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44013722788
num_examples: 28633
download_size: 6372836215
dataset_size: 44013722788
- config_name: subset_96
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43912328076
num_examples: 28567
download_size: 6360540190
dataset_size: 43912328076
- config_name: subset_97
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43784551296
num_examples: 28484
download_size: 6341270112
dataset_size: 43784551296
- config_name: subset_98
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 44568669984
num_examples: 28994
download_size: 6461359260
dataset_size: 44568669984
- config_name: subset_99
features:
- name: transcription
sequence: int64
- name: transcription/en_gpt3.5
sequence: int64
- name: whisper_transcription
sequence: int64
- name: whisper_transcription/en_gpt3.5
sequence: int64
- name: input_features
sequence:
sequence: float32
splits:
- name: train
num_bytes: 43989120876
num_examples: 28617
download_size: 6385093647
dataset_size: 43989120876
configs:
- config_name: subset_0
data_files:
- split: train
path: subset_0/train-*
- config_name: subset_1
data_files:
- split: train
path: subset_1/train-*
- config_name: subset_10
data_files:
- split: train
path: subset_10/train-*
- config_name: subset_100
data_files:
- split: train
path: subset_100/train-*
- config_name: subset_101
data_files:
- split: train
path: subset_101/train-*
- config_name: subset_102
data_files:
- split: train
path: subset_102/train-*
- config_name: subset_103
data_files:
- split: train
path: subset_103/train-*
- config_name: subset_104
data_files:
- split: train
path: subset_104/train-*
- config_name: subset_106
data_files:
- split: train
path: subset_106/train-*
- config_name: subset_107
data_files:
- split: train
path: subset_107/train-*
- config_name: subset_108
data_files:
- split: train
path: subset_108/train-*
- config_name: subset_109
data_files:
- split: train
path: subset_109/train-*
- config_name: subset_11
data_files:
- split: train
path: subset_11/train-*
- config_name: subset_110
data_files:
- split: train
path: subset_110/train-*
- config_name: subset_111
data_files:
- split: train
path: subset_111/train-*
- config_name: subset_112
data_files:
- split: train
path: subset_112/train-*
- config_name: subset_113
data_files:
- split: train
path: subset_113/train-*
- config_name: subset_114
data_files:
- split: train
path: subset_114/train-*
- config_name: subset_115
data_files:
- split: train
path: subset_115/train-*
- config_name: subset_116
data_files:
- split: train
path: subset_116/train-*
- config_name: subset_117
data_files:
- split: train
path: subset_117/train-*
- config_name: subset_118
data_files:
- split: train
path: subset_118/train-*
- config_name: subset_119
data_files:
- split: train
path: subset_119/train-*
- config_name: subset_12
data_files:
- split: train
path: subset_12/train-*
- config_name: subset_120
data_files:
- split: train
path: subset_120/train-*
- config_name: subset_121
data_files:
- split: train
path: subset_121/train-*
- config_name: subset_122
data_files:
- split: train
path: subset_122/train-*
- config_name: subset_123
data_files:
- split: train
path: subset_123/train-*
- config_name: subset_124
data_files:
- split: train
path: subset_124/train-*
- config_name: subset_125
data_files:
- split: train
path: subset_125/train-*
- config_name: subset_126
data_files:
- split: train
path: subset_126/train-*
- config_name: subset_127
data_files:
- split: train
path: subset_127/train-*
- config_name: subset_128
data_files:
- split: train
path: subset_128/train-*
- config_name: subset_129
data_files:
- split: train
path: subset_129/train-*
- config_name: subset_13
data_files:
- split: train
path: subset_13/train-*
- config_name: subset_130
data_files:
- split: train
path: subset_130/train-*
- config_name: subset_131
data_files:
- split: train
path: subset_131/train-*
- config_name: subset_132
data_files:
- split: train
path: subset_132/train-*
- config_name: subset_133
data_files:
- split: train
path: subset_133/train-*
- config_name: subset_134
data_files:
- split: train
path: subset_134/train-*
- config_name: subset_135
data_files:
- split: train
path: subset_135/train-*
- config_name: subset_136
data_files:
- split: train
path: subset_136/train-*
- config_name: subset_137
data_files:
- split: train
path: subset_137/train-*
- config_name: subset_138
data_files:
- split: train
path: subset_138/train-*
- config_name: subset_139
data_files:
- split: train
path: subset_139/train-*
- config_name: subset_14
data_files:
- split: train
path: subset_14/train-*
- config_name: subset_140
data_files:
- split: train
path: subset_140/train-*
- config_name: subset_141
data_files:
- split: train
path: subset_141/train-*
- config_name: subset_142
data_files:
- split: train
path: subset_142/train-*
- config_name: subset_143
data_files:
- split: train
path: subset_143/train-*
- config_name: subset_144
data_files:
- split: train
path: subset_144/train-*
- config_name: subset_145
data_files:
- split: train
path: subset_145/train-*
- config_name: subset_146
data_files:
- split: train
path: subset_146/train-*
- config_name: subset_147
data_files:
- split: train
path: subset_147/train-*
- config_name: subset_148
data_files:
- split: train
path: subset_148/train-*
- config_name: subset_149
data_files:
- split: train
path: subset_149/train-*
- config_name: subset_15
data_files:
- split: train
path: subset_15/train-*
- config_name: subset_150
data_files:
- split: train
path: subset_150/train-*
- config_name: subset_151
data_files:
- split: train
path: subset_151/train-*
- config_name: subset_152
data_files:
- split: train
path: subset_152/train-*
- config_name: subset_153
data_files:
- split: train
path: subset_153/train-*
- config_name: subset_154
data_files:
- split: train
path: subset_154/train-*
- config_name: subset_155
data_files:
- split: train
path: subset_155/train-*
- config_name: subset_156
data_files:
- split: train
path: subset_156/train-*
- config_name: subset_157
data_files:
- split: train
path: subset_157/train-*
- config_name: subset_158
data_files:
- split: train
path: subset_158/train-*
- config_name: subset_159
data_files:
- split: train
path: subset_159/train-*
- config_name: subset_16
data_files:
- split: train
path: subset_16/train-*
- config_name: subset_160
data_files:
- split: train
path: subset_160/train-*
- config_name: subset_161
data_files:
- split: train
    path: subset_161/train-*
- config_name: subset_162
data_files:
- split: train
path: subset_162/train-*
- config_name: subset_163
data_files:
- split: train
path: subset_163/train-*
- config_name: subset_164
data_files:
- split: train
path: subset_164/train-*
- config_name: subset_165
data_files:
- split: train
path: subset_165/train-*
- config_name: subset_166
data_files:
- split: train
path: subset_166/train-*
- config_name: subset_167
data_files:
- split: train
path: subset_167/train-*
- config_name: subset_168
data_files:
- split: train
path: subset_168/train-*
- config_name: subset_169
data_files:
- split: train
path: subset_169/train-*
- config_name: subset_17
data_files:
- split: train
path: subset_17/train-*
- config_name: subset_170
data_files:
- split: train
path: subset_170/train-*
- config_name: subset_171
data_files:
- split: train
path: subset_171/train-*
- config_name: subset_172
data_files:
- split: train
path: subset_172/train-*
- config_name: subset_173
data_files:
- split: train
path: subset_173/train-*
- config_name: subset_174
data_files:
- split: train
path: subset_174/train-*
- config_name: subset_175
data_files:
- split: train
path: subset_175/train-*
- config_name: subset_176
data_files:
- split: train
path: subset_176/train-*
- config_name: subset_177
data_files:
- split: train
path: subset_177/train-*
- config_name: subset_178
data_files:
- split: train
path: subset_178/train-*
- config_name: subset_179
data_files:
- split: train
path: subset_179/train-*
- config_name: subset_18
data_files:
- split: train
path: subset_18/train-*
- config_name: subset_180
data_files:
- split: train
path: subset_180/train-*
- config_name: subset_181
data_files:
- split: train
path: subset_181/train-*
- config_name: subset_182
data_files:
- split: train
path: subset_182/train-*
- config_name: subset_183
data_files:
- split: train
path: subset_183/train-*
- config_name: subset_184
data_files:
- split: train
path: subset_184/train-*
- config_name: subset_185
data_files:
- split: train
path: subset_185/train-*
- config_name: subset_186
data_files:
- split: train
path: subset_186/train-*
- config_name: subset_187
data_files:
- split: train
path: subset_187/train-*
- config_name: subset_188
data_files:
- split: train
path: subset_188/train-*
- config_name: subset_189
data_files:
- split: train
path: subset_189/train-*
- config_name: subset_19
data_files:
- split: train
    path: subset_19/train-*
- config_name: subset_190
data_files:
- split: train
path: subset_190/train-*
- config_name: subset_191
data_files:
- split: train
path: subset_191/train-*
- config_name: subset_192
data_files:
- split: train
path: subset_192/train-*
- config_name: subset_193
data_files:
- split: train
path: subset_193/train-*
- config_name: subset_194
data_files:
- split: train
path: subset_194/train-*
- config_name: subset_195
data_files:
- split: train
path: subset_195/train-*
- config_name: subset_196
data_files:
- split: train
path: subset_196/train-*
- config_name: subset_197
data_files:
- split: train
path: subset_197/train-*
- config_name: subset_198
data_files:
- split: train
path: subset_198/train-*
- config_name: subset_199
data_files:
- split: train
path: subset_199/train-*
- config_name: subset_2
data_files:
- split: train
path: subset_2/train-*
- config_name: subset_20
data_files:
- split: train
path: subset_20/train-*
- config_name: subset_200
data_files:
- split: train
path: subset_200/train-*
- config_name: subset_201
data_files:
- split: train
path: subset_201/train-*
- config_name: subset_202
data_files:
- split: train
path: subset_202/train-*
- config_name: subset_203
data_files:
- split: train
path: subset_203/train-*
- config_name: subset_204
data_files:
- split: train
path: subset_204/train-*
- config_name: subset_205
data_files:
- split: train
    path: subset_205/train-*
- config_name: subset_206
data_files:
- split: train
    path: subset_206/train-*
- config_name: subset_207
data_files:
- split: train
path: subset_207/train-*
- config_name: subset_208
data_files:
- split: train
    path: subset_208/train-*
- config_name: subset_209
data_files:
- split: train
path: subset_209/train-*
- config_name: subset_21
data_files:
- split: train
path: subset_21/train-*
- config_name: subset_210
data_files:
- split: train
path: subset_210/train-*
- config_name: subset_211
data_files:
- split: train
path: subset_211/train-*
- config_name: subset_212
data_files:
- split: train
path: subset_212/train-*
- config_name: subset_213
data_files:
- split: train
path: subset_213/train-*
- config_name: subset_214
data_files:
- split: train
    path: subset_214/train-*
- config_name: subset_215
data_files:
- split: train
path: subset_215/train-*
- config_name: subset_216
data_files:
- split: train
path: subset_216/train-*
- config_name: subset_217
data_files:
- split: train
path: subset_217/train-*
- config_name: subset_218
data_files:
- split: train
path: subset_218/train-*
- config_name: subset_219
data_files:
- split: train
path: subset_219/train-*
- config_name: subset_22
data_files:
- split: train
path: subset_22/train-*
- config_name: subset_220
data_files:
- split: train
path: subset_220/train-*
- config_name: subset_221
data_files:
- split: train
path: subset_221/train-*
- config_name: subset_222
data_files:
- split: train
path: subset_222/train-*
- config_name: subset_223
data_files:
- split: train
path: subset_223/train-*
  - config_name: subset_224
    data_files:
    - split: train
      path: subset_224/train-*
  - config_name: subset_225
    data_files:
    - split: train
      path: subset_225/train-*
- config_name: subset_23
data_files:
- split: train
path: subset_23/train-*
- config_name: subset_24
data_files:
- split: train
path: subset_24/train-*
- config_name: subset_25
data_files:
- split: train
path: subset_25/train-*
- config_name: subset_26
data_files:
- split: train
path: subset_26/train-*
- config_name: subset_27
data_files:
- split: train
path: subset_27/train-*
- config_name: subset_28
data_files:
- split: train
path: subset_28/train-*
- config_name: subset_29
data_files:
- split: train
path: subset_29/train-*
- config_name: subset_3
data_files:
- split: train
path: subset_3/train-*
- config_name: subset_30
data_files:
- split: train
path: subset_30/train-*
- config_name: subset_31
data_files:
- split: train
path: subset_31/train-*
- config_name: subset_32
data_files:
- split: train
path: subset_32/train-*
- config_name: subset_33
data_files:
- split: train
path: subset_33/train-*
- config_name: subset_34
data_files:
- split: train
path: subset_34/train-*
- config_name: subset_35
data_files:
- split: train
path: subset_35/train-*
- config_name: subset_36
data_files:
- split: train
path: subset_36/train-*
- config_name: subset_37
data_files:
- split: train
path: subset_37/train-*
- config_name: subset_38
data_files:
- split: train
path: subset_38/train-*
- config_name: subset_39
data_files:
- split: train
path: subset_39/train-*
- config_name: subset_4
data_files:
- split: train
path: subset_4/train-*
- config_name: subset_40
data_files:
- split: train
path: subset_40/train-*
- config_name: subset_41
data_files:
- split: train
path: subset_41/train-*
- config_name: subset_42
data_files:
- split: train
path: subset_42/train-*
- config_name: subset_43
data_files:
- split: train
path: subset_43/train-*
- config_name: subset_44
data_files:
- split: train
path: subset_44/train-*
- config_name: subset_45
data_files:
- split: train
path: subset_45/train-*
- config_name: subset_46
data_files:
- split: train
path: subset_46/train-*
- config_name: subset_47
data_files:
- split: train
path: subset_47/train-*
- config_name: subset_48
data_files:
- split: train
path: subset_48/train-*
- config_name: subset_49
data_files:
- split: train
path: subset_49/train-*
- config_name: subset_5
data_files:
- split: train
path: subset_5/train-*
- config_name: subset_50
data_files:
- split: train
path: subset_50/train-*
- config_name: subset_51
data_files:
- split: train
path: subset_51/train-*
- config_name: subset_52
data_files:
- split: train
path: subset_52/train-*
- config_name: subset_54
data_files:
- split: train
path: subset_54/train-*
- config_name: subset_55
data_files:
- split: train
path: subset_55/train-*
- config_name: subset_56
data_files:
- split: train
path: subset_56/train-*
- config_name: subset_57
data_files:
- split: train
path: subset_57/train-*
- config_name: subset_58
data_files:
- split: train
path: subset_58/train-*
- config_name: subset_59
data_files:
- split: train
path: subset_59/train-*
- config_name: subset_6
data_files:
- split: train
path: subset_6/train-*
- config_name: subset_60
data_files:
- split: train
path: subset_60/train-*
- config_name: subset_61
data_files:
- split: train
path: subset_61/train-*
- config_name: subset_62
data_files:
- split: train
path: subset_62/train-*
- config_name: subset_63
data_files:
- split: train
path: subset_63/train-*
- config_name: subset_64
data_files:
- split: train
path: subset_64/train-*
- config_name: subset_65
data_files:
- split: train
path: subset_65/train-*
- config_name: subset_66
data_files:
- split: train
path: subset_66/train-*
- config_name: subset_67
data_files:
- split: train
path: subset_67/train-*
- config_name: subset_68
data_files:
- split: train
path: subset_68/train-*
- config_name: subset_69
data_files:
- split: train
path: subset_69/train-*
- config_name: subset_7
data_files:
- split: train
path: subset_7/train-*
- config_name: subset_70
data_files:
- split: train
path: subset_70/train-*
- config_name: subset_71
data_files:
- split: train
path: subset_71/train-*
- config_name: subset_72
data_files:
- split: train
path: subset_72/train-*
- config_name: subset_73
data_files:
- split: train
path: subset_73/train-*
- config_name: subset_74
data_files:
- split: train
path: subset_74/train-*
- config_name: subset_75
data_files:
- split: train
path: subset_75/train-*
- config_name: subset_76
data_files:
- split: train
path: subset_76/train-*
- config_name: subset_77
data_files:
- split: train
path: subset_77/train-*
- config_name: subset_78
data_files:
- split: train
path: subset_78/train-*
- config_name: subset_79
data_files:
- split: train
path: subset_79/train-*
- config_name: subset_8
data_files:
- split: train
path: subset_8/train-*
- config_name: subset_80
data_files:
- split: train
path: subset_80/train-*
- config_name: subset_81
data_files:
- split: train
path: subset_81/train-*
- config_name: subset_82
data_files:
- split: train
path: subset_82/train-*
- config_name: subset_83
data_files:
- split: train
path: subset_83/train-*
- config_name: subset_84
data_files:
- split: train
path: subset_84/train-*
- config_name: subset_85
data_files:
- split: train
path: subset_85/train-*
- config_name: subset_86
data_files:
- split: train
path: subset_86/train-*
- config_name: subset_87
data_files:
- split: train
path: subset_87/train-*
- config_name: subset_88
data_files:
- split: train
path: subset_88/train-*
- config_name: subset_89
data_files:
- split: train
path: subset_89/train-*
- config_name: subset_9
data_files:
- split: train
path: subset_9/train-*
- config_name: subset_90
data_files:
- split: train
path: subset_90/train-*
- config_name: subset_91
data_files:
- split: train
path: subset_91/train-*
- config_name: subset_92
data_files:
- split: train
path: subset_92/train-*
- config_name: subset_93
data_files:
- split: train
path: subset_93/train-*
- config_name: subset_94
data_files:
- split: train
path: subset_94/train-*
- config_name: subset_95
data_files:
- split: train
path: subset_95/train-*
- config_name: subset_96
data_files:
- split: train
path: subset_96/train-*
- config_name: subset_97
data_files:
- split: train
path: subset_97/train-*
- config_name: subset_98
data_files:
- split: train
path: subset_98/train-*
- config_name: subset_99
data_files:
- split: train
path: subset_99/train-*
---
|
applied-ai-018/pretraining_v1-omega_books | applied-ai-018 | "2024-08-05T19:01:31Z" | 8,753 | 1 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-31T08:53:54Z" | ---
dataset_info:
config_name: CC-MAIN-2013-20
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 235476901236
num_examples: 51901183
download_size: 138494178972
dataset_size: 235476901236
configs:
- config_name: CC-MAIN-2013-20
data_files:
- split: train
path: CC-MAIN-2013-20/train-*
---
|