---
language:
- en
dataset_info:
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: date
      dtype: timestamp[us]
    - name: dump
      dtype: string
    - name: file_path
      dtype: string
    - name: int_score
      dtype: int64
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: url
      dtype: string
  splits:
  - name: train
    num_bytes: 5292938151.266562
    num_examples: 999245
  download_size: 2716629909
  dataset_size: 5292938151.266562
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
## Qwark Corpus

1.3B+ high-quality tokens from the web, built from the `fineweb-edu-dedup` subset of [HuggingFaceTB/smollm-corpus](https://huggingface.co./datasets/HuggingFaceTB/smollm-corpus) as well as [FineMath-4+](https://huggingface.co./datasets/HuggingFaceTB/finemath).
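The corpus can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch: the repo id is a placeholder, so substitute this dataset's actual Hub id.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hub id of this dataset.
ds = load_dataset("your-username/qwark-corpus", split="train")

# Each row carries `text`, `id`, and a `metadata` struct
# (score, int_score, token_count, url, language, ...).
example = ds[0]
print(example["metadata"]["score"], example["metadata"]["token_count"])
```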
Filtering process:
| Step | Description | Rows after step |
|------|-------------|-----------------|
| 1. Stream SmolLM Corpus (`fineweb-edu-dedup`) until 600K samples are selected | Keep only items with score >= 3.5 | 600,000 |
| 2. Remove overly long items | Drop items exceeding 50,000 characters | 597,142 |
| 3. Combine with a selection of 4,000 TED transcripts | Add educational TED talk transcripts to the dataset | 601,147 |
| 4. Stream 400K samples from FineMath-4+ | Keep only items with score >= 4.0 | 1,001,147 |
| 5. Remove overly long items | Drop items exceeding 50,000 characters | 999,245 |
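For reference, the steps above can be approximated with the `datasets` streaming API. The snippet below is a rough sketch, not the script actually used to build this corpus: the `score`/`text` column access and the `finemath-4plus` config name are assumptions, and the TED transcript selection is left as a placeholder.

```python
from datasets import load_dataset

MAX_CHARS = 50_000  # character cap applied in steps 2 and 5


def get_field(ex, key):
    # Some source datasets keep quality fields top-level, others under a
    # `metadata` struct (as in this card's schema); try both.
    if key in ex and ex[key] is not None:
        return ex[key]
    return ex.get("metadata", {}).get(key)


def take_streamed(repo, config, min_score, n):
    """Stream a Hub dataset and keep the first n rows whose score clears min_score."""
    stream = load_dataset(repo, config, split="train", streaming=True)
    rows = []
    for ex in stream:
        if get_field(ex, "score") >= min_score:
            rows.append(ex)
        if len(rows) == n:
            break
    return rows


# Steps 1-2: 600K fineweb-edu-dedup docs with score >= 3.5, then drop long documents.
edu = take_streamed("HuggingFaceTB/smollm-corpus", "fineweb-edu-dedup", 3.5, 600_000)
edu = [ex for ex in edu if len(ex["text"]) <= MAX_CHARS]

# Step 3: add the educational TED talk transcripts (selection omitted here).
ted = []  # placeholder for the ~4,000 TED transcripts

# Steps 4-5: 400K FineMath-4+ docs with score >= 4.0, then the same length cap
# applied to the combined corpus.
math_docs = take_streamed("HuggingFaceTB/finemath", "finemath-4plus", 4.0, 400_000)
corpus = [ex for ex in edu + ted + math_docs if len(ex["text"]) <= MAX_CHARS]
print(len(corpus))
```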