---
license: cc-by-4.0
language:
  - en
---

# DCLM-Edu

## Description

This is a filtered version of the DCLM dataset, built using the FineWeb-Edu educational quality classifier. We annotate each web page with an educational quality score on a scale from 0 to 5 and keep only samples with a score of at least 2. This dataset is intended for training small language models and was used to train SmolLM2-135M and SmolLM2-360M.

Note: As shown in the performance section, we find that further filtering the dataset to keep only samples with `edu_int_score >= 3` yields even better downstream performance when training small language models. We include score 2 samples to allow for rebalancing and added diversity, but you can filter the dataset with `datasets` or `datatrove` as shown below.

## How to use

### Using 🤗 datasets

```python
from datasets import load_dataset

fw = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)
```
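
To keep only the highest-quality pages, you can filter on the educational score directly in streaming mode. A minimal sketch, assuming the integer score is exposed as an `edu_int_score` field on each sample (the same field used in the `datatrove` example below):

```python
from datasets import load_dataset

fw = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)

# keep only pages with an educational score of at least 3
fw_filtered = fw.filter(lambda sample: sample["edu_int_score"] >= 3)

# peek at a few filtered samples
for sample in fw_filtered.take(3):
    print(sample["text"][:200])
```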

### Using 🏭 datatrove

```python
from datatrove.pipeline.readers import ParquetReader

# limit determines how many documents will be streamed (remove for all)
data_reader = ParquetReader("hf://datasets/HuggingFaceTB/dclm-edu", glob_pattern="data/*.parquet", limit=1000)
for document in data_reader():
    # do something with document
    print(document)
```

Or, as a processing pipeline that keeps only samples with `edu_int_score >= 3` and writes them back to Parquet:

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import ParquetWriter

pipeline_exec = LocalPipelineExecutor(
    pipeline=[
        ParquetReader("hf://datasets/HuggingFaceTB/dclm-edu", limit=1000),
        LambdaFilter(lambda doc: doc.metadata["edu_int_score"] >= 3),
        ParquetWriter("some-output-path")
    ],
    tasks=10
)
pipeline_exec.run()
```
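
Once the pipeline has finished, the filtered shards are ordinary Parquet files and can be loaded back with `datasets`. A minimal sketch, assuming the placeholder output directory `some-output-path` from the pipeline above (adjust the glob pattern to the writer's file layout):

```python
from datasets import load_dataset

# load the filtered Parquet shards written by the pipeline above
filtered = load_dataset("parquet", data_files="some-output-path/*.parquet", split="train")
print(filtered)
```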

## Performance

**Results of the 360M ablation:** We train a 360M model (using the SmolLM2 setup) on 200B tokens from DCLM, FineWeb-Edu, and DCLM-Edu and evaluate it on different benchmarks. Here, DCLM-Edu denotes DCLM samples with an educational score of at least 3. We find that the model trained on DCLM-Edu performs better on knowledge and reasoning tasks (MMLU & ARC):

*(Figure: benchmark results of the 360M ablation.)*

We invite users to experiment with different data mixtures depending on their model size.

**Results of the 1.7B ablation:** We also conducted ablations at the 1.7B scale: we took an intermediate checkpoint of SmolLM2 1.7B (at 3T tokens) and ran the learning rate decay phase on different subsets of DCLM, filtered with educational score thresholds of 2, 3, and 4.

*(Figure: benchmark results of the 1.7B ablation.)*

However, we found that the gains from introducing this dataset mid-training during SmolLM2 1.7B training (which used a mix of DCLM and FineWeb-Edu for 6T+ tokens) weren't consistent with the ablation findings, so we only use this dataset for SmolLM2 135M and 360M.

## License

Following DCLM-Baseline, this dataset is licensed under CC-BY-4.0.

## Citation

```bibtex
@misc{allal2025smollm2smolgoesbig,
      title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
      author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
      year={2025},
      eprint={2502.02737},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.02737},
}
```