---
license: cc-by-4.0
language:
- en
---
# DCLM-Edu
## Description
This is a filtered version of the [DCLM](https://huggingface.co./datasets/mlfoundations/dclm-baseline-1.0) dataset, built with the FineWeb-Edu educational quality [classifier](https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier). We annotate each web page with an educational quality score on a scale from 0 to 5 and only keep samples with a score higher than 2. This dataset is intended for training small language models and was used to train [SmolLM2-135M](https://huggingface.co./HuggingFaceTB/SmolLM2-135M) and [SmolLM2-360M](https://huggingface.co./HuggingFaceTB/SmolLM2-360M).
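For reference, here is a minimal sketch of scoring a single document with the classifier, following the usage shown on its model card (the exact annotation pipeline used to build this dataset may differ):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# FineWeb-Edu educational quality classifier (regression head, scores roughly 0-5)
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")

text = "Photosynthesis is the process by which plants convert sunlight into chemical energy."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze(-1).item()

# round and clamp to the integer 0-5 scale used for filtering (edu_int_score)
int_score = int(round(max(0, min(score, 5))))
print(score, int_score)
```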
**_Note:_** As shown in the Performance section, we find that further filtering the dataset to only keep **samples with `edu_int_score >= 3` yields even better downstream performance when training small language models**. We include score 2 samples to allow for rebalancing and added diversity, but you can filter the dataset with `datasets` or `datatrove` as shown below.
## How to use
### Using `datasets`
```python
from datasets import load_dataset
fw = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)
```
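To keep only the `edu_int_score >= 3` subset mentioned above, you can filter the streamed dataset directly. A minimal sketch, assuming the `edu_int_score` field is present on each sample (it is the same field accessed via `doc.metadata` in the `datatrove` example below):
```python
from datasets import load_dataset

fw = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)
# keep only the highest-quality samples (edu_int_score of 3 or more)
fw_filtered = fw.filter(lambda sample: sample["edu_int_score"] >= 3)

for sample in fw_filtered.take(3):
    print(sample["text"][:200])
```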
### Using [`datatrove`](https://github.com/huggingface/datatrove/)
```python
from datatrove.pipeline.readers import ParquetReader
# limit determines how many documents will be streamed (remove for all)
data_reader = ParquetReader("hf://datasets/HuggingFaceTB/dclm-edu", glob_pattern="data/*.parquet", limit=1000)
for document in data_reader():
    # do something with document
    print(document)
###############################
# OR for a processing pipeline:
###############################
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import ParquetWriter
pipeline_exec = LocalPipelineExecutor(
    pipeline=[
        ParquetReader("hf://datasets/HuggingFaceTB/dclm-edu", limit=1000),
        LambdaFilter(lambda doc: doc.metadata["edu_int_score"] >= 3),
        ParquetWriter("some-output-path"),
    ],
    tasks=10,
)
pipeline_exec.run()
```
## Performance
**Results of 360M ablation**
We train a 360M model (using the [SmolLM2](https://huggingface.co./HuggingFaceTB/SmolLM2-360M) setup) on 200B tokens from DCLM, FineWeb-Edu and DCLM-Edu and evaluate it on different benchmarks. Here, DCLM-Edu denotes DCLM samples with an educational score higher than 3.
We find that the model trained on DCLM-Edu performs better on knowledge and reasoning tasks (MMLU & ARC).
We invite users to experiment with different data mixtures depending on their model size.
**Results of 1.7B ablation:**
We also conducted ablations at the 1.7B scale: we took an intermediate checkpoint of SmolLM2 1.7B (3T tokens) and performed the decay phase on different subsets of DCLM filtered with edu score thresholds of 2, 3 and 4.
However, we found that the gains from introducing this dataset mid-training during SmolLM2 1.7B training (which was trained on a mix of DCLM and FineWeb-Edu for 6T+ tokens) weren't consistent with the ablation findings, so we only use the dataset for SmolLM2 135M and 360M.
## License
Following DCLM-Baseline, this dataset is licensed under CC-BY-4.0.
## Citation
```bibtex
@misc{allal2025smollm2smolgoesbig,
title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel MartĂn BlĂĄzquez and Guilherme Penedo and Lewis Tunstall and AndrĂ©s Marafioti and Hynek KydlĂÄek and AgustĂn Piqueres LajarĂn and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and ClĂ©mentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
year={2025},
eprint={2502.02737},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.02737},
}
```