---
license: cc-by-4.0
language:
- en
---
# DCLM-Edu
## Description
This is a filtered version of the [DCLM](https://huggingface.co./datasets/mlfoundations/dclm-baseline-1.0) dataset using the FineWeb-Edu educational quality [classifier](https://huggingface.co./HuggingFaceFW/fineweb-edu-classifier). We annotate each web page with an educational quality score on a scale from 0 to 5 and only keep samples with a score higher than 2. This dataset is intended for training small language models and was used to train [SmolLM2-135M](https://huggingface.co./HuggingFaceTB/SmolLM2-135M) and [SmolLM2-360M](https://huggingface.co./HuggingFaceTB/SmolLM2-360M).

**_Note:_** As shown in the performance section, we find that further filtering the dataset to only keep **samples with `edu_int_score >= 3` yields even better downstream performance when training small language models**. We include score 2 samples to allow for rebalancing and added diversity, but you can filter the dataset with `datasets` or `datatrove` as shown below.
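
For reference, here is a minimal sketch of how a page can be scored with the FineWeb-Edu classifier, following the usage example on the classifier's model card (the exact annotation pipeline used for this dataset may differ):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")
model = AutoModelForSequenceClassification.from_pretrained("HuggingFaceFW/fineweb-edu-classifier")

text = "This is a test web page about photosynthesis."
inputs = tokenizer(text, return_tensors="pt", padding="longest", truncation=True)
outputs = model(**inputs)

# The classifier has a single regression head: the raw educational score
score = outputs.logits.squeeze(-1).float().detach().numpy().item()
# Clamp to [0, 5] and round to obtain the integer score used for filtering
int_score = int(round(max(0, min(score, 5))))
print(score, int_score)
```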
## How to use
### Using `datasets`
```python
from datasets import load_dataset
fw = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)
```
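
To only keep the highest-quality samples, you can filter on the educational score. A minimal sketch, assuming `edu_int_score` is exposed as a top-level column (if it is nested under a `metadata` field, adjust the lambda accordingly):

```python
from datasets import load_dataset

fw = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)
# keep only samples with an integer educational score of 3 or higher
fw_high = fw.filter(lambda x: x["edu_int_score"] >= 3)
```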
### Using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/)
```python
from datatrove.pipeline.readers import ParquetReader

# limit determines how many documents will be streamed (remove for all)
data_reader = ParquetReader("hf://datasets/HuggingFaceTB/dclm-edu", glob_pattern="data/*.parquet", limit=1000)
for document in data_reader():
    # do something with document
    print(document)

###############################
# OR for a processing pipeline:
###############################
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.writers import ParquetWriter

pipeline_exec = LocalPipelineExecutor(
    pipeline=[
        ParquetReader("hf://datasets/HuggingFaceTB/dclm-edu", limit=1000),
        LambdaFilter(lambda doc: doc.metadata["edu_int_score"] >= 3),
        ParquetWriter("some-output-path"),
    ],
    tasks=10,
)
pipeline_exec.run()
```
## Performance
**Results of 360M ablation**
We train a 360M model (using the [SmolLM2](https://huggingface.co./HuggingFaceTB/SmolLM2-360M) setup) on 200B tokens from DCLM, FineWeb-Edu and DCLM-Edu and evaluate it on different benchmarks. Here, DCLM-Edu denotes DCLM samples with an educational score of 3 or higher (`edu_int_score >= 3`).
We find that the model trained on DCLM-Edu performs better on knowledge and reasoning tasks (MMLU & ARC):
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/hOFJRusg6fEEtCpN-RJaP.png" width="700" alt="image">
We invite users to experiment with different data mixtures depending on their model size, for example as sketched below.
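
A minimal sketch of mixing DCLM-Edu with FineWeb-Edu using `datasets.interleave_datasets` (the mixing probabilities below are illustrative, not the exact SmolLM2 recipe):

```python
from datasets import load_dataset, interleave_datasets

dclm_edu = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)
fineweb_edu = load_dataset("HuggingFaceFW/fineweb-edu", split="train", streaming=True)

# 60/40 mix of the two sources; tune the probabilities for your model size
mixed = interleave_datasets([dclm_edu, fineweb_edu], probabilities=[0.6, 0.4], seed=42)
```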
**Results of 1.7B ablation:**
We also conducted ablations at the 1.7B scale: we take an intermediate checkpoint of SmolLM2 1.7B (trained on 3T tokens) and run a learning rate decay on different subsets of DCLM, filtered with educational score thresholds of 2, 3 and 4.
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/ImwiEe712SN5TalxFOeeJ.png" width="700" alt="image">
However, we find that the gains from introducing this dataset mid-training for SmolLM2 1.7B (which was trained on a mix of DCLM and FineWeb-Edu for over 6T tokens) weren't consistent with the ablation findings, so we only use the dataset for SmolLM2 135M and 360M.
## License
Following DCLM-Baseline, this dataset is licensed under CC-BY-4.0.
## Citation
```bibtex
@misc{allal2025smollm2smolgoesbig,
title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Guilherme Penedo and Lewis Tunstall and Andrés Marafioti and Hynek Kydlíček and Agustín Piqueres Lajarín and Vaibhav Srivastav and Joshua Lochner and Caleb Fahlgren and Xuan-Son Nguyen and Clémentine Fourrier and Ben Burtenshaw and Hugo Larcher and Haojun Zhao and Cyril Zakka and Mathieu Morlon and Colin Raffel and Leandro von Werra and Thomas Wolf},
year={2025},
eprint={2502.02737},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.02737},
}
``` |