Modalities: Tabular, Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
loubnabnl (HF staff) committed 32054a8 · verified · 1 Parent(s): 14bfbb0

Update README.md

Files changed (1)
  1. README.md +7 -8
README.md CHANGED
@@ -6,8 +6,8 @@ language:
 # DCLM-Edu
 
 ## Description
- This is a filtered version of the [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) dataset using FineWeb-Edu educational quality [classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier). We annotate each web page based on the educational quality
- on a scale from 0 to 5 and only keep samples with a score higher than 2. This dataset is intended for language models training and was used to train [SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) and [SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M).
+ This is a filtered version of the [DCLM](https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0) dataset using the FineWeb-Edu educational quality [classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier). We annotate each web page based on its educational quality
+ on a scale from 0 to 5 and only keep samples with a score higher than 2. This dataset is intended for training small language models and was used to train [SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) and [SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M).
 
 **_Note:_** As shown in the performance section, we find that further filtering the dataset to only keep **samples with `edu_int_score>=3` yields even better downstream performance when training small language models**. We include score 2 samples to allow for rebalancing and added diversity, but you can filter the dataset with `datasets` or `datatrove` as shown below.

@@ -53,20 +53,19 @@ pipeline_exec.run()
 
 ## Performance
 **Results of 360M ablation**
- We train a 360M model (using SmolLM2 setup) on 200B tokens from different datasets and evaluate on different benchmarks. DCLM-Edu denotes DCLM samples with an educational score higher than 3.
- <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/Nt4tu5s4iaoN7ZqmqndJK.png" width="600" alt="image">
- DCLM-Edu gives consistent gains. The plot below shows the per-benchmark performance:
- <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/L9sdCmfDVipTwDX5_dNcm.png" width="700" alt="image">
+ We train a 360M model (using the [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) setup) on 200B tokens from DCLM, FineWeb-Edu and DCLM-Edu and evaluate on different benchmarks. DCLM-Edu denotes DCLM samples with an educational score higher than 3.
+ We find that the model trained on DCLM-Edu performs better on knowledge and reasoning tasks (MMLU & ARC):
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/hOFJRusg6fEEtCpN-RJaP.png" width="700" alt="image">
 
 We invite users to experiment with different data mixing depending on their model size.
 
 **Results of 1.7B ablation:**
- We also conducted some ablations at 1.7B scale but the gains for introducing the dataset mid-training during SmolLM2 1.7B training weren't consistent with the ablation findings, so we only use the dataset for SmolLM2 135M and 360M.
- We use an intermediate checkpoint of SmolLM2 1.7B (3T tokens) and doing a decay on different subsets of DCLM using the edu filtering:
+ We also conducted some ablations at 1.7B scale: we use an intermediate checkpoint of SmolLM2 1.7B (3T tokens) and do a decay on different subsets of DCLM using the edu filtering with thresholds 2, 3 and 4.
 
 <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/ImwiEe712SN5TalxFOeeJ.png" width="700" alt="image">
+ However, we find that the gains from introducing this dataset mid-training during SmolLM2 1.7B training (which was trained on a mix of DCLM and FineWeb-Edu for 6T+ tokens) weren't consistent with the ablation findings, so we only use the dataset for SmolLM2 135M and 360M.
 
 ## License
 Following DCLM-Baseline, this dataset is licensed under CC-BY-4.0.
 
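For reference, a minimal sketch of the `datasets`-based filtering mentioned in the note above, keeping only samples with `edu_int_score >= 3`. The repository id `HuggingFaceTB/dclm-edu` and the `text` column name are assumptions made for illustration; adapt them to the actual dataset configuration.

```python
from datasets import load_dataset

# Stream the dataset and keep only samples with an educational score of 3 or higher,
# as suggested in the note above.
# NOTE: the repo id and the "text" column name are assumptions for illustration.
ds = load_dataset("HuggingFaceTB/dclm-edu", split="train", streaming=True)
high_edu = ds.filter(lambda sample: sample["edu_int_score"] >= 3)

# Inspect a few of the filtered samples.
for sample in high_edu.take(3):
    print(sample["text"][:200], "...")
```

For large-scale processing, the `datatrove` pipeline already included in this README (the snippet ending in `pipeline_exec.run()` that the second hunk refers to) is the more scalable route.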