Dataset metadata:

Modalities: Tabular, Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
loubnabnl committed
Commit dbad8ad · verified · 1 Parent(s): 32054a8

Update README.md

Files changed (1):
1. README.md +1 -1
README.md CHANGED
@@ -56,7 +56,7 @@ pipeline_exec.run()
 We train a 360M model (using [SmolLM2](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) setup) on 200B tokens from DCLM, FineWeb-Edu and DCLM-Edu and evaluate on different benchmarks. DCLM-Edu denotes DCLM samples with an educational score higher than 3.
 We find that the model trained on DCLM-Edu performs better on knowledge and reasoning tasks (MMLU & ARC):
 
- <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/hOFJRusg6fEEtCpN-RJaP.png)" width="700" alt="image">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/hOFJRusg6fEEtCpN-RJaP.png" width="700" alt="image">
 
 
 We invite users to experiment with different data mixing depending on their model size.
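
For reference, a minimal sketch of the selection rule mentioned in the changed paragraph (DCLM samples with an educational score higher than 3). It assumes a DCLM dump that already carries classifier scores in a column named `edu_score`; that column name and the dataset path are hypothetical placeholders, not part of this repository.

```python
from datasets import load_dataset

# Sketch of the DCLM-Edu threshold described above: keep DCLM samples whose
# educational score is higher than 3. The path "your-org/dclm-with-edu-scores"
# and the "edu_score" column name are assumptions for illustration only.
scored_dclm = load_dataset("your-org/dclm-with-edu-scores", split="train", streaming=True)

dclm_edu = scored_dclm.filter(lambda example: example["edu_score"] > 3)

# Peek at a few retained samples.
for sample in dclm_edu.take(3):
    print(sample["text"][:200])
```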