---
task_categories:
- text-generation
language:
- en
pretty_name: SlimPajama-6B
size_categories:
- 100K<n<1M
---
Sampled version of [cerebras/SlimPajama-627B](https://huggingface.co./datasets/cerebras/SlimPajama-627B).
Based on the [fact that the original data was shuffled before chunking](https://huggingface.co./datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded chunk1 and further sampled 10% of the chunk.
This should result in roughly 6B tokens, hence SlimPajama-6B.
The dataset is roughly 24 GB in storage size and has 5,489,000 rows.
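The subsampling step described above can be sketched as follows. This is a minimal illustration with placeholder documents standing in for chunk1 rows; the function name and toy data are assumptions, not the script actually used to build the dataset:

```python
import random

def sample_fraction(docs, fraction=0.10, seed=42):
    """Uniformly sample a fraction of documents,
    as done to cut chunk1 down to roughly 6B tokens."""
    rng = random.Random(seed)
    k = int(len(docs) * fraction)
    return rng.sample(docs, k)

# Toy example: 1000 placeholder rows, sampled at 10%.
docs = [{"text": f"doc {i}"} for i in range(1000)]
subset = sample_fraction(docs, 0.10)
print(len(subset))  # 100
```

Because the original data was shuffled before chunking, a uniform sample of one chunk should preserve the source distribution.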
---
#### Data source proportions for SlimPajama-627B and SlimPajama-6B
As a sanity check, I recalculated the byte proportions of the sampled version; they roughly match the original dataset.
| Data source | SlimPajama-627B | SlimPajama-6B |
| ------------- | ---------- | --------- |
| Commoncrawl | 52.2% | 54.1% |
| C4 | 26.7% | 28.7% |
| GitHub | 5.2% | 4.2% |
| Books | 4.2% | 3.7% |
| ArXiv | 4.6% | 3.4% |
| Wikipedia | 3.8% | 3.1% |
| StackExchange | 3.3% | 2.8% |
---
Please refer to the original dataset card for further details.
```
@misc{cerebras2023slimpajama,
author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
month = June,
year = 2023,
howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
url = {https://huggingface.co./datasets/cerebras/SlimPajama-627B},
}
```