---
task_categories:
- text-generation
language:
- en
pretty_name: SlimPajama-6B
size_categories:
- 100K<n<1M
---
Sampled version of [cerebras/SlimPajama-627B](https://huggingface.co./datasets/cerebras/SlimPajama-627B).

Based on the [fact that the original data was shuffled before chunking](https://huggingface.co./datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded chunk1 and further sampled 10% of the chunk.
This should result in roughly 6B tokens, hence SlimPajama-6B.
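The 10% subsampling step could be reproduced with a sketch like the following; the actual seed and sampling method are not stated above, so both are assumptions for illustration:

```python
import random

def sample_fraction(records, rate=0.10, seed=42):
    """Keep each record independently with probability `rate`.

    A minimal sketch of the 10% subsampling described above; the
    seed and per-record Bernoulli method are assumptions, not the
    author's exact script.
    """
    rng = random.Random(seed)
    return [r for r in records if rng.random() < rate]

subset = sample_fraction(range(100_000))
print(len(subset))  # close to 10,000
```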

#### Data source proportions for SlimPajama-627B and SlimPajama-6B
As a sanity check, I recalculated the byte proportions of the sampled version; they roughly match the original dataset.


| Data source   | SlimPajama-627B | SlimPajama-6B |
| ------------- | ---------- | --------- |
| Commoncrawl   | 52.2%      | 54.1%    |
| C4            | 26.7%      | 28.7%    |
| GitHub        | 5.2%       | 4.2%     |
| Books         | 4.2%       | 3.7%     |
| ArXiv         | 4.6%       | 3.4%     |
| Wikipedia     | 3.8%       | 3.1%     |
| StackExchange | 3.3%       | 2.8%     |

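The byte-proportion check can be sketched as follows, assuming records shaped like SlimPajama's (a `text` string plus a `meta` dict carrying `redpajama_set_name`):

```python
from collections import Counter

def byte_proportions(examples):
    """Return each source's share of total UTF-8 bytes, in percent.

    Assumes each example is a dict with a "text" string and a
    "meta" dict containing "redpajama_set_name", as in SlimPajama
    records; this is a sketch, not the author's exact script.
    """
    counts = Counter()
    for ex in examples:
        source = ex["meta"]["redpajama_set_name"]
        counts[source] += len(ex["text"].encode("utf-8"))
    total = sum(counts.values())
    return {src: 100.0 * n / total for src, n in counts.items()}

# Toy records to illustrate the calculation:
toy = [
    {"text": "a" * 60, "meta": {"redpajama_set_name": "RedPajamaCommonCrawl"}},
    {"text": "b" * 40, "meta": {"redpajama_set_name": "RedPajamaC4"}},
]
print(byte_proportions(toy))  # {'RedPajamaCommonCrawl': 60.0, 'RedPajamaC4': 40.0}
```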

---
Please refer to the original dataset for other info.
```
@misc{cerebras2023slimpajama,
  author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
  title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
  month = {June},
  year = 2023,
  howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
  url = {https://huggingface.co./datasets/cerebras/SlimPajama-627B},
}
```