---
language:
- en
size_categories:
- 1M<n<10M
task_categories:
- text-generation
pretty_name: SlimPajama-6B
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: redpajama_set_name
      dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 23918118724
    num_examples: 5489000
  - name: validation
    num_bytes: 39109042
    num_examples: 9347
  - name: test
    num_bytes: 40114950
    num_examples: 9346
  download_size: 14048972121
  dataset_size: 23997342716
---
Sampled version of [cerebras/SlimPajama-627B](https://huggingface.co./datasets/cerebras/SlimPajama-627B).

[Since the original data was shuffled before chunking](https://huggingface.co./datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded train/chunk1 (of 10 chunks in total) and further sampled 10% of it. This should amount to roughly 6B tokens, hence the name SlimPajama-6B.

The dataset takes up 24 GB of storage when decompressed (the original dataset is over 2 TB) and has 5,489,000 rows.

The validation set and test set were sampled as well.
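
For convenience, here is a minimal loading sketch using the `datasets` library. The repo ID `"DKYoon/SlimPajama-6B"` below is an assumption; substitute this dataset's actual path if it differs.

```python
# Minimal sketch of loading the splits described above with the `datasets` library.
# "DKYoon/SlimPajama-6B" is an assumed repo ID -- replace it with this dataset's actual path.
from datasets import load_dataset

ds = load_dataset("DKYoon/SlimPajama-6B")

print(ds)                                # train / validation / test splits
print(ds["train"][0]["text"][:200])      # raw text
print(ds["train"][0]["meta"])            # {'redpajama_set_name': ...}

# Streaming avoids downloading all ~14 GB of compressed files up front.
stream = load_dataset("DKYoon/SlimPajama-6B", split="train", streaming=True)
for example in stream.take(3):
    print(example["meta"]["redpajama_set_name"])
```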

---
#### Data source proportions for SlimPajama-627B and SlimPajama-6B
As a sanity check, I calculated the byte proportion of each data source in the sampled version (a sketch for recomputing these numbers follows the table).


| Data source   | SlimPajama-627B | SlimPajama-6B |
| ------------- | ---------- | --------- |
| Commoncrawl   | 52.2%      | 54.1%    |
| C4            | 26.7%      | 28.7%    |
| GitHub        | 5.2%       | 4.2%     |
| Books         | 4.2%       | 3.7%     |
| ArXiv         | 4.6%       | 3.4%     |
| Wikipedia     | 3.8%       | 3.1%     |
| StackExchange | 3.3%       | 2.8%     |
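
A rough sketch for recomputing these proportions from the `meta.redpajama_set_name` field; it streams the train split so nothing has to fit in memory (the repo ID is an assumption, as in the loading example above):

```python
# Tally UTF-8 bytes of `text` per RedPajama source and print their shares.
from collections import Counter

from datasets import load_dataset

byte_counts = Counter()
stream = load_dataset("DKYoon/SlimPajama-6B", split="train", streaming=True)
for example in stream:
    source = example["meta"]["redpajama_set_name"]
    byte_counts[source] += len(example["text"].encode("utf-8"))

total = sum(byte_counts.values())
for name, n_bytes in byte_counts.most_common():
    print(f"{name:<30} {100 * n_bytes / total:5.1f}%")
```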


---
Please refer to the original dataset for other info.
```bibtex
@misc{cerebras2023slimpajama,
  author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
  title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
  month = {June},
  year = 2023,
  howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
  url = {https://huggingface.co./datasets/cerebras/SlimPajama-627B},
}
```