---
license: apache-2.0
language:
- en
pretty_name: 'Pico Dataset: Pre-tokenized, Pre-shuffled Dolma'
size_categories:
- 100B<n<1T
---
## The Pico Dataset

A pre-tokenized, pre-shuffled version of [Dolma](https://huggingface.co./datasets/allenai/dolma), the high-quality text corpus from AI2.

### Overview

The Pico dataset simplifies training by providing:
- Pre-tokenized text in chunks of 2048 tokens, using the [OLMo Tokenizer](https://huggingface.co./allenai/OLMo-7B-0724-hf/blob/main/tokenizer_config.json)
- Pre-shuffled data for consistent training
- Streaming-friendly format
- 420B tokens total, enough for 200K training steps at batch size 1024 with 2048-token sequences (see the arithmetic check below)
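
The 420B figure follows directly from the step count, batch size, and sequence length quoted above; a quick sanity check:

```
# Sanity check of the 420B-token budget using the numbers above
steps = 200_000       # training steps
batch_size = 1024     # sequences per step
seq_len = 2048        # tokens per sequence (one pre-tokenized chunk)

total_tokens = steps * batch_size * seq_len
print(f"{total_tokens:,}")  # 419,430,400,000, i.e. ~420B tokens
```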

### Benefits

- **Storage Efficient**: No need to download the full 10TB Dolma dataset
- **Memory Efficient**: Stream data directly with `load_dataset(..., streaming=True)`
- **Reproducible**: All models see identical data in identical order
- **Fast**: Skip tokenization during training
- **Simple**: Minimal boilerplate code needed

### Usage

1. Set up Hugging Face credentials in a `.env` file:
```
# Get a token from https://huggingface.co./settings/tokens
HF_USERNAME=your_username
HF_TOKEN=your_token
```
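
A minimal sketch of how these credentials might be consumed at runtime, assuming `python-dotenv` and `huggingface_hub` are available (neither is required by the dataset itself; any way of exporting `HF_TOKEN` into the environment works):

```
import os

from dotenv import load_dotenv        # assumes python-dotenv is installed
from huggingface_hub import login

load_dotenv()                          # read HF_USERNAME / HF_TOKEN from .env
login(token=os.environ["HF_TOKEN"])    # authenticate Hugging Face Hub downloads
```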
2. Load the dataset in Python:
```
from datasets import load_dataset
dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)
```
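
The split and column names aren't pinned down above, so the following iteration sketch assumes a `train` split containing rows of pre-tokenized IDs; inspect one example (or the dataset viewer) to confirm the actual schema before training:

```
from datasets import load_dataset

# Stream the pre-tokenized data; nothing is downloaded up front
dataset = load_dataset("pico-lm/pretokenized-dolma", streaming=True)

# Peek at a single example to confirm the schema.
# The "train" split name is an assumption; adjust if the split differs.
example = next(iter(dataset["train"]))
print(example.keys())  # expect a column of 2048 pre-tokenized token IDs per row
```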