---
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- language-modeling
- causal-lm
- llm
pretty_name: Dolma
size_categories:
- 100B<n<1T
---
Tokenized (Llama 2) version of [NousResearch/dolma-v1_7-30B](https://huggingface.co./datasets/NousResearch/dolma-v1_7-30B), stored as a [Nanotron](https://github.com/huggingface/nanotron) dataset split into 10 GB chunks.
To download:
```shell
huggingface-cli download --repo-type dataset --local-dir dolma-v1_7-30B-tokenized-llama2-nanoset --local-dir-use-symlinks False NousResearch/dolma-v1_7-30B-tokenized-llama2-nanoset
```
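The same download can also be done from Python via `huggingface_hub` (a minimal sketch; it fetches all chunk files into the same local directory as the CLI command above):
```python
from huggingface_hub import snapshot_download

# Download every file in the dataset repo into a local directory,
# mirroring the huggingface-cli command above.
snapshot_download(
    repo_id="NousResearch/dolma-v1_7-30B-tokenized-llama2-nanoset",
    repo_type="dataset",
    local_dir="dolma-v1_7-30B-tokenized-llama2-nanoset",
)
```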
To recombine:
```shell
cat dolma-v1_7-30B-tokenized-llama2-nanoset/dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy.* > dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy
rm -rf dolma-v1_7-30B-tokenized-llama2-nanoset
```
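Since `cat` performs a plain byte-level concatenation, a quick size check run before the `rm -rf` above can confirm nothing was lost (a sketch, assuming the paths used above):
```python
from pathlib import Path

# Run before deleting the chunk directory: the combined file should be
# exactly the sum of the chunk sizes and, for int32 tokens, a multiple
# of 4 bytes.
combined = Path("dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy")
chunks = sorted(Path("dolma-v1_7-30B-tokenized-llama2-nanoset").glob("*.npy.*"))
assert combined.stat().st_size == sum(c.stat().st_size for c in chunks)
assert combined.stat().st_size % 4 == 0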
The recombined file can also be used directly with NumPy, for example:
```python
import numpy as np

# Memory-map the token stream rather than loading it into RAM; the file
# is interpreted as a flat, C-ordered array of int32 token ids.
dataset_buffer_mmap = np.memmap("dolma-v1_7-30B-tokenized-llama2-nanoset_input_ids.npy",
                                mode="r", order="C", dtype=np.int32)
dataset_buffer = memoryview(dataset_buffer_mmap)
dataset_number_of_tokens = int(len(dataset_buffer))
```
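To spot-check the contents, the token ids can be decoded with a Llama 2 tokenizer (a sketch; the `NousResearch/Llama-2-7b-hf` tokenizer repo is an assumption, and any Llama 2 tokenizer should give the same result):
```python
from transformers import AutoTokenizer

# Decode the first few tokens of the memory-mapped array as a sanity
# check. The tokenizer repo name here is an assumption.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")
first_tokens = dataset_buffer_mmap[:32].tolist()
print(tokenizer.decode(first_tokens))
```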