---
language:
  - en
pretty_name: 'TokenMonster Datasets: English, Code, Fiction, Non-fiction'
size_categories:
  - 1B<n<10B
tags:
  - text
  - english
  - fiction
  - nonfiction
  - non-fiction
  - code
  - code samples
  - tokenization
  - tokenization datasets
  - datasets
task_categories:
  - text-generation
---

# TokenMonster Datasets: English, Code, Fiction, Non-fiction

Included are the datasets that were used to generate the TokenMonster pre-built vocabularies.

The training data came mostly from the RedPajama 1B Token Sample. However, to reduce the proportion of formal English and give more weight to other languages, informal writing, and code, c4_sample & cc_sample were cropped to 100MB each, and Reddit conversations data was added (also cropped to 100MB).
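The cropping step is simple enough to sketch. The following is a minimal illustration of cropping a sample to its first 100MB at a line boundary; it is not the script actually used, and the input filename is hypothetical.

```python
# Sketch: crop a text sample to roughly 100 MB, cutting at a line
# boundary so no line or UTF-8 sequence is split mid-way.
LIMIT = 100 * 1024 * 1024  # 100 MB

def crop_sample(src: str, dst: str, limit: int = LIMIT) -> None:
    written = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        for line in fin:
            if written + len(line) > limit:
                break
            fout.write(line)
            written += len(line)

# Hypothetical input filename; the output name matches the table below.
crop_sample("c4_sample_full.txt", "c4_sample.txt")
```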

Additionally, equally weighted code samples of 2MB per language (code_2mb) and 10MB per language (code_10mb) were added for 30 different programming languages, to ensure that every language is represented. The source of the code samples was codeparrot/github-code. To ensure a range of coding styles, I allowed only one file per GitHub repository and, per file, a maximum of 200 lines selected from the middle of the file.
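A sketch of that sampling procedure, assuming the streaming interface of the `datasets` library: the field names (`repo_name`, `language`, `code`) match codeparrot/github-code, but the byte budget, output naming, and filtering logic here are assumptions, not the exact script used.

```python
# Sketch: stream codeparrot/github-code, keep at most one file per
# repository, and take up to 200 lines from the middle of each file,
# stopping once the per-language byte budget is reached.
from datasets import load_dataset

BUDGET = 2 * 1024 * 1024  # 2 MB per language, as in code_2mb

def middle_lines(code: str, max_lines: int = 200) -> str:
    lines = code.splitlines()
    if len(lines) <= max_lines:
        return code
    start = (len(lines) - max_lines) // 2
    return "\n".join(lines[start:start + max_lines])

def sample_language(language: str, out_path: str) -> None:
    ds = load_dataset(
        "codeparrot/github-code",
        split="train",
        streaming=True,
        trust_remote_code=True,  # the dataset uses a loading script
    )
    seen_repos, written = set(), 0
    with open(out_path, "w", encoding="utf-8") as out:
        for row in ds:
            if row["language"] != language or row["repo_name"] in seen_repos:
                continue
            seen_repos.add(row["repo_name"])
            chunk = middle_lines(row["code"]) + "\n"
            out.write(chunk)
            written += len(chunk.encode("utf-8"))
            if written >= BUDGET:
                break

sample_language("Python", "code_python_2mb.txt")  # hypothetical output name
```

Filtering client-side, as above, is slow; codeparrot/github-code's loading script also accepts a `languages=[...]` argument that filters server-side, which would be the faster route in practice.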

Given the evolving nature of writing styles, I felt that book_sample.txt, which consists of out-of-copyright books, was not a good representation of contemporary fiction. To better represent a modern style, I curated fiction.txt and fiction_100mb.txt by combining several other datasets and cleaning up the result.

| Filename | Filesize (bytes) |
| --- | ---: |
| arxiv_sample.txt | 88,925,569 |
| book_sample.txt | 108,069,616 |
| c4_sample.txt | 100,560,318 |
| cc_2023-06_sample.txt | 100,852,231 |
| code_2mb.txt | 62,895,904 |
| code_10mb.txt | 314,006,799 |
| fiction.txt | 357,119,086 |
| fiction_100mb.txt | 94,235,489 |
| github_sample.txt | 191,123,094 |
| stackexchange_sample.txt | 71,940,138 |
| wikipedia_sample.txt | 79,181,873 |
| reddit.txt | 100,027,565 |

Note: fiction_100mb.txt is a subset of fiction.txt, and code_2mb.txt is a subset of code_10mb.txt.
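Any individual file can be fetched with `huggingface_hub`. A minimal sketch follows; the `repo_id` below is a placeholder, so substitute this dataset's actual repository id.

```python
# Sketch: download one of the files listed above from the Hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="alasdairforsythe/tokenmonster-datasets",  # placeholder repo id
    filename="fiction_100mb.txt",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded file
```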

## License