---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: labels
    sequence: int64
  splits:
  - name: train
    num_bytes: 67744337720
    num_examples: 79514
  download_size: 1125510240
  dataset_size: 67744337720
---
# Dataset Card for "pg_books-tokenized-bos-eos-chunked-65536"
The pg19 dataset tokenized with the LLaMA tokenizer into 65,536-token (64k) chunks, each bookended with BOS and EOS tokens.
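
A minimal sketch of how such chunks could be produced, not the original preprocessing script: tokenize each pg19 book with a LLaMA tokenizer, wrap it in BOS/EOS, then concatenate the streams and cut them into fixed 65,536-token chunks with matching `attention_mask` and `labels` columns. The tokenizer name and the pg19 repo id below are assumptions.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

CHUNK_LEN = 65_536

# Assumed LLaMA tokenizer; the card does not state which checkpoint was used.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Assumed source repo id for pg19.
pg19 = load_dataset("deepmind/pg19", split="train")

def tokenize_and_bookend(example):
    # add_special_tokens=False so BOS/EOS placement is controlled explicitly
    ids = tokenizer(example["text"], add_special_tokens=False)["input_ids"]
    return {"input_ids": [tokenizer.bos_token_id] + ids + [tokenizer.eos_token_id]}

tokenized = pg19.map(tokenize_and_bookend, remove_columns=pg19.column_names)

def chunk(batch):
    # Concatenate all token streams in the batch and cut into fixed-length chunks,
    # dropping any trailing remainder shorter than CHUNK_LEN.
    flat = [tok for ids in batch["input_ids"] for tok in ids]
    chunks = [flat[i:i + CHUNK_LEN] for i in range(0, len(flat), CHUNK_LEN)]
    chunks = [c for c in chunks if len(c) == CHUNK_LEN]
    return {
        "input_ids": chunks,
        "attention_mask": [[1] * CHUNK_LEN for _ in chunks],
        "labels": [list(c) for c in chunks],
    }

chunked = tokenized.map(chunk, batched=True, remove_columns=tokenized.column_names)
```

The published dataset itself can be loaded directly with `load_dataset` using this repo's id and `split="train"`.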