Taiwan Corpus
This dataset is designed for Traditional Chinese (zh-tw) and comprises a collection of texts from various sources, including news articles, scientific publications, technological reports, and Wikipedia entries. The total number of tokens is listed below.
Total tokens: 9.1B (counted with the LLaMA 2 tokenizer)
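The dataset can be loaded with the Hugging Face datasets library: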
from datasets import load_dataset

# Load the full training split of the corpus
dataset = load_dataset("benchang1110/Taiwan-pretrain-9B", split="train")
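
For reference, here is a minimal sketch of how a token count like the one above could be reproduced. It assumes the raw text lives in a "text" column (the column name is an assumption) and uses the gated meta-llama/Llama-2-7b-hf tokenizer, which requires approved access; any compatible tokenizer can be substituted:

from datasets import load_dataset
from transformers import AutoTokenizer

# Stream the dataset so the ~9.1B-token corpus is never held in memory at once
dataset = load_dataset("benchang1110/Taiwan-pretrain-9B", split="train", streaming=True)

# The card states tokens were counted with the LLaMA 2 tokenizer;
# this checkpoint is gated, so request access on the Hub first (assumption)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

total = 0
for example in dataset:
    # "text" is the assumed name of the column holding the raw text
    total += len(tokenizer(example["text"], add_special_tokens=False)["input_ids"])
print(f"Total tokens: {total:,}")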