
Description

This dataset was used to pre-train Co-Encoder's Context Encoder for our entry in LOCAL AI HACKATHON #000.

Token counts (measured with the calm2-chat tokenizer; a counting sketch follows the table):

| Language | Tokens |
|----------|--------|
| Japanese | 4.7B   |
| English  | 5B     |
| Code     | 0.9B   |
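
The counts above could be reproduced with a short script like the one below. This is a minimal sketch, assuming the tokenizer is loaded from cyberagent/calm2-7b-chat and that the dataset exposes a "train" split with a "text" column; neither detail is stated in this card.

```python
# Minimal token-counting sketch.
# Assumptions (not stated in the card): model id "cyberagent/calm2-7b-chat",
# a "train" split, and a "text" column in each example.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cyberagent/calm2-7b-chat")

# Stream the dataset so the full corpus does not have to be downloaded at once.
dataset = load_dataset("sudy-super/JetCopper-10B", split="train", streaming=True)

total_tokens = 0
for example in dataset:
    total_tokens += len(tokenizer(example["text"])["input_ids"])

print(f"Total tokens: {total_tokens:,}")
```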

NOTE

This dataset has not undergone sentence-boundary detection or perplexity filtering, so there is room for improvement in quality. A sketch of such a filter follows this note.
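
If stricter quality is required, a downstream user could apply perplexity filtering themselves. The sketch below is illustrative only: the scoring model (gpt2 is a placeholder; a Japanese-capable LM would be more appropriate for the Japanese split), the threshold, and the "text" column are assumptions, not details from this card.

```python
# Illustrative perplexity filter; model, threshold, and column name are placeholders.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder scoring model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

def perplexity(text: str) -> float:
    # Lower perplexity roughly corresponds to more fluent text under the scoring model.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return math.exp(loss.item())

def keep(example, threshold: float = 1_000.0) -> bool:
    # Keep documents whose perplexity is below the (assumed) threshold.
    return perplexity(example["text"]) < threshold

# Example usage with a datasets.Dataset loaded elsewhere:
# filtered = dataset.filter(keep)
```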
