---
dataset_info:
  features:
    - name: lang
      dtype: string
    - name: seed
      dtype: string
  splits:
    - name: train
      num_bytes: 3114466
      num_examples: 10000
  download_size: 1629429
  dataset_size: 3114466
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---
This dataset contains 10,000 random snippets of 5-15 lines each, extracted from bigcode/starcoderdata.
Specifically, it covers 10 languages: Haskell, Python, cpp, java, typescript, shell, csharp, rust, php, and swift. For each language, I collect 1,000 documents and then extract 5-15 random lines from each document to form the snippets (see the sketch below).
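The collection process can be sketched roughly as follows. This is a minimal sketch, not the exact script used to build the dataset: the per-language `data_dir` names, the `content` column name, and the reading of "5-15 random lines" as a contiguous block are assumptions.

```python
# Rough sketch of the seed-snippet collection described above.
# Assumptions: starcoderdata exposes per-language subsets via data_dir,
# and documents carry their text in a "content" column.
import random
from datasets import load_dataset

LANGS = ["haskell", "python", "cpp", "java", "typescript",
         "shell", "csharp", "rust", "php", "swift"]  # assumed data_dir names
DOCS_PER_LANG = 1000

def sample_snippet(text: str, min_lines: int = 5, max_lines: int = 15) -> str:
    """Pick a random contiguous block of 5-15 lines from a document."""
    lines = text.splitlines()
    n = random.randint(min_lines, max_lines)
    if len(lines) <= n:
        return text
    start = random.randrange(len(lines) - n + 1)
    return "\n".join(lines[start:start + n])

records = []
for lang in LANGS:
    # Stream the per-language subset so the full corpus is never downloaded.
    ds = load_dataset("bigcode/starcoderdata", data_dir=lang,
                      split="train", streaming=True)
    for i, doc in enumerate(ds):
        if i >= DOCS_PER_LANG:
            break
        records.append({"lang": lang, "seed": sample_snippet(doc["content"])})
```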
See Magicoder and its seed-snippet collection process. In my use case, I needed inspiration documents for generating synthetic datasets.