|
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: dump
    dtype: string
  - name: lang
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 3124929718.313386
    num_examples: 518410
  download_size: 2971113091
  dataset_size: 3124929718.313386
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- en
- ru
- zh
- es
tags:
- code
pretty_name: Coding Tutorials
size_categories:
- 100K<n<1M
---
|
# Coding Tutorials |
|
|
|
This dataset consists of **500,000** documents, totaling roughly **1.5 billion** tokens.
It is predominantly composed of coding tutorials compiled from web crawl datasets such as **RefinedWeb**, **OSCAR**, and **Escorpius**.
Documents were selected by filtering with regular expressions, so that most of the retained files contain programming code.
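The exact filtering rules are not published with this card; the sketch below only illustrates what a regex-based code filter of this kind could look like. The patterns and the `looks_like_code` helper are illustrative assumptions, not the actual pipeline.

```python
import re

# Illustrative patterns only; the real filtering rules behind this dataset
# are not published. These match common code constructs such as includes,
# function/class definitions, and method signatures.
CODE_PATTERNS = [
    re.compile(r"^\s*#include\s*<\w+(\.h)?>", re.MULTILINE),                 # C/C++ includes
    re.compile(r"^\s*(def|class)\s+\w+\s*[(:]", re.MULTILINE),               # Python defs/classes
    re.compile(r"^\s*(public|private|protected)\s+\w+.*\(.*\)\s*\{", re.MULTILINE),  # Java/C# methods
    re.compile(r"\bfunction\s+\w+\s*\(", re.MULTILINE),                      # JavaScript functions
]

def looks_like_code(text: str, min_hits: int = 2) -> bool:
    """Heuristic: keep a document if at least `min_hits` code-like patterns match."""
    hits = sum(1 for pattern in CODE_PATTERNS if pattern.search(text))
    return hits >= min_hits
```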
|
|
|
These tutorials offer more than bare code snippets.
They provide extensive context, including the rationale behind the code, the problem being addressed, and detailed step-by-step instructions.
This layered context is helpful for training a code LM, enabling it to discern the user intent behind a piece of code and provide more contextually relevant assistance.
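Per the `configs` section above, the data ships as a single `default` config with a `train` split stored under `data/train-*`. Below is a minimal loading sketch using the 🤗 `datasets` library; the repository id is a placeholder, since the Hub path is not stated in this card.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub path of this dataset.
ds = load_dataset("your-username/coding-tutorials", split="train")

# Each example exposes the features declared in the card:
# text, url, dump, lang, source.
example = ds[0]
print(example["lang"], example["url"])
print(example["text"][:500])
```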
|
|
|
### Programming Language Distribution |
|
```
cpp        │ 39% █████████████████████████
python     │ 25% ████████████████
java       │ 16% ███████████
csharp     │  3% ██
javascript │  1% █
kotlin     │  1% █
other      │ 14% █████████
```
|
|
|
### Natural Language Distribution
|
```
en │ 80% █████████████████████████
ru │ 16% █████
zh │  2% █
es │  2% █
```