---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: url
    dtype: string
  - name: dump
    dtype: string
  - name: lang
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 3124929718.313386
    num_examples: 518410
  download_size: 2971113091
  dataset_size: 3124929718.313386
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-generation
language:
- en
- ru
- zh
- es
tags:
- code
pretty_name: Coding Tutorials
size_categories:
- 100K<n<1M
---
# Coding Tutorials
This dataset contains just over **500,000** documents (518,410 in total), amounting to roughly **1.5 billion** tokens.
It is predominantly composed of coding tutorials compiled from several web-crawl datasets, including **RefinedWeb**, **OSCAR**, and **Escorpius**.
Documents were selected with regular-expression filters so that most of the retained content contains programming code.
These tutorials offer more than bare code snippets: they provide extensive context, including the rationale behind the code, the problem being addressed, and detailed step-by-step instructions.
This layered context is useful for training a code LM, helping it discern the user intent behind a piece of code and provide more contextually relevant assistance.
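
The exact filtering rules are not published with the card; the snippet below is only a minimal sketch of what a regex-based pre-filter of this kind could look like. The patterns and the `looks_like_code_tutorial` helper are illustrative assumptions, not the filters actually used to build the dataset.

```python
import re

# Illustrative heuristics that suggest a document contains programming code.
# These patterns are assumptions for demonstration, not the dataset's real filters.
CODE_PATTERNS = [
    re.compile(r"```"),                                      # fenced code blocks
    re.compile(r"\bdef\s+\w+\s*\("),                         # Python function definitions
    re.compile(r"#include\s*<\w+"),                          # C/C++ includes
    re.compile(r"\bpublic\s+(static\s+)?\w+\s+\w+\s*\("),    # Java/C# method signatures
    re.compile(r"\bconsole\.log\s*\("),                      # JavaScript logging calls
]

def looks_like_code_tutorial(text: str, min_hits: int = 2) -> bool:
    """Return True if the document matches enough code-related patterns."""
    hits = sum(1 for pattern in CODE_PATTERNS if pattern.search(text))
    return hits >= min_hits
```

Requiring several independent pattern hits (rather than a single match) is one way such a filter can reduce false positives from prose that merely mentions programming.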
### Programming Language Distribution
```
cpp        │ 39% █████████████████████████
python     │ 25% ████████████████
java       │ 16% ██████████
csharp     │  3% ██
javascript │  1% █
kotlin     │  1% █
other      │ 14% █████████
```
### Natural Language Distribution
```
en │ 80% █████████████████████████
ru │ 16% █████
zh │  2% █
es │  2% █
```
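
The dataset follows the standard `datasets` layout declared in the card metadata (fields `text`, `url`, `dump`, `lang`, and `source`). The sketch below shows how it could be loaded; the repository id `user/coding-tutorials` is a placeholder and should be replaced with the actual repo id.

```python
from datasets import load_dataset

# "user/coding-tutorials" is a placeholder repo id, not the real dataset path.
ds = load_dataset("user/coding-tutorials", split="train")

# Each example exposes the fields declared in the card's metadata.
example = ds[0]
print(example["lang"], example["source"], example["url"])
print(example["text"][:500])  # preview the first 500 characters of the tutorial
```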