---
tags:
- python
- code
---

# CodeParrot 🦜 Dataset

## What is it?

The CodeParrot dataset is a dump of Python files from GitHub.

## Creation

It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is 180 GB (50 GB compressed) in size. The SQL query used to create the dataset is the following:

```sql
SELECT
  f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
  `bigquery-public-data.github_repos.files` AS f
JOIN
  `bigquery-public-data.github_repos.contents` AS c
ON
  f.id = c.id
JOIN
  `bigquery-public-data.github_repos.licenses` AS l
ON
  f.repo_name = l.repo_name
WHERE
  NOT c.binary
  AND ((f.path LIKE '%.py')
  AND (c.size BETWEEN 1024 AND 1048575))
```
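
A query like the one above can also be run programmatically. The snippet below is a minimal sketch using the official `google-cloud-bigquery` client; the project ID and destination table are placeholders and not part of this dataset, and you need a GCP project with BigQuery enabled and billing configured.

```python
# Minimal sketch: run the extraction query with the BigQuery Python client.
# "your-project" and the destination table ID are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="your-project")

query = """
SELECT
  f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
  `bigquery-public-data.github_repos.files` AS f
JOIN
  `bigquery-public-data.github_repos.contents` AS c ON f.id = c.id
JOIN
  `bigquery-public-data.github_repos.licenses` AS l ON f.repo_name = l.repo_name
WHERE
  NOT c.binary
  AND f.path LIKE '%.py'
  AND c.size BETWEEN 1024 AND 1048575
"""

# Write the results to a destination table instead of streaming ~180 GB locally.
job_config = bigquery.QueryJobConfig(
    destination="your-project.your_dataset.python_files"
)
client.query(query, job_config=job_config).result()  # waits for the job to finish
```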

## Duplication

Note that about 70% of the dataset is duplicated. If you use the dataset, make sure to handle the duplicates appropriately. See [codeparrot-clean](https://huggingface.co/datasets/lvwerra/codeparrot-clean) for a deduplicated version of this dataset.
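
For illustration, one simple way to drop exact duplicates is to hash each file's content and keep only the first occurrence. The sketch below assumes records are dicts with a `content` field (matching the `c.content` column selected above); detecting near-duplicates (e.g. with MinHash) is a separate, more involved step.

```python
# Minimal sketch of exact-match deduplication by content hash.
import hashlib


def drop_exact_duplicates(records):
    """Yield only the first occurrence of each distinct file content."""
    seen = set()
    for record in records:
        digest = hashlib.md5(record["content"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield record


# Usage (hypothetical): unique_files = list(drop_exact_duplicates(raw_files))
```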