dhuck committed
Commit 97a19c4
1 parent: 41e0243

Update README.md

Files changed (1): README.md (+27 −74)

README.md CHANGED
pretty_name: Functional Code
size_categories:
- 100K<n<1M
---

# Dataset Card for Dataset Name

## Dataset Description

Collection of functional programming languages from GitHub.

- **Point of Contact:** dhuck

### Dataset Summary

This dataset is a collection of code examples in functional programming languages for code generation tasks. It was collected over a week-long period in March 2023 as part of a project in program synthesis.

## Dataset Structure

### Data Instances

```
{
  'id': str
  'repository': str
  'filename': str
  'license': str or Empty
  'language': str
  'content': str
}
```
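Given this schema, a record can be checked with a small helper. This is a minimal sketch: the field names come from the card, while the example record below is hypothetical, not drawn from the dataset.

```python
# Validate that a record matches the instance schema above.
REQUIRED_FIELDS = {"id", "repository", "filename", "license", "language", "content"}

def is_valid_instance(record: dict) -> bool:
    """All schema fields present; every value a string, except license may be None."""
    if set(record) != REQUIRED_FIELDS:
        return False
    for key, value in record.items():
        if key == "license":
            if not (value is None or isinstance(value, str)):
                return False
        elif not isinstance(value, str):
            return False
    return True

# Hypothetical example record, illustrating the shape of one instance.
example = {
    "id": "0" * 64,                # SHA-256 hex digest of `content`
    "repository": "user/repo",     # hypothetical repository name
    "filename": "src/Main.hs",     # hypothetical path
    "license": None,               # license may be empty
    "language": "Haskell",
    "content": 'main = putStrLn "hello"',
}
```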

### Data Fields

* `id`: SHA-256 hash of the content field. This ID scheme ensures that duplicate code examples, arising via forks or other duplication, are removed from the dataset.
* `repository`: The repository the file was pulled from. This can be used for attribution or to check for updated licensing issues for the code example.
* `filename`: Filename of the code example within the repository.
* `license`: Licensing information of the repository. This can be empty, and further work is likely necessary to parse licensing information from individual files.
* `language`: Programming language of the file, for example Haskell, Clojure, Lisp, etc.
* `content`: Source code of the file. This is the full text of the source with some cleaning, as described in the Curation section below. While many examples are short, others can be extremely long. This field will likely require preprocessing for end tasks.
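The `id` can be reproduced by hashing the `content` field. A sketch of the presumed scheme (hashing the UTF-8 encoded text is an assumption; the card only states that the ID is a SHA-256 hash of the content):

```python
import hashlib

def content_id(content: str) -> str:
    # Assumed scheme: SHA-256 over the UTF-8 encoded source text.
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

# Identical content (e.g. from a fork) hashes to the same id,
# so duplicates can be dropped by keying the dataset on `id`.
```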

### Data Splits

More information to be provided at a later date. There are 157,218 test examples and 628,869 training examples. The split was created using `scikit-learn`'s `train_test_split` function.
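The reported counts correspond to roughly an 80/20 train/test split, as a quick check of the numbers from this card shows:

```python
# Example counts as reported in this dataset card.
train_count, test_count = 628_869, 157_218
total = train_count + test_count

test_fraction = test_count / total
print(f"{total} examples, test fraction = {test_fraction:.4f}")  # 0.2000
```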

## Dataset Creation

### Curation Rationale

This dataset was put together for program synthesis tasks. The majority of available datasets consist of imperative programming languages, while the program synthesis community has a rich history of methods using functional languages. This dataset aims to unify the two approaches by making a large training corpus of functional languages available to researchers.

### Source Data

#### Initial Data Collection and Normalization

Code examples were collected in a similar manner to other existing programming language datasets. Each example was pulled from public repositories on GitHub over a week in March 2023. I performed this task by searching for common file extensions of the target languages (Clojure, Elixir, Haskell, Lisp, OCaml, Racket, and Scheme). The full source is included for each code example, so padding or truncation will be necessary for any training tasks.

Significant effort was made to remove personal information from each code example. I removed any email addresses or websites using simple regex pattern matching. spaCy NER was used to identify proper names in the comments only. Any token which spanned a name was replaced with the token `PERSON`, while email addresses and websites were dropped from each comment. Organizations and other information were left intact.
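The exact patterns used are not given in the card; the following is a minimal, simplified sketch of the regex-based scrubbing step (the spaCy NER pass over names is only described in a comment, not implemented here):

```python
import re

# Hypothetical, simplified patterns -- the card does not specify the exact regexes used.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
URL_RE = re.compile(r"https?://\S+|www\.\S+")

def scrub_comment(text: str) -> str:
    """Drop email addresses and websites from a comment string."""
    text = EMAIL_RE.sub("", text)
    text = URL_RE.sub("", text)
    # The original pipeline additionally ran spaCy NER over comments and
    # replaced any token spanning a person's name with the literal "PERSON".
    return text

print(scrub_comment("-- written by jane@example.com, see https://example.com/docs"))
```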

#### Who are the source language producers?

Each example records the repository the code originated from, identifying the source of each example.

### Personal and Sensitive Information

While great care was taken to remove proper names, email addresses, and websites, there may exist examples where pattern matching did not work. While I used the best spaCy models available, I did witness false negatives on other tasks on other datasets. To ensure no personal information makes it into training data, it is advisable to remove all comments if the training task does not require them. I made several PRs to the `comment_parser` Python library to support the languages in this dataset. My version of the parsing library can be found at [https://github.com/d-huck/comment_parser](https://github.com/d-huck/comment_parser).
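If comments should be dropped entirely, a simplistic regex sketch for Haskell-style comments looks like the following. This is an illustration only, not the card's actual tooling (which used a fork of `comment_parser`); it ignores edge cases such as `--` inside string literals and nested block comments.

```python
import re

# Remove {- ... -} block comments (non-nested) and -- line comments.
BLOCK_COMMENT_RE = re.compile(r"\{-.*?-\}", re.DOTALL)
LINE_COMMENT_RE = re.compile(r"--[^\n]*")

def strip_haskell_comments(source: str) -> str:
    source = BLOCK_COMMENT_RE.sub("", source)
    return LINE_COMMENT_RE.sub("", source)

code = '{- module docs -}\nmain :: IO ()  -- entry point\nmain = putStrLn "hi"\n'
print(strip_haskell_comments(code))
```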

## Considerations for Using the Data

### Discussion of Biases

While code itself may not contain bias, programmers can use offensive, racist, homophobic, transphobic, misogynistic, or otherwise harmful words for variable names. Further updates to this dataset will investigate and address these issues. Comments in the code examples could also contain hateful speech. Models trained on this dataset may need additional training on toxicity to remove these tendencies from their output.

### Other Known Limitations

The code present in this dataset has not been checked for quality in any way. It is possible and probable that several of the code examples are of poor quality and do not actually compile or run in their target language. Furthermore, there is a chance that some examples are not in the language they claim to be, since GitHub search matching depends only on the file extension and not on the actual contents of any file.