jordiae committed
Commit: eef9304
Parent(s): 7e95be1

Create new file

Files changed (1)
  1. README.md +46 -0
README.md ADDED
@@ -0,0 +1,46 @@
# ExeBench: an ML-scale dataset of executable C functions

## Usage

```python
# Load a dataset split; in this case, the synthetic test split
from datasets import load_dataset

dataset = load_dataset('jordiae/exebench', split='test_synth')
```

See https://github.com/jordiae/exebench for more examples.

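The split loads as a standard `datasets.Dataset`. As a minimal sanity-check sketch (assuming the `datasets` library is installed and the Hugging Face Hub is reachable; it does not assume any particular column layout), you can print the split size and the column names of the first record:

```python
from datasets import load_dataset

# Load the synthetic test split (downloaded from the Hugging Face Hub on first use)
dataset = load_dataset('jordiae/exebench', split='test_synth')

# Number of function records in this split
print(f'{len(dataset)} examples in test_synth')

# Each record behaves like a dict; list its keys rather than hard-coding a schema
example = dataset[0]
print(sorted(example.keys()))
```
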
## License

All C functions keep the original license of their source GitHub repository (available in the metadata). All ExeBench contributions (I/O examples, the boilerplate needed to run the functions, etc.) are released under the MIT license.

## Citation

```bibtex
@inproceedings{10.1145/3520312.3534867,
  author = {Armengol-Estap\'{e}, Jordi and Woodruff, Jackson and Brauckmann, Alexander and Magalh\~{a}es, Jos\'{e} Wesley de Souza and O'Boyle, Michael F. P.},
  title = {ExeBench: An ML-Scale Dataset of Executable C Functions},
  year = {2022},
  isbn = {9781450392730},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  url = {https://doi.org/10.1145/3520312.3534867},
  doi = {10.1145/3520312.3534867},
  abstract = {Machine-learning promises to transform compilation and software engineering, yet is frequently limited by the scope of available datasets. In particular, there is a lack of runnable, real-world datasets required for a range of tasks ranging from neural program synthesis to machine learning-guided program optimization. We introduce a new dataset, ExeBench, which attempts to address this. It tackles two key issues with real-world code: references to external types and functions and scalable generation of IO examples. ExeBench is the first publicly available dataset that pairs real-world C code taken from GitHub with IO examples that allow these programs to be run. We develop a toolchain that scrapes GitHub, analyzes the code, and generates runnable snippets of code. We analyze our benchmark suite using several metrics, and show it is representative of real-world code. ExeBench contains 4.5M compilable and 700k executable C functions. This scale of executable, real functions will enable the next generation of machine learning-based programming tasks.},
  booktitle = {Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming},
  pages = {50--59},
  numpages = {10},
  keywords = {Code Dataset, Program Synthesis, Mining Software Repositories, C, Machine Learning for Code, Compilers},
  location = {San Diego, CA, USA},
  series = {MAPS 2022}
}
```

## Credits

We thank the AnghaBench authors for their type-inference-based generation of synthetic dependencies for C functions.

## Contact

```
jordi.armengol.estape at ed.ac.uk
```