|
--- |
|
tags: |
|
- code |
|
pretty_name: CoCoNuT-Python(2010) |
|
--- |
|
|
|
# Dataset Card for CoCoNuT-Python(2010) |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [CoCoNuT training data](https://github.com/lin-tan/CoCoNut-Artifact/releases/tag/training_data_1.0.0) |
|
- **Repository:** [CoCoNuT repository](https://github.com/lin-tan/CoCoNut-Artifact) |
|
- **Paper:** [CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair](https://dl.acm.org/doi/abs/10.1145/3395363.3397369) |
|
|
|
### Dataset Summary |
|
|
|
This dataset is part of the data used to train the models presented in the paper "CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair".
|
These datasets contain raw data extracted from GitHub, GitLab, and Bitbucket, and have neither been shuffled nor tokenized. |
|
The year in the dataset's name is the cutoff year: it indicates the year of the newest commit included in the dataset.
|
|
|
### Languages |
|
|
|
- Python |
|
|
|
## Dataset Structure |
|
|
|
### Data Fields |
|
|
|
The dataset consists of 4 columns: `add`, `rem`, `context`, and `meta`. |
|
These match the original dataset files: `add.txt`, `rem.txt`, `context.txt`, and `meta.txt`. |
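
For orientation, below is a minimal loading sketch using the 🤗 `datasets` library; the repository id and the single `train` split are assumptions and should be replaced with the actual location of this dataset.

```python
# Minimal loading sketch (hypothetical hub id and split; adjust to the real location).
from datasets import load_dataset

ds = load_dataset("your-namespace/coconut-python2010", split="train")

print(ds.column_names)  # expected: ['add', 'rem', 'context', 'meta']
print(ds[0]["rem"])     # buggy line/hunk of the first instance
print(ds[0]["add"])     # corresponding fixed line/hunk
```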
|
|
|
### Data Instances |
|
|
|
The 4 columns are aligned row by row: the i-th rows of `rem`, `add`, `context`, and `meta` together form one instance.
|
For example: |
|
|
|
First 5 rows of `rem` (i.e., the buggy line/hunk):
|
|
|
``` |
|
1 public synchronized StringBuffer append(char ch) |
|
2 ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; |
|
3 public String substring(int beginIndex, int endIndex) |
|
4 if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length); |
|
5 public Object next() { |
|
``` |
|
|
|
First 5 rows of `add` (i.e., the fixed line/hunk):
|
|
|
``` |
|
1 public StringBuffer append(Object obj) |
|
2 return append(obj == null ? "null" : obj.toString()); |
|
3 public String substring(int begin) |
|
4 return substring(begin, count); |
|
5 public FSEntry next() { |
|
``` |
|
|
|
Row by row, these form the following 5 instances:
|
|
|
```diff |
|
- public synchronized StringBuffer append(char ch) |
|
+ public StringBuffer append(Object obj) |
|
``` |
|
|
|
```diff |
|
- ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; |
|
+ return append(obj == null ? "null" : obj.toString()); |
|
``` |
|
|
|
```diff |
|
- public String substring(int beginIndex, int endIndex) |
|
+ public String substring(int begin) |
|
``` |
|
|
|
```diff |
|
- if (beginIndex < 0 || endIndex > count || beginIndex > endIndex) throw new StringIndexOutOfBoundsException(); if (beginIndex == 0 && endIndex == count) return this; int len = endIndex - beginIndex; return new String(value, beginIndex + offset, len, (len << 2) >= value.length); |
|
+ return substring(begin, count); |
|
``` |
|
|
|
```diff |
|
- public Object next() { |
|
+ public FSEntry next() { |
|
``` |
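
To reproduce this diff-style view programmatically, a small sketch (assuming `ds` was loaded as in the earlier snippet) could look like this:

```python
# Print the first 5 instances as "- buggy / + fixed" pairs.
for example in ds.select(range(5)):
    print(f"- {example['rem']}")
    print(f"+ {example['add']}")
    print()
```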
|
|
|
`context` contains the associated context: the buggy function, in-lined into a single line, including the buggy lines and any comments.
|
For example, the context of |
|
|
|
``` |
|
public synchronized StringBuffer append(char ch) |
|
``` |
|
|
|
is its associated function: |
|
|
|
```java |
|
public synchronized StringBuffer append(char ch) { ensureCapacity_unsynchronized(count + 1); value[count++] = ch; return this; } |
|
``` |
|
|
|
`meta` contains some metadata about the project: |
|
|
|
``` |
|
1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/core/src/classpath/java/java/lang/StringBuffer.java |
|
``` |
|
|
|
`1056` is the project id, and `/local/...` is the absolute path to the buggy file. This path can be parsed to extract the commit id (`68a6301301378680519f2b146daec37812a1bc22`), the file name (`StringBuffer.java`), and the original path within the project (`core/src/classpath/java/java/lang/StringBuffer.java`).
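
Because the example above follows a regular layout (`<project id> .../<commit id>/<file name>/buggy/<original path>`), these pieces can be recovered with a small parsing sketch; this is an illustrative helper, not part of the dataset, and it assumes every `meta` entry follows the same layout:

```python
# Parse a `meta` entry into project id, commit id, file name, and original path.
# Assumes the ".../<commit id>/<file name>/buggy/<original path>" layout shown above.
meta = ("1056 /local/tlutelli/issta_data/temp/all_java0context/java/2006_temp/2006/1056/"
        "68a6301301378680519f2b146daec37812a1bc22/StringBuffer.java/buggy/"
        "core/src/classpath/java/java/lang/StringBuffer.java")

project_id, path = meta.split(" ", 1)
parts = path.split("/")
buggy_idx = parts.index("buggy")                 # separates the metadata prefix from the project path
commit_id = parts[buggy_idx - 2]                 # "68a6301301378680519f2b146daec37812a1bc22"
file_name = parts[buggy_idx - 1]                 # "StringBuffer.java"
original_path = "/".join(parts[buggy_idx + 1:])  # "core/src/classpath/java/java/lang/StringBuffer.java"
```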
|
|
|
| Number of projects | Number of instances |
| ------------------ | ------------------- |
| 13,899             | 480,777             |
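
These figures can be re-derived from the `meta` column (again assuming `ds` is loaded as in the earlier sketch), since each entry starts with its project id:

```python
# Count distinct projects and total instances from the `meta` column.
project_ids = {meta.split(" ", 1)[0] for meta in ds["meta"]}
print(f"projects: {len(project_ids):,}  instances: {len(ds):,}")
```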
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
The data was collected to train automated program repair (APR) models.
|
|
|
### Citation Information |
|
|
|
```bib |
|
@inproceedings{lutellierCoCoNuTCombiningContextaware2020, |
|
title = {{{CoCoNuT}}: Combining Context-Aware Neural Translation Models Using Ensemble for Program Repair}, |
|
shorttitle = {{{CoCoNuT}}}, |
|
booktitle = {Proceedings of the 29th {{ACM SIGSOFT International Symposium}} on {{Software Testing}} and {{Analysis}}}, |
|
author = {Lutellier, Thibaud and Pham, Hung Viet and Pang, Lawrence and Li, Yitong and Wei, Moshi and Tan, Lin}, |
|
year = {2020}, |
|
month = jul, |
|
series = {{{ISSTA}} 2020}, |
|
pages = {101--114}, |
|
publisher = {{Association for Computing Machinery}}, |
|
address = {{New York, NY, USA}}, |
|
doi = {10.1145/3395363.3397369}, |
|
url = {https://doi.org/10.1145/3395363.3397369}, |
|
urldate = {2022-12-06}, |
|
isbn = {978-1-4503-8008-9}, |
|
keywords = {AI and Software Engineering,Automated program repair,Deep Learning,Neural Machine Translation} |
|
} |
|
``` |
|
|