---
license: apache-2.0
---
# Dataset Card for ReCode (Perturbed HumanEval)
## Dataset Description
- Repository: https://github.com/amazon-science/recode/tree/main
- Paper: https://arxiv.org/abs/2212.10264
### Dataset Summary

The ReCode benchmark applies code and natural-language transformations to code-generation benchmarks in order to evaluate the robustness of code-generation models. This dataset contains the perturbed version of HumanEval released with the paper; it was generated automatically from the original HumanEval dataset.
### Subsets

There are four transformation categories, which form the subsets of this dataset: `func_name`, `nlaugmenter`, `natgen`, and `format`.
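Each subset can be loaded as its own configuration with the `datasets` library. A minimal sketch is shown below; the Hub id is a hypothetical placeholder for wherever this dataset is hosted.

```python
from datasets import load_dataset

# "<org>/perturbed-humaneval" is a hypothetical placeholder id;
# substitute the actual Hub repository this card belongs to.
ds = load_dataset("<org>/perturbed-humaneval", "func_name", split="test")
print(ds[0]["task_id"], ds[0]["perturbation_name"])
```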
### Languages

The programming problems are written in Python and contain docstrings and comments in English.
## Dataset Structure

### Data Instances
[More Information Needed]
### Data Fields

- `task_id`: ID of the original HumanEval example
- `prompt`: the perturbed prompt
- `entry_point`: entry point for the test
- `canonical_solution`: solution for the problem in the `prompt`
- `test`: contains the function used to test generated code for correctness
- `seed`: seed of the perturbed prompt
- `perturbation_name`: name of the perturbation
- `partial`: partial solution to the problem. This field is only present for the transformation categories that perturb a partial solution: `natgen` and `format`.
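Because these fields follow the original HumanEval format, a model completion can be scored for functional correctness by executing the `prompt`, the completion, and the `test` field together. The sketch below assumes the HumanEval convention that `test` defines a `check(candidate)` function; executing model-generated code like this should be sandboxed and given a timeout in any real evaluation.

```python
def passes_tests(example: dict, completion: str) -> bool:
    # Assemble a self-contained program: perturbed prompt, model
    # completion, and the test code shipped with the example.
    program = example["prompt"] + completion + "\n" + example["test"]
    env: dict = {}
    try:
        exec(program, env)                         # defines the solution and `check`
        env["check"](env[example["entry_point"]])  # raises AssertionError on failure
        return True
    except Exception:
        return False
```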
### Data Splits
The dataset only has a test split.
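Because each HumanEval problem appears under several perturbed variants (distinguished by `seed`), robustness can be summarized by aggregating per-variant results by `task_id`. The worst-case aggregation sketched below is one illustrative choice, not necessarily the exact metric defined in the ReCode paper.

```python
from collections import defaultdict

def worst_case_pass_rate(results: dict[tuple[str, int], bool]) -> float:
    """`results` maps (task_id, seed) to whether the model's completion
    passed the tests for that perturbed variant. A task counts as
    robustly solved only if every one of its variants passes."""
    by_task = defaultdict(list)
    for (task_id, _seed), passed in results.items():
        by_task[task_id].append(passed)
    return sum(all(v) for v in by_task.values()) / len(by_task)
```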
## Dataset Creation

### Curation Rationale
[More Information Needed]
### Source Data

#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations

#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data

### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information

### Dataset Curators
[More Information Needed]
### Licensing Information

The dataset is released under the Apache 2.0 license (see the `license` field in the card metadata).
### Citation Information

```bibtex
@article{wang2022recode,
  title={ReCode: Robustness Evaluation of Code Generation Models},
  author={Wang, Shiqi and Li, Zheng and Qian, Haifeng and Yang, Chenghao and Wang, Zijian and Shang, Mingyue and Kumar, Varun and Tan, Samson and Ray, Baishakhi and Bhatia, Parminder and others},
  journal={arXiv preprint arXiv:2212.10264},
  year={2022}
}
```
### Contributions
[More Information Needed]