---
|
license: apache-2.0 |
|
pretty_name: HumanEvalPack |
|
language: |
|
- code |
|
--- |
|
|
|
![Octopack](https://github.com/bigcode-project/octopack/blob/31f3320f098703c7910e43492c39366eeea68d83/banner.png?raw=true) |
|
|
|
# Dataset Card for HumanEvalPack |
|
|
|
## Table of Contents |
|
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
|
|
|
## Dataset Description |
|
|
|
- **Repository:** https://github.com/bigcode-project/octopack |
|
- **Paper:** [OctoPack: Instruction Tuning Code Large Language Models](https://arxiv.org/abs/2308.07124)
|
- **Point of Contact:** [Niklas Muennighoff](mailto:[email protected]) |
|
|
|
### Dataset Summary |
|
|
|
> HumanEvalPack is an extension of OpenAI's HumanEval covering 3 scenarios (HumanEvalSynthesize, HumanEvalFix, HumanEvalExplain) across 6 programming languages.
>
|
- **Languages:** Python, JavaScript, Java, Go, C++, Rust |
|
- **OctoPack🐙🎒:** |
|
|
|
<table>
<tr>
<th>Data</th>
<td><a href="https://huggingface.co./datasets/bigcode/commitpack">CommitPack</a></td>
<td>4TB of GitHub commits across 350 programming languages</td>
</tr>
<tr>
<th></th>
<td><a href="https://huggingface.co./datasets/bigcode/commitpackft">CommitPackFT</a></td>
<td>Filtered version of CommitPack for high-quality commit messages that resemble instructions</td>
</tr>
<tr>
<th>Model</th>
<td><a href="https://huggingface.co./bigcode/octocoder">OctoCoder</a></td>
<td>StarCoder (16B parameters) instruction tuned on CommitPackFT + OASST</td>
</tr>
<tr>
<th>Evaluation</th>
<td><a href="https://huggingface.co./datasets/bigcode/humanevalpack">HumanEvalPack</a></td>
<td>Extension of OpenAI's HumanEval to cover 3 scenarios across 6 languages</td>
</tr>
</table>
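
Each language can be loaded as its own configuration with 🤗 Datasets. A minimal sketch, assuming the configuration names are the lowercase language identifiers (`python`, `cpp`, `js`, `java`, `go`, `rust`) and that your `datasets` version asks for `trust_remote_code` on script-based datasets:

```python
from datasets import load_dataset

# Load the Python configuration; the benchmark ships a single "test" split.
ds = load_dataset("bigcode/humanevalpack", "python", trust_remote_code=True)["test"]
print(len(ds), ds[0]["task_id"])  # expected: 164 samples, first id "Python/0"
```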
|
|
|
## Dataset Structure |
|
|
|
|
|
### Data Instances |
|
|
|
|
|
An example looks as follows: |
|
|
|
```json |
|
{
  "task_id": "Python/0",
  "prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n    \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n    given threshold.\n    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n    False\n    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n    True\n    \"\"\"\n",
  "declaration": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n",
  "canonical_solution": "    for idx, elem in enumerate(numbers):\n        for idx2, elem2 in enumerate(numbers):\n            if idx != idx2:\n                distance = abs(elem - elem2)\n                if distance < threshold:\n                    return True\n\n    return False\n",
  "buggy_solution": "    for idx, elem in enumerate(numbers):\n        for idx2, elem2 in enumerate(numbers):\n            if idx != idx2:\n                distance = elem - elem2\n                if distance < threshold:\n                    return True\n\n    return False\n",
  "bug_type": "missing logic",
  "failure_symptoms": "incorrect output",
  "entry_point": "has_close_elements",
  "import": "",
  "test_setup": "",
  "test": "\n\n\n\n\ndef check(has_close_elements):\n    assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n    assert has_close_elements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n    assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n    assert has_close_elements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n    assert has_close_elements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n    assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n    assert has_close_elements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\ncheck(has_close_elements)",
  "example_test": "def check(has_close_elements):\n    assert has_close_elements([1.0, 2.0, 3.0], 0.5) == False\n    assert has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) == True\ncheck(has_close_elements)\n",
  "signature": "has_close_elements(numbers: List[float], threshold: float) -> bool",
  "docstring": "Check if in given list of numbers, are any two numbers closer to each other than\ngiven threshold.\n>>> has_close_elements([1.0, 2.0, 3.0], 0.5)\nFalse\n>>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\nTrue",
  "instruction": "Write a Python function `has_close_elements(numbers: List[float], threshold: float) -> bool` to solve the following problem:\nCheck if in given list of numbers, are any two numbers closer to each other than\ngiven threshold.\n>>> has_close_elements([1.0, 2.0, 3.0], 0.5)\nFalse\n>>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\nTrue"
}
|
``` |
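
The fields compose into a self-contained program: `prompt` + `canonical_solution` + `test` defines the function and then runs `check(...)` on it. A minimal sanity-check sketch, assuming `ds` was loaded as in the example above; plain `exec` is only appropriate for this trusted reference code, model generations should run in a sandbox:

```python
# The concatenation is a complete Python script; check(...) raises
# AssertionError if any unit test fails.
sample = ds[0]
exec(sample["prompt"] + sample["canonical_solution"] + sample["test"], {})

# The buggy solution drops the abs(...) call, so the same tests should fail:
try:
    exec(sample["prompt"] + sample["buggy_solution"] + sample["test"], {})
except AssertionError:
    print("unit tests caught the bug in", sample["entry_point"])
```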
|
|
|
### Data Fields |
|
|
|
The data fields are the same across all language configurations:
|
- `task_id`: task identifier in the form `{LanguageName}/{number}`, e.g. `Python/0`, where the number runs from 0 to 163
|
- `prompt`: the prompt for models relying on code continuation |
|
- `declaration`: the declaration of the function (same as prompt but without the docstring) |
|
- `canonical_solution`: the correct solution passing all unit tests for the problem |
|
- `buggy_solution`: same as `canonical_solution` but with a subtle human-written bug causing the unit tests to fail |
|
- `bug_type`: the type of the bug in `buggy_solution` (one of [`missing logic`, `excess logic`, `value misuse`, `operator misuse`, `variable misuse`, `function misuse`]) |
|
- `failure_symptoms`: the problem the bug causes (one of [`incorrect output`, `stackoverflow`, `infinite loop`]) |
|
- `entry_point`: the name of the function |
|
- `import`: imports necessary for the solution (only present for Go)

- `test_setup`: imports necessary for the test execution (only present for Go)
|
- `test`: the unit tests for the problem |
|
- `example_test`: additional unit tests, different from those in `test`, that could for example be provided to the model (these are not used in the paper)
|
- `signature`: the signature of the function |
|
- `docstring`: the docstring describing the problem |
|
- `instruction`: an instruction for HumanEvalSynthesize in the form `Write a {language_name} function {signature} to solve the following problem:\n{docstring}` |
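
These fields are meant to be recombined into the task inputs. For instance, a HumanEvalFix-style input can pair the buggy function with the unit tests that expose the bug; the sketch below illustrates this composition and is not the exact prompt template from the paper (`ds` as loaded above):

```python
# Illustrative HumanEvalFix-style input: buggy code plus the failing tests,
# followed by a short repair instruction.
sample = ds[0]
fix_input = (
    sample["declaration"]
    + sample["buggy_solution"]
    + "\n"
    + sample["test"]
    + f"\nFix bugs in {sample['entry_point']}.\n"
)
print(fix_input)
```

For HumanEvalSynthesize, the ready-made `instruction` field can be used directly as the model input.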
|
|
|
### Data Splits

Each language configuration has a single `test` split with 164 samples, one per original HumanEval problem.
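
The split sizes can be verified per configuration (a sketch, assuming the configuration names listed earlier):

```python
from datasets import load_dataset

# Assumed configuration names for the six languages.
for config in ["python", "js", "java", "go", "cpp", "rust"]:
    test = load_dataset("bigcode/humanevalpack", config, trust_remote_code=True)["test"]
    print(f"{config}: {len(test)} samples")  # 164 expected for each
```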
|
|
|
## Additional Information |
|
|
|
### Licensing Information |
|
|
|
Each sample comes from a code repository with a permissive license; the specific license is provided in the `license` field of each sample.
|
|
|
### Citation Information |
|
|
|
```bibtex
@article{muennighoff2023octopack,
  title={OctoPack: Instruction Tuning Code Large Language Models},
  author={Muennighoff, Niklas and Liu, Qian and Zebaze, Armel and Zheng, Qinkai and Hui, Binyuan and Zhuo, Terry Yue and Singh, Swayam and Tang, Xiangru and von Werra, Leandro and Longpre, Shayne},
  journal={arXiv preprint arXiv:2308.07124},
  year={2023}
}
```