We can load the HumanEval dataset and the pass@k metric from 🤗 [`datasets`](https://huggingface.co/docs/datasets/index):

```python
from datasets import load_dataset, load_metric

human_eval = load_dataset("openai_humaneval")
code_eval_metric = load_metric("code_eval")
```
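Each HumanEval example contains the function signature and docstring as a `prompt`, plus a `canonical_solution`, a `test` string, and an `entry_point`. A quick peek at the loaded dataset (the `openai_humaneval` dataset has a single `test` split of 164 problems):

```python
problem = human_eval["test"][0]
print(problem.keys())
# dict_keys(['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'])
print(problem["task_id"])  # HumanEval/0
```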
We can easily compute pass@k for a problem that asks for the implementation of a function that sums two integers:

```python
import os

# code_eval executes model-generated code, so it must be enabled explicitly
os.environ["HF_ALLOW_CODE_EVAL"] = "1"

test_cases = ["assert add(2,3)==5"]
candidates = [["def add(a,b): return a*b", "def add(a, b): return a+b"]]
pass_at_k, results = code_eval_metric.compute(references=test_cases, predictions=candidates, k=[1, 2])
print(pass_at_k)
# {'pass@1': 0.5, 'pass@2': 1.0}
```
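Under the hood, `code_eval` uses the unbiased pass@k estimator from the Codex paper: given `n` candidates per problem of which `c` pass the tests, pass@k = 1 - C(n-c, k) / C(n, k), averaged over problems. A minimal sketch of the estimator (not the metric's internal code) reproduces the numbers above:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # every size-k draw contains a correct candidate
    return 1.0 - comb(n - c, k) / comb(n, k)

# 2 candidates, 1 correct (`a*b` fails the test, `a+b` passes)
print(pass_at_k(n=2, c=1, k=1))  # 0.5
print(pass_at_k(n=2, c=1, k=2))  # 1.0
```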
To better understand how the pass@k metric works, we will illustrate it with some concrete examples. We select two problems from the HumanEval dataset and see how CodeParrot 🦜 (110M) performs, and which code completions pass the unit tests of the two problems below:

**Problem 1:**

```python
from typing import List


def separate_paren_groups(paren_string: str) -> List[str]:
    """ Input to this function is a string containing multiple groups of nested parentheses. Your goal is to
    separate those group into separate strings and return the list of those.
    Separate groups are balanced (each open brace is properly closed) and not nested within each other
    Ignore any spaces in the input string.
    >>> separate_paren_groups('( ) (( )) (( )( ))')
    ['()', '(())', '(()())']
    """
```
**Problem 2:**

```python
def truncate_number(number: float) -> float:
    """ Given a positive floating point number, it can be decomposed into
    and integer part (largest integer smaller than given number) and decimals
    (leftover part always smaller than 1).

    Return the decimal part of the number.
    >>> truncate_number(3.5)
    0.5
    """
```
For each problem, instead of 200 candidate solutions, we will only generate 20 samples for illustration purposes. We use nucleus (top-p) sampling with `p=0.95` and `temperature=0.2`, and sample tokens from the model until we encounter a stop sequence indicating the end of a method: `\nclass`, `\ndef`, `\n#`, `\nif`, or `\nprint`. For more details about decoding strategies for language generation, we recommend this [blog](https://huggingface.co/blog/how-to-generate).
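As a rough sketch of this sampling setup with 🤗 `transformers` (assuming the 110M checkpoint is `codeparrot/codeparrot-small`, and using simple post-hoc truncation at the stop sequences rather than true early stopping):

```python
import re
from transformers import AutoTokenizer, AutoModelForCausalLM

STOP_SEQUENCES = ["\nclass", "\ndef", "\n#", "\nif", "\nprint"]
STOP_PATTERN = "|".join(re.escape(s) for s in STOP_SEQUENCES)

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small")

def generate_candidates(prompt: str, num_samples: int = 20) -> list:
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.95,           # nucleus sampling
        temperature=0.2,
        max_new_tokens=256,   # generation budget, an arbitrary choice here
        num_return_sequences=num_samples,
        pad_token_id=tokenizer.eos_token_id,
    )
    candidates = []
    for out in outputs:
        text = tokenizer.decode(out, skip_special_tokens=True)
        completion = text[len(prompt):]                      # keep only the generated part
        completion = re.split(STOP_PATTERN, completion)[0]   # cut at the first stop sequence
        candidates.append(completion)
    return candidates
```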
54 |
+
|
55 |
+
**Remark**:
|
56 |
+
|
57 |
+
Regarding the temperature parameter, in [CodeGen](https://github.com/salesforce/CodeGen) paper, the authors observed that the best performing temperature increases as the number of samples permitted k increases. When a model is only allowed a few samples to pass unit tests, it is beneficial to use the learned distribution, through a low temperature, to select candidates that are likely to pass. But when a model is allowed for more chances with a high k, using a higher sampling temperature to tilt the learned model distribution lets it explore diverse samples and thus more likely to synthesize a correct program.
|
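To make this concrete, sampling temperature divides the logits before the softmax, so `T < 1` sharpens the learned distribution and `T > 1` flattens it; a small numeric illustration:

```python
import numpy as np

def softmax_with_temperature(logits, T):
    z = np.array(logits) / T
    z -= z.max()          # for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [3.0, 1.0, 0.2]
print(softmax_with_temperature(logits, 0.2))  # sharp: nearly all mass on the top token
print(softmax_with_temperature(logits, 1.0))  # the learned distribution
print(softmax_with_temperature(logits, 2.0))  # flat: more exploration across tokens
```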
58 |
+
|
59 |
+
|
60 |
+
For our experiment, we compute pass@1, pass@10 and pass@20, each correspending to unit test pass rate when selecting respectively 1, 10 and 20 samples from the candidate solutions.
|
61 |
+
|
62 |
+
```
|
63 |
+
|
64 |
+
Results: {'pass@1': 0.0750, 'pass@10': 0.4473, 'pass@20': 0.5}
|
65 |
+
|
66 |
+
````
If we take a closer look at the unit-test results for each candidate solution on the two problems, we find that 3 candidates passed the tests for the second problem and none did for the first. This means that we have 3 correct solutions among 40, which corresponds to our pass@1 value `3/40 = 0.075`. The pass@10 and pass@20 scores are higher, because the more samples we select from the candidate completions, the more likely we are to include a correct implementation. As for pass@20, it is `1/2 = 0.5`: if we select all 20 candidates for each problem, the second problem gets solved, which gives a 50% success rate. If you are curious about the candidate solutions that passed the tests, they all implemented this function:
70 |
+
|
71 |
+
```python
|
72 |
+
|
73 |
+
def truncate_number(number: float) -> float:
|
74 |
+
""" Given a positive floating point number, it can be decomposed into
|
75 |
+
and integer part (largest integer smaller than given number) and decimals
|
76 |
+
(leftover part always smaller than 1).
|
77 |
+
|
78 |
+
Return the decimal part of the number.
|
79 |
+
>>> truncate_number(3.5)
|
80 |
+
0.5
|
81 |
+
"""
|
82 |
+
return number % 1
|
83 |
+
```
|
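We can also double-check the reported scores with the unbiased estimator sketched earlier: problem 1 has `c = 0` correct candidates out of `n = 20`, problem 2 has `c = 3`, and pass@k is averaged over the two problems:

```python
from math import comb

def pass_at_k(n, c, k):
    # unbiased estimator, as in the earlier sketch
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

for k in [1, 10, 20]:
    score = (pass_at_k(20, 0, k) + pass_at_k(20, 3, k)) / 2
    print(f"pass@{k} = {score:.4f}")
# pass@1 = 0.0750
# pass@10 = 0.4474 (reported above truncated to 0.4473)
# pass@20 = 0.5000
```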