loubnabnl (HF staff) committed
Commit 114a369
1 Parent(s): 6998fd8

add humaneval example

Files changed (1):
  evaluation/intro.txt +3 -3
evaluation/intro.txt CHANGED
@@ -1,7 +1,7 @@
A popular evaluation framework for code generation models is the [pass@k](https://huggingface.co/metrics/code_eval) metric on the [HumanEval](https://huggingface.co/datasets/openai_humaneval) dataset, which was introduced in the [Codex paper](https://arxiv.org/pdf/2107.03374v2.pdf). The dataset includes 164 handwritten programming problems. In the pass@k metric, k code samples are generated per problem; a problem is considered solved if any sample passes the unit tests, and the total fraction of problems solved is reported. Below are some examples for the selected models.
For most models, we sample 200 candidate program completions and compute pass@1, pass@10, and pass@100 using an unbiased sampling estimator. The table below shows the HumanEval scores of CodeParrot, InCoder, the GPT-Neo models, GPT-J, and Codex (not open-source).

- <div align="center">
+ <center>

| Model | pass@1 | pass@10 | pass@100 |
|-------|--------|---------|----------|
@@ -16,12 +16,12 @@ For most models, we sample 200 candidate program completions and compute pass@1
|GPT-neo (1.5B)| 4.79% | 7.47% | 16.30% |
|GPT-J (6B)| 11.62% | 15.74% | 27.74% |

- </div>
+ </center>


To better understand how the pass@k metric works, we will illustrate it with some examples. We select four tasks from the HumanEval dataset and see how the models perform and which code completions pass the unit tests. We will use CodeParrot 🦜. Below are the selected problems from HumanEval:

- ```
+ ```python

from typing import List
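
For reference, the unbiased sampling estimator mentioned in the diffed text is the one from the Codex paper: with n generated samples per problem, of which c pass the unit tests, pass@k is estimated as 1 - C(n-c, k)/C(n, k). Below is a minimal sketch of that computation; the helper name `pass_at_k` and the example numbers are illustrative, not taken from this repo.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the Codex paper.

    n: total completions sampled per problem
    c: completions that pass the unit tests
    k: evaluation budget
    """
    if n - c < k:
        # Every size-k subset of the n samples contains a passing one.
        return 1.0
    # 1 - C(n - c, k) / C(n, k), evaluated as a numerically stable product.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Illustrative numbers only: 200 samples, 23 of which pass.
print(pass_at_k(200, 23, 1))    # 0.115, i.e. 23/200
print(pass_at_k(200, 23, 10))   # higher, since any of 10 samples may pass
```

The linked code_eval metric executes the generated programs against the HumanEval unit tests and then applies this estimator, averaging pass@k over all 164 problems.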