HumanEval: Evaluating Large Language Models Trained on Code
Evaluating correctness for synthesizing Python programs from docstrings. Demonstrates custom scorers and sandboxing untrusted model code.
Overview
HumanEval is a benchmark for evaluating a model's ability to synthesize programs from docstrings. This implementation is based on the official reference implementation.
Usage
First, install the inspect_ai and inspect_evals Python packages with:
pip install inspect_ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals
Then, evaluate against one or more models with:
inspect eval inspect_evals/humaneval --model openai/gpt-4o
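You can also run the evaluation from Python using inspect_ai's eval() function. The following is a minimal sketch; it assumes the task is importable as inspect_evals.humaneval.humaneval, which may differ from your installed version:

# Minimal sketch of running the eval from Python rather than the CLI.
# Assumes the task is exported as inspect_evals.humaneval.humaneval.
from inspect_ai import eval
from inspect_evals.humaneval import humaneval

logs = eval(humaneval(), model="openai/gpt-4o")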
After running evaluations, you can view their logs using the inspect view command:
inspect view
If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:
INSPECT_EVAL_MODEL=anthropic/claude-3-5-sonnet-20240620
ANTHROPIC_API_KEY=<anthropic-api-key>
Options
You can control a variety of options from the command line. For example:
inspect eval inspect_evals/humaneval --limit 10
inspect eval inspect_evals/humaneval --max-connections 10
inspect eval inspect_evals/humaneval --temperature 0.5
See inspect eval --help for all available options.
Dataset
Here is an example prompt from the dataset:
from typing import List


def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
The model is then tasked with filling in the missing code so that the function behaves as described in its docstring.
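For illustration, a completion that would satisfy this problem's unit tests might look like the following (a sketch of a plausible model output, not text taken from the dataset):

from typing import List


def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """Check if in given list of numbers, are any two numbers closer to each other than
    given threshold."""
    # Compare every pair of numbers and flag any pair closer than the threshold.
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False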
Scoring
Once a generation is completed, the entire function, together with its unit tests, is run in a subprocess; the generation is considered a success if all unit tests pass.
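Conceptually, this check resembles the sketch below, which combines the completed function with its test code and executes them in a separate Python process. This is a simplified illustration only, not the exact scorer or sandbox used by this implementation:

import subprocess
import tempfile


def passes_tests(completed_function: str, test_code: str, timeout: float = 30.0) -> bool:
    # Combine the model-completed function with the benchmark's unit tests
    # and run them in a separate Python process.
    program = completed_function + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(["python", path], capture_output=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        return False
    # Success means every assertion ran without raising an error.
    return result.returncode == 0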
The benchmark uses the \(\text{pass}@k\) metric to measure functional correctness. In brief, this is the per-problem probability that at least one of \(k\) sampled generations is correct. It is defined as the following expectation:
\[ \text{pass@}k := \underset{\text{Problems}}{\mathbb{E}}\left[1-\frac{\binom{n-c}{k}}{\binom{n}{k}}\right]\]
where we sample \(n \geq k\) generations per problem to reduce variance and \(c\) is the number of correct generations. Note that the default in this benchmark implementation is \(n = 5\), and we evaluate \(\text{pass}@k\) for \(k \in \{1, 2, 5\}\).
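The per-problem term can be computed with the standard unbiased estimator from the HumanEval paper; a minimal numpy sketch is shown below:

import numpy as np


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k for one problem, given n samples of which c are correct."""
    if n - c < k:
        # Every size-k subset contains at least one correct sample.
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed as a numerically stable product.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))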