ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation
Evaluates LLMs on class-level code generation with 100 tasks constructed over approximately 500 person-hours. The accompanying study shows that LLMs perform worse on class-level code generation than on method-level tasks.
Overview
ClassEval is a benchmark for evaluating large language models (LLMs) on class-level code generation tasks. It contains 100 class-level Python code generation tasks, constructed over approximately 500 person-hours.
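Each task asks the model to implement an entire class from a skeleton containing the class signature, docstrings, and method stubs. The snippet below is a hypothetical illustration of this format, not an actual ClassEval task:

```python
class VendingMachine:
    """A simple vending machine that tracks inventory and balance."""

    def __init__(self):
        # Hypothetical skeleton: the model must fill in state initialization
        # and every method body so that the accompanying unit tests pass.
        pass

    def add_item(self, name: str, price: float, quantity: int):
        """Add `quantity` units of `name` at the given price."""
        pass

    def insert_coin(self, amount: float) -> float:
        """Add `amount` to the current balance and return the new balance."""
        pass

    def purchase(self, name: str) -> bool:
        """Deduct the item's price from the balance and decrement inventory,
        returning True on success and False otherwise."""
        pass
```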
Usage
First, install the inspect_ai and inspect_evals Python packages with:
pip install inspect_ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals
Then, evaluate against one or more models with:
inspect eval inspect_evals/class_eval --model openai/gpt-4o
After running evaluations, you can view their logs using the inspect view command:
inspect view
If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:
INSPECT_EVAL_MODEL=anthropic/claude-3-5-sonnet-20240620
ANTHROPIC_API_KEY=<anthropic-api-key>
[!NOTE] The generated code is executed inside a Docker container, so you will need to install Docker Engine in order to run the evaluation.
Options
You can control a variety of options from the command line. For example:
inspect eval inspect_evals/class_eval --limit 10
inspect eval inspect_evals/class_eval --max-connections 10
inspect eval inspect_evals/class_eval --temperature 0.5
See inspect eval --help for all available options.
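The same evaluation can also be driven from Python via inspect_ai's eval() function. The sketch below mirrors the CLI flags above; treat the task reference and keyword arguments as assumptions and check inspect eval --help and the Inspect documentation for the authoritative options:

```python
from inspect_ai import eval

# Run the first 10 samples against a single model; the keyword arguments
# mirror the CLI flags (--limit, --temperature, --max-connections).
logs = eval(
    "inspect_evals/class_eval",
    model="openai/gpt-4o",
    limit=10,
    temperature=0.5,
    max_connections=10,
)
```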
Dataset
The benchmark uses the FudanSELab/ClassEval dataset of 100 class-level Python code generation tasks, each paired with test cases. See https://huggingface.co/datasets/FudanSELab/ClassEval for more information.
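If you want to inspect the raw tasks, the dataset can be loaded directly from the Hugging Face Hub with the datasets library. The split name and record fields below are assumptions; check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Load the ClassEval dataset from the Hugging Face Hub
# (split name is an assumption; see the dataset card).
dataset = load_dataset("FudanSELab/ClassEval", split="test")

print(len(dataset))       # number of tasks (100)
print(dataset[0].keys())  # inspect the fields of a single task record
```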
Scoring
For each sample, the test cases from the dataset are appended to the LLM-generated code, and the combined code is executed to check whether it passes all the test cases. If all test cases pass, the sample is considered correct and scores 1; otherwise it scores 0.
The final score for the model is the average score across all samples.
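The following is a simplified sketch of that scoring logic, not the actual inspect_evals scorer. It assumes the generated class and its unit tests can be concatenated into a single script and run with Python's unittest runner:

```python
import subprocess
import tempfile


def score_sample(generated_code: str, test_code: str, timeout: int = 30) -> int:
    """Return 1 if the generated class passes all test cases, else 0."""
    # Concatenate the model's class implementation with the dataset's tests
    # and add a unittest entry point so the script is self-contained.
    script = f"{generated_code}\n\n{test_code}\n\nimport unittest\nunittest.main()\n"
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        result = subprocess.run(
            ["python", path], capture_output=True, timeout=timeout
        )
    except subprocess.TimeoutExpired:
        return 0
    # unittest exits with status 0 only when every test passes.
    return 1 if result.returncode == 0 else 0


def final_score(sample_scores: list[int]) -> float:
    """Average the per-sample scores to get the model's overall score."""
    return sum(sample_scores) / len(sample_scores)
```

Note that in the actual evaluation the code runs inside the Docker sandbox mentioned above rather than as a local subprocess.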