CVEBench: Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities
Characterises an AI agent's capability to exploit real-world web application vulnerabilities, aiming to provide a realistic evaluation of an agent's security reasoning capability using 40 real-world CVEs.
Overview
CVEBench characterises an AI Agent’s ability to exploit real-world web application vulnerabilities.
This eval supports running CVEBench in Docker and Kubernetes sandboxes.
The Kubernetes sandbox only supports a subset of CVEs. See K8S_SUPPORTED_CVE in cve_bench.py for the supported CVEs.
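If you want to check the supported subset programmatically, a minimal sketch looks like the following (the import path is an assumption based on the package layout; K8S_SUPPORTED_CVE itself lives in cve_bench.py):

# Assumed import path for the enum defined in cve_bench.py
from inspect_evals.cve_bench.cve_bench import K8S_SUPPORTED_CVE

# Each enum member's value is a CVE identifier, e.g. 'CVE-2023-37999'
print([cve.value for cve in K8S_SUPPORTED_CVE])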
Prerequisites
Docker and/or a Kubernetes cluster. See the Inspect K8s sandbox documentation for the prerequisites for Kubernetes cluster configuration.
Usage
Installation
There are two ways of using Inspect Evals: from PyPI as a dependency of your own project, or as a standalone checked-out GitHub repository.
If you are using it from PyPI, install the package and its dependencies via:
pip install inspect-evals[cvebench]

If you are using Inspect Evals in its repository, start by installing the necessary dependencies with:
uv sync --extra cvebench

Running evaluations
Now you can start evaluating models. For simplicity's sake, this section assumes you are using Inspect Evals from the standalone repo. If that's not the case and you are not using uv to manage dependencies in your own project, you can use the same commands with the uv run prefix dropped.
uv run inspect eval inspect_evals/cve_bench --model openai/gpt-5-nano

You can also import tasks as normal Python objects and run them from Python:
from inspect_ai import eval
from inspect_evals.cve_bench import cve_bench
eval(cve_bench)

After running evaluations, you can view their logs using the inspect view command:
uv run inspect view

For VS Code, you can also install the Inspect AI extension for viewing logs.
If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:
INSPECT_EVAL_MODEL=anthropic/claude-opus-4-1-20250805
ANTHROPIC_API_KEY=<anthropic-api-key>

Parameters
cve_bench
challenges (str | list[str] | None): The CVEs that the eval will run on. If None, all CVEs are run. (default: <K8S_SUPPORTED_CVE.CVE_2023_37999: 'CVE-2023-37999'>)
variants (str | list[str]): 'one_day' or 'zero_day'. Variant of the prompt to use for the CVEs. 'one_day' hints at the specific exploit that can be used against the web application. (default: <Variant.ONE_DAY: 'one_day'>)
agent (Solver | None): Agent framework to run against CVE Bench. Must be compatible with the default agent on CVE Bench. (default: None)
max_messages (int): The maximum number of messages allowed before the eval run terminates for a given CVE. (default: 50)
sandbox_type (Sandbox): 'k8s' or 'docker'. The eval sandbox that the CVE is run against. For K8s, the remote or local cluster must be configured in a specific way; see the Inspect K8s sandbox docs for details. (default: <Sandbox.K8S: 'k8s'>)
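The example invocations below set these parameters with -T flags on the command line. As a rough sketch, they can also be passed when constructing the task from Python; this assumes the task accepts plain string values for the enum-typed parameters (challenges, variants, sandbox_type), as it does for -T on the CLI:

from inspect_ai import eval
from inspect_evals.cve_bench import cve_bench

# Hypothetical example: a single CVE in the Docker sandbox with the one-day prompt
eval(
    cve_bench(
        challenges="CVE-2024-3495",  # str or list[str]; None runs all CVEs
        variants="one_day",          # or "zero_day"
        sandbox_type="docker",       # assumed string form, as with -T sandbox_type=docker
        max_messages=100,
    ),
    model="openai/gpt-4o",
)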
Options
You can control a variety of options from the command line. For example:
uv run inspect eval inspect_evals/cve_bench --limit 10
uv run inspect eval inspect_evals/cve_bench --max-connections 10
uv run inspect eval inspect_evals/cve_bench --temperature 0.5

See uv run inspect eval --help for all available options.
Example invocations
Running with the default CVE:
uv run \
inspect eval inspect_evals/cve_bench \
--limit 1 \
--model openai/gpt-4o \
-T sandbox_type=k8s \
-T variants=one_day \
-T max_messages=100

Running with all CVEs (challenges=) using both prompt variants:
uv run \
inspect eval inspect_evals/cve_bench \
--model openai/gpt-4o \
-T sandbox_type=k8s \
-T challenges= \
-T variants=one_day,zero_day \
-T max_messages=100

Running with a specific CVE on Docker:
uv run \
inspect eval inspect_evals/cve_bench \
--model openai/gpt-4o \
-T sandbox_type=docker \
-T challenges=CVE-2024-3495 \
-T variants=one_day \
-T max_messages=100