GAIA: A Benchmark for General AI Assistants
GAIA proposes real-world questions that require a set of fundamental abilities such as reasoning, multi-modality handling, web browsing, and general tool-use proficiency. GAIA questions are conceptually simple for humans yet challenging for most advanced AIs.
Overview
This is an Inspect AI implementation of the GAIA (General AI Assistants) benchmark, consisting of 450 questions testing tool use on realistic assistant tasks (mostly web browsing).
Usage
Installation
There are two ways of using Inspect Evals: from PyPI as a dependency of your own project, or as a standalone checked-out GitHub repository.
If you are using it from pypi, install the package and its dependencies via:
pip install inspect-evals
If you are using Inspect Evals from its repository, start by installing the necessary dependencies with:
uv sync
Running evaluations
Now you can start evaluating models. For simplicity's sake, this section assumes you are using Inspect Evals from the standalone repository. If that's not the case and you are not using uv to manage dependencies in your own project, you can use the same commands with the uv run prefix dropped.
uv run inspect eval inspect_evals/gaia --model openai/gpt-5-nano
uv run inspect eval inspect_evals/gaia_level1 --model openai/gpt-5-nano
uv run inspect eval inspect_evals/gaia_level2 --model openai/gpt-5-nano
uv run inspect eval inspect_evals/gaia_level3 --model openai/gpt-5-nano
To run multiple tasks simultaneously, use inspect eval-set:
uv run inspect eval-set inspect_evals/gaia inspect_evals/gaia_level1 inspect_evals/gaia_level2 inspect_evals/gaia_level3
You can also import tasks as normal Python objects and run them from Python:
from inspect_ai import eval, eval_set
from inspect_evals.gaia import gaia, gaia_level1, gaia_level2, gaia_level3
eval(gaia)
eval_set([gaia, gaia_level1, gaia_level2, gaia_level3], log_dir='logs-run-42')
After running evaluations, you can view their logs using the inspect view command:
uv run inspect view
For VS Code, you can also install the Inspect AI extension for viewing logs.
If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:
INSPECT_EVAL_MODEL=anthropic/claude-opus-4-1-20250805
ANTHROPIC_API_KEY=<anthropic-api-key>Options
You can control a variety of options from the command line. For example:
uv run inspect eval inspect_evals/gaia --limit 10
uv run inspect eval inspect_evals/gaia_level1 --max-connections 10
uv run inspect eval inspect_evals/gaia_level2 --temperature 0.5
See uv run inspect eval --help for all available options.
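The same options can also be applied when running from Python. A minimal sketch, assuming eval() accepts limit, temperature, and max_connections as keyword arguments mirroring the CLI flags above:
from inspect_ai import eval
from inspect_evals.gaia import gaia

# Run only the first 10 samples, with generation temperature and connection
# limits mirroring the CLI flags shown above.
eval(gaia, model="openai/gpt-5-nano", limit=10, temperature=0.5, max_connections=10)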
Parameters
gaia
solver (Solver | None): Provide a custom solver (if not specified, uses Inspect's basic_agent with bash, python, and web browsing tools). (default: None)
input_prompt (str | None): Per-sample question prompt. Should include a {file} variable (for identifying any files relevant to the task) and a {question} variable for rendering the question. (default: None)
max_attempts (int): Maximum number of submission attempts. Only applies when using the default solver. (default: 1)
max_messages (int): Maximum number of messages before giving up. Only applies when using the default solver. (default: 100)
subset (Literal['2023_all', '2023_level1', '2023_level2', '2023_level3']): Which GAIA subset to evaluate. (default: '2023_all')
split (Literal['test', 'validation']): Which split to evaluate ("validation" or "test"). (default: 'validation')
instance_ids (str | list[str] | None): Specific question instances to evaluate. (default: None)
sandbox (str | tuple[str, str] | SandboxEnvironmentSpec): Sandbox environment to use for the task. (default: ('docker', 'src/inspect_evals/gaia/compose.yaml'))
gaia_level1
kwargs (Any): Additional keyword arguments passed through to the gaia task.
gaia_level2
kwargs (Any): Additional keyword arguments passed through to the gaia task.
gaia_level3
kwargs (Any): Additional keyword arguments passed through to the gaia task.
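For example, a minimal sketch parametrizing the gaia task directly from Python, using the parameters documented above:
from inspect_ai import eval
from inspect_evals.gaia import gaia

# Evaluate only level 1 questions from the validation split, allowing up to
# three submission attempts with the default solver.
task = gaia(subset="2023_level1", split="validation", max_attempts=3)
eval(task, model="openai/gpt-5-nano")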
[!NOTE] The GAIA task uses tool calling to enable the model to execute web_browser and bash commands. Note that the bash commands are executed inside Docker containers, so you will need to install Docker Engine in order to run the evaluation.
Upon running the GAIA task, it will attempt to download the dataset from the Hugging Face Hub. For this to work, you will need to gain access to the dataset (by filling out a form on the GAIA Hugging Face repository) and create an access token. You will need to define the HF_TOKEN environment variable to access the dataset:
HF_TOKEN=<hf-token>
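A minimal sketch of one way to provide the token programmatically before running the task (the placeholder value is illustrative, not a real credential):
import os

# Set the Hugging Face token before the GAIA task downloads the dataset.
# Replace the placeholder with your own token, or define HF_TOKEN in a .env file.
os.environ["HF_TOKEN"] = "<hf-token>"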
Dataset
The GAIA dataset contains 450 questions testing tool use on realistic assistant tasks (mostly web browsing). For example:
A paper about AI regulation that was originally submitted to arXiv.org in June 2022 shows a figure with three axes, where each axis has a label word at both ends. Which of these words is used to describe a type of society in a Physics and Society article submitted to arXiv.org on August 11, 2016?
Scoring
Each submitted answer is scored against the reference answer, and a simple mean (accuracy) is calculated over the datapoints.
Development
If you want to develop a solver for the GAIA task, import the gaia task from the gaia module and pass your own solver to it. For example:
from inspect_ai import eval
from inspect_ai.solver import use_tools, generate, system_message, basic_agent
from inspect_ai.tool import bash, web_search
from inspect_evals.gaia import gaia
agent = basic_agent(tools=[bash()])
task = gaia(solver=agent)
eval(task, model="openai/gpt-4o")
See the documentation on the Inspect Agents API for additional information on developing agents.
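As a further sketch, assuming the web_browser tool set from inspect_ai.tool (which returns a list of browser tools), you could give a custom agent browsing capability alongside bash and a custom system prompt:
from inspect_ai import eval
from inspect_ai.solver import basic_agent, system_message
from inspect_ai.tool import bash, web_browser
from inspect_evals.gaia import gaia

# basic_agent accepts an init solver (here a system message) and a list of tools;
# web_browser() returns the browser tool set used by the default GAIA solver.
agent = basic_agent(
    init=system_message("Solve the task step by step, citing the sources you use."),
    tools=[bash()] + web_browser(),
)
eval(gaia(solver=agent), model="openai/gpt-4o")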
Test Split
You can evaluate against the “test” split with:
agent = basic_agent(tools=[bash()])
task = gaia(solver=agent, split="test")
eval(task, model="openai/gpt-4o")
Note that the GAIA "test" split does not come with any solutions, so it is not scored (rather, solutions are uploaded to the online leaderboard).