# Tau2

Evaluating Conversational Agents in a Dual-Control Environment

## Overview
This is an inspect-native implementation of the Tau2 benchmark; the original Tau2 code is available in the upstream Tau2 repository.
### Warning

This benchmark uses a very large number of tokens. For example, running airline on gpt-5-nano might use 60,539,475 tokens [I: 57,885,918, CW: 0, CR: 49,725,440, O: 2,653,557, R: 2,254,976]. For testing purposes, use the `--limit 5` flag to limit the number of samples the eval is run on, and the `-T message_limit=10` flag to limit the number of messages per sample.
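For instance, a quick smoke test combining both flags might look like this (assuming the repository setup described under Usage below):

```bash
# Smoke test: 5 samples, at most 10 messages per sample
uv run inspect eval inspect_evals/tau2_airline --model openai/gpt-5-nano --limit 5 -T message_limit=10
```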
## Usage

### Installation
There are two ways of using Inspect Evals: from PyPI as a dependency of your own project, or as a standalone checked-out GitHub repository.

If you are using it from PyPI, install the package and its dependencies via:
```bash
pip install inspect-evals
```

If you are using Inspect Evals in its repository, start by installing the necessary dependencies with:

```bash
uv sync
```

### Running evaluations
Now you can start evaluating models. For simplicity's sake, this section assumes you are using Inspect Evals from the standalone repo. If that's not the case and you are not using uv to manage dependencies in your own project, you can use the same commands with `uv run` dropped.
```bash
uv run inspect eval inspect_evals/tau2_airline --model openai/gpt-5-nano
uv run inspect eval inspect_evals/tau2_retail --model openai/gpt-5-nano
uv run inspect eval inspect_evals/tau2_telecom --model openai/gpt-5-nano
```

To run multiple tasks simultaneously, use `inspect eval-set`:
```bash
uv run inspect eval-set inspect_evals/tau2_airline inspect_evals/tau2_retail inspect_evals/tau2_telecom
```

You can also import tasks as normal Python objects and run them from Python:
```python
from inspect_ai import eval, eval_set
from inspect_evals.tau2 import tau2_airline, tau2_retail, tau2_telecom

eval(tau2_airline)
eval_set([tau2_airline, tau2_retail, tau2_telecom], log_dir='logs-run-42')
```

After running evaluations, you can view their logs using the `inspect view` command:
```bash
uv run inspect view
```

For VS Code, you can also download the Inspect AI extension for viewing logs.
If you don't want to specify the `--model` each time you run an evaluation, create a `.env` configuration file in your working directory that defines the `INSPECT_EVAL_MODEL` environment variable along with your API key. For example:
```bash
INSPECT_EVAL_MODEL=anthropic/claude-opus-4-1-20250805
ANTHROPIC_API_KEY=<anthropic-api-key>
```

## Options
You can control a variety of options from the command line. For example:
```bash
uv run inspect eval inspect_evals/tau2_airline --limit 10
uv run inspect eval inspect_evals/tau2_retail --max-connections 10
uv run inspect eval inspect_evals/tau2_telecom --temperature 0.5
```

See `uv run inspect eval --help` for all available options.
## Parameters

`tau2_airline`, `tau2_retail`, `tau2_telecom`:

- `shuffle` (bool, default: `True`)
- `message_limit` (int | None, default: `None`)
- `tasks_file_override` (str | None, default: `None`)
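Task parameters can be passed on the command line with `-T`. For example (a sketch using the parameters listed above):

```bash
# Disable dataset shuffling and cap each conversation at 50 messages
uv run inspect eval inspect_evals/tau2_airline -T shuffle=false -T message_limit=50
```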
## Dataset

The dataset for Tau2 is taken directly from the Tau2 benchmark repository. It is described in detail in the Tau2 paper and the Tau paper. There are 3 separate domains, each with its own dataset:
- airline
- retail
- telecom
Telecom stands out because it has a significantly larger dataset and allows the user simulator to interact with the environment via tool calls.
## Scoring

Scoring is again discussed in detail in the Tau2 paper and the Tau paper. Each domain has its own scoring function:
- airline: For each task an agent may have to perform certain tool calls and relay certain information to the user. In addition, each task has one or more goals described in natural language; a grading model assesses whether these goals have been met. The task is considered successful if all tool calls are made correctly and all goals are met.
- retail: For each task an agent may have to perform certain tool calls and relay certain information to the user. The task is considered successful if all tool calls are made correctly and all required information is relayed.
- telecom: For each task an agent may have to perform certain tool calls, and relay certain information to the user. Then, the end environment state is compared to the desired end state. The task is considered successful if all tool calls are made correctly and the end state matches the desired end state.
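As a rough sketch of how these criteria combine in each domain (illustrative only; the function and argument names below are hypothetical, not the implementation's):

```python
def task_success(tool_calls_correct: bool, extra_checks: list[bool]) -> bool:
    """Hypothetical sketch: a task passes only if every required tool call
    was made correctly and every domain-specific check passed (graded
    natural-language goals for airline, relayed information for retail,
    end-state match for telecom)."""
    return tool_calls_correct and all(extra_checks)
```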
## Evaluation Results

pass^1 rate. Agent model: GPT-5; user model: gpt-4.1-2025-04-14.
| Domain | Inspect Evals Accuracy[1] | Inspect Evals Stderr | Tau2 Leaderboard[2] |
|---|---|---|---|
| Retail | 0.825 | 0.036 | 0.816 |
| Airline | 0.580 | 0.071 | 0.625 |
| Telecom | 0.939 | 0.023 | 0.958 |
[1] Inspect Evals accuracy is calculated across 1 try for each task, with no message limit (`message_limit=None`).

[2] The leaderboard accuracy is calculated across 4 tries for each task (equivalent to `--epochs 4` in Inspect), with `message_limit=100`.
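To approximate the leaderboard configuration (a sketch; this mirrors only the try count and message limit, not every detail of the leaderboard harness):

```bash
uv run inspect eval inspect_evals/tau2_retail --epochs 4 -T message_limit=100
```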