The Art of Saying No: Contextual Noncompliance in Language Models
A dataset of 1001 samples for testing the contextual noncompliance capabilities of language models, plus a contrast set of 379 samples.
Overview
Usage
Installation
There are two ways of using Inspect Evals: from PyPI as a dependency of your own project, or as a standalone checked-out GitHub repository.
If you are using it from PyPI, install the package and its dependencies via:
```bash
pip install inspect-evals
```
If you are using Inspect Evals in its repository, start by installing the necessary dependencies with:
```bash
uv sync
```
Running evaluations
Now you can start evaluating models. For simplicity's sake, this section assumes you are using Inspect Evals from the standalone repository. If that's not the case and you are not using uv to manage dependencies in your own project, you can use the same commands with the `uv run` prefix dropped.
```bash
uv run inspect eval inspect_evals/coconot --model openai/gpt-5-nano
```
You can also import tasks as normal Python objects and run them from Python:
```python
from inspect_ai import eval
from inspect_evals.coconot import coconot

eval(coconot)
```
After running evaluations, you can view their logs using the `inspect view` command:
```bash
uv run inspect view
```
For VS Code, you can also install the Inspect AI extension for viewing logs.
If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:
```bash
INSPECT_EVAL_MODEL=anthropic/claude-opus-4-1-20250805
ANTHROPIC_API_KEY=<anthropic-api-key>
```
Options
You can control a variety of options from the command line. For example:
```bash
uv run inspect eval inspect_evals/coconot --limit 10
uv run inspect eval inspect_evals/coconot --max-connections 10
uv run inspect eval inspect_evals/coconot --temperature 0.5
```
See `uv run inspect eval --help` for all available options.
Parameters
coconot
- `subset` (Literal['original', 'contrast']): The sample subset to use (default: `'original'`)
- `use_system_prompt` (bool): Whether to use a system prompt in the solver (default: `False`)
- `grader` (str | Model | None): The grader model to use when grading samples (default: `'openai/gpt-3.5-turbo'`)
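Task parameters can be set on the command line with `-T name=value`, or passed directly when constructing the task in Python. A minimal sketch, assuming the parameter names listed above (the grader and model choices here are illustrative):

```python
from inspect_ai import eval
from inspect_evals.coconot import coconot

# Evaluate the contrast subset with a system prompt, grading with GPT-4o
# (parameter values are illustrative)
eval(
    coconot(subset="contrast", use_system_prompt=True, grader="openai/gpt-4o"),
    model="openai/gpt-4",
)
```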
Evaluation Report
Implementation Details
There are no known deviations from the reference implementation, limitations, or edge cases.
Results
The results below show some variation from those reported in the paper. The paper reported little difference between using gpt-3.5-turbo and gpt-4 as scorers, but we see significant differences on some subsets. The differences are likely due to different model versions (the paper did not report model versions). Inspecting samples where the scorers disagree, we observe numerous borderline cases where disagreement between models is not surprising.
Despite these differences, the results overall appear to show similar trends to those reported in the paper, and log inspection did not reveal any unexpected behavior.
Results are compliance rates (%): lower is better on the original dataset (first five columns), higher is better on the contrast set (last column).
GPT-4 no system prompt
| scorer | Incomplete | Unsupported | Indeterminate | Safety | Humanizing | Contrast |
|---|---|---|---|---|---|---|
| paper | 29.8 | 11.5 | 14.1 | 11.4 | 6.1 | 97.4 |
| gpt-3.5-turbo | 11.1 | 10.8 | 2.1 | 10.4 | 6.1 | 96.3 |
| gpt-4 | 15.6 | 12.1 | 9.2 | 9.1 | 2.4 | 93.9 |
| gpt-4o | 27.1 | 15.9 | 14.1 | 12.9 | 4.9 | 96.3 |
GPT-4 system prompt
| scorer | Incomplete | Unsupported | Indeterminate | Safety | Humanizing | Contrast |
|---|---|---|---|---|---|---|
| paper | 19.6 | 3.2 | 0.0 | 0.3 | 2.4 | 94.7 |
| gpt-3.5-turbo | 0.4 | 2.5 | 0.0 | 1.3 | 1.2 | 88.4 |
| gpt-4 | 7.1 | 1.9 | 0.0 | 1.0 | 1.2 | 95.5 |
| gpt-4o | 10.7 | 3.8 | 1.4 | 1.5 | 2.4 | 92.3 |
GPT-4o no system prompt
| scorer | Incomplete | Unsupported | Indeterminate | Safety | Humanizing | Contrast |
|---|---|---|---|---|---|---|
| paper | 8.9 | 19.1 | 4.2 | 12.7 | 23.2 | 98.4 |
| gpt-3.5-turbo | 0.9 | 14.7 | 0.0 | 11.4 | 11.0 | 82.9 |
| gpt-4 | 16.4 | 13.4 | 4.2 | 9.9 | 7.3 | 95.8 |
| gpt-4o | 24.4 | 20.4 | 4.2 | 15.9 | 9.8 | 94.7 |
GPT-4o system prompt
| scorer | Incomplete | Unsupported | Indeterminate | Safety | Humanizing | Contrast |
|---|---|---|---|---|---|---|
| paper | 30.2 | 22.9 | 7.0 | 5.3 | 11.0 | 98.4 |
| gpt-3.5-turbo | 1.8 | 10.2 | 0.0 | 3.5 | 2.4 | 81.8 |
| gpt-4 | 15.6 | 8.3 | 2.1 | 2.8 | 0.0 | 94.7 |
| gpt-4o | 21.3 | 21.0 | 4.2 | 6.1 | 1.2 | 92.6 |
Reproducibility Information
Tested with:
| Model | Model ID |
|---|---|
| GPT-4 | gpt-4-0613 |
| GPT-4o | gpt-4o-2024-08-06 |
| GPT-3.5-turbo | gpt-3.5-turbo-0125 |
- Evaluation run on full data set (1001 samples in original, 379 in contrast)
- Results produced September 2025
- See above for instructions to run the evaluation. To test multiple configurations, update `coconot_eval_set.py` and run `python run_coconot_eval.py` (a sketch of this kind of multi-configuration run follows this list).
- Single epoch used for results (5 epochs on one configuration produced results consistent with a single epoch)
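For reference, the sketch below shows one way a multi-configuration run can be expressed with Inspect's `eval_set`; the task parameter combinations, model, and log directory are illustrative and do not reproduce the repository's `coconot_eval_set.py`/`run_coconot_eval.py` exactly.

```python
# Minimal sketch: evaluate several CoCoNot configurations in one pass using
# Inspect's eval_set. Parameter combinations, model, and log_dir are illustrative.
from inspect_ai import eval_set
from inspect_evals.coconot import coconot

tasks = [
    coconot(subset="original", use_system_prompt=False),
    coconot(subset="original", use_system_prompt=True),
    coconot(subset="contrast", use_system_prompt=False),
]

# eval_set runs (and retries) the tasks and writes all logs to a shared directory
success, logs = eval_set(
    tasks,
    log_dir="logs/coconot",
    model="openai/gpt-4",
)
```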