V*Bench: A Visual QA Benchmark with Detailed High-resolution Images

Multimodal

V*Bench is a visual question answering benchmark that evaluates the ability of MLLMs to find and focus on small details in high-resolution, visually crowded images.

Overview

**V*Bench** is a visual question answering benchmark that evaluates the ability of MLLMs to find and focus on small details in high-resolution, visually crowded images. This is in contrast with common multimodal benchmarks, which focus on large, prominent visual elements.

V*Bench was introduced in "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" (Penghao Wu and Saining Xie, 2023).

This implementation evaluates the performance of MLLMs on the benchmark; it does not implement the V* visual search algorithm introduced in the paper.

Dataset and Tasks

The dataset (craigwu/vstar_bench) consists of 191 multiple-choice questions in two categories:

vstar_bench_attribute_recognition - The Attribute Recognition task has 115 samples and requires the model to recognize a certain type of attribute (e.g. color, material) of an object. Each question has four multiple-choice options.

vstar_bench_spatial_relationship_reasoning - The Spatial Relationship Reasoning task has 76 samples and asks the model to determine the relative spatial relationship between two objects. Each question has two multiple-choice options.
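To browse the raw data, you can load it with the Hugging Face datasets library. The sketch below is a minimal example, assuming the dataset exposes a test split; the exact field names are best checked by printing a record, as shown:

from datasets import load_dataset

# load V*Bench from the Hugging Face Hub (the split name "test" is an assumption)
ds = load_dataset("craigwu/vstar_bench", split="test")

print(len(ds))       # expected: 191 questions
print(ds[0].keys())  # inspect the available fields
print(ds[0])         # one multiple-choice question record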

Usage

Installation

There are two ways of using Inspect Evals: from PyPI, as a dependency of your own project, or from a standalone checkout of the GitHub repository.

If you are using it from PyPI, install the package and its dependencies via:

pip install inspect-evals

If you are working from a checkout of the repository, start by installing the necessary dependencies with:

uv sync

Running evaluations

Now you can start evaluating models. For simplicity's sake, this section assumes you are using Inspect Evals from the standalone repository. If that's not the case and you are not using uv to manage dependencies in your own project, use the same commands with the uv run prefix dropped.

uv run inspect eval inspect_evals/vstar_bench_attribute_recognition --model openai/gpt-5-nano
uv run inspect eval inspect_evals/vstar_bench_spatial_relationship_reasoning --model openai/gpt-5-nano

To run multiple tasks simultaneously, use inspect eval-set:

uv run inspect eval-set inspect_evals/vstar_bench_attribute_recognition inspect_evals/vstar_bench_spatial_relationship_reasoning

You can also import tasks as normal Python objects and run them from Python:

from inspect_ai import eval, eval_set
from inspect_evals.vstar_bench import vstar_bench_attribute_recognition, vstar_bench_spatial_relationship_reasoning

# run a single task
eval(vstar_bench_attribute_recognition)

# run both tasks, writing logs to a shared directory
eval_set([vstar_bench_attribute_recognition, vstar_bench_spatial_relationship_reasoning], log_dir='logs-run-42')
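Unlike eval, eval_set requires an explicit log_dir and is designed to be re-run: invoking the same call again skips tasks that already completed successfully and retries only the unfinished ones.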

After running evaluations, you can view their logs using the inspect view command:

uv run inspect view

For VS Code, you can also install the Inspect AI extension for viewing logs.

If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:

INSPECT_EVAL_MODEL=anthropic/claude-opus-4-1-20250805
ANTHROPIC_API_KEY=<anthropic-api-key>
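Inspect reads the .env file automatically, so subsequent inspect eval invocations will pick up the model and credentials without extra flags.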

Options

You can control a variety of options from the command line. For example:

uv run inspect eval inspect_evals/vstar_bench_attribute_recognition --limit 10
uv run inspect eval inspect_evals/vstar_bench_spatial_relationship_reasoning --max-connections 10
uv run inspect eval inspect_evals/vstar_bench_attribute_recognition --temperature 0.5

See uv run inspect eval --help for all available options.
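The same options are also available as keyword arguments when running tasks from Python. A minimal sketch, mirroring the CLI flags above:

from inspect_ai import eval
from inspect_evals.vstar_bench import vstar_bench_attribute_recognition

# evaluate only the first 10 samples, with at most 10 concurrent
# model connections and a fixed sampling temperature
eval(
    vstar_bench_attribute_recognition,
    model='openai/gpt-5-nano',
    limit=10,
    max_connections=10,
    temperature=0.5,
)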