Tests AI models on math problems that involve interpreting visual elements like diagrams and charts, requiring detailed visual comprehension and logical reasoning.
MathVista evaluates model performance in answering mathematics problems that have a visual component. Each problem has an associated image that is required to solve it. The dataset is made up of 6,141 questions drawn from 28 existing multimodal datasets as well as 3 new datasets created for MathVista, each containing different styles of questions. Questions are either multiple-choice or free-form, where free-form questions ask for a response in the form of an integer, floating-point number, or list.
Usage
Installation
There are two ways of using Inspect Evals: from PyPI as a dependency of your own project, or as a standalone checked-out GitHub repository.
If you are using it from PyPI, install the package and its dependencies via:
pip install inspect-evals
If you are using Inspect Evals in its repository, start by installing the necessary dependencies with:
uv sync
Running evaluations
Now you can start evaluating models. For simplicity’s sake, this section assumes you are using Inspect Evals from the standalone repository. If that’s not the case and you are not using uv to manage dependencies in your own project, you can use the same commands with the uv run prefix dropped.
uv run inspect eval inspect_evals/mathvista --model openai/gpt-5-nano
You can also import tasks as normal Python objects and run them from Python:
from inspect_ai import eval
from inspect_evals.mathvista import mathvista

eval(mathvista)
After running evaluations, you can view their logs using the inspect view command:
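uv run inspect view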
If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:
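INSPECT_EVAL_MODEL=openai/gpt-5-nano
OPENAI_API_KEY=<openai-api-key>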