DocVQA: A Dataset for VQA on Document Images



Overview

DocVQA is a Visual Question Answering benchmark consisting of 50,000 questions posed over 12,000+ document images. This implementation evaluates models on the “validation” split and scores their answers.

Usage

First, install the inspect_ai and inspect_evals Python packages with:

pip install inspect_ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals

Then, evaluate against one or more models with:

inspect eval inspect_evals/docvqa --model openai/gpt-4o
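Evaluations can also be run from Python via the inspect_ai API. The sketch below is a minimal example; the import path inspect_evals.docvqa for the task function is an assumption based on the package layout, so check the package source for the exact entry point.

```python
# Minimal sketch: run DocVQA from Python rather than the CLI.
# The import path for the task function is an assumption.
from inspect_ai import eval
from inspect_evals.docvqa import docvqa

# Evaluate a small sample against one model
eval(docvqa(), model="openai/gpt-4o", limit=10)
```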

After running evaluations, you can view their logs using the inspect view command:

inspect view

If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:

INSPECT_EVAL_MODEL=anthropic/claude-3-5-sonnet-20240620
ANTHROPIC_API_KEY=<anthropic-api-key>

Options

You can control a variety of options from the command line. For example:

inspect eval inspect_evals/docvqa --limit 10
inspect eval inspect_evals/docvqa --max-connections 10
inspect eval inspect_evals/docvqa --temperature 0.5

See inspect eval --help for all available options.

Dataset

The DocVQA dataset contains a “validation” split and a “test” split. To prevent leakage into training data, the authors of DocVQA have chosen to hold back the answers to the “test” split. Scoring on the “test” split requires coordinating with the DocVQA authors.
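If you want to browse the raw data, the validation split can be loaded with the Hugging Face datasets library. This is a hedged sketch: the dataset path lmms-lab/DocVQA, its config name, and the record field names are assumptions, not taken from this implementation.

```python
# Sketch: browse the DocVQA validation split.
# Dataset path, config name, and field names are assumptions.
from datasets import load_dataset

docvqa = load_dataset("lmms-lab/DocVQA", "DocVQA", split="validation")

record = docvqa[0]
print(record["question"])  # the question posed about the document image
print(record["answers"])   # list of acceptable ground-truth answers
```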

Each split contains several questions about each image. Here is an example image:

[Example image: a “Diabetes in Australia” infographic]

And associated example questions:

* How many females are affected by diabetes?
* What percentage of cases can be prevented?
* What could lead to blindness or stroke?

The model is tasked with answering each question by referring to the image. The prompts are based on OpenAI’s simple-evals.

Scoring

DocVQA computes the Average Normalized Levenshtein Similarity (ANLS):

$$\mathrm{ANLS} = \frac{1}{N} \sum_{i=1}^{N} \max_{j} \; s\left(a_{ij}, o_{q_i}\right)$$

$$s\left(a_{ij}, o_{q_i}\right) = \begin{cases} 1 - \mathrm{NL}\left(a_{ij}, o_{q_i}\right) & \text{if } \mathrm{NL}\left(a_{ij}, o_{q_i}\right) < \tau \\ 0 & \text{if } \mathrm{NL}\left(a_{ij}, o_{q_i}\right) \ge \tau \end{cases}$$

where N is the number of questions, a_ij is the j-th ground-truth answer for question i, o_{q_i} is the model’s answer to question i, NL is the Levenshtein distance normalized by the length of the longer of the two strings, and the threshold τ is 0.5.
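As a concrete illustration, here is a minimal sketch of the per-question score, assuming the Levenshtein package (pip install Levenshtein) for the edit distance; the actual scorer in this repository may normalize answers differently.

```python
# Sketch of the ANLS per-question score defined above.
# Assumes the Levenshtein package for the edit distance.
from Levenshtein import distance


def anls_score(prediction: str, answers: list[str], tau: float = 0.5) -> float:
    """Best normalized Levenshtein similarity over the reference answers."""
    best = 0.0
    for answer in answers:
        pred, ref = prediction.strip().lower(), answer.strip().lower()
        if max(len(pred), len(ref)) == 0:
            nl = 0.0  # both strings empty: treat as a perfect match
        else:
            nl = distance(pred, ref) / max(len(pred), len(ref))
        # Similarities with NL >= tau are clipped to zero
        best = max(best, 1.0 - nl if nl < tau else 0.0)
    return best


# The dataset-level ANLS is the mean per-question score:
# anls = sum(anls_score(p, refs) for p, refs in results) / len(results)
```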