MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark
An enhanced dataset designed to extend the mostly knowledge-driven MMLU benchmark by integrating more challenging, reasoning-focused questions and expanding the choice set from four to ten options.
Overview
MMLU-Pro is a more robust and challenging massive multi-task understanding dataset tailored to more rigorously benchmark large language models’ capabilities. This dataset contains 12K complex questions across 14 disciplines (including ‘Other’).
There are three major differences compared to the original MMLU:
- MMLU-Pro increases the number of options from 4 to 10, making the evaluation more realistic and challenging.
- The dataset increases problem difficulty and integrates more reasoning-focused problems.
- By increasing the number of distractors, the benchmark significantly reduces the probability of a correct guess by chance, boosting its robustness.
Usage
First, install the inspect_ai and inspect_evals Python packages with:
pip install inspect_ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals
Then, evaluate against one or more models with:
inspect eval inspect_evals/mmlu_pro --model openai/gpt-4o
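The same evaluation can also be run from Python. Below is a minimal sketch, assuming the task function is importable as inspect_evals.mmlu_pro (matching the registry name used above); adjust the import path if the package layout differs:

```python
# Hedged sketch: Python equivalent of the CLI command above.
# The import path for the task is an assumption based on the registry name.
from inspect_ai import eval
from inspect_evals.mmlu_pro import mmlu_pro

eval(mmlu_pro(), model="openai/gpt-4o")
```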
After running evaluations, you can view their logs using the inspect view command:
inspect view
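Logs can also be read programmatically with inspect_ai's log API. A short sketch follows; the log file name is a placeholder, since inspect eval writes logs to the ./logs directory by default:

```python
# Hedged sketch: read an eval log and print its status and scores.
# The file name below is hypothetical; substitute an actual log from ./logs.
from inspect_ai.log import read_eval_log

log = read_eval_log("./logs/example-log.eval")  # hypothetical path
print(log.status)
if log.results:
    print(log.results.scores)
```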
If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:
INSPECT_EVAL_MODEL=anthropic/claude-3-5-sonnet-20240620
ANTHROPIC_API_KEY=<anthropic-api-key>
Options
You can control a variety of options from the command line. For example:
inspect eval inspect_evals/mmlu_pro --limit 10
inspect eval inspect_evals/mmlu_pro --max-connections 10
inspect eval inspect_evals/mmlu_pro --temperature 0.5
See inspect eval --help for all available options.
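If you drive evaluations from Python instead, these options map onto keyword arguments of eval(). A sketch, reusing the assumed import path from the earlier example:

```python
# Hedged sketch: the CLI flags above passed as eval() keyword arguments.
from inspect_ai import eval
from inspect_evals.mmlu_pro import mmlu_pro

eval(
    mmlu_pro(),
    model="openai/gpt-4o",
    limit=10,            # --limit 10
    max_connections=10,  # --max-connections 10
    temperature=0.5,     # --temperature 0.5
)
```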
Dataset
Here is an example from the dataset:
Question: Approximately how far away is the Andromeda Galaxy?
Options: A) 5 million light years
B) 2.5 million light years
C) 2.1 million light years
D) 1.9 million light years
E) 3.2 million light years
F) 4 million light years
G) 1.7 million light years
H) 3.5 million light years
I) 1.2 million light years
J) 2.8 million light years
The model is tasked with answering the question by choosing the appropriate option.
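Each dataset record is converted into an inspect_ai Sample for evaluation. The sketch below assumes the field names of the TIGER-Lab/MMLU-Pro dataset on Hugging Face (question, options, answer, category), which may differ from the repository's exact mapping:

```python
# Hedged sketch of a record-to-sample conversion; field names are
# assumptions based on the TIGER-Lab/MMLU-Pro Hugging Face dataset.
from inspect_ai.dataset import Sample

def record_to_sample(record):
    return Sample(
        input=record["question"],
        choices=record["options"],           # up to ten answer options
        target=record["answer"],             # letter of the correct option, e.g. "B"
        metadata={"category": record["category"]},
    )
```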
Evaluation
The prompts are based on EleutherAI’s lm-evaluation-harness and MultipleChoiceTemplate.SINGLE_ANSWER. The in-built choice scorer is used for evaluation.
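For illustration, here is a minimal sketch (not the repository's exact implementation) of how such a task can be assembled from inspect_ai's built-in multiple_choice solver and choice scorer, using a single hand-written sample in place of the full dataset:

```python
# Hedged sketch of a multiple-choice task wired up with inspect_ai's
# multiple_choice solver and choice scorer.
from inspect_ai import Task, task
from inspect_ai.dataset import MemoryDataset, Sample
from inspect_ai.scorer import choice
from inspect_ai.solver import multiple_choice

@task
def mmlu_pro_sketch():
    dataset = MemoryDataset([
        Sample(
            input="Approximately how far away is the Andromeda Galaxy?",
            choices=[
                "5 million light years",
                "2.5 million light years",
                "2.1 million light years",
            ],
            target="B",  # 2.5 million light years
        )
    ])
    return Task(dataset=dataset, solver=multiple_choice(), scorer=choice())
```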