MMLU: Measuring Massive Multitask Language Understanding

Knowledge

Evaluate models on 57 tasks including elementary mathematics, US history, computer science, law, and more.

Overview

MMLU is a benchmark that measures a model’s multitask accuracy. The dataset covers 57 tasks including elementary mathematics, US history, computer science, law, and more.

There are two tasks you can run here:

- 0-shot, Simple Evals style prompt (default, recommended for instruction-tuned models)
- 5-shot, traditional MMLU paper prompt (recommended for base models)

Usage

First, install the inspect_ai and inspect_evals Python packages with:

pip install inspect_ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals

Then, evaluate against one or more models with:

inspect eval inspect_evals/mmlu_0_shot --model openai/gpt-4o
inspect eval inspect_evals/mmlu_5_shot --model openai/gpt-4o
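
You can also run evaluations from Python. Here is a minimal sketch using Inspect’s eval() function with the task’s registry name; the model and limit values are illustrative, so check the Inspect documentation for the full eval() signature:

# Run the 0-shot MMLU task from Python (sketch; equivalent to the CLI above)
from inspect_ai import eval

logs = eval("inspect_evals/mmlu_0_shot", model="openai/gpt-4o", limit=10)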

After running evaluations, you can view their logs using the inspect view command:

inspect view

If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:

INSPECT_EVAL_MODEL=anthropic/claude-3-5-sonnet-20240620
ANTHROPIC_API_KEY=<anthropic-api-key>

Options

You can control a variety of options from the command line. For example:

inspect eval inspect_evals/mmlu_0_shot --limit 10
inspect eval inspect_evals/mmlu_5_shot --max-connections 10
inspect eval inspect_evals/mmlu_0_shot --temperature 0.5

See inspect eval --help for all available options.

You can also filter for specific MMLU subjects:

inspect eval inspect_evals/mmlu_0_shot -T subjects=anatomy
inspect eval inspect_evals/mmlu_5_shot -T subjects=anatomy,astronomy

0-shot Dataset

Here is an example prompt from the dataset (after it has been further processed by Inspect):

Answer the following multiple choice question. The entire content of your response should be of the following format: ‘ANSWER: $LETTER’ (without quotes) where LETTER is one of A,B,C,D.

The constellation … is a bright W-shaped constellation in the northern sky.

A) Centaurus
B) Cygnus
C) Cassiopeia
D) Cepheus

The model is then tasked with picking the correct choice.
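
For this example, a correct response would be:

ANSWER: C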

5-shot Dataset

This prompt format is the same as in the MMLU paper. It uses five examples of correctly answered questions in the subject (from the dev set) before posing the test question:

The following are multiple choice questions about computer security.

SHA-1 has a message digest of (A) 160 bits (B) 512 bits (C) 628 bits (D) 820 bits Answer: A

_____________ can modify data on your system – so that your system doesn’t run correctly or you can no longer access specific data, or it may even ask for ransom in order to give your access. (A) IM – Trojans (B) Backdoor Trojans (C) Trojan-Downloader (D) Ransom Trojan Answer: D

What is ethical hacking? (A) “Hacking” ethics so they justify unintended selfish behavior (B) Hacking systems (e.g., during penetration testing) to expose vulnerabilities so they can be fixed, rather than exploited (C) Hacking into systems run by those whose ethics you disagree with (D) A slang term for rapid software development, e.g., as part of hackathons Answer: B

Exploitation of the Heartbleed bug permits (A) overwriting cryptographic keys in memory (B) a kind of code injection (C) a read outside bounds of a buffer (D) a format string attack Answer: C

The ____________ is anything which your search engine cannot search. (A) Haunted web (B) World Wide Web (C) Surface web (D) Deep Web Answer: D

A packet filter firewall filters at the (A) Application or transport (B) Data link layer (C) Physical Layer (D) Network or transport layer Answer:

We recommend using pretrained models over instruction-tuned models for this evaluation. The MMLU prompt format is more of a ‘fill in the blank’ task, which better suits pretrained models; some instruction-tuned models can get confused and try to talk through the problem instead. For example, to evaluate Llama 3.1 we recommend llama3.1:8b-text-q4_0 over llama3.1:8b-instruct-q4_0.
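
For example, assuming the model is served locally through Ollama and accessed via Inspect’s ollama provider, the 5-shot task could be run with something like:

inspect eval inspect_evals/mmlu_5_shot --model ollama/llama3.1:8b-text-q4_0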

The one difference from the paper is that we score the answer the model actually generates. The paper instead takes the logprobs for the tokens ‘A’, ‘B’, ‘C’ and ‘D’ and picks the one with the highest probability. We’ve made this change because logprobs are not available for many of the models supported by Inspect. This results in slightly lower scores than the paper’s method, as models can output invalid tokens, which are marked incorrect.
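
To illustrate the idea (this is a sketch, not the actual Inspect scorer): the generated completion is matched against the target letter, and any output that does not contain a clean A–D choice is marked incorrect.

# Sketch only -- not the actual Inspect scorer implementation.
import re

def score_completion(completion: str, target: str) -> bool:
    # Take the first standalone A-D in the model's output; anything else is incorrect.
    match = re.search(r"\b([ABCD])\b", completion.strip())
    return match is not None and match.group(1) == target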

Scoring

A simple accuracy metric is calculated over the datapoints.
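
In other words, the score is the fraction of questions answered correctly; a minimal sketch:

# accuracy = number of correct answers / number of questions
def accuracy(results: list[bool]) -> float:
    return sum(results) / len(results)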