MMLU: Measuring Massive Multitask Language Understanding

Knowledge

Evaluate models on 57 tasks including elementary mathematics, US history, computer science, law, and more.

Overview

MMLU is a benchmark that measures a model’s multitask accuracy. The dataset covers 57 tasks such as elementary mathematics, US history, computer science, law, and more. It has been translated into multiple languages by OpenAI (https://huggingface.co/datasets/openai/MMMLU). The list of available languages and their associated identifiers is provided on the dataset page.
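
If you want to inspect the translated data directly, the MMMLU dataset can be loaded with the Hugging Face datasets library. The sketch below is illustrative only; the "FR_FR" configuration identifier comes from the dataset page and the "test" split name is an assumption.

from datasets import load_dataset

# Illustrative sketch: load the French translation of MMLU from openai/MMMLU.
# The configuration name ("FR_FR") and split name ("test") are assumptions
# taken from the dataset page, not part of inspect_evals itself.
mmmlu_fr = load_dataset("openai/MMMLU", "FR_FR", split="test")
print(mmmlu_fr[0])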

There are two tasks you can run here:

- 0-shot, Simple Evals style prompt (default, recommended for instruction-tuned models)
- 5-shot, traditional MMLU paper prompt presented as a chat (recommended for base models)

The few-shot examples are drawn from the dev split of the original dataset (in English) and are not translated.

Usage

First, install the inspect_ai and inspect_evals Python packages with:

pip install inspect_ai
pip install git+https://github.com/UKGovernmentBEIS/inspect_evals

Or, if developing on a clone of the inspect_evals repo, you can install the package in editable mode with:

pip install -e ".[dev]"

Then, evaluate against one or more models with:

inspect eval inspect_evals/mmlu_0_shot --model openai/gpt-4o
inspect eval inspect_evals/mmlu_5_shot --model openai/gpt-4o

After running evaluations, you can view their logs using the inspect view command:

inspect view

If you don’t want to specify the --model each time you run an evaluation, create a .env configuration file in your working directory that defines the INSPECT_EVAL_MODEL environment variable along with your API key. For example:

INSPECT_EVAL_MODEL=anthropic/claude-3-5-sonnet-20240620
ANTHROPIC_API_KEY=<anthropic-api-key>

Options

You can control a variety of options from the command line. For example:

inspect eval inspect_evals/mmlu_0_shot --limit 10
inspect eval inspect_evals/mmlu_5_shot --max-connections 10
inspect eval inspect_evals/mmlu_0_shot --temperature 0.5

See inspect eval --help for all available options.

You can control the parameters of the task using the -T command line flag.

To select which language the benchmark should target:

inspect eval inspect_evals/mmlu_0_shot -T language="FR_FR"

By default (when no language is specified), "EN_US" is assumed, which runs the original MMLU benchmark. To run all languages at once except "EN_US", set language to "default".
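
For example, to run the benchmark over all translated languages in a single evaluation:

inspect eval inspect_evals/mmlu_0_shot -T language="default"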

You can additionally filter for specific MMLU subjects:

inspect eval inspect_evals/mmlu_0_shot -T language="FR_FR" -T subjects=anatomy
inspect eval inspect_evals/mmlu_5_shot -T subjects=anatomy,astronomy

And use a chain-of-thought prompt for the inspect_evals/mmlu_0_shot task:

inspect eval inspect_evals/mmlu_0_shot -T language="FR_FR" -T subjects=anatomy -T cot=true

0-shot Dataset

Here is an example prompt from the dataset (after it has been further processed by Inspect):

Answer the following multiple choice question. The entire content of your response should be of the following format: ‘ANSWER: $LETTER’ (without quotes) where LETTER is one of A,B,C,D.

The constellation … is a bright W-shaped constellation in the northern sky.

A) Centaurus
B) Cygnus
C) Cassiopeia
D) Cepheus

The model is then expected to respond with the letter of the correct choice.
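
In this example the correct answer is Cassiopeia, so a well-formed response would be:

ANSWER: C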

5-shot Dataset

This prompt format is similar to that of the MMLU paper. It uses five examples of correctly answered questions in the same subject (drawn from the dev set) before posing the test question. It differs in that the examples are provided to the model as a chat rather than a single prompt, and the formatting instruction is repeated for each few-shot example.

Here is an example of such a chat:

{
   "messages":[
      {
         "role":"system",
         "content":"The following are multiple choice questions about anatomy."
      },
      {
         "role":"user",
         "content":"Answer the following multiple choice question. The entire content of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of A,B,C,D.\n\nWhat is the embryological origin of the hyoid bone?\n\n(A) The first pharyngeal arch\n(B) The first and second pharyngeal arches\n(C) The second pharyngeal arch\n(D) The second and third pharyngeal arches"
      },
      {
         "role":"assistant",
         "content":"Answer: D"
      },
      {
         "role":"user",
         "content":"Answer the following multiple choice question. The entire content of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of A,B,C,D.\n\nWhich of these branches of the trigeminal nerve contain somatic motor processes?\n\n(A) The supraorbital nerve\n(B) The infraorbital nerve\n(C) The mental nerve\n(D) None of the above"
      },
      {
         "role":"assistant",
         "content":"Answer: D"
      },
      {
         "role":"user",
         "content":"Answer the following multiple choice question. The entire content of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of A,B,C,D.\n\nThe pleura\n\n(A) have no sensory innervation.\n(B) are separated by a 2 mm space.\n(C) extend into the neck.\n(D) are composed of respiratory epithelium."
      },
      {
         "role":"assistant",
         "content":"Answer: C"
      },
      {
         "role":"user",
         "content":"Answer the following multiple choice question. The entire content of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of A,B,C,D.\n\nIn Angle's Class II Div 2 occlusion there is\n\n(A) excess overbite of the upper lateral incisors.\n(B) negative overjet of the upper central incisors.\n(C) excess overjet of the upper lateral incisors.\n(D) excess overjet of the upper central incisors."
      },
      {
         "role":"assistant",
         "content":"Answer: C"
      },
      {
         "role":"user",
         "content":"Answer the following multiple choice question. The entire content of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of A,B,C,D.\n\nWhich of the following is the body cavity that contains the pituitary gland?\n\n(A) Abdominal\n(B) Cranial\n(C) Pleural\n(D) Spinal"
      },
      {
         "role":"assistant",
         "content":"Answer: B"
      },
      {
         "role":"user",
         "content":"Answer the following multiple choice question. The entire content of your response should be of the following format: 'ANSWER: $LETTER' (without quotes) where LETTER is one of A,B,C,D.\n\nChez l'homme, laquelle des structures suivantes se trouve au niveau du col de la vessie et entoure l\u2019ur\u00e8tre\u00a0?\n\nA) L\u2019\u00e9pididyme\nB) La prostate\nC) Le scrotum\nD) Les v\u00e9sicules s\u00e9minales"
      }
   ]
}

We recommend using pretrained models over instruction-tuned models for this evaluation. The MMLU prompt format is more of a ‘fill in the blank’ task which better suits pretrained models. Some instruction-tuned models can get confused and try to talk through the problem instead. For example, to evaluate Llama 3.1 we recommend llama3.1:8b-text-q4_0 over llama3.1:8b-instruct-q4_0.

A further difference from the paper is that we take the answer directly from the model’s output. The paper instead takes the logprobs of the tokens ‘A’, ‘B’, ‘C’ and ‘D’ and picks the largest. We’ve made this change because logprobs are not available for many models supported by Inspect. This results in slightly lower scores than the paper’s method, as models can produce invalid answers, which are marked incorrect.
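
As an illustration only (not Inspect’s actual scorer), extracting the answer from a completion could look something like the sketch below; the function name and exact pattern are assumptions for this example.

import re

def extract_answer(completion: str) -> str | None:
    # Hypothetical sketch: look for the "ANSWER: X" pattern requested in the prompt.
    # Inspect's own scorer may handle edge cases differently.
    match = re.search(r"ANSWER\s*:\s*([ABCD])\b", completion, re.IGNORECASE)
    return match.group(1).upper() if match else None

# Completions with no valid "ANSWER: <letter>" are marked incorrect.
assert extract_answer("ANSWER: C") == "C"
assert extract_answer("I think the answer is Cassiopeia.") is None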

Scoring

A simple accuracy is calculated over the datapoints.
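
As a sketch of what this means in practice, accuracy is simply the fraction of samples whose extracted answer matches the target:

# Illustrative only: scores is a hypothetical list of booleans, one per sample
# (True if the model's answer matched the target).
def accuracy(scores: list[bool]) -> float:
    return sum(scores) / len(scores)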