Contribution Guide

We welcome new evaluations and improvements to existing evaluations. You can contribute by installing the development version of inspect_evals, adding your evaluation, and opening a pull request in this repo.

Development

To develop a new evaluation:

  1. Clone the repository and install the package with the -e flag and [dev] optional dependencies:

    git clone https://github.com/UKGovernmentBEIS/inspect_evals.git
    cd inspect_evals
    pip install -e ".[dev]"
  2. Create a sub-directory in src/inspect_evals for your evaluation and add your evaluation task and related code. Note: do not pass a name parameter to the task - this is only used for dynamically created tasks (i.e. tasks that are not addressable on the filesystem or in a package).
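
    For reference, a minimal task might look something like the sketch below (the file name, task name, dataset, solver, and scorer are illustrative placeholders, not requirements):

    # src/inspect_evals/my_evaluation/my_task.py  (placeholder names)
    from inspect_ai import Task, task
    from inspect_ai.dataset import Sample
    from inspect_ai.scorer import match
    from inspect_ai.solver import generate

    @task
    def my_task() -> Task:
        # Illustrative only: a real evaluation typically loads its dataset
        # from a file or from Hugging Face rather than defining samples inline.
        return Task(
            dataset=[Sample(input="What is 2 + 2?", target="4")],
            solver=[generate()],
            scorer=match(),
        )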

  3. Create an __init__.py file in the directory and use it to export your task and other useful functions.
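
    Continuing the placeholder example above, the __init__.py might simply re-export the task:

    # src/inspect_evals/my_evaluation/__init__.py  (placeholder names)
    from .my_task import my_task

    __all__ = ["my_task"]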

  4. Register your task(s) with the inspect eval CLI command by importing them in src/inspect_evals/_registry.py.
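
    This usually amounts to a single import line (again using the placeholder names from above):

    # in src/inspect_evals/_registry.py
    from inspect_evals.my_evaluation import my_task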

  5. Confirm that your evaluation is properly registered by running it:

    inspect eval inspect_evals/<my-task>

    Take note of the total number of samples in the dataset; you will need it for the dataset_samples field in listing.yaml later.
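
    While developing, you may find it convenient to run against only a small subset of samples, for example:

    inspect eval inspect_evals/<my-task> --limit 10 --model openai/gpt-4o-mini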

Submission

To prepare an evaluation for submission as a pull request:

  1. Create a README.md file in your sub-directory following the template used by other evaluations.

  2. Add any custom dependencies required by your evaluation to the pyproject.toml file. Don’t forget to add your evaluation to the [test] dependencies as well.

    [project.optional-dependencies]
    worldsense = ["pandas"]
    your_evaluation = ["example-python-package"]
    
    test = [
       "inspect_evals[dev]",
       "inspect_evals[your_evaluation]"
    ]

    Pin the package version if your evaluation might be sensitive to version updates.
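
    For example, a pinned entry might look like this (the package name and version are illustrative):

    your_evaluation = ["example-python-package==1.2.3"]  # illustrative pin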

    Make sure your module does not import your custom dependencies at import time (i.e. at the top level of the file): doing so breaks the way inspect_ai imports tasks from the registry for users who don’t have those dependencies installed. Instead, import them lazily inside the functions that need them, as in the sketch below.
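
    A minimal sketch of a lazy import, with pandas standing in for your custom dependency (the function itself is purely illustrative):

    def load_records(path: str) -> list[dict]:
        # Import the optional dependency inside the function that needs it,
        # rather than at the top of the module, so that importing your module
        # (e.g. when inspect_ai scans the registry) does not require it.
        import pandas as pd

        return pd.read_csv(path).to_dict(orient="records")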

  3. Add pytest tests covering the custom functions included in your evaluation. Demonstrating an example input and output of your record_to_sample function is fantastic documentation. Look in the tests directory for examples.
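
    For instance, a record_to_sample test might look roughly like this (the import path and record fields are placeholders for your own):

    # Placeholder import path; point this at your own module.
    from inspect_evals.your_evaluation.your_evaluation import record_to_sample

    def test_record_to_sample():
        record = {"question": "What is 2 + 2?", "answer": "4"}
        sample = record_to_sample(record)
        assert sample.input == "What is 2 + 2?"
        assert sample.target == "4"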

  4. Run pytest to ensure all new and existing tests pass.
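
    To iterate quickly, you can also point pytest at just your own tests (adjust the path to wherever your tests live):

    pytest tests/<your_evaluation>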

  5. Run linting, formatting, and type checking and address any issues:

    make check

    Note that make check runs automatically on all incoming PRs, so be sure your contribution passes it before opening one.

  6. If an evaluation is based on an existing paper or ported from another evaluation suite, you should validate that it roughly matches the results reported there before submitting your PR. This is to ensure that your new implementation faithfully reproduces the original evaluation.

    Comparison results between existing implementations and your implementation should be included in the pull request description.

  7. Add your evaluation to src/inspect_evals/listing.yaml to include it in the documentation (which is automatically generated and deployed by GitHub Actions after merge).

     - title: "Example-Bench: Your Amazing AI Benchmark"
       description: Describe what the benchmark is about here.
       path: src/inspect_evals/your_evaluation
       arxiv: https://arxiv.org/abs/1234.12345
       group: Coding
       contributors: ["your-github-handle"]
       tasks:
         - name: task-name
           dataset_samples: 365  # number of samples in the dataset used for this task
           human_baseline:  # optional field for when a human baseline is available for the task
             metric: accuracy
             score: 0.875
             source: https://arxiv.org/abs/0000.00000
       dependency: "your_evaluation"  # optional field for custom dependency from pyproject.toml
       tags: ["Agent"]  # optional tag for agentic evaluations

    Don’t forget to add the custom dependency from pyproject.toml if applicable.

  8. Run tools/listing.py to format your evaluation’s README file according to our standards.
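
    Assuming you are in the repository root, this is typically just:

    python tools/listing.py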

You can expect to receive a PR review within a couple of days.

Examples

Here are some existing evaluations that serve as good examples of what is required in a new submission:

  • GPQA, a simple multiple-choice evaluation.

  • GSM8K, a mathematics task with few-shot prompting.

  • HumanEval, a Python coding task.

  • InterCode, a capture the flag (CTF) cybersecurity task.