VimGolf: Evaluating LLMs in Vim Editing Proficiency
A benchmark that evaluates LLMs on their ability to operate the Vim editor and complete editing challenges. It contrasts with common CUA benchmarks by focusing on Vim-specific editing capabilities.
Overview
VimGolf is an interactive terminal game, originally designed for human players, that we use to evaluate LLMs on their ability to operate the Vim editor and complete editing challenges. This benchmark stands in contrast to common CUA benchmarks, which focus on everyday GUI applications.
VimGolf is hosted on vimgolf.com. It has a Ruby backend, igrigorik/vimgolf (the original), and a Python backend, dstein64/vimgolf. This benchmark uses the Python backend.
For isolation, we use Docker sandboxes to run the verifier. You must be able to run Docker on your machine as the current user, either by adding the user to the docker group or by running as root.
The Docker image can be built with `docker build -t vimgolf-verifier:inspect_evals .` in the current directory.
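Before building the image, you can confirm that the current user can reach the Docker daemon. A minimal sketch (this check is not part of the benchmark; it assumes the `docker` CLI is on PATH):

```python
import subprocess

# Quick check that the current user can talk to the Docker daemon.
# Assumption: the `docker` CLI is installed and on PATH.
result = subprocess.run(["docker", "info"], capture_output=True, text=True)
if result.returncode == 0:
    print("Docker is usable by the current user.")
else:
    print("Docker is not usable:", result.stderr.strip())
```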
This implementation evaluates single-turn dialogue performance of LLMs on the benchmark and does not utilize the interactive terminal environment introduced in james4ever0/vimgolf-gym.
Links:
Dataset and Tasks
The dataset (James4Ever0/vimgolf_challenges_and_solutions) consists of 612 public VimGolf challenges. Each challenge has an input text and an output text, and the task is to transform the input text into the output text using Vim with as few keystrokes as possible.
Each challenge is associated with a public worst solution for reference, i.e. the publicly posted solution that uses the most keystrokes to produce the output text from the input text. The worst solution is not guaranteed to be reusable or reproducible.
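For quick inspection outside the evaluation harness, the challenges can be loaded directly from the Hugging Face Hub. A minimal sketch (the split name is an assumption; consult the dataset card for the actual schema):

```python
from datasets import load_dataset

# Load the public challenge set by its Hub id (split name is an assumption).
ds = load_dataset("James4Ever0/vimgolf_challenges_and_solutions", split="train")

# Inspect one record to see the actual fields (e.g. input/output text).
print(ds[0].keys())
```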
The following task is derived from the dataset:
- `vimgolf_single_turn` (Single-Turn Dialogue, Environment-Free VimGolf Public Challenges): 612 samples. The model produces an answer in a single dialogue turn without interacting with a VimGolf environment. The answer must use at most as many keystrokes as the character count of the output text; this limit prevents naive solutions that simply type out the output verbatim (see the sketch below).
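The keystroke limit can be illustrated as follows. A minimal sketch (illustrative only; the actual task defines its own keystroke counting, and the tokenisation rule here is an assumption):

```python
import re

def count_keystrokes(key_sequence: str) -> int:
    """Count keystrokes in a Vim key sequence, treating special-key
    tokens such as <Esc>, <CR> or <C-w> as one keystroke each."""
    return len(re.findall(r"<[^<>]+>|.", key_sequence, flags=re.DOTALL))

def within_limit(key_sequence: str, output_text: str) -> bool:
    # Reject answers that use more keystrokes than the output has characters.
    return count_keystrokes(key_sequence) <= len(output_text)

# 15 keystrokes vs. 5 output characters -> False (rejected)
print(within_limit("ggdGihello<Esc>:wq<CR>", "hello"))
```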
Usage
First, install the dependencies:
```bash
uv sync
```

Then, evaluate against one or more models with:
```bash
uv run inspect eval inspect_evals/vimgolf_single_turn --model openai/gpt-5-nano
```

You can also import tasks as normal Python objects and run them from Python:
```python
from inspect_ai import eval
from inspect_evals.vimgolf_challenges import vimgolf_single_turn

eval(vimgolf_single_turn)
```
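If you prefer to configure the run in code, `eval()` also accepts common options such as the model and a sample limit. A brief sketch (keyword arguments assumed from Inspect AI's standard `eval()` interface):

```python
from inspect_ai import eval
from inspect_evals.vimgolf_challenges import vimgolf_single_turn

# Evaluate only the first 10 samples with a specific model.
eval(vimgolf_single_turn, model="openai/gpt-5-nano", limit=10)
```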
After running evaluations, you can view their logs using the `inspect view` command:

```bash
uv run inspect view
```

For VS Code, you can also install the Inspect AI extension for viewing logs.
If you don't want to specify the `--model` each time you run an evaluation, create a `.env` configuration file in your working directory that defines the `INSPECT_EVAL_MODEL` environment variable along with your API key. For example:
```
INSPECT_EVAL_MODEL=anthropic/claude-opus-4-1-20250805
ANTHROPIC_API_KEY=<anthropic-api-key>
```

Options
You can control a variety of options from the command line. For example:
```bash
uv run inspect eval inspect_evals/vimgolf_single_turn --limit 10
uv run inspect eval inspect_evals/vimgolf_single_turn --max-connections 10
uv run inspect eval inspect_evals/vimgolf_single_turn --temperature 0.5
```

See `uv run inspect eval --help` for all available options.
Scoring
Running with parameters:
```json
{
  "model": "ollama/gpt-oss:20b",
  "max_connections": 1,
  "max_tasks": 1,
  "max_subprocesses": 1,
  "max_sandboxes": 1,
  "dataset": "vimgolf_public_challenges_inspect_eval",
  "task": "vimgolf_single_turn"
}
```

The results are:
| Model | Accuracy | Stderr | Average Time per Task (minutes) | Incorrect | Correct |
|---|---|---|---|---|---|
| ollama/gpt-oss:20b | 0.118 | 0.013 | 1.785 | 540 | 72 |
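The accuracy and standard error columns follow directly from the correct/incorrect counts, assuming the reported stderr is the binomial standard error. A quick check:

```python
import math

correct, incorrect = 72, 540
n = correct + incorrect                             # 612 samples
accuracy = correct / n                              # ~0.118
stderr = math.sqrt(accuracy * (1 - accuracy) / n)   # ~0.013

print(f"accuracy={accuracy:.3f}, stderr={stderr:.3f}")
```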