# Language Model Evaluation Harness
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10256836.svg)](https://doi.org/10.5281/zenodo.10256836)
---
## Latest News 📣
- [2025/12] **CLI refactored** with subcommands (`run`, `ls`, `validate`) and YAML config file support via `--config` (see the sketch after this list). See the [CLI Reference](./docs/interface.md) and [Configuration Guide](./docs/config_files.md).
- [2025/12] **Lighter install**: Base package no longer includes `transformers`/`torch`. Install model backends separately: `pip install lm_eval[hf]`, `lm_eval[vllm]`, etc.
- [2025/07] Added `think_end_token` arg to `hf` (token/str), `vllm` and `sglang` (str) for stripping CoT reasoning traces from models that support it.
- [2025/03] Added support for steering HF models!
- [2025/02] Added [SGLang](https://docs.sglang.ai/) support!
- [2024/09] We are prototyping allowing users of LM Evaluation Harness to create and evaluate on text+image multimodal input, text output tasks, and have just added the `hf-multimodal` and `vllm-vlm` model types and `mmmu` task as a prototype feature. We welcome users to try out this in-progress feature and stress-test it for themselves, and suggest they check out [`lmms-eval`](https://github.com/EvolvingLMMs-Lab/lmms-eval), a wonderful project originally forking off of the lm-evaluation-harness, for a broader range of multimodal tasks, models, and features.
- [2024/07] [API model](docs/API_guide.md) support has been updated and refactored, introducing support for batched and async requests, and making it significantly easier to customize and use for your own purposes. **To run Llama 405B, we recommend using vLLM's OpenAI-compliant API to host the model, and use the `local-completions` model type to evaluate the model.**
- [2024/07] New Open LLM Leaderboard tasks have been added! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group.
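Below is a minimal sketch of the config-file workflow mentioned in the 2025/12 news item above. The YAML key names here are assumptions that mirror the CLI flag names; consult the [Configuration Guide](./docs/config_files.md) for the authoritative schema.

```python
# Hypothetical sketch of the --config workflow: the YAML keys below are
# assumed to mirror the CLI flags; see docs/config_files.md for the real schema.
import subprocess

import yaml  # requires: pip install pyyaml

config = {
    "model": "hf",  # example backend
    "model_args": "pretrained=EleutherAI/pythia-160m",
    "tasks": ["hellaswag"],
    "batch_size": 8,
}

with open("eval_config.yaml", "w") as f:
    yaml.safe_dump(config, f)

# Equivalent to spelling the same options out as CLI flags.
subprocess.run(["lm-eval", "run", "--config", "eval_config.yaml"], check=True)
```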
---
## Announcement
**A new v0.4.0 release of lm-evaluation-harness is available!**
New updates and features include:
- **New Open LLM Leaderboard tasks have been added! You can find them under the [leaderboard](lm_eval/tasks/leaderboard/README.md) task group.**
- Internal refactoring
- Config-based task creation and configuration
- Easier import and sharing of externally-defined task config YAMLs
- Support for Jinja2 prompt design, easy modification of prompts + prompt imports from Promptsource
- More advanced configuration options, including output post-processing, answer extraction, and multiple LM generations per document, configurable fewshot settings, and more
- Speedups and new modeling libraries supported, including: faster data-parallel HF model usage, vLLM support, MPS support with HuggingFace, and more
- Logging and usability changes
- New tasks including CoT BIG-Bench-Hard, Belebele, user-defined task groupings, and more
Please see our updated documentation pages in `docs/` for more details.
Development will be continuing on the `main` branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub, or in the [EleutherAI Discord](https://discord.gg/eleutherai)!
---
## Overview
This project provides a unified framework to test generative language models on a large number of different evaluation tasks.
**Features:**
- Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented.
- Support for models loaded via [transformers](https://github.com/huggingface/transformers/) (including quantization via [GPTQModel](https://github.com/ModelCloud/GPTQModel) and [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.
- Support for fast and memory-efficient inference with [vLLM](https://github.com/vllm-project/vllm).
- Support for commercial APIs including [OpenAI](https://openai.com) and [TextSynth](https://textsynth.com/).
- Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).
- Support for local models and benchmarks.
- Evaluation with publicly available prompts ensures reproducibility and comparability between papers.
- Easy support for custom prompts and evaluation metrics.
The Language Model Evaluation Harness is the backend for 🤗 Hugging Face's popular [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), has been used in [hundreds of papers](https://scholar.google.com/scholar?oi=bibs&hl=en&authuser=2&cites=15052937328817631261,4097184744846514103,1520777361382155671,17476825572045927382,18443729326628441434,14801318227356878622,7890865700763267262,12854182577605049984,15641002901115500560,5104500764547628290), and is used internally by dozens of organizations including NVIDIA, Cohere, BigScience, BigCode, Nous Research, and Mosaic ML.
## Install
To install the `lm-eval` package from the GitHub repository, run:
```bash
git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```
### Installing Model Backends
The base installation provides the core evaluation framework. **Model backends must be installed separately** using optional extras:
For HuggingFace transformers models:
```bash
pip install "lm_eval[hf]"
```
For vLLM inference:
```bash
pip install "lm_eval[vllm]"
```
For API-based models (OpenAI, Anthropic, etc.):
```bash
pip install "lm_eval[api]"
```
Multiple backends can be installed together:
```bash
pip install "lm_eval[hf,vllm,api]"
```
A detailed table of all optional extras is available at the end of this document.
## Basic Usage
### Documentation
| Guide | Description |
|-------|-------------|
| [CLI Reference](./docs/interface.md) | Command-line arguments and subcommands |
| [Configuration Guide](./docs/config_files.md) | YAML config file format and examples |
| [Python API](./docs/python-api.md) | Programmatic usage with `simple_evaluate()` |
| [Task Guide](./lm_eval/tasks/README.md) | Available tasks and task configuration |
Use `lm-eval -h` to see available options, or `lm-eval run -h` for evaluation options.
List available tasks with:
```bash
lm-eval ls tasks
```
### Hugging Face `transformers`
> [!Important]
> To use the HuggingFace backend, first install: `pip install "lm_eval[hf]"`
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag` you can use the following command (this assumes you are using a CUDA-compatible GPU):
```bash
lm_eval --model hf \
--model_args pretrained=EleutherAI/gpt-j-6B \
--tasks hellaswag \
--device cuda:0 \
--batch_size 8
```
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
```bash
lm_eval --model hf \
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
--tasks lambada_openai,hellaswag \
--device cuda:0 \
--batch_size 8
```
Models that are loaded via both `transformers.AutoModelForCausalLM` (autoregressive, decoder-only GPT style models) and `transformers.AutoModelForSeq2SeqLM` (such as encoder-decoder models like T5) in Huggingface are supported.
Batch size selection can be automated by setting the ```--batch_size``` flag to ```auto```. This will perform automatic detection of the largest batch size that will fit on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size, to gain a further speedup. To do this, append ```:N``` to the above flag to automatically recompute the largest batch size ```N``` times. For example, to recompute the batch size 4 times, the command would be:
```bash
lm_eval --model hf \
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
--tasks lambada_openai,hellaswag \
--device cuda:0 \
--batch_size auto:4
```
> [!Note]
> Just like you can provide a local path to `transformers.AutoModel`, you can also provide a local path to `lm_eval` via `--model_args pretrained=/path/to/model`
#### Evaluating GGUF Models
`lm-eval` supports evaluating models in GGUF format using the Hugging Face (`hf`) backend. This allows you to use quantized models compatible with `transformers`, `AutoModel`, and llama.cpp conversions.
To evaluate a GGUF model, pass the path to the directory containing the model weights, the `gguf_file`, and optionally a separate `tokenizer` path using the `--model_args` flag.
**🚨 Important Note:**
If no separate tokenizer is provided, Hugging Face will attempt to reconstruct the tokenizer from the GGUF file, which can take **hours** or even hang indefinitely. Passing a separate tokenizer avoids this issue and can reduce tokenizer loading time from hours to seconds.
**✅ Recommended usage:**
```bash
lm_eval --model hf \
--model_args pretrained=/path/to/gguf_folder,gguf_file=model-name.gguf,tokenizer=/path/to/tokenizer \
--tasks hellaswag \
--device cuda:0 \
--batch_size 8
```
> [!Tip]
> Ensure the tokenizer path points to a valid Hugging Face tokenizer directory (e.g., containing tokenizer_config.json, vocab.json, etc.).
#### Multi-GPU Evaluation with Hugging Face `accelerate`
We support three main ways of using Hugging Face's [accelerate 🚀](https://github.com/huggingface/accelerate) library for multi-GPU evaluation.
To perform *data-parallel evaluation* (where each GPU loads a **separate full copy** of the model), we leverage the `accelerate` launcher as follows:
```bash
accelerate launch -m lm_eval --model hf \
--tasks lambada_openai,arc_easy \
--batch_size 16
```
(or via `accelerate launch --no-python lm_eval`).
For cases where your model can fit on a single GPU, this allows you to evaluate on K GPUs K times faster than on one.
**WARNING**: This setup does not work with FSDP model sharding, so in `accelerate config` FSDP must be disabled, or the NO_SHARD FSDP option must be used.
The second way of using `accelerate` for multi-GPU evaluation is when your model is *too large to fit on a single GPU.*
In this setting, run the library *outside the `accelerate` launcher*, but pass `parallelize=True` to `--model_args` as follows:
```bash
lm_eval --model hf \
--tasks lambada_openai,arc_easy \
--model_args parallelize=True \
--batch_size 16
```
This means that your model's weights will be split across all available GPUs.
For more advanced users or even larger models, we allow for the following arguments when `parallelize=True` as well:
- `device_map_option`: How to split model weights across available GPUs. Defaults to "auto".
- `max_memory_per_gpu`: the max GPU memory to use per GPU in loading the model.
- `max_cpu_memory`: the max amount of CPU memory to use when offloading the model weights to RAM.
- `offload_folder`: a folder where model weights will be offloaded to disk if needed.
The third option is to use both at the same time. This will allow you to take advantage of both data parallelism and model sharding, and is especially useful for models that are too large to fit on a single GPU.
```bash
accelerate launch --multi_gpu --num_processes {nb_of_copies_of_your_model} \
-m lm_eval --model hf \
--tasks lambada_openai,arc_easy \
--model_args parallelize=True \
--batch_size 16
```
To learn more about model parallelism and how to use it with the `accelerate` library, see the [accelerate documentation](https://huggingface.co/docs/transformers/v4.15.0/en/parallelism).
**Warning: we do not natively support multi-node evaluation using the `hf` model type! We advise either running inference requests against an externally hosted server, or writing a custom integration with your distributed framework, [as is done for the GPT-NeoX library](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py).**
### Steered Hugging Face `transformers` models
To evaluate a Hugging Face `transformers` model with steering vectors applied, specify the model type as `steered` and provide the path to either a PyTorch file containing pre-defined steering vectors, or a CSV file that specifies how to derive steering vectors from pretrained `sparsify` or `sae_lens` models (you will need to install the corresponding optional dependency for this method).
Specify pre-defined steering vectors:
```python
import torch
steer_config = {
"layers.3": {
"steering_vector": torch.randn(1, 768),
"bias": torch.randn(1, 768),
"steering_coefficient": 1,
"action": "add"
},
}
torch.save(steer_config, "steer_config.pt")
```
Specify derived steering vectors:
```python
import pandas as pd
pd.DataFrame({
"loader": ["sparsify"],
"action": ["add"],
"sparse_model": ["EleutherA[DIR>]/sae-pythia-70m-32k"],
"hookpoint": ["layers.3"],
"feature_index": [30],
"steering_coefficient": [10.0],
}).to_csv("steer_config.csv", index=False)
```
Run the evaluation harness with steering vectors applied:
```bash
lm_eval --model steered \
--model_args pretrained=EleutherAI/pythia-160m,steer_path=steer_config.pt \
--tasks lambada_openai,hellaswag \
--device cuda:0 \
--batch_size 8
```
### NVIDIA `nemo` models
[NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo) is a generative AI framework built for researchers and PyTorch developers working on language models.
To evaluate a `nemo` model, start by installing NeMo following [the documentation](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#installation). We highly recommend using the NVIDIA PyTorch or NeMo container, especially if you have issues installing Apex or any other dependencies (see the [latest released containers](https://github.com/NVIDIA/NeMo/releases)). Please also install the lm evaluation harness library following the instructions in [the Install section](https://github.com/EleutherAI/lm-evaluation-harness/tree/main?tab=readme-ov-file#install).
NeMo models can be obtained through the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/models) or on [NVIDIA's Hugging Face page](https://huggingface.co/nvidia). The [NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo/tree/main/scripts/nlp_language_modeling) provides conversion scripts to convert the `hf` checkpoints of popular models like Llama, Falcon, Mixtral, or MPT to `nemo`.
Run a `nemo` model on one GPU:
```bash
lm_eval --model nemo_lm \
--model_args path=<path_to_nemo_model> \
--tasks hellaswag \
--batch_size 32
```
It is recommended to unpack the `nemo` model to avoid unpacking inside the Docker container, which may overflow disk space. To do so, run:
```bash
mkdir MY_MODEL
tar -xvf MY_MODEL.nemo -C MY_MODEL
```
#### Multi-GPU evaluation with NVIDIA `nemo` models
By default, only one GPU is used. But we do support either data replication or tensor/pipeline parallelism during evaluation, on one node.
1) To enable data replication, set `devices` in `model_args` to the number of data replicas to run. For example, the command to run 8 data replicas over 8 GPUs is:
```bash
torchrun --nproc-per-node=8 --no-python lm_eval \
--model nemo_lm \
--model_args path=<path_to_nemo_model>,devices=8 \
--tasks hellaswag \
--batch_size 32
```
2) To enable tensor and/or pipeline parallelism, set `tensor_model_parallel_size` and/or `pipeline_model_parallel_size` in `model_args`. In addition, you also have to set `devices` equal to the product of `tensor_model_parallel_size` and `pipeline_model_parallel_size`. For example, the command to use one node of 4 GPUs with tensor parallelism of 2 and pipeline parallelism of 2 is:
```bash
torchrun --nproc-per-node=4 --no-python lm_eval \
--model nemo_lm \
--model_args path=<path_to_nemo_model>,devices=4,tensor_model_parallel_size=2,pipeline_model_parallel_size=2 \
--tasks hellaswag \
--batch_size 32
```
Note that it is recommended to substitute the `python` command with `torchrun --nproc-per-node=<number of devices> --no-python` to facilitate loading the model into the GPUs. This is especially important for large checkpoints loaded into multiple GPUs.
Not supported yet: multi-node evaluation and combinations of data replication with tensor or pipeline parallelism.
#### Multi-GPU evaluation with OpenVINO models
Pipeline parallelism during evaluation is supported with OpenVINO models.
To enable pipeline parallelism, set `pipeline_parallel=True` in `model_args`. In addition, you also have to set `device` to `HETERO:<GPU index1>,<GPU index2>`, for example `HETERO:GPU.1,GPU.0`. The command to use pipeline parallelism of 2 is:
```bash
lm_eval --model openvino \
--tasks wikitext \
--model_args pretrained=<path_to_ov_model>,pipeline_parallel=True \
--device HETERO:GPU.1,GPU.0
```
### Tensor + Data Parallel and Optimized Inference with `vLLM`
We also support vLLM for faster inference on [supported model types](https://docs.vllm.ai/en/latest/models/supported_models.html), especially when splitting a model across multiple GPUs. For single-GPU or multi-GPU inference (tensor parallel, data parallel, or a combination of both), for example:
```bash
lm_eval --model vllm \
--model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \
--tasks lambada_openai \
--batch_size auto
```
To use vLLM, run `pip install "lm_eval[vllm]"`. For a full list of supported vLLM configurations, please reference our [vLLM integration](https://github.com/EleutherAI/lm-evaluation-harness/blob/e74ec966556253fbe3d8ecba9de675c77c075bce/lm_eval/models/vllm_causallms.py) and the vLLM documentation.
vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation and provide a [script](./scripts/model_comparator.py) for checking the validity of vLLM results against HF.
> [!Tip]
> For fastest performance, we recommend using `--batch_size auto` for vLLM whenever possible, to leverage its continuous batching functionality!

> [!Tip]
> Passing `max_model_len=4096` or some other reasonable default to vLLM through model args may cause speedups or prevent out-of-memory errors when trying to use auto batch size, such as for Mistral-7B-v0.1 which defaults to a maximum length of 32k.
### Tensor + Data Parallel and Fast Offline Batching Inference with `SGLang`
We support SGLang for efficient offline batch inference. Its **[Fast Backend Runtime](https://docs.sglang.ai/index.html)** delivers high performance through optimized memory management and parallel processing techniques. Key features include tensor parallelism, continuous batching, and support for various quantization methods (FP8/INT4/AWQ/GPTQ).
To use SGLang as the evaluation backend, please **install it in advance** following the SGLang documentation [here](https://docs.sglang.io/get_started/install.html#install-sglang).
> [!Tip]
> Due to the installation method of [`Flashinfer`](https://docs.flashinfer.ai/), a fast attention kernel library, we don't include the dependencies of `SGLang` within [pyproject.toml](pyproject.toml). Note that `Flashinfer` also has some requirements on the `torch` version.

SGLang's server arguments are slightly different from other backends; see [here](https://docs.sglang.io/advanced_features/server_arguments.html) for more information. An example of the usage:
```bash
lm_eval --model sglang \
--model_args pretrained={model_name},dp_size={data_parallel_size},tp_size={tensor_parallel_size},dtype=auto \
--tasks gsm8k_cot \
--batch_size auto
```
> [!Tip]
> When encountering out-of-memory (OOM) errors (especially for multiple-choice tasks), try these solutions:
>
> 1. Use a manual `batch_size`, rather than `auto`.
> 2. Lower KV cache pool memory usage by adjusting `mem_fraction_static` - add it to your model arguments, for example `--model_args pretrained=...,mem_fraction_static=0.7`.
> 3. Increase tensor parallel size `tp_size` (if using multiple GPUs).
### Windows ML
We support **Windows ML** for hardware-accelerated inference on Windows platforms. This enables evaluation on CPU, GPU, and **NPU (Neural Processing Unit)** devices.
Learn more about Windows ML: https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/overview
To use Windows ML, install the required dependencies:
```bash
pip install wasdk-Microsoft.Windows.AI.MachineLearning[all] wasdk-Microsoft.Windows.ApplicationModel.DynamicDependency.Bootstrap onnxruntime-windowsml onnxruntime-genai-winml
```
Evaluate an ONNX Runtime GenAI LLM on NPU/GPU/CPU on Windows:
```bash
lm_eval --model winml \
--model_args pretrained=/path/to/onnx/model \
--tasks mmlu \
--batch_size 1
```
> [!Note]
> The Windows ML backend is ONLY for the ONNX Runtime GenAI model format. Models targeting `transformers.js` won't work. You can verify a model is in GenAI format by checking for a `genai_config.json` file in the model folder.

> [!Note]
> To run an ONNX Runtime GenAI model on the target device, you MUST convert the original model for that vendor and device type. Converted models won't work, or won't work well, on other vendor or device types. To learn more about model conversion, please visit the [Microsoft AI Toolkit](https://code.visualstudio.com/docs/intelligentapps/modelconversion).
### Model APIs and Inference Servers
> [!Important]
> To use API-based models, first install: `pip install "lm_eval[api]"`
Our library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers.
To call a hosted model, use:
```bash
export OPENAI_API_KEY=YOUR_KEY_HERE
lm_eval --model openai-completions \
--model_args model=davinci-002 \
--tasks lambada_openai,hellaswag
```
We also support using your own local inference server with servers that mirror the OpenAI Completions and ChatCompletions APIs.
```bash
lm_eval --model local-completions --tasks gsm8k --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1/completions,num_concurrent=1,max_retries=3,tokenized_requests=False,batch_size=16
```
Note that for externally hosted models, configs such as `--device`, which relate to where to place a local model, should not be used and do not function. Just like you can use `--model_args` to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support.
| API or Inference Server | Implemented? | `--model <xxx>` name | Models supported | Request types |
|---|---|---|---|---|
| OpenAI Completions | :heavy_check_mark: | `openai-completions`, `local-completions` | All OpenAI Completions API models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| OpenAI ChatCompletions | :heavy_check_mark: | `openai-chat-completions`, `local-chat-completions` | [All ChatCompletions API models](https://platform.openai.com/docs/guides/gpt) | `generate_until` (no logprobs) |
| Anthropic | :heavy_check_mark: | `anthropic` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/reference/selecting-a-model) | `generate_until` (no logprobs) |
| Anthropic Chat | :heavy_check_mark: | `anthropic-chat`, `anthropic-chat-completions` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/docs/models-overview) | `generate_until` (no logprobs) |
| Textsynth | :heavy_check_mark: | `textsynth` | [All supported engines](https://textsynth.com/documentation.html#engines) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Cohere | [:hourglass: - blocked on Cohere API bug](https://github.com/EleutherAI/lm-evaluation-harness/pull/395) | N/A | [All `cohere.generate()` engines](https://docs.cohere.com/docs/models) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| [Llama.cpp](https://github.com/ggerganov/llama.cpp) (via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) | :heavy_check_mark: | `gguf`, `ggml` | [All models supported by llama.cpp](https://github.com/ggerganov/llama.cpp) | `generate_until`, `loglikelihood` (perplexity evaluation not yet implemented) |
| vLLM | :heavy_check_mark: | `vllm` | [Most HF Causal Language Models](https://docs.vllm.ai/en/latest/models/supported_models.html) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Mamba | :heavy_check_mark: | `mamba_ssm` | [Mamba architecture Language Models via the `mamba_ssm` package](https://huggingface.co/state-spaces) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Huggingface Optimum (Causal LMs) | :heavy_check_mark: | `openvino` | Any decoder-only AutoModelForCausalLM converted with Huggingface Optimum into OpenVINO™ Intermediate Representation (IR) format | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Huggingface Optimum-intel IPEX (Causal LMs) | :heavy_check_mark: | `ipex` | Any decoder-only AutoModelForCausalLM | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Neuron via AWS Inf2 (Causal LMs) | :heavy_check_mark: | `neuronx` | Any decoder-only AutoModelForCausalLM supported to run on the [huggingface-ami image for inferentia2](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| NVIDIA NeMo | :heavy_check_mark: | `nemo_lm` | [All supported models](https://docs.nvidia.com/nemo-framework/user-guide/24.09/nemotoolkit/core/core.html#nemo-models) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Watsonx.ai | :heavy_check_mark: | `watsonx_llm` | [Supported Watsonx.ai Engines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=wx) | `generate_until`, `loglikelihood` |
| Windows ML | :heavy_check_mark: | `winml` | [ONNX models in GenAI format](https://code.visualstudio.com/docs/intelligentapps/modelconversion) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| [Your local inference server!](docs/API_guide.md) | :heavy_check_mark: | `local-completions` or `local-chat-completions` | Support for OpenAI API-compatible servers, with easy customization for other APIs. | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
For more information on the different task `output_types` and model request types, see [our documentation](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md#interface).
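To make these request types concrete, here is a rough sketch of a custom model backend implementing them, loosely following the `LM` interface described in the model guide. The class name, registration name, and constant return values are placeholders for illustration only:

```python
# Sketch of a custom backend implementing the three request types.
# The placeholder return values stand in for real model calls.
from lm_eval.api.model import LM
from lm_eval.api.registry import register_model


@register_model("my-dummy-lm")  # then selectable via --model my-dummy-lm
class MyDummyLM(LM):
    def loglikelihood(self, requests):
        # One (logprob, is_greedy) pair per (context, continuation) request.
        return [(-1.0, False) for _ in requests]

    def loglikelihood_rolling(self, requests):
        # One total logprob per full-text request (used for perplexity tasks).
        return [-10.0 for _ in requests]

    def generate_until(self, requests):
        # One generated string per (context, gen_kwargs) request.
        return ["" for _ in requests]
```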
> [!Note]
> For best performance with closed chat model APIs such as Anthropic Claude 3 and GPT-4, we recommend carefully looking at a few sample outputs using `--limit 10` first to confirm answer extraction and scoring on generative tasks is performing as expected. Providing `system="<some system prompt here>"` within `--model_args` for anthropic-chat-completions, to instruct the model what format to respond in, may be useful.
### Other Frameworks
A number of other libraries contain scripts for calling the eval harness through their library. These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py).
To create your own custom integration you can follow instructions from [this tutorial](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md#external-library-usage).
### Additional Features
> [!Note]
> For tasks unsuitable for direct evaluation, either due to risks associated with executing untrusted code or complexities in the evaluation process, the `--predict_only` flag is available to obtain decoded generations for post-hoc evaluation.
If you have a Metal-compatible Mac, you can run the eval harness using the MPS back-end by replacing `--device cuda:0` with `--device mps` (requires PyTorch version 2.1 or higher). **Note that the PyTorch MPS backend is still in early stages of development, so correctness issues or unsupported operations may exist. If you observe oddities in model performance on the MPS back-end, we recommend first checking that a forward pass of your model on `--device cpu` and `--device mps` match.**
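For example, a quick CPU-vs-MPS sanity check along those lines might look like this (the model name is only an example):

```python
# Sketch: compare CPU and MPS forward passes for a small HF model.
# A large printed max-difference suggests MPS correctness issues.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "EleutherAI/pythia-160m"  # example model
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

inputs = tok("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    cpu_logits = model(**inputs).logits
    model = model.to("mps")
    mps_logits = model(**{k: v.to("mps") for k, v in inputs.items()}).logits

print(torch.max(torch.abs(cpu_logits - mps_logits.cpu())))
```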
> [!Note]
> You can inspect what the LM inputs look like by running the following command:
>
> ```bash
> python write_out.py \
>     --tasks <task1,task2,...> \
>     --num_fewshot 5 \
>     --num_examples 10 \
>     --output_base_path /path/to/output/folder
> ```
>
> This will write out one text file for each task.
To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag:
```bash
lm_eval --model openai-completions \
--model_args model=davinci-002 \
--tasks lambada_openai,hellaswag \
--check_integrity
```
## Advanced Usage Tips
For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=` or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and add `,peft=PATH` to the `model_args` argument:
```bash
lm_eval --model hf \
--model_args pretrained=EleutherA[DIR>]/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \
--tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
--device cuda:0
```
Models provided as delta weights can be easily loaded using the Hugging Face transformers library. Within `--model_args`, set the `delta` argument to specify the delta weights, and use the `pretrained` argument to designate the relative base model to which they will be applied:
```bash
lm_eval --model hf \
--model_args pretrained=Ejafa/llama_7B,delta=lmsys/vicuna-7b-delta-v1.1 \
--tasks hellaswag
```
GPTQ quantized models can be loaded using [GPTQModel](https://github.com/ModelCloud/GPTQModel) (faster) or [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
GPTQModel: add `,gptqmodel=True` to `model_args`:
```bash
lm_eval --model hf \
--model_args pretrained=model-name-or-path,gptqmodel=True \
--tasks hellaswag
```
AutoGPTQ: add `,autogptq=True` to `model_args`:
```bash
lm_eval --model hf \
--model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
--tasks hellaswag
```
We support wildcards in task names; for example, you can run all of the machine-translated lambada tasks via `--tasks lambada_openai_mt_*`.
## Saving & Caching Results
To save evaluation results, provide an `--output_path`. We also support logging model responses with the `--log_samples` flag for post-hoc analysis.
> [!TIP]
> Use `--use_cache <DIR>` to cache evaluation results and skip previously evaluated samples when resuming runs of the same (model, task) pairs. Note that caching is rank-dependent, so restart with the same GPU count if interrupted. You can also use `--cache_requests` to save dataset preprocessing steps for faster evaluation resumption.
To push results and samples to the Hugging Face Hub, first ensure an access token with write access is set in the `HF_TOKEN` environment variable. Then, use the `--hf_hub_log_args` flag to specify the organization, repository name, repository visibility, and whether to push results and samples to the Hub - [example dataset on the HF Hub](https://huggingface.co/datasets/KonradSzafer/lm-eval-results-demo). For instance:
```bash
lm_eval --model hf \
--model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
--tasks hellaswag \
--log_samples \
--output_path results \
--hf_hub_log_args hub_results_org=EleutherAI,hub_repo_name=lm-eval-results,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False
```
This allows you to easily download the results and samples from the Hub, using:
```python
from datasets import load_dataset
load_dataset("EleutherA[DIR>]/lm-eval-results-private", "hellaswag", "latest")
```
For a full list of supported arguments, check out the [interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md) guide in our documentation!
## Visualizing Results
You can seamlessly visualize and analyze the results of your evaluation harness runs using both Weights & Biases (W&B) and Zeno.
### Zeno
You can use [Zeno](https://zenoml.com) to visualize the results of your eval harness runs.
First, head to [hub.zenoml.com](https://hub.zenoml.com) to create an account and get an API key [on your account page](https://hub.zenoml.com/account).
Add this key as an environment variable:
```bash
export ZENO_API_KEY=[your api key]
```
You'll also need to install the `lm_eval[zeno]` package extra.
To visualize the results, run the eval harness with the `log_samples` and `output_path` flags.
We expect `output_path` to contain multiple folders that represent individual model names.
You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.
```bash
lm_eval \
--model hf \
--model_args pretrained=EleutherAI/gpt-j-6B \
--tasks hellaswag \
--device cuda:0 \
--batch_size 8 \
--log_samples \
--output_path output/gpt-j-6B
```
Then, you can upload the resulting data using the `zeno_visualize` script:
```bash
python scripts/zeno_visualize.py \
--data_path output \
--project_name "Eleuther Project"
```
This will use all subfolders in `data_path` as different models and upload all tasks within these model folders to Zeno.
If you run the eval harness on multiple tasks, the `project_name` will be used as a prefix and one project will be created per task.
You can find an example of this workflow in [examples/visualize-zeno.ipynb](examples/visualize-zeno.ipynb).
### Weights and Biases
With the [Weights and Biases](https://wandb.ai/site) integration, you can now spend more time extracting deeper insights into your evaluation results. The integration is designed to streamline the process of logging and visualizing experiment results using the Weights & Biases (W&B) platform.
The integration provides functionality:
- to automatically log the evaluation results,
- log the samples as W&B Tables for easy visualization,
- log the `results.json` file as an artifact for version control,
- log the `<task_name>_eval_samples.json` file if the samples are logged,
- generate a comprehensive report for analysis and visualization with all the important metrics,
- log task and CLI specific configs,
- and more out of the box, like the command used to run the evaluation, GPU/CPU counts, timestamp, etc.
First, you'll need to install the `lm_eval[wandb]` package extra: `pip install lm_eval[wandb]`.
Authenticate your machine with your unique W&B token (visit https://wandb.ai/authorize to get one), then run `wandb login` in your command-line terminal.
Run the eval harness as usual, with the `--wandb_args` flag. Use this flag to provide arguments for initializing a wandb run ([wandb.init](https://docs.wandb.ai/ref/python/init)) as comma-separated string arguments.
```bash
lm_eval \
--model hf \
--model_args pretrained=microsoft/phi-2,trust_remote_code=True \
--tasks hellaswag,mmlu_abstract_algebra \
--device cuda:0 \
--batch_size 8 \
--output_path output/phi-2 \
--limit 10 \
--wandb_args project=lm-eval-harness-integration \
--log_samples
```
In the stdout, you will find the link to the W&B run page as well as a link to the generated report. You can find an example of this workflow in [examples/visualize-wandb.ipynb](examples/visualize-wandb.ipynb), along with an example of how to integrate it beyond the CLI.
## Contributing
Check out our [open issues](https://github.com/EleutherAI/lm-evaluation-harness/issues) and feel free to submit pull requests!
For more information on the library and how everything fits together, see our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs).
To get started with development, first clone the repository and install the dev dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e ".[dev,hf]"
```
### Implementing new tasks
To implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md).
In general, we follow this priority list for addressing concerns about prompting and other eval details:
1. If there is widespread agreement among people who train LLMs, use the agreed upon procedure.
2. If there is a clear and unambiguous official implementation, use that procedure.
3. If there is widespread agreement among people who evaluate LLMs, use the agreed upon procedure.
4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations. As before, prioritize choosing from among the implementations found in LLM training papers.
These are guidelines and not rules, and can be overruled in special circumstances.
We try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from [Language Models are Few Shot Learners](https://arxiv.org/abs/2005.14165) as our original goal was specifically to compare results with that paper.
### Support
The best way to get support is to open an issue on this repo or join the [EleutherAI Discord server](https://discord.gg/eleutherai). The `#lm-thunderdome` channel is dedicated to developing this project and the `#release-discussion` channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you!
## Optional Extras
Extras dependencies can be installed via `pip install -e ".[NAME]"`
### Model Backends
These extras install dependencies required to run specific model backends:
| NAME | Description |
|----------------|--------------------------------------------------|
| hf | HuggingFace Transformers (torch, transformers, accelerate, peft) |
| vllm | vLLM fast inference |
| api | API models (OpenAI, Anthropic, local servers) |
| gptq | AutoGPTQ quantized models |
| gptqmodel | GPTQModel quantized models |
| ibm_watsonx_ai | IBM watsonx.ai models |
| ipex | Intel IPEX backend |
| optimum | Intel OpenVINO models |
| neuronx | AWS Inferentia2 instances |
| winml | Windows ML (ONNX Runtime GenAI) - CPU/GPU/NPU |
| sparsify | Sparsify model steering |
| sae_lens | SAELens model steering |
### Task Dependencies
These extras install dependencies required for specific evaluation tasks:
| NAME | Description |
|----------------------|--------------------------------|
| tasks | All task-specific dependencies |
| acpbench | ACP Bench tasks |
| audiolm_qwen | Qwen2 audio models |
| ifeval | IFEval task |
| japanese_leaderboard | Japanese LLM tasks |
| longbench | LongBench tasks |
| math | Math answer checking |
| multilingual | Multilingual tokenizers |
| ruler | RULER tasks |
### Development & Utilities
| NAME | Description |
|---------------|--------------------------------|
| dev | Linting & contributions |
| hf_transfer | Speed up HF downloads |
| sentencepiece | Sentencepiece tokenizer |
| unitxt | Unitxt tasks |
| wandb | Weights & Biases logging |
| zeno | Zeno result visualization |
## Cite as
```text
@misc{eval-harness,
author = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
title = {The Language Model Evaluation Harness},
month = 07,
year = 2024,
publisher = {Zenodo},
version = {v0.4.3},
doi = {10.5281/zenodo.12608602},
url = {https://zenodo.org/records/12608602}
}
```
run, ls, validate) and YAML config file support via --config. See the CLI Reference and Configuration Guide.transformers/torch. Install model backends separately: pip install lm_eval[hf], lm_eval[vllm], etc.think_end_token arg to hf (token/str), vllm and sglang (str) for stripping CoT reasoning traces from models that support it.hf-multimodal and vllm-vlm model types and mmmu task as a prototype feature. We welcome users to try out this in-progress feature and stress-test it for themselves, and suggest they check out lmms-eval, a wonderful project originally forking off of the lm-evaluation-harness, for a broader range of multimodal tasks, models, and features.local-completions model type to evaluate the model.A new v0.4.0 release of lm-evaluation-harness is available !
New updates and features include:
Please see our updated documentation pages in
docs/ for more details.
Development will be continuing on the
main branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub, or in the EleutherAI discord!
This project provides a unified framework to test generative language models on a large number of different evaluation tasks.
Features:
The Language Model Evaluation Harness is the backend for π€ Hugging Face's popular Open LLM Leaderboard, has been used in hundreds of papers, and is used internally by dozens of organizations including NVIDIA, Cohere, BigScience, BigCode, Nous Research, and Mosaic ML.
To install the
lm-eval package from the github repository, run:
git clone --depth 1 https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness pip install -e .
The base installation provides the core evaluation framework. Model backends must be installed separately using optional extras:
For HuggingFace transformers models:
pip install "lm_eval[hf]"
For vLLM inference:
pip install "lm_eval[vllm]"
For API-based models (OpenAI, Anthropic, etc.):
pip install "lm_eval[api]"
Multiple backends can be installed together:
pip install "lm_eval[hf,vllm,api]"
A detailed table of all optional extras is available at the end of this document.
| Guide | Description |
|---|---|
| CLI Reference | Command-line arguments and subcommands |
| Configuration Guide | YAML config file format and examples |
| Python API | Programmatic usage with |
| Task Guide | Available tasks and task configuration |
Use
lm-eval -h to see available options, or lm-eval run -h for evaluation options.
List available tasks with:
lm-eval ls tasks
transformers[!Important] To use the HuggingFace backend, first install:
pip install "lm_eval[hf]"
To evaluate a model hosted on the HuggingFace Hub (e.g. GPT-J-6B) on
hellaswag you can use the following command (this assumes you are using a CUDA-compatible GPU):
lm_eval --model hf \ --model_args pretrained=EleutherAI/gpt-j-6B \ --tasks hellaswag \ --device cuda:0 \ --batch_size 8
Additional arguments can be provided to the model constructor using the
--model_args flag. Most notably, this supports the common practice of using the revisions feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
lm_eval --model hf \ --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \ --tasks lambada_openai,hellaswag \ --device cuda:0 \ --batch_size 8
Models that are loaded via both
transformers.AutoModelForCausalLM (autoregressive, decoder-only GPT style models) and transformers.AutoModelForSeq2SeqLM (such as encoder-decoder models like T5) in Huggingface are supported.
Batch size selection can be automated by setting the
--batch_size flag to auto. This will perform automatic detection of the largest batch size that will fit on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size, to gain a further speedup. To do this, append :N to above flag to automatically recompute the largest batch size N times. For example, to recompute the batch size 4 times, the command would be:
lm_eval --model hf \ --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \ --tasks lambada_openai,hellaswag \ --device cuda:0 \ --batch_size auto:4
[!Note] Just like you can provide a local path to
, you can also provide a local path totransformers.AutoModelvialm_eval--model_args pretrained=/path/to/model
lm-eval supports evaluating models in GGUF format using the Hugging Face (hf) backend. This allows you to use quantized models compatible with transformers, AutoModel, and llama.cpp conversions.
To evaluate a GGUF model, pass the path to the directory containing the model weights, the
gguf_file, and optionally a separate tokenizer path using the --model_args flag.
π¨ Important Note:
If no separate tokenizer is provided, Hugging Face will attempt to reconstruct the tokenizer from the GGUF file β this can take hours or even hang indefinitely. Passing a separate tokenizer avoids this issue and can reduce tokenizer loading time from hours to seconds.
β Recommended usage:
lm_eval --model hf \ --model_args pretrained=/path/to/gguf_folder,gguf_file=model-name.gguf,tokenizer=/path/to/tokenizer \ --tasks hellaswag \ --device cuda:0 \ --batch_size 8
[!Tip] Ensure the tokenizer path points to a valid Hugging Face tokenizer directory (e.g., containing tokenizer_config.json, vocab.json, etc.).
accelerateWe support three main ways of using Hugging Face's accelerate π library for multi-GPU evaluation.
To perform data-parallel evaluation (where each GPU loads a separate full copy of the model), we leverage the
accelerate launcher as follows:
accelerate launch -m lm_eval --model hf \ --tasks lambada_openai,arc_easy \ --batch_size 16
(or via
accelerate launch --no-python lm_eval).
For cases where your model can fit on a single GPU, this allows you to evaluate on K GPUs K times faster than on one.
WARNING: This setup does not work with FSDP model sharding, so in
accelerate config FSDP must be disabled, or the NO_SHARD FSDP option must be used.
The second way of using
accelerate for multi-GPU evaluation is when your model is too large to fit on a single GPU.
In this setting, run the library outside the
launcher, but passing accelerate
parallelize=True to --model_args as follows:
lm_eval --model hf \ --tasks lambada_openai,arc_easy \ --model_args parallelize=True \ --batch_size 16
This means that your model's weights will be split across all available GPUs.
For more advanced users or even larger models, we allow for the following arguments when
parallelize=True as well:
device_map_option: How to split model weights across available GPUs. defaults to "auto".max_memory_per_gpu: the max GPU memory to use per GPU in loading the model.max_cpu_memory: the max amount of CPU memory to use when offloading the model weights to RAM.offload_folder: a folder where model weights will be offloaded to disk if needed.The third option is to use both at the same time. This will allow you to take advantage of both data parallelism and model sharding, and is especially useful for models that are too large to fit on a single GPU.
accelerate launch --multi_gpu --num_processes {nb_of_copies_of_your_model} \ -m lm_eval --model hf \ --tasks lambada_openai,arc_easy \ --model_args parallelize=True \ --batch_size 16
To learn more about model parallelism and how to use it with the
accelerate library, see the accelerate documentation
Warning: We do not natively support multi-node evaluation using the
model type! Please reference our GPT-NeoX library integration for an example of code in which a custom multi-machine evaluation script is written.hf
Note: we do not currently support multi-node evaluations natively, and advise using either an externally hosted server to run inference requests against, or creating a custom integration with your distributed framework as is done for the GPT-NeoX library.
transformers modelsTo evaluate a Hugging Face
transformers model with steering vectors applied, specify the model type as steered and provide the path to either a PyTorch file containing pre-defined steering vectors, or a CSV file that specifies how to derive steering vectors from pretrained sparsify or sae_lens models (you will need to install the corresponding optional dependency for this method).
Specify pre-defined steering vectors:
import torch steer_config = { "layers.3": { "steering_vector": torch.randn(1, 768), "bias": torch.randn(1, 768), "steering_coefficient": 1, "action": "add" }, } torch.save(steer_config, "steer_config.pt")
Specify derived steering vectors:
import pandas as pd pd.DataFrame({ "loader": ["sparsify"], "action": ["add"], "sparse_model": ["EleutherAI/sae-pythia-70m-32k"], "hookpoint": ["layers.3"], "feature_index": [30], "steering_coefficient": [10.0], }).to_csv("steer_config.csv", index=False)
Run the evaluation harness with steering vectors applied:
lm_eval --model steered \ --model_args pretrained=EleutherAI/pythia-160m,steer_path=steer_config.pt \ --tasks lambada_openai,hellaswag \ --device cuda:0 \ --batch_size 8
nemo modelsNVIDIA NeMo Framework is a generative AI framework built for researchers and pytorch developers working on language models.
To evaluate a
nemo model, start by installing NeMo following the documentation. We highly recommended to use the NVIDIA PyTorch or NeMo container, especially if having issues installing Apex or any other dependencies (see latest released containers). Please also install the lm evaluation harness library following the instructions in the Install section.
NeMo models can be obtained through NVIDIA NGC Catalog or in NVIDIA's Hugging Face page. In NVIDIA NeMo Framework there are conversion scripts to convert the
hf checkpoints of popular models like llama, falcon, mixtral or mpt to nemo.
Run a
nemo model on one GPU:
lm_eval --model nemo_lm \ --model_args path=<path_to_nemo_model> \ --tasks hellaswag \ --batch_size 32
It is recommended to unpack the
nemo model to avoid the unpacking inside the docker container - it may overflow disk space. For that you can run:
mkdir MY_MODEL tar -xvf MY_MODEL.nemo -c MY_MODEL
nemo modelsBy default, only one GPU is used. But we do support either data replication or tensor/pipeline parallelism during evaluation, on one node.
model_args of devices to the number of data replicas to run. For example, the command to run 8 data replicas over 8 GPUs is:torchrun --nproc-per-node=8 --no-python lm_eval \ --model nemo_lm \ --model_args path=<path_to_nemo_model>,devices=8 \ --tasks hellaswag \ --batch_size 32
model_args of tensor_model_parallel_size and/or pipeline_model_parallel_size. In addition, you also have to set up devices to be equal to the product of tensor_model_parallel_size and/or pipeline_model_parallel_size. For example, the command to use one node of 4 GPUs with tensor parallelism of 2 and pipeline parallelism of 2 is:torchrun --nproc-per-node=4 --no-python lm_eval \ --model nemo_lm \ --model_args path=<path_to_nemo_model>,devices=4,tensor_model_parallel_size=2,pipeline_model_parallel_size=2 \ --tasks hellaswag \ --batch_size 32
Note that it is recommended to substitute the
python command by torchrun --nproc-per-node=<number of devices> --no-python to facilitate loading the model into the GPUs. This is especially important for large checkpoints loaded into multiple GPUs.
Not supported yet: multi-node evaluation and combinations of data replication with tensor or pipeline parallelism.
Pipeline parallelism during evaluation is supported with OpenVINO models
To enable pipeline parallelism, set the
model_args of pipeline_parallel. In addition, you also have to set up device to value HETERO:<GPU index1>,<GPU index2> for example HETERO:GPU.1,GPU.0 For example, the command to use pipeline parallelism of 2 is:
lm_eval --model openvino \ --tasks wikitext \ --model_args pretrained=<path_to_ov_model>,pipeline_parallel=True \ --device HETERO:GPU.1,GPU.0
vLLMWe also support vLLM for faster inference on supported model types, especially faster when splitting a model across multiple GPUs. For single-GPU or multi-GPU β tensor parallel, data parallel, or a combination of both β inference, for example:
lm_eval --model vllm \ --model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \ --tasks lambada_openai \ --batch_size auto
To use vllm, do
pip install "lm_eval[vllm]". For a full list of supported vLLM configurations, please reference our vLLM integration and the vLLM documentation.
vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation and provide a script for checking the validity of vllm results against HF.
[!Tip] For fastest performance, we recommend using
for vLLM whenever possible, to leverage its continuous batching functionality!--batch_size auto
[!Tip] Passing
or some other reasonable default to vLLM through model args may cause speedups or prevent out-of-memory errors when trying to use auto batch size, such as for Mistral-7B-v0.1 which defaults to a maximum length of 32k.max_model_len=4096
SGLangWe support SGLang for efficient offline batch inference. Its Fast Backend Runtime delivers high performance through optimized memory management and parallel processing techniques. Key features include tensor parallelism, continuous batching, and support for various quantization methods (FP8/INT4/AWQ/GPTQ).
To use SGLang as the evaluation backend, please install it in advance via SGLang documents here.
[!Tip] Due to the installing method of
-- a fast attention kernel library, we don't include the dependencies ofFlashinferwithin pyproject.toml. Note that theSGLangalso has some requirements onFlashinferversion.torch
SGLang's server arguments are slightly different from other backends, see here for more information. We provide an example of the usage here:
```bash
lm_eval --model sglang \
    --model_args pretrained={model_name},dp_size={data_parallel_size},tp_size={tensor_parallel_size},dtype=auto \
    --tasks gsm8k_cot \
    --batch_size auto
```
> [!TIP]
> When encountering out-of-memory (OOM) errors (especially for multiple-choice tasks), try these solutions (a combined example is sketched below):
>
> - Use a manual `batch_size`, rather than `auto`.
> - Lower KV cache pool memory usage by adjusting `mem_fraction_static`. Add it to your model arguments, for example `--model_args pretrained=...,mem_fraction_static=0.7`.
> - Increase the tensor parallel size `tp_size` (if using multiple GPUs).
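A minimal sketch combining these suggestions (the model placeholder and the specific values are illustrative, not prescriptive):

```bash
lm_eval --model sglang \
    --model_args pretrained={model_name},tp_size=2,mem_fraction_static=0.7,dtype=auto \
    --tasks mmlu \
    --batch_size 8
```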
### Windows ML

We support Windows ML for hardware-accelerated inference on Windows platforms. This enables evaluation on CPU, GPU, and NPU (Neural Processing Unit) devices.

New to Windows ML? See the overview: https://learn.microsoft.com/en-us/windows/ai/new-windows-ml/overview
To use Windows ML, install the required dependencies:
```bash
pip install wasdk-Microsoft.Windows.AI.MachineLearning[all] wasdk-Microsoft.Windows.ApplicationModel.DynamicDependency.Bootstrap onnxruntime-windowsml onnxruntime-genai-winml
```
Evaluate an ONNX Runtime GenAI LLM on NPU/GPU/CPU on Windows:
```bash
lm_eval --model winml \
    --model_args pretrained=/path/to/onnx/model \
    --tasks mmlu \
    --batch_size 1
```
> [!NOTE]
> The Windows ML backend is ONLY for the ONNX Runtime GenAI model format. Models targeting `transformers.js` won't work. You can verify the format by finding the `genai_config.json` file in the model folder.
> [!NOTE]
> To run an ONNX Runtime GenAI model on the target device, you MUST convert the original model for that vendor and device type. Converted models won't work (or won't work well) on other vendor or device types. To learn more about model conversion, please visit the Microsoft AI Toolkit.
> [!IMPORTANT]
> To use API-based models, first install: `pip install "lm_eval[api]"`
Our library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers.
To call a hosted model, use:
```bash
export OPENAI_API_KEY=YOUR_KEY_HERE
lm_eval --model openai-completions \
    --model_args model=davinci-002 \
    --tasks lambada_openai,hellaswag
```
We also support using your own local inference server with servers that mirror the OpenAI Completions and ChatCompletions APIs.
```bash
lm_eval --model local-completions \
    --tasks gsm8k \
    --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1/completions,num_concurrent=1,max_retries=3,tokenized_requests=False,batch_size=16
```
Note that for externally hosted models, configs such as `--device`, which relate to where to place a local model, should not be used and do not function. Just like you can use `--model_args` to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support.
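As a sketch, evaluating against an OpenAI-compatible chat endpoint might look like the following (the model name, URL, and concurrency value are placeholder assumptions):

```bash
lm_eval --model local-chat-completions \
    --tasks gsm8k \
    --model_args model=meta-llama/Llama-3.1-8B-Instruct,base_url=http://localhost:8000/v1/chat/completions,num_concurrent=4 \
    --apply_chat_template
```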
| API or Inference Server | Implemented? | `--model <xxx>` name | Models supported: | Request Types: |
|---|---|---|---|---|
| OpenAI Completions | ✔️ | `openai-completions`, `local-completions` | All OpenAI Completions API models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| OpenAI ChatCompletions | ✔️ | `openai-chat-completions`, `local-chat-completions` | All ChatCompletions API models | `generate_until` (no logprobs) |
| Anthropic | ✔️ | `anthropic` | Supported Anthropic Engines | `generate_until` (no logprobs) |
| Anthropic Chat | ✔️ | `anthropic-chat`, `anthropic-chat-completions` | Supported Anthropic Engines | `generate_until` (no logprobs) |
| Textsynth | ✔️ | `textsynth` | All supported engines | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Cohere | ⌛ - blocked on Cohere API bug | N/A | All `cohere` engines | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Llama.cpp (via llama-cpp-python) | ✔️ | `gguf`, `ggml` | All models supported by llama.cpp | `generate_until`, `loglikelihood` (perplexity evaluation not yet implemented) |
| vLLM | ✔️ | `vllm` | Most HF Causal Language Models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Mamba | ✔️ | `mamba_ssm` | Mamba architecture Language Models via the `mamba_ssm` package | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Huggingface Optimum (Causal LMs) | ✔️ | `openvino` | Any decoder-only AutoModelForCausalLM converted with Huggingface Optimum into OpenVINO™ Intermediate Representation (IR) format | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Huggingface Optimum-intel IPEX (Causal LMs) | ✔️ | `ipex` | Any decoder-only AutoModelForCausalLM | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Neuron via AWS Inf2 (Causal LMs) | ✔️ | `neuronx` | Any decoder-only AutoModelForCausalLM supported to run on the huggingface-ami image for inferentia2 | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| NVIDIA NeMo | ✔️ | `nemo_lm` | All supported models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Watsonx.ai | ✔️ | `watsonx_llm` | Supported Watsonx.ai Engines | `generate_until` |
| Windows ML | ✔️ | `winml` | ONNX models in GenAI format | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
| Your local inference server! | ✔️ | `local-completions` or `local-chat-completions` | Support for OpenAI API-compatible servers, with easy customization for other APIs. | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
For more information on the different task `output_types` and model request types, see our documentation.
> [!NOTE]
> For best performance with closed chat model APIs such as Anthropic Claude 3 and GPT-4, we recommend carefully looking at a few sample outputs using `--limit 10` first to confirm answer extraction and scoring on generative tasks are performing as expected. Providing `system="<some system prompt here>"` within `--model_args` for anthropic-chat-completions, to instruct the model what format to respond in, may be useful.
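A sketch of such a spot-check (the engine name and system prompt here are illustrative assumptions; quoting the whole `--model_args` string keeps the embedded quotes intact):

```bash
lm_eval --model anthropic-chat-completions \
    --model_args 'model=claude-3-opus-20240229,system="Respond with only the final answer."' \
    --tasks gsm8k \
    --limit 10 \
    --log_samples \
    --output_path results
```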
A number of other libraries contain scripts for calling the eval harness through their library. These include GPT-NeoX, Megatron-DeepSpeed, and mesh-transformer-jax.
To create your own custom integration you can follow instructions from this tutorial.
> [!NOTE]
> For tasks unsuitable for direct evaluation (either due to risks associated with executing untrusted code or complexities in the evaluation process), the `--predict_only` flag is available to obtain decoded generations for post-hoc evaluation.
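For example, a generation-only run might look like this (the model and task choices are illustrative; `--log_samples` saves the generations for later scoring):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6b \
    --tasks hellaswag \
    --predict_only \
    --log_samples \
    --output_path output/predictions
```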
If you have a Metal-compatible Mac, you can run the eval harness using the MPS backend by replacing `--device cuda:0` with `--device mps` (requires PyTorch version 2.1 or higher). Note that the PyTorch MPS backend is still in early stages of development, so correctness issues or unsupported operations may exist. If you observe oddities in model performance on the MPS backend, we recommend first checking that a forward pass of your model on `--device cpu` and `--device mps` match.
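A minimal sketch of an MPS run (the model and task here are illustrative):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks lambada_openai \
    --device mps \
    --batch_size 8
```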
> [!NOTE]
> You can inspect what the LM inputs look like by running the following command:
>
> ```bash
> python write_out.py \
>     --tasks <task1,task2,...> \
>     --num_fewshot 5 \
>     --num_examples 10 \
>     --output_base_path /path/to/output/folder
> ```
>
> This will write out one text file for each task.
To verify the data integrity of the tasks you're performing, in addition to running the tasks themselves, you can use the `--check_integrity` flag:

```bash
lm_eval --model openai \
    --model_args engine=davinci-002 \
    --tasks lambada_openai,hellaswag \
    --check_integrity
```
For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=`, or use models finetuned with PEFT by taking the call you would run to evaluate the base model and adding `,peft=PATH` to the `model_args` argument:

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0
```
Models provided as delta weights can be easily loaded using the Hugging Face transformers library. Within `--model_args`, set the `delta` argument to specify the delta weights, and use the `pretrained` argument to designate the relative base model to which they will be applied:

```bash
lm_eval --model hf \
    --model_args pretrained=Ejafa/llama_7B,delta=lmsys/vicuna-7b-delta-v1.1 \
    --tasks hellaswag
```
GPTQ quantized models can be loaded using GPTQModel (faster) or AutoGPTQ.

GPTQModel: add `,gptqmodel=True` to `model_args`:

```bash
lm_eval --model hf \
    --model_args pretrained=model-name-or-path,gptqmodel=True \
    --tasks hellaswag
```

AutoGPTQ: add `,autogptq=True` to `model_args`:

```bash
lm_eval --model hf \
    --model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
    --tasks hellaswag
```
We support wildcards in task names; for example, you can run all of the machine-translated lambada tasks via `--tasks lambada_openai_mt_*`.
To save evaluation results, provide an `--output_path`. We also support logging model responses with the `--log_samples` flag for post-hoc analysis.
> [!TIP]
> Use `--use_cache <DIR>` to cache evaluation results and skip previously evaluated samples when resuming runs of the same (model, task) pairs. Note that caching is rank-dependent, so restart with the same GPU count if interrupted. You can also use `--cache_requests` to save dataset preprocessing steps for faster evaluation resumption.
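For instance, a resumable run might look like this (the cache directory is an arbitrary choice):

```bash
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6b \
    --tasks hellaswag \
    --use_cache ./eval_cache \
    --cache_requests true
```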
To push results and samples to the Hugging Face Hub, first ensure an access token with write access is set in the `HF_TOKEN` environment variable. Then, use the `--hf_hub_log_args` flag to specify the organization, repository name, repository visibility, and whether to push results and samples to the Hub (see the example dataset on the HF Hub). For instance:

```bash
lm_eval --model hf \
    --model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
    --tasks hellaswag \
    --log_samples \
    --output_path results \
    --hf_hub_log_args hub_results_org=EleutherAI,hub_repo_name=lm-eval-results,push_results_to_hub=True,push_samples_to_hub=True,public_repo=False
```
This allows you to easily download the results and samples from the Hub, using:
```python
from datasets import load_dataset

load_dataset("EleutherAI/lm-eval-results-private", "hellaswag", "latest")
```
For a full list of supported arguments, check out the interface guide in our documentation!
You can seamlessly visualize and analyze the results of your evaluation harness runs using both Weights & Biases (W&B) and Zeno.
You can use Zeno to visualize the results of your eval harness runs.
First, head to hub.zenoml.com to create an account and get an API key on your account page. Add this key as an environment variable:
```bash
export ZENO_API_KEY=[your api key]
```
You'll also need to install the `lm_eval[zeno]` package extra.
To visualize the results, run the eval harness with the `log_samples` and `output_path` flags. We expect `output_path` to contain multiple folders that represent individual model names. You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.
```bash
lm_eval \
    --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8 \
    --log_samples \
    --output_path output/gpt-j-6B
```
Then, you can upload the resulting data using the `zeno_visualize` script:

```bash
python scripts/zeno_visualize.py \
    --data_path output \
    --project_name "Eleuther Project"
```
This will use all subfolders in `data_path` as different models, and upload all tasks within these model folders to Zeno. If you run the eval harness on multiple tasks, the `project_name` will be used as a prefix, and one project will be created per task.
You can find an example of this workflow in examples/visualize-zeno.ipynb.
With the Weights and Biases integration, you can now spend more time extracting deeper insights into your evaluation results. The integration is designed to streamline the process of logging and visualizing experiment results using the Weights & Biases (W&B) platform.
The integration provides functionality to log the `results.json` file as an artifact for version control, as well as the `<task_name>_eval_samples.json` file if the samples are logged.

First, you'll need to install the `lm_eval[wandb]` package extra: `pip install lm_eval[wandb]`.
Authenticate your machine with your unique W&B token. Visit https://wandb.ai/authorize to get one. Then run `wandb login` in your command line terminal.
Run the eval harness as usual, with a `wandb_args` flag. Use this flag to provide arguments for initializing a wandb run (`wandb.init`) as comma-separated string arguments.

```bash
lm_eval \
    --model hf \
    --model_args pretrained=microsoft/phi-2,trust_remote_code=True \
    --tasks hellaswag,mmlu_abstract_algebra \
    --device cuda:0 \
    --batch_size 8 \
    --output_path output/phi-2 \
    --limit 10 \
    --wandb_args project=lm-eval-harness-integration \
    --log_samples
```
In the stdout, you will find the link to the W&B run page, as well as a link to the generated report. You can find an example of this workflow in `examples/visualize-wandb.ipynb`, and an example of how to integrate it beyond the CLI.
Check out our open issues and feel free to submit pull requests!
For more information on the library and how everything fits together, see our documentation pages.
To get started with development, first clone the repository and install the dev dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e ".[dev,hf]"
```
To implement a new task in the eval harness, see this guide.
In general, we follow this priority list for addressing concerns about prompting and other eval details:

1. If there is widespread agreement among people who train LLMs, use the agreed-upon procedure.
2. If there is a clear and unambiguous official implementation, use that procedure.
3. If there is widespread agreement among people who evaluate LLMs, use the agreed-upon procedure.
4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations, again prioritizing those found in LLM training papers.

These are guidelines and not rules, and can be overruled in special circumstances.
We try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from Language Models are Few Shot Learners as our original goal was specifically to compare results with that paper.
The best way to get support is to open an issue on this repo or join the EleutherAI Discord server. The `#lm-thunderdome` channel is dedicated to developing this project and the `#release-discussion` channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you!
Extras dependencies can be installed via `pip install -e ".[NAME]"`.
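Multiple extras can be combined in a single install; for example (an illustrative combination of a backend extra and a task extra):

```bash
pip install -e ".[hf,math]"
```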
These extras install dependencies required to run specific model backends:
| NAME | Description |
|---|---|
| hf | HuggingFace Transformers (torch, transformers, accelerate, peft) |
| vllm | vLLM fast inference |
| api | API models (OpenAI, Anthropic, local servers) |
| gptq | AutoGPTQ quantized models |
| gptqmodel | GPTQModel quantized models |
| ibm_watsonx_ai | IBM watsonx.ai models |
| ipex | Intel IPEX backend |
| optimum | Intel OpenVINO models |
| neuronx | AWS Inferentia2 instances |
| winml | Windows ML (ONNX Runtime GenAI) - CPU/GPU/NPU |
| sparsify | Sparsify model steering |
| sae_lens | SAELens model steering |
These extras install dependencies required for specific evaluation tasks:
| NAME | Description |
|---|---|
| tasks | All task-specific dependencies |
| acpbench | ACP Bench tasks |
| audiolm_qwen | Qwen2 audio models |
| ifeval | IFEval task |
| japanese_leaderboard | Japanese LLM tasks |
| longbench | LongBench tasks |
| math | Math answer checking |
| multilingual | Multilingual tokenizers |
| ruler | RULER tasks |
These extras install miscellaneous development and logging dependencies:

| NAME | Description |
|---|---|
| dev | Linting & contributions |
| hf_transfer | Speed up HF downloads |
| sentencepiece | Sentencepiece tokenizer |
| unitxt | Unitxt tasks |
| wandb | Weights & Biases logging |
| zeno | Zeno result visualization |
## Cite as

```
@misc{eval-harness,
  author       = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title        = {The Language Model Evaluation Harness},
  month        = 07,
  year         = 2024,
  publisher    = {Zenodo},
  version      = {v0.4.3},
  doi          = {10.5281/zenodo.12608602},
  url          = {https://zenodo.org/records/12608602}
}
```