# Contributing

This Project welcomes contributions, suggestions, and feedback. All contributions, suggestions, and feedback you submitted are accepted under the [Project's license](./LICENSE.md). You represent that if you do not own copyright in the code that you have the authority to submit it under the [Project's license](./LICENSE.md). All feedback, suggestions, or contributions are not confidential.

The Project abides by the Organization's [code of conduct](https://github.com/guidance-ai/governance/blob/main/CODE-OF-CONDUCT.md) and [trademark policy](https://github.com/guidance-ai/governance/blob/main/TRADEMARKS.md).

# Development Notes

We welcome contributions to `guidance`, and this document exists to provide useful information to contributors.

## Developer Setup

Start by creating a fresh environment with something similar to:
```bash
conda create --name guidancedev python=3.12
conda activate guidancedev
```

Install guidance (without CUDA):
```bash
python -m pip install -e .[all,test,llamacpp,transformers]
```

Alternatively, install guidance with CUDA support. There are various ways to do this. We recommend:
```bash
conda install pytorch pytorch-cuda=12.1 -c pytorch -c nvidia
CMAKE_ARGS="-DGGML_CUDA=on" python -m pip install -e .[all,test,llamacpp,transformers]
```
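
To sanity-check that the CUDA-enabled environment actually sees a GPU, a quick (optional) check is:
```bash
# Should print True if PyTorch was installed with working CUDA support
python -c "import torch; print(torch.cuda.is_available())"
```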

## Running Tests

To run a basic test suite locally:
```bash
python -m pytest ./tests/
```
Where an LLM is required, this will default to using GPT2 on the CPU.

To change that default, run
```bash
python -m pytest --selected_model <MODELNAME> ./tests/
```
where `<MODELNAME>` is taken from one of the `selected_model_name` options defined in `./tests/conftest.py`.

Alternatively, the default value for `--selected_model` can be set via the `GUIDANCE_SELECTED_MODEL` environment variable.
This may be useful when running `pytest` under a debugger, where setting the extra command line argument in the debugger configuration can be tricky.
Just remember that the environment variable needs to be set _before_ starting PyCharm/VSCode etc.
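
For example, a minimal sketch (the placeholder below should be replaced with one of the `selected_model_name` options from `./tests/conftest.py`):
```bash
# Export the default model, then launch the editor from this same shell
# so the IDE process inherits the variable.
export GUIDANCE_SELECTED_MODEL=<MODELNAME>
code .  # or start PyCharm from this shell
```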

## Adding LLMs to the test matrix

Our tests run on a variety of LLMs.
These fall into three categories: CPU-based, GPU-based and endpoint-based (which need credentials).

### New CPU or GPU-based models

Due to the limited resources of the regular GitHub runner machines, the LLM under test is a dimension of our test matrix (otherwise the GitHub runners will tend to run out of RAM and/or hard drive space).
New models should be configured in `conftest.py`.
The model will then be available via the `selected_model` fixture for all tests.
If you have a test which should only run for particular models, you can use the `selected_model_name` fixture to check, and call `pytest.skip()` if necessary.
An example of this is given in `test_llama_cpp.py`.
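A minimal sketch of that pattern (the model name checked here is hypothetical; `test_llama_cpp.py` has the real example):
```python
import pytest

def test_llamacpp_specific_behaviour(selected_model, selected_model_name):
    # Hypothetical sketch: skip unless the selected model is the one this test targets.
    # Real model names are defined in ./tests/conftest.py.
    if selected_model_name != "llamacpp_gpt2_cpu":
        pytest.skip(f"Test not applicable to {selected_model_name}")
    # ... exercise selected_model here ...
```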

### New endpoint-based models

If your model requires credentials, then those will need to be added to our GitHub repository as secrets.
The endpoint itself (and any other required information) should be configured as environment variables too.
When the test runs, the environment variables will be set, and can then be used to configure the model as required.
See `test_azureai_openai.py` for examples of this being done.
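A hedged sketch of how such a test might pick up its configuration (the environment variable names here are illustrative, not the ones used in CI; `test_azureai_openai.py` shows the real ones):
```python
import os

import pytest

def test_my_endpoint_model():
    # Hypothetical sketch: the endpoint and key are read from environment
    # variables, which CI populates from GitHub repository secrets.
    endpoint = os.getenv("MY_MODEL_ENDPOINT")
    api_key = os.getenv("MY_MODEL_API_KEY")
    if not endpoint or not api_key:
        pytest.skip("Endpoint credentials not configured in this environment")
    # ... construct the model from endpoint/api_key and run assertions ...
```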

## Formatting & Linting

We use `ruff` to format our codebase.
To install the correct version, run `pip install -e .[dev]`.
You can then run `ruff format /path/to/modified/file.py` to format the code.
The path can also be an entire directory, or omitted entirely to format all files beneath the current directory.
There are (rare) cases where manual formatting is preferable; for these [`ruff` provides pragmas for suppression](https://docs.astral.sh/ruff/formatter/#format-suppression).
To sort imports, use `ruff check --select I /path/to/modified/file.py --fix`.
These commands are run (but not enforced *yet*) in the build.
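
Putting these together, a typical local pass over a changed file might look like this (the path is a placeholder):
```bash
# Install the pinned tooling, then format and sort imports for the file you touched
pip install -e .[dev]
ruff format /path/to/modified/file.py
ruff check --select I /path/to/modified/file.py --fix
```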



---
Part of MVG-0.1-beta.
Made with love by GitHub. Licensed under the [CC-BY 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).