<h1 align="center">openplayground</h1>
An LLM playground you can run on your laptop.
https://user-images.githubusercontent.com/111631/227399583-39b23f48-9823-4571-a906-985dbe282b20.mp4
Try the hosted version: [nat.dev](https://nat.dev).
## How to install and run

```sh
pip install openplayground
openplayground run
```
Alternatively, run it as a Docker container:

```sh
docker run --name openplayground -p 5432:5432 -d --volume openplayground:/web/config natorg/openplayground
```
This runs a Flask process, so you can add the typical flags, such as setting a different port with `openplayground run -p 1235`, and others.
## How to run for development

```sh
git clone https://github.com/nat/openplayground
cd app && npm install && npx parcel watch src/index.html --no-cache
cd server && pip3 install -r requirements.txt && cd .. && python3 -m server.app
```
Or with Docker:

```sh
docker build . --tag "openplayground"
docker run --name openplayground -p 5432:5432 -d --volume openplayground:/web/config openplayground
```
The volume is optional; it's used to store API keys and model settings.
## Adding models to openplayground

Models and providers have three types in openplayground: local inference, API provider inference, and remote inference (searchable). Default parameters for each model live in the `server/models.json` file; if you find better default parameters for a model, please submit a pull request! New models can be installed with `openplayground install <model>` or from the UI.

You can add models in `server/models.json` with the following schema:
#### Local inference

For models running locally on your device, you can add them to openplayground like the following (a minimal example):
"llama": { "api_key" : false, "models" : { "llama-70b": { "parameters": { "temperature": { "value": 0.5, "range": [ 0.1, 1.0 ] }, } } } }
Keep in mind you will need to add a generation method for your model in `server/app.py`. Take a look at `local_text_generation()` as an example.
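For orientation, a local generation method might be shaped roughly like the sketch below. The function name `llama_text_generation`, its signature, and the `fake_model_stream()` stub are illustrative assumptions, not openplayground's actual API; mirror the real `local_text_generation()` in `server/app.py` instead.

```python
# Hypothetical sketch only -- the real pattern lives in
# server/app.py (see local_text_generation()).
from typing import Iterator


def fake_model_stream(prompt: str, temperature: float) -> Iterator[str]:
    """Stand-in for a real local model; streams the prompt back word by word."""
    for word in prompt.split():
        yield word + " "


def llama_text_generation(prompt: str, temperature: float = 0.5) -> Iterator[str]:
    """Yield generated text chunk by chunk, so a UI can stream it."""
    yield from fake_model_stream(prompt, temperature)


if __name__ == "__main__":
    print("".join(llama_text_generation("hello from openplayground")))
```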
#### API provider inference

This is for model providers like OpenAI, Cohere, Forefront, and more. You can connect them to openplayground easily (a minimal example):
"cohere": { "api_key" : true, "models" : { "xlarge": { "parameters": { "temperature": { "value": 0.5, "range": [ 0.1, 1.0 ] }, } } } }
Keep in mind you will need to add a generation method for your model in `server/app.py`. Take a look at `openai_text_generation()` or `cohere_text_generation()` as an example.
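As a rough sketch of what such a method involves, the example below calls a made-up provider over HTTP. The endpoint URL, header, and response shape are placeholders, not any real provider's API; the actual patterns are in `openai_text_generation()` and `cohere_text_generation()`.

```python
# Hypothetical sketch only -- the URL and response shape below are
# invented placeholders; adapt to the provider you are integrating.
import requests


def example_api_text_generation(prompt: str, api_key: str,
                                temperature: float = 0.5) -> str:
    """Call a (made-up) provider HTTP API and return the generated text."""
    response = requests.post(
        "https://api.example-provider.com/v1/generate",  # placeholder URL
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "temperature": temperature},
        timeout=30,
    )
    response.raise_for_status()
    # Response shape is an assumption, not a real provider contract.
    return response.json()["text"]
```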
#### Remote inference

We use this for Hugging Face remote inference models; the search endpoint is useful for scaling to N models on the settings page.
"provider_name": { "api_key": true, "search": { "endpoint": "ENDPOINT_URL" }, "parameters": { "parameter": { "value": 1.0, "range": [ 0.1, 1.0 ] }, } }
## Credits

Instigated by Nat Friedman. Initial implementation by Zain Huda as a repl.it bounty. Many features and extensive refactoring by Alex Lourenco.