# Open WebUI API Integration

> Complete Open WebUI API integration for managing LLM models, chat completions, Ollama proxy operations, file uploads, knowledge bases (RAG), image generation, audio processing, and pipelines. Use this skill when interacting with Open WebUI instances via REST API - listing models, chatting with LLMs, uploading files for RAG, managing knowledge collections, or executing Ollama commands through the Open WebUI proxy. Requires OPENWEBUI_URL and OPENWEBUI_TOKEN environment variables or explicit parameters.
Complete API integration for Open WebUI - a unified interface for LLMs including Ollama, OpenAI, and other providers.
Activate this skill when the user wants to:
- List or manage models on an Open WebUI instance
- Chat with LLMs through Open WebUI
- Upload files for RAG or manage knowledge collections
- Execute Ollama commands through the Open WebUI proxy
- Generate images or process audio via Open WebUI

Do NOT activate for requests that do not involve an Open WebUI instance.
```shell
export OPENWEBUI_URL="http://localhost:3000"    # Your Open WebUI instance URL
export OPENWEBUI_TOKEN="your-api-key-here"      # From Settings > Account in Open WebUI
```
Example requests that SHOULD activate this skill:
- "List the models available on my Open WebUI server"
- "Chat with llama3.2 through Open WebUI"
- "Upload this PDF and add it to my knowledge base"

Example requests that should NOT activate this skill:
- Requests that target a provider's API directly rather than an Open WebUI instance
Once OPENWEBUI_URL and OPENWEBUI_TOKEN are set, use the CLI tool or direct API calls:
```shell
# Using the CLI tool (recommended)
python3 scripts/openwebui-cli.py --help
python3 scripts/openwebui-cli.py models list
python3 scripts/openwebui-cli.py chat --model llama3.2 --message "Hello"

# Using curl (alternative)
curl -H "Authorization: Bearer $OPENWEBUI_TOKEN" \
  "$OPENWEBUI_URL/api/models"
```
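The same call can be made from Python with only the standard library. A minimal sketch, using the `/api/models` endpoint from the curl example above:

```python
import json
import os
import urllib.request


def build_request(base_url: str, token: str, path: str) -> urllib.request.Request:
    """Build an authenticated request against an Open WebUI instance."""
    return urllib.request.Request(
        f"{base_url.rstrip('/')}{path}",
        headers={"Authorization": f"Bearer {token}"},
    )


def list_models(base_url: str, token: str) -> dict:
    """GET /api/models - the same call as the curl example above."""
    with urllib.request.urlopen(build_request(base_url, token, "/api/models")) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(list_models(os.environ["OPENWEBUI_URL"], os.environ["OPENWEBUI_TOKEN"]))
```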
| Endpoint | Method | Description |
|---|---|---|
| `/api/chat/completions` | POST | OpenAI-compatible chat completions |
| `/api/models` | GET | List all available models |
| `/ollama/api/chat` | POST | Native Ollama chat completion |
| `/ollama/api/generate` | POST | Ollama text generation |
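The chat-completions body follows the OpenAI format. A minimal request-building sketch; the `/api/chat/completions` path is assumed from the OpenAI-compatible convention:

```python
import json
import urllib.request


def chat_payload(model: str, message: str, stream: bool = False) -> dict:
    """OpenAI-compatible body for a chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": message}],
        "stream": stream,
    }


def chat_request(base_url: str, token: str, model: str, message: str) -> urllib.request.Request:
    """Build (but do not send) the POST request for a chat completion."""
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/chat/completions",
        data=json.dumps(chat_payload(model, message)).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```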
| Endpoint | Method | Description |
|---|---|---|
| `/ollama/api/tags` | GET | List Ollama models |
| `/ollama/api/pull` | POST | Pull/download a model |
| `/ollama/api/delete` | DELETE | Delete a model |
| `/ollama/api/embed` | POST | Generate embeddings |
| `/ollama/api/ps` | GET | List loaded models |
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/files/` | POST | Upload file for RAG |
| `/api/v1/files/{id}/process/status` | GET | Check file processing status |
| `/api/v1/knowledge/` | GET/POST | List/create knowledge collections |
| `/api/v1/knowledge/{id}/file/add` | POST | Add file to knowledge base |
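Attaching an uploaded file to a knowledge collection is a small JSON POST. A sketch; the `/api/v1/knowledge/{id}/file/add` route and `file_id` field name are assumptions - check your instance's API docs for the exact shape:

```python
import json
import urllib.request


def add_file_request(base_url: str, token: str, collection_id: str, file_id: str) -> urllib.request.Request:
    """Build the POST that adds an uploaded file to a knowledge collection.

    The route and body field below are assumptions based on the /api/v1 prefix.
    """
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/v1/knowledge/{collection_id}/file/add",
        data=json.dumps({"file_id": file_id}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```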
| Endpoint | Method | Description |
|---|---|---|
| `/api/v1/images/generations` | POST | Generate images |
| `/api/v1/audio/speech` | POST | Text-to-speech |
| `/api/v1/audio/transcriptions` | POST | Speech-to-text |
Always confirm before:
- Deleting models (`DELETE /ollama/api/delete`) - irreversible
- Pulling large models - downloads can be tens of gigabytes

When echoing API tokens, show them only in masked `sk-...XXXX` format.

```shell
python3 scripts/openwebui-cli.py models list
```
```shell
python3 scripts/openwebui-cli.py chat \
  --model llama3.2 \
  --message "Explain the benefits of RAG" \
  --stream
```
```shell
python3 scripts/openwebui-cli.py files upload \
  --file /path/to/document.pdf \
  --process
```
```shell
python3 scripts/openwebui-cli.py knowledge add-file \
  --collection-id "research-papers" \
  --file-id "doc-123-uuid"
```
```shell
python3 scripts/openwebui-cli.py ollama embed \
  --model nomic-embed-text \
  --input "Open WebUI is great for LLM management"
```
```shell
python3 scripts/openwebui-cli.py ollama pull \
  --model llama3.2:70b
# Agent must confirm: "This will download ~40GB. Proceed? [y/N]"
```
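The confirmation step in the comment above can be sketched as follows; `confirm_pull` is a hypothetical helper, not part of the CLI:

```python
def confirm_pull(model: str, size_gb: float, ask=input) -> bool:
    """Require explicit user confirmation before a large model pull.

    `ask` defaults to input() but can be swapped out (e.g. for testing).
    """
    answer = ask(f"This will download ~{size_gb:.0f}GB for {model}. Proceed? [y/N] ")
    return answer.strip().lower() == "y"
```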
```shell
python3 scripts/openwebui-cli.py ollama status
```
| Error | Cause | Solution |
|---|---|---|
| 401 Unauthorized | Invalid or missing token | Verify OPENWEBUI_TOKEN |
| 404 Not Found | Model/endpoint doesn't exist | Check model name spelling |
| 422 Validation Error | Invalid parameters | Check request body format |
| 400 Bad Request | File still processing | Wait for processing completion |
| Connection refused | Wrong URL | Verify OPENWEBUI_URL |
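The table above maps directly onto a small lookup an agent can use when surfacing errors; a sketch built only from the rows in the table:

```python
# Hints taken verbatim from the error table above.
ERROR_HINTS = {
    400: "File still processing - wait for processing completion",
    401: "Invalid or missing token - verify OPENWEBUI_TOKEN",
    404: "Model/endpoint doesn't exist - check model name spelling",
    422: "Invalid parameters - check request body format",
}


def explain(status_code: int) -> str:
    """Return a human-readable hint for a known HTTP error code."""
    return ERROR_HINTS.get(status_code, f"Unexpected HTTP {status_code}")
```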
Files uploaded for RAG are processed asynchronously. Before adding a file to a knowledge base, poll `/api/v1/files/{id}/process/status` until it reports `status: "completed"`.

Pulling large models (e.g., 70B-parameter variants) can take hours. Always confirm with the user before starting the download.
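The polling step can be sketched as below. The `get_status` callable is a hypothetical stand-in for a GET against the status endpoint; only the `"completed"` status value is documented here:

```python
import time


def wait_until_processed(get_status, file_id, timeout_s=300.0, interval_s=2.0, sleep=time.sleep):
    """Poll until the file's processing status reports "completed".

    get_status(file_id) is assumed to return the status string from
    GET /api/v1/files/{id}/process/status.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_status(file_id) == "completed":
            return True
        sleep(interval_s)  # back off between polls
    raise TimeoutError(f"file {file_id} not processed within {timeout_s}s")
```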
Chat completions support streaming. Use the `--stream` flag for real-time output, or omit it to collect the full response in one piece.
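Streaming responses in the OpenAI-compatible format arrive as server-sent-event `data:` lines. A parsing sketch, assuming the standard `data: [DONE]` terminator:

```python
import json


def iter_stream_chunks(lines):
    """Yield parsed JSON chunks from OpenAI-style SSE lines.

    Assumes each chunk arrives as `data: {...}` and the stream ends
    with a `data: [DONE]` sentinel.
    """
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separators
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```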
The included CLI tool (`scripts/openwebui-cli.py`) provides subcommands for models, chat, file upload, knowledge management, and Ollama proxy operations. Run `python3 scripts/openwebui-cli.py --help` for full usage.