# vibe-tools Configuration
This document provides detailed configuration information for vibe-tools.
vibe-tools can be configured through two main mechanisms: an environment file for API keys and a JSON configuration file (`vibe-tools.config.json`) for behavior.

## Environment Variables (`.vibe-tools.env`)

Create `.vibe-tools.env` in your project root or `~/.vibe-tools/.env` in your home directory:
```
# Required API Keys
PERPLEXITY_API_KEY="your-perplexity-api-key"   # Required for web search
GEMINI_API_KEY="your-gemini-api-key"           # Required for repository analysis

# Optional API Keys
OPENAI_API_KEY="your-openai-api-key"           # For browser commands with OpenAI
ANTHROPIC_API_KEY="your-anthropic-api-key"     # For browser commands with Anthropic, and MCP commands
OPENROUTER_API_KEY="your-openrouter-api-key"   # For MCP commands with OpenRouter and web search
GITHUB_TOKEN="your-github-token"               # For enhanced GitHub access
GROQ_API_KEY="your-groq-api-key"               # For Groq LLM access

# Configuration Options
USE_LEGACY_CURSORRULES="true"                  # Use legacy .cursorrules file (default: false)
```
## Configuration File (`vibe-tools.config.json`)

Create this file in your project root to customize behavior. Here's a comprehensive example with all available options:
{ "perplexity": { "model": "sonar-pro", // Default model for web search "maxTokens": 32000 // Maximum tokens for responses }, "gemini": { "model": "gemini-2.5-pro-preview", // Default model for repository analysis "maxTokens": 32000 // Maximum tokens for responses }, "plan": { "fileProvider": "gemini", // Provider for file identification "thinkingProvider": "openai", // Provider for plan generation "fileMaxTokens": 32000, // Tokens for file identification "thinkingMaxTokens": 32000 // Tokens for plan generation }, "repo": { "provider": "gemini", // Default provider for repo command "maxTokens": 32000 // Maximum tokens for responses }, "doc": { "maxRepoSizeMB": 100, // Maximum repository size for remote docs "provider": "gemini", // Default provider for doc generation "maxTokens": 32000 // Maximum tokens for responses }, "browser": { "defaultViewport": "1280x720", // Default browser window size "timeout": 30000, // Default timeout in milliseconds "stagehand": { "env": "LOCAL", // Stagehand environment "headless": true, // Run browser in headless mode "verbose": 1, // Logging verbosity (0-2) "debugDom": false, // Enable DOM debugging "enableCaching": false, // Enable response caching "model": "claude-sonnet-4-20250514", // Default Stagehand model "provider": "anthropic", // AI provider (anthropic or openai) "timeout": 30000 // Operation timeout } }, "tokenCount": { "encoding": "o200k_base" // Token counting method }, "openai": { "maxTokens": 32000 // Will be used when provider is "openai" }, "anthropic": { "maxTokens": 21000 // Will be used when provider is "anthropic" }, "groq": { "model": "llama-3.3-70b-versatile", "maxTokens": 16384 } }
### Configuration Options Explained

**perplexity**
- `model`: The AI model to use for web searches
- `maxTokens`: Maximum tokens in responses

**gemini**
- `model`: The AI model for repository analysis
- `maxTokens`: Maximum tokens in responses

**plan**
- `fileProvider`: AI provider for identifying relevant files
- `thinkingProvider`: AI provider for generating implementation plans
- `fileMaxTokens`: Token limit for file identification
- `thinkingMaxTokens`: Token limit for plan generation

**repo**
- `provider`: Default AI provider for repository analysis
- `maxTokens`: Maximum tokens in responses

**doc**
- `maxRepoSizeMB`: Size limit for remote repositories
- `provider`: Default AI provider for documentation
- `maxTokens`: Maximum tokens in responses

**browser**
- `defaultViewport`: Browser window size
- `timeout`: Navigation timeout
- `stagehand`: Stagehand-specific settings, including:
  - `env`: Environment configuration
  - `headless`: Browser visibility
  - `verbose`: Logging detail level
  - `debugDom`: DOM debugging
  - `enableCaching`: Response caching
  - `model`: Default AI model
  - `provider`: AI provider selection
  - `timeout`: Operation timeout

**tokenCount**
- `encoding`: Method used for counting tokens
  - `o200k_base`: Optimized for Gemini (default)
  - `gpt2`: Traditional GPT-2 encoding

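For example, to switch token counting to the GPT-2 encoding, a minimal sketch of the relevant config section (only `tokenCount` is shown; the rest of your configuration is unchanged):

```json
{
  "tokenCount": {
    "encoding": "gpt2"
  }
}
```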
## GitHub Authentication

The GitHub commands support several authentication methods:

1. **Environment Variable**: Set `GITHUB_TOKEN` in your environment:

   ```bash
   GITHUB_TOKEN=your_token_here
   ```
2. **GitHub CLI**: If you have the GitHub CLI (`gh`) installed and logged in, vibe-tools will automatically use it to generate tokens with the necessary scopes.
3. **Git Credentials**: If you have authenticated git with GitHub (via HTTPS), vibe-tools will automatically detect and use stored GitHub tokens (values starting with `ghp_` or `gho_`).

   To set up git credentials:

   ```bash
   # Configure git to use HTTPS instead of SSH for GitHub
   git config --global url."https://github.com/".insteadOf git@github.com:

   # Store credentials permanently
   git config --global credential.helper store
   # Or for macOS keychain:
   git config --global credential.helper osxkeychain
   ```
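To see which of these credentials are available on your machine, you can run the standard git and GitHub CLI commands below (these are ordinary `git`/`gh` commands, not part of vibe-tools):

```bash
# Is GITHUB_TOKEN set in the current shell?
echo "${GITHUB_TOKEN:+GITHUB_TOKEN is set}"

# Is the GitHub CLI installed and logged in?
gh auth status

# Which credential helper is git configured to use?
git config --global credential.helper
```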
**Authentication Status**:

- Without authentication:
  - Only public repositories are accessible
  - Requests are subject to GitHub's unauthenticated rate limit (60 requests per hour)
- With authentication (any method):
  - Private repositories are accessible (if the token has the required scopes)
  - Requests use GitHub's higher authenticated rate limit (5,000 requests per hour)

vibe-tools will automatically try these authentication methods in order:

1. `GITHUB_TOKEN` environment variable
2. GitHub CLI token (if `gh` is installed and logged in)
3. Stored git credentials

If no authentication is available, it will fall back to unauthenticated access with rate limits.
## Repomix Configuration

When generating documentation, vibe-tools uses Repomix to analyze your repository. By default, it excludes certain files and directories that are typically not relevant for documentation:

- Package and dependency directories (`node_modules/`, `packages/`, etc.)
- Build output directories (`dist/`, `build/`, etc.)
- Version control directories (`.git/`)
- Test files and directories (`test/`, `tests/`, `__tests__/`, etc.)
- Configuration and environment files (`.env`, `.config`, etc.)

You can customize the files and folders to exclude by adding a `.repomixignore` file to your project root.
Example `.repomixignore` file for a Laravel project:

```
vendor/
public/
database/
storage/
.idea
.env
```
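For comparison, a hypothetical `.repomixignore` for a Node/TypeScript project might look like the sketch below (the paths are illustrative, not defaults; the file uses the same ignore-pattern syntax as the example above):

```
coverage/
fixtures/
docs/generated/
*.snap
.env.local
```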
This ensures that the documentation focuses on your actual source code and documentation files. Support for customizing which input files to include is coming soon; open an issue if you run into problems here.
## Browser Command Models

The `browser` commands support different AI models for processing. You can select the model using the `--model` option:
```bash
# Use gpt-4o
vibe-tools browser act "Click Login" --url "https://example.com" --model=gpt-4o

# Use Claude 4 Sonnet
vibe-tools browser act "Click Login" --url "https://example.com" --model=claude-sonnet-4-20250514
```
You can set a default provider in your `vibe-tools.config.json` file under the `stagehand` section:

```json
{
  "stagehand": {
    "provider": "openai" // or "anthropic"
  }
}
```
You can also set a default model in your `vibe-tools.config.json` file under the `stagehand` section:

```json
{
  "stagehand": {
    "provider": "openai", // or "anthropic"
    "model": "gpt-4o"
  }
}
```
If no model is specified (either on the command line or in the config), a default model will be used based on your configured provider:

- **OpenAI**: `o3-mini`
- **Anthropic**: `claude-sonnet-4-20250514`

Available models depend on your configured provider (OpenAI or Anthropic) in `vibe-tools.config.json` and your API key.
## Cursor Configuration

vibe-tools automatically configures Cursor by updating your project rules during installation.

For new installations, we use the recommended `.cursor/rules/vibe-tools.mdc` path. For existing installations, we maintain compatibility with the legacy `.cursorrules` file. If both files exist, we prefer the new path and show a warning.

To get the benefits of vibe-tools you should use the Cursor agent in "yolo mode".
## `ask` Command Settings

The `ask` command requires both a provider and a model to be specified. While these must be provided via command-line arguments, the `maxTokens` limit can be configured through the provider-specific settings:

```json
{
  "openai": {
    "maxTokens": 8000    // Will be used when provider is "openai"
  },
  "anthropic": {
    "maxTokens": 8000    // Will be used when provider is "anthropic"
  }
}
```
## `plan` Command Settings

The `plan` command uses two different models:

1. A **file identification** model, which selects the files relevant to your request
2. A **thinking** model, which generates the implementation plan

You can configure both models and their providers:
{ "plan": { "fileProvider": "gemini", "thinkingProvider": "openai", "fileModel": "gemini-2.5-pro-preview", "thinkingModel": "o3", "fileMaxTokens": 8192, "thinkingMaxTokens": 8192 } }
The OpenAI o3-mini model is chosen as the default thinking model for its speed and efficiency in generating implementation plans.
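A sketch of a typical invocation (assuming the `plan` command follows the same `vibe-tools <command> "<request>"` pattern as the other commands shown in this document; the request text is a placeholder):

```bash
# Generate an implementation plan using the configured file and thinking providers
vibe-tools plan "Add input validation to the signup form"
```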
## MCP Commands with OpenRouter

The `vibe-tools mcp run` command supports using OpenRouter as a provider. You can configure this using the following:

- `--provider` (command-line option): Specify the provider to use. Valid values are `anthropic` (default) and `openrouter`.
- `--model` (command-line option): Specify the OpenRouter model to use (e.g., `openai/o3-mini`). This option is ignored if the provider is Anthropic.
- API keys: Set `ANTHROPIC_API_KEY` or `OPENROUTER_API_KEY` in your environment.

Default behavior:

- If `--provider` is not specified, `anthropic` is used by default.
- If `--model` is not specified and the provider is `openrouter`, a provider default model is used.

Example:
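The command below is an illustrative sketch (the query text is a placeholder; the flags are the `--provider` and `--model` options described above):

```bash
# Run an MCP query via OpenRouter with an explicit model
vibe-tools mcp run "list the open issues in this repository" --provider=openrouter --model=openai/o3-mini
```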
Note that `vibe-tools.config.json` is not currently used to configure MCP.