OpenClaw builds a custom system prompt for every agent run. The prompt is OpenClaw-owned, does not use the p-coding-agent default prompt, and is assembled and injected by OpenClaw on each run.
The prompt is intentionally compact and uses fixed sections, including OpenClaw Self-Update (config.apply and update.run) and Workspace (agents.defaults.workspace).

OpenClaw can render smaller system prompts for sub-agents. The runtime sets a promptMode for each run (not a user-facing config):
- full (default): includes all sections above.
- minimal: used for sub-agents; omits Skills, Memory Recall, OpenClaw Self-Update, Model Aliases, User Identity, Reply Tags, Messaging, Silent Replies, and Heartbeats. Tooling, Workspace, Sandbox, Current Date & Time (when known), Runtime, and injected context stay available.
- none: returns only the base identity line.

When promptMode=minimal, extra injected prompts are labeled Subagent Context instead of Group Chat Context.
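This page does not show the assembly code itself, so the following is only a sketch of how promptMode could gate sections; the helper name and data layout are assumptions, while the section names and the full/minimal/none behavior come from the list above.

```ts
// Illustrative sketch only; not OpenClaw's actual assembly code.
type PromptMode = "full" | "minimal" | "none";

// Sections named on this page; "minimal" keeps only the operational ones.
const ALL_SECTIONS = [
  "Tooling", "Workspace", "Sandbox", "Current Date & Time", "Runtime",
  "Skills", "Memory Recall", "OpenClaw Self-Update", "Model Aliases",
  "User Identity", "Reply Tags", "Messaging", "Silent Replies", "Heartbeats",
] as const;

const MINIMAL_SECTIONS = new Set<string>([
  "Tooling", "Workspace", "Sandbox", "Current Date & Time", "Runtime",
]);

function sectionsFor(mode: PromptMode): string[] {
  if (mode === "none") return []; // base identity line only
  if (mode === "minimal") return ALL_SECTIONS.filter((s) => MINIMAL_SECTIONS.has(s));
  return [...ALL_SECTIONS]; // full: everything
}
```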
Bootstrap files are trimmed and appended under Project Context so the model sees identity and profile context without needing explicit reads:
- AGENTS.md
- SOUL.md
- TOOLS.md
- IDENTITY.md
- USER.md
- HEARTBEAT.md
- BOOTSTRAP.md (only on brand-new workspaces)

Large files are truncated with a marker. The max per-file size is controlled by agents.defaults.bootstrapMaxChars (default: 20000). Missing files inject a short missing-file marker.

Internal hooks can intercept this step via agent:bootstrap to mutate or replace the injected bootstrap files (for example, swapping SOUL.md for an alternate persona).

To inspect how much each injected file contributes (raw vs. injected, truncation, plus tool schema overhead), use /context list or /context detail. See Context.
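A rough sketch of the trimming behavior described above; the exact marker text and helper name are assumptions, while the 20000-character default and the truncation/missing-file behavior come from this page.

```ts
import { existsSync, readFileSync } from "node:fs";

// Hypothetical helper illustrating the documented behavior; not OpenClaw's code.
function injectBootstrapFile(path: string, maxChars = 20000): string {
  if (!existsSync(path)) {
    return `[missing: ${path}]`; // short missing-file marker
  }
  const text = readFileSync(path, "utf8");
  if (text.length <= maxChars) return text;
  // Large files are cut at agents.defaults.bootstrapMaxChars and marked as truncated.
  return text.slice(0, maxChars) + `\n[truncated at ${maxChars} chars]`;
}
```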
The system prompt includes a dedicated Current Date & Time section when the user timezone is known. To keep the prompt cache-stable, it now only includes the time zone (no dynamic clock or time format). Use session_status when the agent needs the current time; the status card includes a timestamp line.

Configure with:

- agents.defaults.userTimezone
- agents.defaults.timeFormat (auto | 12 | 24)

See Date & Time for full behavior details.
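As an illustration only, the dotted keys above expressed as a nested object; the surrounding config file shape and the example timezone value are assumptions, not specified on this page.

```ts
// Assumed nesting of the documented keys under agents.defaults.
const agentsConfig = {
  agents: {
    defaults: {
      userTimezone: "Europe/Berlin", // example IANA zone; pick your own
      timeFormat: "auto",            // "auto" | "12" | "24"
    },
  },
};
```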
When eligible skills exist, OpenClaw injects a compact available skills list (
formatSkillsForPrompt) that includes the file path for each skill. The
prompt instructs the model to use read to load the SKILL.md at the listed
location (workspace, managed, or bundled). If no skills are eligible, the
Skills section is omitted.
```xml
<available_skills>
  <skill>
    <name>...</name>
    <description>...</description>
    <location>...</location>
  </skill>
</available_skills>
```
This keeps the base prompt small while still enabling targeted skill usage.
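For illustration, a sketch of what a formatter like formatSkillsForPrompt might produce; only the function name and the output shape come from this page, and the signature and field types are assumptions.

```ts
interface SkillEntry {
  name: string;
  description: string;
  location: string; // path to the skill's SKILL.md (workspace, managed, or bundled)
}

// Illustrative only: emits the <available_skills> block shown above.
function formatSkillsForPrompt(skills: SkillEntry[]): string {
  const items = skills
    .map(
      (s) =>
        `  <skill>\n    <name>${s.name}</name>\n` +
        `    <description>${s.description}</description>\n` +
        `    <location>${s.location}</location>\n  </skill>`,
    )
    .join("\n");
  return `<available_skills>\n${items}\n</available_skills>`;
}
```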
When available, the system prompt includes a Documentation section that points to the local OpenClaw docs directory (either docs/ in the repo workspace or the bundled npm package docs) and also notes the public mirror, source repo, community Discord, and ClawHub (https://clawhub.com) for skills discovery. The prompt instructs the model to consult local docs first for OpenClaw behavior, commands, configuration, or architecture, and to run openclaw status itself when possible (asking the user only when it lacks access).