Markdown Converter
Agent skill for markdown-converter
Last updated: October 10, 2025
Server version: HOPE Memory v1.2.0 (Titan aliases remain available)
Tool registry: 17 tools (see below)
You are connected to the @henryhawke/mcp-titan MCP server over stdio. Follow the official tool schemas in docs/api/README.md and prefer the latest `help` output at runtime.

- Call `init_model` before other memory operations unless the server confirms an active model.
- Use `bootstrap_memory` to seed context before heavy reads.
- Persist state with `save_checkpoint` / `load_checkpoint`; checkpoints must reside in approved directories.
- Monitor capacity via `get_memory_state` and `get_token_flow_metrics`. Prune with `prune_memory` before capacity exceeds 70%.
- Control the learner loop with `init_learner`, `pause_learner`, `resume_learner`, `get_learner_stats`, and `add_training_sample`.
- Treat textual error responses as guidance and adjust parameters accordingly.
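The first rule above can be enforced with a small client-side guard. This is a sketch only: `callTool` stands in for whatever invocation method your MCP client exposes, and the `initialized` field is an assumed shape for the `get_memory_state` response, not a documented one.

```typescript
// Decide whether init_model must run, given a get_memory_state response.
// `initialized` is an assumed field; adapt to the server's real payload.
function needsInit(state: { initialized?: boolean } | null): boolean {
  return !(state && state.initialized === true);
}

// Usage sketch with a hypothetical MCP client:
// const state = await callTool("get_memory_state").catch(() => null);
// if (needsInit(state)) {
//   await callTool("init_model", { memorySlots: 8000, enableMomentum: true });
// }
```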
| Tool | Purpose | Parameters |
|---|---|---|
| `help` | Lists tools, categories, and optional examples. | |
| `bootstrap_memory` | Fetches documents from a URL or raw corpus, seeds TF-IDF fallbacks, and stores summaries in memory. | |
| `init_model` | Instantiates the model with configurable dimensions and flags; unspecified options follow the documented defaults. | |
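The TF-IDF fallback that `bootstrap_memory` seeds can be pictured with a minimal scorer. This is illustrative only; the server's actual index format and weighting are internal.

```typescript
// Toy TF-IDF: term frequency in one document times smoothed inverse
// document frequency across the corpus. Higher means more distinctive.
function tfidf(term: string, doc: string[], corpus: string[][]): number {
  const tf = doc.filter((w) => w === term).length / doc.length;
  const df = corpus.filter((d) => d.includes(term)).length;
  const idf = Math.log((1 + corpus.length) / (1 + df)) + 1; // smoothed IDF
  return tf * idf;
}
```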
| Tool | Purpose | Parameters |
|---|---|---|
| `forward_pass` | Runs a forward pass with optional existing memory state; updates internal memory automatically. | `{ x: string \| … }` |
| `train_step` | Performs a supervised update between an input and its target. Validates matching dimensions. | `{ x_t: string \| … }` |
| `reset_gradients` | Clears accumulated gradients to recover from divergence. | |
| `add_training_sample` | Pushes samples into the learner replay buffer with optional contrastive pairs. | `{ input: string \| … }` |
| Tool | Purpose | Parameters |
|---|---|---|
| — | Returns raw memory tensors/stats. Intended for debugging. | |
| `get_memory_state` | Summarizes capacity, surprise score, pattern diversity, and a quick health check. | |
| `get_token_flow_metrics` | Reports token flow window size, weight statistics, and variance when token-flow tracking is active. | |
| `prune_memory` | Runs information-gain pruning with optional threshold override. | |
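The "prune before capacity exceeds 70%" rule from the operating notes can be wired to these tools with a tiny guard. Treating `capacityUsed` as a 0..1 fraction is an assumption about the `get_memory_state` payload:

```typescript
// Prune guard matching the "prune before capacity exceeds 70%" rule.
function shouldPrune(capacityUsed: number, limit = 0.7): boolean {
  return capacityUsed >= limit;
}

// Usage sketch (callTool is your MCP client's invocation method):
// const { capacityUsed } = await callTool("get_memory_state");
// if (shouldPrune(capacityUsed)) await callTool("prune_memory", { threshold: 0.75 });
```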
| Tool | Purpose | Parameters |
|---|---|---|
| `save_checkpoint` | Serializes memory tensors, shapes, and config to a file inside approved directories. | |
| `load_checkpoint` | Loads checkpoint data, validating tensor shapes and config. | |
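The checkpoint allowlist can be approximated client-side to fail fast before calling `save_checkpoint`. The exact server-side rules may differ; this sketch only checks that a resolved path sits under `~/.hope_memory` or the working directory.

```typescript
import * as path from "path";

// Client-side pre-check mirroring the documented checkpoint allowlist.
function isAllowedCheckpointPath(p: string, home: string, cwd: string): boolean {
  const resolved = path.resolve(p);
  const roots = [path.join(home, ".hope_memory"), path.resolve(cwd)];
  return roots.some((r) => resolved === r || resolved.startsWith(r + path.sep));
}
```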
| Tool | Purpose | Parameters |
|---|---|---|
| `init_learner` | Starts the learner with configurable buffer/training hyperparameters. Injects a mock tokenizer if none is present. | |
| `pause_learner` | Pauses the learner update interval. | |
| `resume_learner` | Resumes the learner update interval. | |
| `get_learner_stats` | Returns learner buffer size, step counts, and recent loss metrics. | |
Caveats:

- `manifold_step`, `encode_text`, `get_surprise_metrics`, `analyze_memory`, and `predict_next` are roadmap items. Avoid calling them until new handlers ship.
- `bootstrap_memory` may fetch external resources; ensure the environment allows network access if required.
- `save_checkpoint` and `load_checkpoint` enforce a path allowlist; use absolute paths under `~/.hope_memory` or the working directory.
- `init_learner` installs a random-vector tokenizer by default. Replace `server.tokenizer` with `AdvancedTokenizer` if deterministic embeddings are needed before enqueuing samples.
- Pair `get_memory_state` with `prune_memory` to prevent unchecked growth.

Example session:

```ts
await callTool("init_model", { memorySlots: 8000, enableMomentum: true });
await callTool("bootstrap_memory", { source: "https://example.org/notes.txt" });
await callTool("forward_pass", { x: "Summarize the previous meeting notes." });
await callTool("prune_memory", { threshold: 0.75 });
await callTool("save_checkpoint", { path: "~/.hope_memory/checkpoints/session-001.json" });
```
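Because the default learner tokenizer produces random vectors, the same text can embed differently across runs. A deterministic stand-in can be sketched as a hash embedding; this is a toy illustration, not the real `AdvancedTokenizer`:

```typescript
// Deterministic toy embedding: identical text always yields the same
// unit-length vector, unlike a random-vector tokenizer.
function hashEmbed(text: string, dim = 8): number[] {
  const v = new Array<number>(dim).fill(0);
  for (let i = 0; i < text.length; i++) {
    const c = text.charCodeAt(i);
    v[(i * 31 + c) % dim] += ((c * 2654435761) % 997) / 997; // cheap mixing
  }
  const norm = Math.hypot(...v) || 1; // avoid division by zero on empty input
  return v.map((x) => x / norm);
}
```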
For schema updates, consult `docs/api/README.md`. For implementation details and roadmap status, see `SYSTEM_AUDIT.md`, `IMPLEMENTATION_PACKAGE.md`, and `ROADMAP_ANALYSIS.md`.