# Markdown Converter

Agent skill for markdown-converter
This FastMCP server provides tools to extract and concatenate personal context from an Obsidian vault. The primary use case is loading curated personal knowledge into an LLM conversation by leveraging Obsidian's frontmatter tagging system.
The server exposes five main tools:

- `fetch_personal_context` - Load all files matching a context type
- `fetch_matching_files` - Load files filtered by frontmatter properties and tags
- `fetch_frontmatter_index` - Browse file metadata without loading full content
- `fetch_specific_file` - Load the complete content of a single file
- `search_vault_content` - Search file content with plain text or regex
## Configuration

```yaml
vault_path: "/path/to/obsidian/vault"
default_context:
  properties:
    context: "personal"
  tags: []
```
## How Matching Works

- **File Discovery:** Recursively finds `.md` files using `pathlib`.
- **Frontmatter Extraction:** Parses the YAML block delimited by `---` at line start.
- **Properties Matching:** Matches key-value pairs in the frontmatter (e.g., `context: personal`).
- **Tags Matching:** Matches against the frontmatter tags list (e.g., `tags: [personal, finance]`). `match_all_tags=True` requires all specified tags (AND); otherwise any single tag suffices (OR).
- **Combined Filtering:** Properties and tags filters can be applied together; a file must satisfy both.
## Output Format

```
================================================================================
/absolute/path/to/file.md
================================================================================
[full file content including frontmatter]
================================================================================
/absolute/path/to/next/file.md
================================================================================
[full file content including frontmatter]
```
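A formatter producing this delimited layout is straightforward. `format_files` is a hypothetical helper, shown only to make the format concrete; the server's actual implementation may differ.

```python
SEPARATOR = "=" * 80

def format_files(files: list[tuple[str, str]]) -> str:
    """Concatenate (path, content) pairs into the delimited output format.

    Hypothetical sketch: each file gets a separator line, its absolute
    path, another separator, then its full content (frontmatter included).
    """
    blocks = [f"{SEPARATOR}\n{path}\n{SEPARATOR}\n{content}"
              for path, content in files]
    return "\n".join(blocks)
```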
## Tool Parameters

### `fetch_personal_context`

- `context_type: str` - Context type to match (e.g., 'personal', 'work')
- `chunk_index: int = 0` - Which chunk to retrieve
- `max_chars: int = 95000` - Maximum characters per chunk

### `fetch_matching_files`

- `properties: dict` - Key-value pairs to match in frontmatter
- `tags: list[str]` - Tags to search for
- `match_all_tags: bool = False` - Whether to require all tags (AND) vs any tags (OR)
- `chunk_index: int = 0` - Which chunk to retrieve
- `max_chars: int = 95000` - Maximum characters per chunk

### `fetch_frontmatter_index`

- `properties: dict` - Key-value pairs to match in frontmatter
- `tags: list[str]` - Tags to search for
- `match_all_tags: bool = False` - Whether to require all tags (AND) vs any tags (OR)

### `fetch_specific_file`

- `file_path: str` - Absolute or relative path to the file

### `search_vault_content`

- `search_pattern: str` - Text or regex pattern to search for in file content
- `case_sensitive: bool = False` - Whether to perform case-sensitive search
- `regex: bool = False` - Whether to treat search_pattern as regex (default: plain text)
- `context_chars: int = 100` - Number of characters of context around matches

## Dependencies

- `fastmcp` - MCP server framework
- `pathlib` - File system operations (built-in)
- `yaml` - Configuration and frontmatter parsing
- `re` - Frontmatter extraction regex (`---`)

## File Structure

```
obsidian_context_server.py   # Main MCP server implementation
config.yaml                  # Configuration file
README.md                    # Documentation and usage examples
```
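The `chunk_index` / `max_chars` pair suggests simple fixed-size paging over the concatenated result. A minimal sketch of that behavior, assuming plain character slicing (the server may chunk on file boundaries instead; `get_chunk` is a hypothetical name):

```python
def get_chunk(text: str, chunk_index: int = 0, max_chars: int = 95000) -> str:
    """Return one max_chars-sized window of the concatenated output.

    Hypothetical sketch: the caller pages through chunks by incrementing
    chunk_index until an empty string comes back.
    """
    start = chunk_index * max_chars
    return text[start:start + max_chars]
```

An LLM client would request chunk 0, and if the result fills `max_chars`, follow up with chunk 1, and so on.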
## Usage Examples

### Load all personal context

```python
# LLM calls:
fetch_personal_context()
# Returns: All files with context: personal property, chronologically ordered
```
### Filter by properties and tags

```python
# LLM calls:
fetch_matching_files(properties={"type": "project"}, tags=["active"])
# Returns: Files with type: project AND containing "active" tag
```
### Require all tags

```python
# LLM calls:
fetch_matching_files(tags=["research", "ai"], match_all_tags=True)
# Returns: Files containing both "research" AND "ai" tags
```
### Progressive disclosure

```python
# Phase 1: Browse metadata without loading full content
# LLM calls:
fetch_frontmatter_index(tags=["ai"])
# Returns: Table showing titles, paths, tags, context types for 25 files

# Phase 2: Agent selects specific files based on metadata
# LLM calls:
fetch_specific_file("research/ai-governance-framework.md")
# Returns: Complete content of just that targeted file
```
### Content search

```python
# Content-based discovery when you don't know the frontmatter structure
# LLM calls:
search_vault_content("machine learning algorithms")
# Returns: Frontmatter index of files containing that phrase, with match context

# Advanced regex search
# LLM calls:
search_vault_content("neural.*network", regex=True)
# Returns: Files matching regex pattern, sorted by relevance (match count)
```
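The core of this search (literal vs. regex patterns, case handling, and `context_chars` of surrounding text) can be sketched with `re` alone. `search_content` is a hypothetical helper operating on a single file's text, not the server's actual implementation:

```python
import re

def search_content(text: str, search_pattern: str,
                   case_sensitive: bool = False, regex: bool = False,
                   context_chars: int = 100) -> list[str]:
    """Return snippets with context_chars of context around each match.

    Hypothetical sketch: plain-text patterns are escaped so they match
    literally; case-insensitive search is the default.
    """
    flags = 0 if case_sensitive else re.IGNORECASE
    pattern = search_pattern if regex else re.escape(search_pattern)
    snippets = []
    for m in re.finditer(pattern, text, flags):
        start = max(0, m.start() - context_chars)
        end = min(len(text), m.end() + context_chars)
        snippets.append(text[start:end])
    return snippets
```

Ranking files by `len(snippets)` would give the match-count relevance ordering described above.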
This implementation provides a robust, flexible system for integrating Obsidian vault content into LLM conversations while maintaining a clear separation of concerns and comprehensive error handling. The progressive disclosure and content search features enable efficient context browsing, content discovery, and selective loading.