# Markdown Converter

Agent skill for markdown-converter
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is a multi-agent debate system built using the PydanticAI + LangGraph unified architecture. The system implements dialectical collaboration patterns where agents engage in structured debates to arrive at better solutions through thesis-antithesis-synthesis cycles.
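The thesis-antithesis-synthesis loop can be sketched in plain Python. This is a toy sketch, not the project's code: the `propose`/`critique`/`combine` callables stand in for the real PydanticAI agents in `agents/`.

```python
# Minimal sketch of a dialectical debate cycle: each round's synthesis
# becomes the starting point for the next round.

def run_debate(problem, propose, critique, combine, rounds=2):
    """Iterate thesis -> antithesis -> synthesis for a fixed number of rounds."""
    current = problem
    for _ in range(rounds):
        thesis = propose(current)              # thesis: propose a solution
        antithesis = critique(thesis)          # antithesis: critique it
        current = combine(thesis, antithesis)  # synthesis: combine insights
    return current

# Toy "agents" operating on a numeric solution, purely for illustration.
result = run_debate(
    0,
    propose=lambda x: x + 10,           # thesis overshoots
    critique=lambda t: t - 4,           # antithesis pulls back
    combine=lambda t, a: (t + a) // 2,  # synthesis averages the two
)
# result -> 16 after two rounds
```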
```bash
# Navigate to the debate implementation
cd colab/debate

# Install dependencies
pip install -r requirements.txt
```
```bash
# Run all tests with async support
pytest tests/ -v --asyncio-mode=auto

# Run specific test files
pytest tests/test_debate.py -v --asyncio-mode=auto
```
```bash
# Format code with Black
black .

# Type checking with mypy
mypy agents/ graph/ --ignore-missing-imports

# Check formatting without making changes
black --check .
```
```bash
# Start the FastAPI server
cd colab/debate
uvicorn api.main:app --reload --host 0.0.0.0 --port 8000

# For production
uvicorn api.main:app --host 0.0.0.0 --port 8000
```
```bash
# Test Archon connectivity
python -c "from archon.client import ArchonClient; client = ArchonClient(); print('✓ Archon connected')"

# Test PydanticAI agents
python -c "from agents.thesis_agent import thesis_agent; print('✓ Thesis agent loaded')"

# Test LangGraph workflow
python -c "from graph.workflow import get_debate_graph; print('✓ Graph compiled')"
```
This project uses ONE unified system: PydanticAI defines the agents, and LangGraph orchestrates them in a single debate workflow.
```
debate/
├── colab/                        # Main collaborative system
│   ├── debate/                   # Dialectical debate implementation
│   │   ├── agents/               # PydanticAI agents
│   │   │   ├── thesis_agent.py      # Proposes solutions
│   │   │   ├── antithesis_agent.py  # Critiques proposals
│   │   │   ├── synthesis_agent.py   # Combines insights
│   │   │   └── moderator_agent.py   # Manages debate flow
│   │   ├── graph/                # LangGraph orchestration
│   │   │   ├── state.py          # Debate state management
│   │   │   ├── nodes.py          # Node wrappers for agents
│   │   │   └── workflow.py       # Debate workflow compilation
│   │   ├── api/                  # FastAPI endpoints
│   │   └── tests/                # Test suite
│   └── PRPs/                     # Product Requirement Prompts
│       └── examples/             # Reference implementations
```
PydanticAI agents are wrapped in LangGraph nodes:
```python
async def thesis_node(state: DebateState, writer) -> dict:
    # Execute the PydanticAI agent
    result = await thesis_agent.run(
        state["messages"][-1].content,
        deps=deps,
        message_history=state.get("pydantic_message_history", []),
    )
    # Return the state update
    return {"agent_results": [result.data]}
```
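Each node returns only a partial state update, which LangGraph merges into the shared `DebateState` via per-key reducers. The stdlib-only sketch below illustrates the merge idea for a list-valued key like `agent_results`; the `apply_update` helper is hypothetical, standing in for what the graph runtime does internally.

```python
import operator
from typing import Annotated, TypedDict

# Sketch of a DebateState-style schema: the Annotated reducer declares
# how each node's partial update is folded into the shared state.
class DebateState(TypedDict):
    agent_results: Annotated[list, operator.add]  # concatenated across nodes

def apply_update(state: dict, update: dict, reducers: dict) -> dict:
    """Merge one node's partial return value into the full state."""
    merged = dict(state)
    for key, value in update.items():
        reduce_fn = reducers.get(key)
        merged[key] = reduce_fn(merged.get(key, []), value) if reduce_fn else value
    return merged

state = {"agent_results": ["thesis draft"]}
state = apply_update(
    state,
    {"agent_results": ["critique"]},            # e.g. the antithesis node's return
    reducers={"agent_results": operator.add},
)
# state["agent_results"] -> ["thesis draft", "critique"]
```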
Agents automatically fall back to web search when Archon results are insufficient:
```python
# In agent tools
results = await ctx.deps.archon_client.perform_rag_query(query)

# Automatic fallback if enabled
if ctx.deps.enable_web_fallback:
    results = await search_web_fallback(
        query=query,
        archon_results=results,
        threshold=ctx.deps.web_search_threshold,  # Default 0.6
    )
```
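The fallback decision itself reduces to a relevance-score check. A stdlib-only sketch follows; the helper name and the result shape (dicts with a `score` field) are assumptions, while the 0.6 default mirrors `web_search_threshold` above.

```python
WEB_SEARCH_THRESHOLD = 0.6  # matches the documented web_search_threshold default

def needs_web_fallback(archon_results, threshold=WEB_SEARCH_THRESHOLD):
    """Fall back to web search when Archon returns nothing, or when its
    best relevance score is below the configured threshold."""
    if not archon_results:
        return True
    best = max(r.get("score", 0.0) for r in archon_results)
    return best < threshold

needs_web_fallback([{"score": 0.9}])                   # -> False: Archon hit is good enough
needs_web_fallback([{"score": 0.4}, {"score": 0.55}])  # -> True: all below the 0.6 default
```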
Create a `.env` file in `colab/debate/` (see `.env.example` for all options):
```bash
# Quick start - copy from .env.minimal.example
MODEL_PROVIDER=openai
OPENAI_API_KEY=sk-...
ARCHON_API_KEY=...

# Or see .env.example for full configuration options including:
# - Multiple LLM providers (OpenAI, Anthropic, Gemini)
# - Web search APIs (Google, Bing)
# - Performance tuning
# - Security settings
# - Monitoring configuration
```
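Reading these variables follows the usual environment-loading pattern; here is a stdlib-only sketch. The `ENABLE_WEB_FALLBACK` and `WEB_SEARCH_THRESHOLD` variable names are assumptions mirroring the option names in this document, and the project may well use a dedicated settings loader instead.

```python
import os
from dataclasses import dataclass

@dataclass
class Settings:
    """Subset of the configuration read from environment variables."""
    model_provider: str = "openai"
    openai_api_key: str = ""
    archon_api_key: str = ""
    enable_web_fallback: bool = True      # default: True
    web_search_threshold: float = 0.6     # default: 0.6

    @classmethod
    def from_env(cls) -> "Settings":
        return cls(
            model_provider=os.getenv("MODEL_PROVIDER", cls.model_provider),
            openai_api_key=os.getenv("OPENAI_API_KEY", ""),
            archon_api_key=os.getenv("ARCHON_API_KEY", ""),
            # Hypothetical variable names, mirroring the documented options:
            enable_web_fallback=os.getenv("ENABLE_WEB_FALLBACK", "true").lower() == "true",
            web_search_threshold=float(os.getenv("WEB_SEARCH_THRESHOLD", "0.6")),
        )

os.environ["MODEL_PROVIDER"] = "anthropic"
settings = Settings.from_env()
# settings.model_provider -> "anthropic"; settings.web_search_threshold -> 0.6
```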
Reference implementations live in `PRPs/examples/`. Agents support automatic web search fallback when Archon results are insufficient:
- `enable_web_fallback`: Enable/disable web fallback (default: `True`)
- `web_search_threshold`: Minimum Archon relevance score (default: 0.6)

Tests use pytest-asyncio for async support; run pytest with `--asyncio-mode=auto` from `colab/debate`.

API endpoints:

- `GET /` - Health check
- `POST /debate` - Start a new debate (returns streaming response)
- `GET /debate/{session_id}/status` - Get debate status
- `GET /debate/{session_id}/solution` - Get final solution