# Multi-Turn Conversation Tool Calling Fixed by Injecting SystemPromptPart
**Date:** 2025-01-19
**Category:** Framework Integration, Multi-Turn Conversations
**Impact:** Critical - Affects all multi-turn agent conversations
**Status:** Resolved
Multi-turn conversations stopped working after the first query. The LLM would successfully call tools on the first user message but would refuse to call tools on subsequent messages, claiming it "cannot directly access" information despite tools being available.
Symptoms:

- First user message: tools are called and the answer is grounded in tool output.
- Subsequent messages: the LLM refuses to call tools, claiming it "cannot directly access" the data.
- Observed in session 019a99c9-c281-74b2-ac8c-8552ec736c9b.

Investigation:

- Created `tool_calling_wyckoff.py` to test tool calling with real Wyckoff data.
- Created `tool_calling_wyckoff_multiturn.py` to test passing `message_history`:
```python
# Turn 1 (works)
result1 = await agent.run(query1, deps=mock_deps)

# Extract message history (like production does)
message_history = result1.all_messages()

# Turn 2 (reproduces production bug)
result2 = await agent.run(query2, deps=mock_deps, message_history=message_history)
```
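For reference, a self-contained version of this two-turn pattern might look like the sketch below. It is not the original investigation script: the `lookup_price` tool, the prompts, and the use of Pydantic AI's `TestModel` stub (so it runs without an API key) are illustrative assumptions.

```python
# Minimal two-turn sketch (not the original script).
# TestModel is Pydantic AI's built-in stub model; lookup_price is a made-up tool.
import asyncio

from pydantic_ai import Agent
from pydantic_ai.models.test import TestModel

agent = Agent(TestModel(), system_prompt="You are a market analysis assistant.")

@agent.tool_plain
def lookup_price(symbol: str) -> str:
    """Return a canned price so the tool-calling path is exercised."""
    return f"{symbol}: 100.0"

async def main() -> None:
    # Turn 1: fresh conversation, so Pydantic AI injects the system prompt itself
    result1 = await agent.run("What is the current price of AAPL?")

    # Turn 2: pass the prior messages back in, as the production code does
    result2 = await agent.run(
        "And how does MSFT compare?",
        message_history=result1.all_messages(),
    )
    print(result2.output)  # use result2.data on older pydantic-ai releases

asyncio.run(main())
```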
Critical Discovery:
When inspecting the `message_history` structure, we found:

```
Message 1: ModelRequest
  Parts count: 2
  Part 1: SystemPromptPart   ← KEY FINDING!
  Part 2: UserPromptPart
```
The first `ModelRequest` in the message history contained a `SystemPromptPart`!
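The structure above can be dumped with a few lines of introspection. The helper below is a simplified stand-in for the `analyze_message_history()` utility mentioned later, not the project's actual code.

```python
# Sketch of a message-structure dump (simplified stand-in for the
# investigation's analyze_message_history() helper).
from pydantic_ai.messages import ModelMessage

def dump_message_parts(messages: list[ModelMessage]) -> None:
    """Print each message's type and the types of the parts it carries."""
    for i, msg in enumerate(messages, start=1):
        print(f"Message {i}: {type(msg).__name__}")
        print(f"  Parts count: {len(msg.parts)}")
        for j, part in enumerate(msg.parts, start=1):
            print(f"  Part {j}: {type(part).__name__}")

# Usage: dump_message_parts(result1.all_messages())
```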
Single-turn (fresh conversation):
```python
agent = Agent(model, system_prompt="You are helpful...")
result = await agent.run("query", deps=deps)
```
- Pydantic AI builds a `SystemPromptPart` and adds it to the first `ModelRequest` internally.
- It is sent to the provider as the `systemInstruction` field (Gemini) or as a system message (OpenAI).

Multi-turn (with `message_history`):
```python
result = await agent.run("query", deps=deps, message_history=previous_messages)
```
- Pydantic AI does NOT inject a `SystemPromptPart`.
- It expects the `message_history` to already contain it.

In `simple_chat.py`, the `load_conversation_history()` function (line 111) had:
```python
for msg in db_messages:
    if msg.role in ("human", "user"):
        pydantic_message = ModelRequest(...)
    elif msg.role == "assistant":
        pydantic_message = ModelResponse(...)
    else:
        # Skip system messages - Pydantic AI handles them
        continue  # ← BUG: This is wrong for multi-turn!
```
Why this was wrong:
- The reconstructed `message_history` had no system prompt.
- `agent.run()` was then called with incomplete history, so the model never saw its system instructions.

The fix: inject a `SystemPromptPart` into the first `ModelRequest` after loading history:
```python
from pydantic_ai.messages import SystemPromptPart

# After loading message_history and creating agent (which has system_prompt text)
if message_history and len(message_history) > 0:
    first_msg = message_history[0]
    if isinstance(first_msg, ModelRequest):
        # Check if SystemPromptPart already exists
        has_system_prompt = any(
            isinstance(part, SystemPromptPart) for part in first_msg.parts
        )
        if not has_system_prompt:
            # Inject SystemPromptPart at the beginning
            system_prompt_part = SystemPromptPart(content=system_prompt)
            new_parts = [system_prompt_part] + list(first_msg.parts)
            message_history[0] = ModelRequest(parts=new_parts)
```
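Folded into the history loader, the whole flow might look like the sketch below. The `load_history` name and the `db_messages` rows with `.role`/`.content` attributes mirror the earlier snippets but are assumptions about the storage layer, not the project's actual code.

```python
from pydantic_ai.messages import (
    ModelMessage,
    ModelRequest,
    ModelResponse,
    SystemPromptPart,
    TextPart,
    UserPromptPart,
)

def load_history(db_messages, system_prompt: str) -> list[ModelMessage]:
    """Rebuild Pydantic AI message history from stored rows, re-attaching the system prompt."""
    history: list[ModelMessage] = []
    for msg in db_messages:
        if msg.role in ("human", "user"):
            history.append(ModelRequest(parts=[UserPromptPart(content=msg.content)]))
        elif msg.role == "assistant":
            history.append(ModelResponse(parts=[TextPart(content=msg.content)]))
        # System rows are still skipped here; the prompt is re-attached below.

    if history and isinstance(history[0], ModelRequest):
        if not any(isinstance(p, SystemPromptPart) for p in history[0].parts):
            history[0] = ModelRequest(
                parts=[SystemPromptPart(content=system_prompt), *history[0].parts]
            )
    return history
```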
Files changed:

- `backend/app/agents/simple_chat.py`
  - Added `SystemPromptPart` to imports (line 28)
  - Injected `SystemPromptPart` into the first `ModelRequest` when loading history
- `backend/investigate/tool-calling/tool_calling_wyckoff_multiturn.py` (multi-turn reproduction script)
When using the `message_history` parameter, the first `ModelRequest` MUST contain a `SystemPromptPart`.

Storage (Database): the system prompt is not reconstructed from stored message rows (the loader skips system-role messages).

Runtime (Pydantic AI): the system prompt travels as a `SystemPromptPart` inside the first `ModelRequest`.
| Scenario | System Prompt Handling |
|---|---|
| Fresh conversation (no `message_history`) | Pydantic AI auto-injects the `SystemPromptPart` |
| With history (`message_history` passed) | Must already be present in the first `ModelRequest` |
| After a result (`result.all_messages()`) | `SystemPromptPart` is included in the first message |
Common mistake: Assuming frameworks handle everything automatically
Reality: only inspecting the runtime messages with `analyze_message_history()` revealed the `SystemPromptPart` and how it must be carried in multi-turn history.

If you're building multi-turn conversations with Pydantic AI:
1. Always include the system prompt in reconstructed history:

   ```python
   # After loading from database
   first_msg.parts = [SystemPromptPart(content=system_prompt)] + first_msg.parts
   ```

2. Don't rely on automatic injection when passing `message_history`.

3. Test multi-turn scenarios explicitly with `message_history` (a test sketch follows below).

This issue is specific to Pydantic AI's message history handling. Other frameworks may handle system prompts differently.
Always verify how YOUR framework handles system prompts in multi-turn conversations.
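As an example of the kind of explicit multi-turn test recommended above, the sketch below assumes pytest with the anyio plugin and uses Pydantic AI's `TestModel` stub so it runs without an API key. It asserts the invariant described in this post rather than exercising the project's real agents.

```python
import pytest

from pydantic_ai import Agent
from pydantic_ai.messages import ModelRequest, SystemPromptPart
from pydantic_ai.models.test import TestModel

@pytest.mark.anyio
async def test_system_prompt_survives_multi_turn() -> None:
    agent = Agent(TestModel(), system_prompt="You are helpful.")

    # Turn 1: fresh conversation.
    result1 = await agent.run("first question")
    history = result1.all_messages()

    # Invariant: the first ModelRequest carries the system prompt.
    assert isinstance(history[0], ModelRequest)
    assert any(isinstance(p, SystemPromptPart) for p in history[0].parts)

    # Turn 2: run with the prior history; per the behavior described above,
    # exactly one SystemPromptPart should remain in the request messages.
    result2 = await agent.run("follow-up question", message_history=history)
    system_parts = [
        part
        for msg in result2.all_messages()
        if isinstance(msg, ModelRequest)
        for part in msg.parts
        if isinstance(part, SystemPromptPart)
    ]
    assert len(system_parts) == 1
```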
After implementing the fix, verify the conversation against the `messages`, `llm_requests`, and `sessions` tables.

Expected results:
- `SystemPromptPart` present in the first message
- Tool calls succeed on follow-up turns

References:

- `backend/investigate/tool-calling/tool_calling_wyckoff_multiturn.py`
- `backend/app/agents/simple_chat.py` (lines 494-515, 905-927)
- Session: 019a99c9-c281-74b2-ac8c-8552ec736c9b
- Commit: c47a368 - "Fix multi-turn conversation tool calling by injecting SystemPromptPart"

This was a subtle but critical bug caused by an incomplete understanding of how Pydantic AI handles message history. The framework's behavior is correct and well designed (system prompts are configuration, not conversation data), but it requires developers to explicitly reconstruct complete message history, including system instructions.
Key Takeaway: When working with LLM frameworks, always inspect runtime message structures, test multi-turn scenarios, and understand the distinction between what the framework auto-handles vs. what you must provide.