Markdown Converter
Agent skill for markdown-converter
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This repository contains materials for an O'Reilly Live Training course teaching LangChain 1.0+ and LangGraph for building AI applications. The course covers agents, RAG systems, and complex AI workflows.
All code in this repository must follow LangChain >= 1.0.0 documentation and patterns. When writing or modifying code, always use the latest LangChain 1.0 APIs. Reference the official documentation at https://docs.langchain.com/ for current patterns.
| Deprecated (Pre-v1) | Current (v1.0+) |
|---|---|
| `initialize_agent()` / `AgentExecutor` | `create_agent()` returns a runnable graph directly |
| Memory classes (`ConversationBufferMemory`, etc.) | Use LangGraph checkpointing or message history |
| `LLMChain` | Use LCEL (`prompt \| model`) |
| Chain subclasses (`SequentialChain`, etc.) | Use LCEL with `\|` composition |
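The LCEL rows above hinge on runnables composing with the pipe operator. A minimal toy sketch (not the real LangChain classes) of how `prompt | model` chains via `__or__`:

```python
# Toy illustration of LCEL-style pipe composition. The real LangChain
# Runnable classes are far richer, but the `|` mechanism is the same idea:
# `a | b` builds a pipeline that feeds a's output into b.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # (self | other).invoke(x) == other.invoke(self.invoke(x))
        return Runnable(lambda value: other.invoke(self.invoke(value)))

prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
model = Runnable(lambda text: f"LLM response to: {text!r}")

chain = prompt | model
print(chain.invoke("cats"))
```

Order matters: the left operand runs first, so `prompt | model` prompts before calling the model.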
Key facts:

- `create_agent()` returns a LangGraph `CompiledGraph`, not an executor
- Agents are invoked with the `{"messages": [...]}` format, not `{"input": "..."}`
- Model providers ship as separate packages (`langchain-openai`, `langchain-anthropic`, etc.)

Setup commands:

```bash
# Setup with Makefile (uses uv + conda)
make all  # Full setup: conda env, pip-tools, notebook kernel

# Manual setup
conda create -n oreilly-langchain python=3.12
conda activate oreilly-langchain
pip install -r requirements/requirements.txt

# Jupyter kernel
python -m ipykernel install --user --name=oreilly-langchain

# Dependency management (uses uv pip-compile)
make env-update  # Compile and sync requirements
make freeze      # Freeze current environment
```
Required environment variables:

```bash
export OPENAI_API_KEY="..."
export TAVILY_API_KEY="..."     # For search tools
export LANGCHAIN_API_KEY="..."  # Optional: LangSmith tracing
```
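Before running the notebooks it can be worth a quick sanity check that the keys are actually set. `missing_keys` below is a hypothetical helper, not part of the course code:

```python
import os

def missing_keys(required=("OPENAI_API_KEY", "TAVILY_API_KEY")):
    """Return the names of required environment variables that are unset or empty."""
    return [key for key in required if not os.environ.get(key)]

# Warn early instead of failing deep inside an agent call
unset = missing_keys()
if unset:
    print("Missing environment variables:", ", ".join(unset))
```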
This codebase uses LangChain 1.0+ patterns. Key differences from pre-v1:
```python
# LangChain 1.0 pattern (use this)
from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o-mini",  # String format
    tools=tools,
    system_prompt="...",
)
result = agent.invoke({"messages": [{"role": "user", "content": "..."}]})
# NOT the old AgentExecutor pattern
```
```python
from langchain_core.tools import tool

@tool
def my_tool(param: str) -> str:
    """Tool docstring becomes the tool description."""
    result = f"processed: {param}"  # replace with real tool logic
    return result
```
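The name/docstring extraction that `@tool` performs can be sketched in plain Python. `simple_tool` below is a hypothetical stand-in for the real decorator, just to show where the tool metadata comes from:

```python
import inspect

def simple_tool(fn):
    # Hypothetical analogue of langchain's @tool: the function name becomes
    # the tool name and the docstring becomes the tool description.
    fn.name = fn.__name__
    fn.description = inspect.getdoc(fn) or ""
    return fn

@simple_tool
def my_tool(param: str) -> str:
    """Tool docstring becomes the tool description."""
    return f"processed: {param}"

print(my_tool.name)
print(my_tool.description)
```

This is why an undocumented tool function is a bug: with no docstring, the model gets no description of what the tool does.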
```python
# Token-by-token streaming
for token, metadata in agent.stream({"messages": "..."}, stream_mode="messages"):
    print(token.content, end="")

# State streaming
for step in agent.stream({"messages": "..."}, stream_mode="values"):
    step["messages"][-1].pretty_print()
```
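The difference between the two modes is granularity: `messages` emits each new chunk, while `values` emits the full state after every step. A toy generator (assumed shapes, not the real `agent.stream` API) makes the contrast concrete:

```python
def fake_stream(tokens, stream_mode):
    """Toy stand-in for agent.stream(): 'messages' yields individual
    chunks; 'values' yields the full accumulated state after each step."""
    state = []
    for token in tokens:
        state.append(token)
        if stream_mode == "messages":
            yield token          # just the new chunk
        elif stream_mode == "values":
            yield list(state)    # snapshot of everything so far

tokens = ["Hello", " ", "world"]
# messages mode: concatenating chunks reconstructs the output
print("".join(fake_stream(tokens, "messages")))
# values mode: the final item is the complete state
print(list(fake_stream(tokens, "values"))[-1])
```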
```python
from dataclasses import dataclass

from langchain_community.utilities import SQLDatabase
from langchain_core.tools import tool
from langgraph.runtime import get_runtime

@dataclass
class RuntimeContext:
    db: SQLDatabase

@tool
def execute_sql(query: str) -> str:
    """Run a SQL query against the database provided at runtime."""
    runtime = get_runtime(RuntimeContext)
    return runtime.context.db.run(query)

agent = create_agent(..., context_schema=RuntimeContext)
agent.invoke({...}, context=RuntimeContext(db=db))
```
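The `get_runtime` pattern is essentially dependency injection through ambient context: tools read shared resources instead of taking them as arguments. A stdlib sketch with `contextvars` (hypothetical names, not LangGraph's implementation) shows the idea:

```python
from contextvars import ContextVar
from dataclasses import dataclass

@dataclass
class RuntimeContext:
    db_name: str  # stands in for a real database handle

_current_context: ContextVar[RuntimeContext] = ContextVar("runtime_context")

def get_runtime_context() -> RuntimeContext:
    # Hypothetical analogue of langgraph's get_runtime(): the tool reads
    # the ambient context rather than receiving the db as a parameter.
    return _current_context.get()

def execute_sql(query: str) -> str:
    ctx = get_runtime_context()
    return f"ran {query!r} against {ctx.db_name}"

# The caller installs the context once; every tool call then sees it.
_current_context.set(RuntimeContext(db_name="chinook.db"))
print(execute_sql("SELECT 1"))
```

This keeps the tool signature clean (only `query`), which matters because the signature is what the model sees.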
Repository structure:

- `notebooks/` - Main course Jupyter notebooks (numbered by module)
- `scripts/` - Standalone Python examples (`jira-agent.py`, `rag_methods.py`)
- `archive/pre-v1/` - Legacy pre-LangChain-1.0 notebooks (reference only)
- `requirements/` - Dependencies managed via pip-tools (`requirements.in` → `requirements.txt`)

Common imports:

```python
# Models
from langchain_openai import ChatOpenAI

# Core
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langchain_core.tools import tool

# Agents
from langchain.agents import create_agent

# Search (LangChain 1.0)
from langchain_tavily import TavilySearch  # NOT langchain_community.tools.tavily_search
```