# codex-proxy
The `codex-proxy` is a Python-based intermediary service for the Codex system, designed to handle API orchestration and environment-specific logic. It is container-native and relies on Docker for consistent execution.
**Container Setup:**

- Container name: `codex-proxy`
- Port: `8765`
- `${HOME}/.gemini` is mounted to `/home/appuser/.gemini` for credential access.
- `./src` is mounted to `/app/src` to enable hot-reloading during development.
- `docker-compose` can be used for direct lifecycle management if the scripts are not used (a `docker run` equivalent is sketched below).
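A minimal sketch of the `docker run` invocation implied by the settings above, assuming the image is also named `codex-proxy` and that the container listens internally on `8765` (both assumptions); in practice `./scripts/control.sh start` or `docker-compose` manages the lifecycle.

```python
# Hypothetical docker run equivalent of the documented container setup.
# Image name and internal port are assumptions; prefer control.sh/docker-compose.
import os
import subprocess

home = os.path.expanduser("~")
subprocess.run(
    [
        "docker", "run", "-d",
        "--name", "codex-proxy",                        # container name
        "-p", "8765:8765",                              # proxy port
        "-v", f"{home}/.gemini:/home/appuser/.gemini",  # credential access
        "-v", f"{os.getcwd()}/src:/app/src",            # hot-reload source mount
        "codex-proxy",                                  # assumed image name
    ],
    check=True,
)
```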
**Control Script (`/scripts`):**

The project uses a unified control script for all operations. Execute it from the `codex-proxy/` root directory.
```bash
./scripts/control.sh <command> [options]
```
Commands:

- `start` - Start the proxy container in detached mode
- `stop` - Stop and remove the proxy container
- `logs` - Follow the logs of the proxy container
- `test` - Run the pytest test suite
- `run [-p|--profile <name>] -- "prompt"` - Rebuild the container and run a codex command through the proxy

Notes:

- Credentials are read from `~/.gemini`.
- Use `control.sh run` to quickly test end-to-end changes.
- Run `control.sh test` after modifications to ensure that chaining and proxy logic are still functional.

Examples:
```bash
# Start the proxy
./scripts/control.sh start

# Run a test command
./scripts/control.sh run -- "hello world"

# Run with specific profile
./scripts/control.sh run -p glm -- "test prompt"

# Check logs
./scripts/control.sh logs

# Run tests
./scripts/control.sh test

# Stop the proxy
./scripts/control.sh stop
```
The primary goal of `codex-proxy` is to ensure seamless compatibility between multiple AI providers (Gemini, Z.AI) and the OpenAI Responses API protocol used by the Codex ecosystem. The proxy normalizes the different wire formats to a unified internal OpenAI-like structure.
**Multi-Provider Support:**

- Gemini (`gemini*` models by default): uses Google's internal and public APIs with OAuth2 authentication
- Z.AI (`glm*`, `zai*` models by default): uses Z.AI's coding-focused API with Bearer token authentication
- `config.model_prefixes` maps model prefixes to providers (see the resolution sketch below)
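Prefix-to-provider resolution can be pictured roughly as follows. This is a hypothetical sketch rather than the actual `codex-proxy` implementation; the default mapping is inferred from the model prefixes listed above.

```python
# Hypothetical sketch of config.model_prefixes resolution; not actual proxy code.
DEFAULT_MODEL_PREFIXES = {
    "gemini": "gemini",  # gemini* -> Gemini provider
    "glm": "zai",        # glm*    -> Z.AI provider
    "zai": "zai",        # zai*    -> Z.AI provider
}

def resolve_provider(model: str, prefixes: dict = DEFAULT_MODEL_PREFIXES) -> str:
    """Return the provider for a model name by longest-prefix match."""
    for prefix in sorted(prefixes, key=len, reverse=True):
        if model.startswith(prefix):
            return prefixes[prefix]
    raise ValueError(f"no provider configured for model {model!r}")

assert resolve_provider("gemini-2.0-flash") == "gemini"
assert resolve_provider("glm-4.6") == "zai"
```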
**Request Normalization (`normalizer.py`):**
- Converts incoming requests (`/responses` endpoint) to the internal OpenAI chat format (sketched below)
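A minimal sketch of what that conversion might look like, assuming a request body with `instructions` and either a string or message-list `input`; the field handling is inferred from the fields named in this document, not taken from `normalizer.py`.

```python
# Illustrative normalization of a Responses API body into chat-style messages.
# Field handling is inferred from this README, not from normalizer.py.
def normalize_responses_request(body: dict) -> list:
    messages = []
    if body.get("instructions"):
        messages.append({"role": "system", "content": body["instructions"]})
    user_input = body.get("input", [])
    if isinstance(user_input, str):  # "input" may be a bare string...
        messages.append({"role": "user", "content": user_input})
    else:                            # ...or a list of typed message items
        for item in user_input:
            if item.get("type") == "message":
                messages.append({"role": item["role"], "content": item["content"]})
    return messages

body = {"model": "gemini-2.0-flash", "instructions": "Be terse.", "input": "hello world"}
print(normalize_responses_request(body))
# [{'role': 'system', 'content': 'Be terse.'}, {'role': 'user', 'content': 'hello world'}]
```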
**OpenAI Responses API Compatibility:**

- `/responses` endpoint with `model`, `input`, `instructions`, `previous_response_id`, and `store` fields
- `response.created`, `response.done`, and intermediate content chunks
- `reasoning_content` surfaced from Gemini thinking blocks
- `previous_response_id` for multi-turn conversations (see the client sketch below)
- `x-codex-turn-state` and response metadata
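Multi-turn chaining via `previous_response_id` could be exercised against the proxy like this; the port comes from the container setup above, while the exact request and response payloads are assumptions based on the fields listed here.

```python
# Hypothetical client-side illustration of previous_response_id chaining.
import requests

BASE = "http://localhost:8765"  # proxy port from the container setup

first = requests.post(f"{BASE}/responses", json={
    "model": "gemini-2.0-flash",
    "input": "Name a prime number.",
    "store": True,                        # allow the turn to be referenced later
}).json()

followup = requests.post(f"{BASE}/responses", json={
    "model": "gemini-2.0-flash",
    "input": "Double it.",
    "previous_response_id": first["id"],  # chain onto the stored turn
}).json()
print(followup)
```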
**Gemini Integration:**

- Maps Gemini thinking blocks to `reasoning_content`
- Converts Gemini function calls to the OpenAI `tool_calls` structure
- Maps `thinkingTokenCount` to the OpenAI usage format
- `config.models` for the model list, `config.compaction_model` for compaction, `config.fallback_models` for fallback logic (see the config sketch after this list)
- `config.reasoning` for customizable effort levels and budgets, with `config.reasoning_effort` as the default
- `config.model_prefixes` for custom model-prefix-to-provider mappings
- For `/compact` endpoints, the provider is determined by `config.compaction_model` (not the request's model), ensuring compaction works with any selected model; both Gemini and Z.AI models support compaction
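The configuration surface above, summarized as a Python dict for illustration; the key layout mirrors the `config.*` names in this list, while the concrete values and the `budget_tokens` field are assumptions.

```python
# Illustrative config shape; values and the budget_tokens field are assumptions.
config = {
    "models": ["gemini-2.0-flash", "glm-4.6"],  # config.models: served model list
    "compaction_model": "gemini-2.0-flash",     # drives provider choice on /compact
    "fallback_models": ["glm-4.6"],             # config.fallback_models
    "reasoning_effort": "medium",               # default effort level
    "reasoning": {                              # per-level effort/budget overrides
        "high": {"budget_tokens": 8192},
        "low": {"budget_tokens": 1024},
    },
    "model_prefixes": {"gemini": "gemini", "glm": "zai", "zai": "zai"},
}
```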
**Z.AI Integration:**

- Endpoint: `https://api.z.ai/api/coding/paas/v4/chat/completions` (an illustrative call is sketched below)
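An illustrative upstream call, assuming an OpenAI-style chat-completions body and a hypothetical `ZAI_API_KEY` environment variable (both assumptions); the endpoint URL and Bearer token authentication come from this document.

```python
# Hedged sketch of the upstream Z.AI request; body shape and env var are assumptions.
import os
import requests

resp = requests.post(
    "https://api.z.ai/api/coding/paas/v4/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['ZAI_API_KEY']}"},
    json={
        "model": "glm-4.6",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": True,
    },
    stream=True,
)
for line in resp.iter_lines():
    if line:
        print(line.decode())
```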
**Streaming Protocol:**

- `response.created` event with the full response object at start (see the event sketch below)
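The event sequence can be sketched as a generator; the `response.created` and `response.done` names come from this document, while the payload shapes and the intermediate `response.output_text.delta` event name are assumptions.

```python
# Sketch of the streamed event sequence; payloads and delta name are assumptions.
def stream_events(response_id: str, chunks: list):
    yield {"type": "response.created",             # full response object at start
           "response": {"id": response_id, "status": "in_progress"}}
    for text in chunks:                            # intermediate content chunks
        yield {"type": "response.output_text.delta", "delta": text}
    yield {"type": "response.done",                # terminal event
           "response": {"id": response_id, "status": "completed"}}

for event in stream_events("resp_123", ["hel", "lo"]):
    print(event)
```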
**Error Handling:**

**Performance Optimizations:**
**Reference Documentation:**

- OpenAI Responses API: https://context7.com/websites/platform_openai/llms.txt?topic=Responses
- Z.AI API: https://context7.com/websites/z_ai/llms.txt?topic=api
- `~/Work/codex-proxy/reference/` for protocol analysis

The proxy must maintain 1:1 behavioral parity with the native OpenAI Responses API while seamlessly bridging the underlying provider differences.