# Markdown Converter

Agent skill for markdown-converter
This directory is where AI agents can put requests for features or documentation. Each request should be a markdown file with a clear title and a description of the request.
Use kebab-case names that describe the feature or change:
- `implement-real-bootstrap-evaluation.md`
- `scenario-config-builder.md`
- `add-policy-evolution-cli.md`

A well-structured request helps the implementing agent understand the context, requirements, and acceptance criteria clearly.
## Request Template

````markdown
# Feature Request: <Title>

**Date**: <YYYY-MM-DD>
**Priority**: <High|Medium|Low>
**Affects**: <Components, modules, or systems affected>

## Summary

<1-3 sentence description of what is being requested and why>

## Problem Statement

<Describe the current state and why it's problematic>

### Current Behavior

<What happens now - include code snippets if helpful>

```python
# Example of current problematic code
def current_approach():
    # This has issues because...
    ...
```

<Explain the impact: bugs caused, maintenance burden, correctness issues, etc.>

## Proposed Solution

```python
# Example of proposed solution
class ProposedSolution:
    """Docstring explaining the solution."""

    def proposed_method(self, arg: Type) -> ReturnType:
        """What this method should do."""
        ...
```

### Usage Example

```python
# Show how the solution would be used
solution = ProposedSolution()
result = solution.proposed_method(input)
```

## Implementation Notes

<Any specific implementation details, constraints, or considerations>

<Reference relevant invariants from docs/reference/patterns-and-conventions.md>

## Affected Components

| Component | Impact |
|---|---|
| <component> | <impact> |

## Related Documentation

- docs/reference/<relevant-doc>.md
- docs/legacy/<relevant-doc>.md

## Related Code

- path/to/relevant/file.py - <why it's relevant>
- path/to/other/file.py - <why it's relevant>

## Notes

<Any additional context, historical information, or considerations>
````
---

## Request Categories

### Feature Requests

New functionality to be added. Include:

- Clear problem statement
- Proposed solution with API examples
- Acceptance criteria

### Bug Fix Requests

Issues that need to be corrected. Include:

- Steps to reproduce
- Expected vs actual behavior
- Root cause analysis (if known)

### Refactoring Requests

Code improvements without behavior change. Include:

- Current state and why it's problematic
- Proposed improvement
- Migration path if breaking changes

### Documentation Requests

Documentation to be added or updated. Include:

- What needs documenting
- Where it should live
- Outline of content

---

## Examples of Good Requests

### Example 1: Clear Problem + Solution

From `scenario-config-builder.md`:

```markdown
## Problem Statement

The codebase currently has **multiple parallel helper methods** that extract agent configuration from scenario YAML files. This pattern is error-prone because:

1. **Easy to forget parameters**: When adding a new agent property, developers must remember to add extraction logic in multiple places
2. **No single source of truth**: The same extraction logic is duplicated
3. **Silent failures**: Missing parameters cause subtle bugs

### The Bug This Pattern Caused

When `liquidity_pool` was added to the scenario config schema, the extraction was added to the main simulation path but **not** to the bootstrap evaluation path.
```
This is effective because it explains why the pattern is error-prone and ties the request to a concrete bug the pattern has already caused.
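To make the failure mode concrete, here is a minimal, hypothetical sketch of the duplicated-helper pattern the example describes. The helper names mirror the before/after snippet later in this document; `SimulationRunner` and the YAML layout are assumptions, not code from the repository.

```python
# Hypothetical sketch of the fragile pattern (class name and layout assumed).
from typing import Any


class SimulationRunner:
    def __init__(self, scenario: dict[str, Any]) -> None:
        self._scenario = scenario

    # One private helper per field: every new scenario property needs a new
    # helper, and every code path must remember to call it.
    def _get_agent_opening_balance(self, agent_id: str) -> float:
        return self._scenario["agents"][agent_id]["opening_balance"]

    def _get_agent_credit_limit(self, agent_id: str) -> float:
        return self._scenario["agents"][agent_id]["credit_limit"]

    def _get_agent_liquidity_pool(self, agent_id: str) -> float | None:
        # If a code path (e.g. bootstrap evaluation) never calls this helper,
        # the field is silently dropped instead of raising an error.
        return self._scenario["agents"][agent_id].get("liquidity_pool")
```

Because nothing forces every path to call every helper, adding a field in one place and forgetting it in another fails silently, which is exactly the `liquidity_pool` bug the example cites.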
From `implement-real-bootstrap-evaluation.md`:
```markdown
## Acceptance Criteria

1. [ ] Initial simulation produces `initial_simulation_output` for LLM context
2. [ ] `TransactionSampler.collect_transactions()` is called after initial simulation
3. [ ] Bootstrap samples use `TransactionSampler.create_samples()` with `method="bootstrap"`
4. [ ] Each bootstrap evaluation uses resampled transactions (not parametric generation)
5. [ ] LLM prompt includes all three event streams
6. [ ] Deterministic scenarios (exp1) continue to work correctly
7. [ ] Tests verify bootstrap is resampling, not regenerating
```
This is effective because each criterion is specific, verifiable, and tied to a concrete piece of the implementation.
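As a rough illustration of the distinction criteria 3 and 4 draw, the sketch below resamples previously observed transactions with replacement instead of generating new ones from fitted parameters. The `Transaction` shape and function names are assumptions for illustration, not the project's actual `TransactionSampler` API.

```python
# Hedged sketch: bootstrap resampling vs. parametric generation (types assumed).
import random
from dataclasses import dataclass


@dataclass
class Transaction:
    amount: float
    tick: int


def bootstrap_sample(observed: list[Transaction], rng: random.Random) -> list[Transaction]:
    """Draw with replacement from transactions we actually observed."""
    return rng.choices(observed, k=len(observed))


def parametric_sample(mean_amount: float, n: int, rng: random.Random) -> list[Transaction]:
    """Generate brand-new transactions from fitted parameters (what the request rules out)."""
    return [Transaction(amount=rng.expovariate(1 / mean_amount), tick=t) for t in range(n)]


rng = random.Random(42)
observed = [Transaction(100.0, 1), Transaction(250.0, 2), Transaction(75.0, 3)]
resampled = bootstrap_sample(observed, rng)  # every element is a real observed transaction
```

A test for criterion 7 could, for instance, assert that every resampled transaction is a member of the observed set.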
From `scenario-config-builder.md`:
### Usage After Migration

```python
# Before (fragile)
evaluator = BootstrapPolicyEvaluator(
    opening_balance=self._get_agent_opening_balance(agent_id),
    credit_limit=self._get_agent_credit_limit(agent_id),
    max_collateral_capacity=self._get_agent_max_collateral_capacity(agent_id),
    liquidity_pool=self._get_agent_liquidity_pool(agent_id),
)

# After (single extraction, can't forget fields)
config = self._scenario_builder.extract_agent_config(agent_id)
evaluator = BootstrapPolicyEvaluator(
    opening_balance=config.opening_balance,
    credit_limit=config.credit_limit,
    max_collateral_capacity=config.max_collateral_capacity,
    liquidity_pool=config.liquidity_pool,
)
```
This is effective because:

- Shows concrete code, not abstract description
- Makes the improvement immediately obvious
- Implementer knows exactly what to build

---

## Anti-Patterns to Avoid

### Too Vague

❌ Bad:

```markdown
## Summary

Make the bootstrap evaluation better.

## Acceptance Criteria

- [ ] It works correctly
```
✅ Good:
```markdown
## Summary

Implement actual bootstrap resampling (with replacement from historical data) instead of parametric Monte Carlo simulation for policy evaluation.

## Acceptance Criteria

- [ ] Bootstrap samples use `TransactionSampler.create_samples()` with `method="bootstrap"`
- [ ] Each bootstrap evaluation uses resampled transactions (not parametric generation)
```
❌ Bad:
```markdown
## Proposed Solution

Add a ScenarioConfigBuilder class.
```
✅ Good:
```markdown
## Problem Statement

The codebase has 4 separate helper methods that extract agent config, leading to bugs when new fields are added to one path but not others.

## Proposed Solution

Add a ScenarioConfigBuilder class that provides a single extraction point.
```
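For illustration, a single extraction point along the lines the ✅ example proposes might look like the sketch below. The `AgentConfig` fields mirror the usage example earlier in this document, but the class itself is a hypothetical outline rather than the actual implementation.

```python
# Hypothetical outline of a single extraction point (field names mirror the
# usage example above; everything else is assumed).
from dataclasses import dataclass
from typing import Any


@dataclass(frozen=True)
class AgentConfig:
    opening_balance: float
    credit_limit: float
    max_collateral_capacity: float
    liquidity_pool: float | None


class ScenarioConfigBuilder:
    def __init__(self, scenario: dict[str, Any]) -> None:
        self._scenario = scenario

    def extract_agent_config(self, agent_id: str) -> AgentConfig:
        """Extract every agent field in one place, so a new field is added exactly once."""
        raw = self._scenario["agents"][agent_id]
        return AgentConfig(
            opening_balance=raw["opening_balance"],
            credit_limit=raw["credit_limit"],
            max_collateral_capacity=raw["max_collateral_capacity"],
            liquidity_pool=raw.get("liquidity_pool"),
        )
```

With all fields carried on one object, a call site cannot forget to pass one of them individually.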
❌ Bad:
```markdown
## Summary

Fix the bootstrap bug.
```
✅ Good:
```markdown
## Summary

Fix bootstrap evaluation ignoring `liquidity_pool` parameter (commit `c06a880`).

## Related Code

- `api/payment_simulator/experiments/runner/optimization.py` - Missing extraction
- `api/payment_simulator/ai_cash_mgmt/bootstrap/evaluator.py` - Receives None
```
Finally, check that any relevant invariants from `docs/reference/patterns-and-conventions.md` are noted where applicable.