# Markdown Converter
Agent skill for markdown-converter
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
Tel-Insights is a microservices system. Start services using the service runner:
```bash
# Service runner (recommended)
python run_service.py aggregator      # Telegram message collector
python run_service.py ai-analysis    # AI processing with Gemini
python run_service.py smart-analysis # MCP server and alerts
python run_service.py alerting       # Telegram bot interface

# Alternative scripts
./run_service.sh <service>   # Linux/macOS
run_service.bat <service>    # Windows
```
```bash
# Run all tests
pytest

# Run by category
pytest -m unit
pytest -m integration
pytest -m e2e

# With coverage
pytest --cov=src --cov-report=html

# Single test file
pytest tests/unit/test_models.py
```
```bash
# Apply migrations
alembic upgrade head

# Create new migration
alembic revision --autogenerate -m "Description"

# Rollback
alembic downgrade -1
```
```bash
# Format code
black src/
isort src/

# Lint
flake8 src/
mypy src/
```
Tel-Insights is a microservices architecture with asynchronous message queue communication:
```
Aggregator → RabbitMQ → AI Analysis → PostgreSQL
     ↓           ↓            ↑
PostgreSQL  Smart Analysis ←──┘
     ↑           ↓
 Alerting ←─────────────── MCP Tools
```
- **Aggregator** (`src/aggregator/`): Telegram client using Telethon; monitors channels, publishes to queue
- **AI Analysis** (`src/ai_analysis/`): Queue consumer; processes messages with Gemini LLM, stores metadata
- **Smart Analysis** (`src/smart_analysis/`): MCP server, frequency-based alerts, news summarization
- **Alerting** (`src/alerting/`): Telegram bot using python-telegram-bot (TODO: not yet implemented)
- **Database** (`src/shared/database.py`): PostgreSQL with SQLAlchemy; AI metadata stored in an `ai_metadata` JSONB column (GIN indexed)
- **Models** (`src/shared/models.py`): SQLAlchemy ORM models
- **Config** (`src/shared/config.py`): Environment-based configuration
- **Messaging** (`src/shared/messaging.py`): RabbitMQ queue utilities
- **Logging** (`src/shared/logging.py`): Structured logging with Rich/Structlog
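Services exchange messages as JSON payloads over RabbitMQ. A minimal sketch of how a publish-side envelope might be built and decoded on the consumer side — the function and field names here are illustrative assumptions, not the actual `src/shared/messaging.py` API:

```python
import json
from datetime import datetime, timezone


def build_message_envelope(channel_id: int, message_id: int, text: str) -> bytes:
    """Serialize a collected Telegram message for the AI Analysis queue.

    Field names are hypothetical; the real schema lives in src/shared/messaging.py.
    """
    envelope = {
        "channel_id": channel_id,
        "message_id": message_id,
        "text": text,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope).encode("utf-8")


def parse_message_envelope(body: bytes) -> dict:
    """Decode the same envelope on the consumer side."""
    return json.loads(body.decode("utf-8"))
```

Keeping the envelope as plain JSON lets any service consume it without sharing code beyond the schema.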
Copy `config.env.template` to `.env` and configure:
- `DATABASE_URL`: PostgreSQL connection
- `RABBITMQ_URL`: Message queue
- `TELEGRAM_API_ID` / `TELEGRAM_API_HASH`: Telegram API credentials
- `TELEGRAM_BOT_TOKEN`: Bot token
- `GOOGLE_API_KEY`: Gemini API key
- `MONITORED_CHANNELS`: Comma-separated channel list

Services communicate asynchronously via RabbitMQ. Never use direct HTTP calls between core services; use the message queue to maintain loose coupling and fault tolerance.
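A plain-stdlib sketch of environment-based settings loading in the spirit of `src/shared/config.py` (the real module may use pydantic or similar; names here are assumptions apart from the `.env` keys listed above):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    database_url: str
    rabbitmq_url: str
    monitored_channels: list


def load_settings(env=None) -> Settings:
    """Read required variables, mirroring the .env keys documented above."""
    env = os.environ if env is None else env
    return Settings(
        database_url=env["DATABASE_URL"],
        rabbitmq_url=env["RABBITMQ_URL"],
        # MONITORED_CHANNELS is a comma-separated channel list
        monitored_channels=[
            c.strip() for c in env["MONITORED_CHANNELS"].split(",") if c.strip()
        ],
    )
```

Failing fast with a `KeyError` on a missing variable surfaces misconfiguration at startup rather than mid-pipeline.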
Messages store rich AI analysis in PostgreSQL JSONB:
```json
{
  "summary": "Brief message summary",
  "topics": ["technology", "AI"],
  "sentiment": "positive",
  "entities": {"organizations": ["OpenAI"], "locations": ["SF"]},
  "keywords": ["AI", "breakthrough"],
  "confidence_score": 0.95
}
```
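A hedged sketch of validating an incoming payload of this shape before writing it to the `ai_metadata` JSONB column (the helper name is hypothetical; the project may validate differently):

```python
def validate_ai_metadata(meta: dict) -> dict:
    """Basic shape check for the AI-metadata payload shown above."""
    required = {"summary", "topics", "sentiment", "entities", "keywords", "confidence_score"}
    missing = required - meta.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    score = meta["confidence_score"]
    # Confidence scores are expected to be normalized to [0, 1]
    if not 0.0 <= score <= 1.0:
        raise ValueError(f"confidence_score out of range: {score}")
    return meta
```

Validating before insert keeps malformed LLM output from polluting the GIN-indexed column.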
Smart Analysis runs an MCP server on port 8003 with tools:
- `summarize_news`: Generate news summaries
- `topic_trends`: Analyze topic frequency
- `check_alerts`: Process alert conditions

Test suite layout:

- `tests/unit/`: Isolated component tests
- `tests/integration/`: Service interaction tests
- `tests/e2e/`: Full pipeline tests

When running linting/formatting, always use the commands specified above. The project uses Black with an 88-character line length and isort with the Black profile.
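Tests are selected by marker (`pytest -m unit`, and so on). A minimal illustration of how a marked unit test might look — the test body and assertion are hypothetical, not taken from the repository:

```python
import pytest


@pytest.mark.unit
def test_topics_are_lowercase():
    # Hypothetical invariant: topic strings in ai_metadata are normalized to lowercase
    metadata = {"topics": ["technology", "ai"]}
    assert all(t == t.lower() for t in metadata["topics"])
```

Markers like `unit` should be registered in `pytest.ini` or `pyproject.toml` so that `pytest -m unit` selects them without warnings.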