ArchDesigner employs a sophisticated multi-agent AI system built on advanced Chain-of-Thought (CoT) reasoning patterns. Each agent is a specialized serverless function that performs specific tasks with high accuracy, transparency, and security.
What is Chain-of-Thought (CoT) Reasoning?
Chain-of-Thought is an AI prompting technique where the model explicitly breaks down complex problems into step-by-step reasoning stages. Instead of jumping directly to an answer, the AI "shows its work" by progressing through logical stages (input gathering → analysis → problem identification → recommendations → output). This leads to more accurate, transparent, and debuggable AI decisions.
Last Updated: December 29, 2024
Version: 0.0.0
- Single Responsibility: Each agent focuses on one domain
- Chain-of-Thought Reasoning: All agents use explicit multi-stage reasoning
- Security-First: Input sanitization, validation, and audit logging
- Observable: Comprehensive logging and metrics
- Validated Output: Schema validation for all agent responses
- RBAC: Role-based access control for all operations
- Runtime: Deno (secure JavaScript/TypeScript runtime)
- Platform: Base44 SDK v0.8.4
- LLM Integration: Base44 InvokeLLM (abstracted LLM provider)
- Language: TypeScript
- Deployment: Serverless functions on Base44 infrastructure
All agents follow this execution pattern:
1. Request Authentication (Base44 Auth)
2. Input Validation (validateRequired, validateEnum)
3. Data Fetching (parallel where possible)
4. Authorization Check (enforceOwnership)
5. Input Sanitization (filterSensitiveForLLM, sanitiseString)
6. Audit Logging (auditLog)
7. CoT Reasoning Execution (executeAdvancedCoTReasoning)
8. Output Validation (validateCoTOutput)
9. Metrics & Logging (logger.metric)
10. Response Generation (createSuccessResponse)
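A minimal sketch of how these ten steps could compose inside a single Deno handler is shown below. The utility names come from the shared library described later, but their exact signatures, the error-code member, and the inline data-fetch placeholder are illustrative assumptions, not the actual implementation.

```typescript
// Sketch only: utility signatures and the ErrorCodes member are assumptions.
import {
  auditLog,
  createErrorResponse,
  createLogger,
  createSuccessResponse,
  enforceOwnership,
  ErrorCodes,
  executeAdvancedCoTReasoning,
  filterSensitiveForLLM,
  validateCoTOutput,
  validateRequired,
} from "./lib/utils.ts";

Deno.serve(async (req: Request) => {
  const logger = createLogger("exampleAgent"); // structured logging with a correlation ID
  try {
    // 1. Request authentication is assumed to be enforced by the Base44 platform.
    const body = await req.json();
    validateRequired(body, ["project_id"]); // 2. input validation
    // 3. Data fetching (loading the project and its services) would happen here; omitted.
    const project = { id: body.project_id };
    enforceOwnership(req, project); // 4. authorization check (RBAC)
    const context = filterSensitiveForLLM(project); // 5. input sanitization
    await auditLog({ action: "example_agent_run", resource: body.project_id }); // 6. audit trail
    const result = await executeAdvancedCoTReasoning({ task: "example_analysis", context, logger }); // 7. CoT
    validateCoTOutput(result); // 8. output validation
    logger.metric("agent_execution", { durationMs: 0 }); // 9. metrics (placeholder value)
    return createSuccessResponse(result); // 10. standardized response
  } catch (err) {
    return createErrorResponse(ErrorCodes.INTERNAL_ERROR, String(err)); // argument shapes assumed
  }
});
```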
Every agent uses a 5-stage Chain-of-Thought reasoning process:
Stage 1: Input Gathering
- Collect and structure all relevant data
- Identify context and constraints
- List all available resources
Stage 2: Contextual Analysis
- Understand relationships and dependencies
- Assess current state vs. desired state
- Identify patterns and anti-patterns
Stage 3: Problem Identification
- Discover issues, gaps, and risks
- Prioritize by severity and impact
- Consider edge cases
Stage 4: Recommendation Generation
- Propose actionable solutions
- Provide rationale for each recommendation
- Include implementation steps
Stage 5: Output Formatting
- Structure results according to schema
- Include confidence scores
- Provide reasoning transparency
All agents share a common utilities library (`functions/lib/utils.ts`):
- `sanitiseString()`: Clean and limit string inputs
- `sanitiseLLMInput()`: Prepare prompts for the LLM
- `filterSensitiveForLLM()`: Remove sensitive data before LLM calls
- `enforceOwnership()`: RBAC ownership validation
- `validateRequired()`: Check required fields
- `validateEnum()`: Validate enum values
- `validateCoTOutput()`: Validate CoT reasoning output
- `createLogger()`: Structured logging with correlation IDs
- `generateCorrelationId()`: Unique request tracking
- `auditLog()`: Security audit trail
- `createSuccessResponse()`: Standardized success format
- `createErrorResponse()`: Standardized error format
- `ErrorCodes`: Enumerated error codes
- `executeAdvancedCoTReasoning()`: Orchestrates 5-stage reasoning
- `buildCoTPrompt()`: Constructs explicit stage prompts
- `CoTStages`: Enumeration of reasoning stages
File: `functions/analyzeArchitecture.ts`
Endpoint: `/analyzeArchitecture`
Purpose: Comprehensive microservices architecture analysis and health assessment
{
project_id: string (required)
}
What it does:
- Analyzes microservices architecture health (0-100 score)
- Identifies bottlenecks and single points of failure
- Detects security vulnerabilities in architecture
- Finds scalability limitations
- Recommends missing essential services
- Provides confidence-scored recommendations
How it works:
- Fetches project and all associated services
- Prepares sanitized context (service names, categories, technologies)
- Executes 5-stage CoT reasoning:
- Input Gathering: Lists all services and tech stacks
- Contextual Analysis: Assesses relationships and patterns
- Problem Identification: Finds architectural issues
- Recommendation Generation: Suggests improvements
- Output Formatting: Structures findings with scores
- Validates output against schema
- Returns health score, bottlenecks, security concerns, scalability issues, and missing services
{
  health_score: number (0-100),
  reasoning_steps: Array<{
    stage: string,
    findings: string[],
    confidence: number
  }>,
  bottlenecks: Array<{
    service: string,
    issue: string,
    severity: 'critical' | 'high' | 'medium' | 'low',
    recommendation: string
  }>,
  security_concerns: Array<{
    title: string,
    description: string,
    priority: string
  }>,
  scalability: string[],
  performance: string[],
  missing_services: Array<{
    name: string,
    purpose: string,
    priority: string
  }>,
  overall_confidence: number
}
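A hypothetical client-side call to this endpoint is sketched below. The base URL and bearer-token header format are assumptions for illustration; the actual invocation typically goes through the Base44 SDK.

```typescript
// Hypothetical call; BASE_URL and the auth header format are assumptions, not the Base44 SDK.
const BASE_URL = "https://<your-app>.base44.app/functions"; // placeholder
const token = Deno.env.get("BASE44_TOKEN") ?? ""; // assumed token source

const res = await fetch(`${BASE_URL}/analyzeArchitecture`, {
  method: "POST",
  headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
  body: JSON.stringify({ project_id: "proj_123" }),
});
const analysis = await res.json();
console.log(`Health: ${analysis.health_score}, bottlenecks: ${analysis.bottlenecks?.length ?? 0}`);
```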
Health Score Calculation:
- Deduct points for each bottleneck (severity-weighted)
- Deduct for security concerns (priority-weighted)
- Deduct for missing critical services
- Add points for good patterns and best practices
- Base score: 100, minimum: 0
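The sketch below shows how these deductions could combine into a score. The per-severity penalties and the missing-service penalty are illustrative assumptions, not the weights used in analyzeArchitecture.ts; bonuses for good patterns are omitted for brevity.

```typescript
// Illustrative scoring only; the real deduction weights may differ.
type Severity = "critical" | "high" | "medium" | "low";

const SEVERITY_PENALTY: Record<Severity, number> = { critical: 20, high: 10, medium: 5, low: 2 }; // assumed

function healthScore(
  bottlenecks: { severity: Severity }[],
  securityConcerns: { priority: Severity }[],
  missingCriticalServices: number,
): number {
  let score = 100; // base score per the rules above
  for (const b of bottlenecks) score -= SEVERITY_PENALTY[b.severity];
  for (const c of securityConcerns) score -= SEVERITY_PENALTY[c.priority];
  score -= missingCriticalServices * 5; // assumed penalty per missing critical service
  return Math.max(0, Math.min(100, score)); // clamp to the documented 0-100 range
}
```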
Why this matters:
- Provides objective architecture quality metric
- Prioritizes technical debt remediation
- Prevents production issues before they occur
- Guides architecture evolution decisions
File: `functions/securityAudit.ts`
Endpoint: `/securityAudit`
Purpose: OWASP Top 10 compliance checking and comprehensive security analysis
{
project_id: string (required),
audit_type?: 'full' | 'static' | 'dynamic' | 'dependencies' | 'compliance',
compliance_standards?: Array<'SOC2' | 'ISO27001' | 'HIPAA' | 'PCI-DSS' | 'GDPR' | 'NIST'>,
include_remediation?: boolean,
severity_threshold?: 'critical' | 'high' | 'medium' | 'low'
}
What it does:
- Static Code Analysis: Scans for OWASP Top 10 vulnerabilities
- Dynamic Runtime Checks: Authentication, authorization, session management
- Dependency Scanning: CVE detection, outdated packages
- Compliance Assessment: SOC2, ISO 27001, HIPAA, PCI-DSS, GDPR, NIST
- Remediation Guidance: Actionable fixes with code examples
- Exploit Scenarios: Demonstrates security impact
How it works:
- Fetches comprehensive project context (services, CI/CD, APIs, existing findings, code samples)
- Prepares sanitized security context
- Executes specialized CoT for each audit type:
- Static: Code pattern analysis for vulnerabilities
- Dynamic: Runtime security checks
- Dependencies: CVE database cross-reference
- Compliance: Multi-standard checklist validation
- Generates prioritized findings with severity scoring
- Creates remediation plans with code examples
- Saves findings to SecurityFinding entity
The audit maps findings to the OWASP Top 10 (2021) categories:
- A01:2021 - Broken Access Control
  - Checks: RBAC implementation, ownership validation, privilege escalation
- A02:2021 - Cryptographic Failures
  - Checks: Encryption at rest/transit, key management, algorithm strength
- A03:2021 - Injection
  - Checks: Input sanitization, parameterized queries, validation
- A04:2021 - Insecure Design
  - Checks: Architecture patterns, threat modeling, security controls
- A05:2021 - Security Misconfiguration
  - Checks: Default configs, unnecessary features, security headers
- A06:2021 - Vulnerable and Outdated Components
  - Checks: Dependency versions, CVE presence, patch availability
- A07:2021 - Identification and Authentication Failures
  - Checks: Password policies, MFA, session management, brute force protection
- A08:2021 - Software and Data Integrity Failures
  - Checks: CI/CD security, unsigned packages, integrity validation
- A09:2021 - Security Logging and Monitoring Failures
  - Checks: Audit logging, alerting, log retention, tampering detection
- A10:2021 - Server-Side Request Forgery (SSRF)
  - Checks: URL validation, network segmentation, allowlists
{
  audit_id: string,
  project_id: string,
  audit_type: string,
  overall_score: number (0-100),
  findings: Array<{
    id: string,
    severity: 'critical' | 'high' | 'medium' | 'low',
    category: string (OWASP category),
    title: string,
    description: string,
    affected_services: string[],
    exploit_scenario: string,
    remediation: {
      description: string,
      code_example: string,
      effort: 'low' | 'medium' | 'high',
      priority: number
    }
  }>,
  compliance_results: {
    [standard: string]: {
      passed: number,
      failed: number,
      percentage: number,
      requirements: Array<{
        id: string,
        description: string,
        status: 'pass' | 'fail' | 'warning',
        evidence: string
      }>
    }
  }
}
Severity Calculation:
- Critical: Exploitable remotely, high impact, low complexity
- High: Exploitable, moderate impact or complexity
- Medium: Difficult to exploit or limited impact
- Low: Minimal risk, best practice violation
Compliance Scoring:
- Each standard has weighted requirements
- Must pass critical requirements for certification
- Percentage score = (passed / total) * 100
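The percentage is a direct application of the stated formula over the per-requirement statuses from the response schema above:

```typescript
// percentage = (passed / total) * 100, counting only requirements with status "pass"
function compliancePercentage(requirements: { status: "pass" | "fail" | "warning" }[]): number {
  const passed = requirements.filter((r) => r.status === "pass").length;
  return requirements.length === 0 ? 0 : Math.round((passed / requirements.length) * 100);
}
```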
File: `functions/generateCode.ts`
Endpoint: `/generateCode`
Purpose: Production-ready microservice scaffolding with tests and deployment configs
{
service_id: string (required),
language?: 'typescript' | 'python' | 'go' | 'java' | 'rust',
framework?: string, // e.g., 'express', 'fastapi', 'gin', 'spring-boot'
include_tests?: boolean (default: true),
include_docker?: boolean (default: true),
include_api?: boolean (default: true),
architecture_pattern?: 'microservices' | 'monolith' | 'serverless'
}
What it does:
- Generates complete microservice project structure
- Creates API endpoints based on service definition
- Generates database models and schemas
- Scaffolds unit and integration tests
- Produces Dockerfile and docker-compose.yml
- Creates CI/CD pipeline configurations
- Generates comprehensive README and API documentation
How it works:
- Fetches service details and parent project context
- Prepares safe code generation context
- Executes CoT reasoning:
- Input Gathering: Service requirements, APIs, technologies
- Contextual Analysis: Best practices for language/framework
- Problem Identification: Edge cases, error handling needs
- Recommendation Generation: File structure, dependencies
- Output Formatting: Complete codebase with docs
- Validates generated code structure
- Saves CodeGeneration entity for future reference
Source Files:
- Main application entry point
- API route handlers
- Business logic/services
- Data models
- Middleware (auth, logging, error handling)
- Configuration management
Test Files:
- Unit tests for each module
- Integration tests for APIs
- Test fixtures and mocks
- Test utilities
Configuration Files:
- Package manager config (package.json, requirements.txt, go.mod)
- Environment variable templates (.env.example)
- Linter/formatter configs
- TypeScript/build configs
Docker Files:
- Multi-stage Dockerfile
- docker-compose.yml with dependencies
- .dockerignore
Documentation:
- README.md with setup instructions
- API documentation (Swagger/OpenAPI)
- Architecture decision records
{
  files: Array<{
    path: string,
    content: string,
    description: string,
    type: 'source' | 'test' | 'config' | 'docker' | 'docs'
  }>,
  project_structure: string, // ASCII tree
  dependencies: {
    production: string[],
    development: string[]
  },
  setup_commands: string[], // Install and run instructions
  api_documentation: string, // Markdown API docs
  test_coverage_target: number // Expected coverage percentage
}
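A small sketch of consuming this response: the field names follow the schema above, while the output directory is an arbitrary placeholder.

```typescript
// Writes each generated file to disk under an assumed output directory.
interface GeneratedFile { path: string; content: string; type: string }

async function writeGeneratedFiles(files: GeneratedFile[], outDir = "./generated"): Promise<void> {
  for (const file of files) {
    const target = `${outDir}/${file.path}`;
    // create any intermediate directories implied by the file path
    await Deno.mkdir(target.substring(0, target.lastIndexOf("/")), { recursive: true });
    await Deno.writeTextFile(target, file.content);
  }
}
```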
Language-Specific Patterns:
- TypeScript/Node.js: Express, Fastify, NestJS patterns
- Python: FastAPI, Flask, Django patterns
- Go: Gin, Echo, Chi patterns
- Java: Spring Boot, Micronaut patterns
Best Practices Enforced:
- Dependency injection
- Environment-based configuration
- Structured logging
- Error handling middleware
- Input validation
- API versioning
- CORS and security headers
- Health check endpoints
- Graceful shutdown
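To make the last two practices concrete, here is a minimal Express/TypeScript sketch of a health check endpoint with graceful shutdown. It is illustrative of the kind of code the generator enforces, not the generator's literal output.

```typescript
// Illustrative only: health check endpoint plus graceful shutdown on SIGTERM.
import express from "express";

const app = express();

app.get("/healthz", (_req, res) => {
  // liveness/readiness probe target for orchestrators
  res.status(200).json({ status: "ok", uptime_s: process.uptime() });
});

const server = app.listen(Number(process.env.PORT ?? 3000));

process.on("SIGTERM", () => {
  // stop accepting new connections, let in-flight requests finish, then exit
  server.close(() => process.exit(0));
});
```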
File: `functions/generateCICD.ts`
Endpoint: `/generateCICD`
Purpose: Generate production-grade CI/CD pipeline configurations
{
project_id: string (required),
platform: 'github-actions' | 'gitlab-ci' | 'jenkins' | 'circleci',
include_security_scanning?: boolean (default: true),
include_deployment?: boolean (default: true),
deployment_target?: 'kubernetes' | 'docker-swarm' | 'aws' | 'azure' | 'gcp'
}
What it does:
- Generates multi-stage CI/CD pipelines
- Includes lint, test, build, scan, deploy stages
- Configures security scanning (SAST, dependency scan, container scan)
- Sets up deployment automation
- Creates environment-specific configurations
- Adds secret management
- Implements rollback strategies
Pipeline Stages:
- Lint: Code quality checks
- Test: Unit and integration tests
- Build: Compile and package
- Security Scan: SAST, dependencies, secrets
- Container Build: Docker image creation
- Deploy: Environment-specific deployment
- Smoke Test: Post-deployment validation
{
  platform: string,
  config_files: Array<{
    filename: string,
    content: string,
    description: string
  }>,
  environment_variables: Array<{
    key: string,
    description: string,
    required: boolean,
    secret: boolean
  }>,
  setup_instructions: string
}
File: `functions/generateDocumentation.ts`
Endpoint: `/generateDocumentation`
Purpose: Automated, comprehensive project documentation generation
{
project_id: string (required),
doc_type: 'api' | 'architecture' | 'deployment' | 'status-report' | 'all',
format?: 'markdown' | 'html' | 'pdf'
}
What it does:
- Generates API documentation (OpenAPI/Swagger)
- Creates architecture documentation with diagrams
- Produces deployment guides
- Compiles status reports
- Generates README files
- Creates architecture decision records (ADRs)
API Documentation:
- Endpoint specifications
- Request/response schemas
- Authentication requirements
- Error codes
- Rate limits
- Example requests
Architecture Documentation:
- System overview
- Service catalog
- Dependency maps
- Data flow diagrams
- Technology stack
- Design decisions
Deployment Guides:
- Prerequisites
- Step-by-step deployment
- Configuration instructions
- Troubleshooting
- Rollback procedures
{
  document_id: string,
  title: string,
  content: string,
  format: string,
  metadata: {
    generated_at: string,
    version: string,
    services_documented: number
  }
}
File: `functions/projectHealthCheck.ts`
Endpoint: `/projectHealthCheck`
Purpose: Continuous project health monitoring and scoring
{
project_id: string (required)
}
What it does:
- Calculates overall project health score (0-100)
- Monitors service availability
- Tracks technical debt
- Identifies stale components
- Measures test coverage
- Assesses documentation completeness
Service Health (30%):
- All services have descriptions
- Technology stacks defined
- APIs documented
- Dependencies mapped
Security Health (25%):
- Recent security audit
- No critical vulnerabilities
- Compliance standards met
- Security scanning enabled
Testing Health (20%):
- Test coverage > 80%
- Tests passing
- Integration tests present
Documentation Health (15%):
- README exists
- API docs current
- Architecture documented
CI/CD Health (10%):
- Pipeline configured
- Deployments successful
- Security scans passing
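The overall score is the weighted combination of the category scores listed above (30/25/20/15/10). A minimal sketch, assuming each category score is already on a 0-100 scale:

```typescript
// Weights taken directly from the category breakdown above.
const WEIGHTS = { services: 0.30, security: 0.25, testing: 0.20, documentation: 0.15, cicd: 0.10 };

function overallHealth(scores: Record<keyof typeof WEIGHTS, number>): number {
  return Math.round(
    Object.entries(WEIGHTS).reduce(
      (total, [key, weight]) => total + scores[key as keyof typeof WEIGHTS] * weight,
      0,
    ),
  );
}

// e.g. overallHealth({ services: 90, security: 70, testing: 60, documentation: 80, cicd: 100 }) === 79
```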
{
  health_score: number,
  metrics: {
    services: { score: number, details: object },
    security: { score: number, details: object },
    testing: { score: number, details: object },
    documentation: { score: number, details: object },
    cicd: { score: number, details: object }
  },
  recommendations: Array<{
    priority: string,
    action: string,
    impact: number
  }>
}
File: `functions/securityScan.ts`
Endpoint: `/securityScan`
Purpose: Rapid vulnerability scanning and threat detection
{
project_id: string (required),
scan_type: 'quick' | 'deep',
target?: 'services' | 'dependencies' | 'infrastructure' | 'all'
}
What it does:
- Quick vulnerability scans
- Dependency CVE checking
- Infrastructure misconfiguration detection
- Secret exposure detection
- License compliance checking
Scan Types:
- Quick: Fast scan of critical areas (< 30 seconds)
- Deep: Comprehensive scan (2-5 minutes)
{
  scan_id: string,
  vulnerabilities_found: number,
  critical_count: number,
  high_count: number,
  medium_count: number,
  low_count: number,
  findings: Array<{
    id: string,
    severity: string,
    type: string,
    description: string,
    remediation: string
  }>
}
File: `functions/apiGateway.ts`
Endpoint: `/apiGateway`
Purpose: API management, routing, and integration orchestration
What it does:
- Manages API integrations
- Routes requests to services
- Handles authentication
- Implements rate limiting
- Provides API analytics
File: `functions/exportProject.ts`
Endpoint: `/exportProject`
Purpose: Project export in multiple formats
{
project_id: string (required),
format: 'json' | 'yaml' | 'zip' | 'terraform',
include_code?: boolean,
include_docs?: boolean
}
What it does:
- Exports complete project definition
- Includes architecture, services, configurations
- Generates Infrastructure as Code (Terraform, CloudFormation)
- Creates importable archives
- Supports project migration
File: `functions/sendNotification.ts`
Endpoint: `/sendNotification`
Purpose: Multi-channel notification delivery
{
user_id: string (required),
channel: 'email' | 'whatsapp' | 'slack' | 'webhook',
type: 'info' | 'warning' | 'error' | 'success',
title: string,
message: string,
metadata?: object
}
What it does:
- Sends notifications across multiple channels
- Supports Email, WhatsApp, Slack, webhooks
- Provides templates for common notification types
- Respects user notification preferences
- Includes retry logic for failed deliveries
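A minimal retry-with-backoff wrapper of the kind described in the last item; the actual retry policy in sendNotification.ts may differ.

```typescript
// Illustrative exponential backoff: 500ms, 1000ms, 2000ms, ... between attempts.
async function withRetry<T>(send: () => Promise<T>, attempts = 3, baseDelayMs = 500): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await send();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError; // all attempts failed
}
```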
Chain-of-Thought reasoning makes AI decisions transparent, debuggable, and more accurate by breaking complex tasks into explicit stages.
All agents use `executeAdvancedCoTReasoning()` from the shared utilities:
// Imported from the shared utilities library (functions/lib/utils.ts)
import { executeAdvancedCoTReasoning } from './lib/utils.ts';

const result = await executeAdvancedCoTReasoning({
  task: 'task_name',                  // identifies the reasoning task in logs and metrics
  context: { /* sanitized data */ },  // already passed through filterSensitiveForLLM()
  logger: logger,
  outputSchema: VALIDATION_SCHEMA,    // schema the output is validated against
  executor: async (ctx) => {
    // LLM invocation with explicit CoT prompt
  },
  validator: (result) => {
    // Business logic validation
  }
});
Stage 1: Input Gathering
- Lists all data points
- Identifies constraints
- Notes missing information
Stage 2: Contextual Analysis
- Analyzes relationships
- Considers best practices
- Evaluates patterns
Stage 3: Problem Identification
- Discovers issues
- Prioritizes problems
- Considers edge cases
Stage 4: Recommendation Generation
- Proposes solutions
- Provides rationale
- Includes implementation steps
Stage 5: Output Formatting
- Structures according to schema
- Adds confidence scores
- Includes reasoning trail
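The sketch below shows one way an explicit five-stage prompt could be assembled. It is a hypothetical illustration; the real `buildCoTPrompt()` in functions/lib/utils.ts may construct its prompts differently.

```typescript
// Hypothetical stage-prompt builder, not the actual buildCoTPrompt() implementation.
const COT_STAGES = [
  "Input Gathering",
  "Contextual Analysis",
  "Problem Identification",
  "Recommendation Generation",
  "Output Formatting",
] as const;

function buildStagePrompt(task: string, context: unknown): string {
  const stages = COT_STAGES
    .map((stage, i) => `Stage ${i + 1} (${stage}): reason explicitly before moving on.`)
    .join("\n");
  return [
    `Task: ${task}`,
    `Context: ${JSON.stringify(context)}`,
    "Work through the following stages in order, showing your reasoning at each stage:",
    stages,
    "Return the final answer as JSON matching the provided schema, with a confidence score per stage.",
  ].join("\n\n");
}
```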
Every CoT output is validated:
- Schema Validation: Required fields present and correctly typed
- Business Logic Validation: Values within acceptable ranges
- Consistency Checks: No contradictions in reasoning
- Confidence Scoring: Outputs confidence levels (0-1)
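The checks below illustrate the kinds of validation described above (required fields, ranges, confidence bounds). They are not the actual `validateCoTOutput()` implementation; the field names follow the analyzeArchitecture response schema.

```typescript
// Illustrative validation pass over a CoT result; returns a list of issues found.
interface ReasoningStep { stage: string; findings: string[]; confidence: number }

function checkCoTOutput(output: { reasoning_steps?: ReasoningStep[]; overall_confidence?: number }): string[] {
  const issues: string[] = [];
  if (!Array.isArray(output.reasoning_steps) || output.reasoning_steps.length === 0) {
    issues.push("reasoning_steps missing or empty"); // schema validation
  }
  for (const step of output.reasoning_steps ?? []) {
    if (step.confidence < 0 || step.confidence > 1) {
      issues.push(`confidence out of range for stage ${step.stage}`); // range validation (0-1)
    }
  }
  if (typeof output.overall_confidence !== "number") {
    issues.push("overall_confidence missing"); // required field check
  }
  return issues;
}
```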
Sanitization:
- `sanitiseString()`: Removes control characters, limits length
- `sanitiseLLMInput()`: Prepares safe prompts for the LLM
- `filterSensitiveForLLM()`: Removes PII, secrets, and other sensitive data
Validation:
- Required field checking
- Enum validation
- Type checking
- Range validation
Authentication:
- All endpoints require Base44 authentication
- JWT tokens validated on every request
- User identity attached to all operations
Authorization:
- RBAC enforcement with `enforceOwnership()`
- Project-level access control
- Operation-level permissions
Every agent operation is logged:
- User identity
- Operation type
- Resource accessed
- Timestamp
- Correlation ID for tracing
LLM Data Handling:
- Never send PII to LLM
- Filter sensitive fields before prompts
- Use sanitized, generic identifiers
- Log all LLM interactions
Data Retention:
- Audit logs retained for compliance
- Sensitive data encrypted at rest
- PII minimization principles
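A minimal sketch of the sensitive-field filtering described above. The deny-list and the redaction behaviour are assumptions for illustration; the real `filterSensitiveForLLM()` may filter differently.

```typescript
// Assumed deny-list of key fragments that should never reach an LLM prompt.
const SENSITIVE_KEYS = ["password", "secret", "token", "api_key", "email", "phone"];

function redactForLLM(input: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(input)) {
    // replace values whose key looks sensitive; pass everything else through untouched
    safe[key] = SENSITIVE_KEYS.some((s) => key.toLowerCase().includes(s)) ? "[REDACTED]" : value;
  }
  return safe;
}
```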
- Always use CoT: Never skip the 5-stage reasoning
- Validate everything: Input, process, output
- Sanitize for LLM: Never trust raw user input in prompts
- Log comprehensively: Use structured logging with correlation IDs
- Handle errors gracefully: Return standardized error responses
- Enforce RBAC: Check ownership before operations
- Audit security operations: Log all security-sensitive actions
- Provide confidence scores: Be transparent about AI certainty
- Test extensively: Unit tests, integration tests, security tests
- Document decisions: Explain complex logic and trade-offs
- Check authentication: Ensure user is authenticated
- Validate inputs: Use TypeScript types and runtime validation
- Handle errors: Expect and handle ErrorCodes
- Monitor performance: Track execution time metrics
- Review reasoning: Examine CoT stages for transparency
- Trust but verify: Review AI recommendations before applying
- Iterate on failures: Use validation issues to improve prompts
- Respect rate limits: Don't overwhelm LLM endpoints
- Cache when possible: Reduce redundant LLM calls
- Provide feedback: Report issues to improve agent quality
Each agent tracks:
- Execution Time: Total time to complete
- LLM Latency: Time spent in LLM calls
- Data Fetch Time: Time to retrieve data
- Validation Time: Time to validate outputs
- Confidence Scores: AI certainty in recommendations
- Validation Pass Rate: Percentage of valid outputs
- User Acceptance: How often users accept recommendations
- Error Rate: Frequency of errors
- Uptime: Agent availability
- Error Types: Categorized error tracking
- Rate Limiting: Request throttling
- Cost Tracking: LLM token usage
"Authentication required"
- Ensure valid Base44 auth token in request
- Check token expiration
"Project not found"
- Verify project_id exists
- Check user has access to project
"Validation failed"
- Review required fields in input schema
- Check data types match expected
"LLM invocation failed"
- Check LLM rate limits
- Verify prompt isn't too long
- Review LLM error messages
"Ownership validation failed"
- User doesn't own the resource
- Check RBAC permissions
Enable verbose logging:
logger.info('Debug message', { detailed_context });
Check correlation ID:
- All logs include correlation_id
- Trace entire request flow
Review CoT stages:
- Examine reasoning_steps in response
- Identify where logic diverged
Validate schemas:
- Ensure output matches expected schema
- Check validation_issues in response
- Multi-Agent Orchestration: Agents collaborate on complex tasks
- Agent Marketplace: Community-contributed specialized agents
- Custom Agent Creation: Visual agent builder
- Agent Analytics: Performance dashboards
- Predictive Agents: ML-based forecasting
- Conversational Agents: Natural language interaction
- Agent Chaining: Sequential agent execution
- Parallel Execution: Run multiple agents concurrently
- Agent Versioning: A/B test agent improvements
- Self-Improving Agents: Learn from feedback
To add a new agent:
- Create a new TypeScript file in `functions/`
- Import utilities from `functions/lib/utils.ts`
- Implement 5-stage CoT reasoning
- Add input/output schemas
- Include comprehensive validation
- Add error handling
- Write unit tests
- Document in this file
- Add to API documentation
- Submit pull request
For agent-related issues:
- GitHub Issues: archdesigner/issues
- Documentation: README.md
- API Reference: API_DOCS.md
Maintained by: Krosebrook Team
License: See LICENSE file
Version: 0.0.0
Last Updated: December 29, 2024