Markdown Converter
Agent skill for markdown-converter
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
This is a go-bricks demo project demonstrating production-ready patterns for building modular Go applications. It uses the
go-bricks framework (located at ../go-bricks) with local replacement via go.mod.
Key characteristics:
Requirements:
This repository is the public showcase for GoBricks. It exists so external engineers can clone the project, run it locally, and experience core framework capabilities—configuration, observability, secrets, jobs, messaging—without reverse engineering. Every contribution should sharpen that first-hour experience.
GoBricks is a production-grade framework for building MVPs fast. It provides enterprise-quality tooling (validation, observability, tracing, type safety) while enabling rapid development velocity. The framework itself maintains high quality standards so applications built with it can move quickly with confidence.
Success Criteria: Visitors should be able to say, "I stood up a tenant-aware API with tracing, secrets, jobs, and database access in under an hour using GoBricks," and they should leave confident they can repeat that pattern in their own domain.
When working in this codebase, follow these principles from the developer manifesto:
- `context.Context` as first parameter for tracing, cancellation, deadlines. No global variables for tenant IDs or trace IDs; always thread context through calls
- `WhereRaw()` usage with required annotations
- Client vs AMQPClient

```bash
# 1. Start infrastructure services
make docker-up

# 2. Run database migrations
make migrate

# 3. Build and run application
make run

# 4. Test the API
curl http://localhost:8080/health
curl http://localhost:8080/api/v1/products
```
```bash
make dev      # Full dev environment: docker-up + migrate (recommended first step)
make build    # Build application binary to bin/go-bricks-demo-project
make run      # Build + run (requires services to be running)
make test     # Run all tests with race detector
make check    # Run fmt + lint + test (pre-commit checks)
```
```bash
make docker-up      # Start PostgreSQL + RabbitMQ + observability stack
make docker-down    # Stop all services and remove volumes
make status         # Show running service status
make logs           # Follow logs from all services
```
Note: All docker-compose files are located in the `etc/docker/` directory, but the Makefile handles the path for you.
```bash
make migrate         # Run Flyway migrations (uses --profile migrations)
make migrate-info    # Show migration status
```
```bash
make fmt         # Format code with gofmt
make lint        # Run golangci-lint
make coverage    # Generate HTML coverage report
```
```bash
make loadtest-install     # Install k6 load testing tool
make loadtest-smoke       # Quick validation (30 seconds) - run this first!
make loadtest-crud        # Realistic CRUD mix test (~15 min)
make loadtest-read        # Read-only baseline test (~12 min)
make loadtest-ramp        # Find breaking points (~17 min)
make loadtest-spike       # Test resilience under traffic spikes (~6 min)
make loadtest-sustained   # Detect memory/connection leaks (~17 min)
make loadtest-all         # Run all tests sequentially (~60 min)
```
See wiki/LOAD_TESTING.md for detailed load testing guide.
The application uses `go-bricks/app.New()`, which handles:
- `config.yaml` loading (see Config System section)

Entry point: `cmd/api/main.go`
- Calls `app.New()` to bootstrap framework
- `getModulesToLoad()`
- `application.Run()`

Modules must implement the `app.Module` interface:
```go
type Module interface {
	Name() string
	Init(*app.ModuleDeps) error
	RegisterRoutes(*server.HandlerRegistry, server.RouteRegistrar)
	DeclareMessaging(*messaging.Declarations)
	Shutdown() error
}
```
Module structure pattern (see internal/modules/products/):
```
products/
├── module.go     # Module implementation, wires dependencies
├── domain/       # Domain models (Product)
├── repository/   # Data access layer (ProductRepository)
├── service/      # Business logic (ProductService)
└── http/         # HTTP handlers (ProductHandler)
```
Dependency injection flow:
1. `module.Init(deps *app.ModuleDeps)`
2. `deps.GetDB` and `deps.GetMessaging` (context-aware functions)
3. `RegisterRoutes()`

go-bricks config uses koanf for YAML loading with two loading methods:
- `Unmarshal(key, &struct)` - for nested structs with `mapstructure:` tags
- `InjectInto(&struct)` - for flat structs with `config:` tags (only supports primitives)

Environment-based config:
- `APP_ENV=development` loads `config.yaml` + `config.development.yaml`
- Environment-specific overrides go in `config.{env}.yaml`
- Environment variables override config keys (e.g. `APP_NAME` overrides `app.name`)

IMPORTANT: The `DEBUG` environment variable conflicts with go-bricks' `debug` config section. Unset it before running:
```bash
unset DEBUG && make run
```
Modules receive context-aware database access via `deps.GetDB`:
```go
func (m *Module) Init(deps *app.ModuleDeps) error {
	m.getDB = deps.GetDB // Store function, don't call yet
	m.repo = repository.NewSQLProductRepository(m.getDB)
	// ...
}
```
In repository methods:
```go
func (r *Repository) GetByID(ctx context.Context, id string) (*Product, error) {
	db, err := r.getDB(ctx) // Get DB for this request's context
	if err != nil {
		return nil, err
	}

	// Use type-safe Filter API
	qb := database.NewQueryBuilder(database.PostgreSQL)
	f := qb.Filter()
	query, args, err := qb.Select("id", "name", "price").
		From("products").
		Where(f.Eq("id", id)).
		ToSQL()
	if err != nil {
		return nil, err
	}
	// Execute query...
}
```
Why context-aware? It enables multi-tenant mode, where `ctx` determines which database connection to use.
Current mode: single-tenant (see `config.yaml`: `multitenant.enabled: false`)
Multi-tenant mode (can be enabled):
- Tenant resolved from the request header (`X-Tenant-ID`)
- `deps.GetDB(ctx)` returns a tenant-specific DB based on context

The project supports two observability stacks that can be switched using Docker Compose profiles:
Best for: Local development with immediate feedback (< 30 seconds vs. 10-15 min cloud delay)
Start:
```bash
cd etc/docker
docker-compose --profile local up -d
```
Access:
Features:
The local stack includes two production-ready dashboards:
1. Application Overview (`Go Bricks - Application Overview`)
OTel Runtime Metrics Support: The dashboard now uses OpenTelemetry semantic conventions for Go runtime metrics:
- `gobricks_go_memory_used` (with type labels), `gobricks_go_memory_limit`, `gobricks_go_memory_allocated`
- `gobricks_go_goroutine_count`
- `gobricks_go_memory_gc_goal`, existing `go_gc_duration_seconds`
- `gobricks_go_processor_limit` (GOMAXPROCS), `gobricks_go_config_gogc`
- `gobricks_go_schedule_duration` (histogram)
- `gobricks_go_memory_allocations` (count)
- `go_memstats_*` metrics for backward compatibility

2. Error Analysis (`Go Bricks - Error Analysis`)
Access dashboards:
Dashboard features:
Best for: Production-like monitoring and APM
Setup:
Create a `.env` file in the project root:

```
NEW_RELIC_LICENSE_KEY=your_license_key_here
NEW_RELIC_REGION=US  # or EU
```
```bash
make docker-up-newrelic
# Or manually:
cd etc/docker
docker-compose --profile newrelic up -d
```
Access:
- `go-bricks-demo-project`

Switching stacks:

```bash
# Stop current stack
cd etc/docker && docker-compose down

# Start desired stack
docker-compose --profile local up -d      # For Prometheus/Grafana/Loki/Tempo
docker-compose --profile newrelic up -d   # For New Relic
```
Note: The application doesn't need a restart when switching; it always sends to `localhost:4317`.
Planned implementation (OTLP export via Grafana Alloy):
```
Application (zerolog) → OTel SDK → Grafana Alloy → Loki → Grafana
                                         ↓
                           (also exports to Tempo & Prometheus)
```
Current Status:
- The logger reports `mode="stdout+OTLP"`, but logs are only going to stdout
- Loki is configured with `volume_enabled: true` and ready to ingest logs

When OTLP logs work, you'll get:
LogQL query examples:
```
# All error-level logs
{container_name=~".*"} |= "level" | json | level="error"

# Logs for a specific trace
{container_name=~".*"} |= "trace_id" | json | trace_id="abc123"

# HTTP errors (status >= 400)
{container_name=~".*"} | json | http_status >= 400

# Search for specific text in messages
{container_name=~".*"} |= "database connection failed"

# Rate of error logs (errors per second)
sum(rate({container_name=~".*"} | json | level="error" [5m]))
```
Tip: Use Explore view in Grafana for ad-hoc log queries, or use pre-built dashboard panels.
```
# HTTP server metrics (namespace: gobricks_)
gobricks_http_server_request_duration_seconds_bucket
gobricks_http_server_request_body_size_bytes_bucket
gobricks_http_server_response_body_size_bytes_bucket

# Example queries:
rate(gobricks_http_server_request_duration_seconds_count[5m])   # RPS
histogram_quantile(0.95, rate(...[5m]))                         # p95 latency
```
See wiki/PROMETHEUS_GRAFANA_SETUP.md for complete observability guide.
This is a demo application built with GoBricks, not production code. Testing strategy reflects this:
Coverage Target: 60-70% on core business logic (repository queries, service methods, HTTP handlers)
Testing Focus:
Quality Gate: Run `make check` (fmt + lint + tests) before pushing to keep the main branch green.
```bash
go test ./internal/modules/products/...        # Test specific module
go test -v -race ./...                         # All tests with race detector
go test -run TestProductService_Create ./...   # Run specific test
make test                                      # Run all tests (uses race detector)
```
```bash
make test-products-api   # Uses scripts/test-products-api.sh
```
Manual API testing:
```bash
# Ensure services are running
make docker-up

# Start app
make run

# Test endpoints
curl http://localhost:8080/health
curl http://localhost:8080/api/v1/products
```
The project includes comprehensive k6 load testing scripts. See wiki/LOAD_TESTING.md for details.
Quick start:
```bash
# Install k6
make loadtest-install

# Run quick smoke test
make loadtest-smoke

# Run realistic CRUD test
make loadtest-crud
```
Available tests:
TypeScript Support: All load tests are written in TypeScript for better type safety and IDE support. k6 v1.3.0+ has native TypeScript support, so tests run directly without any build step:
```bash
# Type check tests (optional - for catching errors before running)
npm run type-check

# Run tests directly - k6 handles TypeScript transpilation
k6 run loadtests/products-crud.ts
make loadtest-smoke

# No webpack or build step needed!
```
Performance tuning:
- `config.development.yaml` → `database.pool.max.connections`
- `config.development.yaml` → `app.rate.limit`/`burst`
- `database.query.slow.threshold`

Create module directory structure:
```bash
mkdir -p internal/modules/mymodule/{domain,repository,service,http}
```
Implement the `app.Module` interface in `module.go`:
```go
type Module struct {
	deps *app.ModuleDeps
	// ... your fields
}

func (m *Module) Init(deps *app.ModuleDeps) error {
	m.deps = deps
	// Wire up repository → service → handler
	return nil
}

func (m *Module) RegisterRoutes(hr *server.HandlerRegistry, r server.RouteRegistrar) {
	// Register HTTP routes
}
```
Register in cmd/api/main.go:
```go
func getModulesToLoad() []ModuleConfig {
	return []ModuleConfig{
		{Name: "products", Enabled: true, Module: products.NewModule()},
		{Name: "mymodule", Enabled: true, Module: mymodule.NewModule()},
	}
}
```
go-bricks location: `../go-bricks` (local replacement)
When modifying go-bricks:
```bash
cd ../go-bricks
# Make changes
cd ../go-bricks-demo-project
make build   # Automatically picks up local changes
```
go-bricks provides:
- `app` - Application bootstrap and module system
- `config` - Configuration loading with koanf
- `database` - Multi-database support (PostgreSQL, Oracle, MongoDB)
- `messaging` - RabbitMQ AMQP client
- `server` - Echo HTTP server with middleware
- `logger` - Structured logging with zerolog
- `observability` - OpenTelemetry provider (traces + metrics)

Base path: `/api/v1` (configured in `config.yaml`: `server.path.base`)
Health checks:
- `GET /health` - Liveness probe
- `GET /ready` - Readiness probe (checks DB + messaging)

Products module:

- `GET /api/v1/products` - List all products
- `GET /api/v1/products/:id` - Get product by ID
- `POST /api/v1/products` - Create product
- `PUT /api/v1/products/:id` - Update product
- `DELETE /api/v1/products/:id` - Delete product

Configuration files:

- `config.yaml` - Base configuration (not present in this project, uses framework defaults)
- `.env` - Secrets (gitignored, use `.env.example` as a template)

Follow these engineering principles when contributing:
Security is mandatory, not optional:
WhereRaw() must include this annotation:
```go
// SECURITY: Manual SQL review completed - identifier quoting verified
query := qb.WhereRaw("custom_condition")
```
Use go-bricks structured errors where possible. Handlers should return appropriate HTTP status codes.
Use structured logging via `deps.Logger`:
```go
m.logger.Info().
	Str("product_id", id).
	Msg("Product created successfully")
```
Use go-bricks type-safe Filter API for all queries:
```go
qb := database.NewQueryBuilder(database.PostgreSQL)
f := qb.Filter()

// SELECT with filters
query, args, err := qb.Select("id", "name", "price").
	From("products").
	Where(f.Eq("status", "active")).
	Where(f.Gt("price", 10.0)).
	ToSQL()

// UPDATE with filters
query, args, err = qb.Update("products").
	Set("status", "inactive").
	Where(f.Eq("id", productID)).
	ToSQL()

// DELETE with filters
query, args, err = qb.Delete("products").
	Where(f.Eq("id", productID)).
	ToSQL()
```
Filter methods: `Eq`, `NotEq`, `Lt`, `Lte`, `Gt`, `Gte`, `In`, `NotIn`, `Like`, `Null`, `NotNull`, `Between`, `And`, `Or`, `Not`, `Raw`
Important: Always use `ToSQL()` (uppercase), not `ToSql()`, for a consistent API.
- Migration files are named `V1__description.sql`, `V2__another.sql`
- Run them with `make migrate`

All Docker-related files are in the `etc/docker/` directory:
- `docker-compose.yml` - Main compose file with service profiles
- `otel/` - OpenTelemetry Collector configurations (Prometheus vs. New Relic)
- `prometheus/` - Prometheus scrape configuration
- `promtail/` - Promtail log collection configuration
- `loki/` - Loki log storage configuration
- `grafana/provisioning/` - Auto-provisioning configs
  - `datasources/` - Prometheus, Tempo, Loki datasources
  - `dashboards/` - Dashboard provider configuration
  - `dashboards/json/` - Pre-built dashboard JSON files
- `alloy/` - (Reserved for future Grafana Alloy integration)

Service profiles:
- `--profile local` - Prometheus + Grafana + Tempo + Loki (local development)
- `--profile newrelic` - New Relic Cloud integration (production-like)
- `--profile migrations` - Flyway migration runner

When contributing to this showcase project, follow this workflow to maintain quality and consistency:
- `.env.example` - Update if new environment variables are needed

Before pushing to `main`, run the quality gate:
```bash
make check   # Runs: fmt + lint + test
```
Required checks:
- `make fmt` - Code formatting with gofmt
- `make lint` - Static analysis with golangci-lint (must pass with no errors)
- `make test` - All tests pass with race detector

Recommended checks:
- `make coverage` - Review the HTML coverage report; aim for 60-70% on business logic
- `make loadtest-smoke` - Validate performance hasn't regressed

New to this codebase? Follow this tour to understand how everything fits together.
Explore the code in this order:
`cmd/api/main.go` - Application entry point
  - `app.New()` bootstraps the framework
  - `getModulesToLoad()` - how modules are registered

`internal/modules/products/module.go` - Module implementation
  - `app.Module` interface
  - `Init(deps *app.ModuleDeps)`
  - `RegisterRoutes()`

`internal/modules/products/http/` - HTTP handlers
`internal/modules/products/repository/` - Data access layer
  - `getDB(ctx)`
  - `Select`, `Where`, `ToSQL()`

`internal/modules/shared/` - Shared bricks
  - `secrets/` - Multi-tenant AWS Secrets Manager integration

`config.development.yaml` - Configuration
Experience the application running:
Bootstrap environment:
```bash
make dev   # Starts docker-up + runs migrations
```
Start application:
```bash
make run   # Build and start the API server
```
Exercise endpoints:
```bash
# Health checks
curl http://localhost:8080/health
curl http://localhost:8080/ready

# Products CRUD
curl http://localhost:8080/api/v1/products
curl http://localhost:8080/api/v1/products/1

# Or use the test script
make test-products-api
```
Review telemetry:
- All metrics are prefixed with the `gobricks_` namespace

Inspect generated metrics:
```bash
# See what metrics are being emitted
curl http://localhost:8889/metrics | grep gobricks_
```
Run load test:
```bash
make loadtest-smoke   # 30-second quick validation
# Watch metrics in Grafana update in real-time
```
After this tour, you'll understand the module system, dependency injection, observability integration, and how to extend the showcase with new capabilities.
```bash
# Symptom: Configuration error on startup
# Solution: Unset DEBUG environment variable
unset DEBUG && make run
```
```bash
# Stop all services and remove orphaned containers
make docker-down
docker ps -a | grep go-bricks | awk '{print $1}' | xargs docker rm -f
make docker-up
```
```yaml
# Symptom: "no connections available" errors under load
# Solution: Increase pool size in config.development.yaml
database.pool.max.connections: 50   # Increase from default 25
```
```yaml
# Enable slow query logging in config.development.yaml
database.query.slow.threshold: 100ms
database.query.slow.enabled: true
# Then run the application (make run) and check logs for slow queries
```
```bash
# Symptom: Loki datasource works but no logs appear in dashboards

# Solution 1: Check Promtail is running and collecting logs
docker logs go-bricks-promtail

# Solution 2: Verify Loki is receiving data
curl http://localhost:3100/ready
curl http://localhost:3100/metrics | grep loki_ingester_streams_created_total

# Solution 3: Ensure application is running and generating logs
docker ps | grep go-bricks

# Solution 4: Test Loki query manually
curl -G -s "http://localhost:3100/loki/api/v1/query" --data-urlencode 'query={container_name=~".*"}' | jq
```
```bash
# This is expected behavior - collector may show "unhealthy" but still works
# Check if it's actually processing telemetry:
curl http://localhost:8889/metrics | grep gobricks_        # Should show metrics
docker logs go-bricks-otel-collector-local | tail -20      # Should show trace/metric processing
```