This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
HyperFleet API is a stateless REST API that serves as the pure data layer for the HyperFleet cluster lifecycle management system. It provides CRUD operations for clusters and node pools, accepts status updates from adapters, and stores all resource data in PostgreSQL. This API contains no business logic and creates no events - it is purely a data persistence and retrieval service.
HyperFleet API is one component in the HyperFleet architecture:
The API's role is strictly limited to:
- Serving `/{resourceType}` and `/{resourceType}/{id}/statuses` endpoints

**Technology Choices:**

- **Go 1.24**: Required for FIPS compliance in enterprise/government deployments
- **TypeSpec**: Provides a type-safe API specification with better maintainability than writing OpenAPI YAML manually
- **GORM**: Provides database abstraction with migration support and PostgreSQL-specific features
- **Testcontainers**: Enables integration tests with real PostgreSQL instances without external dependencies
```bash
make build        # Build the hyperfleet-api binary to bin/
make install      # Build and install binary to GOPATH/bin
make run          # Run migrations and start server with authentication
make run-no-auth  # Run server without authentication (development mode)

make test                 # Run unit tests
make test-integration     # Run integration tests
make ci-test-unit         # Run unit tests with JSON output for CI
make ci-test-integration  # Run integration tests with JSON output for CI

make verify  # Run source code verification (vet, formatting)
make lint    # Run golangci-lint

make db/setup     # Start PostgreSQL container locally
make db/login     # Connect to local PostgreSQL database
make db/teardown  # Stop and remove PostgreSQL container
./bin/hyperfleet-api migrate  # Run database migrations

make generate         # Regenerate Go models from openapi/openapi.yaml
make generate-vendor  # Generate using vendor dependencies (offline mode)
```
```
hyperfleet-api/
├── cmd/hyperfleet/                  # Application entry point
│   ├── migrate/                     # Database migration command
│   ├── serve/                       # API server command
│   └── environments/                # Environment configuration
│       ├── development.go           # Local development settings
│       ├── integration_testing.go   # Integration test settings
│       ├── unit_testing.go          # Unit test settings
│       └── production.go            # Production settings
├── pkg/
│   ├── api/                         # API models and OpenAPI spec
│   │   ├── openapi/                 # Generated Go models
│   │   │   ├── api/openapi.yaml     # Embedded OpenAPI spec (44KB, fully resolved)
│   │   │   └── model_*.go           # Generated model structs
│   │   └── openapi_embed.go         # Go embed directive for OpenAPI spec
│   ├── dao/                         # Data Access Objects
│   │   ├── cluster.go               # Cluster CRUD operations
│   │   ├── nodepool.go              # NodePool CRUD operations
│   │   ├── adapter_status.go        # Status CRUD operations
│   │   └── label.go                 # Label operations
│   ├── db/                          # Database layer
│   │   ├── db.go                    # GORM connection and session factory
│   │   ├── transaction_middleware.go # HTTP middleware for DB transactions
│   │   └── migrations/              # GORM migration files
│   ├── handlers/                    # HTTP request handlers
│   │   ├── cluster_handler.go       # Cluster endpoint handlers
│   │   ├── nodepool_handler.go      # NodePool endpoint handlers
│   │   └── compatibility_handler.go # API compatibility endpoint
│   ├── services/                    # Service layer (status aggregation, search)
│   │   ├── cluster_service.go       # Cluster business operations
│   │   └── nodepool_service.go      # NodePool business operations
│   ├── config/                      # Configuration management
│   ├── logger/                      # Structured logging
│   └── errors/                      # Error handling utilities
├── openapi/
│   └── openapi.yaml                 # TypeSpec-generated OpenAPI spec (32KB, source)
├── test/
│   ├── integration/                 # Integration tests for all endpoints
│   └── factories/                   # Test data factories
└── Makefile                         # Build automation
```
The API is specified using TypeSpec, which compiles to OpenAPI, which then generates Go models:
```
TypeSpec (.tsp files in hyperfleet-api-spec repo)
        ↓ tsp compile
openapi/openapi.yaml (32KB, uses $ref for DRY)
        ↓ make generate (openapi-generator-cli in Podman)
pkg/api/openapi/model_*.go (Go structs)
pkg/api/openapi/api/openapi.yaml (44KB, fully resolved, embedded in binary)
```
Key Points:
- TypeSpec sources live in the `hyperfleet-api-spec` repository
- `openapi/openapi.yaml` is the source of truth for this repository (generated from TypeSpec)
- `make generate` uses Podman to run openapi-generator-cli, ensuring consistent versions
- The fully resolved spec is embedded into the binary with `//go:embed`

**GORM Session Management:**
```go
// pkg/db/db.go
type SessionFactory interface {
    NewSession(ctx context.Context) *gorm.DB
    Close() error
}
```
Transaction Middleware: All HTTP requests automatically get a database session via middleware at pkg/db/transaction_middleware.go:13:
```go
func TransactionMiddleware(next http.Handler, connection SessionFactory) http.Handler {
    // Creates session for each request
    // Stores in context
    // Auto-commits on success, rolls back on error
}
```
Schema:
```sql
-- Core resource tables
clusters (id, name, spec JSONB, generation, labels, created_at, updated_at)
node_pools (id, name, owner_id FK, spec JSONB, labels, created_at, updated_at)

-- Status tracking
adapter_statuses (owner_type, owner_id, adapter, observed_generation, conditions JSONB)

-- Labels for filtering
labels (owner_type, owner_id, key, value)
```
**Migration System:** Schema migrations use GORM AutoMigrate, run via the `./bin/hyperfleet-api migrate` command.
DAOs provide CRUD operations with GORM:
Example - Cluster DAO:
```go
type ClusterDAO interface {
    Create(ctx context.Context, cluster *api.Cluster) (*api.Cluster, error)
    Get(ctx context.Context, id string) (*api.Cluster, error)
    List(ctx context.Context, listArgs *ListArgs) (*api.ClusterList, error)
    Update(ctx context.Context, cluster *api.Cluster) (*api.Cluster, error)
    Delete(ctx context.Context, id string) error
}
```
Patterns:
- `context.Context` carries the transaction for propagation across layers
- `db.NewContext()` retrieves the middleware-created session from the context
- `ListArgs` carries pagination and search parameters for `List` operations

Handlers follow a consistent pattern at `pkg/handlers/`:
```go
func (h *clusterHandler) Create(w http.ResponseWriter, r *http.Request) {
    // 1. Parse request body
    var cluster openapi.Cluster
    if err := json.NewDecoder(r.Body).Decode(&cluster); err != nil {
        errors.SendError(w, r, errors.GeneralError("invalid request body"))
        return
    }

    // 2. Call service/DAO
    result, err := h.service.Create(r.Context(), &cluster)

    // 3. Handle errors
    if err != nil {
        errors.SendError(w, r, err)
        return
    }

    // 4. Send response
    w.WriteHeader(http.StatusCreated)
    json.NewEncoder(w).Encode(result)
}
```
The API calculates aggregate status from adapter-specific conditions:
Adapter Status Structure:
```json
{
  "adapter": "dns-adapter",
  "observed_generation": 1,
  "conditions": [
    {
      "adapter": "dns-adapter",
      "type": "Ready",
      "status": "True",
      "observed_generation": 1,
      "reason": "ClusterProvisioned",
      "message": "Cluster successfully provisioned",
      "created_at": "2025-11-17T15:04:05Z",
      "updated_at": "2025-11-17T15:04:05Z"
    }
  ]
}
```
Aggregation Logic: The API synthesizes two top-level conditions from adapter reports:
**Available condition:**

- `True` if all required adapters report `Available=True` at any generation
- `observed_generation` is the minimum across all adapters

**Ready condition:**

- `True` if all required adapters report `Available=True` AND their `observed_generation` matches the current resource generation

**Why This Pattern:** Kubernetes-style conditions allow multiple independent adapters to report status without coordination. The API synthesizes `Available` and `Ready` conditions so clients can easily determine resource state.
Endpoints:
- `GET /api/hyperfleet/v1/clusters` - List with pagination and search
- `POST /api/hyperfleet/v1/clusters` - Create new cluster
- `GET /api/hyperfleet/v1/clusters/{cluster_id}` - Get single cluster
- `GET /api/hyperfleet/v1/clusters/{cluster_id}/statuses` - Get adapter statuses
- `POST /api/hyperfleet/v1/clusters/{cluster_id}/statuses` - Report status from adapter

**Key Fields:**
- `spec` (JSON): Cloud provider configuration (region, version, nodes, etc.)
- `generation` (int): Increments on each spec change, enables optimistic concurrency
- `labels` (map): Key-value pairs for categorization and filtering
- `status.observed_generation`: Latest generation that adapters have processed

**Endpoints:**
- `GET /api/hyperfleet/v1/nodepools` - List all node pools
- `GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools` - List cluster's node pools
- `POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools` - Create node pool
- `GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}` - Get single node pool
- `GET /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses` - Get statuses
- `POST /api/hyperfleet/v1/clusters/{cluster_id}/nodepools/{nodepool_id}/statuses` - Report status

**Key Fields:**
- `owner_references.id`: Parent cluster ID (enforced via foreign key)
- `spec` (JSON): Instance type, replica count, disk size, etc.

The `hyperfleet` binary provides two main subcommands:
**`hyperfleet serve` - Start the API Server**

Serves the hyperfleet REST API with full authentication, database connectivity, and monitoring capabilities.
Basic Usage:
```bash
./bin/hyperfleet-api serve                                          # Start server on localhost:8000
./bin/hyperfleet-api serve --api-server-bindaddress :8080           # Custom bind address
./bin/hyperfleet-api serve --enable-authz=false --enable-jwt=false  # No authentication
```
Key Configuration Options:
Server Binding:
- `--api-server-bindaddress` - API server bind address (default: "localhost:8000")
- `--api-server-hostname` - Server's public hostname
- `--enable-https` - Enable HTTPS rather than HTTP
- `--https-cert-file` / `--https-key-file` - TLS certificate files

**Database Configuration:**
- `--db-host-file` - Database host file (default: "secrets/db.host")
- `--db-name-file` - Database name file (default: "secrets/db.name")
- `--db-user-file` - Database username file (default: "secrets/db.user")
- `--db-password-file` - Database password file (default: "secrets/db.password")
- `--db-port-file` - Database port file (default: "secrets/db.port")
- `--db-sslmode` - Database SSL mode: disable | require | verify-ca | verify-full (default: "disable")
- `--db-max-open-connections` - Maximum open DB connections (default: 50)
- `--enable-db-debug` - Enable database debug mode

**Authentication & Authorization:**
- `--enable-jwt` - Enable JWT authentication validation (default: true)
- `--enable-authz` - Enable authorization on endpoints (default: true)
- `--jwk-cert-url` - JWK Certificate URL for JWT validation (default: Red Hat SSO)
- `--jwk-cert-file` - Local JWK Certificate file
- `--acl-file` - Access control list file

**OCM Integration:**
- `--enable-ocm-mock` - Enable mock OCM clients (default: true)
- `--ocm-base-url` - OCM API base URL (default: integration environment)
- `--ocm-token-url` - OCM token endpoint URL (default: Red Hat SSO)
- `--ocm-client-id-file` - OCM API client ID file (default: "secrets/ocm-service.clientId")
- `--ocm-client-secret-file` - OCM API client secret file (default: "secrets/ocm-service.clientSecret")
- `--self-token-file` - OCM API privileged offline SSO token file
- `--ocm-debug` - Enable OCM API debug logging

**Monitoring & Health Checks:**
- `--health-server-bindaddress` - Health endpoints server address (default: "localhost:8080")
- `--enable-health-https` - Enable HTTPS for health server
- `--metrics-server-bindaddress` - Metrics endpoint server address (default: "localhost:9090")
- `--enable-metrics-https` - Enable HTTPS for metrics server

**Performance Tuning:**
- `--http-read-timeout` - HTTP server read timeout (default: 5s)
- `--http-write-timeout` - HTTP server write timeout (default: 30s)
- `--label-metrics-inclusion-duration` - Telemetry collection timeframe (default: 168h)

**`hyperfleet migrate` - Run Database Migrations**

Executes database schema migrations to set up or update the database structure.
Basic Usage:
```bash
./bin/hyperfleet-api migrate                   # Run all pending migrations
./bin/hyperfleet-api migrate --enable-db-debug # Run with database debug logging
```
Configuration Options:
- `--db-host-file`, `--db-name-file`, `--db-user-file`, `--db-password-file`
- `--db-port-file`, `--db-sslmode`, `--db-rootcert`
- `--db-max-open-connections` - Maximum DB connections (default: 50)
- `--enable-db-debug` - Enable database debug mode

**Migration Process:**
All subcommands support these logging flags:
- `--logtostderr` - Log to stderr instead of files (default: true)
- `--alsologtostderr` - Log to both stderr and files
- `--log_dir` - Directory for log files
- `--stderrthreshold` - Minimum log level for stderr (default: 2)
- `-v`, `--v` - Log level for verbose logs
- `--vmodule` - Module-specific log levels
- `--log_backtrace_at` - Emit stack trace at specific file:line

**Initial Setup:**

```bash
# Prerequisites: Go 1.24, Podman, PostgreSQL client tools

# Generate OpenAPI code (required before go mod download)
make generate

# Download Go module dependencies
go mod download

# Initialize secrets directory with default values
make secrets

# Start PostgreSQL
make db/setup

# Build binary
make build

# Run migrations
./bin/hyperfleet-api migrate

# Start server (no authentication)
make run-no-auth
```
When the TypeSpec specification changes:
```bash
# Regenerate Go models from openapi/openapi.yaml
make generate

# This will:
# 1. Remove pkg/api/openapi/*
# 2. Build Docker image with openapi-generator-cli
# 3. Generate model_*.go files
# 4. Copy fully resolved openapi.yaml to pkg/api/openapi/api/
```
Unit Tests:
```bash
OCM_ENV=unit_testing make test
```
Integration Tests:
```bash
OCM_ENV=integration_testing make test-integration
```
Integration tests use Testcontainers to spin up real PostgreSQL instances. Each test gets a fresh database to ensure isolation.
```bash
# Connect to database
make db/login

# Inspect schema
\dt

# Stop database
make db/teardown
```
The application uses the `OCM_ENV` environment variable to select configuration:

- `development` - Local development with localhost database
- `unit_testing` - In-memory or minimal database
- `integration_testing` - Testcontainers-based PostgreSQL
- `production` - Production credentials from secrets

**Environment Implementation:** See `cmd/hyperfleet/environments/framework.go:66`
Each environment can override settings such as the database connection and authentication behavior.
Configuration is loaded from the `secrets/` directory:
```
secrets/
├── db.host                  # Database hostname
├── db.name                  # Database name
├── db.password              # Database password
├── db.port                  # Database port
├── db.user                  # Database username
├── ocm-service.clientId
├── ocm-service.clientSecret
└── ocm-service.token
```
Initialize with defaults:
```bash
make secrets
```
Structured logging is provided via pkg/logger/logger.go:36:
```go
log := logger.NewOCMLogger(ctx)
log.Infof("Processing cluster %s", clusterID)
log.Extra("cluster_id", clusterID).Extra("operation", "create").Info("Cluster created")
```
Log Context:
- `[opid=xxx]` - Operation ID for request tracing
- `[accountID=xxx]` - User account ID from JWT
- `[tx_id=xxx]` - Database transaction ID

Errors use a structured error type defined in `pkg/errors/`:
```go
type ServiceError struct {
    HttpCode int
    Code     string
    Reason   string
}
```
Pattern:
```go
if err != nil {
    serviceErr := errors.GeneralError("Failed to create cluster")
    errors.SendError(w, r, serviceErr)
    return
}
```
Errors are automatically converted to OpenAPI error responses with operation IDs for debugging.
The API supports two modes:
No Auth (development):
```bash
make run-no-auth
```
OCM JWT Auth (production):
Implementation: JWT middleware validates tokens and populates context with user information.
Database sessions are stored in request context via middleware. This ensures each request runs in a single database transaction that is committed on success and rolled back on error.
The `adapter_statuses` table uses `owner_type` + `owner_id` to support multiple resource types:
```sql
SELECT * FROM adapter_statuses
WHERE owner_type = 'Cluster' AND owner_id = '123';
```
This avoids creating separate status tables for each resource type.
The `generation` field increments on each spec update:

```go
cluster.Generation++ // On each update
```
Adapters report `observed_generation` in status to indicate which spec version they have processed. This enables staleness detection: a report whose `observed_generation` trails the resource's `generation` refers to an outdated spec.
The OpenAPI spec is embedded at compile time using Go 1.16+ `//go:embed`:

```go
//go:embed openapi/api/openapi.yaml
var openapiFS embed.FS
```
This means the binary ships with its own fully resolved API specification and can serve it without reading external files.
All 12 API endpoints have integration test coverage in test/integration/:
Test factories in test/factories/ provide consistent test data:
```go
factories.NewClusterBuilder().
    WithName("test-cluster").
    WithSpec(clusterSpec).
    Build()
```
Integration tests use Testcontainers to create isolated PostgreSQL instances:
```go
// Each test suite gets a fresh database
container := testcontainers.PostgreSQL()
defer container.Terminate()
```
This ensures tests are isolated from each other and from any locally running database.
If integration tests fail with PostgreSQL-related errors (missing columns, transaction issues), recreate the database:
```bash
# From project root directory
make db/teardown              # Stop and remove PostgreSQL container
make db/setup                 # Start fresh PostgreSQL container
./bin/hyperfleet-api migrate  # Apply migrations
make test-integration         # Run tests again
```
**Note:** Always run `make` commands from the project root directory where the Makefile is located.
```bash
# Connect to database
make db/login

# Check what GORM created
\dt                  # List tables
\d clusters          # Describe clusters table
\d adapter_statuses  # Check status table

# Inspect data
SELECT id, name, generation FROM clusters;
SELECT owner_type, owner_id, adapter, conditions FROM adapter_statuses;
```
```bash
# Start server
make run-no-auth

# View raw OpenAPI spec
curl http://localhost:8000/openapi

# Use Swagger UI
open http://localhost:8000/openapi-ui
```
The server is configured in cmd/hyperfleet/server/:
Ports:
- `8000` - Main API server
- `8080` - Health endpoints (`/healthz`, `/readyz`)
- `9090` - Metrics endpoint (`/metrics`)

**Middleware Chain:**
Implementation: See cmd/hyperfleet/server/server.go:19
Symptom: Server starts but endpoints return errors about missing tables
**Solution:** Always run `./bin/hyperfleet-api migrate` after pulling code or changing schemas.
Problem: There are two openapi.yaml files:
- `openapi/openapi.yaml` (32KB, source, has `$ref`)
- `pkg/api/openapi/api/openapi.yaml` (44KB, generated, fully resolved)

**Rule:** Only edit the source file. The generated file is overwritten by `make generate`.
Wrong:
```go
db := gorm.Open(...) // Creates new connection
```
Right:
```go
db := db.NewContext(ctx) // Gets session from middleware
```
Always use the context-based session to participate in the HTTP request transaction.
The API automatically calculates status.phase from adapter conditions. Don't set phase manually - it will be overwritten.
Ensure indexes exist for common queries:
```sql
CREATE INDEX idx_clusters_name ON clusters(name);
CREATE INDEX idx_adapter_statuses_owner ON adapter_statuses(owner_type, owner_id);
CREATE INDEX idx_labels_owner ON labels(owner_type, owner_id);
```
Spec and conditions are stored as JSONB, enabling:
```sql
-- Query by spec field
SELECT * FROM clusters WHERE spec->>'region' = 'us-west-2';

-- Query by condition
SELECT * FROM adapter_statuses
WHERE conditions @> '[{"type": "Ready", "status": "True"}]';
```
GORM manages connection pooling automatically. Configure via:
```go
sqlDB, err := db.DB() // GORM v2: *gorm.DB exposes the underlying *sql.DB
if err != nil {
    return err
}
sqlDB.SetMaxOpenConns(100)
sqlDB.SetMaxIdleConns(10)
```
The API is designed to be stateless and horizontally scalable:
**Health Check:** `GET /healthcheck` returns 200 OK when the database is accessible.
**Metrics:** Prometheus metrics are available at `/metrics` (port 9090).
Related repositories:

- `architecture` - HyperFleet architecture documentation
- `hyperfleet-api-spec` - API specification source

Common issues and solutions:
- Verify `make db/setup` was run and the PostgreSQL container is running
- Run `make generate` to regenerate models from the OpenAPI spec
- Verify `OCM_ENV` is set
- Check `go version` matches the required Go 1.24