This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
For detailed architecture of each package, see the core packages overview below.
You are a Staff Backend Software Engineer with FAANG-level standards. Hold all code to that bar. No solution should compromise that standard; if a trade-off is required, flag it explicitly.
Furio is a distributed workflow orchestration library written in Go. It uses PostgreSQL as the sole infrastructure (no Kafka, Redis, or RabbitMQ) for event storage, coordination, and real-time messaging via LISTEN/NOTIFY.

Key characteristics:

- PostgreSQL-only infrastructure: event storage, coordination, and LISTEN/NOTIFY messaging
- Event-sourced, partitioned workflow execution
- Multi-leader cluster coordination
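Furio drives LISTEN/NOTIFY internally, but for reference, the same mechanism can be exercised from Go with pgx. This is a standalone sketch, not Furio's code; the channel name `workflow_events` is made up for illustration:

```go
import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

func listen(ctx context.Context, dsn string) error {
	conn, err := pgx.Connect(ctx, dsn)
	if err != nil {
		return err
	}
	defer conn.Close(ctx)

	// Subscribe to a channel; hypothetical name for illustration.
	if _, err := conn.Exec(ctx, "LISTEN workflow_events"); err != nil {
		return err
	}
	for {
		// Blocks until a NOTIFY arrives on a subscribed channel.
		n, err := conn.WaitForNotification(ctx)
		if err != nil {
			return err
		}
		fmt.Println("notify:", n.Channel, n.Payload)
	}
}
```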
```bash
make build    # Build binary to bin/fixbelly
make test     # Run all unit tests: go test -v ./...
make deps     # Download and tidy dependencies
make lint     # Run golangci-lint
make fmt      # Format code
make swagger  # Generate Swagger docs

# Run a single test
go test -v ./path/to/package -run TestName -count=1

# Kubernetes deployment
make deploy             # Build and deploy to local minikube
make deploy-status      # View deployment status
make helm-logs          # View furio logs
make helm-port-forward  # Port forward (HTTP=8080, gRPC=63001, PG=5432)
```
Core packages:

- **Cluster** (`automation/cluster/`) - Multi-leader coordination. Key files: `tunnel.go`, `leader_router.go`, `discovery_manager.go`
- **Workflow** (`automation/workflow/`) - Event-sourced orchestration. Key files: `manager.go`, `builder.go`, `service.go`
- **Event Bus** (`automation/workflow/event/`) - Partitioned event system; `ActiveWorkflowState` lives on the Leader. Key files: `handler.go`, `active_workflow_state.go`, `pg_writer.go`, `signal_core.go`
- **Migration** (`automation/migration/`) - Schema management
- **Injector Framework** (`pkg/injector/`) - 4-layer architecture
```
┌─────────────────────────────────────────────────────────────────────────────┐
│                          WORKFLOW EXECUTION FLOW                             │
└─────────────────────────────────────────────────────────────────────────────┘

1. WORKER REGISTRATION (startup)
   SDK Worker ──RegisterTaskType──► Router (validates task schema, stored locally)
   SDK Worker ──RegisterWorker(taskTypes[])──► Router (stores worker→taskTypes mapping)
   Router uses taskTypes to route tasks to workers that can handle them

2. WORKER LONG-POLLING (continuous)
   SDK Worker ──PollTask(taskTypes, 30s)──► Router
   Router blocks until task available or timeout

3. WORKFLOW PUBLISH (SDK submits workflow)
   SDK ──ExecuteWorkflow──► Router ──PublishToLeader──► Leader
   Leader: creates ActiveWorkflowState → buffers events → replicates to Sentinels
   Leader: dispatches stage 0 tasks to Routers via gRPC
   Leader: prefetches stages 1 and 2 asynchronously

4. TASK DISPATCH (Leader → Router → Worker)
   Leader ──gRPC stream──► Router (task routed by task_type)
   Router: looks up registered workers for task_type
   Router ──TaskRequest──► SDK Worker (via long-poll response)

5. TASK EXECUTION (Worker executes locally)
   SDK Worker: handler(input) → output

6. TASK COMPLETION (Worker → Router → Leader)
   SDK Worker ──CompleteTask(result)──► Router ──PublishToLeader──► Leader
   Leader: buffers completion event
   Leader: ActiveWorkflowState.RecordTaskCompletion() (in-memory)

7. STAGE ADVANCEMENT (in-memory, no DB round-trip)
   Leader: checks IsStageComplete() in ActiveWorkflowState
   If complete: AdvanceToNextStage() → dispatches next stage tasks immediately
   Leader: prefetches N+1 and N+2 stages asynchronously
   (Repeat from step 4)

8. WORKFLOW COMPLETION (final stage done)
   Leader: IsLastStage = true, onWorkflowComplete()
   Leader: broadcasts WorkflowCompleted event to all Routers
   SDK (if live workflow): GetTaskResult() returns final result
```
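Steps 2, 5, and 6 amount to a long-poll loop on the worker side. The sketch below is illustrative only - the `Task` struct and `RouterClient` interface are assumptions for exposition, not the SDK's real types:

```go
import (
	"context"
	"time"
)

type Task struct {
	ID    string
	Type  string
	Input []byte
}

type RouterClient interface {
	PollTask(ctx context.Context, taskTypes []string, timeout time.Duration) (*Task, error)
	CompleteTask(ctx context.Context, taskID string, result []byte) error
}

type Handler func(input []byte) ([]byte, error)

func pollLoop(ctx context.Context, rc RouterClient, handlers map[string]Handler) {
	taskTypes := make([]string, 0, len(handlers))
	for t := range handlers {
		taskTypes = append(taskTypes, t)
	}
	for ctx.Err() == nil {
		// Step 2: long-poll the Router until a task arrives or the 30s timeout fires.
		task, err := rc.PollTask(ctx, taskTypes, 30*time.Second)
		if err != nil || task == nil {
			continue // timeout or transient error: poll again
		}
		handler, ok := handlers[task.Type]
		if !ok {
			continue
		}
		// Step 5: execute the registered handler locally.
		out, err := handler(task.Input)
		if err != nil {
			continue // a real worker would report the failure; omitted in this sketch
		}
		// Step 6: completion flows Worker -> Router -> Leader.
		_ = rc.CompleteTask(ctx, task.ID, out)
	}
}
```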
Key Points:

- Workers declare the task types they handle (e.g., `echo`, `process_order`) during `RegisterWorker`. The Router stores this mapping locally and uses it to route incoming tasks to capable workers.
- The Leader maintains one `ActiveWorkflowState` per workflow.
- Partition ownership is derived arithmetically:

```
partition_id = correlation_id % partition_count
partition_owner = partition_id % leader_count
```
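In Go the routing math is just two modulo operations (a sketch; function and variable names are illustrative):

```go
func partitionFor(correlationID, partitionCount int64) int64 {
	return correlationID % partitionCount
}

func ownerFor(partitionID, leaderCount int64) int64 {
	return partitionID % leaderCount
}

// With the defaults (32 partitions, 4 leaders), correlation_id 1234
// maps to partition 1234 % 32 = 18, owned by leader 18 % 4 = 2.
```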
Partition count is configurable via `POSTGRES_PARTITION_COUNT` or `SchemaConfig.PartitionCount`.
Application routes writes directly to partition-owning PG nodes:
- `MultiPgPool` maintains connections to all PG nodes
- `GetDBForPartition(partition)` returns the connection to the owning node
- Writes are spread by `partition % node_count` for parallel WAL writes
- Events are inserted via the `insert_workflow_events()` function
- Node topology comes from the `POSTGRES_NODES` env var (JSON array with partition assignments)

Environment variables:

- `POSTGRES_PARTITION_COUNT` - Number of correlation partitions (default: 32)
- `MAX_CLUSTER_COUNT` - Number of cluster leaders (default: 4)
- `POSTGRES_NODES` - Multi-PG node configuration (JSON)
- `KUBERNETES_SERVICE_HOST` - Auto-detected for K8s service discovery
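A minimal sketch of that routing; only `MultiPgPool` and `GetDBForPartition` are named in this doc, so the struct layout here is an assumption:

```go
import "github.com/jackc/pgx/v5/pgxpool"

type MultiPgPool struct {
	nodes []*pgxpool.Pool // one pool per PG node, ordered per POSTGRES_NODES
}

// GetDBForPartition maps a partition to its owning node via
// partition % node_count, letting writes hit independent WALs in parallel.
func (m *MultiPgPool) GetDBForPartition(partition int) *pgxpool.Pool {
	return m.nodes[partition%len(m.nodes)]
}
```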
NO COMMENTS: Do not add comments, docstrings, or inline explanations to code. The code must be self-documenting through clear naming. The only acceptable comments are:

Other style rules:
STOP and consult the user if you are unsure about:
Do NOT assume and move forward. The user has deep knowledge of this system and can always help clarify. It is better to ask than to make incorrect assumptions that could break the distributed system.
Key dependencies:

- `testcontainers-go` for PostgreSQL in unit tests
- `go.uber.org/zap` for structured logging

The database schema is managed by `SchemaManager`. Key tables in the `automation` schema:
- `automation_config` - Cluster configuration (leader_count, partition_count)
- `node_registry` - Node registration and roles
- `leader_election` - UNLOGGED table for failover voting
- `workflow_events_{type}_wf_{partition}` - Partitioned events (no indexes, COPY writes)
- `workflow_tasks` - Indexed observability table for dashboard queries
- `node_subscriptions` - Worker task type registrations
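The partitioned event tables follow the `workflow_events_{type}_wf_{partition}` pattern; a hypothetical helper makes the naming concrete:

```go
import "fmt"

// Hypothetical helper illustrating the event-table naming scheme.
func eventTableName(eventType string, partition int) string {
	// e.g. eventTableName("order", 7) -> "workflow_events_order_wf_7"
	return fmt.Sprintf("workflow_events_%s_wf_%d", eventType, partition)
}
```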
The SDK (`/Users/yadunandan/GolandProjects/furio_sdk`) is the worker node - similar to Temporal's worker model:
```
┌─────────────────────────┐            ┌─────────────────────────────────┐
│    Customer Cluster     │            │          Furio Cluster          │
│                         │    gRPC    │                                 │
│  ┌─────────────────┐    │   Stream   │   ┌────────┐      ┌────────┐   │
│  │   furio_sdk     │◄───┼────────────┼──►│ Router │◄────►│ Leader │   │
│  │  (worker node)  │    │            │   └────────┘      └────────┘   │
│  └─────────────────┘    │            │                                 │
│                         │            │                                 │
│  • Submit workflow      │            │                                 │
│  • Execute tasks        │            │                                 │
│  • Handle results       │            │                                 │
└─────────────────────────┘            └─────────────────────────────────┘
```
Follows Temporal patterns: workflow definitions, activity handlers, task queues, and worker registration.
Integration tests run against a real Kubernetes cluster (minikube). After any code change:
```bash
# 1. Build and deploy to minikube (rebuilds binary + Docker image)
make deploy

# 2. Port forward to access services locally
make helm-port-forward
# Exposes: HTTP=8080, Automation=8092, Dispatcher=9090, gRPC=63001, PostgreSQL=5432

# 3. View logs
make helm-logs

# 4. Stop port forwarding
make helm-port-forward-stop

# 5. Stop/uninstall the cluster
make deploy-uninstall
```
Always run `make deploy` after code changes - this rebuilds everything and redeploys to minikube.
The SDK repo (`/Users/yadunandan/GolandProjects/furio_sdk`) contains the worker and workflow examples below.
Start a worker (long-polling mode):
```bash
cd furio_sdk
go run cmd/example/main.go -orchestrator=localhost:9090 -id=worker-1
```
Registers task handlers: `dummy`, `noop`, `echo`, `delay`, `cpu_work`.
Execute a sample workflow:
```bash
cd furio_sdk
go run cmd/workflow/main.go -server=localhost:9090
```
Runs a 3-step workflow: `echo` → `dummy` → `cpu_work`.
```go
w := worker.New(worker.Config{
	WorkerID:         "worker-1",
	OrchestratorAddr: "localhost:9090",
})
w.RegisterTask("my_task", myHandler)
w.Connect()
w.Start()
```
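For illustration, `myHandler` could look like the following - the exact handler signature is an assumption, so check the SDK's `worker` package for the real one:

```go
import "context"

// Hypothetical task handler: echoes the input back as the task result.
func myHandler(ctx context.Context, input []byte) ([]byte, error) {
	return input, nil
}
```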
```go
worker, _ := workflow.NewWorker("localhost:9090", grpc.WithInsecure())
worker.ExecuteWorkflow(ctx, func(ctx *workflow.Context) error {
	result, _ := ctx.ExecuteTask("echo", input)
	_ = result
	return nil
})
```
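Extending that pattern to the sample 3-step workflow (`echo` → `dummy` → `cpu_work`); the input/output plumbing here is illustrative:

```go
worker, _ := workflow.NewWorker("localhost:9090", grpc.WithInsecure())
worker.ExecuteWorkflow(ctx, func(ctx *workflow.Context) error {
	// Each stage feeds its result into the next task.
	echoed, err := ctx.ExecuteTask("echo", input)
	if err != nil {
		return err
	}
	dummied, err := ctx.ExecuteTask("dummy", echoed)
	if err != nil {
		return err
	}
	_, err = ctx.ExecuteTask("cpu_work", dummied)
	return err
})
```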