Platform architecture
Loopback is delivered as three cooperating implementations behind a single product surface (console and public APIs). Buyers and compliance teams care about this split because it defines trust boundaries, shows where authoritative configuration lives, and explains what keeps working when a request returns quickly while work continues in the background.
The names below are product-level. File-level maps for auditors sit with your account team under NDA (see Technical verification).
Control API
Role: The authoritative HTTP layer customers and integrations call.
Responsibilities:
- Authentication and sessions for human accounts.
- Authorization via fine-grained permissions at organization, project, and workspace scope (see Access control).
- Validation of inputs against the same rules the UI uses.
- Persistence of declared intent (organizations, projects, workspaces, hosts, firewalls, monitoring objects, and the rest of the tenant configuration graph).
- Enqueue of asynchronous work so long jobs do not block the HTTP thread (durable task queue and worker messaging).
What you observe: Most mutations return quickly with 202 Accepted or 201 Created semantics while provisioning continues. Read APIs reflect committed intent; actual infrastructure may lag until workers and reconciliation catch up.
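The accept-then-converge behavior above can be sketched with a small in-memory stand-in. All names here (ControlAPI, Worker, Workspace, the status values) are illustrative assumptions, not the shipped code: the point is that a mutation persists intent, enqueues work, and returns fast, while actual state converges later.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Workspace:
    name: str
    declared: str = "running"   # committed intent, visible to reads immediately
    actual: str = "pending"     # converges only after the worker runs

class ControlAPI:
    def __init__(self):
        self.store = {}          # authoritative configuration (system of record)
        self.queue = deque()     # durable task queue (in-memory stand-in)

    def create_workspace(self, name: str):
        ws = Workspace(name)
        self.store[name] = ws    # persist declared intent
        self.queue.append(name)  # enqueue async provisioning
        return 202, {"name": name, "status": ws.actual}  # fast accept

class Worker:
    def __init__(self, api: ControlAPI):
        self.api = api

    def drain(self):
        while self.api.queue:
            name = self.api.queue.popleft()
            # multi-minute provisioning elided; only the state change is shown
            self.api.store[name].actual = "running"

api = ControlAPI()
code, body = api.create_workspace("demo")
# Reads reflect committed intent at once; actual infrastructure lags.
assert code == 202 and api.store["demo"].actual == "pending"
Worker(api).drain()
assert api.store["demo"].actual == "running"
```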
Execution worker
Role: Long-running orchestration for operations that touch many systems in sequence.
Examples of work owned here:
- Workspace lifecycle - creating Kubernetes control planes (Kamaji TenantControlPlane objects), namespaces, kubeconfig materialization, DNS hooks.
- Network lifecycle - overlay create/apply, bridge creation/destruction, distributing WireGuard material to agents.
- Catalog and delivery - installing curated Kubernetes applications and bundle build/deploy flows where enabled.
- Provider-specific setup triggered from control-plane tasks (for example onboarding certain provider types).
What you observe: Multi-minute pipelines, explicit failure and retry behavior, and ordering (for example network before certain host operations). When something is stuck in provisioning, this layer is usually where engineers look first.
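The ordering and retry behavior described above can be condensed into a toy pipeline runner. This is a sketch under stated assumptions, not the worker's real implementation: step names and the retry policy are invented for illustration.

```python
def run_pipeline(steps, max_retries=3):
    """Run (name, fn) steps in order; retry each step on transient failure."""
    for name, step in steps:
        for attempt in range(1, max_retries + 1):
            try:
                step()
                break
            except Exception:
                if attempt == max_retries:
                    raise RuntimeError(f"step {name!r} failed after {attempt} tries")
    return "done"

calls = []
flaky = {"count": 0}

def create_network():
    calls.append("network")

def attach_host():
    flaky["count"] += 1
    if flaky["count"] < 2:           # fail once, then succeed, to show a retry
        raise ConnectionError("transient")
    calls.append("host")

# Ordering matters: network before certain host operations, as noted above.
assert run_pipeline([("network", create_network), ("host", attach_host)]) == "done"
assert calls == ["network", "host"]
```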
Reconciliation scheduler
Role: Continuous alignment between declared configuration and external reality.
It runs on intervals and on events (state changes, missing heartbeats, monitoring configuration changes). It drives compute engines that call public cloud APIs (today Hetzner and IONOS in the shipped code) and performs SSH/kubeadm-style operations on hosts.
What you observe: Status fields that change without a user click, autoscaling decisions, monitoring probe runs, DNS convergence, and agent health checks. This is distinct from the one-off execution worker jobs: reconciliation is the steady-state loop.
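One reconciliation pass reduces to a comparison between declared configuration and observed reality, emitting corrective actions. The function and state names below are hypothetical; this only illustrates the steady-state loop's core step.

```python
def reconcile(declared: dict, observed: dict) -> list:
    """One pass: return the actions needed to converge reality toward intent."""
    actions = []
    for host, want in declared.items():
        have = observed.get(host)
        if have is None:
            actions.append(("create", host))        # declared but missing
        elif have != want:
            actions.append(("update", host, want))  # present but drifted
    for host in observed:
        if host not in declared:
            actions.append(("delete", host))        # present but undeclared
    return actions

declared = {"web-1": "running", "web-2": "running"}
observed = {"web-1": "running", "web-3": "running"}
assert reconcile(declared, observed) == [("create", "web-2"), ("delete", "web-3")]
```

Running this on an interval, and again on events such as a missed heartbeat, is what makes status fields change without a user click.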
Deep dive: Reconciliation.
Data movement (conceptual)
- Authoritative configuration - system of record for intent and metadata.
- Message fabric - connects Control API decisions to execution and, indirectly, to reconciliation triggers.
- Redis (where deployed) - ephemeral publish/subscribe for progress signals (for example workspace state topics) alongside other cache-style uses.
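The ephemeral publish/subscribe pattern above can be shown with a minimal in-memory bus. The topic name and payloads are illustrative assumptions, not the shipped schema; the key property is fire-and-forget delivery with no persistence or replay, matching Redis-style pub/sub.

```python
from collections import defaultdict

class Bus:
    """In-memory stand-in for ephemeral pub/sub (Redis-style, no durability)."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subs[topic]:
            handler(message)   # fire-and-forget: no persistence, no replay

bus = Bus()
seen = []
bus.subscribe("workspace:demo:state", seen.append)   # hypothetical topic name
bus.publish("workspace:demo:state", "provisioning")  # worker progress signal
bus.publish("workspace:demo:state", "ready")
assert seen == ["provisioning", "ready"]
```

A subscriber that connects after a message was published never sees it, which is why these signals carry progress, not authoritative state.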