Reconciliation
Loopback is not only “API plus persistence”. A reconciliation scheduler continuously reconciles many entity types so that declared configuration matches reality in clouds, Kubernetes, DNS, and agents.
This page explains what that means for operations, procurement, and audit: what is guaranteed to be eventually consistent, and what is explicitly not reconciled in the registry.
Conceptual model
Loopback keeps a desired state for tenant objects (organizations, projects, workspaces, hosts, DNS records, monitoring definitions, load balancers, scaling groups, and so on). The world outside - cloud APIs, Kubernetes, agents on servers, DNS providers - is only eventually consistent with that record.
The reconciliation scheduler closes the gap on a schedule and on events. It does not replace the execution worker, which runs large one-off pipelines (for example creating a workspace). Think of the worker as project-style jobs and the scheduler as steady-state hygiene and policy enforcement across the configuration graph.
That split matters when you debug: a stuck create is usually queued execution work, while drifting status or autoscale lag is often reconciliation and provider or agent health.
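The desired-versus-actual split can be sketched as a generic reconcile step. This is a minimal illustration, not Loopback's implementation; the `HostState` type and the scaling-group example are assumptions for the sketch:

```python
from dataclasses import dataclass

@dataclass
class HostState:
    count: int  # e.g. hosts in a scaling group

def reconcile(desired: HostState, actual: HostState) -> list[str]:
    """Return the actions needed to converge actual state onto desired state."""
    actions = []
    if actual.count < desired.count:
        actions.append(f"provision {desired.count - actual.count} host(s)")
    elif actual.count > desired.count:
        actions.append(f"terminate {actual.count - desired.count} host(s)")
    return actions  # an empty list means no drift

# A scaling group declared at 5 hosts but observed at 3:
print(reconcile(HostState(5), HostState(3)))  # ['provision 2 host(s)']
```

The key property is that the step is a pure function of desired and observed state, so it can run repeatedly and converge regardless of how the drift arose.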
Why reconciliation exists
Infrastructure drifts:
- Cloud APIs return timeouts; creates half-finish.
- Agents miss heartbeats after network cuts.
- DNS providers lag propagation.
- Object storage policies change out-of-band.
Reconciliation retries, re-reads, and emits status so dashboards and auditors see current truth rather than last-click truth.
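The retry-and-re-read behavior can be sketched as follows. The callables `read_remote` and `apply_change` are placeholders for provider-specific code, and the backoff parameters are illustrative, not Loopback defaults:

```python
import time

def reconcile_with_retries(read_remote, apply_change,
                           max_attempts=3, base_delay=1.0):
    """Re-read remote state before each attempt and retry on timeouts
    with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            current = read_remote()       # re-read: never trust cached state
            return apply_change(current)  # an idempotent step towards desired
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # surface the failure as entity status after the last try
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying
```

Re-reading before each attempt is what turns a half-finished create into something recoverable: the next pass observes what actually exists and applies only the remaining delta.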
Major reconciled entities
The scheduler registers reconciliation policies for entity types including the following (the list is illustrative of the shipped product):
- Workspaces - cluster alignment, seeding, health (on the order of hours between full passes; also event-triggered).
- Workspace seeding - follow-up for Kubernetes catalog seeding flags.
- Scaling groups - desired versus actual host counts (high frequency).
- Load balancers - backend sync and related maintenance triggers.
- Compute providers - credential validation and inventory-style sync where implemented.
- Object stores - usage tracking and policy sync.
- Hosts - liveness and agent heartbeat checks (very frequent).
- DNS zones and DNS records - sync and propagation checks.
- Organizations - quota and billing alignment (on the order of a day).
- Agents - connectivity and version checks.
- Update deliveries - staged rollouts of agent or platform updates.
- Monitoring objects - probe execution and condition evaluation driving alerts.
- Monitoring sources - health of probe endpoints tied to agents and hosts.
Some entities intentionally have no dedicated reconciliation routine (for example monitoring alerts and notification channels): they are produced or configured through other paths and consumed by humans and integrations.
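A policy registry of this shape can be sketched as a table from entity type to full-pass interval. The type names and intervals below are illustrative only; the shipped product's schedules differ:

```python
from datetime import timedelta

# Illustrative policy table: entity type -> interval between full passes.
RECONCILE_POLICIES = {
    "host": timedelta(seconds=30),        # liveness checks run very frequently
    "scaling_group": timedelta(minutes=1),  # desired vs actual counts, high frequency
    "workspace": timedelta(hours=4),        # on the order of hours
    "organization": timedelta(days=1),      # on the order of a day
}

def is_due(entity_type: str, seconds_since_last_pass: float) -> bool:
    """Entity types without a policy (e.g. monitoring alerts) are never scheduled."""
    policy = RECONCILE_POLICIES.get(entity_type)
    if policy is None:
        return False
    return seconds_since_last_pass >= policy.total_seconds()
```

Note that the absence of an entry is meaningful: entities like monitoring alerts simply have no row, so the scheduler never visits them.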
Triggers versus schedules
Each reconcilable entity type may be scheduled on:
- Fixed interval (every minute, hour, and so on).
- Events such as state changes, missing heartbeats, and monitoring configuration changes.
Practical impact
- Changes you make may not be reflected instantly in every cloud; allow minutes for convergence.
- Repeated flickering status often means flapping dependencies (DNS, provider API, agent offline).
Agents and heartbeats
Hosts and agents participate in fast loops. If an agent stops heartbeating:
- The host may move to degraded states.
- Monitoring sources may be flagged unavailable.
- Automatic remediation may be operator-defined (replace node, open ticket).
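The heartbeat-driven state transitions above can be sketched as a simple staleness classifier. The thresholds and state names here are illustrative, not the product's actual values:

```python
def host_status(last_heartbeat: float, now: float,
                degraded_after: float = 90.0, down_after: float = 300.0) -> str:
    """Classify a host by heartbeat age, in seconds (thresholds illustrative)."""
    age = now - last_heartbeat
    if age >= down_after:
        return "down"       # operator-defined remediation would hook in here
    if age >= degraded_after:
        return "degraded"
    return "healthy"
```

Because the check compares timestamps rather than counting missed messages, a host recovers automatically on its next heartbeat with no extra bookkeeping.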
Tasks you enqueue versus reconciliation you inherit
API-initiated tasks - create workspace, upgrade Kubernetes, deploy bundle, create network bridge - are explicit jobs owned primarily by the execution worker path (names illustrative).
Reconciliation - happens continuously even if nobody logs in, and re-asserts desired configuration after drift.