
Agent and fleet management

This page explains why Loopback installs an agent on your servers, what problems it solves for buyers and operators, and how it fits next to Kubernetes, bare metal workspaces, and cloud APIs.


What “fleet management” means here

In Loopback, a fleet is the set of hosts that belong to your workspaces under an organization. Those hosts may be:

  • Cloud VMs created through Loopback compute flows.
  • Dedicated servers from supported providers.
  • Existing machines where you install the agent yourself (subject to your operator’s policy).

Fleet management is the combination of:

  • Identity - each host proves it belongs to a workspace.
  • Health - heartbeats and status feed reconciliation and the UI.
  • Operations - patching hooks, network mesh configuration, optional high-performance firewall modules, and break-glass remote access when you allow it.
  • Policy - RBAC governs who may mint install tokens, open shells, or trigger power actions (see Access control & permissions and Auditing and fine-grained access).

The agent’s role (in one paragraph)

The Loopback agent is a small, long-running service on each Linux host. It maintains an authenticated channel to the control plane, applies declared configuration (network mesh, firewall modules, metrics hooks, and more), and exposes controlled operational surfaces (for example shell sessions) only when your roles allow it. It is not a replacement for your configuration management tool of choice, but it is the integration point that makes Loopback’s networking, edge, and lifecycle features real on the metal.
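The "applies declared configuration" step above can be pictured as a diff between the host's local state and what the control plane declares. The `diff_config` function and the module names below are hypothetical illustrations, not Loopback APIs.

```python
def diff_config(current: dict, desired: dict) -> tuple[dict, list]:
    """Return (modules to apply, modules to remove) so local state
    converges on the configuration declared by the control plane."""
    to_apply = {name: cfg for name, cfg in desired.items()
                if current.get(name) != cfg}    # new or changed modules
    to_remove = [name for name in current if name not in desired]
    return to_apply, to_remove

# Example: the control plane now declares a new mesh subnet and an eBPF
# firewall, while the host still runs an old mesh config plus a shell module.
current = {"mesh": {"subnet": "10.0.0.0/24"}, "shell": {"enabled": True}}
desired = {"mesh": {"subnet": "10.0.1.0/24"}, "firewall": {"mode": "ebpf"}}
to_apply, to_remove = diff_config(current, desired)
print(to_apply)   # mesh (changed) and firewall (new)
print(to_remove)  # ['shell']
```

In a real agent this diff runs inside a loop on an authenticated channel: fetch the declared configuration, apply the difference, report a heartbeat, and sleep.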


Why buyers care

  • Visibility - You see which servers are managed, which version of the agent they run, and whether they are healthy enough for automation.
  • Consistency - Overlay networking and optional eBPF firewall modules apply the same policy model across hosts without hand-editing each box.
  • Security - Install tokens are workspace-scoped and revocable; sensitive actions are RBAC-gated and should be audit-logged.
  • Velocity - New hosts can join a workspace quickly after cloud provisioning or manual imaging, without waiting for a separate “bootstrap team” for every change.

Lifecycle at a glance

  1. Provision or register a host - Loopback creates the host record, or you adopt existing hardware (see Fleet visibility and existing servers).
  2. Mint an agent token - a workspace-scoped credential used only for first join (see Agents, tokens, and shell access).
  3. Run the install script - your operator publishes the URL; the script installs the agent and enables it as a systemd service (the typical Linux path).
  4. Check-in - the host appears as managed; reconciliation can drive network, firewall, and update behavior.
  5. Operate - maintenance windows, scaling groups, monitoring sources, and optional product modules engage through the agent channel.

For versioning and staged rollouts of the agent itself, see Agent install and updates.


Relationship to Kubernetes

  • Kubernetes workspaces: worker nodes are hosts in Loopback. The cluster’s control plane is provisioned separately (Kamaji-based path in standard deployments), but node networking, host firewall modules, and remote operations still flow through the agent story.
  • Bare metal workspaces: there is no tenant Kubernetes API as the center of gravity; the agent + hosts story is even more prominent for day-two operations.

Relationship to cloud provider APIs

Loopback still talks to hyperscaler APIs to create or destroy servers. The agent does not replace that. Instead, it closes the gap after the instance exists: mesh, policy, observability hooks, and human break-glass are unified in the product instead of scattered across SSH runbooks.


Governance checklist (for procurement)

  • Who may mint agent tokens? Tie this to least privilege and ticketed break-glass for production.
  • Are shell sessions enabled? If yes, require SOC 2-style logging and time bounds.
  • Which optional modules are deployed? Start with Agent modules and Host firewall (eBPF).
  • Air-gapped or restricted egress? Plan mirroring of agent packages with your operator; do not assume every host can reach the public internet.
