
Fleet visibility and existing servers

Loopback is often introduced as a Kubernetes workspace product—and that is a major use case—but the same agent-centric model also supports teams who need to operate servers as a fleet even when Kubernetes is not the day-to-day control plane.

This page explains how workspaces, hosts, and the agent combine to give you generic compute tracking and automation surfaces for existing hardware as well as cloud-born machines.


Two workspace styles (same building blocks)

Kubernetes workspaces

You get a tenant Kubernetes API for application teams, and Loopback still models each worker node as a host with an agent (see the sketch after this list). That means:

  • Fleet operations (patch windows, mesh networking, optional eBPF firewall module, break-glass shell) apply per node.
  • Cluster-level concerns (Deployments, Services, ingress) remain Kubernetes-native.
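
To make the node-as-host relationship concrete, here is a minimal sketch of listing the hosts behind a Kubernetes workspace. The base URL, endpoint path, and field names (hosts, hostname, agent_version) are illustrative assumptions, not the documented Loopback API.

    # Sketch: enumerate the hosts (worker nodes) behind a Kubernetes
    # workspace. Endpoint and field names are assumptions for illustration.
    import requests

    API = "https://api.loopback.cloud"  # assumed base URL

    def list_workspace_hosts(token: str, workspace_id: str) -> list[dict]:
        resp = requests.get(
            f"{API}/v1/workspaces/{workspace_id}/hosts",
            headers={"Authorization": f"Bearer {token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["hosts"]

    for host in list_workspace_hosts("<admin-token>", "ws-demo"):
        # Each worker node appears as an ordinary host record, so fleet
        # operations (patching, mesh, firewall) can address it directly.
        print(host["hostname"], host["agent_version"])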

Bare metal workspaces

You get hosts, networking, firewalls, load balancers, DNS, monitoring, and related features without centering the workflow on kubectl. This is a strong fit when:

  • You run traditional stacks (databases, queues, custom middleware) on dedicated servers.
  • You are migrating toward Kubernetes later, but need uniform operations now.
  • You have regulatory or latency reasons not to containerize everything immediately.

See also the Bare metal provisioning roadmap for PXE / DHCP / imaging-style bring-up (described as coming soon in the product narrative).


What “generic compute tracking” means

Once a server runs the Loopback agent and is attached to a workspace, the control plane can maintain a durable record of that machine independent of whether Kubernetes schedules pods on it (one possible record shape is sketched after this list):

  • Identity - hostname, provider linkage (if any), workspace membership.
  • Health - heartbeats and agent version, enabling reconciliation to flag stale or degraded hosts.
  • Capability - eligibility as a monitoring source, participation in WireGuard meshes, attachment to firewall policy.
  • Lifecycle - power actions where the provider allows; scaling groups when hosts are cloud-born and profiles exist.
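
To picture the four facets together, here is a hedged sketch of a host record as a data structure, with a simple staleness check of the kind reconciliation might run. Every field name and the five-minute threshold are assumptions for illustration, not Loopback's actual schema.

    # Sketch: one possible shape for a durable host record covering
    # identity, health, capability, and lifecycle. Field names and the
    # staleness threshold are illustrative assumptions, not Loopback's schema.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class HostRecord:
        # Identity
        hostname: str
        workspace_id: str
        provider: str | None = None             # None for adopted hardware
        # Health
        agent_version: str = "unknown"
        last_heartbeat: datetime | None = None  # assumed UTC-aware
        # Capability
        monitoring_source: bool = False
        in_wireguard_mesh: bool = False
        firewall_policy: str | None = None
        # Lifecycle
        power_actions: bool = False             # only where the provider allows

        def is_stale(self, max_age: timedelta = timedelta(minutes=5)) -> bool:
            # Reconciliation can flag hosts whose heartbeat is too old.
            if self.last_heartbeat is None:
                return True
            return datetime.now(timezone.utc) - self.last_heartbeat > max_age

A reconciliation loop that iterates such records and surfaces any where is_stale() returns True is the "flag stale or degraded hosts" behavior described above.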

From a buyer's perspective, this is the difference between “we SSH and hope” and “we have a system of record that matches intent to reality.”


Bringing existing hardware under management

Typical patterns (exact steps depend on operator policy; a rough end-to-end sketch follows this list):

  1. Create or select a workspace - often bare metal if no cluster is required.
  2. Network reachability - the server must reach the Loopback API (directly or via a corporate proxy / mirror).
  3. Mint an agent token - workspace-scoped, revocable (see Agents, tokens, and shell access).
  4. Install the agent - script-driven bootstrap on supported Linux.
  5. Validate - host appears in inventory; mesh and policy features converge according to your network design.
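
As a rough illustration of steps 2, 3, and 5, here is a sketch that checks API reachability, mints a workspace-scoped agent token, and polls inventory until the new host appears. All endpoint paths, payload fields, and the polling cadence are assumptions; the authoritative token and install flow is in Agents, tokens, and shell access.

    # Sketch of the adoption flow around the agent install (step 4, which
    # runs on the server itself). Endpoints and fields are illustrative
    # assumptions, not the documented Loopback API.
    import time
    import requests

    API = "https://api.loopback.cloud"  # assumed base URL

    def api_reachable() -> bool:
        # Step 2: the server (or its proxy / mirror) must reach the API.
        try:
            return requests.get(f"{API}/healthz", timeout=5).ok
        except requests.RequestException:
            return False

    def mint_agent_token(admin_token: str, workspace_id: str) -> str:
        # Step 3: mint a workspace-scoped, revocable agent token.
        resp = requests.post(
            f"{API}/v1/workspaces/{workspace_id}/agent-tokens",
            headers={"Authorization": f"Bearer {admin_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()["token"]

    def wait_for_host(admin_token: str, workspace_id: str, hostname: str) -> None:
        # Step 5: validate that the host shows up in inventory once the
        # script-driven agent bootstrap has run on the server.
        for _ in range(30):
            resp = requests.get(
                f"{API}/v1/workspaces/{workspace_id}/hosts",
                headers={"Authorization": f"Bearer {admin_token}"},
                timeout=10,
            )
            resp.raise_for_status()
            if any(h["hostname"] == hostname for h in resp.json()["hosts"]):
                print(f"{hostname} is in inventory")
                return
            time.sleep(10)
        raise TimeoutError(f"{hostname} never appeared in inventory")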

Important: adopting existing servers does not mean Loopback will image or PXE-boot them today. Full bare metal provisioning is a roadmap topic; see the Bare metal provisioning roadmap referenced above.


Governance: who can see and touch servers?

All sensitive paths—tokens, shell, power, firewall policy, LB integration—should be modeled with custom roles and least privilege (see Auditing and fine-grained access).
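
As one way to picture least-privilege modeling, here is a sketch of a custom role that grants break-glass shell on a single workspace while withholding power, firewall, and token actions. The permission strings and role shape are illustrative assumptions; the real vocabulary is covered in Auditing and fine-grained access.

    # Sketch: a custom role scoped to one workspace, granting audited
    # break-glass shell but not power, firewall, or token control.
    # Permission names are illustrative assumptions, not Loopback's vocabulary.
    on_call_responder = {
        "name": "on-call-responder",
        "scope": "workspace:ws-prod-metal",  # hypothetical workspace id
        "allow": [
            "hosts:read",      # see inventory and health
            "hosts:shell",     # break-glass shell, audited
        ],
        "deny": [
            "hosts:power",     # no reboots or power-off
            "firewall:write",  # no policy changes
            "tokens:mint",     # cannot mint new agent tokens
        ],
    }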


When to choose bare metal vs Kubernetes workspace

Situation                                          | Lean toward
Primary interface is Helm / GitOps / kubectl       | Kubernetes workspace
Primary interface is SSH / systemd / metal imaging | Bare metal workspace (today) + roadmap for deeper provisioning
Mixed estate                                       | Often separate workspaces per environment, or a Kubernetes workspace for the cluster plus adjacent metal tracked as hosts alongside it

Your solutions architect can help you draw boundaries so RBAC and billing stay clean.

