Hosts, compute profiles, and scaling groups
This page covers servers in a workspace: how you order them, how you list them, and how scaling groups maintain a fleet size.
Hosts
A host is a machine (virtual or dedicated) that belongs to one workspace and one organization.
Listing hosts
Hosts are listed with pagination (default page size, sorted by creation time). Filters respect your host read permission.
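The cursor loop implied above can be sketched as a toy in-memory model; the page shape, field names, and default page size here are assumptions, not the real API:

```python
from dataclasses import dataclass

@dataclass
class HostPage:
    items: list
    next_cursor: "int | None"

def list_hosts_page(all_hosts, cursor=0, page_size=25):
    """Return one page of hosts, sorted by creation time (toy model)."""
    ordered = sorted(all_hosts, key=lambda h: h["created_at"])
    chunk = ordered[cursor:cursor + page_size]
    nxt = cursor + page_size if cursor + page_size < len(ordered) else None
    return HostPage(items=chunk, next_cursor=nxt)

def iter_all_hosts(all_hosts, page_size=25):
    """Follow pagination cursors until the listing is exhausted."""
    cursor = 0
    while cursor is not None:
        page = list_hosts_page(all_hosts, cursor, page_size)
        yield from page.items
        cursor = page.next_cursor
```

The same loop applies regardless of page size: keep requesting pages until the cursor comes back empty.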
Creating a host
You provide:
- hostname — the identifier that Loopback and DNS flows use.
- compute_profile — references a catalog SKU (see Compute providers).
- optional metadata — your tags.
The platform enqueues provisioning against the compute provider backing that profile.
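A minimal sketch of assembling the request body from the fields above; the field names follow this list, while the validation rules and endpoint are assumptions:

```python
def build_create_host_payload(hostname, compute_profile, metadata=None):
    """Assemble a create-host body (hypothetical shape)."""
    if not hostname:
        raise ValueError("hostname is required")
    payload = {"hostname": hostname, "compute_profile": compute_profile}
    if metadata:
        payload["metadata"] = metadata  # optional: your own tags
    return payload
```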
Host lifecycle
Hosts move through created → active → … and may enter error or deleting states. Reconciliation periodically checks agent heartbeat and provider-side health.
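The lifecycle can be pictured as a small state machine. Only created, active, error, and deleting appear in the text above; the exact transition table below is an illustrative assumption:

```python
# Assumed transition table; reconciliation may move an errored host back
# to active, and deleting is treated as terminal.
ALLOWED = {
    "created": {"active", "error"},
    "active": {"error", "deleting"},
    "error": {"active", "deleting"},
    "deleting": set(),
}

def transition(state, target):
    """Apply one lifecycle transition, rejecting illegal moves."""
    if target not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```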
Host-scoped actions (routes)
Under each host, the product exposes operational endpoints (exact menu placement varies):
- Power — start/stop/reset class actions (provider-dependent).
- Firewalls — host-level firewall management.
- Load-balancer / WGLB / LBFW integrations — attach hosts to Loopback-managed LB fabrics where deployed.
See also Agents & shell access for registration of the agent on that host.
Compute profiles (ordering context)
Before ordering, list available profiles for the workspace:
- Prices may show agreement overrides.
- Availability may be false for certain SKUs when the cloud API reports the server type as unavailable in the allowed locations (Hetzner Cloud path).
You do not pass a raw “server type id” as a free string; you pick a profile id that Loopback has validated.
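Selecting a validated profile id from the listing can be sketched as below; the profile shape (`id`, `price`, `available`) mirrors the notes above, and the sample ids are hypothetical:

```python
def orderable_profiles(profiles):
    """Keep only profiles the catalog reports as available, cheapest first."""
    usable = [p for p in profiles if p["available"]]
    return sorted(usable, key=lambda p: p["price"])

def pick_profile(profiles, profile_id):
    """Validate a chosen id against the listing instead of passing a raw
    provider server-type string."""
    for p in orderable_profiles(profiles):
        if p["id"] == profile_id:
            return p
    raise LookupError(f"profile {profile_id!r} not available in this workspace")
```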
Scaling groups
Scaling groups express desired capacity: “keep N similar hosts in this workspace for this profile.”
Typical capabilities:
- Create a scaling group with min/max/desired counts and a compute profile.
- List scaling groups.
- Update desired count or bounds.
- Delete when winding down autoscaling.
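One way to picture how min/max bounds interact with updates to the desired count: the desired value is clamped into [min, max]. The field names and clamping behavior here are assumptions:

```python
def set_desired(group, desired):
    """Update a scaling group's desired count, clamped to its bounds (sketch)."""
    group["desired"] = max(group["min"], min(group["max"], desired))
    return group["desired"]
```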
Reconciliation runs frequently (on the order of minutes) to evaluate metrics, terminate failed nodes, or provision replacements — consult your operator for the exact policy implemented in routines.
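A single reconciliation pass, reduced to a toy: drop failed hosts and provision replacements until the fleet matches the desired count. The real policy (metrics, cooldowns, termination order) lives server-side and is not modeled here:

```python
import itertools

_ids = itertools.count(1000)  # hypothetical id source for new hosts

def reconcile(hosts, desired):
    """hosts: list of {'id': ..., 'state': ...}. Return the adjusted fleet."""
    healthy = [h for h in hosts if h["state"] == "active"]
    while len(healthy) < desired:
        # provision a replacement for each missing or failed node
        healthy.append({"id": next(_ids), "state": "active"})
    return healthy[:desired]  # terminate any surplus beyond desired
```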
Caution: autoscaling spends money. Pair with spending limits and alerts.
Kubernetes-specific note
For Kubernetes workspaces, hosts often become worker nodes. After a node joins the cluster:
- Upgrades may run kubeadm-style operations during version bumps.
- Cordoned/drained nodes are managed via Kubernetes API helpers (see Kubernetes user features).
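The cordon-then-drain ordering can be modeled as a toy; the real flow goes through the Kubernetes API (marking the node unschedulable, then evicting pods while leaving DaemonSet-managed pods in place):

```python
def cordon(node):
    """Mark a node unschedulable (toy model of kubectl cordon)."""
    node["unschedulable"] = True
    return node

def drain(node, pods):
    """Evict non-DaemonSet pods; refuse to drain an uncordoned node."""
    if not node.get("unschedulable"):
        raise RuntimeError("cordon the node before draining it")
    return [p for p in pods if p.get("owner") == "DaemonSet"]
```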