
Scaling groups

Scaling groups maintain a desired fleet size of hosts inside a workspace. The scheduler reconciles each group against its metric thresholds and min/max bounds, running roughly once per minute per the scheduler configuration.
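
The loop itself is simple in outline. The Go sketch below shows one reconciliation tick under assumed field names (none of them come from the platform's API): compare a rolling CPU aggregate against the thresholds, nudge the intended size by one host, and clamp it to the min/max bounds.

    package main

    import "fmt"

    // Group holds the subset of scaling-group state one reconciliation
    // tick needs. Field names are illustrative, not the platform's API.
    type Group struct {
        Min, Max     int     // sizing bounds
        Observed     int     // hosts currently attached
        UpscaleCPU   float64 // e.g. 97.0 percent
        DownscaleCPU float64 // e.g. 50.0 percent
    }

    // desiredSize is one tick: nudge the fleet by one host when the
    // rolling CPU aggregate crosses a threshold, then clamp to bounds.
    func desiredSize(g Group, rollingCPU float64) int {
        intended := g.Observed
        switch {
        case rollingCPU >= g.UpscaleCPU:
            intended++ // fleet is hot: add a host
        case rollingCPU <= g.DownscaleCPU:
            intended-- // fleet is idle: shed a host
        }
        if intended < g.Min {
            intended = g.Min
        }
        if intended > g.Max {
            intended = g.Max
        }
        return intended
    }

    func main() {
        g := Group{Min: 1, Max: 10, Observed: 3, UpscaleCPU: 97, DownscaleCPU: 50}
        fmt.Println(desiredSize(g, 99.2)) // 4: above the upscale threshold
        fmt.Println(desiredSize(g, 41.0)) // 2: below the downscale threshold
    }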


Configuration surface

  • Scope - metrics can target the group alone or the whole workspace fleet, depending on how the group was defined.
  • Thresholds - CPU and memory upscale and downscale percentages (template defaults commonly sit in the high 90s for upscale and around 50% for downscale).
  • Sizing - min / max host counts, compute profile binding, cooldown between scale operations, victim selection on downscale (often last-in-first-out style), optional keep windows, and mode (e.g. static sizing).
  • Reconciliation - rolling metric aggregation, scheduler interval, intended vs observed size, timestamps for the last scale event.
  • Live view - the current member count and the latest metrics snapshot the scheduler used.
  • Membership - hosts attached to the group, maintained by automation.
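
Taken together, the fields above map naturally onto a single configuration record. The Go sketch below is illustrative only; every field name, type, and string value is an assumption, not the platform's actual schema.

    package main

    import (
        "fmt"
        "time"
    )

    // ScalingGroupConfig mirrors the configuration surface listed above.
    // Every name, type, and string value here is an assumption.
    type ScalingGroupConfig struct {
        // Scope: evaluate metrics against the group alone or the
        // whole workspace fleet.
        Scope string // "group" | "workspace" (assumed values)

        // Thresholds, as utilization percentages.
        CPUUpscale, CPUDownscale float64
        MemUpscale, MemDownscale float64

        // Sizing.
        MinHosts, MaxHosts int
        ComputeProfile     string        // compute profile binding
        Cooldown           time.Duration // minimum gap between scale operations
        VictimPolicy       string        // e.g. "lifo", per default templates
        KeepWindows        []string      // optional windows where hosts are kept
        Mode               string        // e.g. "static" sizing
    }

    func main() {
        cfg := ScalingGroupConfig{
            Scope:      "group",
            CPUUpscale: 97, CPUDownscale: 50,
            MinHosts: 1, MaxHosts: 10,
            Cooldown:     5 * time.Minute,
            VictimPolicy: "lifo",
            Mode:         "static",
        }
        fmt.Printf("%+v\n", cfg)
    }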

Provider resolution

When a scaling group creates hosts, it may claim them from operator-managed compute pools (see Compute provider model).
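
One plausible shape for that resolution, assuming a claim-then-provision fallback (the authoritative order is defined by the Compute provider model, not here): try to claim an idle host from an operator-managed pool first, and provision a fresh host only if nothing is claimable. All function names below are hypothetical.

    package main

    import (
        "errors"
        "fmt"
    )

    var errPoolEmpty = errors.New("no claimable host in pool")

    // claimFromPool and provisionHost stand in for the platform's real
    // calls; both names are hypothetical.
    func claimFromPool(pool string) (string, error) {
        return "", errPoolEmpty // pretend the pool is exhausted
    }

    func provisionHost(profile string) (string, error) {
        return "host-fresh-001", nil
    }

    // acquireHost sketches the assumed order: claim from an operator-
    // managed compute pool first, else provision a new host.
    func acquireHost(pool, profile string) (string, error) {
        if h, err := claimFromPool(pool); err == nil {
            return h, nil
        }
        return provisionHost(profile)
    }

    func main() {
        h, err := acquireHost("ops-pool-a", "standard-4cpu")
        fmt.Println(h, err) // host-fresh-001 <nil>
    }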


Permissions

Workspace scaling group routes are gated by workspace-scoped RBAC (create/read/update/delete and scale actions).
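
A minimal sketch of such a gate, with action names that merely mirror the verbs above; the platform's actual route middleware and role model are not shown here.

    package main

    import "fmt"

    // Action strings mirror the verbs above; the exact names are assumed.
    type Action string

    const (
        ActCreate Action = "scaling-group:create"
        ActRead   Action = "scaling-group:read"
        ActUpdate Action = "scaling-group:update"
        ActDelete Action = "scaling-group:delete"
        ActScale  Action = "scaling-group:scale"
    )

    // Role is a hypothetical workspace-scoped grant set.
    type Role map[Action]bool

    func (r Role) Can(a Action) bool { return r[a] }

    func main() {
        viewer := Role{ActRead: true}
        fmt.Println(viewer.Can(ActRead))  // true
        fmt.Println(viewer.Can(ActScale)) // false: scale is gated separately
    }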


Operational guidance

  • Tune thresholds to avoid flapping on bursty workloads; see the sketch after this list.
  • Keep max fleet size aligned with budget and quota agreements.
  • Remember that downscale often uses last-in-first-out host selection in default templates - verify whether that matches your stateful workload policy.
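
The sketch below illustrates the first and third points under assumed names: a cooldown gate that suppresses scale decisions until the configured cooldown has elapsed (one simple damper against flapping), and a last-in-first-out victim pick over group members.

    package main

    import (
        "fmt"
        "time"
    )

    // Host records when a member joined its group; fields are illustrative.
    type Host struct {
        Name     string
        JoinedAt time.Time
    }

    // cooldownElapsed suppresses a scale decision until the configured
    // cooldown since the last scale event has passed.
    func cooldownElapsed(lastScale time.Time, cooldown time.Duration) bool {
        return time.Since(lastScale) >= cooldown
    }

    // lifoVictim picks the newest member (assumes a non-empty slice),
    // mirroring the last-in-first-out selection of default templates.
    func lifoVictim(hosts []Host) Host {
        victim := hosts[0]
        for _, h := range hosts[1:] {
            if h.JoinedAt.After(victim.JoinedAt) {
                victim = h
            }
        }
        return victim
    }

    func main() {
        now := time.Now()
        fmt.Println(cooldownElapsed(now.Add(-2*time.Minute), 5*time.Minute)) // false: still cooling down
        hosts := []Host{
            {"host-a", now.Add(-3 * time.Hour)},
            {"host-b", now.Add(-10 * time.Minute)}, // newest member
        }
        fmt.Println(lifoVictim(hosts).Name) // host-b: the LIFO victim
    }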
