Provider extensibility and on-prem

Loopback’s compute provider layer answers a simple procurement question: “Where do servers come from?” The platform is architected so new provider families can be added without rewriting the entire product—while still being honest that your deployment only includes what your operator ships and supports.

This page complements Compute providers and ordering hosts, and the conceptual model in Compute provider model.


The generic interface (how to think about it)

At a high level, a compute provider integration includes:

  1. Credentials and tenancy - how Loopback authenticates to a provider API and whether capacity is dedicated to one customer or drawn from an operator pool.
  2. Catalog mapping - how compute profiles (SKUs) translate into provider-specific parameters (instance type, location, image).
  3. Lifecycle operations - create, delete, power, rescue, attach networking—whatever the provider supports.
  4. Reconciliation - periodic alignment so declared intent matches cloud reality (timeouts, partial failures, inventory drift).
  5. Guardrails - administrative allow lists so only supported provider families appear in customer-facing validators.
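
The five concerns above can be sketched as a single contract. This is a minimal sketch in Python with hypothetical names (ComputeProvider, map_profile, reconcile, and the profile shape are illustrative, not Loopback's actual API):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ComputeProfile:
    """A customer-facing SKU (hypothetical shape)."""
    name: str
    cpu: int
    ram_gb: int
    disk_gb: int

class ComputeProvider(ABC):
    """Illustrative provider contract covering the five concerns above."""

    family: str  # e.g. "hetzner-cloud"; checked against administrative allow lists

    @abstractmethod
    def authenticate(self, credential_ref: str) -> None:
        """Resolve credentials by reference; tenancy decides whose pool they unlock."""

    @abstractmethod
    def map_profile(self, profile: ComputeProfile) -> dict:
        """Translate a SKU into provider parameters (instance type, location, image)."""

    @abstractmethod
    def create_server(self, params: dict) -> str:
        """Lifecycle: create; delete, power, and rescue operations sit alongside it."""

    @abstractmethod
    def reconcile(self, declared: dict, observed: dict) -> list[str]:
        """Return corrective actions so cloud reality matches declared intent."""
```

A new provider family is then "just" a class that satisfies this contract, plus the registration and validation work described below.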

Buyer takeaway: Loopback is not “locked to one hyperscaler architecture”; it is multi-provider by design, with shipping reality defined per deployment.


What ships in typical deployments today

The repository and public docs center on Hetzner Cloud, Hetzner Robot, and IONOS DCD as the first-class integrations most tenants run today.

That set is not a theoretical maximum; it is the default engineering focus in the tree your team can inspect under NDA for diligence (see Technical verification).


On-prem and private cloud (OpenStack, Proxmox, VMware-class)

Enterprises routinely ask: “Can we bring our own datacenter?”

Architecturally, the answer is yes, as a program of work:

  • Implement provider operations against the private API (server create, networking attach, storage, etc.).
  • Register the provider family in the factory so execution and reconciliation can route work to it.
  • Extend administrative validation so the new family is allowed where appropriate.
  • Publish compute profiles that encode SKUs meaningful to your hardware (CPU/RAM/disk classes, rack locations, network backends).
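
The registration step above can be sketched as a small factory plus an allow-list guardrail. The registry and function names here are assumptions for illustration, not Loopback identifiers:

```python
# Hypothetical registry; real names differ per deployment.
PROVIDER_FACTORY: dict[str, type] = {}
ALLOWED_FAMILIES: set[str] = set()  # administrative allow list (the guardrail)

def register_family(family: str, provider_cls: type, allowed: bool = False) -> None:
    """Make a provider family routable by execution and reconciliation."""
    PROVIDER_FACTORY[family] = provider_cls
    if allowed:
        # Only allowed families surface in customer-facing validators.
        ALLOWED_FAMILIES.add(family)

def resolve(family: str) -> type:
    """Route work to a family, refusing anything not enabled in this deployment."""
    if family not in ALLOWED_FAMILIES:
        raise ValueError(f"provider family {family!r} is not enabled in this deployment")
    return PROVIDER_FACTORY[family]

class OpenStackProvider:
    """Stand-in for a real private-cloud integration."""

register_family("openstack", OpenStackProvider, allowed=True)
```

The two-step design (register, then allow) is what keeps an in-progress integration invisible to customers until an operator explicitly enables it.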

OpenStack

OpenStack is a common target because it already abstracts compute (Nova), networking (Neutron), and images (Glance). A Loopback provider module would typically map profiles to flavors + networks + security groups, and rely on your cloud’s RBAC for upstream isolation.
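
That mapping can be sketched at the data level. The request keys below follow the shape of Nova's create-server body, but the flavor names, security group, and network handling are placeholders for whatever your cloud defines:

```python
# Placeholder mapping from (vCPU, RAM GB) to Nova flavor names in *your* cloud.
FLAVOR_MAP = {
    (2, 4): "m1.medium",
    (4, 8): "m1.large",
}

def to_openstack_request(profile: dict, tenant_network: str) -> dict:
    """Translate a Loopback-style profile into an OpenStack server-create body."""
    flavor = FLAVOR_MAP[(profile["cpu"], profile["ram_gb"])]
    return {
        "flavorRef": flavor,                                # Nova flavor
        "imageRef": profile["image"],                       # Glance image ID
        "networks": [{"uuid": tenant_network}],             # Neutron network
        "security_groups": [{"name": "loopback-default"}],  # per-tenant isolation
    }
```

Upstream isolation then falls to your cloud's RBAC: each tenant's requests carry only networks and security groups scoped to its own project.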

Proxmox

Proxmox environments often combine virtual machines and containers with a single API. A Loopback integration would focus on the VM path for “servers as hosts,” and would need clear decisions about storage, network bridges, and image lifecycle.
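
Those decisions show up directly in the create parameters. Here is a sketch against the Proxmox VE QEMU path, where the bridge (vmbr0) and storage backend (local-lvm) are exactly the per-cluster assumptions that must be decided up front:

```python
def to_proxmox_params(profile: dict, vmid: int) -> dict:
    """Build illustrative VM-create parameters for the Proxmox VE QEMU API."""
    return {
        "vmid": vmid,
        "cores": profile["cpu"],
        "memory": profile["ram_gb"] * 1024,          # Proxmox expects MiB
        "net0": "virtio,bridge=vmbr0",               # network-bridge decision
        "scsi0": f"local-lvm:{profile['disk_gb']}",  # storage-backend decision
    }
```

Image lifecycle (cloud-init templates, golden images) sits on top of this and is usually the larger share of the integration work.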

Why this is never “just a config toggle”

Private clouds differ in:

  • Network models (VLANs, SDN overlays, BGP, floating IPs).
  • Image standards (cloud-init, ignition, corporate golden images).
  • Compliance (disk encryption, TPM, secure boot).

Expect professional services or internal platform engineering time to harden integrations.


Procurement language that holds up

Use this wording in RFPs and security reviews:

  • “Loopback’s architecture supports multiple compute provider integrations; our deployment includes <list> as supported today.”
  • “Additional providers require engineering registration, validation, and operational runbooks—scoped as a project.”
  • “Provider credentials are stored by reference via the secret subsystem, not embedded in API payloads.”
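
The last point can be illustrated concretely: API payloads carry an opaque reference, and only the provider layer resolves it. The store and reference names here are hypothetical:

```python
# Hypothetical secret-subsystem backend; in production this is a real vault.
_SECRET_STORE = {"secret://hetzner/prod": "example-token"}

def resolve_secret(ref: str) -> str:
    """Resolved only inside the provider layer, never echoed in API responses."""
    return _SECRET_STORE[ref]

# An API payload holds the reference, not the credential itself.
api_payload = {"provider": "hetzner-cloud", "credential_ref": "secret://hetzner/prod"}
assert "example-token" not in str(api_payload)
```

This is what allows the RFP wording above to survive a security review: the token never transits the customer-facing API surface.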

© Loopback.Cloud. All rights reserved.