Bare metal provisioning roadmap
Status: Coming soon in Loopback’s public product narrative. Today, Loopback excels at fleet operations after Linux is installed (agent, networking, firewalls, monitoring, scaling groups on supported providers). Fully automated bare metal provisioning—from racked server to booted OS driven entirely by Loopback—is a roadmap theme, not something this documentation promises as generally available.
This page explains how the industry typically provisions metal, so buyers can align expectations, RFP language, and integration plans with Loopback’s direction.
What exists today (honest scope)
- Bare metal workspaces as an operational home for servers and non-Kubernetes-first workflows.
- Agent-based onboarding once a supported Linux is present and reachable.
- Provider-backed dedicated servers where a cloud API can create machines (for example Hetzner Robot-class flows in supported deployments).
What is not promised here as a turnkey Loopback feature yet:
- Full datacenter automation (DHCP scopes, PXE menus, firmware baselines) entirely inside Loopback for arbitrary colo racks.
How enterprises usually provision bare metal
Modern metal automation stacks combine out-of-band management, network boot, and image delivery. Common building blocks:
1. Out-of-band management (IPMI / Redfish)
Before an OS exists, operators need power, boot order, and inventory control.
- IPMI has been the traditional answer.
- Redfish (a DMTF standard) is the modern REST/JSON interface most vendors are converging on; when implemented well, it offers better session semantics and a stronger security posture than many legacy IPMI deployments.
Why it matters: provisioning orchestrators ask the BMC to reboot into PXE or mount a virtual CD, then watch state transitions.
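The "reboot into PXE" step maps onto two standard Redfish calls: a PATCH that sets a one-time boot override, then a POST to the ComputerSystem.Reset action. The sketch below only assembles those requests (no network I/O, no auth); the system path `/redfish/v1/Systems/1` is an assumption, since real BMCs expose their own member names under the Systems collection.

```python
# Standard Redfish resource paths; the system id "1" is an assumption --
# discover the real member via GET /redfish/v1/Systems on your BMC.
SYSTEM_PATH = "/redfish/v1/Systems/1"
RESET_PATH = SYSTEM_PATH + "/Actions/ComputerSystem.Reset"

def pxe_once_requests():
    """Build the two Redfish requests that force a single network boot.

    Returns (method, path, body) tuples: first a PATCH that overrides the
    next boot device to PXE for one boot only, then a POST that restarts
    the machine so the override takes effect.
    """
    boot_override = ("PATCH", SYSTEM_PATH, {
        "Boot": {
            "BootSourceOverrideEnabled": "Once",  # reverts after one boot
            "BootSourceOverrideTarget": "Pxe",    # boot from the network
        }
    })
    reset = ("POST", RESET_PATH, {"ResetType": "ForceRestart"})
    return [boot_override, reset]
```

An orchestrator would send these over HTTPS with a Redfish session token, then poll the system resource to watch the state transition the paragraph describes.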
2. DHCP and network boot (PXE / iPXE)
DHCP hands out addresses and next-server / boot file hints so a machine can fetch a bootloader across the network.
- PXE is the classic firmware path.
- iPXE adds richer boot scripting, HTTP/HTTPS chain loading, and more flexible SAN boot scenarios.
Why it matters: this is how racks of servers find the provisioning service without local USB sticks.
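The next-server/bootfile decision usually branches on the client's firmware type, which the client reports in DHCP option 93 (RFC 4578). A minimal sketch of that branching logic, assuming the stock iPXE binary names and an example TFTP host address:

```python
# DHCP "client system architecture" option 93 values (RFC 4578 / IANA).
BIOS_X86 = 0
UEFI_X64 = 7      # "EFI BC"; x86-64 UEFI clients commonly send 7 or 9
UEFI_X64_ALT = 9

def boot_hints(client_arch: int) -> dict:
    """Pick the boot hints a DHCP server would hand to a PXE client.

    undionly.kpxe and ipxe.efi are the stock iPXE binaries for BIOS and
    UEFI firmware; the next-server address is an illustrative placeholder.
    """
    if client_arch in (UEFI_X64, UEFI_X64_ALT):
        bootfile = "ipxe.efi"
    elif client_arch == BIOS_X86:
        bootfile = "undionly.kpxe"
    else:
        raise ValueError(f"unhandled client architecture {client_arch}")
    return {
        "next-server": "192.0.2.10",  # TFTP/provisioning host (example)
        "filename": bootfile,          # DHCP option 67 / BOOTP file field
    }
```

In practice this lives in the DHCP server config (dnsmasq tag-matching, ISC option expressions), but the branching logic is the same.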
3. Image delivery and install mechanics
Once the bootloader runs, the node needs an OS image or installer:
- Kickstart / autoinstall patterns for unattended Linux.
- Golden images streamed to disk.
- Cloned disk images in regulated environments.
Why it matters: repeatability and compliance evidence depend on known-good image pipelines.
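The unattended-Linux pattern above amounts to serving a machine-specific answer file from a template. A minimal sketch for the Kickstart flavor, with the mirror URL and disk name as placeholders rather than a tested baseline:

```python
# Minimal Kickstart body for a fully unattended RHEL-family install.
# Mirror URL and target disk are placeholders; a real profile would also
# carry network, user, and security-hardening directives.
KICKSTART_TEMPLATE = """\
text
url --url={mirror}
lang en_US.UTF-8
keyboard us
timezone UTC --utc
rootpw --lock
clearpart --all --initlabel --drives={disk}
autopart --type=lvm
reboot
%packages
@core
%end
"""

def render_kickstart(mirror: str, disk: str = "sda") -> str:
    """Render a per-machine answer file the installer fetches at boot."""
    return KICKSTART_TEMPLATE.format(mirror=mirror, disk=disk)
```

Versioning these templates alongside the image pipeline is what produces the compliance evidence the paragraph mentions: every install is traceable to a known-good profile.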
4. Post-install handoff to management (agent join)
After the OS is up, the Loopback agent model applies:
- Identity with workspace tokens.
- Networking and policy convergence.
This is the bridge between “we have Linux” and “Loopback manages the fleet.”
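Conceptually, the agent-join handoff is a first-contact request that proves workspace membership and reports host identity. The field names below are illustrative, not the Loopback wire format; a sketch of what such a payload might carry:

```python
import socket

def enrollment_request(workspace_token: str) -> dict:
    """Assemble a hypothetical first-contact payload an agent might send.

    The agent proves membership with a workspace token and reports enough
    identity for the control plane to create or match a server record.
    Field names here are assumptions for illustration only.
    """
    return {
        "token": workspace_token,          # workspace-scoped join secret
        "hostname": socket.gethostname(),  # host identity for matching
        "capabilities": ["networking", "firewall", "monitoring"],
    }
```

Whatever the real format, the key property is that this request is the only coupling point: anything that can produce a booted Linux host can hand off here.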
What a future Loopback-shaped story could include
When Loopback moves deeper into metal provisioning, a plausible architecture would:
- Treat provisioning profiles as first-class objects (firmware policy, image selection, disk layout).
- Integrate with Redfish-capable hardware and/or vendor-specific BMC APIs where needed.
- Use network services (DHCP/PXE helpers) either managed by the platform or federated with your existing NetOps stack.
- Emit auditable events per server: discovered → enrolling → imaging → agent-healthy.
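The per-server event trail implies a small state machine. A sketch using the state names from the list above; the transition table itself is an assumption about how such a lifecycle might be enforced:

```python
# Illustrative per-server lifecycle; state names mirror the event list
# above, while the strict linear transition table is an assumption.
TRANSITIONS = {
    "discovered": {"enrolling"},
    "enrolling": {"imaging"},
    "imaging": {"agent-healthy"},
    "agent-healthy": set(),  # terminal for this sketch
}

def advance(state: str, target: str) -> str:
    """Move a server to the next state, rejecting out-of-order events."""
    if target not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {target}")
    return target
```

Rejecting out-of-order events is what makes the emitted trail auditable: every server's history is a valid walk through a known graph.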
None of the above commits to dates or scope; it is a buyer-facing framing of how Loopback would likely meet the market where it already is.
What to do now (practical guidance)
- If you need full metal automation today, plan a hybrid: use your existing provisioning stack (Foreman, MAAS, Tinkerbell, vendor tools), then join Loopback at the agent boundary.
- If you are RFP’ing, separate must-have vs nice-to-have:
- Must-have: RBAC, audit, networking, LB integration, monitoring.
- Nice-to-have / roadmap: end-to-end PXE under Loopback UI.