Your Kubernetes workspace: what gets built

This page walks through what Loopback does for you when you create a Kubernetes workspace, in the order a user would care about. It explains outcomes (what exists afterward) more than internal component names.

If you only read one technical page about provisioning, read this one.


The big picture

  1. Loopback picks which management infrastructure will host your new cluster (part of the operator’s estate, not yours).
  2. It prepares isolation on that management side (a dedicated namespace).
  3. It may install prerequisite platform software that must exist before your Kubernetes API is fully ready.
  4. It creates your tenant control plane (your own Kubernetes API) using Kamaji.
  5. It retrieves admin credentials, wires DNS so you use a stable hostname, and stores secrets securely.
  6. It configures integration hooks inside your new cluster (load balancer integration, cloud provider secrets when applicable, OIDC bindings).
  7. It installs remaining catalog software that depends on workers or later phases.
  8. It marks the workspace active when the pipeline succeeds.

You experience steps 1–7 mostly as wait time and status changes in the UI; step 8 is when you can reliably use kubectl.


Step 1 — Network scope for the workspace (optional foundation)

If your workspace is configured for a managed network, Loopback first ensures an underlying network object exists and completes its own network-creation task.

User meaning: your workspace may receive dedicated address planning and routing context before any Kubernetes objects appear. If this step fails, nothing downstream can succeed — you will see the workspace fail early in provisioning.


Step 2 — Selecting the management pool

Loopback associates your workspace with a parent Kubernetes cluster entry in its database — think of it as “which operator-run Kamaji installation will serve me.”

  • Standard workspaces use a Kamaji management pool filtered by the compute provider type you chose (for example Hetzner or IONOS) and, optionally, by an explicit cluster if your operator pinned one.
  • Special workspace kinds (rare for end users) can instead target root management clusters.

User meaning: you do not receive credentials to this management cluster. You only see the effect: your tenant API server is created somewhere reliable that matches your region / provider strategy.

If no pool matches, creation fails with a capacity-style error: your operator must add or activate management clusters for that provider.
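
The selection rule itself is easy to picture. A purely illustrative sketch in Python, in which none of the field or function names are Loopback's actual schema:

    # Hypothetical pool selection: keep active operator-run Kamaji clusters for
    # the chosen provider type, honour an explicit pin, fail if nothing matches.
    def select_management_cluster(pools, provider_type, pinned_id=None):
        candidates = [p for p in pools
                      if p["active"] and p["provider"] == provider_type]
        if pinned_id is not None:
            candidates = [p for p in candidates if p["id"] == pinned_id]
        if not candidates:
            raise RuntimeError(
                f"no active management cluster for provider {provider_type!r}; "
                "ask your operator to add or activate one")
        return candidates[0]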


Step 3 — Namespace on the management side

Loopback creates a namespace on the management cluster dedicated to your workspace. Names look like k8s-w-… for standard workspaces (or k8s-c-… for the special kind).

User meaning: operational isolation between customers on the shared management plane. You still do not log into this layer.
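
On the management side this is an ordinary namespace created with the operator's credentials. A minimal sketch using the official Kubernetes Python client; the kubeconfig path, workspace ID, and label key are placeholders:

    from kubernetes import client, config

    # Operator-side credentials for the management cluster (placeholder path).
    config.load_kube_config(config_file="management-cluster.kubeconfig")

    workspace_id = "w-1234"                 # placeholder workspace ID
    namespace_name = f"k8s-{workspace_id}"  # yields k8s-w-1234

    client.CoreV1Api().create_namespace(
        client.V1Namespace(
            metadata=client.V1ObjectMeta(
                name=namespace_name,
                labels={"loopback.cloud/workspace": workspace_id},  # hypothetical label
            )
        )
    )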


Step 4 — Prerequisite “Kubernetes applications”

Loopback maintains a catalog of Kubernetes applications — versioned packages of manifests or Helm content (see Kubernetes applications).

Some catalog entries are flagged as prerequisites: they must be applied before your Kamaji-based control plane is created, and they cannot depend on worker nodes, since none exist yet.

For each matching prerequisite, Loopback:

  • Creates a deployment record for your workspace.
  • Runs an automated install job and waits for completion (with a timeout).
  • On failure, attempts to tear down what it started so you are not left half-provisioned.

User meaning: when you first connect to your cluster, certain system namespaces and controllers may already exist because the platform depends on them. You can still add your own workloads, but you should treat those namespaces as managed.
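
The control flow is: record the deployment, start the install, wait against a deadline, and roll back on failure. A minimal sketch, with hypothetical helper functions standing in for Loopback's internal job handling:

    import time

    INSTALL_TIMEOUT = 600  # seconds; the real timeout is set by the platform

    def install_prerequisite(app, workspace):
        deployment = record_deployment(workspace, app)   # hypothetical helper
        job = start_install_job(deployment)              # hypothetical helper
        deadline = time.monotonic() + INSTALL_TIMEOUT
        while time.monotonic() < deadline:
            if job_succeeded(job):                       # hypothetical helper
                return deployment
            if job_failed(job):                          # hypothetical helper
                break
            time.sleep(10)
        # Failure or timeout: tear down whatever was started so the workspace
        # is not left half-provisioned.
        teardown(deployment)                             # hypothetical helper
        raise RuntimeError(f"prerequisite {app} failed to install in time")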


Step 5 — Tenant control plane (your Kubernetes API)

Loopback now creates a Kamaji TenantControlPlane object — effectively your Kubernetes control plane as a workload on the management infrastructure.

Important user-relevant details:

  • The API server is exposed on a load-balanced endpoint (the manifest template uses a Service of type LoadBalancer).
  • The control plane is configured for OIDC authentication against the issuer URL your operator configured (historically tied to Loopback’s API hostname in templates). Your workspace ID becomes the OIDC client ID for that cluster.
  • Certificate SANs include DNS names under the operator’s management domain so TLS matches the hostname you will use.

Replica count: the Kamaji manifest template defaults to three control-plane replicas for resilience. The product also records a high_available vs single_node choice at workspace creation time; whether that choice currently changes replica count depends on your deployed version of the automation — ask your operator if you require strict single-node economics.
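
For orientation, a Kamaji TenantControlPlane is a namespaced custom resource on the management cluster. A minimal sketch of creating one with the Kubernetes Python client; the spec is abbreviated and illustrative (the real template also carries certificate SANs, OIDC flags, datastore settings, and more), and the version string and kubeconfig path are placeholders:

    from kubernetes import client, config

    config.load_kube_config(config_file="management-cluster.kubeconfig")  # placeholder

    workspace_id = "w-1234"  # placeholder; also used as the OIDC client ID

    tenant_control_plane = {
        "apiVersion": "kamaji.clastix.io/v1alpha1",
        "kind": "TenantControlPlane",
        "metadata": {"name": workspace_id, "namespace": f"k8s-{workspace_id}"},
        "spec": {
            "controlPlane": {
                "deployment": {"replicas": 3},               # template default
                "service": {"serviceType": "LoadBalancer"},  # load-balanced API endpoint
            },
            "kubernetes": {"version": "v1.29.0"},            # placeholder version
        },
    }

    client.CustomObjectsApi().create_namespaced_custom_object(
        group="kamaji.clastix.io",
        version="v1alpha1",
        namespace=tenant_control_plane["metadata"]["namespace"],
        plural="tenantcontrolplanes",
        body=tenant_control_plane,
    )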


Step 6 — Kubeconfig, DNS name, and health checks

Loopback waits until the platform materializes an admin kubeconfig secret for your tenant.

Then it:

  • Reads the API server IP/URL from that kubeconfig.
  • Creates a DNS record under the operator’s workspace pattern so you connect to something like
    https://<workspace-id>.k8s.<management-domain>:6443
    instead of memorizing an IP.
  • Rewrites the kubeconfig you will download so the server field points at that DNS name and the contexts get friendly names.
  • Optionally probes /healthz on the API until it responds (with logging if that takes long).

User meaning: your developers get a stable URL and a standard kubeconfig story.
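
A minimal sketch of the kubeconfig rewrite and the /healthz probe, assuming placeholder file names, a placeholder management domain, and PyYAML for parsing:

    import ssl
    import time
    import urllib.request

    import yaml  # PyYAML

    workspace_id = "w-1234"                                    # placeholder
    dns_name = f"{workspace_id}.k8s.example-management.cloud"  # placeholder domain

    # Rewrite the admin kubeconfig so the server field uses the stable DNS name.
    with open("admin.kubeconfig") as f:
        kubeconfig = yaml.safe_load(f)
    for cluster in kubeconfig["clusters"]:
        cluster["cluster"]["server"] = f"https://{dns_name}:6443"
    with open("admin.kubeconfig", "w") as f:
        yaml.safe_dump(kubeconfig, f)

    # Probe /healthz until the API server answers or a deadline passes.
    # Verification is disabled here only because this machine does not yet
    # trust the cluster CA.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    deadline = time.monotonic() + 300
    healthy = False
    while time.monotonic() < deadline and not healthy:
        try:
            with urllib.request.urlopen(f"https://{dns_name}:6443/healthz",
                                        context=ctx, timeout=5) as resp:
                healthy = resp.status == 200
        except OSError:
            time.sleep(5)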


Step 7 — Inside your new cluster: integrations and RBAC

Once Loopback can speak to your Kubernetes API, it applies integration scaffolding:

  • Loopback load balancer integration — a namespace and secret that carry your workspace API key so in-cluster components can talk back to Loopback-managed load balancer APIs when you use that feature.
  • OIDC RBAC hook — a ClusterRoleBinding grants the loopback-workspace-claim group cluster-admin so that authenticated OIDC users from your identity system can administer the cluster, subject to what your IdP puts in groups (see the sketch after this list).
  • Cloud provider secrets (conditional) — if your workspace still binds a Hetzner Cloud compute provider at the model level, Loopback may create namespaces and hcloud secrets for cloud controller / CSI components.
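
The OIDC RBAC hook named above amounts to a single object. A minimal sketch, assuming a hypothetical binding name and a placeholder kubeconfig path for the tenant cluster:

    from kubernetes import client, config

    # This targets your new tenant cluster, not the management cluster.
    config.load_kube_config(config_file="tenant-admin.kubeconfig")  # placeholder

    # One ClusterRoleBinding mapping the loopback-workspace-claim group onto
    # the built-in cluster-admin role.
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "ClusterRoleBinding",
        "metadata": {"name": "loopback-workspace-claim-admin"},  # hypothetical name
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "ClusterRole",
            "name": "cluster-admin",
        },
        "subjects": [{
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "Group",
            "name": "loopback-workspace-claim",
        }],
    }

    client.RbacAuthorizationV1Api().create_cluster_role_binding(body=binding)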

Historical note: an older automation path installed Cilium and Flux via CLI during bootstrap. That path is deprecated in code; current provisioning relies on catalog deployments and API-driven configuration rather than those CLI installers. Your operator decides which CNI / GitOps stack you effectively receive via catalog content.

Loopback then encrypts and stores kubeconfig material in its secrets subsystem and links it to the workspace record so you can download it later through normal permissions.


Step 8 — Remaining catalog deployments

After the control plane record is saved, Loopback installs non-prerequisite catalog items — including anything that requires worker nodes or was intentionally deferred.

User meaning: this is when node-dependent components (ingress controllers, metrics, policy, or your operator’s standard stack) land, subject to host registration and provider behavior.


Step 9 — Workspace becomes active

When the pipeline completes, the workspace state flips to active and asynchronous tasks finish successfully.

User meaning: you can now:

  • Download admin or OIDC kubeconfig (see Access and identity).
  • Start deploying applications.
  • Attach bundles or CI/CD (where product maturity allows; see Bundles).
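
As a quick smoke test once the workspace is active, you can point the Kubernetes Python client at the kubeconfig you downloaded (the path below is a placeholder) and list namespaces; the managed namespaces from the prerequisite installs should already be visible:

    from kubernetes import client, config

    config.load_kube_config(config_file="workspace-admin.kubeconfig")  # placeholder path

    for ns in client.CoreV1Api().list_namespace().items:
        print(ns.metadata.name)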

Failure behavior (what you should expect)

  • Early failure (no management pool, bad version, failed verification): you get a clear validation error; nothing is partially created on your side.
  • Mid failure (prerequisite install timeout): Loopback attempts cleanup of partial catalog installs; you may need support if cleanup itself fails.
  • Late failure (API never becomes healthy): the workspace may be marked for deletion or left in an error state, depending on the deployment; involve your operator.
