Workspaces — overview
A workspace is Loopback's unit of environment inside a project. Choosing the right workspace type is the most important product decision you will make: it determines whether you get a Kubernetes API for containers or a bare-metal–centric space for traditional servers.
Where workspaces live in the hierarchy
Organization
└── Project
    └── Workspace ← you are here
Every workspace:
- Belongs to exactly one project (and therefore one organization).
- Has a name and optional metadata for your own labels or references.
- Has a type: kubernetes or baremetal.
- Receives a workspace API key used by platform automation (not the same as a user password).
- Can have DNS settings (hostname pattern, provider) managed or delegated depending on configuration.
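The attributes above can be sketched as a minimal data model. This is illustrative only; the field names are assumptions, not Loopback's actual schema:

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

# Hypothetical model of the workspace attributes described above;
# field names are illustrative, not Loopback's real schema.
@dataclass
class Workspace:
    project_id: str                     # exactly one project (and thus one organization)
    name: str
    type: Literal["kubernetes", "baremetal"]
    api_key: str                        # workspace API key used by platform automation
    metadata: dict[str, str] = field(default_factory=dict)  # your own labels/references
    dns_hostname_pattern: Optional[str] = None              # managed or delegated DNS
    dns_provider: Optional[str] = None

ws = Workspace(project_id="proj-1", name="edge-lab",
               type="kubernetes", api_key="wk-secret")
```

The point is the shape, not the names: one project reference, a type discriminator, an automation credential, and optional DNS settings.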
Kubernetes workspaces
What you get
A Kubernetes workspace is a dedicated Kubernetes cluster for your project: your workloads run with your own Kubernetes API endpoint, not a shared “namespace-only” slice of someone else’s control plane.
Loopback provisions that cluster using Kamaji on a management Kubernetes cluster that the operator runs. You normally never manage that management layer; you use your API server address, RBAC, and add-ons the platform installs for you.
What you choose when you create it
When you create a Kubernetes workspace (through the UI or any client that performs the same business logic), you typically specify:
- Name
  A stable identifier for humans and automation. The platform sanitizes names to safe resource-name patterns.
- Kubernetes version
  You pick from versions the operator has activated in the version catalog. Only active versions are accepted. If creation fails with an “unknown” or “unsupported” version error, your selected version is not offered in that environment.
- Feature flags bundle (product term for creation options)
  Practical fields include:
  - Ingress seeding — whether to include an NGINX ingress controller in the automated seeding path (a boolean flag in the product model).
  - Compute provider type — where worker capacity is expected to come from, usually hetzner_cloud or ionos_dcd. The platform checks that a matching active management pool exists before accepting creation.
  - Optional explicit management cluster — advanced: tie the workspace to a specific Kamaji management cluster your operator exposed, instead of automatic pool selection. This path only accepts active Kamaji clusters that carry a compute-provider binding.
  - Control plane sizing intent — the API records high_available vs single_node intent. Treat this as a requested profile; the effective realization depends on how your environment’s automation maps that intent to infrastructure (see What gets built for the actual defaults).
- Metadata
  Free-form key/value data for your CMDB, chargeback codes, or ownership tags.
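Put together, a creation request might look like the following. This is a hypothetical payload; the key names are assumptions based on the options described above, not the real API:

```python
# Hypothetical workspace-creation payload; key names are illustrative
# assumptions based on the documented options, not Loopback's real API.
create_request = {
    "name": "payments-dev",
    "type": "kubernetes",
    "kubernetes_version": "1.29.4",            # must be active in the version catalog
    "feature_flags": {
        "seed_ingress": True,                  # include NGINX ingress in seeding
        "compute_provider": "hetzner_cloud",   # or "ionos_dcd"
        "management_cluster": None,            # advanced: pin a specific Kamaji cluster
        "control_plane_intent": "high_available",  # or "single_node"
    },
    "metadata": {"cmdb_id": "CMDB-1042", "owner": "team-payments"},
}
```

Whatever the real field names are, note the split: identity (name), catalog choice (version), provisioning flags, and free-form metadata travel as separate concerns.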
Preconditions you will hit in the real product
Creation is refused when:
- Your organization is not verified or not in an active healthy state.
- You lack permission to create workspaces on that project.
- No suitable Kubernetes version exists in the catalog.
- No active Kamaji management cluster matches your compute provider choice (or pinned cluster is invalid).
After validation, the platform queues asynchronous provisioning. The workspace moves through initializing states until it becomes active and usable.
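The refusal rules above amount to a validation gate that runs before anything is queued. A minimal sketch of that logic, with function and field names that are assumptions rather than Loopback's internals:

```python
# Illustrative validation gate for workspace creation; names are
# assumptions, not Loopback's actual implementation.
def validate_creation(org, caller, version_catalog, kamaji_pools, request):
    errors = []
    # Organization must be verified and in an active healthy state.
    if not (org["verified"] and org["state"] == "active"):
        errors.append("organization not verified or not active")
    # Caller needs the create permission on the project.
    if "workspace:create" not in caller["permissions"]:
        errors.append("missing permission to create workspaces")
    # The chosen version must exist in the active catalog.
    if request["kubernetes_version"] not in version_catalog:
        errors.append("no suitable Kubernetes version in catalog")
    # An active Kamaji pool must match the compute provider choice.
    provider = request["compute_provider"]
    if not any(p["active"] and p["provider"] == provider for p in kamaji_pools):
        errors.append(f"no active Kamaji management cluster for {provider}")
    return errors  # empty list => queue asynchronous provisioning
```

Only when the list comes back empty does provisioning get queued; otherwise creation is refused with the accumulated reasons.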
DNS
Kubernetes workspaces are created with DNS configuration pointing at Cloudflare in the default model path. During provisioning the platform may register a stable hostname for the API server under the operator’s management domain so your kubeconfig uses a DNS name instead of a raw IP.
See What gets built for the user-visible effect.
Advanced: workspace “kind”
Some internal or migration scenarios use a workspace kind flag. When set, provisioning may use a different management pool (root clusters instead of Kamaji pools) and a different namespace naming pattern. Most customers only see the standard path where kind is unset.
If your operator mentions a “cluster workspace” variant, they are referring to this split — ask them which path you are on.
Bare metal workspaces
What you get
A bare metal workspace is for teams that primarily work with dedicated servers, Loopback-managed networks, and related infrastructure without going through the Kubernetes control-plane product path described above.
Creation still provisions managed network behavior (in the default template) and sets DNS provider defaults similar to Kubernetes workspaces at the model level, but the follow-on workflows center on hosts, imaging, and connectivity rather than kubectl.
When to pick bare metal
- Traditional workloads, databases on metal, or regulated environments where Kubernetes is not desired.
- Stepping-stone environments while Kubernetes is evaluated.
After creation: what appears on the workspace
For Kubernetes workspaces, once provisioning progresses you will see (in UI or API responses):
- Kubernetes version reference and human-readable version string.
- Endpoint / public address fields as the platform learns them.
- Authorization settings — especially OIDC enablement and session lifetime.
- Maintenance configuration — weekly windows and toggles for automatic patching (see Day-two operations).
- Status subdocuments populated by reconciliation (the platform periodically aligns real cluster state with desired state).
For bare metal, expect growing host and network linkage rather than Kubernetes fields.
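The reconciliation mentioned above is a periodic align-observed-with-desired step. A sketch of one pass, with names that are assumptions rather than Loopback's internals:

```python
# Illustrative reconciliation pass: compare observed cluster state with the
# desired spec and record the result on the workspace status subdocument.
# Field names are assumptions, not Loopback's real schema.
def reconcile(workspace, observed):
    status = {
        "endpoint": observed.get("api_endpoint"),          # learned public address
        "version": observed.get("kubernetes_version"),     # human-readable version
        "in_sync": observed.get("kubernetes_version")
                   == workspace["spec"]["kubernetes_version"],
    }
    workspace["status"] = status
    return status
```

This is why fields like the endpoint appear "as the platform learns them": each pass fills in whatever the real cluster has reported so far.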
Permissions worth knowing about
Loopback splits sensitive capabilities so you can apply least privilege:
- General workspace read — list and open workspace details.
- Kubernetes details — version info, upgrade matrices.
- Admin kubeconfig download — full cluster-admin style file; treat like root access.
- OIDC kubeconfig / authenticate — ability to retrieve the interactive login kubeconfig.
- Maintenance updates — change patching windows; may be limited so only admins can disable maintenance entirely.
Exact names map to internal permission keys; from a user perspective, ask your admin for “can upgrade Kubernetes” or “can download kubeconfig” when onboarding teammates.
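When onboarding teammates, you can think of the split as a mapping from human-level requests to permission bundles. The keys below are hypothetical placeholders, since the exact internal permission names are not exposed:

```python
# Hypothetical mapping of onboarding roles to permission bundles.
# Keys are illustrative; the internal permission names differ.
ROLE_BUNDLES = {
    "read-only":    {"workspace:read"},
    "k8s-operator": {"workspace:read", "k8s:details", "k8s:kubeconfig_oidc"},
    "k8s-admin":    {"workspace:read", "k8s:details", "k8s:kubeconfig_oidc",
                     "k8s:kubeconfig_admin",          # treat like root access
                     "maintenance:update"},
}

def can(role, permission):
    """Least-privilege check: a role holds only its bundle's permissions."""
    return permission in ROLE_BUNDLES.get(role, set())
```

The design point is that the admin kubeconfig sits in its own bundle: most operators get OIDC login and version details without ever holding the cluster-admin file.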
Workspace firewalls (API routes) may be exposed only in non-production deployment modes in some builds — confirm with your operator if you rely on them.
Everything inside a workspace (guide index)
- What gets built (Kubernetes / Kamaji)
- Access, kubectl, and identity
- Maintenance, upgrades, transfer
- Hosts, compute profiles, scaling groups
- Agents, tokens, shell sessions, power
- Kubernetes portal features (summary, nodes, drain, …)
- Workspace load balancers
- Kubernetes applications (catalog)