
Kubernetes applications — the platform catalog on your cluster

Kubernetes applications are Loopback’s mechanism for delivering curated platform software into your workspaces in a repeatable order, with versioned definitions and automation that runs when workspaces are created or reconciled.

This page is written for customers who need to understand what will appear in their cluster and who controls it. Day-to-day catalog editing is an operator / administrator responsibility.


What problem this solves

Without a catalog, every customer would manually install the same base components (ingress, metrics, policy, CSI integrations, etc.) and inevitably drift. Loopback instead:

  • Stores each component as a named application with metadata and deployment rules.
  • Pins active content to a specific revision (version) of that application.
  • Materializes per-workspace deployments so upgrades and rollouts can be tracked.

You still bring your business workloads via GitOps, CI/CD, or kubectl; the catalog is for shared platform layers your operator wants consistent.
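The relationship between these records can be pictured with a small sketch. All field and class names here are illustrative assumptions, not Loopback's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Application:
    """Catalog entry: product-level name plus deployment rules (illustrative)."""
    name: str
    protected: bool = False        # harder to delete
    prerequisite: bool = False     # must install before the tenant control plane
    requires_compute: bool = False # needs worker nodes first
    priority: int = 0              # higher values are considered earlier

@dataclass
class Revision:
    """Immutable snapshot of values and artifacts for one application."""
    application: str
    version: str
    active: bool = False

@dataclass
class Deployment:
    """Per-workspace instance: application + revision + workspace."""
    application: str
    revision: str
    workspace: str
    namespace: str

# One application, two revisions, only one active at a time:
app = Application("ingress-nginx", protected=True, requires_compute=True)
revs = [Revision("ingress-nginx", "1.9.0"),
        Revision("ingress-nginx", "1.10.1", active=True)]
active = next(r for r in revs if r.active)
dep = Deployment(app.name, active.version, "team-a-prod", f"loopback-{app.name}")
```

The point of the shape: a workspace deployment always points at exactly one revision, so an upgrade is a pointer change, not an in-place edit.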


The four ideas to remember

1. Application (catalog entry)

The application is the product-level name for something installable — for example “ingress-nginx” or an internal logging stack.

It carries:

  • A human name and optional metadata (owner team, change ticket).
  • Deployment rules describing when it applies (all Kubernetes workspaces vs only some, prerequisite vs post-requisite, needs worker nodes or not).
  • Flags such as protected (harder to delete) or visible (whether it should surface in UIs).

2. Revision (versioned content)

A revision is an immutable snapshot of values and artifacts (manifest or Helm-oriented payloads) for one application.

Operators promote a revision to active for an application. Your workspace deployments always point at one revision at a time.

3. Deployment (your cluster instance)

A deployment ties application + revision + workspace together and records namespace choice and editable flags.

When automation runs, it creates the Kubernetes resources described by that revision inside your cluster (subject to rules).

Default namespace naming in automation follows a predictable pattern such as loopback-<application name> — treat these as managed.
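For illustration only, here is one way that naming pattern could be coerced into a valid Kubernetes namespace name (an RFC 1123 DNS label, at most 63 characters). The sanitization shown is an assumption, not Loopback's actual code:

```python
import re

def default_namespace(application_name: str) -> str:
    """Illustrative: derive a managed namespace of the form
    loopback-<application name>, coerced to a valid DNS-1123 label."""
    label = re.sub(r"[^a-z0-9-]", "-", application_name.lower())
    return f"loopback-{label}".strip("-")[:63]  # namespace names max out at 63 chars

print(default_namespace("ingress-nginx"))  # loopback-ingress-nginx
```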

4. Deployment order (hierarchy and prerequisites)

Applications carry ordering and gating:

  • Hierarchy / priority — higher-priority items are considered earlier in ordering logic.
  • Prerequisite flag — the application must run before your Kamaji tenant control plane is created, provided it matches your workspace type and does not also require worker nodes.
  • Requires compute — the application must wait until worker nodes exist; used for components that cannot run during a control-plane-only phase.

User impact: during the first minutes of workspace creation you may observe some namespaces and pods appearing before the Kubernetes API is fully yours; that is expected prerequisite activity.
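The gating and ordering rules above can be sketched as a filter-and-sort pass. The field names and the exact split are assumptions for illustration:

```python
def install_batches(apps, workers_ready: bool):
    """Illustrative: split catalog applications into the prerequisite batch
    (runs before the tenant control plane) and the main batch, each ordered
    by descending priority."""
    prereq = [a for a in apps if a["prerequisite"] and not a["requires_compute"]]
    rest = [a for a in apps
            if a not in prereq and (workers_ready or not a["requires_compute"])]
    key = lambda a: -a["priority"]
    return sorted(prereq, key=key), sorted(rest, key=key)

# Hypothetical catalog entries:
catalog = [
    {"name": "metrics",       "prerequisite": True,  "requires_compute": False, "priority": 10},
    {"name": "ingress-nginx", "prerequisite": False, "requires_compute": True,  "priority": 5},
    {"name": "policy",        "prerequisite": False, "requires_compute": False, "priority": 20},
]
pre, rest = install_batches(catalog, workers_ready=True)
# pre  -> only "metrics" (prerequisite, no compute dependency)
# rest -> "policy" before "ingress-nginx" (higher priority first)
```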


Who does what

  • Platform operator / administrator — creates applications, uploads revisions, marks prerequisites, tests upgrades, rolls out new versions.
  • Customer admin — decides when to create workspaces, approves maintenance windows, coordinates upgrades, requests catalog changes through support.
  • Developer — consumes the cluster; usually does not edit catalog objects. May see read-only lists in the UI if the operator exposes them.

How catalog installs line up with workspace creation

High-level sequence (mirrors the provisioning story in What gets built):

  1. Prerequisite installs — before your tenant API is ready, matching catalog items without compute dependency are applied and waited on.
  2. Control plane creation — Kamaji stands up your Kubernetes API; DNS and kubeconfig are wired.
  3. Integration scaffolding — load balancer integration secrets, OIDC bindings, optional cloud provider secrets.
  4. Remaining catalog installs — everything else, including items that need nodes.

If a prerequisite install fails, provisioning may abort and attempt cleanup — you might see a workspace error rather than a half-working API.
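The sequence above, including the abort-and-cleanup behavior on a failed prerequisite, can be sketched as a toy phase runner. The phase names mirror the numbered list; the cleanup logic is an assumption about the general shape, not Loopback's implementation:

```python
class ProvisioningError(RuntimeError):
    """Raised when a phase fails; surfaces as a workspace error."""

def provision_workspace(phases):
    """Illustrative: run provisioning phases in order. If a phase fails,
    attempt best-effort cleanup of completed phases and abort, rather
    than leaving a half-working API behind."""
    completed = []
    for name, step in phases:
        try:
            step()
            completed.append(name)
        except Exception as exc:
            for done in reversed(completed):
                print(f"cleaning up {done}")  # best-effort rollback
            raise ProvisioningError(f"{name} failed: {exc}") from exc
    return completed

ok = provision_workspace([
    ("prerequisite installs",      lambda: None),
    ("control plane creation",     lambda: None),
    ("integration scaffolding",    lambda: None),
    ("remaining catalog installs", lambda: None),
])
# ok -> all four phase names, in order
```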


Legacy behavior

Some applications still carry an older deployment_strategy shape in the database. Automation considers both modern deployment_system rules and these legacy fields when deciding applicability.

What this means for you: two workspaces created years apart might have slightly different effective catalogs even if their names match, because data migrations shape the outcome. Ask your operator for a catalog manifest if you need compliance proof of what was installed.
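One way to picture the dual-path check is below. This is purely illustrative; the real semantics of deployment_system and deployment_strategy live in Loopback's automation and database, and every field value here is an assumption:

```python
def applies_to(app: dict, workspace_type: str) -> bool:
    """Illustrative: an application matches a workspace if either its
    modern deployment_system rules or its legacy deployment_strategy
    field (older shape, may be absent) says so."""
    modern = app.get("deployment_system") or {}
    legacy = app.get("deployment_strategy")
    if modern.get("workspace_types"):            # modern rules win when present
        return workspace_type in modern["workspace_types"]
    if legacy is not None:                       # fall back to legacy shape
        return legacy == "all" or legacy == workspace_type
    return False

modern_ok = applies_to({"deployment_system": {"workspace_types": ["kubernetes"]}}, "kubernetes")
legacy_ok = applies_to({"deployment_strategy": "all"}, "kubernetes")
no_rules = applies_to({}, "kubernetes")
```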


What you should not do as a customer

Unless your operator explicitly asks you to:

  • Do not delete protected catalog deployments or namespaces Loopback recreates — you will fight reconciliation.
  • Do not assume you can pin arbitrary Helm versions yourself for catalog components; request a new revision instead.

© Loopback.Cloud. All rights reserved.