Self-Managed Clusters (provider: other)

Codiac normally provisions and manages clusters on your behalf in Azure or AWS. But if you already have a Kubernetes cluster running somewhere — on your own VMs, a private data center, or any other infrastructure Codiac didn't create — you can still use Codiac to deploy to it, manage cabinets on it, and restore it.

This guide covers the full workflow for bringing a self-managed cluster under Codiac management.


What "self-managed" means

| What Codiac does | What you do |
| --- | --- |
| Stores your cluster definition and credentials | Provision and maintain the cluster |
| Deploys workloads, restores cabinets | Keep the cluster running and reachable |
| Manages the Codiac agent and infrx stacks | Handle upgrades and node management |

Codiac never creates or destroys the underlying infrastructure for a self-managed cluster. The provision and destroy phases are simply skipped. Everything else — agent install, infrx stacks, cabinet restore, deployments — works exactly the same as any other cluster.


Prerequisites

Before you start, make sure you have:

  1. A running Kubernetes cluster reachable from your network.
  2. A kubeconfig file that grants admin access to it (usually at ~/.kube/config, or provided by your cluster admin).
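Before involving Codiac, it can help to confirm both prerequisites with kubectl. This is an optional preflight sketch; the kubeconfig path is an example, so substitute your own:

```shell
# Preflight: is the cluster reachable, and does the kubeconfig grant admin access?
kubectl --kubeconfig ~/.kube/my-cluster.yaml cluster-info       # API server responds
kubectl --kubeconfig ~/.kube/my-cluster.yaml auth can-i '*' '*' # "yes" means admin
kubectl --kubeconfig ~/.kube/my-cluster.yaml get nodes          # nodes should be Ready
```

If any of these fail, fix connectivity or permissions first — Codiac will hit the same errors otherwise.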

Step 1 — Define the cluster

Register the cluster in Codiac. This is pure metadata — nothing in your infrastructure is created or changed.

cod cluster define my-cluster \
--provider other \
--providerSubscription my-cluster \
--location on-prem \
--nodeSpec custom \
--nodeQty 3 \
--k8sVersion 1.29.0 \
--resourceGroup "" \
--silent

A few notes on the fields for self-managed clusters:

| Flag | What to put here |
| --- | --- |
| --provider | Always other |
| --providerSubscription | A friendly label for your cluster (doesn't have to be an ID) |
| --location | A descriptive name for where it lives (on-prem, datacenter-1, etc.) |
| --nodeSpec | Anything descriptive — Codiac won't interpret this for provisioning |
| --resourceGroup | Leave blank or use any label — not used for self-managed clusters |

Run cod cluster define without arguments to walk through the fields interactively.


Step 2 — Register your credentials

Give Codiac the kubeconfig it needs to connect to the cluster.

# From a file
cod cluster credentials-set my-cluster --file ~/.kube/my-cluster.yaml

# From a multi-context kubeconfig — specify which context to use
cod cluster credentials-set my-cluster --file ~/.kube/config --context my-cluster-admin

# Pipe it in via stdin
cat ~/.kube/my-cluster.yaml | cod cluster credentials-set my-cluster

Codiac selects the context to use as follows:

| What's in the file | What happens |
| --- | --- |
| One context | Ingested automatically |
| Multiple contexts, one references a cluster named my-cluster | Auto-selected — no --context needed |
| Multiple contexts, ambiguous | Error listing available contexts — pass --context |

Codiac stores only the credentials for the selected context — the rest of the file is discarded. You can re-run this command at any time to rotate or update the credentials.
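If you would rather hand Codiac a single-context file in the first place, kubectl can extract one from a multi-context kubeconfig. A sketch, assuming a context named my-cluster-admin:

```shell
# See which contexts the kubeconfig contains
kubectl config get-contexts --kubeconfig ~/.kube/config

# Flatten one context (with its credentials inlined) into a standalone file,
# then register that file with Codiac
kubectl config view --kubeconfig ~/.kube/config \
  --context my-cluster-admin --minify --flatten > /tmp/my-cluster.yaml
cod cluster credentials-set my-cluster --file /tmp/my-cluster.yaml
```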


Step 3 — Wire up your cabinets (optional)

If you have existing cabinets you want to run on this cluster, attach them now. If you're starting fresh, skip this step — you'll create cabinets after the cluster is up and running.

cod cabinet cluster attach \
--enterprise acme \
--environment prod \
--cabinet api-gateway \
--cluster my-cluster \
--silent

Repeat for each cabinet. Run without arguments to attach interactively.
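When several cabinets move to the same cluster, a small loop saves the repetition. The cabinet names below are illustrative:

```shell
# Attach a batch of cabinets to the same self-managed cluster
for cab in api-gateway billing worker; do
  cod cabinet cluster attach \
    --enterprise acme \
    --environment prod \
    --cabinet "$cab" \
    --cluster my-cluster \
    --silent
done
```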


Step 4 — Restore the cluster

Bring everything up. Because the cluster already exists, always pass --no-provision.

cod cluster restore my-cluster --no-provision --silent

This runs the remaining three phases in order:

| Phase | What it does |
| --- | --- |
| Agent | Installs the Codiac in-cluster agent |
| Infrx | Installs the default cluster stack (ingress, cert-manager, etc.) |
| Cabinets | Restores all attached SDLC cabinets |

Each phase can be skipped independently with --no-agent, --no-infrx, or --no-cabinets if you need to re-run only part of the sequence.
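For instance, to restore only the attached cabinets on a cluster where the agent and infrx stack are already in place:

```shell
# Self-managed cluster: skip provision. Agent and infrx already installed:
# skip those phases too, so only the cabinets phase runs.
cod cluster restore my-cluster --no-provision --no-agent --no-infrx --silent
```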


Updating credentials

Credentials can be updated at any time — useful when certificates rotate, tokens expire, or your cluster admin provides a new kubeconfig. Because they're not machine- or session-specific, you can rotate them from anywhere: your workstation, a CI pipeline, or a scheduled job.

cod cluster credentials-set my-cluster --file ~/.kube/my-cluster-new.yaml --silent

After updating, re-run the phases that need a fresh connection:

cod cluster restore my-cluster --no-provision --silent
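Wired into a scheduled CI job or cron entry, rotation is just those two commands in sequence. A minimal sketch, where KUBECONFIG_NEW is a placeholder path your secret store or pipeline would supply:

```shell
#!/usr/bin/env sh
# Scheduled credential rotation for a self-managed cluster.
# KUBECONFIG_NEW is a placeholder -- point it at the freshly issued kubeconfig.
set -e
cod cluster credentials-set my-cluster --file "$KUBECONFIG_NEW" --silent
# Re-run the non-provision phases so everything reconnects with the new credentials
cod cluster restore my-cluster --no-provision --silent
```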

Full example, start to finish

# 1. Define the cluster
cod cluster define my-cluster \
--provider other \
--providerSubscription my-cluster \
--location on-prem \
--nodeSpec custom \
--nodeQty 3 \
--k8sVersion 1.29.0 \
--resourceGroup "" \
--silent

# 2. Register credentials
cod cluster credentials-set my-cluster --file ~/.kube/my-cluster.yaml

# 3. Attach existing cabinets (skip if you have none yet)
cod cabinet cluster attach \
--enterprise acme --environment prod \
--cabinet api-gateway --cluster my-cluster --silent

# 4. Restore (skip provision — you own the infrastructure)
cod cluster restore my-cluster --no-provision --silent

Reference

| Goal | Command |
| --- | --- |
| Register a self-managed cluster | cod cluster define --provider other |
| Store or update connection credentials | cod cluster credentials-set |
| Attach a cabinet to the cluster | cod cabinet cluster attach |
| Install agent + infrx + cabinets | cod cluster restore --no-provision |
| Re-run a specific phase | cod cluster restore --no-provision --no-agent (etc.) |
| Update credentials | cod cluster credentials-set (run again) |