# Self-Managed Clusters (provider: `other`)
Codiac normally provisions and manages clusters on your behalf in Azure or AWS. But if you already have a Kubernetes cluster running somewhere — on your own VMs, a private data center, or any other infrastructure Codiac didn't create — you can still use Codiac to deploy to it, manage cabinets on it, and restore it.
This guide covers the full workflow for bringing a self-managed cluster under Codiac management.
## What "self-managed" means
| What Codiac does | What you do |
|---|---|
| Stores your cluster definition and credentials | Provision and maintain the cluster |
| Deploys workloads, restores cabinets | Keep the cluster running and reachable |
| Manages the Codiac agent and infrx stacks | Handle upgrades and node management |
Codiac never creates or destroys the underlying infrastructure for a self-managed cluster. The provision and destroy phases are simply skipped. Everything else — agent install, infrx stacks, cabinet restore, deployments — works exactly the same as any other cluster.
## Prerequisites
Before you start, make sure you have:
- A running Kubernetes cluster reachable from your network.
- A kubeconfig file that grants admin access to it (usually at `~/.kube/config`, or provided by your cluster admin).
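For reference, a single-context kubeconfig has roughly this shape (the server address, names, and credential fields below are placeholders, not values Codiac requires):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-cluster
  cluster:
    server: https://203.0.113.10:6443        # placeholder API server address
    certificate-authority-data: <base64 CA>  # placeholder
contexts:
- name: my-cluster-admin
  context:
    cluster: my-cluster
    user: admin
current-context: my-cluster-admin
users:
- name: admin
  user:
    client-certificate-data: <base64 cert>   # placeholder
    client-key-data: <base64 key>            # placeholder
```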
## Step 1 — Define the cluster
Register the cluster in Codiac. This is pure metadata — nothing in your infrastructure is created or changed.
```shell
cod cluster define my-cluster \
  --provider other \
  --providerSubscription my-cluster \
  --location on-prem \
  --nodeSpec custom \
  --nodeQty 3 \
  --k8sVersion 1.29.0 \
  --resourceGroup "" \
  --silent
```
A few notes on the fields for self-managed clusters:
| Flag | What to put here |
|---|---|
| `--provider` | Always `other` |
| `--providerSubscription` | A friendly label for your cluster (doesn't have to be an ID) |
| `--location` | A descriptive name for where it lives (`on-prem`, `datacenter-1`, etc.) |
| `--nodeSpec` | Anything descriptive — Codiac won't interpret this for provisioning |
| `--resourceGroup` | Leave blank or use any label — not used for self-managed clusters |
Run `cod cluster define` without arguments to walk through the fields interactively.
## Step 2 — Register your credentials
Give Codiac the kubeconfig it needs to connect to the cluster.
```shell
# From a file
cod cluster credentials-set my-cluster --file ~/.kube/my-cluster.yaml

# From a multi-context kubeconfig — specify which context to use
cod cluster credentials-set my-cluster --file ~/.kube/config --context my-cluster-admin

# Pipe it in via stdin
cat ~/.kube/my-cluster.yaml | cod cluster credentials-set my-cluster
```
Codiac selects the context to use as follows:
| What's in the file | What happens |
|---|---|
| One context | Ingested automatically |
| Multiple contexts, one references a cluster named `my-cluster` | Auto-selected — no `--context` needed |
| Multiple contexts, ambiguous | Error listing available contexts — pass `--context` |
Codiac stores only the credentials for the selected context — the rest of the file is discarded. You can re-run this command at any time to rotate or update the credentials.
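The selection rules in the table above can be sketched as a small function. This is an illustrative model of the documented behavior, not Codiac's actual implementation; the kubeconfig is represented as already-parsed data:

```python
from typing import Optional

def select_context(kubeconfig: dict, cluster_name: str,
                   context_flag: Optional[str] = None) -> str:
    """Pick a kubeconfig context per the documented rules (illustrative sketch)."""
    contexts = kubeconfig.get("contexts", [])
    names = [c["name"] for c in contexts]
    if context_flag is not None:  # an explicit --context always wins
        if context_flag not in names:
            raise ValueError(f"context {context_flag!r} not found; available: {names}")
        return context_flag
    if len(contexts) == 1:  # one context: ingested automatically
        return contexts[0]["name"]
    # Multiple contexts: auto-select only if exactly one references the cluster name.
    matches = [c["name"] for c in contexts if c["context"]["cluster"] == cluster_name]
    if len(matches) == 1:
        return matches[0]
    raise ValueError(f"ambiguous kubeconfig: pass --context (available: {names})")

# Example: a multi-context kubeconfig where one context matches the cluster name
kubeconfig = {"contexts": [
    {"name": "dev-admin", "context": {"cluster": "dev"}},
    {"name": "my-cluster-admin", "context": {"cluster": "my-cluster"}},
]}
print(select_context(kubeconfig, "my-cluster"))  # my-cluster-admin
```

The upshot: `--context` is only required when the file is genuinely ambiguous.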
## Step 3 — Wire up your cabinets (optional)
If you have existing cabinets you want to run on this cluster, attach them now. If you're starting fresh, skip this step — you'll create cabinets after the cluster is up and running.
```shell
cod cabinet cluster attach \
  --enterprise acme \
  --environment prod \
  --cabinet api-gateway \
  --cluster my-cluster \
  --silent
```
Repeat for each cabinet. Run `cod cabinet cluster attach` without arguments to attach interactively.
## Step 4 — Restore the cluster
Bring everything up. Because the cluster already exists, always pass `--no-provision`.

```shell
cod cluster restore my-cluster --no-provision --silent
```
This runs the remaining three phases in order:
| Phase | What it does |
|---|---|
| Agent | Installs the Codiac in-cluster agent |
| Infrx | Installs the default cluster stack (ingress, cert-manager, etc.) |
| Cabinets | Restores all attached SDLC cabinets |
Each phase can be skipped independently with `--no-agent`, `--no-infrx`, or `--no-cabinets` if you need to re-run only part of the sequence.
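The interplay of the skip flags can be modeled as follows. This is a sketch of the documented phase ordering, not Codiac's implementation:

```python
def phases_to_run(no_provision=False, no_agent=False, no_infrx=False, no_cabinets=False):
    """Return the restore phases that would run, in order, given the skip flags."""
    skipped = {"provision": no_provision, "agent": no_agent,
               "infrx": no_infrx, "cabinets": no_cabinets}
    return [p for p in ("provision", "agent", "infrx", "cabinets") if not skipped[p]]

# Self-managed clusters always skip provisioning:
print(phases_to_run(no_provision=True))   # ['agent', 'infrx', 'cabinets']

# Re-run only the infrx phase:
print(phases_to_run(no_provision=True, no_agent=True, no_cabinets=True))  # ['infrx']
```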
## Updating credentials
Credentials can be updated at any time — useful when certificates rotate, tokens expire, or your cluster admin provides a new kubeconfig. Because they're not machine- or session-specific, you can rotate them from anywhere: your workstation, a CI pipeline, or a scheduled job.
```shell
cod cluster credentials-set my-cluster --file ~/.kube/my-cluster-new.yaml --silent
```
After updating, re-run the phases that need a fresh connection:
```shell
cod cluster restore my-cluster --no-provision --silent
```
## Full example, start to finish
```shell
# 1. Define the cluster
cod cluster define my-cluster \
  --provider other \
  --providerSubscription my-cluster \
  --location on-prem \
  --nodeSpec custom \
  --nodeQty 3 \
  --k8sVersion 1.29.0 \
  --resourceGroup "" \
  --silent

# 2. Register credentials
cod cluster credentials-set my-cluster --file ~/.kube/my-cluster.yaml

# 3. Attach existing cabinets (skip if you have none yet)
cod cabinet cluster attach \
  --enterprise acme --environment prod \
  --cabinet api-gateway --cluster my-cluster --silent

# 4. Restore (skip provision — you own the infrastructure)
cod cluster restore my-cluster --no-provision --silent
```
## Reference
| Goal | Command |
|---|---|
| Register a self-managed cluster | `cod cluster define --provider other` |
| Store or update connection credentials | `cod cluster credentials-set` |
| Attach a cabinet to the cluster | `cod cabinet cluster attach` |
| Install agent + infrx + cabinets | `cod cluster restore --no-provision` |
| Re-run a specific phase | `cod cluster restore --no-provision --no-agent` (etc.) |
| Update credentials | `cod cluster credentials-set` (run again) |