
Multi-Cluster Management

Manage multiple Kubernetes clusters as a single fleet. Deploy applications across AWS, Azure, GCP, and on-prem clusters with the same commands. Codiac abstracts cluster complexity, letting you focus on workloads instead of infrastructure.

Interactive CLI

The Codiac CLI features an interactive mode that guides you through each command. You don't need to memorize flags; just run the command and the CLI prompts you for what it needs.


The Problem: Cluster Sprawl

Many organizations run 5-20+ Kubernetes clusters:

  • Multiple regions (US-East, US-West, EU-West) for latency/compliance
  • Multiple clouds (AWS for compute, GCP for analytics, on-prem for legacy)
  • Multiple environments (dev, staging, prod) with dedicated clusters
  • Isolated clusters for teams, customers, or regulatory zones

Traditional management challenges:

  • Per-cluster kubectl context switching (kubectl config use-context prod-us-east)
  • Duplicate YAML manifests for each cluster
  • Manual deployment coordination across clusters
  • Configuration drift between clusters
  • No unified view of application status across the fleet

Result: "Multi-cluster has become the operational baseline by 2026" (Google GKE Fleet Management), but most teams manage clusters individually.


How Codiac Solves It: Fleet-Wide Operations

Codiac treats clusters as cattle, not pets.

Instead of:

# ❌ Manual per-cluster deployment
kubectl config use-context prod-us-east
kubectl apply -f deployment.yaml

kubectl config use-context prod-us-west
kubectl apply -f deployment.yaml

kubectl config use-context prod-eu-west
kubectl apply -f deployment.yaml
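
Even scripted, the traditional approach only wraps the same per-cluster steps in a loop (the context names below are placeholders for your own kubeconfig contexts):

# ❌ Still one kubectl invocation per cluster, plus a context list to maintain
for ctx in prod-us-east prod-us-west prod-eu-west; do
  kubectl --context "$ctx" apply -f deployment.yaml
done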

Do this with Codiac:

# ✅ Deploy via interactive CLI
codiac asset deploy
# The CLI guides you through asset, version, and cabinet selection

Or deploy a snapshot to multiple clusters via the web UI at app.codiac.io.

Result: Same application deployed across your entire fleet with consistent configuration.


Key Capabilities

1. Unified Cluster Registry

List all the clusters registered in Codiac:

codiac cluster list

The CLI displays all your connected clusters with their status and details. You can also view your fleet in the web UI at app.codiac.io.

No kubectl context switching - Codiac knows how to reach every cluster.


2. Cross-Cluster Deployments

Deploy snapshots to multiple clusters:

Use the Codiac web UI to deploy a snapshot to multiple clusters simultaneously, or deploy via CLI:

codiac snapshot deploy

The CLI prompts you to select:

  • Which snapshot version to deploy
  • Which cabinet to target

Codiac handles:

  • Per-cluster configuration injection (region-specific settings)
  • Health checks per cluster
  • Rollback via previous snapshot

3. Fleet-Wide Visibility

See application status across all clusters:

Use the Codiac web UI at app.codiac.io to view your entire fleet in one dashboard:

  • All clusters and their health status
  • Deployed assets per cabinet
  • Configuration differences between clusters
  • Deployment history and rollback options

No need to query each cluster individually.


4. Configuration Inheritance Per Cluster

Same application, different cluster-specific configuration:

codiac config set

The CLI prompts you for:

  • The scope (enterprise, environment, cabinet, or asset level)
  • The configuration key and value

Configuration follows a hierarchical model:

  • Enterprise-level: Applies to all environments
  • Environment-level: Applies to all cabinets in that environment
  • Cabinet-level: Applies to specific cabinet
  • Asset-level: Applies to specific asset

Lower levels override higher levels, so you can set defaults at the environment level and override for specific cabinets or assets as needed.
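
As a rough mental model (not Codiac's actual implementation), resolution works like a most-specific-wins lookup. The sketch below uses an illustrative LOG_LEVEL key and a plain shell variable for each scope:

# Conceptual sketch only: each variable stands for a value set at that scope;
# an empty string means "not set at this level".
enterprise_log_level="info"
environment_log_level="warn"
cabinet_log_level=""
asset_log_level=""

# Walk from most specific to least specific and take the first value found.
effective="${asset_log_level:-${cabinet_log_level:-${environment_log_level:-$enterprise_log_level}}}"
echo "effective LOG_LEVEL: $effective"   # prints "warn" (environment overrides enterprise)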


Common Use Cases

Use Case 1: Multi-Region for High Availability

Goal: Application running in 3 AWS regions for disaster recovery.

Workflow:

  1. Create or capture clusters in each region using codiac cluster create or codiac cluster capture
  2. Create cabinets for your workloads in each cluster
  3. Deploy your application to all regions using snapshots
  4. Configure DNS to route users to the nearest region

Result: Users automatically routed to nearest region. If one region fails, traffic reroutes.
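
If the regional clusters already exist, step 1 can be scripted with the capture flags shown under Fleet Management Operations below (the cluster names, AWS account ID, and regions here are placeholders):

# Capture one existing EKS cluster per region
for region in us-east-1 us-west-2 eu-west-1; do
  codiac cluster capture -n "payments-$region" -p aws -s 123456789012 -l "$region"
done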


Use Case 2: Multi-Cloud (AWS + Azure + GCP)

Goal: Avoid vendor lock-in by running on multiple clouds.

Workflow:

  1. Capture existing clusters from each cloud provider:
    codiac csp login
    codiac cluster capture
  2. Set cloud-specific configuration at the cabinet level:
    codiac config set
    # Select cabinet scope, then set cloud-specific values
  3. Deploy same application to all clouds via snapshot

Configuration hierarchy handles the differences - set AWS-specific values for AWS cabinets, Azure-specific for Azure cabinets.
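
A minimal sketch of step 1 in scripted mode, reusing the capture flags documented under Fleet Management Operations (all account IDs, names, and regions are placeholders):

# Authenticate to your cloud provider(s) first; you may need to run
# codiac csp login once per provider
codiac csp login

# Capture one existing cluster per cloud
codiac cluster capture -n api-aws -p aws -s 123456789012 -l us-east-1
codiac cluster capture -n api-azure -p azure -s your-subscription-id -l eastus -g your-resource-group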


Use Case 3: Environment Isolation (Dev/Staging/Prod Clusters)

Goal: Separate clusters per environment to prevent dev from affecting prod.

Workflow:

  1. Create environments:
    codiac environment create
    # Create dev, staging, prod environments
  2. Assign clusters to environments during capture or creation
  3. Deploy progressively:
    • Deploy to dev, validate
    • Deploy same snapshot to staging, validate
    • Deploy to prod

Benefit: Blast radius contained per environment. Dev cluster crash doesn't touch prod.
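
A minimal promotion sketch: codiac snapshot deploy is interactive (it prompts for the snapshot and target cabinet), and the health-check URLs are placeholders for your own endpoints:

# Deploy to dev, smoke-test, then promote the same snapshot onward
codiac snapshot deploy                        # select snapshot, target the dev cabinet
curl -fsS https://dev.example.com/healthz     # validate before promoting

codiac snapshot deploy                        # same snapshot, staging cabinet
curl -fsS https://staging.example.com/healthz

codiac snapshot deploy                        # same snapshot, prod cabinet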


Use Case 4: Team-Specific Clusters

Goal: Each team gets dedicated cluster for autonomy.

Workflow:

  1. Create/capture clusters for each team
  2. Set up RBAC so teams can only access their clusters:
    codiac auth user invite
    # Assign roles with appropriate permissions
  3. Teams deploy independently to their own cabinets

Access control: Backend team can't modify frontend cluster (RBAC enforced by Codiac).


Fleet Management Operations

Add New Cluster to Fleet

Two options: create a new cluster or capture an existing one

Option A: Create new cluster via Codiac

codiac cluster create

The CLI guides you through provider selection, region, and cluster configuration.

Option B: Capture existing cluster (recommended for migration)

# First, authenticate to your cloud provider
codiac csp login

# Then capture your cluster
codiac cluster capture

The CLI prompts you for provider, account/subscription, region, and cluster name.

Scripted mode (for automation)

# AWS
codiac cluster capture -n my-existing-cluster -p aws -s 123456789012 -l us-east-1

# Azure
codiac cluster capture -n my-existing-cluster -p azure -s your-subscription-id -l eastus -g your-resource-group

Codiac installs relay components (lightweight agent) in the cluster for management.


Remove Cluster from Fleet

# Remove cluster from Codiac management (doesn't delete the cluster itself)
codiac cluster forget

# Destroy cluster entirely
codiac cluster destroy

The CLI prompts you to select which cluster to remove.


Cluster Health Monitoring

View cluster health in the web UI:

The Codiac dashboard at app.codiac.io shows:

  • Cluster status (healthy, warning, unreachable)
  • Node count and resource utilization
  • Deployed workloads per cluster

Alerts: Codiac integrates with monitoring tools (Datadog, Prometheus) to alert on cluster health issues.


Comparison: Codiac vs Alternatives

| Feature | Codiac | Google GKE Fleet | Azure Fleet Manager | Rancher |
| --- | --- | --- | --- | --- |
| Multi-cloud | ✅ AWS, Azure, GCP, on-prem | ❌ GKE only | ❌ AKS only | ✅ Yes |
| Unified deployments | ✅ Single command | ⚠️ Config Sync (GitOps) | ⚠️ Propagation policies | ⚠️ Catalog apps |
| Cross-cluster config | ✅ Hierarchical | ❌ Manual | ⚠️ Limited | ⚠️ ConfigMaps |
| Cluster provisioning | ✅ Built-in | ✅ GKE Autopilot | ✅ AKS API | ⚠️ Via Terraform |
| Certificate management | ✅ Automatic | ⚠️ Manual cert-manager | ⚠️ Manual cert-manager | ⚠️ Manual |
| Cluster upgrades | ✅ Blue/green | ⚠️ In-place | ⚠️ In-place | ⚠️ In-place |
| Configuration per cluster | Set once at environment level, inherited by all clusters automatically | Reconfigure per cluster or complex GitOps sync | Per-cluster propagation policies | Per-cluster or namespace ConfigMaps |
| Documentation overhead | Zero - single source of truth, self-documenting snapshots | High - per-cluster runbooks, team wikis, GitOps repos | High - Azure-specific docs per cluster | High - per-cluster catalogs, app configs |

Key advantages:

  • Codiac works across all clouds and on-prem, while cloud-native fleet managers lock you to one provider
  • Configure once, inherit everywhere - no repeated configuration across clusters


Fleet-Wide Configuration Without Repetition

Traditional fleet management: Configure monitoring, logging, secrets, ingress, RBAC separately for each cluster. 10 clusters = 10× the configuration work.

Codiac fleet approach:

# Set monitoring config once at environment level
codiac config set
# Select environment scope → applies to all cabinets in that environment

Time savings: 30 minutes per cluster × 10 clusters = 5 hours saved on initial setup. Every future configuration change = 1 command instead of 10.

With Codiac's hierarchical configuration, you never configure the same thing twice. Environment-level settings automatically propagate to all clusters in that environment. Add a new cluster? It inherits the full configuration automatically. No duplicated configuration, no cluster-specific documentation, no configuration drift.


Organizing Your Fleet

By Environment

Codiac's environment hierarchy naturally groups clusters:

  • Development clusters in the dev environment
  • Staging clusters in the staging environment
  • Production clusters in the prod environment

Configuration set at the environment level automatically applies to all clusters in that environment.

By Purpose

Use cabinets to organize workloads logically:

  • Geographic - prod-us, prod-eu, prod-apac cabinets
  • Team-based - backend-api, frontend-web, data-pipeline cabinets
  • Compliance zones - pci-workloads, hipaa-workloads cabinets

Standardizing Cluster Configuration

When capturing or creating clusters, establish standards:

  • Use the same Kubernetes version across production
  • Deploy consistent add-ons (cert-manager, ingress controllers)
  • Apply uniform RBAC policies

The web UI provides visibility into cluster configuration differences to help maintain consistency.
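
One way to spot version drift is a quick kubectl audit across your clusters (the context names are placeholders for your own kubeconfig contexts):

# Print node Kubernetes versions per cluster to spot drift
for ctx in prod-us-east prod-us-west prod-eu-west; do
  echo "== $ctx =="
  kubectl --context "$ctx" get nodes \
    -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion
done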


Federation vs Fleet Management

Kubernetes Federation (KubeFed): Control plane manages resources across clusters.

Codiac Fleet Management: Application-focused orchestration layer above Kubernetes.

| Aspect | KubeFed | Codiac |
| --- | --- | --- |
| Abstraction level | Low (Kubernetes resources) | High (Applications/workloads) |
| Learning curve | High (new CRDs, federation concepts) | Low (simple CLI) |
| Cross-cloud | Possible but complex | Built-in |
| Deployment model | Federated resources | Snapshot-based deployments |
| Use case | Low-level resource replication | Application lifecycle management |

Most teams prefer Codiac's approach: a higher-level abstraction that is easier to use.


Troubleshooting

Cluster Unreachable

Symptoms: Cluster shows "unreachable" in the web UI or codiac cluster list

Causes:

  1. Network connectivity - VPN down, firewall blocking Codiac relay
  2. Relay component crashed - Kubernetes pod failure
  3. Credentials expired - Cloud provider credentials rotated

Debug:

# Check if you can reach the cluster directly
kubectl get nodes --context your-cluster-context

# Re-authenticate to cloud provider if needed
codiac csp login

# View cluster status
codiac cluster list

If the relay component needs reinstallation, re-run codiac cluster capture for the cluster.
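
To confirm the relay agent's pods are actually running, you can search across namespaces; the exact namespace and pod names depend on your installation, so the grep pattern below is an assumption:

# Look for Codiac relay pods without assuming a namespace name
kubectl get pods --all-namespaces --context your-cluster-context | grep -i codiac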


Configuration Not Applied to Specific Cluster

Symptoms: Asset deployed to multiple clusters, but one cluster has wrong config.

Cause: Configuration hierarchy override or missing cabinet-level config.

Debug:

# View configuration for an asset
codiac config view
# Select the asset and cabinet to see effective configuration

Check if there's a cabinet-level or asset-level override that's different from the environment default.



FAQ

Q: Can I manage clusters across different cloud providers?

A: Yes. Codiac supports AWS EKS, Azure AKS, GCP GKE, and self-managed clusters (kubeadm, k3s, on-prem).

Q: Do I need a Codiac agent running in every cluster?

A: Yes. The Codiac relay component (lightweight agent) runs in each cluster for management communication. It's installed automatically when you create/connect a cluster.

Q: Can I deploy different versions of an application to different clusters?

A: Yes. Specify different image versions per cluster or use cluster-specific overrides.

Q: How does Codiac handle cluster failures?

A: Codiac detects unhealthy clusters and can automatically reroute traffic (if geo-routing enabled). You can also manually fail over by promoting a different cluster.

Q: Can I use Codiac for edge clusters (hundreds of small clusters)?

A: Yes, but consider performance. Codiac is optimized for 5-50 clusters. For 100+ edge clusters, contact support for architecture guidance.

Q: Does Codiac replace Rancher/GKE Fleet/Azure Fleet Manager?

A: Codiac provides overlapping functionality but focuses on application lifecycle management, not just cluster lifecycle. You can use Codiac alongside cloud-native fleet managers.


Start managing your cluster fleet: Connect your first cluster