Release Management & Environment Promotion
The real point of Codiac: we remember everything. Deploy exact system states across environments. No manual YAML copying. No "what was deployed when?" mysteries. Reproducibility from day one.
The Traditional Problem
Manual Environment Promotion
Current reality for most teams:
# Engineer A deploys to staging (Friday afternoon)
kubectl apply -f api-deployment.yaml
kubectl apply -f worker-deployment.yaml
kubectl apply -f configmap.yaml
kubectl set image deployment/api api=myregistry/api:1.2.3
# Engineer B tries to promote to prod (Monday morning)
# Which files? Which versions? Which config changes?
# Was there a hotfix between Friday and Monday?
# Did someone manually scale replicas?
# Were environment variables changed?
# Result: 2 hours of Slack messages, Git archaeology, kubectl describe commands
What goes wrong:
- Configuration sprawl across multiple sources:
  - Some config in Helm values files (which override? which chart version?)
  - Some config in Kustomize overlays (base + patches + per-environment)
  - Some config in GitOps repos (ArgoCD ApplicationSets, Flux HelmReleases)
  - Secrets in SOPS-encrypted files (different keys per environment)
  - Image tags scattered across deployment YAML, CI/CD pipelines, and container registries
  - No single source of truth for "what's actually running"
- Manual promotion = human error:
  - Copy staging YAML to prod directory
  - Find-and-replace staging → prod
  - Update image tags (did you get all of them?)
  - Update resource limits (staging uses less RAM)
  - Update ConfigMap values (database URLs, API keys)
  - Apply in correct order (ConfigMap before Deployment)
  - One typo = production incident
- No audit trail:
  - "What was deployed to prod on Nov 15th?"
  - "Who changed the DATABASE_URL?"
  - "Why are we using version 1.2.3 instead of 1.2.4?"
  - Answer: Hours of Git log + kubectl + Slack archaeology
- Impossible to reproduce environments:
  - "It worked in staging last week, why is prod broken?"
  - "Can you make dev look exactly like prod for debugging?"
  - "What was the exact state when the bug was first reported?"
  - Answer: We don't know, we didn't write it down
How Codiac Solves This: Immutable System Versioning
Core Concept: Every Deploy Creates an Immutable Snapshot
# Deploy application to staging
cod asset deploy \
--cabinet staging \
--asset web-api \
--image api:1.2.3 \
--replicas 2
# Codiac automatically creates snapshot: staging-v1.2.3
# This snapshot captures EVERYTHING:
# - Container image: myregistry/api:1.2.3
# - Replicas: 2
# - Environment variables: All config from staging cabinet
# - Resource limits: CPU/memory requests and limits
# - Health probes: Liveness, readiness, startup
# - Network policies: Ingress rules, domain mappings
# - Storage: Persistent volumes, file stores
# - Metadata: Who deployed, when, from which CLI version
# View snapshots
cod snapshot list
What this means:
- Snapshot = Complete System State: Not just YAML files. The entire deployed state.
- Immutable: Can never be changed. staging-v1.2.3 will always mean the exact same configuration forever.
- Reproducible: Deploy this snapshot anywhere, get identical behavior.
- Auditable: Know exactly what was running at any point in time.
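To make "reproducible" concrete: the same snapshot can be deployed into any cabinet and yields the identical captured state, with only environment-scoped config differing. A minimal sketch, assuming qa and development cabinets exist (the cabinet names are illustrative):
# Deploy the same immutable snapshot into two different cabinets
cod snapshot deploy --version staging-v1.2.3 --cabinet qa
cod snapshot deploy --version staging-v1.2.3 --cabinet development
# Both cabinets now run the exact state captured in staging-v1.2.3;
# only environment-scoped values (DATABASE_URL, domains) differ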
Real-World Promotion Workflow
Scenario: Deploying New API Version from Staging to Production
Traditional approach (manual):
# 1. Find what's in staging
kubectl get deployment api -n staging -o yaml > /tmp/staging.yaml
kubectl get configmap api-config -n staging -o yaml > /tmp/config.yaml
kubectl get service api -n staging -o yaml > /tmp/service.yaml
kubectl get ingress api -n staging -o yaml > /tmp/ingress.yaml
# 2. Manually edit each file
# - Change namespace: staging → production
# - Change image tag (which one is in staging?)
# - Change resource limits (prod needs more)
# - Change replicas (prod needs 10, staging has 2)
# - Change ConfigMap (prod database URLs)
# - Change domain (staging.api.com → api.com)
# 3. Apply in correct order
kubectl apply -f /tmp/config.yaml
kubectl apply -f /tmp/service.yaml
kubectl apply -f /tmp/staging.yaml
kubectl apply -f /tmp/ingress.yaml
# 4. Hope nothing broke
# Time: 1-2 hours
# Error rate: High (typos, forgotten changes)
Codiac approach (one command):
# 1. Test in staging
cod asset deploy --cabinet staging --asset web-api --image api:1.2.3
# (Snapshot staging-v1.2.3 created automatically)
# 2. Verify staging works
curl https://staging-api.mycompany.com/health
# Run integration tests, check logs, monitor metrics
# 3. Promote exact snapshot to production
cod snapshot deploy --version staging-v1.2.3 --cabinet production
# Done. Production now runs EXACTLY what staging tested.
# Time: 30 seconds
# Error rate: Zero (no manual editing)
What happened:
- Codiac deployed the exact snapshot from staging to prod
- Environment-specific config (DATABASE_URL, REDIS_URL) automatically applied from prod environment settings
- Prod-specific scaling (10 replicas vs 2) applied from prod cabinet config
- Domain mappings (api.com vs staging-api.com) applied automatically
- Complete audit trail logged: who promoted, when, from which snapshot
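A quick post-promotion verification sketch, mirroring the staging check above (the production hostname is illustrative):
# Verify the promotion
cod snapshot list
# prod-v1.2.3 should now appear for the production cabinet
curl https://api.mycompany.com/health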
Comparison: Traditional vs Codiac
| Task | Traditional Approach | Codiac Approach |
|---|---|---|
| Promote staging to prod | 1-2 hours (find YAMLs, edit manually, apply in order) | 30 seconds (cod snapshot deploy --version staging-v1.2.3 --cabinet production) |
| Rollback production | Find old Git commit, reapply YAML (which commit was it?) | 10 seconds (cod snapshot deploy --version prod-v1.2.2 --cabinet production) |
| Answer "what's in prod?" | kubectl get + kubectl describe + Git log | cod snapshot list |
| Reproduce production bug | Impossible (no record of exact state) | cod snapshot deploy --version prod-v1.2.3 --cabinet development |
| Audit trail | Git log (only tracks YAML, not actual state) | Built-in (every snapshot logged with timestamp, user, metadata) |
| Onboard new engineer | "Here's kubectl, good luck" | cod snapshot list (see all snapshots) |
| Disaster recovery | Hope backups work, manually recreate | cod snapshot deploy --version last-known-good |
Enterprise Use Cases
1. Compliance & Audit Requirements
Problem:
"Your SOC 2 auditor asks: 'What was deployed to production on November 15th, 2024 at 3:42 PM?'"
Traditional answer:
# Check Git log (only shows YAML changes)
git log --since="2024-11-15 15:00" --until="2024-11-15 16:00"
# Check kubectl events (limited history)
kubectl get events --sort-by='.lastTimestamp' -n production
# Check container registry (image tags)
curl https://registry.mycompany.com/v2/api/tags/list
# Check ConfigMaps manually
kubectl get configmap api-config -n production -o yaml
# Result: 2 hours of work, incomplete answer
Codiac answer:
cod snapshot list
# Output shows all snapshots with deployment history
# Filter and view details in the web UI at app.codiac.io
# Complete answer in 5 seconds
2. Debugging: "It Worked Last Week"
Problem:
"The API was returning 200 OK last Thursday. Now it's 500 errors. What changed?"
Traditional approach:
# Check Git history
git log --since="last thursday" -- api/
# Check kubectl deployment history (limited to 10 revisions)
kubectl rollout history deployment/api -n production
# Check if someone manually scaled or edited
kubectl describe deployment api -n production
# Check if config changed
kubectl describe configmap api-config -n production
# Result: Partial answer, no way to perfectly recreate last Thursday's state
Codiac approach:
# List all snapshots
cod snapshot list
# Output:
# prod-v1.2.47 (current) - deployed 2 days ago
# prod-v1.2.46 - deployed 5 days ago (last Thursday)
# prod-v1.2.45 - deployed 8 days ago
# Deploy exact snapshot from last Thursday to test cabinet
cod snapshot deploy --version prod-v1.2.46 --cabinet test
# Now test environment is IDENTICAL to production last Thursday
# Run tests, compare behavior, identify exact difference
3. Onboarding New Engineers
Traditional onboarding:
# New engineer: "What's deployed in production?"
# Senior engineer: "Well, check the prod/ directory in Git"
# New engineer: "This YAML says image: api:1.0.0 but kubectl shows 1.2.3"
# Senior engineer: "Oh yeah, we manually updated that with kubectl set image"
# New engineer: "What about environment variables?"
# Senior engineer: "Some are in ConfigMaps, some are in the deployment, some we set manually"
# New engineer: "How do I see what's actually running?"
# Senior engineer: "kubectl get everything and piece it together"
Codiac onboarding:
# New engineer: "What's deployed in production?"
cod snapshot list
# Output shows all snapshots with versions
# View full details in the web UI at app.codiac.io
# New engineer now understands production in 60 seconds
# No tribal knowledge required; anyone can reproduce any environment
4. Disaster Recovery
Scenario: Primary region fails
Traditional DR:
# Panic: What was running in us-east?
# Check runbooks (outdated?)
# Check Git (which branch for prod?)
# Manually apply YAML to us-west cluster
# Hope you got everything
# Time: Hours
# Confidence: Low
Codiac DR:
# List current production snapshots
cod snapshot list
# Deploy exact snapshot to DR cabinet
cod snapshot deploy --version prod-current --cabinet us-west
# Result: Identical production environment in DR region
# Time: 2 minutes
# Confidence: 100% (immutable snapshot)
Advanced Promotion Workflows
Workflow 1: Progressive Rollout (Canary)
# 1. Deploy new version to staging
cod asset deploy --cabinet staging --asset web-api --image api:2.0.0
# Snapshot: staging-v2.0.0
# 2. Test in staging
# Run integration tests, monitor metrics
# 3. Deploy canary to production (10% traffic)
cod snapshot deploy --version staging-v2.0.0 --cabinet production-canary
# 4. Monitor canary
# Check error rates, latency, logs
# 5. If canary healthy, promote to full production
cod snapshot deploy --version staging-v2.0.0 --cabinet production
# 6. If canary unhealthy, instant rollback
cod snapshot deploy --version prod-v1.2.50 --cabinet production
Workflow 2: Multi-Environment Pipeline
# Development → Staging → Production
# 1. Deploy to dev
cod asset deploy --cabinet development --asset web-api --image api:2.1.0
# Snapshot: dev-v2.1.0
# 2. After dev testing, promote to staging
cod snapshot deploy --version dev-v2.1.0 --cabinet staging
# Snapshot: staging-v2.1.0
# 3. After staging testing, promote to production
cod snapshot deploy --version staging-v2.1.0 --cabinet production
# Snapshot: prod-v2.1.0
# At each stage, environment-specific config applied automatically
# Complete audit trail: dev → staging → prod lineage preserved
Workflow 3: Rollback with Investigation
# Production incident detected
# Current version: prod-v2.1.5
# 1. Instant rollback to last known good
cod snapshot deploy --version prod-v2.1.4 --cabinet production
# 2. Deploy broken version to test cabinet for debugging
cod snapshot deploy --version prod-v2.1.5 --cabinet test
# 3. Investigate in test (doesn't affect production)
cod cluster connect test
kubectl logs -f deployment/api -n test
# 4. Fix found, deploy hotfix to staging
cod asset deploy --cabinet staging --asset web-api --image api:2.1.6
# 5. Promote hotfix to production
cod snapshot deploy --version staging-v2.1.6 --cabinet production
Snapshot Management
Viewing Snapshots
# List all snapshots
cod snapshot list
# View snapshots in JSON format
cod snapshot list --output json
# For detailed filtering and search, use the web UI at app.codiac.io
Tagging Snapshots
# Manage snapshot tags (interactive)
cod snapshot tags
# Add tags to specific versions (non-interactive)
cod snapshot tags --silent --addTags stable --filterVersions prod-v1.2.45
# Remove tags from specific versions
cod snapshot tags --silent --removeTags beta --filterVersions prod-v1.2.45
# Filter by cabinet when managing tags
cod snapshot tags --silent --addTags stable --filterCabinets production
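One plausible refinement is promoting a version from beta to stable in a single pass. This assumes --addTags and --removeTags can be combined in one invocation, which is worth verifying with cod snapshot tags --help:
# Swap beta for stable on one version (flag combination assumed, not confirmed)
cod snapshot tags --silent --addTags stable --removeTags beta --filterVersions prod-v1.2.46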
Snapshot Comparison
Compare snapshots visually in the web UI at app.codiac.io. The UI provides detailed diff views showing:
- Asset version changes
- Configuration changes
- Resource allocation changes
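If you prefer staying in the terminal, a rough approximation is possible with the JSON output shown earlier. The version field name below is hypothetical; adjust it to whatever the actual JSON schema exposes:
# Rough local diff of two snapshots from the JSON listing
cod snapshot list --output json > snapshots.json
jq '.[] | select(.version == "prod-v1.2.45")' snapshots.json > a.json
jq '.[] | select(.version == "prod-v1.2.46")' snapshots.json > b.json
diff a.json b.json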
Integration with CI/CD
GitHub Actions Example
name: Deploy to Production

on:
  push:
    tags:
      - 'v*'

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Build and Push Image
        run: |
          docker build -t myregistry/api:${{ github.ref_name }} .
          docker push myregistry/api:${{ github.ref_name }}

      - name: Deploy to Staging
        run: |
          cod asset deploy \
            --cabinet staging \
            --asset web-api \
            --image api:${{ github.ref_name }}

      - name: Run Integration Tests
        run: |
          npm run test:integration -- --env staging

      - name: Promote to Production
        run: |
          cod snapshot deploy --version staging-latest --cabinet production
GitLab CI Example
stages:
  - build
  - deploy-staging
  - test
  - deploy-production

deploy-staging:
  stage: deploy-staging
  script:
    - cod asset deploy --cabinet staging --asset web-api --image $CI_REGISTRY_IMAGE:$CI_COMMIT_TAG
  only:
    - tags

integration-tests:
  stage: test
  script:
    - npm run test:integration -- --env staging

deploy-production:
  stage: deploy-production
  script:
    - cod snapshot deploy --version staging-latest --cabinet production
  when: manual
  only:
    - tags
Security & Compliance Benefits
1. Complete Audit Trail
What Codiac tracks automatically:
- Who: User email, CLI version, API key used
- What: Exact snapshot deployed (images, config, resources)
- When: Timestamp (UTC) with millisecond precision
- Where: Environment, cluster, cabinet, asset
- Why: Optional deployment message/reason
- How: Deployment method (CLI, API, CI/CD pipeline)
- Result: Success/failure, error messages, rollback events
Audit query examples:
# List all snapshots
cod snapshot list
# View detailed audit logs in the web UI at app.codiac.io
# Filter by user, date, status, and more in the UI
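For scripted audits, the JSON output can be filtered locally with jq. The deployedAt and deployedBy field names below are hypothetical stand-ins for whatever the real schema exposes:
# Example: everything deployed on 2024-11-15 (field names are illustrative)
cod snapshot list --output json \
| jq '.[] | select(.deployedAt | startswith("2024-11-15")) | {version, deployedAt, deployedBy}'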
2. Separation of Concerns
Problem: In traditional setups, developers often have direct kubectl access to production (scary).
Codiac solution:
# Developers deploy to staging (full access)
cod asset deploy --cabinet staging --asset web-api --image api:1.2.3
# Platform team promotes to production (controlled)
cod snapshot deploy --version staging-v1.2.3 --cabinet production
# Benefits:
# - Developers can't accidentally break production
# - Platform team controls production deployments
# - Snapshots ensure staging tests exactly what production will run
# - RBAC enforced at Codiac level (not just kubectl)
3. Immutable Infrastructure Compliance
Compliance requirement: "Demonstrate that production infrastructure hasn't been manually modified"
Codiac proof:
# List current production snapshots
cod snapshot list
# View deployment state in the web UI at app.codiac.io
# The UI shows drift detection and compliance status
# To restore desired state, redeploy the snapshot:
cod snapshot deploy --version prod-v1.2.45 --cabinet production
FAQ: Release Management with Codiac
Q: What happens if I manually change something with kubectl?
A: Codiac detects drift. View drift status in the web UI at app.codiac.io. You can reapply the snapshot to fix drift using cod snapshot deploy --version <version> --cabinet <cabinet>, or create a new snapshot if the manual change was intentional.
Q: Can I deploy different versions to different replicas (canary)?
A: Yes, use separate cabinets for canary vs stable traffic:
- Deploy new version to a canary cabinet
- Deploy stable version to production cabinet
- Configure weighted routing in the web UI at app.codiac.io
Q: How long are snapshots stored?
A: Forever. Snapshots are immutable and never deleted automatically. You can manage snapshots via the web UI.
Q: Can I promote from prod back to staging (reverse promotion)?
A: Yes. Snapshots are environment-agnostic. You can deploy any snapshot to any cabinet:
cod snapshot deploy --version prod-v1.2.45 --cabinet staging
Q: What about secrets? Are they in snapshots?
A: Snapshots store references to secrets (e.g., "AWS Secrets Manager: prod/api-key"), not the actual secret values. When you deploy a snapshot, Codiac fetches secrets from your configured secret store at deploy time.
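Conceptually, deploy-time resolution looks like the generic sketch below. This is not Codiac's internal code; the AWS CLI call simply illustrates resolving a stored reference into a value at deploy time:
# The snapshot stores only the reference "prod/api-key", never the value.
# At deploy time, the value is fetched from the configured store, e.g.:
aws secretsmanager get-secret-value \
--secret-id prod/api-key \
--query SecretString \
--output text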
Q: How do I handle environment-specific configuration?
A: Use environment and cabinet hierarchy:
# Configuration priority (highest to lowest):
# 1. Asset-level config
# 2. Cabinet-level config
# 3. Environment-level config
# 4. Enterprise-level config
# Example: DATABASE_URL differs per environment
cod config set
# Select environment scope → prod → DATABASE_URL → postgres://prod-db
cod config set
# Select environment scope → staging → DATABASE_URL → postgres://staging-db
# When you deploy snapshot to prod, it gets prod DATABASE_URL automatically
# When you deploy same snapshot to staging, it gets staging DATABASE_URL
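Because asset-level config sits at the top of that hierarchy, a per-asset value wins everywhere the asset is deployed. A sketch using the same interactive flow (the LOG_LEVEL value is illustrative):
cod config set
# Select asset scope → web-api → LOG_LEVEL → debug
# web-api now gets LOG_LEVEL=debug in every cabinet; environment-scoped
# values like DATABASE_URL still vary per environment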
Q: Can I see a visual timeline of deployments?
A: Yes, via Codiac web UI:
# Open web UI
cod ui
# Navigate to: Environments → Production → Timeline
# Shows visual deployment history with snapshots, rollbacks, and changes over time
Q: How do I integrate with approval workflows?
A: Use CI/CD pipeline with manual approval gates:
# GitLab example
deploy-production:
  stage: deploy
  script:
    - cod snapshot deploy --version staging-v1.2.3 --cabinet production
  when: manual  # Requires manual approval in GitLab UI
  only:
    - main
Real Customer Testimonials
"We used to spend 4-6 hours every release manually copying YAML between environments. Now it's one command. The audit trail alone saved us during our SOC 2 audit."
- Platform Engineer, Series B SaaS Company
"The 'what was deployed when' problem was killing us. Engineers would manually kubectl edit things and we'd lose track. Codiac's immutable snapshots mean we can always recreate any production state."
- CTO, Healthcare Startup
"Onboarding new engineers used to take days of explaining our deployment process. Now we just say 'run cod snapshot list' and they see everything."
- Engineering Manager, E-commerce Platform
Getting Started with Release Management
Step 1: Deploy to Staging
# Create staging cabinet
cod cabinet create staging --environment staging
# Deploy application
cod asset deploy \
--cabinet staging \
--asset web-api \
--image api:1.0.0 \
--port 8080 \
--replicas 2
# Codiac automatically creates snapshot: staging-v1.0.0
Step 2: Test in Staging
# View deployed snapshots
cod snapshot list
# Access staging application
curl https://staging-api.mycompany.com/health
# Run integration tests
npm run test:integration -- --env staging
Step 3: Promote to Production
# Create production cabinet (if not exists)
cod cabinet create production --environment prod
# Promote staging snapshot to production
cod snapshot deploy --version staging-v1.0.0 --cabinet production
# Codiac automatically:
# - Creates prod-v1.0.0 snapshot
# - Applies prod-specific configuration
# - Scales to prod replica count
# - Maps to prod domains
# - Logs complete audit trail
Step 4: Verify Production
# View production snapshots
cod snapshot list
# Access production application
curl https://api.mycompany.com/health
Step 5: Rollback if Needed
# If issues detected, instant rollback
cod snapshot deploy --version prod-v0.9.9 --cabinet production
# Rollback completes in seconds
Migration from Traditional Release Management
Before: Manual Promotion Process
# Old workflow (2 hours, error-prone):
# 1. Find staging YAML files
cd k8s/staging/
ls -la # Which files do I need?
# 2. Copy to prod directory
cp deployment.yaml ../prod/deployment.yaml
cp service.yaml ../prod/service.yaml
cp configmap.yaml ../prod/configmap.yaml
cp ingress.yaml ../prod/ingress.yaml
# 3. Edit each file manually
vi ../prod/deployment.yaml
# Change namespace: staging → production
# Change image tag (check container registry for correct tag)
# Change replicas: 2 → 10
# Change resource limits (prod needs more)
vi ../prod/configmap.yaml
# Change DATABASE_URL
# Change REDIS_URL
# Change API_KEYS
# Change LOG_LEVEL
vi ../prod/ingress.yaml
# Change domain: staging.api.com → api.com
# Change TLS certificate references
# 4. Apply to production
kubectl apply -f ../prod/configmap.yaml
kubectl apply -f ../prod/service.yaml
kubectl apply -f ../prod/deployment.yaml
kubectl apply -f ../prod/ingress.yaml
# 5. Verify (hope nothing broke)
kubectl get pods -n production -w
# 6. Document in Slack what was deployed
# "Deployed API v1.2.3 to production (I think?)"
After: Codiac Promotion
# New workflow (30 seconds, zero errors):
# 1. Promote tested snapshot
cod snapshot deploy --version staging-v1.2.3 --cabinet production
# Done. Complete audit trail automatic.
Related Documentation
- System Versioning Guide - Deep dive into snapshots and versions
- Configuration Management - Environment-specific configuration
- Asset Management Guide - Deploying and updating applications
- Cluster Management - Multi-cluster deployments
- FAQ: Release Management - Common questions
Get Help
- Migration assistance: chris@codiac.io
- Schedule a workshop: codiac.io
- Community support: Discord
Last updated: 2026-01-23