Stop Rebuilding Docker Images for Config Changes

The Problem: Config-on-Build Is Wasting Your Time

Scenario: You need to change a database connection string in your production API. With config-on-build, here's what happens:

  1. Edit the config file in your codebase
  2. Commit to Git
  3. Trigger CI/CD pipeline
  4. Rebuild the Docker image (even though the code didn't change)
  5. Wait 5-10 minutes for the build
  6. Push new image to registry
  7. Deploy to production
  8. Realize you made a typo in the connection string
  9. Repeat steps 1-7

Total time: 20-30 minutes for a one-line configuration change.

The Waste:

  • Rebuilding identical code just for config changes
  • Slower feedback loops (20 min vs 30 seconds)
  • Different images for dev/staging/prod (configuration drift risk)
  • Bloated Git history with config-only commits
  • Increased risk (each deployment is a new, untested artifact)

There's a better way: Config-on-deploy separates configuration from code, letting you deploy the same artifact across all environments with environment-specific settings applied at runtime.


Config-on-Build vs Config-on-Deploy

Config-on-Build (Anti-Pattern)

Configuration is baked into the Docker image at build time.

Example:

# Dockerfile - BAD: Config baked into image
FROM node:18
WORKDIR /app
COPY . .

# Config hardcoded at build time
ENV DATABASE_URL=postgres://prod-db:5432/myapp
ENV LOG_LEVEL=info

RUN npm ci --production
CMD ["node", "server.js"]

Problems:

  • Can't reuse images: Need separate images for dev/staging/prod
  • Slow changes: Must rebuild for every config change
  • Secrets in images: Database passwords, API keys baked into layers
  • Configuration drift: Each environment runs a different artifact, so the image in prod is never the one you tested
  • Rollback complexity: Old image may have outdated config

Config-on-Deploy (Best Practice)

Configuration is injected at deployment time, allowing the same image to run in any environment.

Example:

# Dockerfile - GOOD: No hardcoded config
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm ci --production
COPY . .

# No ENV variables - provided at runtime
CMD ["node", "server.js"]

Deployment with runtime config:

# Dev environment
docker run -e DATABASE_URL=postgres://dev-db:5432/myapp \
  -e LOG_LEVEL=debug \
  myapp:1.2.3

# Prod environment (same image!)
docker run -e DATABASE_URL=postgres://prod-db:5432/myapp \
  -e LOG_LEVEL=info \
  myapp:1.2.3
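
With more than a couple of variables, the same runtime injection works with Docker's --env-file flag. A small sketch; prod.env is a hypothetical file holding the KEY=value pairs shown above:

# Nothing baked into the image; values live in a file per environment
docker run --env-file ./prod.env myapp:1.2.3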

Benefits:

  • One image, many environments: Same artifact tested in dev runs in prod
  • Fast config changes: Update ConfigMap/Secret, restart pods (30 seconds)
  • No secrets in images: Pull from environment variables or secret stores
  • Zero configuration drift: Guaranteed same code across all environments
  • Easy rollback: Old image works with new config

The Twelve-Factor App Principle

Config-on-deploy follows the Twelve-Factor App methodology:

III. Config: Store config in the environment

"An app's config is everything that is likely to vary between deploys (staging, production, developer environments, etc). Apps sometimes store config as constants in the code. This is a violation of twelve-factor, which requires strict separation of config from code."

What is "config"?

  • Database connection strings
  • API keys and secrets
  • Feature flags
  • Logging levels
  • External service URLs
  • Resource limits (memory, CPU)

What is NOT config?

  • Application code
  • Internal routing logic
  • Business logic
  • Dependencies (listed in package.json, requirements.txt, etc.)
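
A practical consequence of this separation: the application should read everything in the first list from its environment and refuse to start when something required is missing. A minimal sketch (the variable names are examples):

// Fail fast at startup if required config is absent, so a bad deploy
// surfaces immediately instead of at the first database call.
const required = ['DATABASE_URL', 'API_KEY'];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
  console.error(`Missing required environment variables: ${missing.join(', ')}`);
  process.exit(1);
}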

Config-on-Deploy in Kubernetes

Kubernetes provides two main mechanisms for runtime configuration:

1. ConfigMaps (Non-Sensitive Configuration)

Store configuration as key-value pairs, injected into pods as environment variables or files.

Create ConfigMap:

kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=API_TIMEOUT=30s \
  --from-literal=FEATURE_NEW_UI=true
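
For reference, the declarative equivalent, which can be committed to Git (ConfigMap values are always strings, so booleans need quoting):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
  API_TIMEOUT: 30s
  FEATURE_NEW_UI: "true"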

Inject into Pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  template:
    spec:
      containers:
        - name: api
          image: myapp:1.2.3 # Same image for all environments
          envFrom:
            - configMapRef:
                name: app-config # Load all config at runtime

Update Config Without Rebuilding:

# Update config
kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=debug \
  --from-literal=API_TIMEOUT=60s \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart pods to pick up new config
kubectl rollout restart deployment/my-api

Result: Config updated in 30 seconds, no rebuild required.


2. Secrets (Sensitive Configuration)

Store sensitive data (passwords, API keys) securely, injected at runtime.

Create Secret:

kubectl create secret generic app-secrets \
  --from-literal=DATABASE_PASSWORD=super-secret-password \
  --from-literal=API_KEY=abc123xyz

Inject into Pod:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  template:
    spec:
      containers:
        - name: api
          image: myapp:1.2.3
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DATABASE_PASSWORD
            - name: API_KEY
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: API_KEY

Best Practice: Use external secret managers (AWS Secrets Manager, Azure Key Vault, HashiCorp Vault) instead of Kubernetes secrets for production.
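
For example, with the External Secrets Operator the cluster pulls values from the external store and materializes them as a normal Kubernetes Secret. A minimal sketch, assuming a ClusterSecretStore named aws-secrets-manager is already configured; the store and key names are illustrative:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: ClusterSecretStore
  target:
    name: app-secrets # Kubernetes Secret the operator creates and keeps in sync
  data:
    - secretKey: DATABASE_PASSWORD
      remoteRef:
        key: prod/myapp/database
        property: password

Because the operator writes a regular Secret, the Deployment manifest above doesn't change.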


Hierarchical Configuration (Advanced)

For teams managing multiple environments and services, hierarchical configuration reduces repetition.

Problem: Setting LOG_LEVEL=info individually for 50 services across 3 environments = 150 config entries.

Solution: Set config at the environment level, inherit by all services.

Example with Codiac:

# Set at environment level (inherited by all cabinets/assets)
codiac config set
# Select environment scope → prod → LOG_LEVEL → info

# Override for specific asset if needed
codiac config set
# Select asset scope → troublesome-service in prod → LOG_LEVEL → debug

Result:

  • Set once, inherited by all services
  • Override only when needed (exception-based configuration)
  • No configuration drift (all services use same base config)

Learn more about Codiac's Dynamic Configuration →


Real-World Example: Before and After

Before: Config-on-Build (Old Way)

Workflow:

  1. Need to change MAX_CONNECTIONS from 100 to 150
  2. Edit config.js in codebase
  3. Commit to Git: git commit -m "Increase max connections to 150"
  4. CI/CD pipeline triggered
  5. Docker build runs (6 minutes)
  6. Push to registry (1 minute)
  7. Deploy to staging (2 minutes)
  8. Test manually
  9. Deploy to prod (2 minutes)

  • Total time: 15-20 minutes
  • Git commits: 1 config-only commit polluting history
  • Deployments: 2 (staging + prod)
  • Risk: a new, untested artifact deployed to prod


After: Config-on-Deploy (New Way)

Workflow:

  1. Update ConfigMap
    kubectl patch configmap app-config -p '{"data":{"MAX_CONNECTIONS":"150"}}'
  2. Restart pods to pick up new config
    kubectl rollout restart deployment/my-api

  • Total time: 30 seconds
  • Git commits: 0 (config change tracked separately)
  • Deployments: 0 (same image, new config)
  • Risk: minimal (only config changed, code unchanged)


Common Objections & Responses

"But our config needs to be in Git for version control!"

Response: Config should be version controlled, but separately from application code.

Solution:

  • Store Kubernetes manifests (including ConfigMaps/Secrets) in separate Git repo
  • Use GitOps tools (ArgoCD, Flux) to track config changes
  • Codiac tracks all config changes with full audit trail and version history
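
A concrete sketch of the separate-repo approach using Kustomize's configMapGenerator (file and value names are illustrative):

# kustomization.yaml in a config-only Git repo
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
configMapGenerator:
  - name: app-config
    literals:
      - LOG_LEVEL=info
      - MAX_CONNECTIONS=100

Kustomize appends a content hash to the generated ConfigMap's name, so a config change in Git automatically rolls the Deployments that reference it.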

Benefits:

  • Config versioned independently from code
  • Rollback config without rolling back code
  • Clear separation of concerns

"Environment variables are messy with dozens of config values!"

Response: True! Use config files mounted as volumes instead.

Example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-file
data:
  config.json: |
    {
      "database": {
        "host": "prod-db.example.com",
        "port": 5432,
        "maxConnections": 100
      },
      "logging": {
        "level": "info",
        "format": "json"
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  template:
    spec:
      containers:
        - name: api
          image: myapp:1.2.3
          volumeMounts:
            - name: config
              mountPath: /app/config
      volumes:
        - name: config
          configMap:
            name: app-config-file

Your application reads:

// Load config from the mounted file
const fs = require('fs');
const config = JSON.parse(fs.readFileSync('/app/config/config.json', 'utf8'));

Benefits:

  • Structured config files (JSON, YAML, TOML)
  • Easier to read than dozens of env vars
  • Same pattern as local development
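
A further advantage of the volume approach: Kubernetes updates a mounted ConfigMap in place (by swapping a symlink inside the mount directory), so an app can pick up changes without a pod restart. A hedged sketch, assuming the mount path above:

// Watch the mount directory (not the file) because Kubernetes updates
// the volume by swapping a ..data symlink inside it.
const fs = require('fs');

const CONFIG_PATH = '/app/config/config.json';
let config = JSON.parse(fs.readFileSync(CONFIG_PATH, 'utf8'));

fs.watch('/app/config', () => {
  try {
    config = JSON.parse(fs.readFileSync(CONFIG_PATH, 'utf8'));
    console.log('Config reloaded');
  } catch (err) {
    console.error('Config reload failed; keeping previous config', err);
  }
});

Note: ConfigMaps consumed as environment variables still need the rollout restart shown earlier.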

"We need different images for security (prod has different libraries)!"

Response: This is a legitimate concern but orthogonal to config-on-deploy.

Solution:

  • Use multi-stage builds to create prod-optimized images
  • Include all necessary libraries in image
  • Control behavior via config, not different builds

Example:

# Dependency stage: install production dependencies only
FROM node:18 AS base
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

# Production image (slim base, no dev dependencies)
FROM node:18-slim
WORKDIR /app
COPY --from=base /app/node_modules ./node_modules
COPY . .

# Config injected at runtime
CMD ["node", "server.js"]

Use feature flags to toggle behavior:

# Dev: Enable debug features
ENABLE_DEBUG_ENDPOINTS=true

# Prod: Disable debug features
ENABLE_DEBUG_ENDPOINTS=false
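
In application code, the flag simply gates what gets registered at startup. A hypothetical Express sketch (the /debug/memory endpoint is illustrative):

// Debug endpoints ship in every build but only mount when the flag is on
const express = require('express');
const app = express();

if (process.env.ENABLE_DEBUG_ENDPOINTS === 'true') {
  app.get('/debug/memory', (req, res) => res.json(process.memoryUsage()));
}

app.listen(process.env.PORT || 3000);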

Implementation Guide: Migrating to Config-on-Deploy

Step 1: Identify Configuration

List all hardcoded config in your application:

# Find hardcoded values in code
grep -r "DATABASE_URL\|API_KEY\|LOG_LEVEL" src/

# Find ENV statements in Dockerfile
grep ENV Dockerfile

Output Example:

src/config.js: const DATABASE_URL = "postgres://localhost:5432/dev"
src/config.js: const LOG_LEVEL = "debug"
Dockerfile:    ENV DATABASE_URL=postgres://prod:5432/app

Step 2: Extract Config to Environment Variables

Before (config.js):

module.exports = {
  database: {
    url: "postgres://localhost:5432/dev", // Hardcoded
    maxConnections: 10
  },
  logging: {
    level: "debug"
  }
};

After (config.js):

module.exports = {
  database: {
    url: process.env.DATABASE_URL || "postgres://localhost:5432/dev",
    maxConnections: parseInt(process.env.MAX_CONNECTIONS || "10", 10)
  },
  logging: {
    level: process.env.LOG_LEVEL || "debug"
  }
};

Defaults for local development, environment variables for deployed environments.


Step 3: Create ConfigMap/Secret

# Create ConfigMap for non-sensitive config
kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=info \
  --from-literal=MAX_CONNECTIONS=100

# Create Secret for sensitive config
kubectl create secret generic app-secrets \
  --from-literal=DATABASE_URL=postgres://prod-db:5432/myapp

Step 4: Update Deployment to Inject Config

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
spec:
  replicas: 3
  template:
    spec:
      containers:
        - name: api
          image: myapp:1.2.3
          envFrom:
            - configMapRef:
                name: app-config # Inject all config
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: DATABASE_URL

Step 5: Remove Config from Dockerfile

Before:

ENV DATABASE_URL=postgres://prod:5432/app
ENV LOG_LEVEL=info
ENV MAX_CONNECTIONS=100

After:

# No ENV statements - all config provided at runtime

Step 6: Test Configuration Changes

# Update LOG_LEVEL from info to debug
kubectl patch configmap app-config -p '{"data":{"LOG_LEVEL":"debug"}}'

# Restart pods to pick up new config
kubectl rollout restart deployment/my-api

# Verify config applied
kubectl exec -it deployment/my-api -- env | grep LOG_LEVEL

Expected: Pods restart with LOG_LEVEL=debug in ~30 seconds.


Time Savings Calculator

Scenario: Team of 10 developers, each making 2 config changes per week.

Config-on-Build (Old Way)

  • Config change time: 15 minutes (rebuild + redeploy)
  • Changes per week: 10 developers × 2 changes = 20 changes
  • Time per week: 20 × 15 min = 300 minutes (5 hours)
  • Time per year: 5 hours/week × 52 weeks = 260 hours

Config-on-Deploy (New Way)

  • Config change time: 1 minute (update ConfigMap + restart)
  • Changes per week: 20 changes
  • Time per week: 20 × 1 min = 20 minutes
  • Time per year: 20 min/week × 52 weeks = 17 hours

Time Saved: 260 - 17 = 243 hours per year (6 work weeks!)

Additional Benefits:

  • Fewer CI/CD pipeline runs = lower infrastructure costs
  • Faster incident response (fix config in 1 min, not 15 min)
  • Reduced risk (same tested artifact across all environments)

Codiac's Hierarchical Configuration

Managing config across 100+ services and 5+ environments gets complex fast. Codiac simplifies this with hierarchical configuration inheritance.

How It Works:

Enterprise
├─ Environment: Production
│  ├─ LOG_LEVEL=info (set once, inherited by all)
│  └─ Cabinet: api-prod
│     ├─ REGION=us-west-2 (set once, inherited by all assets)
│     ├─ Asset: user-service (inherits LOG_LEVEL + REGION)
│     ├─ Asset: payment-service (inherits LOG_LEVEL + REGION)
│     └─ Asset: notification-service
│        └─ LOG_LEVEL=debug (override for this asset only)

Example:

# Set config at environment level (inherited by all)
codiac config set
# Select environment scope → prod → LOG_LEVEL → info

# Set config at cabinet level (inherited by all assets in cabinet)
codiac config set
# Select cabinet scope → api-prod → REGION → us-west-2

# Override for specific asset (exception-based config)
codiac config set
# Select asset scope → notification-service in api-prod → LOG_LEVEL → debug

Result:

  • Set LOG_LEVEL once → 100 services inherit automatically
  • Override only when needed (1 service out of 100)
  • Zero configuration drift
  • Full audit trail of all config changes

Learn more about Codiac's Dynamic Configuration →


Best Practices Summary

  1. Never hardcode config in Dockerfiles - Use ENV only for build-time settings (NODE_ENV=production)
  2. Use ConfigMaps for non-sensitive config - Database hosts, log levels, feature flags
  3. Use Secrets for sensitive config - Passwords, API keys, tokens
  4. Provide defaults for local development - process.env.VAR || "default-value"
  5. Use external secret managers for production - AWS Secrets Manager, Azure Key Vault
  6. Version control your config - Store ConfigMaps/Secrets in Git, use GitOps
  7. Use hierarchical config for scale - Set once at environment level, inherit everywhere
  8. Monitor config changes - Audit trail for compliance (who changed what, when)


Ready to stop rebuilding images for config changes? Try Codiac free to experience hierarchical configuration with automatic inheritance across all your services.