File Stores & Persistent Storage
Attach persistent storage to your assets for stateful workloads like databases, file uploads, and caches. Codiac integrates with cloud storage (AWS S3, Azure Blob) and Kubernetes persistent volumes.
What Are File Stores?
File stores provide persistent storage that survives pod restarts and can be shared across multiple pods. Unlike ephemeral container storage that's deleted when a pod terminates, file store data persists independently.
Business Value:
- Data persistence: Databases, uploads, and user data survive deployments and failures
- Scalability: Share storage across multiple pod replicas
- Cloud integration: Leverage managed storage (S3, Azure Blob) without managing infrastructure
- Backup & recovery: Cloud storage includes automatic backups and disaster recovery
Storage Types
| Type | Description | Best For | Examples |
|---|---|---|---|
| Kubernetes Volumes | Block storage attached to pods as mounted directories | Databases, logs, temporary files | PostgreSQL data directory, MongoDB storage |
| Cloud Object Storage | Scalable blob/object storage (S3, Azure Blob) | File uploads, backups, media | User uploads, document storage, images |
| Shared File Systems | Network file systems accessible by multiple pods | Shared assets, static content | Shared uploads, CMS files, assets |
How File Stores Work
Asset (Container)
↓
[Volume Mount]
↓
Persistent Volume Claim (PVC)
↓
Persistent Volume (PV)
↓
Cloud Storage (AWS EBS, Azure Disk, S3, etc.)
Workflow:
- Create file store definition (captures cloud storage configuration)
- Attach volume to asset (mount path inside container)
- Codiac creates PersistentVolumeClaim (PVC)
- Kubernetes binds PVC to cloud storage
- Container accesses storage at mount path
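From the container's point of view, the mounted volume is just a directory; no SDK is required. A minimal Node.js sketch of that idea (assuming a volume mounted at /data; the counter file is purely illustrative):
const fs = require('fs');
const path = require('path');
// /data is the mount path configured for the asset (assumed for this sketch)
const counterFile = path.join('/data', 'restart-count.txt');
// Because /data is backed by a PersistentVolume, this count survives pod restarts
let count = 0;
if (fs.existsSync(counterFile)) {
  count = parseInt(fs.readFileSync(counterFile, 'utf8'), 10) || 0;
}
fs.writeFileSync(counterFile, String(count + 1));
console.log(`Container started ${count + 1} time(s) with this volume attached`);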
Managing File Stores with CLI
Capturing File Store Configuration
Import existing cloud storage resources into Codiac:
cod filestore capture \
--provider aws \
--name my-s3-bucket \
--enterprise my-company
Expected Outcome:
- Codiac fetches S3 bucket configuration from AWS
- File store definition saved in enterprise
- Can now be attached to assets as volumes
Supported Providers:
- aws - AWS S3, EBS
- azure - Azure Blob Storage, Azure Disk
- gcp - Google Cloud Storage (GCS)
Creating a Volume for an Asset
Attach persistent storage to an asset:
cod asset volume create /data \
--asset postgres-db \
--cabinet prod \
--size 100Gi \
--storage-class standard
Parameters:
- /data - Mount path inside the container
- --size - Volume size (e.g., 10Gi, 100Gi, 1Ti)
- --storage-class - Storage tier (standard, ssd, premium)
Expected Outcome:
- PersistentVolumeClaim created with 100Gi capacity
- Volume mounted at /data inside the postgres container
- Data in /data persists across pod restarts
Example: Database with Persistent Storage
PostgreSQL with 500GB volume for data directory:
cod asset volume create /var/lib/postgresql/data \
--asset postgres \
--cabinet prod \
--size 500Gi \
--storage-class ssd
Result:
- PostgreSQL stores database files in /var/lib/postgresql/data
- Data persists across deployments, scaling, and pod failures
- Uses SSD-backed storage for better I/O performance
Example: File Upload Service
Node.js API with volume for user file uploads:
cod asset volume create /app/uploads \
--asset api-server \
--cabinet prod \
--size 200Gi \
--storage-class standard
Application Code:
const express = require('express');
const multer = require('multer');
const path = require('path');
// Files saved to /app/uploads live on the mounted volume
const uploadPath = '/app/uploads';
const app = express();
const upload = multer({ dest: uploadPath });
app.post('/upload', upload.single('file'), (req, res) => {
  // File persists even if the container restarts
  res.json({ path: path.join(uploadPath, req.file.filename) });
});
Viewing Attached Volumes
List volumes attached to an asset:
cod asset view postgres --cabinet prod
Expected Output:
Asset: postgres
Cabinet: prod
Volumes:
- Mount Path: /var/lib/postgresql/data
Size: 500Gi
Storage Class: ssd
Status: Bound
Deleting a Volume
Remove volume from asset (data is retained in cloud storage):
cod asset volume delete /var/lib/postgresql/data \
--asset postgres \
--cabinet prod
Warning: This removes the volume attachment but does not delete the underlying data. The PersistentVolume and cloud storage remain for data recovery.
Forgetting a File Store
Remove file store definition from Codiac:
cod filestore forget my-s3-bucket \
--enterprise my-company
Result:
- File store definition removed from Codiac
- Does not delete cloud resources (S3 bucket, Azure storage account)
- Use this when migrating storage or cleaning up unused definitions
Storage Classes
Different storage tiers offer trade-offs between performance and cost:
| Storage Class | Performance | Cost | Best For |
|---|---|---|---|
| standard | Medium | Low | General-purpose, logs, backups |
| ssd / premium | High | High | Databases, high-I/O workloads |
| cold / archive | Low | Very Low | Long-term backups, archives |
Example: Choosing Storage Class
# High-performance database
cod asset volume create /data \
--asset mongodb \
--storage-class premium \
--size 1Ti
# Log storage (lower cost)
cod asset volume create /var/log \
--asset app-server \
--storage-class standard \
--size 50Gi
Best Practices
1. Size Volumes Appropriately
Oversizing:
- ✅ Prevents out-of-space errors
- ❌ Wastes money on unused storage
Recommendation:
- Start with 2x current data size
- Monitor usage and resize as needed
2. Use Appropriate Storage Classes
SSD/Premium:
- Databases (PostgreSQL, MongoDB, MySQL)
- High-transaction systems
- Real-time applications
Standard:
- Application logs
- File uploads
- Temporary caches
3. Always Mount Databases with Volumes
Without Volume:
Container Dies → All Data Lost ❌
With Volume:
Container Dies → Data Persists → New Container Reattaches → Data Intact ✅
4. Set Backup Policies
Cloud storage includes snapshot and backup features:
- Enable automated snapshots for critical data
- Set retention policies (e.g., 30-day retention)
- Test restore procedures regularly
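Cloud snapshots cover the volume itself; many teams also keep application-level dumps on a schedule. A minimal Node.js sketch of that idea (assumes PostgreSQL with connection details in the standard PG* environment variables, a /backups mount path, and a 30-day retention window; all of these are illustrative):
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');
// Assumed backup location on a mounted volume and retention window
const backupDir = '/backups';
const retentionDays = 30;
// Dump the database; pg_dump reads PGHOST/PGUSER/PGPASSWORD/PGDATABASE from the environment
const stamp = new Date().toISOString().slice(0, 10);
execFileSync('pg_dump', ['--file', path.join(backupDir, `db-${stamp}.sql`)]);
// Enforce retention: delete dumps older than the retention window
const cutoff = Date.now() - retentionDays * 24 * 60 * 60 * 1000;
for (const name of fs.readdirSync(backupDir)) {
  const file = path.join(backupDir, name);
  if (fs.statSync(file).mtimeMs < cutoff) fs.unlinkSync(file);
}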
Managing File Stores in Web UI
Configure persistent storage through Codiac's visual interface.
Step 1: Capture Cloud Storage
Import existing cloud storage resources:
- Open Codiac web UI at https://app.codiac.io
- Navigate to Enterprise Settings
- Click File Stores tab
- Click Capture File Store
- Select cloud provider (AWS, Azure, GCP)
- Authenticate with cloud provider
- Select storage resources to import
Expected Outcome:
- Cloud storage resources imported into Codiac
- Available for attaching to assets
Step 2: Attach Volume to Asset
- Navigate to Asset in cabinet
- Click Storage or Volumes tab
- Click Add Volume
- Configure volume settings:
Mount Path:
- Directory path inside the container (e.g., /data, /app/uploads)
- Must be an absolute path starting with /
Volume Size:
- Storage capacity (GB or TB)
- Can be resized later (expansion only)
Storage Class:
- Performance tier (standard, SSD, premium)
- Select based on workload requirements
Access Mode:
- ReadWriteOnce - Single pod can write (most common)
- ReadWriteMany - Multiple pods can write (shared storage)
- ReadOnlyMany - Multiple pods can read
Step 3: Save and Deploy
- Click Save to persist volume configuration
- Click Deploy to apply changes
- Monitor deployment progress
Expected Outcome:
- PersistentVolumeClaim created in Kubernetes
- Volume mounted to container
- Data persists across restarts
Viewing Volume Status
- Navigate to asset Storage tab
- See list of attached volumes:
- Mount path
- Size and usage
- Storage class
- Bound status
Status Indicators:
- ✅ Bound - Volume successfully attached
- ⏳ Pending - Waiting for storage provisioning
- ❌ Failed - Error attaching volume (check logs)
Editing Volume Configuration
- Navigate to asset Storage tab
- Click Edit on volume
- Modify size (expansion only) or storage class
- Click Save and Deploy
Note: Volume size can only be increased, not decreased. Shrinking requires data migration.
Removing Volume
- Navigate to asset Storage tab
- Click Remove on volume
- Confirm removal
- Deploy changes
Warning: Removing volume detaches it from container but does not delete data. PersistentVolume remains in cloud for recovery.
Common File Store Patterns
Pattern 1: PostgreSQL Database
cod asset volume create /var/lib/postgresql/data \
--asset postgres \
--size 500Gi \
--storage-class ssd
Why: Database files need SSD-class storage to sustain high IOPS.
Pattern 2: Redis Cache
cod asset volume create /data \
--asset redis \
--size 50Gi \
--storage-class ssd
Why: Redis is in-memory, but its persistence files (RDB/AOF) benefit from SSD-backed storage.
Pattern 3: File Upload API
cod asset volume create /app/uploads \
--asset api-server \
--size 200Gi \
--storage-class standard
Why: User uploads can use standard storage to save costs.
Pattern 4: Shared Assets (CMS)
cod asset volume create /var/www/uploads \
--asset cms \
--size 100Gi \
--storage-class standard \
--access-mode ReadWriteMany
Why: Multiple CMS pods need shared access to uploaded files.
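Because every replica writes to the same directory with ReadWriteMany, filenames should be made collision-safe. One common approach, sketched in Node.js (the mount path and naming scheme are illustrative, not part of Codiac):
const fs = require('fs');
const os = require('os');
const path = require('path');
const crypto = require('crypto');
// Shared upload directory mounted into every CMS replica (assumed mount path)
const sharedDir = '/var/www/uploads';
// Prefix with the pod hostname and a random ID so replicas never overwrite each other
function saveUpload(originalName, buffer) {
  const safeName = `${os.hostname()}-${crypto.randomUUID()}-${path.basename(originalName)}`;
  const target = path.join(sharedDir, safeName);
  fs.writeFileSync(target, buffer);
  return target;
}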
Cloud Storage Integration
AWS S3 Integration
Use Case: Object storage for media, backups, user uploads.
Setup:
- Create S3 bucket in AWS
- Capture bucket in Codiac: cod filestore capture --provider aws --name my-bucket
- Use S3 SDK in application code to access the bucket
Application Access:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
// Upload to the S3 bucket captured in Codiac (inside an async function)
// fileBuffer holds the file contents, e.g. from an upload handler
await s3.putObject({
  Bucket: 'my-bucket',
  Key: 'uploads/file.pdf',
  Body: fileBuffer
}).promise();
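To let users download an uploaded object without proxying it through the API, a presigned URL is a common follow-up (sketch using the same aws-sdk v2 client; the one-hour expiry is an arbitrary choice):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
// Generate a time-limited download link for the uploaded object
const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-bucket',
  Key: 'uploads/file.pdf',
  Expires: 3600 // link validity in seconds
});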
Azure Blob Storage Integration
Use Case: Scalable object storage for files, backups.
Setup:
- Create Azure Storage Account
- Capture in Codiac: cod filestore capture --provider azure --name my-storage-account
- Use Azure SDK in application code
Application Access:
const { BlobServiceClient } = require('@azure/storage-blob');
const blobService = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE);
const container = blobService.getContainerClient('uploads');
// Upload the buffer as a block blob (inside an async function)
await container.getBlockBlobClient('file.pdf').uploadData(fileBuffer);
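Reading the object back is symmetric; in the Node.js SDK, downloadToBuffer fetches the blob into memory (sketch, continuing from the same container client as above, inside an async function):
const data = await container.getBlockBlobClient('file.pdf').downloadToBuffer();
console.log(`Downloaded ${data.length} bytes`);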
Troubleshooting
Problem: Volume stuck in "Pending" status
Possible Causes:
- Storage class not available in cluster
- Insufficient cloud quota
- Invalid size or configuration
Solutions:
# Check available storage classes
kubectl get storageclass
# Describe PVC for error details
kubectl describe pvc my-volume -n prod
# Common fix: Ensure storage class exists
kubectl get sc | grep ssd
Problem: "Out of space" errors
Solutions:
- Resize volume (increase size): cod asset volume resize /data --size 1Ti
- Clean up old data
- Add log rotation for application logs
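Before resizing, it helps to see what is actually consuming the volume. A small Node.js sketch that reports the largest files under a mount path (/data is just an example):
const fs = require('fs');
const path = require('path');
// Recursively collect file sizes under the mounted volume
function listFiles(dir, out = []) {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const full = path.join(dir, entry.name);
    if (entry.isDirectory()) listFiles(full, out);
    else if (entry.isFile()) out.push({ file: full, size: fs.statSync(full).size });
  }
  return out;
}
// Print the ten largest files so you know what to clean up or rotate
const top = listFiles('/data').sort((a, b) => b.size - a.size).slice(0, 10);
for (const { file, size } of top) {
  console.log(`${(size / 1024 / 1024).toFixed(1)} MB  ${file}`);
}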
Problem: Poor database performance
Cause: Using standard storage class instead of ssd.
Solution: Migrate to SSD storage:
- Create new volume with SSD storage class
- Backup database
- Restore to new volume
- Delete old volume
FAQ
Q: Can I attach the same volume to multiple assets?
A: Only if the volume has the ReadWriteMany access mode. Most volumes are ReadWriteOnce (single-pod write access).
Q: What happens to data if I delete an asset?
A: Volume remains in cloud storage. Data is not deleted unless you explicitly delete the PersistentVolume.
Q: Can I resize a volume?
A: Yes, you can expand volumes (increase size). Shrinking is not supported and requires data migration.
Q: Do volumes work with autoscaling?
A: Yes for ReadWriteMany volumes. ReadWriteOnce volumes can only be used by one pod at a time (not suitable for horizontal scaling).
Q: How do I backup volume data?
A: Use cloud-native snapshots (EBS snapshots, Azure disk snapshots) or application-level backups (pg_dump for PostgreSQL, mongodump for MongoDB).
Q: What's the difference between volumes and file stores?
A: File stores are cloud storage definitions (S3 buckets, etc.). Volumes are mounted storage attached to containers (PVCs). File stores can be used for object storage, while volumes provide block storage.
Need help with storage configuration? Contact Support or check our cloud integration guides.