VMware Tanzu Migration Guide
Broadcom's acquisition of VMware changed everything. Price increases of 800-1,500%, forced product bundling, and aggressive licensing terms have teams scrambling for alternatives. Meanwhile, Tanzu still requires dedicated platform teams, tribal knowledge, and configuration sprawl across Helm charts, Kustomize overlays, and SOPS-encrypted secrets.
There's a better way. This guide walks you through migrating from Tanzu to Codiac—a platform built for platform and infrastructure teams who want Kubernetes to be repeatable and boring, so they can focus on architecture, security, and scale. Developers deploy with guided CLI commands or Web UI clicks. Platform teams still have full access to kubectl and standard Kubernetes when needed—but no longer have to use them for day-to-day operations.
Who This Guide Is For
Teams leaving Tanzu because:
- Broadcom's 800-1,500% price increases have made Tanzu cost-prohibitive
- vSphere 7 end-of-life (October 2025) forces expensive upgrades or migrations
- You need a dedicated platform team just to keep things running
- Developers avoid Kubernetes and work around the platform instead of with it
- Cluster upgrades require multi-quarter planning cycles
- Tanzu Spaces abstractions still require extensive training and tribal knowledge
- Configuration still sprawls across Helm charts, Kustomize overlays, and SOPS-encrypted files
Teams adding Codiac to their current stack because:
- You want to simplify operations without ripping out existing infrastructure
- You need developer self-service without giving everyone kubectl access
- You're managing multiple clusters and configuration drift is a constant problem
- You want faster, safer cluster upgrades
- You need complete environment reproducibility—anyone should be able to recreate any environment
- Observability shouldn't require deploying additional infrastructure
Either way, the migration path is incremental. Start with one non-production cluster, prove the value, then expand.
The Broadcom Reality Check
Since Broadcom's acquisition of VMware, the landscape has changed dramatically:
| Change | Impact |
|---|---|
| 800-1,500% price increases | Customers report licensing costs multiplied overnight (CISPE complaint) |
| 72-core minimum purchases | Small teams pay for capacity they don't need |
| vSphere 7 EOL: October 2025 | Forced upgrades or migrations on Broadcom's timeline |
| 20% late renewal penalties | Pressure tactics for quick decisions |
| Product consolidation | Features bundled into expensive SKUs you may not need |
VMware Tanzu's market share has dropped from 16.4% to 10.8% as teams look for alternatives (PeerSpot). The question isn't whether to evaluate alternatives—it's which one.
What Developers Actually Want
One telco's platform engineering team summarized it perfectly:
"A platform for developers to use Kubernetes without needing to know Kubernetes."
Tanzu promised this. The reality?
- Developers still need to understand Spaces, Profiles, Traits, and Capabilities
- Someone still maintains Helm charts, Kustomize overlays, and SOPS-encrypted secrets
- Tribal knowledge accumulates—new engineers take weeks to get productive
- Configuration sprawl means "it works in staging" doesn't guarantee production success
Codiac delivers what Tanzu promised:
| Developer Task | Tanzu | Codiac |
|---|---|---|
| Deploy to staging | Learn Spaces API or ask DevOps | codiac deploy → select from menu |
| Check logs | kubectl access + training | Click "Logs" in Web UI |
| Roll back | Find the right Git commit, understand Flux | codiac rollback → select version |
| Get a new environment | Submit ticket, wait days | Clone existing environment in minutes |
Developers don't need YAML or kubectl. Platform teams define guardrails once—then get out of the ticket queue.
Why Abstractions Alone Don't Solve Complexity
Many platform engineering tools promise "developer abstractions" to hide Kubernetes complexity. Tanzu's Spaces feature is a good example—it aims to give developers a simpler interface. But there's a fundamental difference between hiding complexity and eliminating it.
The Hidden Complexity Problem
Consider what it takes to deploy a straightforward web service in a typical enterprise Kubernetes environment:
| Component | What Teams Must Manage |
|---|---|
| CI/CD Pipeline | Bamboo/Jenkins specs, build scripts, test configurations |
| Container Registry | Harbor/ECR policies, image scanning, retention policies |
| Helm Charts | Chart.yaml, values.yaml, templates, library chart dependencies |
| Kustomize Overlays | Base configs, per-environment overlays, patches |
| Secrets Management | SOPS encryption, key management, secrets-override files |
| GitOps | Flux/ArgoCD configs, sync policies, health checks |
| Environment Configs | Per-environment values, encrypted env files, certificates |
Even with Tanzu Spaces providing a "simple" developer interface, someone must still:
- Maintain Helm chart libraries and understand template inheritance
- Write and maintain Kustomize overlays for each environment
- Manage SOPS-encrypted secrets across multiple files
- Coordinate between Bamboo builds and Flux deployments
- Understand the relationship between Spaces, Profiles, Capabilities, and Traits
The abstraction shifts who deals with the complexity—it doesn't remove it.
How Codiac Is Different
Codiac takes a fundamentally different approach: it writes clean, standard Kubernetes objects under the hood. No custom operators. No proprietary CRDs. No abstraction layers that hide sprawling configuration.
| Tanzu/Traditional Approach | Codiac Approach |
|---|---|
| Developers need Tanzu training (Spaces, Profiles, Traits) | Developers never touch YAML—deploy via guided CLI or Web UI |
| Platform team maintains Helm charts, Kustomize, GitOps | Codiac generates clean K8s manifests automatically |
| Configuration sprawls across 5+ repos/files | Deploy-time config injection—same image, any environment |
| Tribal knowledge required to understand the stack | Perfect Memory—anyone can reproduce any environment on demand |
| Observability requires separate tooling setup | Logs stream to UI/CLI out of the box |
| Operators and CRDs add cluster complexity | Standard K8s objects, nothing proprietary in your cluster |
The result: Codiac takes the repetitive, low-value work out of Kubernetes operations—for platform teams and developers alike. When something goes wrong, you're debugging standard Kubernetes, not tracing through layers of Helm templates, Kustomize patches, and operator reconciliation loops.
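To make the "clean, standard Kubernetes objects" claim concrete, here is a hypothetical sketch of the kind of object a deployed asset resolves to. The names, labels, and image path are invented for illustration—this is not Codiac's literal output, just the point that the result is a plain Deployment any kubectl user can read:

```yaml
# Illustrative only: an ordinary apps/v1 Deployment, no CRDs or operators.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: registry.example.com/my-api:1.4.2   # placeholder registry/tag
          ports:
            - containerPort: 8080
```

Because nothing proprietary sits between you and objects like this, `kubectl get deployment my-api -o yaml` shows exactly what is running when you need to drop down to raw Kubernetes.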
Codiac treats Helm charts as first-class assets, just like containers. If you have existing Helm charts that work well, keep using them. Codiac deploys them with the same benefits: versioned, remembered, instant rollback, deploy-time configuration. You get the developer experience improvements without rewriting anything.
Turnkey Observability
One often-overlooked benefit of Codiac: logs stream directly to the UI and CLI without configuration.
The Traditional Observability Tax
In most Kubernetes environments, getting visibility into your applications requires:
- Deploying log aggregation (Fluentd, Fluent Bit, Filebeat)
- Setting up a log backend (ELK, Loki, CloudWatch)
- Configuring retention policies and storage
- Building dashboards and alerts
- Integrating with your deployment workflow
This stack adds complexity to your clusters, creates another moving part that can break, and contributes to configuration drift. When you do need these tools, Codiac keeps them clean: add them once to your infrastructure stack and they deploy consistently across every cluster—versioned, repeatable, with no per-cluster configuration sprawl.
Codiac's Built-in Visibility
With Codiac, application logs are immediately available:
# Stream logs from any asset
codiac logs my-api
# View logs in the web UI
# No configuration required—just click the asset
For teams that want more:
- Codiac integrates cleanly with Datadog, New Relic, or any other observability platforms
- The integration happens at the Codiac level, not per-cluster
- Your clusters stay simple and drift-free
This means developers get immediate visibility without platform teams adding observability infrastructure to every cluster.
What You Get
Concrete outcomes from migration:
| Challenge | Before | After |
|---|---|---|
| Cluster upgrades | Multi-week planning, maintenance windows, rollback anxiety | Blue/green migration in 30 minutes, instant rollback |
| Developer access | Ticket to DevOps, wait 2-3 days for environment changes | Self-service in web UI, changes in minutes |
| Configuration management | YAML sprawl, environment-specific manifests, drift | Single source of truth, deploy-time configuration |
| Multi-cluster consistency | Manual syncing, divergent configs, tribal knowledge | Fleet-wide snapshots, one-click propagation |
| Cost visibility | Estimate based on node counts | Actual usage per environment, automated scheduling |
Bottom line:
- Developers ship faster (self-service, no tickets)
- Platform team focuses on strategy, not firefighting
- Cluster upgrades go from "project" to "Tuesday"
- You stop paying for idle dev/staging environments
Migration Approaches
Option A: Full Migration (Replace Tanzu)
Best for: Teams whose Tanzu licenses are expiring or who want to eliminate VMware dependency entirely.
What changes:
- Codiac becomes your deployment and cluster management layer
- Keep your existing Terraform/IaC for cluster provisioning (or use Codiac's)
- Tanzu components (TBS, TAP, TMC) replaced with Codiac equivalents
Timeline: 2-4 weeks for first production workload
Option B: Incremental Addition (Codiac + Existing Stack)
Best for: Teams who want to reduce complexity without a full rip-and-replace.
What changes:
- Codiac manages deployments to your existing clusters
- Keep using Tanzu for what's working
- Gradually shift workloads as you see value
Timeline: 1 week for first workload, expand from there
We recommend Option B. Prove value before committing to full migration.
Real-World Onboarding Timeline
Most Kubernetes migrations take months. Codiac is different. With the right credentials in hand, you can have 20+ container assets running with ingress and secrets in about 2 hours.
Here's what that actually looks like:
Sample Timeline: Dev Environment (20 Assets)
| Step | Activity | Time |
|---|---|---|
| 0 | Try the Sandbox (optional) — Get familiar with Codiac concepts without touching your infrastructure | 30 min |
| 1 | Create or connect a cluster — New cluster, existing cluster, or clone an existing one | 30-60 min |
| 2 | Tell Codiac about your infrastructure — Cloud provider, container registry, images/Helm charts | 5-15 min |
| 3 | Define assets — Use codiac cluster export to auto-generate, or create manually | 5 min (export) or 5 min/asset (manual) |
| 4 | Configure assets — Environment variables, secrets, resource limits | 2-60 min/asset (varies by complexity) |
| 5 | Set up ingress — Point DNS/subdomain to cluster, configure host mapping | 10-15 min |
| 6 | Deploy — Watch Codiac track everything automatically | 5 min |
Total for 20 assets: ~2 hours (with cluster export) to ~4 hours (manual asset creation)
If you're migrating from an existing cluster, codiac cluster export reads your running workloads and generates a script of CLI commands that hydrates Codiac with your images, registries, ports, and environment variables. What would take hours of manual entry takes 5 minutes.
codiac cluster export --namespace my-app --output setup-script.sh
Prerequisites (Get These Ready First)
Before you start the clock, make sure you have:
| Credential | Why You Need It |
|---|---|
| Cloud provider admin access | Create clusters, assign IPs, configure networking |
| Container registry credentials | Pull images (ACRPull, ECR access, etc.) |
| Secrets store access | AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager |
| DNS control | Point subdomain to cluster ingress |
With these ready, you eliminate the back-and-forth that typically delays migrations.
Step-by-Step Migration
The Codiac CLI guides you through each command interactively. You don't need to memorize flags—just run the command and answer the prompts. The examples below show both approaches.
Step 0: Try the Sandbox (Optional)
Time: 30 minutes
Goal: Get familiar with Codiac concepts without risking your infrastructure.
Visit sandbox.codiac.io to:
- Deploy your first asset
- See how cabinets and environments work
- Experience the CLI and Web UI
- Understand Perfect Memory and rollbacks
This is optional but recommended if you want to build confidence before touching production infrastructure.
Step 1: Connect Your First Cluster
Time: 30-60 minutes (with proper credentials)
Goal: Get Codiac connected to one non-production cluster.
Choose your approach:
| Approach | When to Use | Time |
|---|---|---|
| Create new cluster | Clean start, no legacy baggage | 30-45 min |
| Connect existing cluster | Keep current infra, add Codiac management | 15-20 min |
| Clone existing cluster | Replicate prod setup to dev/staging | 45-60 min |
Prerequisites:
- kubectl access with admin permissions
- Codiac account (free trial available)
Steps:
- Install the Codiac CLI:
# Requires Node.js v20.13.1+
npm install -g @codiac.io/codiac-cli
# Verify installation
codiac version
- Log in to Codiac and your cloud provider:
codiac login
codiac csp login
The CLI will open your browser for authentication and guide you through provider selection.
- Capture your existing cluster:
codiac cluster capture
The CLI will prompt you for:
- Cloud provider (AWS, Azure, GCP)
- Account/subscription ID
- Region
- Cluster name
Scripted mode (for CI/CD pipelines)
# AWS
codiac cluster capture -n my-dev-cluster -p aws -s 123456789012 -l us-east-1
# Azure
codiac cluster capture -n my-dev-cluster -p azure -s your-subscription-id -l eastus -g your-resource-group
- Verify connection:
codiac cluster list
What just happened:
- Codiac installed a lightweight agent in your cluster
- Your cluster is now visible in the Codiac web UI
- No changes to existing workloads
Step 2: Tell Codiac About Your Infrastructure
Time: 5-15 minutes
Goal: Connect your container registry and define what you want to deploy.
Steps:
- Add your container registry:
codiac registry add
The CLI will prompt for registry URL and credentials (ACR, ECR, Docker Hub, Harbor, etc.).
- Define your assets (choose one approach):
Option A: Auto-generate from existing cluster (recommended)
codiac cluster export --namespace my-app
This reads your running workloads and generates CLI commands for all your images, ports, and environment variables. Review the output, then run it.
Option B: Create assets manually
codiac asset create
The CLI will guide you through:
- Selecting asset type (container or Helm chart)
- Choosing the image or chart
- Naming the asset
- Configuring ports
Step 3: Create Cabinets and Deploy
Time: 10-20 minutes
Goal: Create a logical grouping for your services and deploy.
Steps:
- Create a cabinet:
codiac cabinet create
The CLI will prompt you for the cabinet name and environment.
Scripted mode
codiac cabinet create my-app -e dev
- Deploy your assets:
codiac deploy
The CLI will prompt you to select the asset, version, and target cabinet.
What just happened:
- Your workloads are now tracked in Codiac
- You can view them in the web UI at app.codiac.io
- Every deployment creates a snapshot for instant rollback
Step 4: Configure Your Assets
Time: 2-60 minutes per asset (varies by complexity)
Goal: Set environment variables, secrets, and resource limits.
Steps:
- Configure deploy-time variables:
codiac config set
The CLI will prompt you to select:
- Asset or cabinet scope
- Environment
- Configuration key and value
Scripted mode
codiac config set -a my-api -e dev --setting DATABASE_URL --value "postgres://dev-db:5432/myapp"
codiac config set -a my-api -e staging --setting DATABASE_URL --value "postgres://staging-db:5432/myapp"
- Connect secrets store (optional):
codiac secrets connect
Integrates with AWS Secrets Manager, Azure Key Vault, or GCP Secret Manager.
Set common configs at the environment or cabinet level—they inherit down to all assets. You only configure exceptions per-asset, not every value for every service.
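The inheritance described above can be pictured as a simple lookup chain: asset-level values override cabinet-level values, which override environment-level values. This bash sketch models that override order only—it is an illustration, not Codiac's implementation:

```shell
#!/usr/bin/env bash
# Illustrative config-resolution sketch (not Codiac internals).
declare -A env_cfg=( [LOG_LEVEL]="info" [DATABASE_URL]="postgres://dev-db:5432/myapp" )
declare -A cabinet_cfg=( [LOG_LEVEL]="debug" )   # cabinet overrides one key
declare -A asset_cfg=( )                         # this asset overrides nothing

resolve() {
  local key=$1
  # Most specific scope wins; fall through to broader scopes when unset.
  echo "${asset_cfg[$key]:-${cabinet_cfg[$key]:-${env_cfg[$key]}}}"
}

resolve LOG_LEVEL      # → debug (cabinet overrides environment)
resolve DATABASE_URL   # → postgres://dev-db:5432/myapp (inherited from environment)
```

The practical consequence is the tip above: define shared values once at the broadest scope that applies, and only write per-asset entries for genuine exceptions.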
Step 5: Set Up Ingress
Time: 10-15 minutes
Goal: Route traffic to your services.
Steps:
- Point DNS to your cluster: create an A record or CNAME pointing your subdomain to the cluster's ingress IP, or use a wildcard such as *.dev.yourcompany.com
- Configure host mapping in Codiac:
codiac ingress set
The CLI guides you through mapping hostnames to assets.
- Verify routing:
curl https://my-api.dev.yourcompany.com/health
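As a concrete illustration of the DNS step, a single wildcard record maps every dev hostname to the cluster's ingress. This is a generic zone-file sketch—the IP is a documentation placeholder, and whether your ingress exposes an IP or a load-balancer hostname depends on your cloud provider:

```
; Illustrative zone-file entries (203.0.113.10 is a reserved documentation IP)
*.dev.yourcompany.com.        300  IN  A      203.0.113.10
; or, if your ingress exposes a load-balancer hostname instead of an IP:
; my-api.dev.yourcompany.com. 300  IN  CNAME  lb-1234.elb.example.com.
```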
Step 6: Enable Developer Self-Service
Time: 15-30 minutes
Goal: Let developers deploy without kubectl access or DevOps tickets.
Steps:
- Invite team members:
codiac auth user invite
The CLI will prompt for email and role assignment.
Scripted mode
codiac auth user invite -e developer@company.com -r developer
- Set up additional environments:
codiac environment create
Run this for each environment (dev, staging, prod). The CLI guides you through naming and cluster assignment.
What developers can now do (via web UI or CLI):
- Deploy to dev/staging without kubectl
- View logs and pod status
- Rollback to previous versions
- Promote from dev → staging
- Clone environments for testing
What they can't do (without elevated permissions):
- Access production without approval
- Modify cluster-level resources
- See secrets in plaintext
Beyond Day One
Once you're running, here's what comes next:
Blue/Green Cluster Upgrades
Goal: Upgrade clusters without maintenance windows or rollback anxiety.
The old way (Tanzu/manual):
- Schedule maintenance window
- Pray during in-place upgrade
- Debug issues in production
- Multi-week recovery if something breaks
The Codiac way:
- Provision new cluster with target version
- Deploy workloads via snapshot
- Validate with production traffic (canary)
- Cut over when ready
- Instant rollback if needed
Steps:
- List your current snapshots:
codiac snapshot list
Codiac automatically creates snapshots on each deployment—you likely already have rollback points.
- Provision new cluster with your IaC, then capture it:
codiac cluster capture
Select the newly provisioned cluster when prompted.
- Deploy to new cluster from snapshot:
codiac snapshot deploy
The CLI will prompt you to:
- Select a snapshot version
- Choose the target cabinet/cluster
- Validate:
- Run smoke tests against new cluster
- Route canary traffic (10%) to new cluster via your ingress/DNS
- Monitor for errors
- Cut over or rollback:
- Update DNS/ingress to point to new cluster
- If issues arise, simply point back to old cluster
- Destroy old cluster when confident:
codiac cluster destroy
Result: Cluster upgrade with zero downtime, instant rollback capability.
Reduce Costs with Zombie Mode
Goal: Stop paying for idle dev/staging environments.
The problem:
- Dev environments run 24/7 but are used 40 hours/week
- You're paying for 128 hours/week of idle compute
- That's 76% waste
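The arithmetic behind those numbers is simple enough to sanity-check against your own schedule (adjust used_hours to match your team's actual working hours):

```shell
# Idle-compute arithmetic for an always-on dev environment.
hours_per_week=168                                  # 24 x 7
used_hours=40                                       # typical working hours
idle_hours=$(( hours_per_week - used_hours ))       # 128
waste_pct=$(( idle_hours * 100 / hours_per_week ))  # 76
echo "${idle_hours} idle hours/week (~${waste_pct}% waste)"
```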
Steps:
- Register for Zombie Mode:
Visit app.codiac.io/zombie/register and provide:
- Cluster name
- Namespaces to manage
- Schedule template (Demo, Nights & Weekends, or Blackout)
- Install the Zombie Mode agent:
helm install zombie-scheduler oci://ghcr.io/codiac-io/zombie-scheduler \
--namespace zombie-ns \
--set token="YOUR_REGISTRATION_TOKEN" \
--create-namespace
- Monitor savings:
- Dashboard at app.codiac.io/zombie shows actual vs projected costs
- Typical savings: 60-70% on non-production
What happens:
- Dev environments scale to zero on schedule (e.g., 6pm Friday)
- Automatically wake on schedule (e.g., 7am Monday)
- Developers can manually wake if needed (2-minute startup)
Mapping Tanzu Concepts to Codiac
| Tanzu Concept | Codiac Equivalent | Notes |
|---|---|---|
| Tanzu Application Platform (TAP) | Codiac Platform | Developer self-service, supply chain automation |
| Tanzu Build Service (TBS) | Your existing CI + Codiac | Codiac deploys artifacts, doesn't build them |
| Tanzu Mission Control (TMC) | Codiac Fleet Management | Multi-cluster visibility and management |
| Tanzu Kubernetes Grid (TKG) | Any Kubernetes + Codiac | Codiac is cluster-agnostic (EKS, AKS, GKE, on-prem) |
| Tanzu Service Mesh | Your existing mesh + Codiac | Codiac integrates with Istio, Linkerd, etc. |
| Workload clusters | Codiac Clusters | Connected and managed through Codiac |
| Spaces/Namespaces | Cabinets + Environments | Logical groupings with RBAC |
| Supply chains | Codiac + your CI/CD | Simpler pipeline, fewer steps |
Capability Comparison
| Capability | Tanzu Platform | Codiac |
|---|---|---|
| Developer Self-Service | Spaces provide abstraction, but developers still need Tanzu training (Profiles, Traits, Capabilities) | No YAML, no kubectl—guided CLI prompts or Web UI clicks |
| Configuration Management | Helm + Kustomize + SOPS across multiple files and repos | Deploy-time config injection—same image runs everywhere |
| Multi-Cluster Consistency | TMC policies + manual sync + drift detection | Fleet-wide snapshots with one-click propagation |
| Cluster Upgrades | In-place upgrades with maintenance windows, multi-quarter planning | Blue/green migration in 30 minutes, instant rollback |
| Environment Reproducibility | Depends on GitOps discipline and tribal knowledge | Perfect Memory—any engineer reproduces any environment |
| Secrets Management | SOPS encryption, manual key management, scattered files | Integrated with AWS Secrets Manager, Azure Key Vault, GCP Secret Manager |
| Rollback | Find correct Git commit, understand Flux reconciliation | codiac rollback → select version → done |
| Cost Optimization | Manual scaling or third-party tools | Zombie Mode: Environments become schedulable and disposable—waste disappears naturally (50-70% typical savings) |
| Observability | Deploy ELK/Prometheus/Loki stack per cluster | Logs stream to UI/CLI immediately—zero setup |
| Debugging | Trace through Helm templates → Kustomize patches → operator reconciliation | Debug standard Kubernetes—that's it |
The "Perfect Memory" Difference
Enterprise Kubernetes environments often suffer from state drift and tribal knowledge. Even with GitOps and proper tooling, the reality is:
- New team members ask "how was this environment set up?"
- Production differs from staging in undocumented ways
- Disaster recovery depends on finding the right person
- Rollbacks require understanding multiple systems
Codiac's perfect memory means complete state capture at every deployment:
| Scenario | Traditional Approach | Codiac Approach |
|---|---|---|
| New engineer joins | Days/weeks shadowing to understand setup | Clone any environment in minutes |
| Production incident | Check logs, compare configs, trace changes | Rollback to known-good snapshot instantly |
| Environment request | Ticket to DevOps, wait for manual setup | Self-service clone, ready in minutes |
| Compliance audit | Gather configs from multiple sources | Complete audit trail in one place |
| Disaster recovery | Hope your runbooks are current | Restore from snapshot on any cluster |
What About My Existing Tools?
Codiac works with your current stack:
| Tool | Integration |
|---|---|
| Terraform/Pulumi | Keep using for cluster provisioning; Codiac manages what runs on clusters |
| ArgoCD/Flux | Can coexist; many teams migrate deployment to Codiac, keep GitOps for infra |
| Helm | Helm charts become Codiac assets—versioned, remembered, rollback-ready. Keep your charts, get better DX |
| Jenkins/GitHub Actions | Call Codiac CLI from your pipelines |
| Datadog/New Relic | Codiac integrates for monitoring and alerting |
| Vault/AWS Secrets Manager | Codiac pulls secrets from your existing secret store |
You don't have to rip and replace. Add Codiac incrementally.
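For example, the Jenkins/GitHub Actions row above can be as simple as invoking the CLI from a pipeline step after your image is pushed. The workflow below is an illustrative GitHub Actions sketch; the non-interactive authentication step is an assumption, not documented Codiac behavior—use whatever CI login mechanism Codiac supports:

```yaml
# Hypothetical CI sketch: deploy via the Codiac CLI from a pipeline.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-node@v4
        with:
          node-version: "20.13.1"   # CLI requires Node.js v20.13.1+
      - run: npm install -g @codiac.io/codiac-cli
      # Authenticating in CI is assumed here; confirm the supported
      # non-interactive login flow in Codiac's documentation.
      - run: codiac deploy   # use scripted-mode flags rather than prompts in CI
```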
Common Migration Questions
Q: Do I need to redeploy my applications?
No. Codiac imports existing deployments. Your workloads keep running; Codiac starts managing them.
Q: What if I have custom Tanzu extensions?
Codiac works with standard Kubernetes. Custom CRDs continue to work. If you have TAP-specific supply chains, you'll simplify those to standard CI/CD + Codiac deployment.
Q: How long does migration take?
| Milestone | Timeline |
|---|---|
| First asset deployed | 2 hours |
| 20 assets with ingress and secrets | 2-4 hours |
| Full dev environment | 1 day |
| Staging environment | 1 week |
| Production (with proper validation) | 2-4 weeks |
| Full Tanzu replacement | 1-2 months |
Compare this to industry averages of 3-12 months for Kubernetes platform migrations.
Q: Can I try this without committing?
Yes. Connect one dev cluster, import a few workloads, give developers access. If it doesn't work for you, disconnect the cluster and nothing changes.
Q: What's the pricing?
- Free trial: 30 days, full features
- After trial: Based on managed clusters/environments
- Zombie Mode: 10% of your savings (you keep 90%)
Contact sales@codiac.io for enterprise pricing.
Migration Checklist
Week 1: Foundation
- Create Codiac account
- Install CLI
- Connect first non-production cluster
- Import 1-2 existing workloads
- Create initial snapshot
- Invite 2-3 developers
Week 2: Expand
- Import remaining dev workloads
- Set up environment-specific configuration
- Enable developer self-service
- Configure Zombie Mode for cost savings
- Test blue/green cluster upgrade (non-prod)
Week 3-4: Production
- Connect staging cluster
- Import staging workloads
- Validate deployment workflow
- Set up monitoring integration
- Plan production migration
Month 2: Complete Migration
- Connect production clusters
- Migrate production workloads (incremental)
- Decommission Tanzu components as workloads move
- Train remaining team members
- Document runbooks
Get Started
Make Kubernetes operations repeatable and boring.
When environments are reproducible and disposable, waste disappears naturally. Cluster upgrades become routine. Developers stop waiting on tickets. Platform teams focus on architecture instead of firefighting.
- Start free trial — Connect your first cluster in 15 minutes. No credit card required.
- Book a migration assessment — We'll analyze your Tanzu footprint and show you the path forward.
- Join Discord — Talk to teams who've already migrated.
Questions?
- Sales: sales@codiac.io
- Support: support@codiac.io
- Enterprise migration: We offer white-glove migration support for teams with complex Tanzu deployments
Related Resources
- Why Teams Are Leaving VMware Tanzu - Blog post with detailed analysis
- Multi-Cluster Management Guide - Managing fleet-wide deployments
- Cluster Upgrade Checklist - Zero-downtime upgrade patterns
- Zombie Mode Cost Optimization - Reduce non-production costs by 70%