Engineering

Cloud Exit Strategy: How to Avoid Vendor Lock-In

Daniel Brooks · 8 min read

Most teams choose a cloud provider and never think about leaving. The platform works, the team learns it, and the integrations multiply. Until the pricing changes, the service degrades, or a better option emerges — and suddenly migration feels impossible.

A cloud exit strategy isn't about planning to leave. It's about maintaining the freedom to leave. Teams that never need to migrate still benefit from the discipline of building portable systems. The architecture decisions you make in week one determine whether a future migration takes hours or months.

This guide covers how to evaluate vendor lock-in risk, which technology choices protect your portability, and how to build a practical exit plan before you need one.

Why Cloud Exit Strategies Matter

The case for thinking about cloud portability often arrives too late, when switching costs have already compounded. Several categories of risk make an exit strategy valuable even if you never use it.

Pricing changes with little notice. Heroku's removal of its free tier in November 2022 affected millions of developers and applications. The announcement gave 90 days of notice. Teams with portable architectures migrated quickly. Teams deeply integrated with the Heroku add-on marketplace spent weeks untangling dependencies. AWS has increased prices for several services over the years, and managed service pricing across providers has trended upward as platforms compete on features rather than cost.

Services get deprecated or shut down. Google has a documented history of discontinuing products: Google Cloud IoT Core (shut down August 2023), Google Domains (sold to Squarespace), Firebase features removed in ongoing cleanup cycles. Microsoft Azure has retired dozens of services since 2014. When a service you depend on announces end-of-life, you are forced to migrate on a schedule defined by someone else.

Compliance requirements shift. Data residency regulations like GDPR and the emerging patchwork of national data sovereignty laws may require you to move workloads to specific geographic regions or infrastructure types. A provider that serves you well today may not offer the right compliance posture in three years. Air-gapped deployments, which are required in some regulated industries, are only possible if your application was never written against cloud-specific APIs.

Ownership and control changes. When a cloud provider is acquired, strategic priorities shift. The acquiring company may deprecate services, change pricing models, or redirect engineering investment. Teams running on niche managed services are particularly exposed when those services are deprioritized post-acquisition.

Better alternatives emerge. Cloud infrastructure improves rapidly. Per-second billing, which Out Plane offers, did not exist as a standard PaaS feature five years ago. Teams locked into providers that bill by the hour or by fixed monthly tiers are paying for efficiency gains they cannot access.

Vendor lock-in is a slow process. You don't notice it until the switching cost is prohibitive. Each proprietary service adds friction to a future migration. The right time to address that friction is before it accumulates.

The Lock-In Spectrum

Not all cloud dependencies create equal migration friction. Understanding where a given technology sits on the lock-in spectrum helps you make deliberate trade-offs rather than accidental ones.

Lock-In Level | Example | Migration Difficulty
--- | --- | ---
None | Standard Docker containers | Minutes
Low | PaaS with standard buildpacks | Hours
Medium | Cloud-specific managed services | Days to weeks
High | Proprietary serverless (Lambda, Cloud Functions) | Weeks to months
Extreme | Multi-service deep integration (Lambda + DynamoDB + SQS + API Gateway) | Months

The pattern is consistent: proprietary abstraction increases migration difficulty. A Docker container that runs on any Linux host is trivially portable. An AWS Lambda function that consumes events from SQS, reads from DynamoDB, and returns responses through API Gateway is a system of proprietary integrations that has no equivalent on another provider.

Every proprietary service you adopt is a link in a chain. Each link is small. The chain is heavy.

The goal is not to avoid all managed services. Managed databases, managed caches, and managed queues provide significant operational value. The goal is to choose managed services that use standard interfaces — so replacing the managed service means changing a connection string, not rewriting application logic.

Common Lock-In Traps

Understanding the specific mechanisms of vendor lock-in helps you recognize them before you're inside them.

Proprietary Serverless

AWS Lambda, Vercel Serverless Functions, Google Cloud Functions, and Cloudflare Workers all follow the same pattern: you write functions that conform to a proprietary execution model, and the platform handles deployment and scaling.

The lock-in is structural. Lambda functions receive events in formats defined by AWS: S3 event shapes, SQS message envelopes, API Gateway request objects. The function signatures, runtime lifecycle, environment constraints, and cold start behavior are all AWS-specific. Moving a Lambda function to Google Cloud Functions is not a configuration change. It is a rewrite.
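One way to limit this structural lock-in is to keep business logic free of provider-defined event shapes and confine them to a thin adapter. A minimal sketch, with hypothetical function names (the domain does not matter, only the boundary):

```python
import json

def create_order(customer_id: str, items: list) -> dict:
    """Plain application logic: no cloud-specific types anywhere."""
    return {"customer_id": customer_id, "item_count": len(items), "status": "created"}

def lambda_handler(event, context):
    """Thin AWS-specific shim: the only code that knows the API Gateway
    proxy event shape. Porting to another provider means rewriting this
    adapter, not the logic above."""
    body = json.loads(event["body"])
    result = create_order(body["customer_id"], body["items"])
    return {"statusCode": 201, "body": json.dumps(result)}
```

The rewrite cost of migrating off Lambda then shrinks to the size of the shim, because everything below it is an ordinary function.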

Vercel's serverless functions tie deployment to their edge network. The vercel.json configuration file, Edge Config API, and ISR (Incremental Static Regeneration) behavior are platform-specific. Applications built around these features cannot be deployed elsewhere without changes.

Proprietary Databases

DynamoDB, Firestore, Cosmos DB, and MongoDB Atlas (with Atlas-specific features) represent a different category of lock-in: data model lock-in.

DynamoDB uses a proprietary query model built around partition keys, sort keys, and single-table design patterns. There is no standard SQL equivalent. Your application's data access patterns are designed around DynamoDB's specific performance characteristics. Migrating to PostgreSQL requires rethinking data models, rewriting queries, and often redesigning application logic.
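The contrast is easy to see with standard SQL. This sketch uses SQLite only so it is self-contained; the same schema and query run unchanged on PostgreSQL or MySQL, while the DynamoDB equivalent is a provider-specific API call with no standard counterpart:

```python
import sqlite3

# Standard SQL: the same schema and query are portable across engines.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders VALUES ('o1', 'c1', 19.99)")

row = conn.execute("SELECT total FROM orders WHERE id = ?", ("o1",)).fetchone()
# The DynamoDB version of this lookup -- table.get_item(Key={"id": "o1"}) --
# exists only in AWS's SDK, and its partition-key access patterns end up
# shaping the data model itself.
```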

Beyond the data model, there is no standard export path. You can use DynamoDB Streams and export to S3, but the exported format is not directly importable into a relational database. The migration path involves transformation tooling and extended parallel-run periods.

Proprietary databases can be justified when their specific capabilities are genuinely required. The point is to enter that trade-off deliberately, with awareness of what you are accepting.

Proprietary APIs and SDKs

AWS SDK calls distributed throughout an application codebase are a subtle form of lock-in that compounds over time. When you call s3.putObject(), sns.publish(), or sqs.sendMessage() directly from application code, you are embedding AWS dependencies at the business logic layer.

Authentication mechanisms deepen this problem. AWS IAM roles, instance profiles, and STS token exchange are AWS-specific. Google Cloud uses service account JSON files and Workload Identity Federation. Azure uses managed identities and service principals. Applications that handle their own authentication to cloud services have per-provider logic that must be ported.

Event-driven architectures built on proprietary message buses — AWS EventBridge, Google Cloud Pub/Sub, Azure Service Bus — inherit the same problem. The message schemas, delivery guarantees, and consumer patterns differ enough between providers that migration requires redesigning the integration layer.

Deployment Configuration

Infrastructure-as-code tools tied to specific providers are a form of operational lock-in. AWS CloudFormation templates, SAM templates, and CDK stacks describe infrastructure in AWS-specific terms. Even Terraform, which is provider-agnostic, accumulates AWS-specific resource definitions over time.

Vendor-specific CI/CD systems add another layer. AWS CodePipeline, Google Cloud Build, and Azure Pipelines each have proprietary pipeline definition formats. Moving a complex build and deployment system from one to another requires significant rework of the operational tooling, separate from any application code changes.

Designing for Portability

Portability is not a feature you add after the fact. It is a consequence of technology choices made at the beginning of a project. The following practices establish portability from the ground up.

Use Standard Containers

Docker is the universal packaging format for application deployment. A Docker container that builds and runs correctly is compatible with every major cloud provider, every Kubernetes distribution, and most PaaS platforms.

Paketo Buildpacks, an open-source implementation of the Cloud Native Buildpacks specification supported by platforms including Cloud Foundry and Out Plane, produce OCI-compliant images from standard language runtimes without requiring a custom Dockerfile. The build process is reproducible and vendor-neutral.

Avoid vendor-specific container extensions. ECS task definitions contain AWS-specific networking and IAM configurations that are not portable. GKE Autopilot includes Google-specific annotations and admission controllers. Building your application as a plain Docker container that reads configuration from environment variables keeps the container itself fully portable, regardless of where you run it.

Use Standard Databases

PostgreSQL and MySQL are available on every major cloud provider, on every major PaaS platform, and as self-hosted options on any infrastructure. Standard SQL means your queries, schemas, and migrations work without modification across providers.

The migration path is well-established and well-tooled. pg_dump produces a standard dump format; pg_restore imports it. This is the same workflow regardless of whether you are moving from Heroku Postgres to Out Plane's managed PostgreSQL, from RDS to a self-hosted instance, or between any other combination of providers.

This compatibility exists because the underlying engine is the same PostgreSQL version. You are not running a proprietary service that happens to speak SQL. You are running PostgreSQL.

Avoid cloud-specific database extensions unless their capabilities are genuinely necessary. AWS Aurora's fast failover, AlloyDB's columnar engine, and Azure's hyperscale architecture each provide real benefits at scale. The question is whether those benefits justify accepting migration difficulty. For most applications, standard PostgreSQL performance is sufficient.

Abstract Cloud Services

Some cloud-specific services are worth using for their operational value — object storage, email delivery, message queues. The key is abstracting them behind interfaces in your application code so that the provider can be replaced by changing a configuration value, not by editing application logic.

Object storage abstraction is straightforward. The S3 API has become a de facto standard: Cloudflare R2, Backblaze B2, and MinIO all implement compatible S3 APIs. If your application talks to an S3-compatible interface through a thin storage adapter, switching providers requires changing credentials and an endpoint URL, not application code.
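A thin adapter makes this concrete. The sketch below is illustrative, not from the article: the in-memory backend stands in for a production implementation that would wrap any S3-compatible client, reading the endpoint URL and credentials from the environment.

```python
from typing import Protocol

class BlobStore(Protocol):
    """The only storage interface application code is allowed to see."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Stand-in backend for illustration. A production adapter would wrap
    any S3-compatible client (AWS S3, Cloudflare R2, Backblaze B2, MinIO)
    configured with an endpoint URL and credentials from the environment,
    so switching providers never touches application code."""
    def __init__(self) -> None:
        self._objects: dict = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

def archive_report(store: BlobStore, report_id: str, content: bytes) -> None:
    # Application code depends only on the interface, never on a vendor SDK.
    store.put(f"reports/{report_id}", content)
```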

Email delivery services — SendGrid, Postmark, Mailgun — provide similar functionality through similar APIs. Using a thin adapter or a library that abstracts email delivery (like ActionMailer in Rails or Nodemailer in Node.js) lets you swap providers at the configuration layer.
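A sketch of what swapping providers at the configuration layer can look like. The registry and names here are hypothetical; real entries would wrap the SendGrid, Postmark, or Mailgun clients behind the same send method.

```python
import os

class LoggingMailer:
    """Stand-in delivery backend, useful for illustration and local development."""
    def __init__(self) -> None:
        self.sent = []

    def send(self, to: str, subject: str, body: str) -> None:
        self.sent.append((to, subject, body))

# A real registry would also map "sendgrid", "postmark", "mailgun"
# to adapters wrapping those providers' clients.
MAILERS = {"logging": LoggingMailer}

def make_mailer(env=os.environ):
    """Provider choice lives in configuration, so changing vendors is an
    environment-variable change, not an application change."""
    return MAILERS[env.get("MAIL_PROVIDER", "logging")]()
```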

Queue-based workloads can use standard job queues such as Sidekiq (Redis), pg-boss (PostgreSQL), or BullMQ (Redis). These run on any infrastructure. If you need a more scalable queue, RabbitMQ and AMQP are open standards available on any cloud.

Keep Configuration Portable

Environment variables are the universal configuration layer for containerized applications. The Twelve-Factor methodology, now widely accepted as the standard for cloud-native application design, specifies environment variables as the correct place for configuration that changes between environments.

An application that reads all configuration from environment variables — database connection strings, API keys, feature flags, external service endpoints — can be deployed anywhere that can provide those variables. There are no vendor-specific configuration files in the application code.
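As a sketch (the variable names are hypothetical), the entire deploy-time surface of such an application can be a single function over the environment:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    database_url: str
    blob_endpoint: str
    new_checkout_enabled: bool

def load_config(env=os.environ) -> Config:
    """Everything that differs between environments arrives as an
    environment variable; the image itself never changes."""
    return Config(
        database_url=env["DATABASE_URL"],
        blob_endpoint=env["BLOB_ENDPOINT_URL"],
        new_checkout_enabled=env.get("FEATURE_NEW_CHECKOUT", "false") == "true",
    )
```

Any platform that can set three environment variables can run this application; none of them are named after a vendor.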

Avoid platform-specific configuration systems. Heroku Config Vars, AWS Parameter Store, and GCP Secret Manager are all valid places to store secrets, but your application should retrieve them through standard environment variables at startup, not through provider-specific SDKs embedded in the application.

The principle is simple: if your deployment requires environment variables and a Docker image, it will run anywhere.

Evaluating Provider Lock-In Risk

When evaluating a new cloud provider, managed service, or deployment platform, a short checklist surfaces the most critical portability risks before you commit.

Data portability:

  • Can you export your data in a standard format?
  • Is pg_dump, mysqldump, or another standard tool supported?
  • What is the process for a full data export under time pressure?

Application portability:

  • Does your application run in a standard Docker container?
  • Are the databases standard (PostgreSQL, MySQL) with no proprietary extensions?
  • Are cloud-specific SDKs isolated behind interfaces, or distributed throughout the codebase?

Operational portability:

  • Can you replicate this infrastructure setup on another provider?
  • Is there a self-hosted option if you need to move off the managed service entirely?
  • What is the documented migration path if you need to leave?

This checklist is most valuable when the answer to several questions is "no" and you are making a deliberate choice to accept that trade-off. It is less useful as a post-hoc rationalization. Run it before you start building, not after.

Building Your Exit Plan

An exit plan is a living document, not a one-time exercise. It takes two to four hours to create the first version and a half-hour quarterly to keep it current.

Step 1: Inventory Your Dependencies

List every cloud service your application uses. For each service, note whether it uses a standard interface or a proprietary one, and estimate migration difficulty on the spectrum described earlier.

The goal is a clear map of where your portability risk is concentrated. Most applications have one or two high-risk dependencies (a proprietary database, a serverless function architecture) surrounded by lower-risk ones (managed PostgreSQL, S3-compatible storage). Knowing which dependencies are the critical blockers lets you focus mitigation effort where it matters.
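The inventory can be as simple as a list with a lock-in level per dependency. The entries below are hypothetical examples, not a prescribed format; the levels come from the spectrum earlier in the article.

```python
# Lock-in levels from the spectrum described earlier.
LEVELS = {"none": 0, "low": 1, "medium": 2, "high": 3, "extreme": 4}

# Hypothetical example inventory: (dependency, interface, lock-in level).
inventory = [
    ("Managed PostgreSQL", "standard SQL", "low"),
    ("Object storage", "S3-compatible API", "low"),
    ("DynamoDB", "proprietary query model", "high"),
    ("Lambda + API Gateway", "proprietary event formats", "high"),
]

def critical_blockers(deps, threshold: str = "high"):
    """Return the dependencies that dominate migration risk."""
    cutoff = LEVELS[threshold]
    return [name for name, _iface, level in deps if LEVELS[level] >= cutoff]
```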

Step 2: Identify Critical Data

Your data is harder to move than your code. For each data store, answer: where is the data stored, can it be exported in a standard format, how large is it, and what is the acceptable recovery point if you execute a migration under time pressure?

Test the export process before you need it. Running pg_dump against your production database and verifying the output is importable takes less than an hour. Not having done it when you need it costs days.

Step 3: Document the Migration Path

For each high-risk dependency, write down the specific alternative and the steps to replace it. "We use AWS SQS. The alternative is BullMQ with our existing Redis instance. The migration requires updating the queue producer and consumer to use the BullMQ client and modifying the worker deployment configuration."

This level of specificity matters when someone is executing the migration under pressure. Vague documentation ("use a different queue") provides no operational value.

Step 4: Test Portability Periodically

Run your application locally with Docker. Test your database backup and restore process. Verify that your Docker image builds and runs outside the CI environment.

Quarterly portability testing does two things: it confirms that your exit plan is still accurate as the application evolves, and it ensures the team knows how to execute the steps before they are under pressure to do so.

The Out Plane Approach to Portability

Out Plane is designed to minimize the portability trade-offs that PaaS platforms typically require.

Standard Docker containers. Every application on Out Plane runs as a Docker container. The same image runs on Out Plane, on Kubernetes, on a VPS, or on your local machine. There are no Out Plane-specific runtime dependencies.

Paketo Buildpacks. For applications without a Dockerfile, Out Plane uses Paketo Buildpacks — an open standard maintained by the Cloud Native Buildpacks project. The generated image is a standard OCI container with no platform-specific binaries or configurations.

Standard PostgreSQL. Out Plane's managed databases run standard PostgreSQL. pg_dump works. pg_restore works. A backup taken from an Out Plane database can be imported into any PostgreSQL instance anywhere without transformation.

GitHub-based deployment. Deployment is triggered by pushing to a GitHub branch. The deployment configuration is a git repository and a set of environment variables. There is no Out Plane-specific CLI required to manage deployments, no proprietary configuration file checked into the repository, and no vendor-specific toolchain that must be present on developer machines.

Your application deployed on Out Plane is a standard Docker container backed by standard PostgreSQL. Moving to another platform requires changing your deployment target, not your code.

For teams evaluating the trade-offs between managed PaaS and self-hosted infrastructure, the self-hosted vs. managed PaaS comparison covers the decision in detail. If you are coming from AWS and want to understand the specific migration steps, the migrating from AWS guide covers the process end to end.

Migration Patterns

The practical shape of a migration depends on how much lock-in has accumulated. Three patterns cover most scenarios.

Same-Day Migration

Profile: Standard Docker containers, PostgreSQL, environment variable configuration, no proprietary managed services.

Process: Update deployment configuration to point at the new provider. Export database with pg_dump, import with pg_restore. Update DNS. Total elapsed time: hours.

This is the migration that portability discipline makes possible. The application runs anywhere because it was built to run anywhere.

Planned Migration

Profile: Moderate lock-in — a mix of portable application code and some proprietary managed services (cloud object storage with native SDK calls, a managed queue with provider-specific consumer logic).

Process: Replace proprietary service integrations with standard alternatives before changing providers. This might mean replacing direct S3 SDK calls with an adapter that supports multiple backends, or replacing a cloud-specific queue with a PostgreSQL-backed alternative. Once the integrations are portable, the provider migration follows the same-day pattern.

Timeline: days to weeks, depending on the number of services requiring replacement and the integration depth.

Complex Migration

Profile: High lock-in — significant use of proprietary serverless, non-relational databases with proprietary query languages, deep event-driven integrations using cloud-native event buses.

Process: Requires re-architecture, not just configuration changes. Serverless functions must be rewritten as long-running containers or background workers. Proprietary database queries must be translated to standard SQL with corresponding schema changes. Event integrations must be replaced with standard message queue patterns.

Timeline: weeks to months. This type of migration is genuinely expensive, which is why cloud exit strategies are more valuable before these dependencies accumulate than after.

Practical Starting Points

If you are building a new application, the portability baseline is straightforward to establish: Docker container, PostgreSQL, environment variables for configuration, standard language-level abstractions over any cloud services you use. These choices cost nothing and eliminate the most common categories of lock-in before they start.

If you are evaluating an existing application, run the dependency inventory described above. Most applications have one or two proprietary dependencies that represent the majority of the migration risk. Addressing those specifically is more useful than a broad architectural overhaul.

If you are evaluating a new provider, run the portability checklist before committing. The questions are fast to ask and the answers tell you clearly what you are accepting.

The goal in all three cases is the same: maintain the freedom to change your infrastructure decisions as better options emerge. That freedom is most available when you have treated it as a design requirement from the beginning.

Summary

A cloud exit strategy is not a contingency for crisis. It is a design discipline that makes your application resilient to decisions made by people outside your organization.

The practical summary is short:

  • Use standard containers (Docker) so your application runs anywhere without modification
  • Use standard databases (PostgreSQL) so your data is portable with standard tooling
  • Abstract cloud-specific services behind interfaces so providers are swappable at the configuration layer
  • Keep all configuration in environment variables so there are no vendor-specific config files in your application
  • Build and maintain an exit plan so that when you need to move, you are executing a documented process rather than an improvised one
  • Test portability periodically so you know the plan works before you need it

Every proprietary service you adopt represents a potential migration cost. Some are worth accepting. Most are not, because standard alternatives exist with equivalent operational characteristics.

If you are starting a new project and want a deployment platform that does not introduce lock-in, Out Plane provides git-driven deployment, standard Docker containers, and managed PostgreSQL with per-second billing. Your application remains a portable artifact throughout its lifetime. Get started at console.outplane.com.

For more context on choosing infrastructure that doesn't create long-term obligations, see deploying a Docker application and Out Plane as an AWS alternative.


Tags

cloud
vendor-lock-in
portability
architecture
infrastructure
strategy
