
Why Do Organizations Migrate to the Public Cloud? Hint: It Isn’t About Cost Anymore

Published 06/26/2025


Written by Eyal Estrin.

 

Why do organizations migrate to the public cloud? In 2025 it may sound like a simple question, but let's dive into it.

 

Historically: The Cost Factor

For many traditional organizations, the journey began with a debate about how to lower their IT costs.

Flexible purchase options (from pay-as-you-go and Savings Plans to Spot instances) and the ability to deploy an entire environment in a few clicks (or a few API calls) looked very appealing.

It was so appealing that many organizations (from small startups to large enterprises) forgot to embed cost in their design decisions, which resulted in large monthly bills. Having migrated data and workloads to the public cloud, they are now debating cloud repatriation and a return to on-prem.

Rushing to the cloud without proper design, and without considering all aspects (security, scalability, availability, and cost), ended in failed projects.

Mature organizations with experienced teams (developers, DevOps engineers, architects, etc.) are able to design modern architectures based on a combination of managed services, APIs, and serverless services, which can be cost-efficient and save money on cloud services. However, for most organizations still taking their first steps in the cloud, or lacking experienced teams, migration will very likely end in major disappointment if cost is the only factor they consider.

 

The Factors That Matter in 2025

The Agility Factor

Agility was a huge benefit from the early days of the public cloud – it allowed organizations to move fast and shorten the time to deliver new services or products to their customers.

Organizations of all sizes were able to test new services (or features), experiment with new technologies (from the early days of serverless to the latest improvements in GenAI services), deploy applications to test environments, and, if the new development provided customer value, deploy at production scale.

The cloud allowed organizations to break free from the constraints of legacy data centers (with their long purchase cycles and the requirement to use the same hardware for several years), test new capabilities, quickly recover from failures, and try again until they reached fully functional production services that satisfy their customers.

 

The Scalability Factor

One of the biggest advantages of the hyper-scale cloud providers over most organizations' data centers is size. At the end of the day, a data center has physical limitations (such as the maximum number of racks you can fit inside, or the maximum power available to run and cool the entire facility).

For organizations with stable workloads and minimal peaks in traffic or customer demand, the traditional data center may be sufficient. However, for organizations with a global presence, serving customers all around the world with variable traffic patterns (such as Black Friday or Cyber Monday events), scale is an important factor.

Perhaps you have an e-commerce site that needs to scale up or down to meet customer demand at different times of the year. Perhaps you have a workload performing an end-of-month calculation. Perhaps you are training a large language model on large amounts of customer data. For all those cases, the ability to scale (almost) infinitely is critical, and the public cloud is the best place for this (when you're running on top of one of the hyper-scale cloud providers).
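
As a concrete illustration of demand-driven scaling, here is a minimal sketch using the AWS SDK for Python (boto3); the Auto Scaling group name and target value are hypothetical, and the same idea exists on every hyper-scale provider:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Target-tracking policy: keep average CPU around 60% by adding or
# removing instances automatically as customer demand changes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="ecommerce-web-asg",  # hypothetical group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```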

 

The Elasticity Factor

The ability to add or remove resources on demand is an important factor when designing applications.

The combination of (almost) infinite resources (such as compute and storage), microservices architecture (with the ability to scale specific components up or down according to load), and serverless services (such as FaaS, storage, databases, etc.) that automatically respond to load and elastically manage the required resources (lowering the burden of human maintenance) made elasticity a huge benefit compared to the traditional data center.

Even the ability to switch hardware (with minor or zero downtime), use the latest GPUs for a new GenAI application (or extremely fast storage for a huge HPC cluster), and then shut everything down when the work is done to save cost, made elasticity a huge driver for using the public cloud.
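
To make the "shut it down when done" idea concrete, here is a minimal boto3 sketch; the instance IDs are hypothetical, and the same call can be wrapped in a scheduled job or a pipeline step:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical GPU instances used for a training or HPC run
gpu_instances = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

# Stop (rather than terminate) the instances once the job finishes, so no
# compute charges accrue while they sit idle (attached volumes still cost money).
ec2.stop_instances(InstanceIds=gpu_instances)
```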

 

The Efficiency Factor

Under the constraints of physical hardware, efficiency was perhaps not a priority. Over the past decade, however, it has become something more and more organizations choose to embed in their design decisions.

The cloud allowed us to achieve almost the same goals using many different patterns – containers, function as a service, APIs, event-driven architectures, and more.

At any point in time, we can stop and question past decisions. Is our current workload running on the most efficient architecture, or can we make adjustments that would make it more cost-efficient, more resilient, and quicker to respond to customers' requests?

Sometimes switching to modern hardware, between different storage services (or even storage service tiers), between different database types (such as relational vs. NoSQL, or from graph to time-series), or from tightly-coupled to loosely-coupled architectures, may result in a more efficient workload.
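
One small example of switching between storage tiers is an object lifecycle rule. The sketch below uses boto3 with a hypothetical bucket and prefix; the exact tiers and retention periods would depend on your own access patterns:

```python
import boto3

s3 = boto3.client("s3")

# Move older objects to cheaper storage tiers, then expire them,
# instead of keeping everything on the most expensive tier forever.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-app-logs",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-old-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```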

 

The Automation Factor

Although mature organizations with a large number of servers and applications have been using automation scripts for many years to achieve fast and reproducible outcomes, the cloud took automation to a whole new level, where (almost) everything is exposed through APIs.

Infrastructure as Code allows organizations to automate everything from building entire environments across multiple SDLC stages (such as Dev, Test, and Prod) to deploying across multiple availability zones and even multiple regions (when a global footprint is required).

IaC languages such as Terraform (or OpenTofu) and Pulumi, or more vendor-opinionated native alternatives (such as CloudFormation or ARM templates), allow organizations (once they have learned how to write IaC) to deploy workloads in a standard, automated way.
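
As a small illustration, here is a minimal Pulumi sketch in Python; the resource names and the stage configuration value are hypothetical, and a Terraform, OpenTofu, CloudFormation, or ARM equivalent would express the same idea:

```python
import pulumi
import pulumi_aws as aws

# Read the SDLC stage (dev, test, prod) from the stack configuration,
# so the same code deploys every environment consistently.
config = pulumi.Config()
stage = config.require("stage")

# A hypothetical artifact bucket, tagged per stage for cost and ownership tracking.
artifacts = aws.s3.Bucket(
    f"app-artifacts-{stage}",
    tags={"stage": stage, "managed-by": "pulumi"},
)

pulumi.export("artifacts_bucket", artifacts.id)
```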

The addition of Policy as Code (from HashiCorp Sentinel, AWS SCPs, Azure Policy, and Google Organization Policies to Open Policy Agent) allowed organizations to add a layer of guardrails (which resources may be consumed, and with what limitations, such as region or specific instance types), making security and configuration standards consistent across the organization.
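
As one possible guardrail, the sketch below uses boto3 to create an AWS service control policy that denies API calls outside two approved regions; the policy name and region list are hypothetical, and a production SCP would normally also exempt global services:

```python
import json
import boto3

org = boto3.client("organizations")

# A region guardrail: deny any API call outside the approved regions.
# Real-world SCPs usually add exceptions for global services (IAM, STS, etc.).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["eu-west-1", "eu-central-1"]
                }
            },
        }
    ],
}

org.create_policy(
    Name="approved-regions-only",  # hypothetical policy name
    Description="Deny API calls outside the approved regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```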

 

The Security Factor

When organizations' and customers' data is spread across multiple places (from the on-prem data centers to SaaS applications and even partners' data centers), we can no longer look at the physical location as a security boundary.

In many cases (but unfortunately not all), services deployed on IaaS or PaaS are configured as secure by default. Although deploying compute resources with a public IP still happens, today it is much rarer to see a publicly exposed object storage service (unless it was specifically configured as a public resource).
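
If you want to enforce that default rather than rely on it, here is a minimal boto3 sketch that blocks public access on an object storage bucket; the bucket name is hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Explicitly block all forms of public access to the bucket,
# regardless of ACLs or bucket policies applied later.
s3.put_public_access_block(
    Bucket="example-data-bucket",  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```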

Encryption, both in transit and at rest, comes enabled by default in most cloud services. To get higher assurance of who has access to private data, most hyper-scale cloud providers allow customers to configure customer-managed encryption keys. This ensures that organizations don’t just control the encryption keys, but also the key generation process.
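
To illustrate customer-managed keys, here is a minimal boto3 sketch that creates a KMS key and sets it as the default encryption key for a bucket; the bucket name is hypothetical, and the other hyper-scale providers offer equivalent capabilities:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer-managed key, so the organization (not the provider)
# controls the key's lifecycle and access policy.
key = kms.create_key(Description="CMK for the example data bucket")
key_arn = key["KeyMetadata"]["Arn"]

# Make that key the default server-side encryption for the bucket.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_arn,
                }
            }
        ]
    },
)
```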

Auditing of admin activity is enabled by default (for user data access, we still need to consider whether to enable it manually, due to its extra cost), and logs can be stored for as long as organizations need them for incident response processes or to satisfy regulatory requirements.
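
As an example of opting in to data-access auditing, the sketch below adds S3 object-level events to an existing trail using boto3; the trail and bucket names are hypothetical, and keep in mind that data events are billed separately:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Management (admin) events are logged by default; data events are opt-in.
cloudtrail.put_event_selectors(
    TrailName="org-trail",  # hypothetical trail name
    EventSelectors=[
        {
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    "Values": ["arn:aws:s3:::example-data-bucket/"],
                }
            ],
        }
    ],
)
```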

Network access in the cloud is still a pain for many organizations. The larger your cloud environment (not to mention one spread across regions or even multiple cloud providers), the more visibility you need (sometimes using built-in services or open-source tools, and sometimes using third-party commercial solutions). Be alert when changes happen, because, as in real life, if you leave the door open, someone will eventually come inside.
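
One built-in way to gain that visibility is flow logging. The boto3 sketch below enables VPC flow logs to CloudWatch Logs; the VPC ID, log group, and IAM role are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture accepted and rejected traffic for the VPC, so unexpected
# network changes and access attempts become visible.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],  # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",  # hypothetical log group
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",  # hypothetical role
)
```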

 

Summary

In this blog post, I've tried to answer the question of why organizations are migrating to the public cloud.

There are many cases where organizations will choose to keep some of their workloads on-prem (or in co-location or hosting facilities) due to high service costs (from real-time storage to expensive hardware such as GPUs), requirements for low network latency (such as connectivity to a stock exchange), or data sovereignty requirements.

We will probably still see hybrid architectures for many years, but there is no doubt that the public cloud is taking on more and more importance in the design and architecture decisions of organizations of all sizes.

If we stop looking at the public cloud merely as a place to lower our costs (possible, but not for all use cases), and instead look at agility, scalability, elasticity, efficiency, automation, and built-in security (enabled by default) as the important factors, we see the answer to the question of why organizations are migrating to the public cloud.

 


About the Author

Eyal Estrin is a cloud and information security architect, an AWS Community Builder, and the author of the books Cloud Security Handbook and Security for Cloud Native Applications, with more than 25 years in the IT industry.

You can connect with him on social media (https://linktr.ee/eyalestrin).

Opinions are his own and not the views of his employer.
