
Kubernetes has become the default foundation for a lot of modern application infrastructure.
It’s powerful, flexible, and widely supported, which makes it an obvious starting point for many teams building a cloud-native application platform: a standardized way to deploy, run, secure, and operate applications in production.
But there’s a distinction that often gets lost early in the decision process:
Kubernetes is a framework. It is not a platform.
Choosing Kubernetes doesn’t automatically mean you’re building a full platform. But the moment you want consistent deployments, security guardrails, shared services, observability, and sane developer workflows, you quickly move beyond “just Kubernetes” and into platform-building territory.
Kubernetes provides the core orchestration layer. Everything else, from CI/CD and environment management to security controls, service provisioning, and governance, has to be designed, integrated, and maintained on top of it.
Managed Kubernetes services such as EKS, AKS, or GKE undeniably reduce some of the operational burden. They typically handle the control plane, cluster lifecycle, and an increasing number of infrastructure concerns, making Kubernetes easier to adopt than it once was.
Even with these advances, Kubernetes remains a framework rather than a complete application platform. Teams still need to design and maintain the path from code to production, integrate CI/CD and Git workflows, secure applications, and operate application-level services such as databases, caches, and storage.
Kubernetes provides powerful primitives. Turning those primitives into a coherent, secure, and repeatable developer experience is a separate engineering effort that requires ongoing expertise and maintenance.
Early Kubernetes environments often feel manageable. A small cluster, a handful of services, and a few engineers who know their way around YAML can get surprisingly far.
The complexity shows up later.
As environments grow, teams must start making precise decisions about resource requests and limits, isolation boundaries, network policies, and access controls. Multi-tenancy introduces new challenges around fairness, security, and blast radius. Observability and alerting evolve from “nice to have” into critical infrastructure that has to be reliable under pressure.
None of this is unsolvable, but none of it is free. Each layer adds configuration, operational knowledge, and long-term maintenance overhead.
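To make that concrete, here is a minimal, hypothetical sketch of two of those decisions: per-container resource requests and limits, and a default-deny NetworkPolicy for a namespace. The names, images, and values are illustrative only; real clusters need many such definitions, kept consistent across teams and environments.

```yaml
# Hypothetical example: resource requests and limits on a single container.
apiVersion: v1
kind: Pod
metadata:
  name: api                 # illustrative workload name
  namespace: team-a         # illustrative tenant namespace
spec:
  containers:
    - name: api
      image: registry.example.com/api:1.4.2   # placeholder image
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
---
# Hypothetical default-deny policy: with no ingress rules defined, all
# inbound traffic to pods in the namespace is blocked until further
# NetworkPolicies explicitly allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
```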
Kubernetes is often described as “cheap” because the software itself is open source and managed clusters are relatively inexpensive to provision.
In practice, infrastructure costs are rarely the limiting factor. The real cost shows up in engineering time and focus.
Running Kubernetes well requires ongoing decisions about networking, storage, security, deployment workflows, and service integration. Even in leaner setups, teams still need to understand how these pieces fit together and how changes affect reliability and security over time. This work does not stop after the first deployment.
In practice, many teams spend months assembling and stabilizing the surrounding platform before developers can reliably ship production code.
For many organizations, this means allocating senior engineering time to operating the platform rather than building product. Over time, that opportunity cost often outweighs the apparent savings of a DIY approach. The real question is not whether Kubernetes can be made to work, but whether this is where a team’s engineering effort is best spent.
Kubernetes is designed to scale workloads. Scaling safely, predictably, and securely across diverse applications is a different challenge altogether.
As clusters grow, teams have to balance performance with isolation. Misconfigured resource limits can lead to noisy neighbors. Weak isolation can turn small failures into platform-wide incidents. Security becomes a multi-layered problem spanning infrastructure, the Kubernetes control plane, and the applications themselves.
These are not one-time design decisions. They are ongoing operational concerns that demand constant attention and adjustment.
Another common surprise is how difficult cost control becomes over time.
Clusters tend to grow incrementally. Nodes are added “just in case.” Storage accumulates. Network egress costs creep in. Tooling for monitoring, security, or policy enforcement often comes with its own licensing costs.
More importantly, misconfigurations and manual changes introduce waste and risk. Each error costs engineering time, causes downtime, or increases exposure. These costs rarely show up neatly on a cloud invoice, but they compound steadily.
Kubernetes is not secure by default. Even with a managed service, your team remains responsible for securing large portions of the stack.
That includes configuring access controls, enforcing network policies, managing secrets, and documenting controls for audits and certifications. Each additional tool or layer increases both the attack surface and the documentation burden.
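As a small illustration of what “configuring access controls” means in practice, here is a hypothetical namespace-scoped RBAC role and binding. The namespace and group names are assumptions; every rule like this has to be written, reviewed, and kept in sync with what auditors expect to see.

```yaml
# Hypothetical RBAC example: a namespace-scoped role that lets a team
# read and update Deployments, and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deploy-editor
  namespace: team-a             # illustrative namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
# Binds the role to an assumed group from the identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-editor-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers     # assumed group name
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deploy-editor
  apiGroup: rbac.authorization.k8s.io
```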
For regulated environments, this overhead can become a serious blocker. A platform that embeds security, backups, and auditability by default removes much of that burden and makes compliance a property of the system rather than a recurring project.
Kubernetes is a strong foundation for running applications, including those that depend on state. The challenge is not that Kubernetes cannot run stateful workloads, but that managing state reliably introduces a different class of complexity.
Databases, caches, and other stateful services require careful handling of data consistency, backups, recovery procedures, upgrades, and high availability. Kubernetes provides primitives to support this, and modern approaches such as Operators can help automate parts of the lifecycle.
Even so, teams still need to understand what those abstractions do, where they fall short, and how to diagnose problems when something goes wrong.
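The sketch below shows the shape of one such primitive: a hypothetical single-replica StatefulSet for PostgreSQL, where each pod gets a stable identity and its own persistent volume. It illustrates the resource, not a production deployment; backups, replication, tuning, upgrades, and recovery all still sit on top of it, and the referenced Secret is assumed to be managed separately.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres              # illustrative name, not a production setup
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials   # assumed Secret, created separately
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```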
Some organizations address this by pairing Kubernetes with managed database services from their cloud provider. That can remove part of the operational burden, but it also introduces new integration points, cost considerations, and operational boundaries that teams need to manage explicitly.
The core question for buyers is not whether stateful workloads can run on Kubernetes. It is whether they want to own the domain-specific expertise required to operate them safely over time. That includes tuning, backup validation, recovery testing, and consistency across environments.
A cloud application platform shifts this responsibility away from application teams. Stateful services are provisioned, managed, and integrated consistently across environments, often with the same workflows used for application code. This reduces fragility and typically costs less, in both time and effort, than assembling and maintaining an equivalent setup yourself.
Kubernetes is often described as a foundation for building developer platforms. That phrasing is accurate, and it also reveals where much of the hidden cost actually lives.
A good developer experience does not emerge automatically from Kubernetes primitives. It has to be designed. Opinionated workflows, clear guardrails, and integrated tooling do not appear by default. They are the result of deliberate product and platform design, followed by ongoing maintenance as teams, applications, and requirements evolve.
In practice, this means organizations end up building and maintaining an internal platform alongside their product. Engineers are needed not only to keep the system running, but also to define deployment flows, manage environment lifecycles, standardize how services are consumed, and ensure that day-to-day development remains predictable and safe. This is real engineering work, and it compounds over time.
A managed cloud application platform takes on that responsibility directly. The developer experience is provided out of the box, with established workflows and guardrails that reflect production realities. Developers can focus on application logic and delivery, while the platform absorbs the complexity of designing and maintaining the paths they use every day.
None of this is an argument against Kubernetes. It is an exceptionally capable framework, and for some organizations, building on it directly is the right choice.
The real decision is about ownership.
Choosing Kubernetes means owning the platform: its security model, its workflows, its services, and its long-term evolution. Choosing a cloud application platform means delegating that responsibility to a system designed to absorb it.
The perceived higher cost of a platform like Upsun reflects work you no longer have to do and risk you no longer have to carry.
For a deeper look at why many teams choose a PaaS over a DIY Kubernetes approach, see our breakdown of PaaS vs Kubernetes.
The most important question isn’t whether Kubernetes is powerful enough. It almost always is.
The question is whether your organization wants to spend its time building and operating a platform, or whether it wants that platform to already exist.
When responsibility lives at the platform layer, teams move faster with fewer surprises. When it lives with application teams, flexibility comes with ongoing complexity and operational toil.
Understanding that tradeoff early is what separates sustainable platform strategies from expensive experiments.