
A brief history of application deployment

PaaSCI/CDdeploymentIaCRubyKubernetesautomation
08 May 2024
Ori Pekelman
Chief Strategy Officer

Writing apps is great. If only we could be doing that rather than mucking around with the whole process of getting them up and running, right? I often say that apps are like planes: when they are up and flying in the sky, chances are that, if you built the app right, it's gonna keep running smoothly. It's takeoff and landing where you usually get into trouble—a.k.a. deployment.

In this blog post, I'll dive into a brief history of how we used to deploy apps, taking a look at what has changed, what worked well, and what wasn’t so great along the way. 

A brief history of application deployment

1. Traditional deployment (late 1990s to early 2000s)

  • Method: manual File Transfer Protocol (FTP) uploads to servers. Webmasters would develop the application locally and then transfer it manually to a production server.
  • Tools: standard FTP clients like FileZilla or WS_FTP, among others.

Today, you'd be shocked to know the huge systems that were deployed like this, and you’d probably think it's crazy—but it was also fun and invigorating! You had a thing to change. You double-clicked on an icon of your favorite FTP client (yes, we used Windows). You'd navigate to a file. Double click. Change a thing. Save. Voilà, change delivered. We didn't use that many frameworks. So chances are you had an enormous PHP file somewhere. And your change was localized. And if it broke? Well, you double-clicked again and corrected that. When cycle times are seconds, that was fine. Well, until it wasn't.

Often enough your app ran on a single server, dedicated to this use. It would have whatever database you needed running on it, or if you were running something more resource-hungry, you'd probably have a separate server for the database—what we called multi-tier deployments. And usually, someone else would take care of that one (the Database Administrator).

By 1998, Apache had something called virtual hosts, which allowed multiple sites to be hosted on the same machine, and people started cramming more and more applications onto the same server. So if you changed a file in the wrong directory... well.
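A virtual host was (and still is) just a few lines of Apache configuration; the hostnames and paths below are made up, but the shape is the real thing:

```apacheconf
# Two sites sharing one server, distinguished by the Host header.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example
</VirtualHost>

<VirtualHost *:80>
    ServerName shop.example.org
    DocumentRoot /var/www/shop
</VirtualHost>
```

One `DocumentRoot` typo, and somebody else's site is suddenly serving your files.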

At the time, you may already have been working with some real pros and were on El Reg and Slashdot, so you already had a semblance of separation of responsibilities. If that were so, you already actually had a test server and you'd do this thing on staging. And then some sysadmin would do the copying to production which was fine—until it wasn't.

2. The introduction of version control systems (VCS) (early 2000s)

  • Method: developers started using VCS to manage code versions. Deployment sometimes involved checking out the latest code directly on a production server.
  • Tools: CVS and Subversion.

By now the pros had become even more serious, and they actually wanted to be in control of what they were deploying. Engineers had also started to hate folders called production_v12_final_final_backup_charles_final.

So we started using SVN, and putting something into production usually meant `svn up`, which was slow, painful, and often broke. But it was the serious way to do things. The smart people learned to use symlinks: run `svn up` into a fresh directory first, then switch the web root over. It was like GitOps without the Ops, and with pull rather than push.
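The trick is easy to sketch. Assuming a layout where every release gets its own directory and the web root is a symlink named `current` (paths and dates below are hypothetical), the switch-over looks something like this:

```shell
# Sketch of the symlink-switch deploy; paths and dates are made up.
set -e

# Prepare the new release outside the web root; in the SVN days this step
# would be an `svn up` (or checkout) into a fresh directory.
mkdir -p releases/2024-05-08
echo "v2" > releases/2024-05-08/index.php

# Flip the `current` symlink to the new release; -n stops ln from
# descending into the old target directory.
ln -sfn releases/2024-05-08 current
```

The `ln -sfn` flip is quick but not strictly atomic; the real purists created the new symlink under a temporary name and `mv`-ed it over `current` instead.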

But you were still running on a single base system. Dependency hell had its best days in that period. And by now, staging and production were usually a long time out of sync. This is when "it works on my machine" became a thing.

3. Shared hosting and control panels (2000s)

  • Method: the rise of shared hosting made it easy for individuals and small businesses to host web applications. Control panels like cPanel allowed users to manage hosting settings and deploy applications.
  • Tools: cPanel, Plesk, DirectAdmin.

You would be amazed by how much stuff is still running like this—going to a web page and being able to select from a deployable template of a bunch of apps (mostly PHP but later much more). And this is where something interesting happened. Application deployment became so much easier. One-click. 

However, as it turns out, this makes application maintenance all the more difficult. When deployment is easy but updating is hard, it’s not great. Usually these tools not only allowed you to edit stuff through FTP, they gave you a web client so you didn't actually need to know how to connect to FTP.

4. Dedicated servers and virtual private servers (VPS) (mid-2000s)

  • Method: as web applications grew in complexity, there was a move towards dedicated servers and VPS for more control and resources. This allowed developers to customize server settings to suit their application's needs.
  • Tools: VMware, Xen, and later KVM.

Virtualization had a huge impact: suddenly we could start separating applications from the hardware. Now you could actually cook a machine image and deploy that, rather than the application.

This theme would return much later in the form of containers. More importantly, you could now have different base systems. But cooking images was a complex story, and deployments were slow. Some adopted the new tools to adopt new practices. Some continued to treat the VM-based machines as a thing they used FTP with. Or `svn up`. With the same results.

5. Automated deployment and continuous integration (CI) (late 2000s to early 2010s)

  • Method: the software development lifecycle saw the introduction of automated deployment tools that allowed code to be tested and deployed automatically to production servers.
  • Tools: Jenkins, Bamboo, Capistrano, and Fabric.

At some point, software developers, like me, became very, very tired of our exchanges with system administrators. We were mistreated. And we could code.

So we decided we were going to code the problem away. This is what we call to this day DevOps. There were mainly two versions of this:

  1. Code that ran on your machine and automated what were previously manual actions (such as FTP, SVN up, or switching over a Symlink).
  2. Code that ran on the servers and basically did the same things.

The servers were still owned by system administrators, but by 2006 EC2 had become a thing. As a developer, you could actually get access to a running machine without asking for permission. But the state of mind had not changed yet, and developers with little security training would, more often than not, quickly get pwned.

Still, if your system administrators of the time were modern enough they started to allow you to directly deploy. At least to staging. Because these were also the years of the trifecta: dev-staging-prod. We were automated enough to maintain (with suffering) all three. Which was better than staging-prod. Still, staging drifted, and dev was almost always at various stages of broken.

6. Platform-as-a-Service (PaaS) (early 2010s)

  • Method: platforms that abstracted away the infrastructure, letting developers focus on the code. Applications could be pushed to these platforms, which would handle the server setup, scaling, and deployment.
  • Tools: Heroku, Google App Engine, Microsoft Azure App Service, and Platform.sh.

This is the phase we are most keen to talk about because we are a PaaS. And while we’d love to talk about ourselves, we have to respect our elders. 

At this point, we had tasted the liberty of deploying directly, and the cumbersomeness of continuous integration. Most of us, being fully accustomed to how ugly computers are, and how all software is broken, accepted this as part of the cost of doing business. Except, in these years something else happened: Ruby and Ruby on Rails. A game-changer with its own unique sense of aesthetics. Ruby people were more attached to beauty and simplicity than anything else. 

At the time we were at the paroxysm of Moore’s law—computers were getting faster and your slow code would get faster with time, so why not focus on its readability and beauty instead? 

Tests in the Ruby world were not thought of as "the cost of doing business because the language is an unsafe mess," but as proof of thinking about the system in simple terms. In the words of the French poet Boileau, "what is well conceived is clearly expressed, and the words to say it come easily."

Another huge element was Git, which quickly came along and displaced SVN.

There is a huge difference in intentionality between `svn up` and `git push`. But the main thing was that Git made branches cheap and fast. The Heroku people, with their love for simplicity and Ruby aesthetics, basically said servers and cloud servers are now cattle. What system administrators do for us, can and should be automated away. We will simplify and give you a single runtime and a single database. Always the same; no painting outside the lines. That way, you can `git push` and your code will be running on a publicly accessible server. 
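The contract was famously small. For a hypothetical Rails app, painting within the lines amounted to little more than a one-line `Procfile` declaring the web process (the command here is illustrative):

```
web: bundle exec puma -C config/puma.rb
```

After that, `git push heroku master` was essentially the whole deployment story: the platform built the app and ran it.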

It could have been the end of the line, the final word of the story, but Heroku was bought early on by Salesforce. This isn't to say Salesforce was a bad steward; they even added a few runtimes down the line. But the service stayed mostly what it was, ignoring everything that would happen around it over the next 15 years. And the failure of Heroku to move with the times made people wary of the idea.

The PaaS could have been what liberated developers to express themselves without unnecessary ceremony and bureaucracy but painting within the lines doesn’t feel like that.

But even if you did paint within the lines, Heroku and its lesser copy-cats from Google and AWS (App Engine and Elastic Beanstalk) left quite a bit of the hassle to the developer (and now the DevOps team). They could handle production, but continuous integration, development, staging, and anything beyond the single runtime and attached managed database were left as an afterthought. And those were most of the things needed by any mature software organization.

We'll get to that when we talk about Upsun, but that is essentially the bet we made: a true Platform-as-a-Service must be a platform for the full continuous delivery cycle.

7. Containerization and microservices (mid 2010s)

  • Method: applications began to be developed as collections of loosely coupled microservices. Containers encapsulated these services, ensuring they had all the dependencies they needed to run.
  • Tools: Docker, Kubernetes, Docker Swarm, and Platform.sh.

Continuing on the PaaS theme, two of the things the early PaaS systems did not resolve were running software locally and running microservices (let alone running microservices locally, which was basically impossible at the time).

The early PaaS systems were about simplification, and as we said they centered around the specific use case of the single monolith with the single managed database. 

8. Infrastructure-as-Code (IaC) and serverless (late 2010s)

  • Method:
    • IaC: infrastructure setup and configurations started to be treated as code, allowing consistent and replicable environments.
    • Serverless: developers would write functions that got executed in response to events, without worrying about the underlying servers.
  • Tools: AWS Lambda, Google Cloud Functions, Azure Functions, Terraform, Ansible, oh and Platform.sh.
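To make the IaC idea concrete, here is an illustrative Terraform sketch (region, AMI id, and names are placeholders): the server now lives in a versioned text file rather than in someone's memory of what they clicked.

```hcl
provider "aws" {
  region = "eu-west-1"
}

# One web server, declared rather than hand-built; running `terraform apply`
# converges reality towards this description.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder image id
  instance_type = "t3.micro"

  tags = {
    Name = "web-1"
  }
}
```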

9. Progressive web apps (PWAs) and JAMstack (late 2010s to early 2020s)

  • Method: a move towards client-side rendered applications and using APIs for backend operations. Deployment mostly involved pushing static files to content delivery networks (CDN) edge nodes.
  • Tools: Netlify, Vercel, Cloudflare Pages, and you guessed it, Platform.sh.

10. Edge computing (2020s)

  • Method: shifting computation and data storage closer to the location where it's needed, to improve response times and save bandwidth.
  • Tools: AWS Wavelength, Cloudflare Workers, Akamai Edge Workers, and Upsun.

Zooming out: the move towards automation

Over the years, the shift has been towards more automation, better abstraction, and enhanced developer experience. As the web and its associated technologies evolve, deployment methods and tools will continue to change to address new challenges and opportunities.

In this story, the part that matters most for what is currently happening comes down to two trends in particular:

The rise of containerization

  • Before Kubernetes, the deployment landscape was already being transformed by Docker, which popularized container technology. Containers brought a consistent environment from development to production, ensuring that the "it works on my machine" excuse was a thing of the past. However, as containers began to gain traction, there was a growing need to manage, orchestrate, and scale them efficiently, especially for large-scale applications.
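The container promise fits in a handful of lines. An illustrative Dockerfile for a hypothetical Python app (file names are made up): the image pins the runtime and the dependencies, so development and production run the same bits.

```dockerfile
# Pin the runtime so every environment uses the same interpreter.
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Then add the application code itself.
COPY . .
CMD ["python", "app.py"]
```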

The emergence of Kubernetes (mid 2010s)

  • Kubernetes was introduced by Google in 2014, drawing from their experience with Borg, an internal cluster management system. Kubernetes addressed the challenges of container orchestration, scaling, and management.
  • Kubernetes wasn't the only player in the beginning. There were other tools and platforms, like Docker Swarm and Apache Mesos, vying for the spotlight in the container orchestration space. However, Kubernetes swiftly gained popularity due to its robust feature set, active community, and backing from industry giants, leading to its becoming the de facto standard for container orchestration.

While Kubernetes started as a tool to orchestrate containers, its influence has extended far beyond, reshaping the deployment landscape. It's enabled patterns like microservices to thrive, energized new paradigms like GitOps, and catalyzed a vast ecosystem of tools and extensions.
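A minimal Deployment manifest shows the declarative style that made all of this possible (the image name is a placeholder): you describe the desired state, and the control loop keeps three replicas of it running.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0 # placeholder image
          ports:
            - containerPort: 8080
```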

Where Upsun fits

Upsun tries to take the whole story above and distill what works into a neat, self-service box:

1. Simplification and abstraction

  • Unified toolset: a modern PaaS abstracts the complexities of the underlying infrastructure, allowing developers to focus on writing code and deploying applications. Instead of juggling multiple tools for container orchestration, scaling, logging, monitoring, etc., developers get a unified platform that integrates these features out of the box.
  • Streamlined workflows: developers can define the infrastructure and services their application needs using configuration files. This approach simplifies and codifies the deployment process.
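As a simplified sketch (names are illustrative, and the exact schema lives in the Upsun documentation), such a configuration file declares the app, its services, and its routes in one place:

```yaml
# Simplified sketch of an Upsun-style config; check the official docs
# for the exact schema and current service versions.
applications:
  myapp:
    type: "php:8.3"
    relationships:
      database: "db:mysql"

services:
  db:
    type: "mariadb:10.11"

routes:
  "https://{default}/":
    type: upstream
    upstream: "myapp:http"
```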

2. Future resilience and flexibility

  • Avoid vendor lock-in: many PaaS solutions, including Upsun, are designed to be cloud-agnostic. This means you're not tied to a specific cloud provider, giving you the flexibility to switch providers or even use multiple providers without significant codebase changes.
  • Adapt to trends: modern PaaS solutions can quickly integrate new tools or adjust to changing best practices, ensuring developers always have access to the latest methodologies and technologies without the overhead of manual integration.

3. Consistency across environments

Upsun, for example, allows developers to clone their production environment for development, testing, or staging. This ensures that the application behaves consistently across all stages, reducing the chances of unexpected behaviors in production due to environmental differences.

4. Built-in CI/CD

Many modern PaaS platforms come with integrated continuous integration/continuous deployment pipelines, ensuring that code is tested and deployed seamlessly. This integration reduces the need for third-party tools and streamlines the development-to-deployment process.

5. Scalability and performance

With automatic scaling features, PaaS solutions can handle varying traffic loads without manual intervention. They can allocate resources as needed, ensuring optimal performance and cost-efficiency.

6. Security

Modern PaaS solutions often come with built-in security features, including automated patching, secure network configurations, and compliance certifications. This built-in security means less manual configuration and fewer third-party tools, reducing potential points of failure.

7. Economic efficiency

By reducing the need for multiple tools and services, PaaS can lead to cost savings. Organizations don't need to invest in expertise for each individual tool and operational costs can be reduced thanks to the efficiencies of scale that PaaS providers achieve.

In conclusion, a history of tools has transformed the way we think about application deployment and infrastructure, and each of them comes with its own complexities. Modern PaaS solutions, like Upsun, are a response to the demand for simpler, unified, and future-proof deployment methodologies. As technology continues to evolve, the ability to stay agile and adaptive with minimal friction is a key advantage, making PaaS an attractive choice for many organizations.

Want to give it a try? Start your free trial today
