
Life on the edge: understanding the Upsun edge layer

microservices, cloud, containers, infrastructure
11 December 2023
Mohammed Ajmal Siddiqui

First things first, what is the edge layer and why do we need it? The edge layer is what allows a request sent from your browser to reach the right site hosted on Upsun, and then carries the response back to you. It also connects you to the right server when you type upsun ssh in your terminal, making sure requests land exactly where they need to be.

But what exactly does life on the edge with Upsun look like? How do requests work? And what are the key things you should know about how the edge layer works?

Foundational concepts of Upsun

Before we dive in, it’s useful to understand a few key pieces of the puzzle that we call Upsun. Specifically, we should be clear about the hierarchy of components that Upsun manages.

The broadest component here is the region. A region refers to a cloud region from one of our underlying cloud providers; specifically, a Virtual Private Cloud (VPC) that represents a single Upsun region. A region manages hosts, which are virtual machines (VMs) provided by the region’s cloud provider. We have different kinds of hosts, such as gateways, grid hosts, and coordinators. A single host can contain many clusters, and a cluster can span multiple hosts. A cluster is a logical grouping of a few related services.

A service is an abstraction that represents, well, a service we manage. This can be the database (DB), a cache, or the customer’s app itself. Internally, a service runs in containers. A container is like a lightweight mini-VM, and we manage a ton of them: a single host can run a lot of containers! Each service maps to a single container, or to multiple containers if it runs in High Availability mode.
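To make that hierarchy a little more concrete, here is a minimal sketch of it as Go types. These names and fields are purely illustrative and are not Upsun’s actual data model; the IP address is made up.

```go
package main

import "fmt"

// Purely illustrative types for the hierarchy described above.
type Container struct{ IP string }

type Service struct {
	Kind       string      // e.g. "app", "database", "cache"
	Containers []Container // one container, or several in High Availability mode
}

type Cluster struct {
	Services []Service // a logical grouping of a few related services
}

type Host struct {
	Kind     string    // "gateway", "grid host", or "coordinator"
	Clusters []Cluster // a host carries pieces of many clusters; clusters can span hosts
}

type Region struct {
	Provider string // the underlying cloud provider's region / VPC
	Hosts    []Host
}

func main() {
	region := Region{
		Provider: "example-provider",
		Hosts: []Host{{
			Kind: "grid host",
			Clusters: []Cluster{{
				Services: []Service{{
					Kind:       "app",
					Containers: []Container{{IP: "10.0.3.17"}},
				}},
			}},
		}},
	}
	fmt.Printf("%+v\n", region)
}
```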

The problem statement

Now we know the bare minimum to formulate our problem statement in less vague terms: how do we accept a request to a site hosted on Upsun, forward it to the exact container that has the server for this site, and convey the response of the server back to the user?

The first step is for the request to reach the right Upsun region. This is achieved by the customer adding a DNS record (of type CNAME/ALIAS, if you’re interested) that points their custom domain at the Upsun region, so that the domain ultimately resolves to the region’s public IP address. As a result, the browser of a user visiting the site sends the request to the Upsun region that hosts it.
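For the curious, here is a tiny Go sketch of what that resolution looks like from the client side. The domain www.example.com merely stands in for a customer’s custom domain and is not an Upsun-hosted site; for a real site on Upsun, the addresses printed at the end would be the region’s public edge IPs.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical custom domain; imagine its CNAME/ALIAS record points at
	// an Upsun region's edge hostname.
	domain := "www.example.com"

	// Follow the CNAME chain, if there is one.
	cname, err := net.LookupCNAME(domain)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}

	// Resolve to the public IP(s) the browser will ultimately connect to.
	addrs, err := net.LookupHost(domain)
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}

	fmt.Println("canonical name:", cname)
	fmt.Println("resolves to:", addrs)
}
```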

From this point onwards, our Edge Layer takes over—cue dramatic music.

Peeling back the Edge Layers

Most of our hosts (i.e. the virtual machines that make up an Upsun region) don’t even have a public IP. This includes the hosts that house the servers for the sites hosted on Upsun. The hosts that do have a public IP are the hosts at the edge. Naturally, any request directed at the public IP of a region will hit the edge.

Let’s recall that an edge host is just a VM provided by a cloud provider. Let’s also recall that a host can contain clusters. You might also be losing patience here as we peel back layer upon layer to find nothing of value. But stay with us, we promise you the next edge layer will be interesting!

Edge Proxy

The official description is a bit of a mouthful: our edge proxy, known internally as nuntius, is a dynamic, transparent, multi-protocol proxy.

Here’s the breakdown:

Dynamic: Nuntius can fetch changes and update its configuration without any manual intervention.

Transparent: The world doesn’t need to know what nuntius is or how it works. You just send a request to nuntius and get the appropriate response, as if the app server itself were responding to you directly.

Multi-protocol: Nuntius is capable of dealing with HTTP (v1.1 and 2), HTTPS, and even SSH!

Proxy: This is the important bit: nuntius doesn’t serve any requests itself. Instead, it forwards them to the right place, waits for that place to respond, and relays the response back to you. A minimal sketch of this idea follows the list.
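To make the “transparent, multi-protocol proxy” idea concrete, here is a minimal TCP forwarder in Go. Because it copies raw bytes without ever interpreting them, the same loop could carry HTTP, HTTPS, or SSH traffic alike. The listening port and upstream address are made up, and the real nuntius is of course far more sophisticated than this sketch.

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Accept connections from the outside world (port chosen arbitrarily).
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(client net.Conn) {
			defer client.Close()

			// Open a connection to the upstream (a made-up address here).
			upstream, err := net.Dial("tcp", "10.0.3.17:80")
			if err != nil {
				return
			}
			defer upstream.Close()

			// Shovel bytes in both directions without looking at them.
			go io.Copy(upstream, client) // client -> upstream
			io.Copy(client, upstream)    // upstream -> client
		}(client)
	}
}
```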

In other words, the edge proxy is the first component that actually does something with the request, which means it has quite a few important responsibilities:

Act as a proxy for customer projects

Act as a Web Application Firewall

Act as a proxy for the Upsun API

Edge Proxy: finding the way

Our focus for this article will be the first role: a proxy that routes a request to the right application.

Each container in the region has an IP that is unique within the (overlay) network that all containers share. The aim is for the proxy to forward the request to the container that houses the project’s app server. For this to work, the proxy maintains a mapping of URLs to container IPs. That isn’t information the proxy is privy to on its own, so it must ask something that is: the container orchestrator and its distributed datastore. The orchestrator exposes information about routes to projects, ACLs, and so on via an RPC interface. The proxy leverages this to learn a) whether the user making the request is allowed to do so, and b) which container can service the request.
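As a toy illustration of that mapping, here is a sketch in Go. The hostnames, container IPs, and the idea of a static table are all simplifications; the real proxy fetches and refreshes this data from the orchestrator over RPC, and also checks ACLs, which the sketch leaves out.

```go
package main

import "fmt"

// routeTable is a stand-in for the mapping the edge proxy keeps in memory.
// In reality it is fetched from the container orchestrator over RPC and kept
// up to date as projects change; these hostnames and IPs are made up.
var routeTable = map[string]string{
	"www.example.com": "10.0.3.17",
	"api.example.com": "10.0.4.22",
}

// upstreamFor answers the question the proxy cares about here:
// which container IP should this request be forwarded to?
func upstreamFor(host string) (string, error) {
	ip, ok := routeTable[host]
	if !ok {
		return "", fmt.Errorf("no upstream known for host %q", host)
	}
	return ip, nil
}

func main() {
	ip, err := upstreamFor("www.example.com")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("forward the request to container", ip)
}
```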

The container whose IP the proxy just figured out lives on a different VM. And to complicate things even further, a region may have any number of container hosts.

In other words, when the proxy forwards a request to the container IP, something needs to figure out which VM that container is on. That something is the ARP daemon. The name is slightly misleading, because the ARP protocol proper converts an IP address into a physical (i.e. MAC) address. In our case, the ARP daemon figures out which grid host a container lives on based on its container IP, i.e. it converts a container IP into a host IP.

In fact, container IPs don’t even belong to the same network as the hosts; they belong to an overlay network. For now, it is enough to know that the ARP daemon takes care of making sure a request addressed to a certain container IP actually gets there.
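The second hop of the chase looks much like the first: given a container (overlay) IP, the ARP daemon answers which grid host that container currently lives on. A toy version of that lookup, with made-up addresses, might look like this:

```go
package main

import "fmt"

// containerToHost mimics the answer the ARP daemon gives: which grid host
// currently runs the container with this overlay IP? All addresses are made up.
var containerToHost = map[string]string{
	"10.0.3.17": "192.0.2.11", // grid host 1
	"10.0.4.22": "192.0.2.12", // grid host 2
}

func hostFor(containerIP string) (string, bool) {
	hostIP, ok := containerToHost[containerIP]
	return hostIP, ok
}

func main() {
	if hostIP, ok := hostFor("10.0.3.17"); ok {
		fmt.Println("container 10.0.3.17 lives on grid host", hostIP)
	}
}
```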

The gates of the environment: the router

Remember when I told you that the proxy figures out the container IP of the app container so that it can forward requests to it? Well, I lied. The requests are actually routed to a special service called the router. The router service contains a single router container, which is the gatekeeper for the environment—yes, we have one router per environment.

The router container runs a caching reverse proxy. A reverse proxy is something that sits in front of your server, forwarding requests to it and responses from it, while also acting as a cache or a load balancer.
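Go’s standard library happens to ship a small reverse proxy, which makes for a convenient sketch of the idea, minus the caching and load balancing the real router adds. The upstream address is, again, made up; this is not the router’s actual implementation.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical app container listening inside the environment.
	upstream, err := url.Parse("http://10.0.3.17:8080")
	if err != nil {
		log.Fatal(err)
	}

	// The reverse proxy sits in front of the app server: it forwards each
	// incoming request upstream and streams the response back to the client.
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	log.Fatal(http.ListenAndServe(":80", proxy))
}
```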

This is the service that you can configure via the routes.yaml file in your Upsun project. That’s also why there is one router per environment: if your routes.yaml differs between two environments, those environments will have differently configured routers.

The router finally sends the request to the application container, i.e. the server that actually serves the site.

At this point, the request has finally reached the server!

All the connections between the various nodes on a request’s path are TCP connections. The good thing about TCP is that the connection used by a request can be kept open, so the response from the server can travel the same path back to the edge host and out to the world. That is how the response reaches the user.

Container-to-world networking and other things

There is an important part of the edge layer that this post doesn’t cover—how can you have the app container interact with the external internet?

This is a lot more challenging than you might expect. We have separate routes for the egress traffic via the edge hosts (sometimes dedicated to egress traffic alone) that allow us to have fine-grained control over the outgoing traffic from our regions. 

We’ll see you over there! 

Acknowledgments

Big thanks to our fellow expert contributors to this post: Ricardo Kirkner, Pilar Gomez, Colin Strickland, Krishna Kashyap, and Eder Leão Moosmann.
