What is the Edge?
Alex Lee (@alexjoelee)
What's the Edge? Why are we still using this same term more than 30 years after its invention? Has its meaning changed? Let's dig in.
Edge, c. 1990
Edge computing describes a distributed system in which computation and data storage are moved closer to the source of the data. The term originates from the 1990s, when the first content delivery networks (CDNs) began springing up. CDNs work by serving content from datacenters located closer to users. Instead of a user in Chicago waiting for their packets to travel halfway around the world to your server in Frankfurt, they connect to an edge server in Chicago first. That edge server might store most (or all) of the data the user needs and can return it much faster.
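For a rough sense of the numbers involved, here's a back-of-the-envelope sketch. The distances and the ~200 km-per-millisecond speed of light in fiber are assumed round figures, not measurements:

```go
package main

import "fmt"

func main() {
	// Back-of-the-envelope propagation delay. Both the distances and the
	// speed of light in fiber are assumed round numbers for illustration.
	const kmPerMs = 200.0 // light in fiber covers roughly 200 km per millisecond

	routes := []struct {
		name string
		km   float64
	}{
		{"Chicago -> Frankfurt origin", 7000}, // approximate fiber-path distance
		{"Chicago -> local edge PoP", 50},
	}

	for _, r := range routes {
		rtt := 2 * r.km / kmPerMs // best-case round trip, in milliseconds
		fmt.Printf("%-30s ~%.1f ms minimum RTT\n", r.name, rtt)
	}
}
```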
Expanding the Edge: The current status
Over the years, the term and the use case of "servers closer to users" expanded considerably. Engineers began experimenting with running different workloads in different regions and measuring the effect of latency on different parts of an application. Truth is, the limits of edge computing are still being tried and tested to this day. This thread from Lee Robinson, VP of Product at Vercel, from just a month ago shows the challenges Vercel ran into when attempting to render applications using edge compute.
First, and most obvious, is that your compute needs to be close to your database. Most data is not globally replicated. So running compute in many regions, which all connect to a us-east database, made no sense.
What'd they land on?
We saw with @v0 that it was faster to do SSR + streaming with Node.js than edge rendering.
Server-side rendering your pages and sending the resulting assets down the wire still seems to be the fastest way to serve web traffic. Good news for us: that's kind of our specialty.
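To see why that first point bites, it helps to run a toy latency model. All the numbers below are illustrative assumptions: a user far from the database, an edge function nearby, and a handful of sequential queries per request. The edge's cheap first hop is swamped by repeated cross-region round trips to the database:

```go
package main

import "fmt"

func main() {
	// All latencies are illustrative assumptions (one-way, in milliseconds).
	const (
		userToEdge   = 5   // Sydney user -> nearby edge function
		edgeToDB     = 100 // Sydney edge -> us-east database
		userToOrigin = 100 // Sydney user -> us-east origin server
		originToDB   = 1   // us-east origin -> us-east database
		queries      = 5   // sequential database queries per request
	)

	// Edge rendering: the first hop is cheap, but every query crosses the ocean.
	edgeTotal := 2*userToEdge + queries*2*edgeToDB
	// Origin rendering: one expensive hop, then the queries stay local.
	originTotal := 2*userToOrigin + queries*2*originToDB

	fmt.Printf("edge compute, remote DB:  ~%d ms\n", edgeTotal)   // ~1010 ms
	fmt.Printf("origin compute, local DB: ~%d ms\n", originTotal) // ~210 ms
}
```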
Edge, according to Skip2
From the outset, we've focused on moving HTTPS traffic as quickly as possible, not on running 'functions' or 'compute' in different regions. When we refer to the edge, we mean a network of proxies close to end users. The proxies terminate TLS, handle load balancing, and accelerate requests with compression and caching. That might not sound like it's pulling much weight, but by drastically cutting the latency of the initial TLS handshake, data starts flowing sooner. When we spread tons of these servers all over the place, the benefits stack up, especially under real application load.

The more we can reduce the latency between our network and end users, the better the experience we can provide. And by putting a layer between the application and its users, we can use caching to drastically reduce the number of requests that ever reach the application. Running your application code at the edge never made much sense to me anyway, when caching and proxying work so well and are so much simpler.
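To make that concrete, here's a minimal sketch of the shape of such a proxy in Go, not Skip2's actual implementation. The origin URL and certificate paths are hypothetical, and a production edge proxy would add connection pooling, compression, and a real cache:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical origin URL and cert paths; a real deployment would
	// provision these per customer.
	origin, err := url.Parse("https://origin.example.com")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(origin)
	proxy.ModifyResponse = func(resp *http.Response) error {
		// Mark responses cacheable so repeat requests can be answered
		// without another long-haul trip to the origin (toy policy).
		resp.Header.Set("Cache-Control", "public, max-age=60")
		return nil
	}

	// TLS terminates here, close to the user; only the proxy's (often
	// already warm) connection back to the origin crosses the long haul.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy))
}
```

The win comes from where this runs: the TLS handshake completes a few milliseconds from the user, and cache hits never have to leave the edge at all.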
Hammers and Screws
Ultimately, these conversations always boil down to using the right tool for the job. You can build your entire SaaS as distributed microservices on Kubernetes, but you might not want to deal with it when it breaks. There's nothing wrong with the edge, and there's nothing wrong with monoliths, as long as the job gets done well. Some tools and applications most certainly work better as edge functions - just remember that a hammer doesn't work very well for pounding in screws.