Kris Beevers, CEO of NS1, discusses why global businesses are increasingly taking on the cost and complexity of building and operating their own edge delivery networks…
There is a fascinating trend among many of the top internet firms: they are building their own edge delivery networks to service their applications, standing up colocated or cloud infrastructure around the globe to deliver resiliency and performance to their users.
These edge delivery networks tend to be different from general-purpose content delivery networks (CDNs): they are tailored to the specific applications they have been built to serve. In some cases, that means leveraging highly specific connectivity to regional internet service providers or between application facilities; in others, it means placing specialized hardware, tuned to the needs of the application, in delivery facilities around the world. Most importantly, these networks run application-specific software and configurations, customized beyond what's possible in more general-purpose shared networks.
Why are organizations – from major social networks to large cloud storage providers to gaming powerhouses – increasingly taking on the cost and complexity of building and operating global edge networks?
You can’t beat the speed of light
For the better part of a decade, end users around the world have grown accustomed to applications whose builders pay close attention to the impact of performance on user experience. Google, Facebook and other internet giants have invested heavily in their global infrastructure, and they work hard to ensure the experience of a user in Singapore or Berlin is just as snappy as that of a user in San Francisco or New York. The internet is a global phenomenon, and the experiences of users everywhere matter.
For years we've had technologies like CDNs that enable better global delivery of certain kinds of content. CDNs continue to evolve to address newer, more dynamic use cases, but most applications still fundamentally depend on executing code against a dataset to compute a response to each request – and unless that code and dataset are close to the user making the request, the laws of physics limit the performance of the application.
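To see how binding that physical limit is, here's a quick back-of-envelope calculation. The distances are rough great-circle figures and the fiber propagation speed is an approximation; real fiber paths are longer, so actual latency is worse.

```python
# Back-of-envelope: minimum round-trip time imposed by physics alone.
# Light in optical fiber travels at roughly 2/3 the vacuum speed of light,
# about 200,000 km/s. Distances are approximate great-circle figures.

FIBER_SPEED_KM_S = 200_000  # approximate propagation speed in fiber

routes_km = {
    "San Francisco <-> Singapore": 13_600,
    "New York <-> Berlin": 6_400,
    "San Francisco <-> New York": 4_100,
}

for route, km in routes_km.items():
    # Round trip = there and back, converted to milliseconds.
    rtt_ms = 2 * km / FIBER_SPEED_KM_S * 1000
    print(f"{route}: at least {rtt_ms:.0f} ms round trip")
```

A single request from San Francisco to a server in Singapore pays well over 100 ms before any processing happens at all, which is why proximity to the user is non-negotiable for latency-sensitive applications.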
Until someone invents a wormhole-powered web server, the only way you’ll beat the speed of light for these kinds of applications is the old-fashioned way: pack your bags, and deploy your code and dataset close to your users, wherever they may be.
Building an edge delivery network is easier than ever before
Historically, deploying infrastructure around the world has been an exercise in complexity and heterogeneity. So too has been the challenge of solving the associated operational problems like infrastructure management and application administration, not to mention data replication and consistency. What’s changed?
Most importantly, public cloud and other infrastructure providers have gone global. AWS, for example, provides the same compute infrastructure in North America, South America, Asia, Oceania and Europe, and other cloud vendors have similar – even wider – coverage. For applications with more specific needs, colocation providers like Equinix, which have expanded to a global presence, and large transcontinental backbone providers with dense coverage around the world enable companies of all types to deploy normalized edge delivery facilities in many markets – without interacting with an army of local vendors with varying product capabilities.
Application frameworks and database technologies have evolved as well. Architectures for solving data replication and consistency challenges at scale are now accessible to any developer.
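One example of those now-accessible architectures is the quorum rule behind many Dynamo-style replicated data stores. The sketch below is illustrative only; the function and parameter names are invented, not any particular database's API.

```python
# Minimal sketch of quorum-based replication (Dynamo-style).
# With N replicas, a write acknowledged by W nodes and a read consulting
# R nodes are guaranteed to overlap on at least one replica (and thus
# observe the latest write) whenever R + W > N.

def quorum_overlaps(n_replicas: int, write_quorum: int, read_quorum: int) -> bool:
    """True if every read quorum intersects every write quorum."""
    return read_quorum + write_quorum > n_replicas

# Common configuration: N=3, W=2, R=2 -> reads always see the latest write.
print(quorum_overlaps(3, 2, 2))  # True
# N=3, W=1, R=1 trades that guarantee for lower latency (eventual consistency).
print(quorum_overlaps(3, 1, 1))  # False
```

Tuning R and W per operation is exactly the kind of replication/consistency trade-off that globally distributed edge deployments rely on.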
Automated tooling makes managing the complexity possible
As distributed infrastructure has become more accessible, the approaches for managing widely dispersed and highly dynamic application deployments have rapidly evolved. In the late 2000s we saw the rise of configuration management technologies and the emergence of the modern DevOps movement, driven in no small part by increasingly complex infrastructure deployments. Configuration management has since matured into a broader infrastructure automation ecosystem, full of powerful tools for managing global systems spanning tens of data centers and thousands of servers.
Infrastructure as code isn't just a buzz phrase – it's a necessity for managing global edge delivery networks. Thanks to the maturity of the tools available today, their coverage of major infrastructure service providers like AWS or NS1, and their support for key software packages like the major web servers, tying an edge delivery network together into a cohesive system manageable by a relatively small team is possible – more so than ever before.
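As a toy illustration of the infrastructure-as-code idea – one declarative description expanded into concrete per-region deployments – here is a hedged Python sketch. All region names and fields are hypothetical, and a real deployment would use a purpose-built tool such as Terraform or CloudFormation rather than hand-rolled code.

```python
# Illustrative infrastructure-as-code sketch: a single declarative spec
# rendered into one concrete deployment per region. Names are invented.

EDGE_SPEC = {
    "instance_type": "c5.large",
    "packages": ["nginx", "app-agent"],
    "regions": ["us-east-1", "eu-central-1", "ap-southeast-1"],
}

def render_deployments(spec: dict) -> list[dict]:
    """Expand a single edge spec into identical per-region deployments."""
    return [
        {
            "region": region,
            "instance_type": spec["instance_type"],
            "packages": list(spec["packages"]),
        }
        for region in spec["regions"]
    ]

for deployment in render_deployments(EDGE_SPEC):
    print(deployment["region"], deployment["instance_type"])
```

The point is the shape of the workflow: the desired state lives in one versioned artifact, and every edge location is stamped out from it identically, which is what lets a small team reason about a global footprint.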
Traffic management technology is the elixir that brings edge delivery networks to life
There remain plenty of challenges in building large global delivery networks for any application. The internet is complex, and managing connectivity and systems that span the globe, even with increasingly sophisticated automation technology and easy to use cloud services, is no small task.
One of the toughest problems for any edge delivery network is traffic management. The fundamental question of a distributed delivery network is “which data center should I send this user to right now?” This question is why we started NS1 – after building a large global CDN in the mid-2000s and thinking hard about global traffic management for that CDN for half a decade, we saw the early inklings of today’s trend toward distributed application delivery.
The trick is to solve what is arguably the most complex problem in achieving high performance global delivery from a custom edge network: the traffic management that selects which service endpoint to direct a user to for the best performance, given what’s happening in real time in the application infrastructure and on the internet.
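A minimal sketch of that selection step, assuming simple per-PoP (point of presence) health and latency telemetry. All of the data, names and values here are invented for illustration; real traffic management systems weigh far more signals (geography, load, cost, routing conditions).

```python
# Hedged sketch of the core traffic-management decision: given real-time
# telemetry for each point of presence, pick the endpoint that should
# serve this user. Telemetry values below are invented for illustration.

def pick_endpoint(telemetry: dict[str, dict]) -> str:
    """Return the healthy PoP with the lowest measured latency to the user."""
    healthy = {pop: t for pop, t in telemetry.items() if t["up"]}
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda pop: healthy[pop]["latency_ms"])

telemetry = {
    "sin1": {"up": True, "latency_ms": 38},
    "fra1": {"up": True, "latency_ms": 120},
    "sjc1": {"up": False, "latency_ms": 5},  # down: excluded despite low latency
}
print(pick_endpoint(telemetry))  # sin1
```

Even this toy version shows why the problem is hard: the "best" answer changes from moment to moment as health and latency shift, so the decision has to be driven by a continuous feed of real-time data rather than a static map.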
The trend towards application-specific edge delivery networks will continue, especially as the tooling for automating deployment and management of global distributed infrastructure matures, and as perimeter network services that enable easy deployment of global connectivity evolve.