James Bailey of Hyperscale IT discusses what open source and the Open Compute Project can deliver in the data centre space…
With the advent of ‘hyperscalers’ such as Google, Facebook and Microsoft Azure has come a new breed of IT infrastructure: one that has to be simple and repeatable, on a huge scale, sometimes to the tune of millions of servers.
As with any top tier of performance, the technology trickles down. Formula 1, for example, drives innovation that we ultimately see in road cars. The same thing occurs in computing. The difference here is that it is not only the technology trickling down but also the mindset.
Today’s designs are built very much around the disaggregation of component pieces. Each component of a system does one job well but, importantly, can easily be interchanged with another option. This is a departure from tightly coupled software and hardware.
Network switching clearly demonstrates this idea. For years the top manufacturers have coupled software and hardware together to tie you into an ecosystem. It makes good business sense for a vendor to lock you in, but it is poor for innovation. Now, open white box hardware has blown the lid off this with Software Defined Networking (SDN).
SDN is not a new idea; it has been around since 1995, with AT&T’s GeoPlex among the first projects. SDN architecture decouples network control from the forwarding functions, which enables the network control to be directly programmable. The underlying hardware is abstracted, meaning that you can choose your switch and its operating system separately.
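The control/forwarding split can be sketched in a few lines of toy code. This is a minimal illustration only, not any real controller API: the class and rule names are all hypothetical, and a real SDN stack (OpenFlow-style match fields, priorities, counters) is far richer.

```python
# Toy sketch of the SDN split: a centralised controller decides forwarding
# behaviour; the switch only matches packets against the flow table that
# the controller pushes down. All names here are illustrative.

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = []  # (match_fn, action) pairs installed by a controller

    def forward(self, packet):
        # The switch makes no decisions of its own: it just matches rules.
        for match, action in self.flow_table:
            if match(packet):
                return action
        return "drop"  # default action when no rule matches

class Controller:
    """Centralised control plane: programs every switch it manages."""
    def install_rule(self, switch, match, action):
        switch.flow_table.append((match, action))

# The same white-box switch behaves however the software dictates.
sw = Switch("tor-1")
ctrl = Controller()
ctrl.install_rule(sw, lambda p: p["dst"].startswith("10.0."), "port-1")
print(sw.forward({"dst": "10.0.0.5"}))    # port-1
print(sw.forward({"dst": "192.168.1.9"})) # drop
```

Swapping the switch vendor here would change nothing above the flow table, which is the point of the abstraction.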
Why the sudden buzz?
SDN is just one of many ‘open’ pieces of technology trickling down from the hyperscalers. Back in 2011, Facebook set up the Open Compute Project (OCP) alongside Intel, Rackspace, Goldman Sachs and others to promote a new way of thinking and to publish its server designs. The non-profit organisation that was formed open-sourced its hardware. Now the list of companies involved is extensive and includes Google, which joined the ranks earlier this year and is already submitting designs.
The Open Compute Project marks a new dawn for hardware: what open source did for software can now be seen in hardware. At this year’s OCP summit, Microsoft had stands next to Facebook and others. These major players are coming together to innovate efficient data centres at a pace faster than ever seen before.
This mindset trickles down, and the decoupling we see in the network arena can be seen everywhere. Just look at how many software-defined storage companies have popped up recently. It’s the same principle: run vanity-free white box hardware and let the software do the clever stuff. Even in more modestly sized systems, if you blow a disk or suffer some other failure, the software can now take care of redundancy.
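The redundancy idea can be shown with a deliberately simple sketch, assuming plain two-way replication: each object is written to two disks, and when a disk blows, the software re-replicates from the survivors. Real software-defined storage (replication factors, erasure coding, placement policies) is far more involved; every name below is illustrative.

```python
# Toy sketch of software-handled redundancy over vanity-free disks.
# Two copies of each object live on distinct disks, so a single disk
# failure loses no data and the software restores the replica count.

class Cluster:
    def __init__(self, disk_names, copies=2):
        self.disks = {name: {} for name in disk_names}
        self.copies = copies

    def put(self, key, value):
        # Write `copies` replicas to distinct disks.
        for name in list(self.disks)[: self.copies]:
            self.disks[name][key] = value

    def fail_disk(self, name):
        # Simulate a blown disk, then re-replicate from the survivors.
        lost = self.disks.pop(name)
        for key, value in lost.items():
            holders = [d for d in self.disks.values() if key in d]
            if holders and len(holders) < self.copies:
                for d in self.disks.values():
                    if key not in d:
                        d[key] = value
                        break

    def get(self, key):
        for d in self.disks.values():
            if key in d:
                return d[key]
        raise KeyError(key)

c = Cluster(["disk-a", "disk-b", "disk-c"])
c.put("block-1", b"data")
c.fail_disk("disk-a")
print(c.get("block-1"))  # still readable: b'data'
```

The hardware stays dumb and interchangeable; the cleverness lives entirely in the software layer.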
The OCP is now in its fifth year and the hardware designs have come a long way. Facebook has just announced Backpack, a follow-up to the 6-pack. Backpack is a 100G core switch designed to connect top-of-rack switches, known as Wedge 100, into a network fabric. Again, this hardware is open source, driving the cost right down.
All of this innovation sounds great, and the hardware has been running and proven for some time, but this does not mean the road is easy for sub-hyperscale shops. In Europe, telcos and major banks have seen the light and are rolling out the hardware, but what about the enterprise customers? VARs like Hyperscale IT are focused on driving adoption and making it accessible to all by creating a channel for OCP, from design to data centre fit-out and hardware integration.
A new project underway within the OCP is looking to certify data centres capable of housing hardware to its full potential. Aegis Data, based in the UK, is one of the facilities built around these modern considerations.
There are a number of factors that need to be considered when adopting OCP, but once they are understood the savings are real. CERN published an electricity saving of up to 30% on installation. OCP racks also occupy the same footprint as a standard 19″ EIA rack but offer 15% more usable space inside. Imagine having a 15% larger data centre without having to mix any cement.
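The space claim translates directly into capacity. A quick back-of-envelope calculation, using the article's 15% figure and a purely illustrative baseline density of 40 servers per rack:

```python
# Back-of-envelope arithmetic for the 15%-more-usable-space claim.
# The 15% figure is the article's; the rack count and per-rack server
# density below are hypothetical, chosen only to illustrate the scale.

racks = 100
servers_per_19in_rack = 40            # illustrative baseline density
extra_space = 0.15                    # OCP rack: 15% more usable space

ocp_per_rack = round(servers_per_19in_rack * (1 + extra_space))  # 46
total_extra = (ocp_per_rack - servers_per_19in_rack) * racks
print(total_extra)  # 600 extra servers in the same floor area
```

On these assumed numbers, a 100-rack hall gains roughly 600 servers' worth of capacity with no new building work.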
There are currently around eight original design manufacturers (ODMs) building OCP hardware, many of them the same factories that build today's branded hardware. So why pay the 30% premium for the name?