Microsoft has revealed blueprints for a joint project with Nvidia intended to bridge the conceptual and practical gap between GPU-based computing and the cloud, opening up new possibilities for machine learning programs to run in more generic and commercialised workloads than is currently common.

The HGX-1 hyperscale GPU accelerator, a prototype of which was on display at Microsoft's stand at the Open Compute Summit in Santa Clara, is intended to form part of Microsoft's Project Olympus, which recently made headlines by committing to ARM CPUs in a radical departure from Intel's x86.

The two companies have compared the advent of the HGX-1 to the revolution that ATX brought to the PC motherboard market, claiming that the ability to leverage GPUs in such a transparent way will greatly advance new research projects in fields such as autonomous vehicle systems, voice recognition and many other sectors, which currently tend to address GPU resources via cloud proxies (such as Amazon Mechanical Turk) or expensive dedicated hardware.

Nvidia's founder and chief executive officer Jen-Hsun Huang commented: "AI is a new computing model that requires a new architecture. The HGX-1 hyperscale GPU accelerator will do for AI cloud computing what the ATX standard did to make PCs pervasive today. It will enable cloud-service providers to easily adopt NVIDIA GPUs to meet surging demand for AI computing."

The framework designs are being released on an open-source basis, as might be expected from an announcement at an OCP venue, and the accelerator itself employs eight Tesla P100 GPUs per chassis. According to Kushagra Vaid, general manager of Azure Hardware Infrastructure, the design has been conceived for easy adoption into current data centre standards.

Though Azure still trails Amazon Web Services in market reach within the high-performance cloud computing space, recent research concluded that Azure's infrastructure currently holds a notable edge in HPC performance.