Jonas Caino looks at how the wheat and chessboard problem intersects with Moore’s law and offers an interesting theory relevant to the data centre industry 

We’ve all (well, certainly the geeks among us) come across the famous legend about the origin of chess that goes like this: when the inventor of chess demonstrated the game to the emperor of India, the emperor was so impressed that he asked the man to name his reward.

The man responded: ‘Oh emperor, my wishes are simple. I only ask for this: give me one grain of rice for the first square of the chessboard, two grains for the next square, four for the next, eight for the next and so on for all 64 squares, with each square having double the number of grains as the square before.’

The emperor agreed, amazed that the man had asked for such a small reward; or so he thought. After a week, his treasurer came back and informed him that the reward would add up to an inconceivable sum, far greater than all the rice that his empire could produce in the foreseeable future.

This is known as the wheat and chessboard problem (the legend is told with wheat as often as rice), and it demonstrates the compelling power of geometric progression. It is this progression that has governed the rise of computing power and data in our society. From the early mainframes to the desktop, from networks to smartphones to the Internet of Things, Moore’s law has continued to expand what’s possible to feed our voracious appetite for more, better and quicker services in every area of our lives.

The problem comes in the second half of the chessboard, where runaway exponential growth collides with the finite resources available to manage it and the law of diminishing returns takes hold. In the case of computing power and data, the limited resource is us: human beings and our ability to analyse all this data and put it to good use. This is where machine learning steps in.
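To see why the second half of the board is so different, a few lines of Python make the arithmetic concrete. The figures follow directly from the doubling rule in the legend: square n holds 2^(n-1) grains.

```python
# Illustrative arithmetic for the wheat and chessboard problem.
# Square n (1-indexed) holds 2**(n-1) grains; the first n squares total 2**n - 1.

SQUARES = 64
HALF = SQUARES // 2

first_half = 2**HALF - 1              # squares 1-32
total = 2**SQUARES - 1                # all 64 squares
second_half = total - first_half      # squares 33-64

print(f"First half of the board:  {first_half:,} grains")   # ~4.3 billion
print(f"Second half of the board: {second_half:,} grains")  # ~18.4 quintillion
print(f"The second half holds {second_half // first_half:,}x the first")
```

The first 32 squares amount to roughly 4.3 billion grains; the second 32 hold more than four billion times that, which is why growth that felt manageable on the first half of the board becomes unmanageable on the second.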

Data centre infrastructure powered by machine learning

If you’ll permit me to use the data centre physical infrastructure as a microcosm of this issue, this critical space is a complex array of interdependent systems – electrical, mechanical, environmental and control. Data centre infrastructure management (DCIM) goes some way towards using data to rationalise and optimise these systems for efficiency and availability, but only to a point.

Here is an example in the form of a question: what cooling and efficiency gains would one enjoy if one could correlate server performance patterns with weather conditions and adjust setpoints in real time? The volume of data and analysis required to do this, across all the interdependent datasets involved, sits firmly in the second half of the chessboard. A simplified sketch of the idea follows.
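As a thought experiment only, here is a minimal Python sketch of that weather-to-setpoint idea. The telemetry values, the reference temperature and the adjustment rule are all hypothetical, and it uses statistics.linear_regression from the standard library (Python 3.10+); a production system would draw on live DCIM feeds, far richer models and hard safety interlocks.

```python
from statistics import linear_regression

# Hypothetical historical samples: (outdoor temperature in C, cooling power in kW).
# In a real deployment these would come from DCIM telemetry, not literals.
outdoor_temp = [8.0, 12.0, 16.0, 20.0, 24.0, 28.0]
cooling_kw = [310, 335, 362, 398, 441, 489]

# Fit a simple linear model: cooling demand as a function of outdoor temperature.
slope, intercept = linear_regression(outdoor_temp, cooling_kw)

def recommended_setpoint(current_outdoor_c: float,
                         base_setpoint_c: float = 22.0,
                         min_c: float = 18.0,
                         max_c: float = 27.0) -> float:
    """Nudge the supply-air setpoint up when predicted cooling demand is low,
    staying inside fixed safety bounds. Purely illustrative logic."""
    predicted_kw = slope * current_outdoor_c + intercept
    baseline_kw = slope * 20.0 + intercept  # demand on a 20C reference day
    # Hypothetical rule: ~0.5C of setpoint headroom per 10% of demand
    # below the baseline (and the reverse when demand runs above it).
    adjustment = 0.5 * (baseline_kw - predicted_kw) / (0.1 * baseline_kw)
    return max(min_c, min(max_c, base_setpoint_c + adjustment))

print(f"Cool day (10C): setpoint {recommended_setpoint(10.0):.1f}C")
print(f"Hot day (30C):  setpoint {recommended_setpoint(30.0):.1f}C")
```

Even this toy version hints at the appeal: the model, not a human operator, decides square by square (or rather, minute by minute) how hard the cooling plant needs to work.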

Machine learning and neural network algorithms are not just for the hyperscale Googles, Amazons and Facebooks of this world; every data centre has system datasets hidden away, just waiting to be tapped and put to use. Machines can help data centre managers focus on the real task at hand, providing dynamic services to internal and external customers, while the machines keep the data centre infrastructure running.

‘Is this real or just science fiction?’ one might ask. The question has been asked again and again throughout the evolution of industrial technology, and yet here we are. The drivers have always been, and will continue to be, cost and profit. In the second half of the chessboard, the gains – in efficiency, availability and pounds sterling – will reflect the mathematical structure that governs that space.


This post originated at Data Centre Management magazine, from the same publisher as The Stack.