Robert Faulkner of Server Technology argues why you can’t improve what you don’t measure in the data centre…
There are many data center infrastructure management (DCIM) software packages available today, but very few have enjoyed significant success, largely because monitoring every detail of a data center is such a daunting undertaking.
Most organizations are demanding that data center staff do more with less, often increasing the size and scope of facilities without increasing headcount. This creates a catch-22: data center managers must account for the time it takes to install, implement, integrate, and continually manage and monitor a massive system, while still fighting the seemingly never-ending battles that come with overall business growth.
The answer can be found in very specific, purpose-built monitoring tools. Businesses run on servers, storage, and network gear connected to rack PDUs, and we can therefore get the most critical information by simply monitoring power and environmental data within the rack. This echoes the 80-20 rule: you can get 80% of the information needed for 20% of the work and 20% of the cost.
The long-stated adage that you can't improve what you don't measure may seem trite. In fact, with the explosive growth in data centers, you can't even maintain the status quo if you aren't monitoring power and environmental data within the rack. Even if you have the most efficient data center, with more than enough redundancy built in to maintain the highest levels of uptime, changes in business needs force changes in the equipment and applications supported by rack PDUs.
The ever-more competitive nature of business and the ever-more impatient consumer leave minimal room for error when it comes to plans to consolidate or expand. This is why monitoring in the most straightforward and effective way possible is critical in today's data center environment.
Depending upon who you ask, anywhere between 40% and 90% of data center downtime is attributable to human error. The priority from a power-delivery standpoint, then, should be to keep people out of the data center as much as possible. That means remote monitoring and management, even if it is from the next room, must be primary in the design of data center infrastructure.
It also means that you can’t make any assumptions about what occurs in the data center power chain. You need real data accumulated over time and presented in straightforward reports and data trends.
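As a minimal sketch of what those reports and trends might look like, the following assumes a hypothetical list of periodic per-rack power readings (in kW), oldest first, and summarizes them rather than relying on assumptions about the power chain:

```python
from statistics import mean

def trend_report(readings_kw):
    """Summarize a series of rack power readings (kW) sampled over time.

    `readings_kw` is a hypothetical list of periodic per-rack power
    measurements, oldest first; a real tool would pull these from the
    rack PDU itself.
    """
    if len(readings_kw) < 2:
        raise ValueError("need at least two readings to see a trend")
    # Compare the average of the older half against the newer half
    # to get a crude direction-of-travel indicator.
    half = len(readings_kw) // 2
    older, newer = mean(readings_kw[:half]), mean(readings_kw[half:])
    return {
        "min_kw": min(readings_kw),
        "max_kw": max(readings_kw),
        "avg_kw": round(mean(readings_kw), 2),
        "trend": "rising" if newer > older else "flat or falling",
    }

# Example with illustrative readings:
print(trend_report([3.1, 3.3, 3.2, 3.6, 3.8, 4.0]))
```

Even a report this simple replaces guesswork with measured data; the point is that the numbers come from the rack, not from assumptions.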
Evaluating capacity needs
Monitoring and analytic capabilities are also essential for effective capacity planning. This process can present challenges for data center managers who are unsure of how much, and how fast, their organization is going to grow. Quite often, it is much faster than expected.
I frequently hear from data center managers that their 10-year plan was actually used up in five. First, you need to observe what the growth looks like in real time; then take that data and extrapolate what might happen if the growth is linear or exponential. Capacity planning used to be a one-time project, but it has now become a regular evaluation task, and it must evolve into a continual process that adapts to the actual measured growth. It could be called 'expansion reaction'.
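The linear-versus-exponential extrapolation can be sketched in a few lines. All the figures below are illustrative; in practice the growth rates would be fitted from the measured trend data described above:

```python
import math

def years_to_capacity(current, capacity, annual_growth, model="linear"):
    """Estimate years until `current` load reaches `capacity`.

    `annual_growth` is absolute units per year for the linear model,
    or a fractional rate (e.g. 0.20 for 20%) for the exponential one.
    Inputs are illustrative, not real planning figures.
    """
    if current >= capacity:
        return 0.0
    if model == "linear":
        return (capacity - current) / annual_growth
    if model == "exponential":
        # Solve current * (1 + r)^t = capacity for t.
        return math.log(capacity / current) / math.log(1 + annual_growth)
    raise ValueError("model must be 'linear' or 'exponential'")

# A plan that assumes 60 kW/year of linear growth from 400 kW
# toward a 1,000 kW limit lasts 10 years...
print(years_to_capacity(400, 1000, 60))                     # 10.0
# ...but if growth turns out to be 20% compounding per year,
# the same headroom is gone in roughly five.
print(years_to_capacity(400, 1000, 0.20, "exponential"))
```

This is exactly the "10-year plan used up in five" pattern: the difference between the two models only shows up if you keep measuring.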
This is likely to be how the edge data center movement grows. You will expect to grow, but you won't know when, where, or by how much until it is an urgent need. Real-time analytics will be needed to understand where the bottlenecks lie, whether in compute, storage, bandwidth, or power. This will lead to modeling requirements for what-if scenarios of new deployments.
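The bottleneck question above reduces to asking which resource is closest to its limit. A minimal sketch, with hypothetical resource names and an assumed 80% warning threshold:

```python
def find_bottleneck(utilization, warn_at=0.8):
    """Return the resource closest to its capacity limit.

    `utilization` maps a resource name to its current fraction of
    capacity (0.0-1.0). The names and the `warn_at` threshold are
    illustrative assumptions, not fixed industry values.
    """
    name, level = max(utilization.items(), key=lambda kv: kv[1])
    return {"bottleneck": name, "utilization": level,
            "critical": level >= warn_at}

# Example snapshot for one edge site:
print(find_bottleneck({"compute": 0.62, "storage": 0.71,
                       "bandwidth": 0.55, "power": 0.86}))
```

Feeding snapshots like this into the growth extrapolation, site by site, is the kind of what-if modeling the scenario planning would require.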
Longer term, this could become a straightforward AI task: determining when and where to set up new data centers, and even what specific equipment should be deployed, along with the power it will need.