Venessa Moffat, a growth hacking specialist, with 20 years’ experience in the data centre and tech spheres, discusses why the DCIM sector is flawed and what the industry can do to help reverse the trend…
We have all heard of instances of DCIM project failures – examples where the value has not been delivered quite as expected. From a more tangible perspective, in 2010 Gartner predicted that DCIM penetration would reach 60% by 2014. Yet in 2015, The Uptime Institute's data centre survey showed that only 27% of respondents had purchased a commercial DCIM solution. The market is not maturing as we expected it to, but our requirements of it are.
Opinion among experts appears to be polarising: should we kill off DCIM in favour of something new? Or, do we knuckle down and re-invent it so that it is fit for purpose and supports the software-defined data centre (SDDC)?
According to IDC, only 20% of SDDC implementations will run on schedule this year; the rest will struggle with capacity and data centre infrastructure alignment issues. At the moment, we don't know what the answer is, and not even the larger players have a fully successful strategy in place, although they mainly advocate sticking with the current DCIM plan.
Regular M&A in the DCIM market suggests some instability, and we are also seeing new disruptive players coming along. 2017 really is an interesting and important year for DCIM.
Common trends in failed DCIM projects include over-tendering, overprovisioning and, most importantly, a failure to align tightly with real business needs. Software alone is not the answer: the DCIM implementation process needs to be better understood, and it should be owned by the business, not the vendor. We have underestimated the size of the DCIM beast, but it might still be worth taming.
DCIM has to be in tune with real business requirements, and only those requirements. In a true software-defined data centre, IT and Facilities Management systems must work together so that shifts in power provision can be programmed in response to IT workload. This is a critical gap that we need to close.
The collective learning we already have shows that we need to better understand the process of implementing DCIM. Future internal business cases will take a step-by-step approach, realising value at each stage. For example, establishing a foundation of accurate asset and power management before implementing any DCIM software solution is often overlooked. So is a proper scoping process at the outset to assess whether you need full DCIM at all – which again comes back to knowing your exact business requirements.
We need to be able to map out our as-is data centre architecture, and our to-be architecture, in order to implement a DCIM programme effectively. We have relied on software solutions to achieve our DCIM goals, but we need to take the bull by the horns and recognise that the challenge is much bigger than software alone. Future solutions will map out hardware, software, integration and people skills for a more rounded approach. This will then be the foundation for real agility at scale in data centres – the like of which only a few have achieved so far.
Offering your expertise
The DCIM Deliberations Working Group has been set up to look at these challenges. We can see the turbulence, but we don't have all the answers, which is why we are reaching out to anyone who wants to get involved in a global survey, with the kind support of The Stack and the Data Centre Alliance (DCA). We need help to answer these questions collectively, to ensure the recommendations are fit for use across the industry.
For more information on the DCIM Deliberations Working Group, you can get in touch with any member of the team – Steve Beber, Venessa Moffat, Alfonso Arias, Daniel Tautges, Monika Grass or Ken Peters. You can also join the LinkedIn Group.