The quest for a rock solid data centre
With businesses and consumers demanding ever more data, it's the data centre that's feeling the squeeze. What does it take to create a rock solid data centre?
According to Camille Mendler from research services company Informa, we are in the middle of a boom in data centre construction. She says, “Our conservative estimates are that more than 800,000 square metres of data centre space is being constructed right now - or has been announced and is being constructed - by telecom operators.” The Inner Mongolian city of Hohhot is seeing an investment of $8bn from cloud service providers, with most of that going to data centre construction.
When there’s so much activity, how can we be assured that what we are building will reliably fulfil the needs of businesses and consumers? The answer starts with identifying the key problems, then building data centres that are designed from the outset to avoid them.
Informa’s research suggests that the main reasons data centres fail are:
- Acts of God
- Human error
- Fibre cut and/or network outage
- Power outage
- Equipment failure
- Hacking/malicious code
- Data corruption
- Cascading system failure arising from several of the above
It might be tempting to think that the reliability of your next data centre project depends chiefly on how the data centre is built and where it is located. But according to Mendler, those aren’t the key issues. What matters is the ability to adapt to new demands - quickly and efficiently. That means the data centre fabric, the external network, and the systems and processes all count.
If you look at the list of key points of failure, many are external. And in order to be able to deal with the external factors, service providers building data centres need to be adaptable to change. As Charles Darwin put it: “It is not the strongest of the species that survives, nor the most intelligent ... It is the one that is the most adaptable to change.”
Mr Yulianus, a division head at Indonesian data centre provider Indosat, says, “Choosing the location as the strategic data centre facility is very important, especially in Indonesia where we have very limited power capability, so not in every location we can have dual power sources. So selecting the location is very strategic.”
Choosing the right location can mitigate many of the risks articulated by Mendler. By choosing a site with redundant power and carrier network connections, it’s possible to maintain service to customers, both internal and external, in the event of an external failure.
One of the clear trends is that the increase in computing density created by virtualisation and blade servers has created a very different environment for data centre managers. Organisations like Google now run thousands of servers with far fewer staff than ever before. That reduces the chance of human error or what Mendler calls “the idiot who brings a can of Coke into the data centre and then spills it”.
Perhaps the most important factor in maintaining a rock solid data centre is a life cycle of vigilance - from construction to maintenance. This means designing so that specific reliability levels are achievable, conforming to standards such as ISO/IEC 27001 for security, and ensuring that every single point of failure is removed or mitigated by design.
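To see why designing to a specific reliability level pushes you towards redundancy, it helps to run the arithmetic. The sketch below is illustrative only - the 99.9% availability figure for a single power feed is an assumption, not a quoted specification - but it shows how duplicating a component attacks a single point of failure:

```python
# Illustrative sketch: how redundancy changes overall availability.
# The component availability figures below are assumptions for the example.

def series(*avail):
    """All components must be up, so availabilities multiply."""
    a = 1.0
    for x in avail:
        a *= x
    return a

def parallel(*avail):
    """The system is up if ANY redundant component is up:
    multiply the downtime probabilities, then invert."""
    down = 1.0
    for x in avail:
        down *= (1.0 - x)
    return 1.0 - down

# A single power feed assumed at 99.9% availability.
single = 0.999
# Two independent feeds in parallel.
dual = parallel(single, single)  # 1 - 0.001^2 = 0.999999

HOURS_PER_YEAR = 24 * 365
print(f"single feed downtime: {(1 - single) * HOURS_PER_YEAR:.2f} h/year")
print(f"dual feed downtime:   {(1 - dual) * HOURS_PER_YEAR:.4f} h/year")
```

On these assumed numbers, a single 99.9% feed implies roughly 8.76 hours of downtime per year, while two independent feeds bring that down to about half a minute - which is the reasoning behind the dual power and carrier connections discussed above.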
In Mendler’s view, this is perhaps why so many businesses, large and small, are looking at externalising their data centre. “… that’s why a lot of enterprises are considering externalising the data centre, partly because of the complexity of maintaining their own, or if they want to maintain their own, they’ll externalise some of the backup disaster recovery to a third party,” she said.