Keeping government alive during data disasters
By Nathan Steiner, Head of Systems Engineering, ANZ, Veeam Software
Friday, 01 December, 2017
Destructive hurricanes in the US have shone a light on why Australians, too, should be mindful of the need for disaster recovery plans.
In a world where storms, floods and data breaches are becoming more widespread, it is more important than ever to ensure emergency services departments stay online during national disasters. Where an outage might cost a commercial operator a few lost sales, government data availability underpins the baseline services Australians rely on.
It is easy to feel complacent when witnessing the devastating effects of three tropical cyclones across the US in as many months — Hurricanes Harvey across Texas, Irma across Florida and Jose across the eastern USA and the Bahamas.
Australia is a large, sparsely populated and geologically stable country with few population centres threatened by regular natural disasters. The nation’s data centres are mostly out of harm’s way. Yet keeping essential and government services up and running through times of crisis is still very important, and can be difficult. After all, we have far-flung urban centres with vulnerable data links running between them.
When the October 2013 bushfires in the NSW Blue Mountains destroyed 500 buildings and caused $94 million in property damage, and when the 2009 Black Saturday bushfires across Victoria resulted in over 3500 buildings and 173 lives lost, the ability to coordinate and direct emergency services was paramount. The last thing the SES and first responders would have wanted was a loss of data or signal.
And catastrophe can strike anywhere at any time. The residents of Newcastle probably felt quite safe on the morning of 28 December 1989, before the earthquake struck, killing 13 people and causing $4 billion in damage.
When lives or livelihoods are threatened, nobody wants to worry about data storage or transmission systems. But when so many people rely on the communication and data deployment technologies used by police forces, the military or even civilian agencies such as Centrelink, there is a lot at stake.
Disaster recovery (DR) is not about tempting fate or expecting the worst — rather, it is about getting up and running again as quickly and as smoothly as possible so that attention can be given to far more pressing concerns.
Although government data storage and transmission systems are, thankfully, robust in a developed nation like Australia, the market that supplies both government and industry is not nearly as expansive as those of the US, Asia or Europe. This makes data availability and a DR plan all the more critical: with fewer nodes to rely on, organisations must be able to deploy backed-up data at another location.
For example, Newcastle is home to at least two data centres — facilities which, were they to experience a magnitude 5.6 earthquake such as the one in 1989, would almost certainly be damaged or destroyed.
So it is important to plan for disasters, rather than waiting for them.
If an essential service must stay on regardless of the circumstances, then the provider’s data availability plan needs to be made ironclad ahead of time — and that includes the human element. When danger approaches, staff need to evacuate with the confidence that processes, transactions or records will be transferred to another IT environment without missing a beat.
Today, the live backup of data to a location dozens, if not hundreds, of kilometres away should be pivotal to a DR plan — something cloud computing makes easy for the smallest and largest organisations alike. Many of the federal services people rely on benefit from being run concurrently in several regional zones in case one goes offline.
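The geographic-separation principle above can be checked programmatically. Here is a minimal sketch in Python — the site names and coordinates are hypothetical, not any agency's real topology — that verifies a DR plan includes at least one replica far enough from the primary site:

```python
import math

# Hypothetical (latitude, longitude) coordinates for two sites.
PRIMARY_SYDNEY = (-33.87, 151.21)
REPLICA_MELBOURNE = (-37.81, 144.96)

def haversine_km(a, b):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def has_remote_replica(primary, replicas, min_km=100):
    """True if at least one replica sits at least min_km from the primary site."""
    return any(haversine_km(primary, site) >= min_km for site in replicas)

# Melbourne is roughly 700 km from Sydney, so this plan passes the check;
# a second data centre in the same city would not.
print(has_remote_replica(PRIMARY_SYDNEY, [REPLICA_MELBOURNE]))
```

A threshold like 100 km is only illustrative; the point is that a replica in the same flood plain, fire zone or fault line as the primary is not a disaster recovery site at all.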
With Hurricanes Harvey and Irma, the paths of destruction encompassed entire cities, making on-site backup regimes ineffective.
However, it is not enough just to have a DR plan — it must be vigorously tested. Veeam has seen countless instances of government agencies discovering errors in backup data that prevented recovery. The time to discover an error in the system is not when the operation of the department depends on it, so it is important to formalise the regular testing and scheduling of DR, backup and recovery to ensure all will be right when the time comes.
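The restore testing described above can be partly automated. The following Python sketch uses hypothetical function names and stands in for whatever verification a real backup product provides: after each test restore, a checksum of the restored copy is compared against the original, so a silently corrupted backup is caught during a scheduled test rather than during a disaster.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path):
    """Checksum a file in chunks so large backups do not exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def restore_test_passes(original, restored):
    """A test restore passes only if the restored copy is byte-identical."""
    return sha256_of(original) == sha256_of(restored)

# Simulated test restore: back a file up, "restore" it, then verify.
workdir = Path(tempfile.mkdtemp())
source = workdir / "records.db"
source.write_bytes(b"emergency services dispatch records")
restored = workdir / "records.restored.db"
shutil.copy(source, restored)  # stands in for a real restore job
```

Run on a schedule against a non-production environment, a check like this turns "we think the backups work" into a dated, auditable test result.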
And organisations also need to look further than just their own perimeter. They pay good money for access, storage and processing from external providers, so they should be asking them hard questions about their DR plans. Do they adhere to the same backup and DR regime? Do they maintain several live copies of client data, including at least one remotely? Do they have an emergency power system? Fire control?
Many accredited standards exist for the provision of government services, and department or agency CTOs may already have outlines that cover data availability. If not, they should embark on an urgent review to create them.
Datacentermap.com lists 75 data centres in Australia, any one of which might end up in the path of a natural disaster when it is least expected. Just as Australian households and businesses prepare for bushfires every summer, so too should governments make provision for data disaster recovery.