Big data requires different thinking for backup and recovery


By Merri Mack
Friday, 17 February, 2012



Big data is here to stay, but how will organisations cope with this ever-increasing explosion of information, which is unstructured and streams in from different sources?

Phil Sargeant, Research VP for data centre technology at Gartner, commenting on big data, said: “Big data for one company is […] relative to another company. For example, one company might think 1 terabyte of data is big data, whilst another will think 1 petabyte of data is big data. So it’s all relative.

“There are two aspects of big data: hardware, which means storage, and intelligence, which turns the data into information of value using means such as analytics or business intelligence. How this is achieved is specific to different industries, as different methods are used to get the value out of the information.

“So big data is all about high capacity, but not necessarily high performance. Storage vendors are beginning to provide cheaper, high-capacity storage to accommodate big data. Examples are EMC’s Greenplum, and HP’s new Extreme Scale-Out (ExSO) servers. These big data solutions enable organisations to get value out of the information.

“Because of big data, disaster recovery (DR) has had to change its spots, because it becomes very hard to fit 1 petabyte into a traditional backup to tape. So organisations will consider replication rather than backup. Big data really changes the way people think,” said Sargeant.
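
Sargeant’s point about replication is easy to see with some back-of-envelope arithmetic. The sketch below is illustrative only: the tape drive speed, link rate and daily change rate are assumptions, not figures from Gartner.

```python
# Back-of-envelope arithmetic: a full 1 PB tape backup versus replicating
# only the daily delta. All throughput figures are assumptions.

PETABYTE = 10**15  # bytes

def hours_to_copy(size_bytes: float, throughput_mb_s: float) -> float:
    """Hours needed to move size_bytes at throughput_mb_s megabytes/second."""
    return size_bytes / (throughput_mb_s * 10**6) / 3600

# A single LTO-5-era tape drive streams roughly 140 MB/s (assumed figure):
print(f"Full 1 PB backup, one drive: {hours_to_copy(PETABYTE, 140):,.0f} h")
# -> ~1984 hours (about 83 days): hopeless without massive parallelism.

# Replication copies only what changed. Assume a 2% daily change rate
# pushed over a 10 Gbps link (~1250 MB/s):
daily_delta = 0.02 * PETABYTE
print(f"Daily 2% delta over 10 GbE: {hours_to_copy(daily_delta, 1250):.1f} h")
# -> ~4.4 hours, which fits inside a nightly window.
```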

Big data and business intelligence (BI) tools

The bigger the data grows, the greater the issue that existing business intelligence (BI) tools cannot handle it.

Andrew Milroy, VP ICT Asia Pacific, Frost & Sullivan, said: “The sheer volume of data being driven by compliance requirements, especially in the financial services industry, plus rich media and social media, is creating a need for intelligent decisions on the data. Companies must be able to demonstrate evidence-based decisions, especially in the financial services industry.

“Traditional BI vendors need to include tools to winkle out sentiment in social media exploration. Salesforce’s recently acquired Radian6 social media software makes sense of the data that is out there. Other BI vendors will need to build Radian6-type functionality into their products,” said Milroy.

Cloud-based BI tools might be worth a look, but there just aren’t that many available right now. According to Milroy, we’re waiting on better infrastructure with the flexibility to scale up and back up, which is a challenge.

“BI in the cloud is not really happening much now, but as BI vendors partner with carriers in Australia like Telstra and Optus, as well as data centre providers, it will gather pace.

“BI vendors such as SAS, IBM’s Cognos and SAP’s Business Objects are all extending their capabilities to handle much bigger data,” said Milroy.

Sargeant said, “It’s a bit premature to think the cloud will solve the problem, because if you have to move big data across the network to the cloud, it would be ridiculous and too slow. So people have to think about how long they want to retain data, and in what form. Do you keep data indefinitely? Consider archiving and think about what can be retained, and what can be deleted, and create subsets of data.

“I suspect there are few vendors who can supply the pieces of the jigsaw for big data and its backup and recovery. It’s early days for really good solutions,” said Sargeant.
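
One practical way to act on Sargeant’s retention advice is a simple age-based triage of files before they ever reach the backup system. The sketch below is a minimal illustration, not anything Gartner prescribes; the 90-day and seven-year thresholds are assumed policies.

```python
# A minimal sketch of age-based retention triage: bucket each file into
# keep / archive / delete tiers. The thresholds are assumptions.
import time
from pathlib import Path

ARCHIVE_AFTER_DAYS = 90      # assumed policy: archive after 90 idle days
DELETE_AFTER_DAYS = 7 * 365  # assumed policy: delete after seven years

def triage(root: str) -> dict:
    """Classify every file under root by the age of its last modification."""
    now = time.time()
    tiers = {"keep": [], "archive": [], "delete": []}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        age_days = (now - path.stat().st_mtime) / 86400
        if age_days > DELETE_AFTER_DAYS:
            tiers["delete"].append(path)
        elif age_days > ARCHIVE_AFTER_DAYS:
            tiers["archive"].append(path)
        else:
            tiers["keep"].append(path)
    return tiers
```

Only the “keep” tier then needs fast backup or replication; the archive tier can sit on cheap, slow storage, shrinking the subset of big data that the DR mechanism has to move.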

Disaster recovery

Adrian Briscoe, General Manager of Kroll Ontrack Australia, has seen the company’s data recovery volumes grow from less than a gigabyte in 1982, when the company started in Australia, to a massive 30 petabytes by the end of 2011, up 10 petabytes on the 2010 figure.

“The vast amount of data being lost since 2009/2010 is due to the increase in virtualisation, which has added a level of complexity and exposes enterprise data to new risks.

“Kroll Ontrack is seeing the number of data recovery requests from virtual systems grow rapidly. User error is an issue, with common causes of data loss from virtualised environments including file system corruption, deleted virtual machines, internal virtual disk corruption, RAID and other storage/server hardware failures and deleted or corrupt files contained within virtualised storage systems.

“We work closely with virtualisation vendors to write code and provide proprietary tools to get critical data back, so businesses can start running again. A Forrester-DRJ survey noted that 15% of respondents knew the cost of their business’s downtime; it averaged nearly $145,000 per hour.

“Backup windows are becoming smaller and smaller while data is getting bigger and bigger. Backups are expanded on the fly, but there may not be enough space to accommodate them, so more deduplication will be utilised.
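
Deduplication earns its keep by storing each unique block of data once, however many times it recurs across successive backups. The sketch below is a simplified illustration only; real products typically use variable-size, content-defined chunking rather than the fixed 4 MB chunks assumed here.

```python
# Simplified content deduplication: split files into fixed-size chunks,
# key each chunk by its SHA-256 hash and store every unique chunk once.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MB fixed chunks (an assumption)

def dedup_backup(paths, store):
    """Add files to the chunk store; return per-file 'recipes' of hashes
    from which each file can later be reassembled."""
    recipes = {}
    for path in paths:
        hashes = []
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK_SIZE):
                digest = hashlib.sha256(chunk).hexdigest()
                store.setdefault(digest, chunk)  # keep only the first copy
                hashes.append(digest)
        recipes[path] = hashes
    return recipes
```

A nightly backup of largely unchanged data then costs almost no extra space: unchanged chunks hash to keys already present in the store.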

“More unstructured data plus an influx of employees bringing their own devices that need backing up, and the added burden of compliance and governance, means big data will only get bigger.

“Even though recovery of virtualised data is more complex, it can be recovered using software or logic, whereas if a disk platter is physically damaged, then it’s ‘goodbye’ to the data,” said Briscoe.
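
The “software or logic” route Briscoe mentions often comes down to techniques such as file carving: scanning a raw disk or virtual-disk image for known file signatures when the file system metadata is gone. The toy sketch below shows the principle only; it is not Kroll Ontrack’s proprietary tooling, and it assumes contiguous, unfragmented files.

```python
# Toy file carving: recover JPEGs from a raw image by signature scanning.
# Real tools stream the image and handle fragmentation; this sketch reads
# the whole file and assumes each JPEG is stored contiguously.
from pathlib import Path

JPEG_START, JPEG_END = b"\xff\xd8\xff", b"\xff\xd9"

def carve_jpegs(image_path: str, out_dir: str = "carved") -> int:
    """Write every contiguous JPEG found in image_path to out_dir."""
    Path(out_dir).mkdir(exist_ok=True)
    data = Path(image_path).read_bytes()
    count = pos = 0
    while (start := data.find(JPEG_START, pos)) != -1:
        end = data.find(JPEG_END, start)
        if end == -1:
            break
        Path(out_dir, f"file{count:04d}.jpg").write_bytes(data[start:end + 2])
        count, pos = count + 1, end + 2
    return count
```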

Applied Research conducted a global data survey on behalf of Symantec on backup and disaster recovery. It surveyed 1425 IT professionals from 31 countries in October 2011.

Key findings included: 32% of respondents are not meeting backup and recovery SLAs, or are unsure whether they are; of those not meeting SLAs, 49% blame having too much data; and 62% report inconsistencies between SLAs for physical and virtual environments. Confidence in backup is lacking, especially in virtual backup. Backup and recovery practices are also set to change: the 61% of respondents using disk-based backup today is expected to fall to 40% within 12 months, while the 45% using cloud-based backup or recovery is expected to rise to 51% over the same period.

Tisser Perera is the Senior IT Manager, Service Delivery at the Australian Energy Market Operator (AEMO), which operates Australia’s national power and gas infrastructure. Perera treats backup as an insurance policy.

“We have been using the Acronis file system backup service for 6 years for a total of 900 servers. It allows me to sleep well knowing that we can recover. It’s a cost-effective delivery service and it is now a standard for us.”

Acronis recently released its global disaster recovery index for 2012, based on a survey conducted by the Ponemon Institute in September and October 2011. The survey drew 6000 responses from IT professionals, double the 3000 respondents of the 2010 survey.

More than 300 Australian organisations responded, with 38% having between 101 and 500 seats.

Managing hybrid physical, virtual and cloud environments presents the biggest challenge: 70% of respondents said that moving data between physical, virtual and cloud environments remains their biggest difficulty, just as it was in 2010.

Over a third of Australian companies surveyed still did not have an off-site backup strategy, despite disasters such as flooding that affected parts of the nation.
