Flash: saviour of the data centre?

By Stephen Withers
Thursday, 19 September, 2013

The widespread use of smartphones, tablets, ultrabooks and various other electronic devices means flash storage has become part of everyday life for many people, but it is also playing an important role in the data centre.

The vendors we spoke to for this article were unanimous about the significance of flash in the data centre.

Flash is “the biggest change we’ve seen in storage for some time”, said John Martin, principal technologist, NetApp.

“Oracle has long been a proponent of flash in the data centre,” said Jason Schaffer, senior director product management - disk storage at Oracle. For example, flash is used throughout the company’s Engineered Systems to provide high availability and to optimise database performance.

Adrian De Luca, Asia Pacific chief technology officer at Hitachi Data Systems, said HDS had provided “enhancements right across the board” with performance, efficiency and economic improvements to flash technology.

“The technology has a role throughout the data centre,” said Darren McCullum, XtremIO regional sales manager, EMC. The company led the use of flash in data centre storage around five years ago and now few Symmetrix or VMAX arrays ship without flash storage, he said. “It’s an accepted medium in the data centre.”

The benefits of flash fall into three broad areas: speed, density and power consumption.

Speed
Flash is “the change that’s been needed”, said Garry Barker, storage specialist, IBM Systems Technology Group Australia and New Zealand, explaining that disk technology has not become appreciably faster in the last 10 years and is unlikely to do so in the next 10. Flash is still getting cheaper, so while it can already reduce the total cost of most ‘everyday’ applications such as Oracle and SAP, the savings will likely increase.

Martin pointed out that until the arrival of technologies such as in-memory databases there was little pressure for significant storage performance improvements, but there is now a realisation of the business benefits that can stem from, say, the fivefold improvement that can come from the selective application of flash to a database.

Where database performance is needed, “PCIe [flash] cards in the host are probably the best way of achieving that”, especially with x86-based servers. The idea is to put the storage as close as possible to the processor.

But what happens when you need to be able to move the application between servers? Barker said this requires PCIe flash cards to be fitted to both, taking you back around a decade to the use of direct attached storage (DAS) with good performance but inferior utilisation, making it harder to cost-justify flash. There are also limits on the number of PCIe cards and therefore the total amount of flash storage that can be installed in one server.

The good news is that according to Barker, flash-based SAN arrays can give very similar performance to PCIe flash cards. He suggests that once four or five flash devices are needed, it becomes more cost- and performance-effective to use SAN-based flash instead.

McCullum said EMC’s XtremSF PCIe cards can be used with XtremSW Cache server flash caching software to combine the performance of onboard flash with the data protection that comes from writing through to a storage array in case the server or the card fails. This software “almost genericises flash in the data centre, at least in the server”, he said, as it also works with third-party PCIe cards and SSDs, providing common management and automation.

The technology that EMC gained in its recent ScaleIO acquisition allows it to virtualise any server direct attached storage (including disk, SSD and PCIe flash) into a storage array. It scales to thousands of units, McCullum said, and means that data can be stored very close to the application while still enjoying the data protection and other facilities provided by storage arrays.

The NetApp EF540 flash array is “the IOPS monster” according to Martin. It is said to deliver more than 300,000 IOPS with submillisecond latency and 6 GBps throughput. The EF540 can “easily beat pure flash players’ [products]”, he said.

If that is still not enough, HDS’s all-flash Hitachi Unified Storage can deliver up to one million IOPS. “That’s quite a huge number,” said De Luca. The company has also developed its own flash controller ASIC for better wear levelling, allowing it to offer a five-year endurance warranty on flash storage.

Relative newcomer Nimble Storage has a distinctive approach to building hybrid arrays. It puts newly written data into mirrored NVRAM, but unlike other vendors it then compresses the data, coalesces what can be as many as tens of thousands of small chunks of data into one large buffer and then stripes it across multiple hard disks. Gavin Cohen, director of marketing and technology, explained that relatively inexpensive SATA drives “can sustain incredibly low latencies” under these conditions.
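The write path described above - compress incoming data, coalesce many small writes, then lay them down as one large sequential stripe - can be sketched in a few lines. This is a minimal illustration of the pattern only, not Nimble's actual CASL implementation; the stripe size and the in-memory list standing in for the disk stripe are assumptions for the sketch.

```python
import zlib

STRIPE_SIZE = 4 * 1024 * 1024  # flush threshold (illustrative figure)

class CoalescingWriteBuffer:
    """Accumulate many small random writes, compress them, and flush
    them to disk as one large sequential stripe."""

    def __init__(self):
        self.pending = []          # compressed chunks awaiting flush
        self.pending_bytes = 0
        self.stripes_written = []  # stands in for striping across hard disks

    def write(self, data: bytes):
        chunk = zlib.compress(data)    # compress before buffering
        self.pending.append(chunk)
        self.pending_bytes += len(chunk)
        if self.pending_bytes >= STRIPE_SIZE:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # One large sequential write replaces thousands of small random ones,
        # which is why even SATA drives can sustain low write latencies here.
        self.stripes_written.append(b"".join(self.pending))
        self.pending = []
        self.pending_bytes = 0
```

The point of the design is that hard disks are slow at random I/O but fast at sequential I/O, so turning a random write workload into a sequential one lets inexpensive drives absorb it.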

To obtain high read performance, Nimble stores a second copy of the data on SSD and the controller ensures that ‘hot’ data stays there. Some 96% of reads in real-world installed systems come from SSD, Cohen claimed, so the company’s products deliver almost the same performance as all-flash arrays despite being around one-fifth the price. He said Nimble has more than 10 times as many customers and installed systems as any other storage vendor of similar age, “the ultimate proof that this is a successful approach”. Local customers include one of the major banks, which uses Nimble with mission-critical applications.

Martin said that while every workload can benefit from hybrid arrays - they allow the use of a smaller number of higher-capacity disk drives and are therefore cheaper for a given level of performance - there are two standout applications for pure flash arrays. One is Oracle or SQL Server databases, the other is virtual desktops. The latter is “incredibly I/O intensive”, he said - 1000 desktops can be more I/O intensive than a large bank’s core system.

McCullum agreed, saying that VDI is one of the applications where flash can already be cheaper than hard disk. The performance of flash allows for real-time inline deduplication, which is especially relevant to virtual disk images.
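Deduplication pays off for virtual disk images because hundreds of desktop images are mostly identical blocks. The core mechanism can be sketched as follows - a hypothetical fingerprint-indexed block store, not any vendor's implementation; the 4 KB block size is an assumption for the sketch.

```python
import hashlib

BLOCK = 4096  # assumed fixed block size

class DedupStore:
    """Minimal sketch of inline block deduplication: each fixed-size
    block is fingerprinted on write, and identical blocks are stored
    only once, however many images reference them."""

    def __init__(self):
        self.blocks = {}   # fingerprint -> block data

    def ingest(self, image: bytes):
        refs = []
        for i in range(0, len(image), BLOCK):
            block = image[i:i + BLOCK]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)   # store only if unseen
            refs.append(fp)
        return refs  # the image is now just a list of block references
```

Ingesting a second, near-identical desktop image adds only the blocks that actually differ, which is why dedup ratios on VDI workloads are so high. Doing this inline (before the data hits the media) is practical on flash because the fingerprint lookups don't pay a seek penalty.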

Big data mostly means getting really good analytics results very quickly, suggested Martin. Such projects typically involve less than 10 TB of data, which if stored in an EF540 only requires 2U of space. (Barker observed that as much as 20 TB of flash can be packed into 1U and then treated as “one big blob [that] we carve up as we like”.) This means the original data can be left wherever it currently resides with a read-only copy for analytics on an EF540, said Martin, who claimed this approach can be cost-justified by associated Oracle licence savings. The speed of flash arrays means CPUs spend less time waiting and more time working, said Martin, so you need fewer processors and therefore fewer Oracle licences.
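The licence-saving argument above is simple arithmetic: Oracle charges per core, so if flash cuts the CPU time lost to I/O wait, the same workload fits on fewer cores and therefore fewer licences. A hypothetical worked example - every figure here is an illustrative assumption, not an Oracle list price:

```python
# Hypothetical worked example of the licence-saving argument.
# All figures are illustrative assumptions, not Oracle list prices.
licence_per_core = 47_500        # assumed per-core licence cost ($)
cores_disk_backed = 64           # cores needed when CPUs idle on disk I/O
io_wait_fraction_saved = 0.5     # assume flash halves time lost to I/O wait

cores_flash_backed = int(cores_disk_backed * (1 - io_wait_fraction_saved))
saving = (cores_disk_backed - cores_flash_backed) * licence_per_core
print(cores_flash_backed, saving)   # 32 cores, $1,520,000 in licences
```

Even with far more conservative assumptions, the licence saving can dwarf the cost of the flash array itself, which is the basis of Martin's cost-justification claim.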

Barker said one IBM customer that trialled the use of flash in conjunction with analytics software saw results 50 times faster while reducing the CPU requirements. “This is a big step,” he said, as there is an opportunity to do business differently if a particular analysis takes half a minute rather than half an hour.

At Microsoft’s recent TechEd Australia conference, Jeff Woolsey, principal group program manager for Windows Server virtualisation, demonstrated Windows Server 2012 R2’s storage tiering capability using 16 hard drives and four SSDs. An SQL-based application showed a sixteenfold performance improvement from the ‘hot’ data blocks being delivered automatically from SSD while the ‘cold’ blocks remained on hard disk. Obtaining that level of performance from spinning disks would require 260 15K RPM drives, he said.
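The tiering behaviour demonstrated above - hot blocks migrate to SSD, cold blocks stay on disk - can be modelled with a toy frequency counter. This is a deliberately simplified sketch of the general technique; Windows Server's actual tiering engine uses more sophisticated heat maps and scheduled migration, and the tier size here is an arbitrary assumption.

```python
from collections import Counter

SSD_SLOTS = 4   # capacity of the fast tier in blocks (illustrative)

class TieredStore:
    """Toy model of automated storage tiering: the most frequently
    read blocks are promoted to a small SSD tier, the rest serve
    from hard disk."""

    def __init__(self):
        self.hits = Counter()
        self.ssd = set()

    def read(self, block_id):
        self.hits[block_id] += 1
        self.retier()
        return "ssd" if block_id in self.ssd else "hdd"

    def retier(self):
        # keep the SSD tier holding the hottest blocks seen so far
        self.ssd = {b for b, _ in self.hits.most_common(SSD_SLOTS)}
```

Because real access patterns are heavily skewed, a fast tier holding only a small fraction of the blocks can absorb most of the reads - the same effect behind the sixteenfold improvement in the demonstration.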

“I’m really excited that we’re bringing storage tiering to Windows Server 2012 R2” as it makes SSD viable for a wider range of organisations, said Ben Armstrong, senior program manager lead at Microsoft. Adding a small proportion of SSDs to a collection of hard drives adds slightly to the total cost, “but the performance difference is awesome”.

Flash alone is not necessarily the answer to performance issues. Oracle has offered all-flash arrays for years, said Schaffer, as well as PCIe flash cards for servers. But thanks to Oracle’s ZFS file system, hybrid storage “is actually faster than all-flash systems”, he said, as well as being more scalable and much cheaper. This is because ZFS performs as much I/O from DRAM as possible, giving up to six times the efficiency of an all-flash array. An audit of customers’ systems found 85-90% of I/O is done from DRAM, so at this stage there is no real need for all-flash arrays, he said.

Martin said the combination of flash and very fast networks within data centres is leading to increased interest in remote DMA, the ability to transfer data between storage and a remote system’s memory without going via its CPU.

Remote DMA can provide a significant performance boost. At TechEd Australia, Armstrong demonstrated the live migration of virtual machines between servers. Changes made in Windows Server 2012 R2 reduced the time taken from 1 min 25 s to 32 s, and the use of remote DMA-enabled hardware then slashed it to just 11 s.

Cost-effective performance is the main reason for using flash storage, Barker suggested. It also reduces the amount of power and space required, “but that’s the icing on the cake”.

Density
Using flash instead of disk to achieve high I/O rates can save a significant amount of space in the data centre. According to McCullum, a four-node VNX 7500 system capable of delivering one million IOPS occupies approximately half a rack. Getting the same performance from 15K RPM hard drives would require several thousand drives occupying around 10 racks.
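The drive counts quoted above follow from how few random IOPS a spinning disk can deliver. A rough sanity check - the per-drive IOPS and rack-density figures are rule-of-thumb planning assumptions, not vendor specifications:

```python
import math

# Rule-of-thumb planning figures (assumptions, not vendor specs)
target_iops = 1_000_000
iops_per_15k_drive = 180     # common planning figure for a 15K RPM disk
drives_per_rack = 600        # assumed dense packing, ~15 drives per RU

drives = math.ceil(target_iops / iops_per_15k_drive)
racks = math.ceil(drives / drives_per_rack)
print(drives, racks)   # 5556 drives, 10 racks
```

A single 15K drive's random-read rate is bounded by seek time and rotational latency, so the only way to scale IOPS with disk is to add spindles - which is exactly the multi-rack footprint flash eliminates.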

Barker gave an example of a customer that wanted to store around 10 TB of data but needed to install 60 TB of disk to get the required performance. “That made it a very expensive proposition”, he said, observing that this is an increasingly common situation. Using flash storage to deliver the required performance reduces the number of devices required and hence the amount of space occupied.

Density considerations are especially important for secondary sites, he said, as they are often located in shared data facilities where there is a direct relationship between cost and space.

But it is not just a case of flash improving on the density of all-disk arrays: De Luca noted that HDS uses flash cards rather than SSDs in order to pack more storage into a given volume.

Power consumption

Those multiple 15K RPM drives previously needed for performance reasons don’t just take up more space, they draw more power, said McCullum. The availability of power is a constraining factor in some data centres, encouraging more organisations to turn to flash.

Barker said flash requires around one-fifth of the power consumed by hard disks under everyday workloads, so “it’s becoming more commercial[ly viable]”.

Management
Storage management can be a significant part of the total cost of ownership, so you do not want to lose the savings delivered by flash storage to increased management complexity. Fortunately, that can be avoided.

IBM’s SAN-attached flash storage looks like a “disk box” but runs approximately 50 times faster, can be administered by any storage management software and does not require changes to applications, said Barker.

None of HDS’s all-flash competitors can match the company’s virtualisation capabilities such as the ability to virtualise existing arrays, said De Luca. “We’re offering the best of both worlds” - storage innovation plus a bridge that allows it to be introduced in a seamless and unified way.

EMC’s FAST (fully automated storage tiering) allows organisations to combine the performance of flash with the economy of disk, said McCullum. This operates at a more granular level than traditional hierarchical storage management and is therefore “more reactive and responsive”, he said. FAST currently works at the array level, but will soon be extended across the data centre to automatically store data in the most appropriate place.

The company’s XtremIO all-flash arrays take industry-standard SSDs and provide differentiation through software, said McCullum. XtremIO combines the speed of flash storage with enterprise features such as high availability, data protection, thin provisioning and real-time deduplication.

According to Schaffer, Oracle has gone further and now allows applications to manage their own storage, on the grounds that they are closest to the data. Oracle Application Engineered Storage combines multiple layers of storage, automation and application-driven tuning. For example, Oracle Database 12c handles tiering and data movement at a granular level, he said. Automation is important, he added, as efficient operation cannot wait for database administrators to step in after every change to the system that impacts performance.

The future
The price of flash is already approaching the cost of Tier 1 disk, observed Martin, and it delivers savings in power consumption and rack space. “Most people have some idea of where they’re going to put flash,” he said, predicting that when it becomes cheap enough - not necessarily cheaper than disk - people will use it more widely.

Schaffer expects flash and DRAM to account for an increasing percentage of data centre storage. But the total amount of data will continue to grow, and forthcoming 8 TB hard drives mean spinning disks - with their density and cost advantages - will remain an efficient, reliable and responsive piece of the storage puzzle.

Barker went further, suggesting most primary data will be stored on flash within three to four years. “It will become tier one [storage]” and hard drives will be mostly relegated to storing archived data.

He said IBM is investing $1 billion over the next three years to develop better application infrastructures through the use of flash storage. “It does change the whole application scenario,” he said. The industry is at or near the tipping point where costs favour flash over hard disks, and “the effect [will be] pervasive”.

Looking ahead, Martin tips phase change memory to replace flash from around 2016-2018 in situations where extreme performance is needed, though some other technology may become the frontrunner as “there’s so much investment in many different places”.

“It’s all about solid state,” he observed, because apart from tape libraries, “disk is the last mechanical thing in the data centre”.
