SSDs and 2.5-inch HDDs better for virtualisation

By Chris Mellor
Monday, 20 July, 2009


While the capacities of 3.5-inch hard disk drives continue to rise, their rates of I/Os per second (IOPS) stay roughly constant. The end result is that the 3.5-inch hard drive I/O channel is becoming a bottleneck, and 2.5-inch drives and solid-state drives (SSDs) are becoming attractive ways for companies to boost IOPS without wasting capacity.

Let's back up a minute. Data is accessed randomly across the surface of a hard drive, with applications typically grabbing data in small pieces.

When you swap in a hard drive with increased capacity, the read/write head has to find data in a larger store occupying the same physical area, yet it can still service only roughly the same number of random requests per second. That figure is the drive's IOPS rate; it is governed by seek time and rotational latency, and it's independent of the size of the data store.

Access density comes into play when a set of applications in a server accesses a hard drive store and is limited in performance because the drive can't service the requests quickly enough. Let's have an example. Say a 300 GB 15K rpm Fibre Channel (FC) drive can do 200 IOPS. Dividing the capacity by the IOPS gives us an access density figure of 1.5 GB per IOPS (or, put the other way round, about 0.67 IOPS per GB).
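As a quick sanity check, here's a minimal Python sketch of that arithmetic, using the drive figures assumed above:

    # Access density for the example drive: a 300 GB, 15K rpm FC
    # drive delivering roughly 200 IOPS.
    capacity_gb = 300
    iops = 200

    gb_per_iops = capacity_gb / iops   # 1.5 GB per IOPS
    iops_per_gb = iops / capacity_gb   # ~0.67 IOPS per GB
    print(f"{gb_per_iops} GB/IOPS, {iops_per_gb:.2f} IOPS/GB")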

Imagine we have five drives that give us 1.5 TB of storage and 1,000 IOPS. We'll hook these up to a single-core server with 10 applications. Each application needs 50 GB of capacity and 100 IOPS, so 500 GB and 1,000 IOPS in total. Presto! We're in I/O balance.

Now we'll upset this applecart and change to a two-core server and increase memory proportionately. This allows us to run 20 applications that will need 1 TB of storage capacity and 2,000 IOPS in total, but still 100 IOPS each. Oops, the storage system can't deliver the IOPS. Now let's change the server to a quad-core with uprated memory and run 40 applications. The storage system needs to provide 2 TB of capacity, 500 GB more than it has, and 4,000 IOPS. Double oops.
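A minimal sketch of that balance check, using the per-application figures assumed in this example (50 GB and 100 IOPS each):

    # Five 300 GB, 200 IOPS drives against a growing application count.
    DRIVES, DRIVE_GB, DRIVE_IOPS = 5, 300, 200
    APP_GB, APP_IOPS = 50, 100

    have_gb, have_iops = DRIVES * DRIVE_GB, DRIVES * DRIVE_IOPS
    for apps in (10, 20, 40):   # one, two and four cores
        need_gb, need_iops = apps * APP_GB, apps * APP_IOPS
        ok = need_gb <= have_gb and need_iops <= have_iops
        print(f"{apps} apps: need {need_gb} GB / {need_iops} IOPS -> "
              f"{'in balance' if ok else 'storage falls short'}")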

How do we get around IOPS trouble?

What can we do? To increase capacity, we simply swap each of our five drives for a 500 GB model and reach 2.5 TB. But the IOPS problem remains, because the rate stays the same at 200 IOPS per drive across five drives. There's no way to improve access density: the same 200 IOPS now has to serve 500 GB instead of 300 GB. The drive spin rate stays at 15K rpm, and each drive still has a single actuator serving all of its read/write heads.

The only way forward, in hard disk drive terms, is to increase the number of drives. If we need 4,000 IOPS, and a single drive delivers 200 IOPS, then we need 20 drives (four times as many as we have). But 20 drives of 300 GB give us 6 TB of capacity, which is way too much.
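The spindle count falls straight out of the IOPS requirement. A sketch, again using the figures assumed above:

    import math

    def spindles_needed(required_iops, drive_iops=200, drive_gb=300):
        """Drives needed to hit an IOPS target, plus the capacity
        we're forced to buy along the way."""
        drives = math.ceil(required_iops / drive_iops)
        return drives, drives * drive_gb

    drives, gb = spindles_needed(4000)
    print(f"{drives} drives, {gb / 1000} TB")   # 20 drives, 6.0 TB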

Let's make the problem worse. We'll virtualise the quad-core servers and have them run 80 applications; now they'll need 4 TB of storage capacity and 8,000 IOPS. That means we now need 40 spindles. Our servers are becoming spindle-bound.

But we don't need 40 drives with 300 GB capacity each. We're not disk capacity-bound. If anything, we have too much capacity. Forty 150 GB drives would give us 6 TB, which is plenty of capacity headroom. Now eight-core processors are coming, which translates into 160 applications, 8 TB of capacity and 16,000 IOPS.

Solid-state drives as a solution

But we've forgotten something. Many servers are dual-socket, which immediately doubles the number of cores and hence the IOPS needed. A dual-socket eight-core server in our example would mean 320 applications, 16 TB of capacity and 32,000 IOPS. That needs 160 spindles. A four-socket server would mean 640 applications, 32 TB of capacity, 64,000 IOPS and 320 hard disk drives.
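Extending the same arithmetic across socket counts shows the progression. This sketch assumes the virtualised figure of 20 applications per core implied by the example above:

    # Virtualised scenario: 20 applications per core, each needing
    # 50 GB and 100 IOPS; hard drives deliver 200 IOPS apiece.
    APPS_PER_CORE, APP_GB, APP_IOPS, DRIVE_IOPS = 20, 50, 100, 200

    for sockets in (1, 2, 4):
        apps = sockets * 8 * APPS_PER_CORE   # eight-core processors
        tb = apps * APP_GB / 1000
        iops = apps * APP_IOPS
        print(f"{sockets}-socket: {apps} apps, {tb:g} TB, "
              f"{iops} IOPS, {iops // DRIVE_IOPS} spindles")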

Let's leave aside the fact that blade servers connected to shared storage face the same problem, since the IOPS needed by a rack of blades grows even faster as the cores and sockets per blade multiply. And let's not think about hard disk drive failures and the need to overprovision spindles to cover against them.

These numbers and their progression are frightening. One answer to the conundrum of needing more spindles and less capacity per spindle is to move to small form factor (SFF) drives, such as 2.5-inch drives. We can cram more of these into the space occupied by 3.5-inch drives and thus increase the access density of a drive enclosure.

We could also take a step back and realise that we simply need a storage medium that can deliver a greater number of IOPS. It already exists in the form of solid-state drive technology, which has IOPS rates well above 50,000.
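To see what that buys us, compare spindle counts for the worst case above (64,000 IOPS), taking the conservative 50,000 IOPS figure for a single SSD:

    import math

    required_iops = 64000
    print("HDDs needed:", math.ceil(required_iops / 200))     # 320
    print("SSDs needed:", math.ceil(required_iops / 50000))   # 2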

Solid-state drives are being used to provide a very fast, relatively small cache or tier 0 in front of hard disk drive arrays. They take advantage of the fact that many I/Os are aimed at a small subset of the data on an array. Caching that hot data in the SSD tier works very well, which is what Compellent, EMC, Pillar Data and other vendors do. A changeover from 3.5-inch to 2.5-inch drives, meanwhile, provides more spindles for tier 1 data, or for applications whose value doesn't justify the expense of solid-state drives.

Flash storage can also be used when large chunks of data are being accessed and you need a high rate of IOPS. Large capacity flash arrays (those used in place of hard disk drive arrays) such as Violin Memory's 4 TB Violin 1010, Texas Memory Systems' 5 TB RamSan-620 and WhipTail Technologies' 6 TB WhipTail can do the job. The first two use fast single-level cell (SLC) flash with 200,000 IOPS or more, whilst the multi-level cell (MLC) WhipTail offers approximately 100,000 IOPS. We're entering an era when it will become common for IOPS-bound servers to be hooked up to solid-state drive data stores.
