Storage best practices

Wednesday, 19 January, 2011


With continued pressure on IT departments to prove their worth to the business, a key area to examine is storage utilisation. Implementing storage best practices will allow IT professionals to drive value back into the business and demonstrate return on investment. With this in mind, here are ten best practice areas to consider when reviewing data storage.

  1. Understand your workload: Most application designers don’t understand the finer points of infrastructure and storage design, and so struggle to express performance requirements effectively. Build a dialogue between the infrastructure team and the developers so that requirements are expressed in terms of service level agreements rather than physical configurations. This prevents the massive overprovisioning often seen in Fibre Channel storage area networks.
  2. Share as much as possible: Even though individual pieces of IT infrastructure are getting cheaper on a per-unit basis, the costs of management and data centre resources are not. Additionally, a small amount of waste in a dedicated or non-shared resource can be significant when multiplied hundreds of times over. In order to reduce cost and save on waste, use shared infrastructure resources where possible.
  3. Globalise your reserves: Many reserve practices that make sense in smaller environments don’t apply at larger scale. For example, many administrators avoid filling a Windows file system beyond 80% in order to maintain performance, and on virtual datastores they set aside space for snapshots or unplanned additions of new virtual machines. Each reserve is sensible on its own, but when they stack, a ‘full’ set of Windows servers on a virtual datastore may actually be using only about 65% of the underlying capacity (a worked example follows this list). Technology such as thin provisioning allows administrators to pool these reserves and allocate them more effectively.
  4. Large physical pools allow flexibility: Sharing resources and globalising reserves is much easier when there are large physical pools of resources that are then divided into consumer-sized chunks via virtualisation. This applies to compute via powerful systems and virtual servers, to networking via 10Gb Ethernet and virtual LANs, and to storage through technologies such as large physical disk pools and dynamically resizable LUNs, volumes and filesystems.
  5. Use larger, slower disks such as SATA wherever possible: Larger, slower disks based on technologies such as SATA are cheaper, take up less rack space and consume less power. Previously, these disks didn’t offer the same levels of reliability or performance as their Fibre Channel cousins; however, modern storage controllers overcome this with large pools, intelligent layouts and big caches. These designs and technologies have enabled SATA to produce the same performance results as Fibre Channel at a lower cost for many workloads.
  6. Compress and deduplicate data wherever possible: While compression and deduplication are commonly used for backup data, a number of technologies are available that can bring the benefits of these storage efficiency techniques to primary workloads. By taking advantage of increasingly powerful CPUs, improving the efficiency of memory caching and reducing back-end disk storage, deduplication technologies can even improve the performance of certain workloads (a simple deduplication sketch follows this list).
  7. Non-duplication is better than deduplication: Although deduplication can increase the efficiency of storage, in many cases it makes more sense not to create new physical copies of data in the first place. Using high-performance, pointer-based snapshots and writable copies for backups and for test and development environments saves time, storage and the CPU cycles otherwise spent removing duplicate data. This not only improves storage efficiency but can dramatically improve process efficiency and business productivity by eliminating the time it takes to make data copies.
  8. Include backup as part of the whole picture: Backup has hidden costs. A backup system may cost a tenth of the price per gigabyte of a primary storage system, but if you store 20 times as much data in it, backup can end up costing more than you budgeted for (a worked cost example follows this list). Many customers sacrifice efficiencies in their primary storage environments because they can’t move the data fast enough to meet the backup window. This happens when backup is an afterthought. If backup is designed into the primary storage system from the start, storage efficiency can be maintained throughout the IT environment.
  9. Measure twice, cut once: Many infrastructure managers don’t know how to measure their storage environment effectively, or don’t have the tools to do so. When adding capacity, ensure there are effective ways of measuring usage, and compare that usage against industry peers and other benchmarks. Without that information, you may be forced into buying too much storage or face unpleasant conversations with end users.
  10. Don’t buy more storage than you need before you need it: The cost of storage falls every six to 12 months. A ‘just in time’ provisioning model saves the power and cooling costs of keeping unused disks spinning, as well as money on the purchase price (a worked example follows this list).
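
To see how the reserves in item 3 stack up, the short Python sketch below multiplies out three illustrative reserves: 20% filesystem headroom, a 10% snapshot reserve and a 5% buffer for unplanned virtual machines. These figures are assumptions chosen for illustration, not measurements from any particular environment.

    # Rough illustration (assumed figures): how per-layer reserves compound
    # when each layer keeps its own private headroom.
    reserves = {
        "filesystem headroom (fill to 80%)": 0.20,
        "snapshot reserve": 0.10,
        "unplanned VM buffer": 0.05,
    }

    usable = 1.0
    for name, fraction in reserves.items():
        usable *= (1.0 - fraction)
        print(f"after {name}: {usable:.0%} of raw capacity usable")

    # With these assumed figures a 'full' datastore really holds only about
    # 68% data -- close to the ~65% quoted above. Thin provisioning lets the
    # reserve be pooled once rather than stacked per layer.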
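
Item 6 describes deduplication at a high level; the toy Python sketch below shows the basic mechanism, storing identical blocks once and keeping only content-hash references for the duplicates. It is a minimal illustration, not how any particular storage controller implements the feature.

    # Toy block-level deduplication: identical blocks are stored once and
    # referenced by their content hash.
    import hashlib

    def dedupe(blocks: list[bytes]) -> tuple[dict[str, bytes], list[str]]:
        store: dict[str, bytes] = {}   # hash -> unique block contents
        refs: list[str] = []           # per-block references into the store
        for block in blocks:
            digest = hashlib.sha256(block).hexdigest()
            store.setdefault(digest, block)
            refs.append(digest)
        return store, refs

    blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096, b"A" * 4096]
    store, refs = dedupe(blocks)
    print(f"{len(blocks)} logical blocks stored as {len(store)} unique blocks")
    # -> 4 logical blocks stored as 2 unique blocks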
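
The backup costing in item 8 is straightforward arithmetic. The sketch below uses assumed figures (a primary system at $5 per gigabyte, backup at a tenth of that price, and 20 times the data retained in backup) to show how the cheaper tier can still end up costing more overall.

    # Illustrative arithmetic for item 8: backup is cheaper per gigabyte,
    # but you hold many times more of it. All prices are assumptions.
    primary_price_per_gb = 5.00                       # assumed $/GB for primary
    backup_price_per_gb = primary_price_per_gb / 10   # "a tenth of the price"
    data_gb = 10_000                                  # assumed primary data set
    backup_multiple = 20                              # "20 times as much" retained

    primary_cost = data_gb * primary_price_per_gb
    backup_cost = data_gb * backup_multiple * backup_price_per_gb

    print(f"primary: ${primary_cost:,.0f}")   # $50,000
    print(f"backup:  ${backup_cost:,.0f}")    # $100,000 -- twice the primary spend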
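
Item 10 also comes down to arithmetic: if storage prices fall on a regular cycle, capacity bought just before it is needed costs less than capacity bought up front. The sketch below assumes a 20% annual price decline purely for illustration.

    # Illustrative arithmetic for item 10: defer purchases until capacity is
    # actually needed. The 20% annual price decline is an assumption.
    price_per_tb_today = 300.0       # assumed $/TB now
    annual_price_decline = 0.20      # assumed price drop per year
    need_tb_per_year = [50, 50, 50]  # capacity needed at the start of years 0, 1, 2

    buy_all_now = sum(need_tb_per_year) * price_per_tb_today
    buy_just_in_time = sum(
        tb * price_per_tb_today * (1 - annual_price_decline) ** year
        for year, tb in enumerate(need_tb_per_year)
    )

    print(f"buy everything up front: ${buy_all_now:,.0f}")        # $45,000
    print(f"buy just in time:        ${buy_just_in_time:,.0f}")   # $36,600
    # ...and the deferred disks aren't drawing power and cooling in the meantime.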

By John Martin, Principal Technologist, NetApp Australia and New Zealand
