Making the move to hyperconvergence

Hewlett Packard Enterprise

By Alan Hyde, Vice President and General Manager, Enterprise Group, HPE South Pacific
Wednesday, 17 August, 2016


Chances are you’re one of the many IT leaders looking into how hyperconvergence can help you better manage your data centre infrastructure. But before you jump on the bandwagon, you need to understand that hyperconvergence is not for everybody — while it’s a great solution for some use cases, there are probably better alternatives for others. The point of hyperconvergence is to remove complexity and cost. So as you evaluate options from different vendors, there are six things you should consider.

1. Scalability. A hyper-converged solution should simplify scaling in two areas: the flexibility to handle data and user growth, and the agility to ramp up new applications or services quickly. In traditional IT, scaling can be a major event. In a hyper-converged environment, scaling should be just another part of day-to-day operations, as the sketch below illustrates. And scaling isn't only about data or user growth; it's about growing the business as well. To keep up with competitors and customer demands, deploying new services must be quick and painless, or the business suffers.
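
To make "scaling as a day-to-day operation" concrete, here is a minimal sketch. The names (Cluster, add_node) are hypothetical, not any vendor's actual management API; the point is that growing the pool is a single routine action rather than a project.

    # Illustrative only: 'Cluster' and 'add_node' are hypothetical names,
    # not any vendor's actual management API.
    class Cluster:
        def __init__(self, nodes):
            self.nodes = list(nodes)

        def capacity_tb(self):
            # Storage pools across every node in the cluster.
            return sum(n["storage_tb"] for n in self.nodes)

        def add_node(self, node):
            # In a hyper-converged system this one step is the scaling event:
            # compute and storage grow together and the pool rebalances.
            self.nodes.append(node)
            print(f"{node['name']} joined; pool is now {self.capacity_tb()} TB")

    cluster = Cluster([{"name": "node-01", "storage_tb": 10},
                       {"name": "node-02", "storage_tb": 10}])
    cluster.add_node({"name": "node-03", "storage_tb": 10})  # routine, not a project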

2. Simplicity. It's becoming more difficult and costly to find and retain IT staff with specialised knowledge of specific systems and solutions. Hyper-converged appliances enable you to take more of a generalist approach, and the best solutions are simple enough to look something like this: unbox the appliance, mount it in a rack, plug it in, power it on, run a short deployment wizard and start provisioning VMs. You should be able to get from power-on to provisioning in minutes.
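
As a rough illustration of that power-on-to-provisioning flow, the sketch below scripts a hypothetical first-boot wizard. The step names mirror the description above and are assumptions, not any specific product's deployment tool.

    # A hypothetical first-boot wizard: the step names mirror the flow
    # described above; this is not any specific vendor's deployment tool.
    STEPS = [
        "discover cluster nodes",
        "assign management and storage networks",
        "create the shared storage pool",
        "register with the hypervisor manager",
    ]

    def run_wizard(steps):
        # Each step would normally prompt for a handful of values and apply them.
        for i, step in enumerate(steps, 1):
            print(f"[{i}/{len(steps)}] {step} ... done")
        print("Ready to provision VMs.")

    run_wizard(STEPS)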

3. Mobility. To scale with ease, data needs to be fluid. A hyper-converged solution should allow data to flow between storage tiers to meet SLA requirements. Data also needs to be able to move to new systems to handle events such as system failures and new technology adoption.
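
One common way to implement SLA-driven data placement is to put each volume on the cheapest tier that still meets its latency target. The sketch below assumes illustrative tier names, latency figures and relative costs:

    # Sketch of SLA-driven tiering: place each volume on the cheapest tier
    # that still meets its latency target. All figures are illustrative.
    TIERS = [
        # (name, typical read latency in ms, relative cost)
        ("nvme", 0.2, 10),
        ("ssd", 1.0, 4),
        ("hdd", 8.0, 1),
    ]

    def place_volume(sla_latency_ms):
        # Keep the tiers that satisfy the SLA, then pick the lowest-cost one.
        eligible = [t for t in TIERS if t[1] <= sla_latency_ms]
        return min(eligible, key=lambda t: t[2])[0] if eligible else "nvme"

    print(place_volume(0.5))   # -> nvme (only tier fast enough)
    print(place_volume(2.0))   # -> ssd
    print(place_volume(20.0))  # -> hdd (cheapest tier that meets the SLA)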

4. Agnosticism. To enable data to move, an infrastructure should be built from agnostic components so it can integrate different hardware form factors, media, hypervisors and open source technologies. This leaves you free to change your mind, your business and your resourcing.
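
In software terms, agnosticism usually means hiding vendor specifics behind a common interface, so the surrounding tooling never changes when a component is swapped. A minimal sketch, with hypothetical driver names:

    from abc import ABC, abstractmethod

    # An agnostic infrastructure hides vendor specifics behind a common
    # interface. The driver classes below are hypothetical placeholders.
    class HypervisorDriver(ABC):
        @abstractmethod
        def create_vm(self, name, cpus, ram_gb):
            ...

    class VendorADriver(HypervisorDriver):
        def create_vm(self, name, cpus, ram_gb):
            return f"vendor-a VM '{name}' ({cpus} vCPU, {ram_gb} GB)"

    class VendorBDriver(HypervisorDriver):
        def create_vm(self, name, cpus, ram_gb):
            return f"vendor-b VM '{name}' ({cpus} vCPU, {ram_gb} GB)"

    def provision(driver: HypervisorDriver):
        # Calling code never changes when you swap hypervisors.
        print(driver.create_vm("web-01", cpus=4, ram_gb=16))

    provision(VendorADriver())
    provision(VendorBDriver())  # change your mind without changing your tooling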

5. Availability. It's the little things that can sometimes cause the biggest problems, and downtime is not an option. Continuous availability requires a look under the bonnet to see how the components handle failure. Is there striping across the disks and systems? What's the reliability level, and is it proven? And is there component redundancy?
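
Striping with redundancy is worth understanding in miniature. In a RAID-5-style stripe, parity is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors. A small worked example:

    from functools import reduce

    # How single-disk redundancy works in a RAID-5-style stripe: parity is
    # the XOR of the data blocks, so any one lost block can be rebuilt by
    # XOR-ing everything that survives.
    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data = [b"disk0block", b"disk1block", b"disk2block"]
    parity = xor_blocks(data)

    # Simulate losing disk 1, then rebuild it from the survivors plus parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]
    print("lost block recovered:", rebuilt)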

6. Protection. While infrastructure is designed for business continuity, it doesn't guard against human error, surprise audits, natural disasters or ever-changing policies. To cover both short- and long-term data protection, look for features such as RAID or mirroring, replication within a site and between sites, and automated workflows to support disaster recovery. With these in place you can retrieve a lost file, replace a corrupt database, maintain continuity through a device failure or spin up a new site after a disaster.
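
The sketch below ties those layers together in miniature: local snapshots for retrieving a lost file, and site-to-site replication for disaster recovery. All names and mechanics here are illustrative, not a specific product's feature set.

    import copy
    import datetime

    # Minimal sketch of the protection layers described above: local
    # snapshots for file-level recovery and replication to a second site
    # for disaster recovery. Everything here is illustrative.
    class Volume:
        def __init__(self, name):
            self.name = name
            self.files = {}
            self.snapshots = []  # (timestamp, point-in-time copy)

        def snapshot(self):
            self.snapshots.append(
                (datetime.datetime.now(), copy.deepcopy(self.files))
            )

        def restore_file(self, path):
            # Retrieve a lost file from the most recent snapshot.
            return self.snapshots[-1][1][path]

    def replicate(source, target_site):
        # Site-to-site replication: ship the latest state off-site.
        target_site[source.name] = copy.deepcopy(source.files)

    vol = Volume("finance")
    vol.files["/q3/report.xlsx"] = "v1"
    vol.snapshot()
    dr_site = {}
    replicate(vol, dr_site)

    del vol.files["/q3/report.xlsx"]            # human error...
    print(vol.restore_file("/q3/report.xlsx"))  # ...undone from a snapshot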
