Cloud not quite ready for enterprise-class services


By Anthony McLachlan, VP Asia Pacific, Ciena
Monday, 18 March, 2013



We’re all talking about the cloud, but is it really ready for the enterprise?

Cloud-based storage is all the rage these days.

If you think you’ve read that line somewhere before, you’d be right. Only now it’s being said not just in the boardroom but out on the shop floor.

Cloud-based storage options abound, from corporate services like Amazon’s Simple Storage Service (S3) to consumer-oriented, easy-to-use cloud storage from Dropbox, Google and Apple. ‘Storage’ has also evolved beyond the music- and photo-only services of the past to include books, videos and even business-oriented information such as applications, documents, contacts, calendars and email.

In fact, storage has become the ‘killer app’ of the cloud in a market estimated to be worth more than $14 billion by 2014.

CIOs are understandably looking for ways to keep their organisations’ information available while reducing IT expenditure. Combine the sheer amount of data organisations generate with the ballooning cost of retaining it, and it’s easy to see why the cloud storage market is booming.

And so, as the cloud evolves to serve the enterprise-class needs of larger, business-critical data initiatives such as disaster recovery, workload migration and virtualisation, a secure, reliable, high-performance connection to the cloud becomes far more critical.

According to a reference chart published by Amazon, anything above 100 GB should be physically shipped rather than transferred electronically when the connection on hand is a T1-class (1.5 Mbps) line. To put this in perspective, 100 GB is about the size of a 2004-era laptop PC disk, so that’s not a lot of information by today’s standards.

By contrast, Amazon estimates that sending 1 TB over a 1 Gigabit Ethernet (GbE) network would take less than one day. But for transfers exceeding 60 TB over a 1 GbE network, Amazon again recommends physical transport. So even at gigabit speeds, there are still serious limitations on cloud data transfer.
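To sanity-check those figures, the back-of-envelope arithmetic can be sketched in a few lines of Python (the 80% sustained utilisation below is an illustrative assumption, not Amazon’s exact model):

    # Rough transfer-time calculator: days to move size_tb terabytes
    # over a link of rate_gbps gigabits per second. The 80% sustained
    # utilisation figure is an illustrative assumption.
    def transfer_days(size_tb, rate_gbps, utilisation=0.8):
        bits = size_tb * 1e12 * 8                     # terabytes -> bits
        seconds = bits / (rate_gbps * 1e9 * utilisation)
        return seconds / 86400                        # seconds -> days

    print(transfer_days(1, 1))           # 1 TB over 1 GbE: ~0.1 days
    print(transfer_days(60, 1))          # 60 TB over 1 GbE: ~7 days
    print(transfer_days(0.1, 0.001544))  # 100 GB over a T1: ~7.5 days

The numbers line up with Amazon’s guidance: a terabyte crosses a 1 GbE link comfortably within a day, while 60 TB takes about a week - the point at which shipping disks starts to win.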

Getting the picture?

It’s fair to say the cloud is ripe for more transactional services - Salesforce.com, for example, where large-volume data transfers are for the most part unnecessary, and Google Apps for Business, where data is generated and mostly kept in the cloud.

But the cloud is not quite ready for enterprises that generate hundreds of thousands of terabytes - hundreds of petabytes - of data every year. Transferring this data to the cloud over today’s networks is, for the most part, impractical.

The figures from Amazon suggest that infrastructure as a service (IaaS) applications like storage, and new applications like virtual machine mobility, are going to require more scalable bandwidth to get the work done in a reasonable amount of time. What we need to make this happen is a different approach to network architecture.

No walls, more traffic

You may already be familiar with the concept of the ‘Data Centre Without Walls’ (DCWoW). If not, here it is in a nutshell: DCWoW federates enterprise and provider data centres into a virtualised, multi-data-centre architecture with dynamic connectivity between each data centre. It promotes service resiliency and performance independent of the user’s location, and it does all this more efficiently than is currently possible with isolated enterprise data centre architectures.

In the DCWoW, the cloud backbone network that connects data centres is an important performance enabler. This is easily demonstrated by looking closely at one key operational component: the migration of workloads between enterprise and public cloud data centres in response to changing requirements.

Live virtual machines (VMs) can be moved from one data centre to the other, accompanied by storage image transfers. Alternatively, additional VMs can be created in the provider data centre, requiring connections between VMs or between VMs and storage.

In all cases, any inadequacies of the network connection in terms of bandwidth, latency or quality will have a detrimental effect, either on the practicality and convenience of the operation or on the performance of the application.

To address this demand for bandwidth and performance, we need a better, high-capacity cloud backbone. Making this bandwidth available on demand also makes it more affordable for use cases like workload mobility, availability and collaboration. For example, a 1 Gbps cloud service could scale up to 10 Gbps - enabling more than 30 TB to be transferred in a day and easily addressing the bulk VM migration use case - and then scale back down to 1 Gbps once the migration is over.
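Turning that arithmetic around shows what the burst buys (again a sketch, with the same illustrative 80% utilisation assumption as before):

    # Time for a 30 TB bulk VM migration at two link rates, assuming
    # 80% sustained utilisation (illustrative, as before).
    def transfer_hours(size_tb, rate_gbps, utilisation=0.8):
        return size_tb * 1e12 * 8 / (rate_gbps * 1e9 * utilisation) / 3600

    print(transfer_hours(30, 1))   # ~83 hours at a static 1 Gbps
    print(transfer_hours(30, 10))  # ~8.3 hours during a 10 Gbps burst

At a static 1 Gbps, the migration monopolises the link for the better part of four days; bursting to 10 Gbps finishes it overnight, after which the extra capacity is released.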

In addition to more flexible bandwidth, cloud services need to provide data security, service reliability and low network latency.

But how are these cloud service performance characteristics best achieved?

The first line of attack is the choice of network architecture itself. High-capacity Ethernet architectures provide high traffic performance and cost-effective scale: connection latencies are minimised and low-loss connection performance is assured, while IP ‘packet touch’ operations are minimised throughout the network, reducing equipment costs.

A second line of attack - multitenancy - becomes increasingly important as cloud service providers become more successful and must therefore support larger numbers of customers and customer applications. With that success, physical resource requirements for both the data centre and the cloud backbone skyrocket, and so those resources need to be shared among multiple tenants.

Server and storage virtualisation promotes efficient pooling of resources, which is critical to delivering economies of scale in the cloud. However, virtualisation complicates the problem of network performance control when dedicated network resources must be applied to customer applications so that targeted application performance levels can be engineered and assured.

You can solve this problem by creating a new performance-on-demand operational paradigm for the network. This allows network service provisioning processes to be virtualised, fully automated and driven directly by data centre operations systems.

For example, an enterprise-to-provider cloudburst operation can automatically provision and reserve the network resources needed to support it, either immediately or on a deferred, scheduled basis. Outside of this reservation, those network resources are available to other operations, applications and users; no ‘slack’ capacity is needed. While the operation is active, it has fully dedicated network resources, allowing it to function precisely as needed and expected.
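To make the idea concrete, here is a hypothetical sketch of such a reservation as a data centre operations system might model it - every name below is invented for illustration and implies no particular vendor’s API:

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical performance-on-demand reservation record; purely
    # illustrative of the scheduled, scoped bandwidth grant described above.
    @dataclass
    class BandwidthReservation:
        src_dc: str          # enterprise data centre
        dst_dc: str          # provider data centre
        rate_gbps: float     # dedicated rate while the reservation is active
        start: datetime      # immediate or deferred start
        duration: timedelta  # window after which capacity returns to the pool

        def active_at(self, t):
            return self.start <= t < self.start + self.duration

    # A cloudburst scheduled for an overnight migration window; outside
    # that window the capacity is available to other tenants.
    burst = BandwidthReservation('enterprise-syd', 'provider-syd', 9.0,
                                 datetime(2013, 3, 18, 22, 0),
                                 timedelta(hours=9))
    print(burst.active_at(datetime(2013, 3, 19, 2, 0)))  # True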

Performance-on-demand thus reconciles network performance with network efficiency and cost-at-scale for the cloud, enabling cloud infrastructure to achieve its full potential in delivering cost-effective, enterprise-class services.
