Three steps for implementing cloud services


By David Oakley*
Tuesday, 20 August, 2013


More and more companies are introducing cloud services. However, few plan their adoption with a long-term view or ask themselves even basic questions: what does the cloud mean for the business over the short, medium and long term? How can IT leverage the benefits of the cloud and integrate them into the existing infrastructure in an orderly manner?

Providing cloud-based applications requires a certain amount of work, but it does not need to cause major problems, let alone chaos. The technology can be introduced into the IT infrastructure in a way that keeps it controllable and strategically manageable. Many companies have neglected this in the past and now refer to the resulting confusion as "virtual sprawl". To realise the obvious advantages the cloud offers, companies must adopt a clearly defined strategy for using and managing it. Outlined below are three key steps to assist organisations in implementing cloud services and to ensure the best practices vital to success. They have been developed on the basis of the many projects already executed and apply to large and small companies alike, whether they embrace new technology early or take a more pragmatic approach:

First step: virtual infrastructure

Hardware becomes software: During the first step of cloud adoption, physical infrastructure is replaced by virtual infrastructure. A software layer generates a virtual instance of the hardware, and this software is easier to replace and easier to control than the hardware itself. Virtualisation technology is not new, however: IBM was offering virtual machine hypervisors as part of its portfolio back in the 1970s, and today all major IT manufacturers offer virtualisation or cloud products. The most frequent use of virtualisation is server consolidation, which reduces the number of physical machines. Virtualisation is then gradually extended to create private clouds that offer internal users virtual capacity and applications on demand.

Cloud computing extends virtualisation to the public network. Activity in the cloud naturally focuses on benefits that can be realised quickly. These include using cloud capacity to provide basic infrastructure for computing workloads and various types of reusable applications such as databases. This type of cloud usage is usually referred to as infrastructure as a service (IaaS). The main advantage of these offerings lies in the rapid deployment of capacity and applications in minutes rather than weeks or months. Furthermore, they can be deployed automatically via application programming interfaces (APIs), they scale flexibly and they only incur costs when they are actually used.
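
As a concrete illustration of such API-driven provisioning, the sketch below requests a virtual machine from a hypothetical IaaS REST endpoint; the URL, token, payload fields and response field are placeholders rather than any particular vendor's API:

    import requests

    # Hypothetical IaaS endpoint and token - placeholders, not a specific provider's API.
    IAAS_API = "https://cloud.example.com/api/v1"
    TOKEN = "replace-with-api-token"

    def provision_vm(name, cpus=2, ram_gb=4, image="ubuntu-lts"):
        """Ask the provider to create a virtual machine and return its identifier."""
        response = requests.post(
            IAAS_API + "/servers",
            headers={"Authorization": "Bearer " + TOKEN},
            json={"name": name, "cpus": cpus, "ram_gb": ram_gb, "image": image},
        )
        response.raise_for_status()           # stop if the provider rejects the request
        return response.json()["server_id"]   # assumed field in the provider's response

    if __name__ == "__main__":
        print("Provisioning started, server id:", provision_vm("test-environment-01"))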

The characteristics of the cloud have already produced two common usage scenarios: self-service creation of environments for testing, development, training and demonstrations, and fast processing of high-performance workloads and application load tests using virtual machines provided on an ad hoc basis. Basic use of the cloud also poses challenges, however. Sometimes it is used simply to bypass the internal IT department. This seems to offer a faster way of getting things done, but only until the IT assets created need to be managed professionally. The resulting capacity and applications can get out of control or sit unused; if they are managed improperly they can pose a security risk, and often they are not even protected against failures.

Second step: dynamic applications

During the second phase, cloud applications begin to monitor their own utilisation automatically. As data volumes grow, they use cloud APIs to duplicate their contents and distribute processing across the extended infrastructure. A common approach here is runbook automation, in which scripts create virtual machines automatically, install the necessary software and activate it for production. The combination of monitoring inside the application and scripting outside it allows computing capacity to be extended and reduced dynamically.
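
A minimal sketch of this pattern is shown below, assuming a hypothetical monitoring feed and provisioning interface; the thresholds, method names and check interval are illustrative, not a specific product's API:

    import time

    # Illustrative thresholds - real values depend on the application and its service levels.
    SCALE_UP_CPU = 0.80    # add capacity above 80% average CPU
    SCALE_DOWN_CPU = 0.30  # release capacity below 30% average CPU
    CHECK_INTERVAL = 60    # seconds between checks

    def autoscale(monitor, provisioner):
        """Grow or shrink the pool of virtual machines based on measured utilisation.

        `monitor` and `provisioner` stand in for the cloud provider's monitoring
        and provisioning APIs; both are assumptions made for this sketch.
        """
        while True:
            cpu = monitor.average_cpu()          # e.g. mean CPU across the VM pool
            if cpu > SCALE_UP_CPU:
                vm = provisioner.create_vm()     # runbook: create VM, install software
                provisioner.join_pool(vm)        # activate it for production traffic
            elif cpu < SCALE_DOWN_CPU and provisioner.pool_size() > 1:
                provisioner.retire_vm()          # drain and release an idle machine
            time.sleep(CHECK_INTERVAL)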

The economic benefits of this approach are significant. Building large data centres sized to handle peak data volumes costs a lot of money. With a dynamic application architecture this is not necessary: once success arrives and data traffic increases accordingly, companies can deploy additional computing capacity when it is actually needed, pay for it only for a short period and scale back down when the data volume drops. In large projects, capacity can scale from several hundred to several thousand servers - and then be drawn down again.
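
A back-of-the-envelope comparison makes the point; the server counts, hours and hourly rate below are invented purely for the arithmetic:

    # Peak load: 1000 servers, but only for 48 hours per month.
    # Baseline load: 50 servers around the clock.
    HOURS_PER_MONTH = 730
    on_demand_rate = 0.50   # assumed price per server-hour

    always_on = 1000 * HOURS_PER_MONTH * on_demand_rate            # capacity sized for peak
    elastic = (50 * HOURS_PER_MONTH + 950 * 48) * on_demand_rate   # pay for the burst only

    print("Sized for peak:  $%.0f per month" % always_on)   # 365,000
    print("Elastic scaling: $%.0f per month" % elastic)      # 41,050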

Third step: flexible data centre

The third level of cloud implementation is only reached by companies that place the highest demands on their server environments - mainly companies in financial services, energy and online advertising, with workloads that rise and fall very quickly and need to be processed immediately. Here, high scalability applies not only to the applications but to the entire data centre and all of its components, including servers, storage, databases, applications and the network.

Designing a flexible data centre is a difficult challenge, but it can be met by building on virtualised infrastructure. In the most flexible data centres, virtual capacity is currently deployed internally; in the future this will most likely be handled more often via the public cloud. Control functions monitor workloads arriving from various sources and scale the data centre, and all of the applications involved in processing them, accordingly.

Strategic challenges

Introducing the cloud solution that best meets one's own needs is not only a technological challenge, however; strategic requirements must also be taken into consideration. Many companies view their cloud solution as a kind of 'ERP for IT', meaning that IT is run as a business for the enterprise. The IT organisation needs to know what its services cost, be able to see the status of its projects accurately and plan capacity accordingly. This also includes creating and publishing a comprehensive portfolio and an actionable catalogue of the services the IT department can provide. The cost model covers labour, materials and real estate, and also assesses the long-term costs of network bandwidth, maintenance, licensing and support. A single system of record for deploying, updating, repairing, supporting and managing business services and the underlying infrastructure can be used here, giving the various divisions end-to-end transparency of IT activities, priorities and the related risks.
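
One way to picture an entry in such a service catalogue, together with its cost model, is as a structured record; the fields and figures below are illustrative assumptions rather than a prescribed schema:

    # Illustrative service catalogue entry - field names and costs are assumptions.
    catalogue_entry = {
        "service": "Development test environment",
        "description": "Self-service VM with standard build, 4 vCPU / 8 GB RAM",
        "owner": "Infrastructure team",
        "delivery_time": "30 minutes",
        "cost_model": {
            "labour": 40.00,         # monthly support effort per instance
            "licensing": 25.00,      # OS and tooling licences
            "infrastructure": 60.00, # compute, storage and network share
        },
        "lifecycle": "auto-expire after 30 days unless renewed",
    }

    monthly_cost = sum(catalogue_entry["cost_model"].values())
    print("Chargeback per instance: $%.2f/month" % monthly_cost)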

An enterprise IT cloud needs to manage all IT activities and the business operations they support, along with IT management processes such as life cycle management and operational aspects such as budgets. This calls for various data sources to be integrated - for instance email, collaboration tools, human capital management software, ERP software or production systems. It therefore becomes increasingly important for the IT department to keep control of the various influences on the cloud, because it has to ensure that the appropriate services are available around the clock. More and more external partners now offer cloud services, so the IT department requires detailed insight into the performance, usage and condition of each provider's cloud offerings. This can only be achieved with a comprehensive monitoring infrastructure whose analyses are easy to run and use.
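
In its simplest form, such monitoring amounts to polling each provider's status endpoint and recording availability and response time, as in the sketch below; the provider names and URLs are placeholders:

    import time
    import requests

    # Placeholder status endpoints for external cloud providers.
    PROVIDERS = {
        "iaas-provider": "https://status.iaas.example.com/health",
        "saas-crm": "https://status.crm.example.net/health",
    }

    def check_providers():
        """Return availability and response time for each external cloud service."""
        results = {}
        for name, url in PROVIDERS.items():
            start = time.monotonic()
            try:
                response = requests.get(url, timeout=5)
                results[name] = {
                    "available": response.ok,
                    "latency_ms": round((time.monotonic() - start) * 1000),
                }
            except requests.RequestException:
                results[name] = {"available": False, "latency_ms": None}
        return results

    if __name__ == "__main__":
        print(check_providers())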

The three pillars of the cloud

According to current best practice, the optimal enterprise IT cloud rests on three pillars: security, reliability and flexibility. Security includes compliance and control options as well as appropriate validation of third-party vendors and certifications. Reliable service quality includes disaster recovery and high availability via redundant data centres in different regions. In enterprise cloud solutions, flexibility takes the form of innovation, current first-class technology, extensibility and an easy-to-use interface that can be customised to individual needs.

Nevertheless, companies should not only look at how secure, reliable and flexible their provider's cloud offerings are; they should also look behind the scenes at the architecture of the cloud infrastructure. How innovative is it? How quickly can it be extended to include new applications or be modified to meet individual needs? Does it meet current standards? How well does it handle different workloads? How scalable is the data centre? The most important question for the service provider, however, is "Can you handle my business?". It covers many individual areas that can differ quite significantly - for instance, whether a user can modify an application on their own, or whether the company can develop applications quickly itself. The location of the provider's data centre also matters: because of latency, the rule of thumb is that the shorter the distance, the faster the connection, although solutions are available for globally active customers that speed up network transmission even over long distances. Furthermore, the cloud provider's connection to the customer needs to be redundant, with at least two alternative routes - direct and via VPN, for example.
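
The physics behind that rule of thumb is easy to estimate: light travels through optical fibre at roughly 200,000 km/s, so distance alone puts a floor under round-trip time before any routing or processing overhead is added. The route distances below are examples:

    # Light propagates through optical fibre at roughly 200,000 km/s (about 2/3 of c).
    FIBRE_SPEED_KM_PER_MS = 200.0

    def min_round_trip_ms(distance_km):
        """Theoretical lower bound on round-trip latency for a given distance."""
        return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

    for route, km in [("Sydney-Melbourne", 880), ("Sydney-Singapore", 6300), ("Sydney-London", 17000)]:
        print("%s: at least %.0f ms round trip" % (route, min_round_trip_ms(km)))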

Conclusion

The cloud is still considered a revolution, but it no longer needs to be a problem. With the appropriate monitoring, management, integration and automation solutions, the enterprise IT cloud can be implemented without creating chaos - and companies can then enjoy the benefits the cloud offers with ease.

*David Oakley is Country Manager, ANZ, at ServiceNow.
