Inside Australia's drive to repatriate data and workloads

By Jason Honey, Enterprise Account Manager, APAC and ANZ, Hitachi Vantara
Monday, 20 November, 2023

Australia has a reputation for embracing emerging technologies, leading many parts of the world in adoption and production use. We’re seeing that play out at the moment with generative AI, and with public cloud before that.

Being an early adopter of any new technology has rewards and risks. The extent of either is unlikely to be apparent prior to proof of concept. Uncertainty may also carry through to production, in circumstances where it’s perhaps not apparent how well a technology choice or architectural decision will scale.

This is certainly true of Australian enterprise and government use of public cloud. The lessons learned by Australia’s early adopters have helped many organisations, domestically and overseas, to avoid strategic, architectural, configuration and management pitfalls that heightened security risks or resulted in bill shock.

Australia continues to lead on cloud strategy, but its relationship with cloud is changing. Just as it is possible to lead the world on take-up, it’s also possible to lead on a partial walk-back.

Indeed, Australia now leads the world on data and workload repatriation. According to research from 2022, 11% of Australian infrastructure leaders intended to repatriate workloads from the public cloud in 2023. My experience suggests that proportion has likely doubled, heading into 2024.

There is no single driver for data and workload repatriation, although a few stand out.

1. Took the wrong strategy or approach

Some early adopters of public cloud now recognise that going cloud-first or even cloud-only was not the right strategy, and have switched to what might best be described as cloud-appropriate. This is particularly the case for organisations that undertook large-scale lift-and-shift migrations, moving VM-based workloads en masse while deferring transformation and cloud-native refactoring of the application code until later. Some of these workloads are now targets for repatriation, where organisations have found that refactoring them would be cost-prohibitive.

There is no longer a singular march into the cloud. Instead, infrastructure decision-making is increasingly being made on the basis of ‘best fit’ for an application or workload, with reference to its present architecture and usage profile.

2. Too much data movement

Data ingress into the public cloud is cheap or free; egress is where organisations typically run into difficulties. This is particularly the case for data-intensive workloads. Instead of setting up data collection to occur in the cloud, running that data through an analytics service and then egressing just the results, some organisations try to bring back the underlying data used in the decision-making process as well.

While this is understandable — in data science and even BI before that, the ability to go beyond the top line or dashboard-level figures and dig into the detail has always been valued — not all cloud-based structures make this cost-effective.
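The cost asymmetry above can be made concrete with a little arithmetic. The sketch below is illustrative only: the per-gigabyte egress rate and the data volumes are assumptions for the sake of the example, not any provider's actual pricing.

```python
# Illustrative sketch: comparing monthly egress cost when an organisation
# pulls back only analytics results vs. the underlying raw data as well.
# The rate below is an assumed figure, not real provider pricing.

EGRESS_RATE_PER_GB = 0.09  # assumed public cloud egress rate, USD per GB


def monthly_egress_cost(gb_results: float, gb_raw: float,
                        pull_raw_data: bool) -> float:
    """Monthly egress cost; optionally include the underlying raw data."""
    gb_out = gb_results + (gb_raw if pull_raw_data else 0.0)
    return gb_out * EGRESS_RATE_PER_GB


# Egressing only dashboard-level results: 50 GB/month
results_only = monthly_egress_cost(50, 20_000, pull_raw_data=False)

# Also bringing back 20 TB of underlying data each month
with_raw = monthly_egress_cost(50, 20_000, pull_raw_data=True)

print(f"results only:  ${results_only:,.2f}/month")
print(f"with raw data: ${with_raw:,.2f}/month")
```

Even at modest assumed rates, the gap between the two approaches grows linearly with raw data volume, which is why architectures that egress only results tend to stay cost-effective while those that repatriate the detail often do not.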

3. Data sovereignty

A relatively recent phenomenon, this is unlikely to have been as big a constraint on early adopters as it is for cloud users today. Data security, privacy and jurisdictional concerns have driven new rules and government policies around physical data location.

While regulated industries such as finance were forced to address some of these challenges earlier on, today many more industries are considered critical infrastructure. That means new restrictions on outsourced IT arrangements, including cloud. Some repatriation of workloads may be required.

4. Undercooked connectivity into the cloud

While in-cloud performance is limited only by instance size and budget, enterprises still need a way to connect to cloud-based systems, and for data and traffic to move between their cloud and corporate environments. If any segment underperforms or lacks resilience, it can have a major impact on access, usability and internal perceptions of cloud system performance.

For application- or data-intensive workloads that require guaranteed performance, repatriation to a hybrid cloud or on-premises servers may be favourable.

5. Greater governance

One of the marketed advantages of migrating to public cloud early on was the ability to hand off infrastructure management and other operational responsibilities. But not every organisation or workload benefited from this approach. Sure, you no longer have the worry of running the cloud, but you also have very little control or governance over it. This is especially problematic when the cloud provider inevitably makes a backend infrastructure change — to configurations, or to apply patches or other updates — and breaks something, causing an outage.

With some particularly extensive outages occurring lately, there are organisations rethinking whether critical workloads are best hosted in the cloud or in an environment where the customer has more control.

What repatriation looks like

If one or more of these factors is part of your own cloud experience, then repatriation may be on the cards or under active consideration. It’s important to note that repatriation isn’t necessarily to an organisation’s own on-premises data centre, assuming such a space even exists. More likely, repatriation is to managed infrastructure or to a private cloud hosted in a carrier-neutral data centre that also has good direct connections with public cloud. This hybrid cloud set-up is well suited to a ‘cloud-appropriate’ strategic approach.

A potential hurdle with any retreat from public cloud and repatriation effort is at the executive and board level. At minimum, a challenging conversation is assured if executives were convinced of the business case for moving to cloud, only to then be told that aspects didn't stack up or materialise, requiring a backtrack. That said, the current economic climate means leaders are open to optimisation, especially if it curtails costs.

Organisations can be assisted in positioning, planning and executing data and workload repatriation, and in formulating a cloud-appropriate strategy going forward, by aligning with an experienced technology partner that offers consultancy as well as specific blueprints, templates and other guidance.