Uber Eats reimagined container delivery: Kubernetes is doing the same

Nutanix

By Daryush Ashjari, CTO and Vice President Solution Engineering APJ, Nutanix
Friday, 14 November, 2025



In a recent text exchange with a colleague, autocorrect changed ‘Kubernetes’ to ‘Uber Eats’. At first, this was amusing, but then it got me thinking that there are many parallels between the food delivery service and the container orchestration platform.

For one, the popularity of Kubernetes has skyrocketed in the last few years. And, like Uber Eats, it has become the platform of choice: Kubernetes is now the go-to for deploying new enterprise applications, including the current surge in AI apps, which is only expected to grow in the coming years. That future is just around the corner, but while we prepare for it, enterprises will continue to run existing apps atop existing virtual machine architectures. The golden ticket will be finding a way for Kubernetes and virtual machines to coexist.

In just over a decade, Kubernetes has grown from a pioneering, almost experimental container orchestration platform into the leading deployment method for new applications in enterprise IT. According to the Cloud Native Computing Foundation’s 2024 survey, four out of five organisations already use Kubernetes. Among these self-identified adopters, about 40% of new applications are built on Kubernetes, a figure expected to rise to 80% within three years.

That’s as close to universal as you’ll find in enterprise IT. But unlike in many other industries, new technologies here still have to compete with the ones that came before them. After all, Uber Eats still has to compete with restaurants’ own in-house delivery services. For Kubernetes, the number one rival remains the virtual machine (VM).

Virtual machines are a 2000s baby, arriving on the scene to separate the operating system from the underlying hardware so that workloads could move and scale independently of it. VMs gave organisations operational advantages at the server level, but they didn’t fundamentally alter the way applications were developed and shipped: developers still had to handle much of the testing and deployment work manually.

Later, containerisation came along and changed the game. It lets developers build a standard environment only once: test it to ensure compatibility, capture code changes, then clone it for repeated downstream use, relieving them of the manual task of creating separate environments.

Once developed, these containers can be copied and deployed at scale, running under an orchestration system such as Kubernetes. For developers, this significantly reduces the time needed for integration and testing work and simplifies the process of scaling applications. Some of Silicon Valley’s darlings were early Kubernetes adopters: the likes of Google, Twitter and Airbnb could innovate faster and scale further than they could have using VMs alone.
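For readers who want to see what that looks like in practice, here is a minimal sketch of the ‘build one image, deploy many identical copies’ workflow using the official Kubernetes Python client. It is illustrative only: the image name, labels, replica count and namespace are assumptions, not details from any particular environment.

# Minimal sketch: deploy three identical copies of one pre-built, pre-tested
# container image. Assumes the official `kubernetes` Python client and a
# kubeconfig with access to a cluster; all names below are illustrative.
from kubernetes import client, config

config.load_kube_config()  # read the local kubeconfig for cluster credentials

labels = {"app": "web"}
container = client.V1Container(name="web", image="example.com/web-app:1.0")

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # three clones of the same tested environment
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

Scaling up later is a one-line change to the replica count; the platform, not the developer, takes care of placing and restarting the copies.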

While cloud-native computing is considered the default for most organisations today, it sometimes sits at odds with existing tools, including VMs. Over the past 20 years, enterprises have invested billions of dollars in enterprise apps built on VMs. Unsurprisingly, they’re not going to rip them out overnight and replace them with cloud-native equivalents, at least not without considering how that deep investment can be repositioned.

This doesn’t mean they won’t use cloud-native architecture for new developments. It simply means that existing applications are likely to continue running on VMs, which means that VMs and containers will have to work together — just like Uber Eats has to coexist with other delivery partners, even if it is the most popular platform.

Kubernetes and VM architectures need to coexist in order for modern enterprises to function efficiently — and they can. With the right platform choice, Kubernetes and VMs can actually both live on the same industry-standard x86 machines.

Many organisations run each environment on separate physical machines, but if you can run them together, why wouldn’t you? VMs and Kubernetes can coexist on the same hardware, and doing so brings several benefits: better hardware utilisation, simpler management and integration, stronger security and easier troubleshooting.

There are two ways to combine virtualised and containerised workloads on the same hardware: either run Kubernetes within VMs, or run VMs with Kubernetes on bare metal using unified development platforms.
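To make the second option concrete, KubeVirt is one open-source project that lets Kubernetes manage virtual machines as first-class workloads alongside containers. The short sketch below, which assumes a cluster with KubeVirt installed and the official Kubernetes Python client, lists the VMs and pods sharing a single namespace; the namespace and any names returned are illustrative.

# Sketch: list virtual machines (via the KubeVirt custom resource) and
# container pods living side by side in one Kubernetes namespace.
# Assumes KubeVirt is installed in the cluster; "default" is illustrative.
from kubernetes import client, config

config.load_kube_config()

vms = client.CustomObjectsApi().list_namespaced_custom_object(
    group="kubevirt.io", version="v1", namespace="default", plural="virtualmachines"
)
pods = client.CoreV1Api().list_namespaced_pod(namespace="default")

print("Virtual machines:", [vm["metadata"]["name"] for vm in vms["items"]])
print("Container pods:  ", [pod.metadata.name for pod in pods.items])

Either route keeps a single control plane in view, which is what makes the unified management, security and troubleshooting benefits described above possible.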

For organisations considering a coexistence strategy, it’s important not to rush the process. A gradual transition lets IT personnel with VM skills continue managing existing environments while upskilling for Kubernetes. Running Kubernetes in VMs gives companies an enterprise-grade platform for VMs alongside a pure cloud-native environment for containers. It also opens the door to the full range of Kubernetes features.

The benefits of Kubernetes are clear, so clear in fact that they evidently represent the future of enterprise IT. Companies continuing to build Kubernetes environments need to take the time to consider the right approach for their enterprise, because it may be the difference between being squeezed by the Kubernetes-VM transition and leading the way to greater efficiency using existing clusters.

While Uber Eats didn’t invent the concept of food delivery, it did revolutionise it. Likewise for Kubernetes: those who don’t embrace it will be left behind by their competitors.

