Avoiding the AI bottleneck: why data infrastructure matters for high-performance ambitions
By Paul Crighton, VP APJ, Riverbed Technology
Tuesday, 02 December, 2025
In Formula 1, raw horsepower is never enough to win. The cars may boast engines with more than a thousand horsepower, but without finely tuned systems to transmit that power to the track, the advantage is wasted. Telemetry, aerodynamics, tyre management and real-time data analysis are what ultimately turn raw energy into wins.
The same principle applies to artificial intelligence. Around the world, enterprises are investing significant resources in GPUs, large models and training infrastructure, yet many discover that these alone do not translate into competitive advantage. AI generates vast amounts of information that must flow across data centres, clouds and edge environments. Unless those data streams move quickly, securely and reliably, the investment risks being left on the grid.
We call this challenge the AI ‘data tsunami’. According to IDC, global data creation is growing at more than 35% annually, while network capacity is expanding at just 24%. This mismatch creates bottlenecks that can slow or even stall AI projects, a concern borne out locally in Riverbed’s 2025 global survey, in which 92% of Australian enterprises said they regard AI data movement as important.
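To make the mismatch concrete, here is a rough, back-of-the-envelope calculation using only the two growth figures cited above; the time horizon and the assumption that both rates hold steady are illustrative, not a forecast.

```python
# Rough illustration of the data-vs-network growth mismatch described above.
# The growth rates are the IDC figures cited in the article; the steady-rate
# assumption and the time horizon are illustrative only.

DATA_GROWTH = 0.35     # annual growth in data created
NETWORK_GROWTH = 0.24  # annual growth in network capacity

def relative_gap(years: int) -> float:
    """How far data volume outgrows network capacity after `years` years."""
    return ((1 + DATA_GROWTH) / (1 + NETWORK_GROWTH)) ** years

for y in (1, 3, 5):
    print(f"After {y} year(s): data outgrows capacity by {relative_gap(y) - 1:.0%}")
# After 1 year(s): data outgrows capacity by 9%
# After 3 year(s): data outgrows capacity by 29%
# After 5 year(s): data outgrows capacity by 53%
```

In other words, even if both rates merely hold, the share of data that the network cannot comfortably carry compounds every year.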
The bottleneck in motion
When I speak with business and technology leaders in Australia, the story is often the same. Enterprises are investing millions in AI, but only about a third feel ready to operationalise those projects at scale. Often, the bottleneck is the data infrastructure that governs how quickly information can be processed.
Think of the journey of AI data. Terabytes may need to travel from edge devices to the cloud for AI model training, then back out to endpoints, in a constant and iterative process.
Each round produces more data, but if that information doesn’t arrive when needed, insights come too late, decisions are made on stale information, and the value of AI diminishes.
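A quick, illustrative calculation shows why timing matters; the dataset sizes, link speeds and efficiency factor below are assumptions chosen for the sketch, not measurements of any particular environment.

```python
# Back-of-the-envelope transfer times for moving a training dataset from the
# edge to the cloud. Dataset sizes, link speeds and the efficiency factor are
# illustrative assumptions, not measurements.

def transfer_hours(size_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to move `size_tb` terabytes over a `link_gbps` link, where
    `efficiency` approximates protocol overhead and contention."""
    bits = size_tb * 8e12  # terabytes -> bits
    return bits / (link_gbps * 1e9 * efficiency) / 3600

for size_tb in (1, 10, 50):
    print(f"{size_tb:>2} TB: ~{transfer_hours(size_tb, 1):.1f} h at 1 Gbps, "
          f"~{transfer_hours(size_tb, 10):.1f} h at 10 Gbps")
```

On these assumptions, a 50 TB dataset takes roughly a week to cross a 1 Gbps link, which is long enough for the insight it was meant to feed to go stale.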
Why legacy approaches struggle
Traditional methods for transferring data weren’t designed with this scale or urgency in mind. Direct connections to data centres offer high throughput, but they can be expensive, slow to provision and inflexible. Virtual private networks, by contrast, are typically cheaper and easier to set up, but they are capped in bandwidth and variable in performance.
Neither model is well-suited to the demands of petabyte-scale transfers or the latency sensitivity of AI-driven applications. As a result, organisations can end up with powerful AI ‘engines’ idling in place, starved of the fuel they need to perform.
Building infrastructure for data in motion
What’s needed is an architecture designed for data in constant motion. This starts with optimisation: reducing the size of data transfers without losing fidelity, so that more information can flow through existing pipes in less time. It also means minimising latency, especially when data must cross long distances, by streamlining protocols and ensuring traffic is prioritised intelligently.
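As a minimal sketch of the optimisation idea, the snippet below losslessly compresses a repetitive telemetry-style payload before it would cross the network; the payload and compression settings are assumptions for illustration, and commercial acceleration products combine this with techniques such as deduplication and protocol tuning.

```python
# Minimal sketch: shrink a payload losslessly before it crosses the network,
# then restore it on arrival. The simulated telemetry payload and the
# compression level are illustrative assumptions.

import json
import zlib

# Repetitive records compress well, which is typical of telemetry streams.
records = [{"sensor": i % 10, "status": "ok", "reading": 21.5} for i in range(10_000)]
payload = json.dumps(records).encode("utf-8")

compressed = zlib.compress(payload, level=6)
restored = json.loads(zlib.decompress(compressed))

print(f"original:   {len(payload) / 1024:.0f} KiB")
print(f"compressed: {len(compressed) / 1024:.0f} KiB "
      f"({len(compressed) / len(payload):.1%} of original)")
assert restored == records  # no fidelity lost
```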
Security is another dimension. As data travels between on-premises, cloud and edge environments, it is exposed to risk. Protecting information in transit requires encryption that is robust against today’s attacks and resilient to emerging threats.
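The sketch below shows one common way to protect data in transit, assuming a hypothetical ingest endpoint (data.example.com); it simply enforces certificate verification and a modern TLS version, and real deployments would layer on mutual authentication and key management.

```python
# Minimal sketch of encrypting data in transit with TLS. The endpoint
# data.example.com is a hypothetical placeholder, not a real service.

import socket
import ssl

context = ssl.create_default_context()             # verifies certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse older protocol versions

with socket.create_connection(("data.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="data.example.com") as tls_sock:
        print("Negotiated:", tls_sock.version(), tls_sock.cipher()[0])
        tls_sock.sendall(b"telemetry batch goes here")
```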
Finally, AI workloads are highly dynamic. Today’s training pipeline may run in one cloud region; tomorrow’s may shift to another. Data infrastructure must adapt fluidly, scaling capacity up or down and shifting across environments without adding friction.
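As a toy illustration of that fluidity, the sketch below probes two hypothetical region endpoints and routes the next batch to whichever responds fastest; a production scheduler would also weigh cost, data-residency rules and available GPU capacity.

```python
# Toy sketch: send the next workload to whichever region currently answers
# fastest. The region endpoints are hypothetical placeholders.

import socket
import time

REGIONS = {
    "ap-southeast-2": "storage.sydney.example.com",
    "us-west-2": "storage.oregon.example.com",
}

def probe_latency(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Seconds to open a TCP connection, or infinity if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")

best = min(REGIONS, key=lambda region: probe_latency(REGIONS[region]))
print(f"Routing this batch to {best}")
```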
Turning speed into advantage
When data moves faster and more securely, the benefits compound. Time-to-insight shortens, which can accelerate innovation cycles. A new product idea informed by AI doesn’t sit idle waiting for datasets to sync; it can move quickly into testing and then to market. Decision-making improves as leaders work with the most current data, reducing the risk of basing strategy on outdated signals.
There are also tangible financial impacts. Delays in data movement introduce hidden costs — longer processing times, repeated transfers, or compliance risks if sensitive data isn’t handled properly. By ensuring that information flows efficiently, organisations avoid these overheads.
Most importantly, faster and more reliable data movement enables entirely new classes of use cases. Real-time fraud detection depends on analysing transactions as they happen, not hours later. Autonomous systems rely on split-second updates to function safely. Personalised digital experiences require immediate analysis of behavioural signals to adapt content or services on the fly. Without infrastructure that keeps pace with live data flow, these use cases aren’t viable.
Winning the race
The lesson from Formula 1 is clear: championships are won not by the teams with the biggest engines, but by those who master the flow of data.
The same is true for AI. The organisations that rise to the top will be those that treat data infrastructure as strategically as they treat compute, ensuring their systems can move information securely, flexibly and at speed.