GenAI: the hype, the hopes and the hard truths
As risks to the global economy mount, so does the pressure to generate massive productivity gains through artificial intelligence. Board directors are demanding: “What’s our AI strategy, and what aren’t we doing that our competitors are?” Meanwhile, top executives are imagining new growth plans funded by the expected savings from AI-driven cost cuts, simplified processes and increased productivity.
Indeed, generative AI (GenAI) has been heralded as a revolutionary power that will transform industries, democratise innovation and eliminate human jobs, including software developers.
But as someone all too familiar with the complexities of software development, I am here to tell you that it ain’t that simple.
There is a paradox at the heart of the AI discussion. GenAI’s biggest promise — its ability to automate repetitive tasks — is often misinterpreted as a zero-sum game where the humans lose and the robots win. This misunderstanding stems from a disconnect between tech-literate leaders and those further removed from the intricacies of development.
Non-techie executives can be guilty of thinking GenAI will replace developers. They see code generation tools like Copilot and extrapolate this to mean software creation is now a mouse click away. We’ve spoken to clients who have seen the headlines and expect these new tools to realise 50% productivity gains by magic. But that’s not what we’ve seen in practice.
Make no mistake: tools like Copilot can aid in a number of tasks and in ways that more than justify the investment. But software is not just about writing code; its true value lies in architecting complex, distributed systems that evolve with business needs and adapt to constant change. This work requires deep domain expertise, creativity and human judgment at levels that GenAI cannot replicate.
It is one thing to train AI to recommend a power drill based on customer needs, but it’s another to trust it with mission-critical systems like banking transactions or analysing large amounts of medical data to help develop personalised treatment plans, where reliability and trust are non-negotiable.
GenAI does have vast potential, and there is considerable excitement about the improvements being made to the models when it comes to reasoning. But to my mind, there remain questions over how useful such improvements will be for software development tasks. Our initial explorations suggest the tools remain better suited to delivering boosts in developer effectiveness and efficiency.
But scaling GenAI for riskier use cases requires careful evaluation and that means humans as well as AI. The crucial thing to note is that AI outputs are non-deterministic: in other words, their results can be unpredictable. Thus, they are not necessarily suitable for applications that demand maximum repeatability and transparency.
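To make the non-determinism point concrete: language models typically generate text by sampling each token from a probability distribution, so the same prompt can produce different outputs on different runs. The toy sketch below (the function name and logit values are purely illustrative, not from any real model) shows how temperature-scaled sampling behaves.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=random):
    """Sample a token index from raw logits using temperature scaling.

    Higher temperature flattens the distribution (more randomness);
    as temperature approaches zero, the choice approaches argmax.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Illustrative logits for three candidate tokens.
logits = [2.0, 1.5, 0.5]

# At temperature 1.0, repeated calls on the SAME input yield different tokens.
samples = {sample_next_token(logits, temperature=1.0) for _ in range(200)}

# Near-zero temperature collapses sampling to the single most likely token.
greedy = {sample_next_token(logits, temperature=0.01) for _ in range(200)}
```

This is why "run it again and check" is not a sufficient safeguard: repeatability has to be engineered in (low temperature, fixed seeds, human review), and even then, explainability remains a separate problem.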
So, calm your farm: it will be some unknown length of time before AI robots can safely replace human decision-making on high-stakes projects. And in that time there needs to be a vast improvement in the reliability, evaluation capability and explainability of GenAI.
And so to the myth that GenAI will mean mass tech job layoffs: yes, it will certainly take over many repetitive tasks, but as we have already discussed, software development is so much more than that. Lose your developers and you lose the creativity, collaboration and problem-solving that only humans can deliver.
A good developer knows they are on a career path that requires constant upskilling and AI is no different in that respect from every previous technological leap, such as the jump from being digital to data-driven. And a good non-tech leader should know better than to be duped by AI marketing hype, and to trust their CTO and tech counsel if they are in doubt. Global tech titans have strong incentives to push companies into over-aggressive AI adoption, just as investors do by buying into the buzz.
Board directors therefore need to remember to do the critical thinking around AI investments that only humans, not robots, can do. They might advise their management to “slow down to speed up”: that is, focus their GenAI strategy on sensible use cases where the returns and lessons learned come easily and the risks to the business’s brand and reputation are relatively low. Anyone who over-invested in the cloud for little of the promised return has already learned this lesson. Boards should also consider investing in the best human capital to guide the robots and keep them in their lanes. In that way, they will build trust with all their stakeholders that their GenAI approach works, is safe and delivers returns.