Opinion

UK firms can learn from big tech’s AI agent teething issues

By
Bharat Mistry

In the space of a few months, AI agents have moved from theory into daily enterprise use. What began as experimental tooling is now being embedded into workflows, decision-making and even system operations. Open-source projects like OpenClaw have been hailed by figures such as Jensen Huang as the next leap forward, while players like Anthropic are racing to extend agent capabilities with tools that can actively interact with user environments.

But as ever with emerging technology, progress is not linear. Alongside rapid adoption, we are seeing a growing list of issues inside some of the world’s most advanced technology firms. Operational disruptions, unintended system behaviours, and cases of sensitive data exposure have all been linked to early deployments of AI agents. These are not edge cases. They are early warnings.

The shift underway is significant. The first wave of generative AI helped employees write, analyse and create. It remained largely assistive. The current wave is fundamentally different. AI agents are designed to act. They can execute tasks, orchestrate workflows, and interact across multiple systems without constant human input. This introduces a new operational dynamic and, with it, a new risk profile.

At the same time, organisations are re-architecting their environments to support this shift. Traditional IT estates are giving way to what many now describe as AI factories, designed to produce intelligence at scale through continuous data processing, model training and automated outputs. These environments rely on high-performance compute, complex data pipelines and tightly integrated platforms. They are powerful, but also inherently more complex and exposed.

The challenge is that governance, security and oversight are not keeping pace.

Recent Trend AI research highlights a widening gap between AI adoption and organisational readiness. While many firms are investing heavily in AI initiatives, far fewer have established the frameworks needed to manage them safely. This disconnect is already showing up in failure rates. Industry estimates suggest that around 80% of corporate AI projects never move beyond proof of concept, and roughly 30% of generative AI projects are expected to be abandoned altogether, often due to poor data quality, unclear business value, or unresolved security concerns.

These are not simply technical issues. They point to deeper structural problems. Data remains fragmented across organisations, limiting the effectiveness of AI systems that depend on consistent, high-quality inputs. Leadership teams are often misaligned on priorities, while skills gaps continue to slow progress. In many cases, the infrastructure required to support production-scale AI has not yet been built.

Security, however, is emerging as the most critical fault line.

AI agents expand the attack surface in ways that traditional tools do not. By design, they require access to systems, data and processes in order to function. If compromised, they can provide attackers with a pathway to move laterally, escalate privileges or extract sensitive information at speed. Even without malicious interference, poorly governed agents can expose data inadvertently or trigger unintended actions across connected systems.

The incidents seen in large technology firms should be viewed in this context. These organisations have some of the most advanced capabilities in the world, yet they are still encountering operational and security challenges. For UK businesses early in their AI journey, the implications are clear. The risks are not theoretical, and they will not be solved by scaling adoption alone.

What is needed now is a more deliberate approach.

First, governance must be treated as a foundational requirement, not an afterthought. This means establishing clear policies around how AI agents are deployed, what they can access, and how their actions are monitored. Visibility is essential. Organisations need to understand not just where agents are operating, but how they are behaving in real time.

Second, data strategy needs to be addressed. AI systems are only as reliable as the data they consume. Without consistent standards, quality controls and integration across silos, the outputs of even the most advanced models will remain unpredictable.

Third, security teams must adapt their approach. Traditional perimeter-based models are no longer sufficient in environments where autonomous systems interact across multiple layers of infrastructure. Protection needs to extend into the AI stack itself, covering models, pipelines and agent behaviours.

Finally, leadership alignment is critical. AI is no longer confined to innovation teams. It is becoming core business infrastructure. That requires coordinated ownership across technology, security and operations, supported by the right skills and investment.

AI agents will play a defining role in the next phase of enterprise transformation. The opportunity is clear, but so are the risks. The experiences of big tech firms are not failures to be dismissed. They are lessons to be learned. For UK organisations, the advantage lies in acting on those lessons now, before today’s teething issues become tomorrow’s systemic problems.

Written by
Bharat Mistry
Field CTO, Trend AI
April 15, 2026