Opinion

The 7 pitfalls derailing Agentic AI projects

By
Mohammad Ismail

Teething problems are to be expected with any new technology, but AI is proving so difficult that projects are being abandoned in droves. Startlingly, MIT revealed that 95% of projects are failing, and analyst house Gartner has claimed 40% of projects will be cancelled by the end of 2027. The question is, do these figures indicate the technology is over-hyped – which buys into the ‘AI bubble’ theory making the rounds – or is it simply that AI involves a steeper learning curve?

The issues that usually dog innovative tech projects, such as unclear or unrealistic expectations; governance, risk and compliance (GRC); security risks; or integration challenges, all still apply. But with AI these problems are exacerbated by pressure to shorten time to market and deliver ROI and productivity gains quickly. The business finds itself under strain while treading a new path, and the resulting mistakes doom many of these projects to failure.

This isn’t, of course, the first time we’ve had to grapple with a seismic technology shift; the comparison frequently drawn is between AI and the cloud. Yet, whereas cloud migration spanned a decade, giving cloud service models time to mature, AI adoption has accelerated at an unprecedented rate. From the moment OpenAI unleashed ChatGPT onto the world in 2022, adoption has continued to gain momentum, and the technology is now reaching computational milestones every 3-6 months.

Over the intervening years, we’ve seen rival AI models proliferate, each of which has broken new ground. Anthropic, which released Claude, brought us the Model Context Protocol (MCP) in 2024, dispensing with the need for AI agents to use customised code to access data and services. More recently, Google’s Gemini 2.0 Flash introduced conversational interaction and Meta’s Llama enhanced local execution capabilities in 2025. These advances are paving the way for agentic AI, whereby agents are used to autonomously execute tasks.

But such enormous evolutionary steps can also jeopardise project development. While a prototype can be spun up in a few weeks, developing a scalable AI solution takes months, and by the time the business is six months in, it may find the concept is unworkable or the technology has dated. At best, it is then left with a prototype that requires manual workarounds but is too rigid to adapt to different AI use cases without constant retooling.

However, solutions are emerging that can expedite project development and these, together with the following seven lessons from those that have already been down this path, promise to help make future agentic AI development more productive.

  1. Fail fast

Large Language Models (LLMs) are probabilistic, not deterministic, and that makes them difficult to predict. As agentic AI is based upon these models, agents inherit this behaviour, so determining in advance how they will react to inputs and assignments is extremely difficult. The only solution is trial and error, requiring the development team to go through numerous iterations. If the team is able to rapidly cycle through multiple prototypes, they can fail fast and are more likely to succeed.
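Cycling quickly is far easier when every prototype variant is scored against the same battery of test cases. The sketch below is purely illustrative (the test cases, scoring rule and run_agent stub are hypothetical, not drawn from any particular framework), but it shows the shape of an automated check that lets weak variants be discarded early.

```python
# Hypothetical evaluation harness: score several prototype variants against
# the same test cases so weak approaches can be discarded quickly.

TEST_CASES = [
    {"input": "Summarise this invoice", "expected_keyword": "total"},
    {"input": "Extract the due date", "expected_keyword": "date"},
]

def run_agent(variant: str, prompt: str) -> str:
    """Placeholder for a call to whichever agent prototype is under test."""
    # In a real harness this would invoke the LLM-backed agent.
    return f"[{variant}] response mentioning the total and the due date"

def score_variant(variant: str) -> float:
    """Fraction of test cases where the expected keyword appears in the output."""
    hits = 0
    for case in TEST_CASES:
        output = run_agent(variant, case["input"]).lower()
        if case["expected_keyword"] in output:
            hits += 1
    return hits / len(TEST_CASES)

if __name__ == "__main__":
    for variant in ["prompt-v1", "prompt-v2", "tool-augmented"]:
        print(variant, score_variant(variant))
```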

  2. Speed = ROI and time to value

Rapid prototyping doesn’t come cheap, which is why it’s also crucial to minimise the steps involved. Any process that requires going back to the start to build a production system from scratch, figure out how to secure it and ensure it adheres to enterprise governance policies is to be avoided. If the process is well-designed, moving from prototype to production should require little more than pushing the “Go” button to make it available to all target users.

  3. Prioritise validation

There’s a strong temptation to run agentic AI prototypes as ‘open’ to get things up and running quickly. Authentication is either ignored or becomes specific to the project and so is isolated from other authentication processes. When projects bypass established identity systems, they should be considered unprotected, as they effectively blow a hole in initiatives such as zero trust. Integrating from the get-go with the enterprise identity provider (IdP) is therefore vital, allowing authentication to happen continuously and in line with enterprise governance needs.
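As a minimal sketch of that principle, assuming a hypothetical validate_with_idp() stand-in for whatever OIDC or OAuth introspection the organisation’s IdP actually exposes, every agent invocation can be gated on a valid, unexpired enterprise identity rather than a project-specific check:

```python
# Illustrative only: refuse to run any agent action unless the caller
# presents a token validated against the enterprise IdP. validate_with_idp()
# is a stand-in for a real token-introspection call.

import time
from dataclasses import dataclass

@dataclass
class Token:
    subject: str
    scopes: set[str]
    expires_at: float

def validate_with_idp(raw_token: str) -> Token | None:
    """Stand-in for token introspection against the corporate IdP."""
    # A production implementation would verify signature, issuer and audience.
    if raw_token == "demo-token":
        return Token("alice@example.com", {"agents:invoke"}, time.time() + 300)
    return None

def invoke_agent(raw_token: str, task: str) -> str:
    token = validate_with_idp(raw_token)
    if token is None or token.expires_at < time.time():
        raise PermissionError("No valid enterprise identity; refusing to run agent")
    if "agents:invoke" not in token.scopes:
        raise PermissionError("Caller is not authorised to invoke agents")
    return f"Running '{task}' on behalf of {token.subject}"

print(invoke_agent("demo-token", "summarise contract"))
```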

  4. Bake-in security and monitoring

A major bugbear with AI is that it’s seen as IT’s baby, with an umbilical cord to the business management team, so the security team is sidelined. But securing agentic AI is crucial to avoid the agent being subverted and used to harvest data or escalate privileges through prompt injection attacks, business logic abuse, or unauthorised API access. To address such threats, security needs to be baked in from the start rather than bolted on.

There will, of course, be times when an AI agent does things outside the realms of the expected, due to the non-deterministic nature of agentic AI. It’s for this reason that any automated monitoring systems will also need to recognise when to ask for human assistance and escalate to a human in the loop (HITL).
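One way to picture this, purely as a sketch with a made-up risk_score() rule and review queue rather than any real monitoring product, is a hook that executes low-risk actions automatically but holds anything above a threshold for human review:

```python
# Hypothetical monitoring hook: actions above a risk threshold are not
# executed automatically but queued for human-in-the-loop (HITL) review.

RISK_THRESHOLD = 0.7

def risk_score(action: dict) -> float:
    """Toy scoring rule: writes and data exports are riskier than reads."""
    weights = {"read": 0.1, "write": 0.6, "export_data": 0.9}
    return weights.get(action["type"], 0.5)

def execute(action: dict) -> str:
    return f"executed {action['type']} on {action['target']}"

def handle(action: dict, review_queue: list) -> str:
    score = risk_score(action)
    if score >= RISK_THRESHOLD:
        review_queue.append(action)  # escalate to a human reviewer
        return f"held for human review (risk {score:.1f})"
    return execute(action)

queue: list = []
print(handle({"type": "read", "target": "crm/accounts"}, queue))
print(handle({"type": "export_data", "target": "crm/accounts"}, queue))
```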

  5. AI needs ‘lane assist’

Guardrails are essential to ensure AI-driven agents are kept in their lane and don’t begin to go astray. Network access policies and rate-limiting technology can help here by ensuring the AI doesn’t abuse IT systems, ramp up spend or create security issues. The best way to do this is to have flexible, configurable enforcement tools that can be used during both prototype development and production. So, for example, a tool could score agent activity based on the risk it presents to data security and operations, making it easier to spot rogue activity and stop or limit it if it occurs.
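Rate limiting is the simplest of these guardrails to illustrate. The token-bucket sketch below is illustrative only (the limits and tool names are arbitrary), but it shows how a cap on tool calls per minute stops a runaway agent from hammering internal systems or running up spend:

```python
# Illustrative 'lane assist': a simple token-bucket rate limiter that caps
# how many tool calls an agent may make per minute.

import time

class RateLimiter:
    def __init__(self, calls_per_minute: int):
        self.capacity = calls_per_minute
        self.tokens = float(calls_per_minute)
        self.refill_rate = calls_per_minute / 60.0  # tokens per second
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

limiter = RateLimiter(calls_per_minute=30)

def call_tool(name: str) -> str:
    if not limiter.allow():
        return f"blocked: {name} (rate limit exceeded)"
    return f"called: {name}"

for _ in range(3):
    print(call_tool("search_orders"))
```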

  6. Only use approved MCP servers

MCP servers are revolutionary in facilitating access, but unregulated access can pose major risks. Some MCP servers contain flaws that can allow users to see or access other users’ data, or can leak data. It’s for this reason that the business should only allow trusted MCP servers to be used and should seek to build a registry of these. It can also be difficult to see how third-party MCP servers are being used, who in the business is accessing them, how often and what data is being accessed. Plus, criminals are deploying fake MCP servers that look like sanctioned servers or are poisoning those made publicly available. To guard against such issues, it’s necessary to monitor network traffic and look for any instances where agents and MCP servers are behaving in ways that are contrary to corporate policy.
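In practice, the registry can be as simple as an allowlist that every connection attempt is checked against. The snippet below is a sketch under that assumption; the URLs are placeholders, and a real implementation would also log who connected and what data was touched:

```python
# Sketch of an approved-server registry: agents may only connect to MCP
# servers whose URL appears on an allowlist curated by the security team.

APPROVED_MCP_SERVERS = {
    "https://mcp.internal.example.com/crm",
    "https://mcp.internal.example.com/docs",
}

def connect_to_mcp(url: str) -> str:
    if url not in APPROVED_MCP_SERVERS:
        raise PermissionError(f"{url} is not an approved MCP server")
    # A real implementation would open the MCP session here and record
    # who connected, when, and which tools and data were accessed.
    return f"connected to {url}"

print(connect_to_mcp("https://mcp.internal.example.com/crm"))
try:
    connect_to_mcp("https://mcp.look-alike.example.net/crm")
except PermissionError as err:
    print("audit log:", err)
```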

  7. Future-proof investment

The building blocks and operating environments within which agentic AI plays can and will change. MCP, for example, will continue to mature and require updates, and such changes could increase downstream technical debt. If, however, such connections are handled externally in a gateway, the agent can continue to work as designed, with any updates handled centrally. In an ideal world, the security architecture too will be separate from the agent architecture so that each can be maintained and upgraded separately.
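The idea is straightforward indirection: the agent talks only to a stable internal gateway interface, and the gateway owns the MCP and tool connections, absorbing protocol changes centrally. The classes and names below are illustrative, not a real product API:

```python
# Sketch of the gateway idea: the agent never imports MCP libraries directly;
# the gateway owns protocol versions, credentials and policies, so upgrades
# happen in one place without touching the agent.

class ToolGateway:
    """Single place where MCP versions, credentials and policies live."""

    def __init__(self):
        self._backends = {"crm": "mcp-v1"}  # later swapped to "mcp-v2" centrally

    def call(self, tool: str, payload: dict) -> dict:
        backend = self._backends[tool]
        # Translate the agent's stable request into whatever the current
        # backend protocol version expects.
        return {"backend": backend, "result": f"handled {payload['action']}"}

class Agent:
    def __init__(self, gateway: ToolGateway):
        self.gateway = gateway  # the agent depends only on the gateway interface

    def do_task(self) -> dict:
        return self.gateway.call("crm", {"action": "list_open_invoices"})

print(Agent(ToolGateway()).do_task())
```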

Observing these seven rules can help ensure agentic AI projects proceed swiftly, stay on track, reduce risk and remain secure, while also ensuring agents can be effectively managed and maintained. The easiest way of meeting these requirements is to use an AI gateway that doesn’t just route traffic but can create agentic AI-enabled applications and provide protection through policies, so that every stage of AI interaction is monitored. It can even allow the organisation to spin up its own MCP servers and keep track of trusted third-party servers. In fact, adopting such an approach is likely to determine whether an agentic AI project makes it through to production or falls by the wayside.

Written by Mohammad Ismail
January 22, 2026