Turning AI trust into a competitive advantage

From accelerating decision-making to boosting productivity and fuelling innovation, AI has promised to help enterprises do everything from saving time and money to creating value in new ways.
Yet despite this obvious potential, new research from Gong shows that 58% of mid-to-large businesses in the UK and US have stalled their AI projects. Why the pullback?
This isn't a question of capability or budget. It’s a measurable trust barrier sitting between AI pilots and full deployment, one organisations must overcome if they want the full returns from their AI investments. Those building the next wave of advancements have a key role to play.
Adoption stalls despite rising investment
The pressure is on to implement AI, with Gartner reporting that 87% of revenue and sales leaders face top-down pressure to adopt it. Despite this, Gong's research shows that respondents have paused or cancelled nearly half of their planned AI investments due to trust concerns.
And they feel the impact of this hesitation, with three-quarters of respondents now saying they feel they’re falling behind when it comes to maximising AI’s value.
This cycle must be broken. If this trust deficit deepens, organisations will remain stuck in limbo – unable to adopt the advancements they know they need, while others move ahead.
Unpacking the factors driving this lack of trust is the first step to building solutions that address businesses’ biggest concerns. When their top demands are already embedded into new tools, it’s significantly easier to get them on board.
Privacy, explainability, governance
Delving into the real conversations business leaders are having about buying AI solutions, it becomes clear where their biggest pain points lie. Gong Labs data, based on aggregated and de-identified signals from over 25 million sales interactions in 2025, found that one in four calls referenced security, with uncertainty over AI's foundational data and learning mechanisms the most commonly discussed topics.
Business leaders' sensitivity to ensuring the AI solutions they adopt don't expose them to risk is reflected in the survey data:
- Data privacy and security emerged as leaders’ top AI concern, with 34% of respondents calling it a major barrier to AI adoption. AI systems can ‘remember’ and potentially expose sensitive training data, putting financial information, customer data, and intellectual property at risk. Clear guardrails are therefore essential to prevent breaches and compliance gaps.
- Explainability – or the ability to articulate AI-generated outputs – ranked next, with 30% of respondents identifying it as a concern. The need for greater AI clarity has emerged as a significant barrier to trust, with leaders pointing to difficulty understanding how AI arrives at outputs and a lack of vendor transparency as drivers.
- Insufficient governance and oversight frameworks followed as businesses’ next major concern. Many organisations are still lagging when it comes to defining and implementing formal governance processes, such as deployment authorities, data access controls and review processes. If organisations can’t effectively control and manage AI – especially amid the rapid rise of agents – it’s hard to justify scaling those solutions safely.
Removing the trust barrier to unlock growth
With so many leaders feeling like competitors are racing ahead on realising AI’s benefits, it’s clear the trust gap isn’t just an IT issue – it’s a commercial one too.
Building trust in AI requires deliberate action. In practice, organisations are focusing on a few key areas:
- Defined boundaries around what AI can and cannot decide
- Transparency into the data being used and how decisions are made, supported by independent audits
- Security demonstrated beyond baseline compliance such as SOC 2 and ISO, with stronger guardrails that give customers more control over how their data is protected and managed
As trust in AI grows, leaders can invest more confidently in their companies’ futures, allowing the technology to deliver compounding benefits over time.
Responsibility doesn’t stop with buyers, however. Those building AI systems have an equally important role to play in earning trust, not just promising it. That means anticipating the concerns of CIOs and CISOs early, being open about potential risks, and designing controls that organisations can tailor to their own governance and risk appetite, rather than forcing a one-size-fits-all approach.
At their best, AI solutions are built with trust as the foundation. That requires mechanisms for giving organisations genuine visibility into how models work, how they learn, and what data they rely on – while backing that transparency with enterprise-grade security and governance.
When security guardrails are in place, trust stops being a barrier to adoption and becomes an enabler, allowing teams to move faster, innovate with confidence, and scale AI in ways that directly support long-term growth.