60% of UK Finance Professionals Use AI Every Day, and Most Have No Real Governance in Place

A survey of UK finance and accountancy professionals has revealed a sector that has embraced artificial intelligence at remarkable speed, but has not built the compliance structures needed to use it safely.
The findings, published by Cloud2Me, the UK's leading hosted desktop provider for accountancy firms, were gathered at a Finance, Accounting, and Bookkeeping event. They reveal a profession undergoing rapid, largely informal digital transformation in a sector where data accuracy and security are foundational requirements.
Daily AI Usage Is Now the Norm
The headline numbers are striking. 74% of UK finance professionals reach for AI tools at least several times a week. 60% use them every single day. ChatGPT and Microsoft Copilot together account for 55% of tool usage across the profession, though multi-tool approaches are common.
Professionals are mixing and matching platforms depending on the task, building personalised workflows rather than relying on a single approved system. For many practitioners, AI has moved firmly into the category of essential professional infrastructure. The question is no longer whether it belongs in accountancy. It is whether the profession has built the structures to deploy it responsibly.
Convenience Is Winning Over Compliance
One of the most significant findings concerns how professionals are actually choosing the tools they use. When asked about their selection criteria, 40% of respondents cited convenience or a peer recommendation as the deciding factor.
Not accuracy. Not an assessment of data handling practices. Not compliance with regulatory requirements. In a profession governed by frameworks that demand meticulous handling of sensitive financial information, that statistic raises a serious question. Selecting a tool because a colleague mentioned it, or because it was already accessible on a device, does not satisfy the standard that accountancy's obligations require.
A New Skill Has Emerged: Spotting AI-Generated Content
Not all the findings point to risk. The survey reveals one area of clear professional progress: the ability to detect AI-generated content has become a near-baseline skill across UK finance.
Respondents described a consistent set of tells. Overuse of formatting. Random bolding and excessive structural organisation where a human writer would simply compose prose. Generic, coach-like language that fails to match the voice of a specific client. Typographic patterns that feel machine-produced rather than personal. As one respondent put it: "You know your clients, and the vocabulary doesn't correlate to the individual."
Some professionals have gone further, applying this detection capability to the hiring process — using AI tools to screen job candidates' interview responses for signs of generation rather than genuine reasoning.
GDPR Breaches Are Already Happening
The most urgent findings in the survey relate to data security. Multiple respondents raised serious concerns about what happens to client data once it is uploaded to an AI platform. Questions about storage location, processing jurisdiction, and third-party access are going unanswered, and in several documented cases, those gaps have already led to formal consequences.
One respondent described the situation directly: "Several staff members had to have disciplinary action over unsafe AI practice. Where is the data we upload going? Where is it stored? Big GDPR problem."
This is not an isolated incident. It reflects a pattern becoming increasingly visible across financial services: AI adoption is outpacing the governance frameworks designed to manage it.
What Industry Leaders Are Saying
Helen Brooks, Head of Commercial at Cloud2Me, argues the findings point to a profession whose relationship with AI is maturing, but unevenly.
"These findings reflect a profession that is maturing in its relationship with AI, but maturing unevenly. Finance and accountancy professionals are sharp enough to spot AI-generated content, yet many are still selecting tools based on convenience rather than compliance credentials. In a sector where accuracy and data security are non-negotiable, that gap is a real risk."
She was direct on the regulatory exposure: "The GDPR concerns raised here are not hypothetical; they are already resulting in disciplinary action. The question for practices now is not whether to use AI, but whether they have the governance in place to use it responsibly."
Closing the Gap Before It Becomes a Crisis
The practices best positioned to navigate this period are those treating AI governance as a structural priority rather than an afterthought. In practical terms, that means establishing clear policies on which tools are approved for professional use, defining how client data may be handled within AI workflows, and building verification processes that ensure outputs are reviewed before reaching a client or a regulator.
The technology is not going away. The governance frameworks need to catch up.
About Cloud2Me
Cloud2Me is the UK's leading hosted desktop provider for accountancy firms, supporting practices in building secure, compliant, and productive digital working environments.
