Opinion

From Compliance to Confidence: The Business Case for Trustworthy AI

By Tilman Harmeling

AI investments are soaring, and the race to lead in artificial intelligence has pushed deployment into overdrive. Yet as innovation accelerates, the gap between progress and accountability continues to widen. A recent report by the British Standards Institution warned that many systems still lack basic safety checks, with companies and marketers placing faith in technology they don’t fully grasp. How can people trust a system that even its creators can’t explain?

Gen Z may be more willing to share data for AI, but 57% of people remain uneasy about their data being used to train models, and nearly half trust AI less than humans. This isn’t simply generational caution; it reflects a universal truth about digital trust. So, what does it take for businesses and marketers to build AI that people can truly rely on? It starts with introducing accountability, transparency and consent, and embedding privacy by design into every layer, from development to deployment. This is also where privacy-led marketing (PLM) becomes a strategic differentiator, turning responsible data practices from a compliance necessity into a catalyst for long-term growth.

Accountability

Accountability is the bedrock of AI that can be trusted. It begins with an ethical framework that sets clear responsibility for how AI is developed, deployed and governed. Our recent findings highlight how strongly this shapes public confidence. Financial institutions score 57% and public institutions 49% for trust in how they collect and use customer data, a reflection of rigorous regulation and visible accountability. When this kind of accountability is missing, transparency, consent and privacy have nothing to stand on. For businesses, it means recognising where things have fallen short and committing to meaningful, ongoing improvement.

Trustworthy AI is not a single milestone, but a continuous process built on responsibility and openness. This includes treating privacy by design as a core standard rather than an optional extra. Companies that embed transparency, consent and privacy into design will lead the next wave of responsible innovation. Those that pair this with a PLM strategy will not only reduce regulatory and reputational risk but also unlock stronger customer relationships and higher marketing performance.

Transparency

Transparency is where trust begins. People want to understand how their data is collected, shared and applied. Our recent report shows that consumers do not trust all brands equally, and nearly half say that being clear about how their data is used is the single most important factor in earning their trust. This expectation extends directly to AI.

An ethical AI system must also disclose how it was trained, what data it learned from and how its guardrails are set. Companies can no longer treat this information as proprietary; they must be open and honest about the data and decisions behind their models.

Without transparency, accountability becomes impossible. And when AI produces outputs shaped by hidden inputs, the ethical risks multiply. Transparency lays the foundation for trust, but lasting trust also depends on active consent.

Consent

In principle, users should always know when AI is being used and how their data influences its decisions. They need a clear understanding of what is happening with their information to give meaningful consent. Yet in practice, this is difficult to achieve: the complexity of most AI systems makes full disclosure unrealistic. Data inputs are vast and constantly changing, and the information users need to make informed decisions is often too abstract or technical to understand. This is why many AI systems struggle to meet the GDPR requirement that consent be freely given, specific, informed and unambiguous.

As AI becomes embedded in everyday tools like emails, documents and search, often without users realising it, genuine trust in AI depends on privacy being built in from the ground up, not added later. To make this possible, companies must communicate AI use clearly, simplify consent choices and design experiences that help users make informed decisions with ease.

Privacy by Design

Privacy by design means embedding privacy into AI systems from the very start, not treating it as an add-on. When developers design with privacy in mind, they can anticipate risks early, reduce misuse and ensure people stay in control of their data throughout the process. A true privacy-first approach is transparent and grounded in respect for individual rights. It protects data integrity while preventing the kinds of privacy intrusions and biases that undermine trust.

Embedding privacy from the beginning also shapes how businesses approach marketing. Privacy-led marketing focuses on using data responsibly, collecting only what is necessary and communicating clearly about how information will be used. Instead of relying on broad or opaque data practices, it prioritises meaningful consumer choice and simple explanations. When people understand why data is being collected and how AI supports their experience, trust grows naturally. This approach reduces risk while creating more relevant and respectful interactions, helping businesses strengthen relationships rather than strain them.

Across all age groups, transparency, security guarantees and clear explanations remain the top three drivers of digital trust. If AI is to realise its potential, trust must become a core design principle, not an afterthought. Trust in AI starts with trust in data. That means giving people real control and clarity over how their information is collected, shared and applied. Brands that adopt PLM as part of this shift will gain a competitive edge, building deeper loyalty, improving AI adoption and accelerating business growth. When transparency and consent are prioritised, AI can earn the trust it needs to thrive.

Ultimately, the true measure of AI’s success will not be its power; it will be the trust people place in the data behind it.

November 24, 2025