Opinion

Why leaders need an ‘AI Amnesty’ in the workplace

By
Steve Salvin

Enterprise AI is having an identity crisis. Within companies, the ‘AI averse’ (those mistrustful of the technology) sit alongside colleagues guilty of ‘shadow AI’ (the use of unsanctioned AI tools at work). And leadership is caught in the middle: under pressure to tap into AI-generated efficiencies but unclear on how to measure value, risk, and impact. For companies that feel out of control or on the back foot when it comes to their AI strategy, there’s only one thing for it: a workplace ‘AI Amnesty’.

What is an AI Amnesty?

The general approach is simple: pause, pardon, and regroup. An AI Amnesty is an opportunity to take a moment to re-assess progress so far and take stock of what is and isn’t working. It’s also a chance to allow colleagues to be upfront about what tools they are using (both sanctioned and unsanctioned) without fear of reprimand. Ultimately, it’s a way of regaining control of your AI agenda, cleaning out some of the bad habits that will have built up over recent years, and doubling down on a refreshed AI strategy that genuinely works for you and your team.

Uncover the truth on staff AI use

Building a picture of how your teams are actually using AI is crucial. Make it explicitly clear that you want staff to tell the truth about tools they are and aren't using, and what they’re using them for, so that you can deliver better structures and usage frameworks, not punish people for their actions to date.

To aid this, you may want to create anonymous feedback portals for staff to use. This will help them feel safe sharing the true details of which LLMs and tooling they’ve been firing up on their laptops, and what company data they’ve been feeding them. It will also help you identify where your biggest risks of data loss might be, and where to focus data governance efforts to make sure sensitive data is classified, labelled, and stored with appropriate access permissions.

Once you have this information, you’ll be able to build a picture of what staff find useful and why. From there, you can make informed choices about which tooling or platforms to invest in, where data protocols need to be tightened up, and how staff needs and preferences can be met in a way that aligns with company policies.

Get honest about training gaps

Declaring an AI Amnesty also creates a perfect opportunity for your team to admit where their knowledge gaps are. In the UK, nearly three quarters (73%) of people have had no AI training or education, with AI use varying greatly between industries and age groups. And lack of skills among workforces is one of the leading reasons that enterprise AI roll-outs are failing.

Leaders can’t expect employees to understand how to use AI safely and effectively without proper training. So, speak to your team about where they have confidence and knowledge gaps, what additional training they might need, and identify the areas where they’ve been ‘winging it’ to date. Use this information to create a training plan that is grounded in reality.

As part of staff training, you should be crystal clear about what information can be entered into AI tools going forward. These guidelines should also cover public tools like ChatGPT and Perplexity, which operate outside of your organisational controls. Doubling down on internal data governance can ensure that all data is given the relevant permissions. The absence of guidelines is what drives shadow AI adoption, so setting out these rules gives staff clarity on what’s got the green light.

Sift your ‘AI Personas’

Building this knowledge on AI confidence will also allow you to understand the different roles people across the team can play in creating a stronger, more streamlined AI strategy, and who might need tailored help to get up to speed.

The amnesty will help you organically identify your AI Ambassadors – those who are very comfortable leveraging the technology, using it safely, and who are seeing direct gains as a result. Bringing these people into the room on future AI decisions, as well as leveraging their talents to help upskill other colleagues, will help keep strategies practical and grounded in real company use cases over the long term.

The amnesty should also throw a spotlight on your AI avoiders – those who are resisting integrating AI tools into their daily workflows. Over two thirds (70%) of Brits use AI in their personal lives, yet this falls to 44% in professional settings. Again, identifying these colleagues isn’t an exercise in naming and shaming, but a chance to give them the right training and support going forward. It’s also a chance to listen. Why don’t they like using AI tools? What’s causing the friction? Uncovering this information will help leaders ensure future roll-outs or AI projects are designed with the whole workforce in mind.

It’s also an opportunity to reflect on the types of AI technologies you’ve focused on so far. Have projects been limited to Generative AI and LLMs? Perhaps you’re looking to harness AI agents, or more ‘traditional’ AI and machine learning technologies, in your business. Organisations have placed huge focus on AI-powered productivity, but strategies can miss the big opportunity for AI to help with ‘under the surface’ tasks, like data management, compliance, data governance, and data quality improvement.

After several frenetic years of AI adoption and integration, an AI Amnesty is an opportunity to reset the dial and shed mistakes that may have accrued over time. It’s a chance to create a culture of open and responsible AI use that aligns with how your colleagues truly work.

The companies that capitalise on AI aren't the ones that race to implement projects the fastest or onboard the most sophisticated models before anyone else. Instead, they’re the ones that design their AI strategy around their people and their data needs. When we bring the focus back to how AI can unlock the power and talents of our teams, we can generate real value.

Written by
Steve Salvin
CEO and founder of Aiimi
January 20, 2026