Opinion

Why 2024 will sound the call for AI transparency

Businesses that disclose their AI use proactively will be better placed when regulation catches up
By John Crossan

2023 was truly the year of generative AI, with businesses globally feeling its impact. Across the board there’s been a range of reactions - from enthusiastic adoption to cautious apprehension. But as the world speculates about how regulation will impact AI usage across business, one question remains:

Are you ready to embrace AI transparency now, or will you wait to be told how to do it?

As we saw with third-party cookies, clearly signposting the presence of AI will likely become a requirement for businesses, and an expectation among customers, in the coming years. Take AI usage in chatbots, for example: those who find ways to make it abundantly clear (and then safely forgettable) that customers are interacting with a bot will be the ones to succeed in this new era of AI.

Regulatory pressure

There’s no doubt that regulatory bodies will soon be putting pressure on organisations that haven’t appropriately prepared themselves for the new regulatory landscape. The EU AI Act is already making significant progress towards becoming law, and would legislate transparency requirements for some of the largest AI firms, making it mandatory for companies to disclose when AI is involved in their service or content. Several governments globally have also begun formalising AI policies of their own. Up until now, the pace of regulation hasn’t matched the pace of innovation, but it’s unlikely to stay that way for long, much less forever.

Over the coming year, it’s going to be interesting to see which businesses proactively address the issue of AI transparency, and which wait to be ruled by regulation set out by relevant bodies. At Freshworks, we’ve got a rare opportunity to see which of our customers customise their bots to more clearly disclose when AI is being used. In some cases, customers have already begun highlighting their bots as AI, giving them specific names that make this clear.

I’m of the opinion that making customers more aware of when they are talking to a bot can benefit businesses, especially when accompanied by further education on generative AI, its benefits, and its limitations.

This will also feed into greater agent satisfaction, as employees find their schedules less consumed by simple requests that don’t require the forward thinking, or the compassion and passion for service, that a human can provide.

Dropping out of the waiting game

Time and time again, I’ve seen a similar cycle play out in the enterprise technology world: a new technology or best practice is adopted en masse, regulation arrives to ensure responsible use, and businesses that failed to plan ahead find themselves in a quagmire of red tape.

This pattern shows that when you chase after regulation, the implementation journey for a new technology risks being rougher than it needs to be. Take, for instance, the GDPR cookie preferences prompt that pops up everywhere you go online. Even if the core message is right - the average person wants control over how their data is tracked and shared - the reality is that we often find ourselves more infuriated by the invasiveness of the cookie request than by the tracking itself, because businesses haven’t identified a more streamlined way of adhering to GDPR regulation.

Through simple practices that make AI use transparent, like naming a bot, companies can take a significant weight off consumers’ and end users’ shoulders, keeping them from guessing whether generative AI is involved in their experience. Businesses that do this will stay ahead of the regulatory curve, and will most likely raise their satisfaction scores too, as customers either resolve their simpler issues immediately through AI assistance or are swiftly passed on to a member of the human team who can address them as necessary.
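To make this concrete, here’s a minimal sketch of what a disclosure-first bot configuration could look like. It’s purely illustrative - the BotConfig type, field names, and greeting logic are hypothetical, not any particular platform’s API:

```typescript
// Hypothetical sketch: an AI disclosure baked into a chatbot's configuration.
// None of these types or names correspond to a real chatbot platform's API.

interface BotConfig {
  name: string;              // A distinct bot name signals "not a human"
  disclosure: string;        // Shown up front, before the conversation starts
  offerHumanHandoff: boolean;
}

const supportBot: BotConfig = {
  name: "Aria (AI assistant)",
  disclosure:
    "Hi, I'm Aria, an AI assistant. I can resolve simple requests instantly, " +
    "and I'll hand you over to a human agent for anything else.",
  offerHumanHandoff: true,
};

// Compose the opening message so the disclosure is always the first
// thing a customer sees, rather than an afterthought buried in a menu.
function greeting(bot: BotConfig): string {
  const handoff = bot.offerHumanHandoff
    ? " Type 'agent' at any time to reach a person."
    : "";
  return `${bot.name}: ${bot.disclosure}${handoff}`;
}

console.log(greeting(supportBot));
```

The design point is simply that the disclosure lives in the configuration itself, so it cannot be skipped when the bot is deployed.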

Lessons learned

Chatbots are only one example of AI use that is likely to require greater transparency, but the lesson holds across the board: being proactive about your AI disclosure policy will lead to fewer complications down the line. It’s not enough to be reactive - you must develop practices now that anticipate the disclosure requirements still to come.

It’s far easier to develop sustainable AI usage when the future requirements of regulatory bodies have already been worked into the design of the solution you use. Business leaders are therefore responsible now for considering the longer-term implications of any AI-enabled solution, and choosing the option that lets them develop their AI usage with future regulation in mind, not just present rules, has become key to the decision-making process.
