False information is soaring. What does it mean for business?
Misinformation (accidentally spreading false information) and disinformation (doing so on purpose) are by no means new to the 21st century. As humans we have always loved to pass on stories that appeal to us, without necessarily checking their veracity first. And there have always been people who chose to invent such stories with the aim of damaging a reputation or influencing a way of thinking. The difference now is that our digital networks can spread such information like wildfire, fuelled by engagement rather than judgement, and AI has made it harder than ever to tell truth from fiction.
In its Global Risks Report for 2025, the World Economic Forum ranked misinformation and disinformation as the biggest risks currently facing the private sector. The report’s panel of 900 experts predicted that by 2027 this will apply across all sectors, so it’s not a problem that is going away soon. Most people are aware that misinformation is a huge issue in politics, but many business leaders are far less well informed about the threat it can pose to their own organisation - and have given no thought at all to how to tackle it.
This article offers some answers to those questions, with thanks to four experts who shared their knowledge at our recent panel discussion - Rob Waugh, Katy Howell, Max Templer and Antony Cousins, specialists in journalism, social media, behavioural psychology and AI respectively.
The threat from misinformation
There have been many cases of high-profile brands being damaged by the spread of false information. In 2022 the pharmaceutical company Eli Lilly famously fell victim to a fake Twitter account that announced “free insulin,” causing a storm of confusion and a drop in its share price. Another well-known brand, with a young target audience, discovered that baseless rumours had been spreading online about its alleged association with child pornography. In both cases the origin of the story was unknown. In the second case, the brand itself was unaware that the disinformation was spreading until specialised monitoring software revealed it. In both cases, reputational damage was done.
The media too is increasingly having the wool pulled over its eyes. Journalist Rob Waugh revealed in an article for Press Gazette that some of the “experts” recently quoted in the press are not, in fact, real people. The problem seems to have stemmed from the extreme pressure on journalists to create stories quickly, combined with the ability of AI to create passably credible quotes on the fly. While some news outlets have now changed their policy on how expert opinion is obtained, few can afford the manpower needed for robust source checking, so this is a problem that is likely to continue.
Meanwhile, some mainstream news outlets are already using AI to scrape the internet for stories and to re-write these into their own house style. Combined with the “ghost experts”, this creates a dangerous opportunity for AI-generated news to multiply and end up fully “laundered” on the pages of mainstream media outlets.
We as readers are not helping. Some younger people have been found to be deliberately re-sharing information that they know to be false - sometimes purely for amusement value - while numerous surveys have indicated that the over-60s are among the biggest culprits for spreading fake news.
What can we do?
The first action for brands is to monitor constantly what is being said about your business online. As bad news stories spread so fast, you need to know about them as soon as they happen in order to put your message out before it’s too late. This is likely to mean investing in technology tools to track your brand - and it’s a good example of where AI can be an ally against misinformation.
You also need to prepare - something that applies as much to a crisis based on a false rumour as it does to one based on real facts. Preparation means making sure that your communication machine is fully oiled and functional, and that your team knows what to do in the event of a crisis. There are many simple things that can be done - such as making sure that the passwords for your main social media accounts are available to the team that will be taking charge, and ensuring you have a clear order in place for who needs to be informed, and when. And while you can never predict the exact form a crisis will take, walking through possible scenarios is always valuable.
Perhaps the biggest priority for both brands and the media, however, is to build up trust in the first place. If your audiences trust you, your denial of a false accusation is more likely to be believed. The challenge is how to build trust when more and more of us are becoming aware of the potential for misinformation and disinformation wherever we look.
Ultimately, humans need to understand where information is coming from in order to decide whether they trust it. AI can generate plausible content that appears to emanate from a human but is very difficult to trace, so brands and media will need to bring the connection with real humans closer. Spokespeople may need to be more visible, while journalists will need to revert to interviewing people in person instead of relying on written comment. It feels like a leap at the moment, when the media are still cutting jobs, but as more people are now attending live events - both in business and in their personal lives - perhaps we are gradually learning to value individual relationships more highly again. Unfortunately, we will undoubtedly continue to share stories that aren’t true. No amount of misinformation education is likely to alter the fact that 32% of Brits believe that alien life has already landed on Earth.