Opinion

How the Willy Wonka fiasco shows the need for regulations in AI advertising

Steve Vinall, Director of Global Brand at Bynder, on the danger of high-quality AI images in marketing
By Steve Vinall
(Image: Willy Wonka made in sand. AI or not? Answer: it's real.)

If you haven't seen the Willy Wonka Experience fiasco over the past couple of weeks, where have you been?

The event was advertised online as an immersive experience, using AI-generated imagery that painted a picture of a chocolate-filled wonderland, and families flocked to buy tickets. The reality, however, fell far short of the idyllic scenes the images depicted. Naturally, the event came under fire when it turned out to be the opposite of what paying customers expected.

The incident has sparked conversation about the ethical use of AI in advertising, and why we need to be mindful of advertising standards when using it.

AI has come a long way in recent years and offers huge benefits to marketers and advertisers alike, particularly the ability to create high-quality images almost at the click of a button. However, as AI use increases (and it's increasing rapidly), it's important that we start conversations about best practice. The Wonka Experience in Glasgow is a prime example of why companies need to consider the ethical use of AI, and the potential backlash if it is misused, especially within advertising.

AI could pose risks to advertising standards

All ads need to adhere to strict advertising standards. However, with it now easier than ever to use AI to create realistic-looking imagery, false advertising could be on the rise.

It's not illegal to use AI in advertising, but content that could mislead consumers or exaggerate the claims it makes may constitute false advertising and fall foul of advertising standards.

It could be argued that AI doesn't pose any false advertising risks that weren't there before it became mainstream. However, the sheer number of people who can now use AI to create high-quality images means such imagery is likely to appear more frequently in ads.

AI-generated ads could also lead to customer backlash

Customers are already becoming more distrustful of content, questioning its legitimacy because of AI's advances in creating convincing video and imagery. If those suspicions are then proven correct, with your advertising not matching the product or service you deliver, the chances are you will lose that trust, and their custom, for good.

The laws surrounding AI-generated content

Currently, advertising laws don't specifically address AI-generated content, but with the ever-changing landscape and the technology's rapid growth, we can expect this to change soon. In the meantime, existing advertising laws still apply to AI-generated content, meaning advertisers should be aware of the risks of misleading consumers.

As more and more people gain access to these tools, it's important that we all understand responsible AI use, and when and where its use must be disclosed.

In a world where AI is becoming more readily available, it is increasingly difficult for people to tell which content is AI-generated and which is created by humans. However, our research shows that only 22% of people preferred completely AI-generated designs, something marketers should bear in mind when creating content. This is particularly important given that the same study found 68% of marketers plan to use AI to create marketing materials.

Best practice for AI-generated content

The best practice for AI-generated content is to make sure it is disclosed as such. With more content than ever being created and managed by marketing and advertising experts, it's also important that AI-generated content is edited and approved by a real person, so that nothing slips through the cracks. You can improve content management by using a Digital Asset Management system such as Bynder, ensuring content production is scalable while staff remain in control.
