Why UK business leaders should care about the new EU AI Act
The EU AI Act, which came into force this summer, marks a watershed moment for AI legislation. Because it does not apply to UK companies operating outside the EU, its significance risks passing many business leaders by. In reality, its reach will extend far beyond the EU's borders, and its impact on UK businesses should not be underestimated. Companies that want to win the trust of increasingly conscientious customers and get ahead of future regulation should pay close attention to the new legislation and use it to inform their own policies.
The EU AI Act is the first of its kind: it lays out a comprehensive, risk-based approach to regulating the development and use of AI. It is therefore likely to become the baseline for global efforts to ensure businesses build and deploy AI safely, including in the UK, where binding regulation is likely imminent.
While some elements of the EU’s developer-focused legislation may not be directly relevant to all UK businesses, the core principles of transparency, human oversight, cybersecurity, and data quality are universally applicable. UK companies should take note of these principles to prepare for future standards and benchmarks in AI implementation.
Regulatory implications are not the only reason to heed the EU's new AI legislation. Recent media focus on "trustworthy AI" has shifted the conversation from AI threats to risk mitigation. As awareness of safe, ethical AI grows among customers and investors, businesses face increased scrutiny over their AI practices. The EU AI Act's emphasis on transparency and data management serves as a ready-made checklist for demonstrating trustworthy AI use.
Consequently, verifiably trustworthy AI is becoming a value differentiator, offering commercial and reputational benefits similar to a B Corp Certification. UK businesses must recognise that the EU AI Act has made safe, trustworthy AI non-negotiable, and adapt their policies accordingly to future-proof their success.
So how can UK companies ensure trustworthy AI use? It starts with documents and data, because the safety, security, and accuracy of an AI model and its outputs depend on the data it uses. Companies deploying AI within their operations must therefore ensure their data is organised, secure, and accessible only to those who need it, both to get the most out of any AI they deploy and to mitigate risk.
Companies must also implement AI tools that are explainable and that allow for human oversight and transparency wherever possible. Building in such features from the outset lets companies check the accuracy of their AI models, demystify what can otherwise be a black box, and prove to customers and stakeholders that their AI use is trustworthy. Prioritising transparency will also set companies up for success when audits are required down the line.
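For technical teams wondering what "built-in oversight" can look like in practice, one common starting point is an audit trail around every model call, so outputs can later be checked and reviewed by a human. The sketch below is purely illustrative, not a requirement of the Act or a prescribed implementation; the names (audited_predict, AUDIT_LOG, the reviewer_required flag) are hypothetical, and it assumes the model is any Python callable that returns text.

    # Minimal, illustrative sketch of audit logging for AI outputs.
    # All names here are hypothetical, not drawn from the EU AI Act.
    import json
    import time
    import uuid

    AUDIT_LOG = "ai_audit_log.jsonl"  # hypothetical append-only audit trail

    def audited_predict(model, prompt, reviewer_required=False):
        """Call a model and record an auditable trace of the interaction."""
        output = model(prompt)  # assumes `model` is a callable returning text
        record = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "model_version": getattr(model, "version", "unknown"),
            "input": prompt,
            "output": output,
            "human_review": "pending" if reviewer_required else "not_required",
        }
        # Append-only log, so every output can later be traced and audited
        with open(AUDIT_LOG, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return output

Even a simple record like this, kept consistently, gives a company something concrete to show auditors, customers, and stakeholders when asked how its AI decisions are monitored.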
The EU AI Act will likely shape future regulation and push trustworthy AI further up the agenda. Leaders should treat its passing as both a wake-up call and a guiding star: a prompt to review and update their AI policies, and a framework for unlocking the full potential of AI safely, securely, and ethically. This approach will not only prepare companies for future regulation but also position them as leaders in responsible AI use, bringing commercial, reputational, and societal benefits.