What the EU AI Act means for business
The European Union’s landmark AI Act was officially implemented last year, initiating a phased rollout of enforcement measures, some of which come into effect at the start of this month. Under the new rules, organisations deploying AI in the EU will need to comply with strict requirements around transparency, risk management, documentation, and local representation. This marks a pivotal moment in the regulation of artificial intelligence, and signals a move from voluntary principles to binding legal obligations.
Despite resistance from some major players, Meta among them, the European Commission is pushing ahead. In a notable development, Google has committed to the EU’s voluntary AI code of conduct, showing early support for the regulatory vision. While debate continues over the clarity and scope of the law, the message is clear: the EU intends to lead the global conversation on responsible AI, challenging companies worldwide to rethink their development and deployment strategies.
Introducing accountability
Without regulations in place, it can be all too easy for those developing and deploying AI applications to take shortcuts when it comes to minimising risks. However, holding those responsible accountable for the risks posed to the end-user can help to mitigate this. “By prohibiting a range of high-risk applications of AI techniques, the risk of unethical surveillance and other means of misuse is certainly mitigated,” explains Martin Davies, Audit Alliance Manager at Drata. “Even high-impact AI systems that remain permitted under the EU AI Act will still need impact assessments. This will require organisations that use them to understand and articulate the full spectrum of potential consequences.”
Davies sees this as a step in the right direction, with the Act’s proposed penalties making developers of high-impact AI applications responsible for their outcomes: “The positive impact this Act could have on creating a safe and trustworthy AI ecosystem within the EU will lead to an even wider adoption of the technology. To that extent, this regulation will encourage innovation within defined parameters, which will only benefit the AI industry at large.”
Prioritising security
Experts are also welcoming the enhanced security that the EU AI Act is set to bring. Ilona Cohen, Chief Legal and Policy Officer at HackerOne, believes: “Securing AI systems and ensuring that they perform as intended is essential for establishing trust in their use and enabling their responsible deployment.”
“We are pleased that the Final Draft of the General-Purpose AI Code of Practice retains measures crucial to testing and protecting AI systems, including frequent active red-teaming, secure communication channels for third parties to report security issues, competitive bug bounty programs, and internal whistleblower protection policies,” adds Cohen. “We also support the commitment to AI model evaluation using a range of methodologies to address systemic risk, including security concerns and unintended outcomes.”
Too many rules, too little clarity?
But it’s not all positive. Some are sceptical of the EU AI Act’s introduction, and with an ever-increasing range of regulations to keep up with, is this just complicating matters further? “Companies, individuals and governments around the world are working on an almost unimaginable range of AI-related projects,” explains Hugh Scantlebury, CEO and Founder of Aqilla. “So, trying to regulate the technology right now is like trying to control the high seas or bring law and order to the Wild West.”
He believes that for regulation to be truly effective, it has to be on a global scale, which is unlikely anytime soon. “Otherwise, if one region, such as the EU - or one country, such as the UK - attempts to regulate AI and establish a “safe framework,” developers will just go elsewhere to continue their work. The birth of AI is second only to the foundation of the Internet in terms of its power to fundamentally alter our lives - and some people even compare it to the discovery of fire. But hyperbole aside, AI is still in its infancy, and we have only scratched the surface of what it could achieve. So, right now, no one is in a position to legislate - and even if they were, AI is developing at such a pace that the legislation wouldn’t keep up.”
Echoing this concern around complexity, Darren Thomson, Field CTO EMEAI at Commvault, sees the EU AI Act as a symptom of excessive regulatory divergence rather than a positive sign of progress: “The lack of cohesion makes for an uneven playing field and conceivably, a riskier AI-powered future. Organisations will need to determine a way forward that balances innovation with risk mitigation, adopting robust cybersecurity measures and adapting them specifically for the emerging demands of AI.”
“Business leaders will need to have strong protection and defences as well as tried and tested disaster recovery plans,” he adds. “Effectively this means prioritising the applications that really matter and defining what constitutes a minimum viable business and acceptable risk posture.”
So, as the EU’s AI Act continues to take effect, it underscores a broader industry trend: governments are no longer sitting on the sidelines when it comes to regulating AI. While the Act sets important standards around transparency, accountability, and risk, it also adds to an increasingly dense web of legislative initiatives that businesses must now navigate.
Rather than streamlining oversight, the surge in regional frameworks, each with differing scopes and enforcement mechanisms, is making the regulatory landscape more fragmented and difficult to manage. The intent to protect end-users and guide responsible innovation is clear, but the sheer volume and variation of rules risk overwhelming companies and stifling progress.
As more jurisdictions introduce their own AI laws, the real challenge lies not just in drafting robust regulation, but in ensuring coherence across borders. Without greater global alignment, we risk replacing technological uncertainty with regulatory confusion. One thing is certain: the age of voluntary principles is over, and compliance is now a complex - and growing - burden.