AI security gaps threaten Europe’s governance lead

European organisations lag global benchmarks on AI-specific security controls, according to research published this week.

Despite Europe’s strong AI governance, anchored by the EU AI Act, adoption of key security capabilities, including AI anomaly detection, training data recovery and visibility over AI software components, falls below global averages, the Kiteworks Data Security and Compliance Risk 2026 Forecast Report suggests.

Only 32% of French organisations, 35% of German organisations and 37% of UK organisations deploy AI anomaly detection, compared with a 40% global benchmark, according to the data. Training data recovery sits at 40% to 45% in Europe, versus 47% globally and 57% in Australia. Software bill of materials (SBOM) visibility remains limited at 20% to 25%, against 45% in leading regions.

This shortfall means AI-enabled attacks or unexpected model behaviour can go undetected, increasing exposure to breaches, regulatory penalties and reputational damage.

"Europe has led the world on AI governance frameworks with the AI Act setting the global standard for responsible AI deployment. But governance without security is incomplete," said Wouter Klinkhamer, general manager of EMEA strategy and operations at Kiteworks. "When an AI model starts behaving anomalously – such as accessing data outside its scope, producing outputs that suggest compromise, or failing in ways that expose sensitive information – European organisations are less equipped than their global counterparts to detect it. That's not a compliance gap; that's a security gap."

The report suggests these weaknesses are likely to persist through 2026. AI supply chain risk emerges as a major blind spot: limited visibility over third-party AI components leaves exposures undetected until they are exploited. Only 4% of French and 9% of UK organisations have joint incident response playbooks with their AI vendors, meaning a breach at a supplier can spread unchecked. Manual governance processes also increase financial risk, as firms struggle to demonstrate controls during regulatory assessments or insurance claims.

"The AI Act establishes what responsible AI governance looks like. The question for European organisations is whether they can secure what they're governing," Klinkhamer added.


