European organisations lag global benchmarks on AI-specific security controls, according to research published this week.
Despite Europe’s strong AI governance, backed by the EU AI Act, adoption of key capabilities including AI anomaly detection, training data recovery and visibility over AI software components falls below global averages, the Kiteworks Data Security and Compliance Risk 2026 Forecast Report suggests.
Only 32% of French organisations, 35% of German organisations and 37% of UK organisations deploy AI anomaly detection, compared with a 40% global benchmark, according to the data. Training data recovery sits at 40% to 45% in Europe versus 47% globally, and 57% in Australia. Software bill of materials visibility remains limited at 20% to 25%, against 45% in leading regions.
This shortfall means AI-enabled attacks or unexpected model behaviour can go undetected, increasing exposure to breaches, regulatory penalties and reputational damage.
"Europe has led the world on AI governance frameworks with the AI Act setting the global standard for responsible AI deployment. But governance without security is incomplete," said Wouter Klinkhamer, general manager of EMEA strategy and operations at Kiteworks. "When an AI model starts behaving anomalously – such as accessing data outside its scope, producing outputs that suggest compromise, or failing in ways that expose sensitive information – European organisations are less equipped than their global counterparts to detect it. That's not a compliance gap; that's a security gap."
These findings indicate weaknesses are likely to persist through 2026. AI supply chain risk emerges as a major blind spot, with limited visibility over third-party components leaving exposures undetected until they are exploited. Only 4% of French and 9% of UK organisations have joint incident response playbooks with AI vendors, meaning breaches can spread unchecked. Manual governance processes also increase financial risk, as firms struggle to demonstrate controls during regulatory assessments or insurance claims.
"The AI Act establishes what responsible AI governance looks like. The question for European organisations is whether they can secure what they're governing," Klinkhamer added.