AI security innovations need to keep pace with cyber attacks

Artificial intelligence could become a significant factor in future cyber attacks, and investment in security is needed to counteract the threat, according to a new report from cyber security specialist WithSecure.

While the use of AI in cyber attacks is currently limited, a new report warns that this is poised to change in the near future. Co-created by WithSecure, the Finnish Transport and Communications Agency, and the Finnish National Emergency Supply Agency, the report analyses current trends and developments in AI, cyber attacks, and areas where the two overlap. It notes that at present, cyber attacks that use AI are very rare and limited to social engineering applications (such as impersonating an individual), or are used in ways that are not directly observable by researchers and analysts.

However, the report highlights that the quantity and quality of advances in AI have made more advanced cyber attacks likely in the foreseeable future. It suggests that target identification, social engineering, and impersonation are today’s most imminent AI-enabled threats and are expected to evolve further within the next two years in both number and sophistication.

It warns that within the next five years, attackers are likely to develop AI capable of autonomously finding vulnerabilities, planning and executing attack campaigns, using stealth to evade defences, and collecting or mining information from compromised systems or open-source intelligence.

Andy Patel, intelligence researcher at WithSecure, said: “Although AI-generated content has been used for social engineering purposes, AI techniques designed to direct campaigns, perform attack steps, or control malware logic have still not been observed in the wild. Those techniques will be first developed by well-resourced, highly-skilled adversaries, such as nation-state groups. After new AI techniques are developed by sophisticated adversaries, some will likely trickle down to less-skilled adversaries and become more prevalent in the threat landscape.”

While current defences can address some of the challenges posed by attackers’ use of AI, the report notes that others require defenders to adapt and evolve. New techniques are needed to counter AI-based phishing that utilises synthesized content, the spoofing of biometric authentication systems, and other capabilities on the horizon. The report also touches on the role non-technical solutions – such as intelligence sharing, resourcing, and security awareness training – have in managing the threat of AI-driven attacks.

Samuel Marchal, senior data scientist at WithSecure, added: “Security isn’t seeing the same level of investment or advancements as many other AI applications, which could eventually lead to attackers gaining an upper hand. You have to remember that while legitimate organisations, developers, and researchers follow privacy regulations and local laws, attackers do not. If policy makers expect the development of safe, reliable, and ethical AI-based technologies, they will need to consider how to secure that vision in relation to AI-enabled threats.”
