INTERVIEW

The cyber security landscape has been completely reshaped in the past decade. Deborah Ritchie speaks to ethical hacker, Glenn Wilkinson, about the next wave of change, and about how AI is reshaping cyber security – empowering defenders and attackers alike

What have been the greatest drivers for change for cyber security professionals in recent years?

Cyber security has evolved dramatically in recent years, adapting to a complex landscape of technological advances and new risks. Unlike in the past, when servers were securely located on-site, many organisations now host data on third-party platforms across the globe. This shift brings challenges in defending data from both technological and legislative perspectives. A firm based in London but hosting its data on servers in the US, for instance, may struggle with compliance under regulations like GDPR, which requires careful handling of data across borders. Moreover, the remote work shift has dispersed networks, making it harder to secure resources as employees move around.

This transition to remote infrastructure has led to a much larger attack surface, which now includes a mix of personal devices, remote servers and interconnected networks.

Additionally, with online services like Zoom for meetings, DocuSign for contracts and cloud providers for data storage, businesses now rely on numerous third-party services, transferring control of sensitive data to external entities.

Once considered a luxury, cyber security is now central to business operations of all kinds. In the late 1990s and early 2000s, convincing companies to invest in security, such as by hiring ethical hackers, was a tough sell. But today, most businesses recognise that security is essential to protect their operations and reputation. This acceptance is visible at the executive level, although there is often still a lack of detailed understanding of how cyber attacks work, and what they entail.

Managed service providers present a unique risk, as they are prime targets for cyber criminals. Ransomware groups, in particular, seek out MSPs because, by compromising one provider, they gain access to numerous client systems. This approach was evident in the Kaseya attack on a major MSP in the US in 2021, where ransomware was deployed across all its clients’ networks. This event marked a shift in ransomware tactics and underlined the substantial threat it poses to modern organisations. Ransomware attacks can devastate a business, as they not only encrypt data but may also leak sensitive information if ransoms go unpaid.

How will artificial intelligence and machine learning influence the work of cyber security professionals – on both sides?

AI and ML are transforming cyber security from both the defender and attacker perspectives. The recent surge in AI and ML-based ventures echoes past tech booms, signalling broad interest and investment in these fields. For cyber security, separating AI hype from reality is essential as both defenders and attackers explore new applications.

ML has been used in cyber security for years, particularly for pattern recognition, anomaly detection and baseline user behaviour analysis. For instance, if an employee exhibits unusual activity, ML can flag it for investigation. However, most ML models operate as black boxes, which can complicate their use and reliability. For example, an ML model might flag suspicious activity without explaining why, resulting in false positives, and additional workload for security teams.

The most impactful recent development in AI is the rise of large language models like ChatGPT, which are revolutionising how professionals interact with complex data. For defenders, LLMs offer the ability to process vast amounts of network and user activity logs and summarise critical information. In the future, cyber security professionals may rely on these models to quickly identify potential breaches or anomalies. Imagine logging in on a Monday morning and querying an LLM about unusual incidents over the weekend. The model could highlight significant events, such as unauthorised login attempts, and provide context based on recent news, such as increased hacking activity from a specific group.

LLMs also excel at integrating diverse data sources, enabling a holistic view across disparate systems, locations and security tools. As defenders use multiple systems generating vast amounts of data – from intrusion detection logs to antivirus alerts – LLMs can synthesise this information, allowing teams to focus on the most pressing issues. Additionally, they help decode complex systems, providing step-by-step guidance on deploying new tools, configuring firewalls, or responding to incidents.

LLMs do have drawbacks, however, primarily their tendency to hallucinate. When LLMs make mistakes, they can introduce vulnerabilities or complicate problem-solving. For instance, an LLM might mistakenly advise setting a weak password, creating security risks. Moreover, LLMs are trained on internet data, which now includes AI-generated content, introducing potential inaccuracies.

On the attack side, AI and ML are also accessible tools for hackers, especially low-skilled ones. Basic users can manipulate LLMs to obtain potentially harmful code by rephrasing queries to bypass built-in safety restrictions. This capability could increase low-level cyber security threats, although it likely won’t significantly advance state-sponsored attacks or sophisticated hacking.

How will GenAI specifically impact the cyber security landscape?

Gen AI is transforming the cyber security landscape, though its impact currently affects lower-level threats more than high-level operations. Gen AI models, particularly LLMs, provide easy access to sophisticated information and can enable even less-skilled individuals to create basic malware or learn hacking techniques. This availability of information can empower entry-level hackers, though it is unlikely to boost the capabilities of advanced attackers significantly at this stage.

One prominent concern in GenAI is LLM safety, which focuses on ensuring that AI models do not assist in dangerous or illegal activities. Developers incorporate guardrails to prevent models from generating instructions for harmful activities, like building ransomware or malware. However, creative users can often find ways around these restrictions. For instance, some people trick LLMs into providing forbidden information by framing requests in clever or misleading ways, thus bypassing safety mechanisms. This jailbreaking of AI models highlights the need for robust ethical and safety guidelines in AI development to keep such tools within socially acceptable limits.

For moderately skilled attackers, GenAI offers a substantial advantage by lowering the skill barrier to entry. These models allow users with limited technical knowledge to generate malicious scripts or exploit software vulnerabilities with minimal effort. While this does not necessarily pose an existential threat to high-security organisations, it broadens the pool of potential attackers who might target small businesses or individual users.

Despite these risks, GenAI’s development is advancing quickly and the models are improving. Some believe that AI will eventually possess capabilities that could make it an essential tool in cyber security, enhancing both attack and defence strategies. However, AI is likely to remain a tool that complements rather than replaces human intelligence, as cyber security requires a blend of human insight and technical precision. The ideal scenario in cyber security involves a collaboration between intelligent human operators and sophisticated software tools, with humans providing context and judgment that AI cannot yet fully replicate.

In terms of application, GenAI is already proving valuable in coding and problem-solving. For example, coding assistants can now help cyber security professionals write, debug and refine code more efficiently, allowing them to respond to threats faster and deploy solutions with reduced time and effort. Although AI is far from replacing human defenders, it enables them to work smarter and more efficiently.

Ethical hackers also benefit from GenAI, as many entered the field driven by curiosity and the desire to understand how things work. In the early days, hacking was about experimentation and self-guided learning. Many ethical hackers began with limited formal training, often starting by dismantling simple machines and advancing to more complex networks. Over time, cyber security has evolved into a formalised field, with degree programs, certifications and defined career paths. Despite this, the core spirit of hacking – exploration and problem-solving – remains essential.


This article was published in the Q4 2024 issue of CIR Magazine.

View as PDF

Contact the editor



Share Story:

YOU MIGHT ALSO LIKE


Investec is disrupting premium finance – Podcast
Investec made waves in entering the premium finance market, where listening and evolving in response to brokers made a real difference.

Communicating in a crisis
Deborah Ritchie speaks to Chief Inspector Tracy Mortimer of the Specialist Operations Planning Unit in Greater Manchester Police's Civil Contingencies and Resilience Unit; Inspector Darren Spurgeon, AtHoc lead at Greater Manchester Police; and Chris Ullah, Solutions Expert at BlackBerry AtHoc, and himself a former Police Superintendent. For more information click here

Advertisement