AI Augments Hackers, But Won’t Replace Them

LLMs are no substitute for human ingenuity, but they can help cyber professionals be more efficient.

Written by: Anton Shantir, Tom Pieragastini, and Barbara Gray, with Andrew Jaquith and TC Johnson

“Artificial Intelligence” (AI) and large language models (LLMs) are generating hype all over the tech sector, including in cybersecurity. Recent advances in AI have caused cyber professionals to worry about its social and political implications, in particular whether AI will gain control over many aspects of life or cause large job losses.

These are troubling questions. But to answer them, we need to be clear about what AI can and cannot do. In the context of cybersecurity, few discussions about AI seriously explore the degree to which AI can truly be said to “think,” and, as a consequence, whether there is still room for the hacker in a world augmented by AI and LLMs. Is AI technology truly “intelligent”? How do AI’s inner workings relate to other forms or definitions of “thought”?

“Artificial Intelligence” describes a spectrum of capabilities. At one end of the spectrum, large language models can efficiently collect, join, and synthesize information from disparate sources. AI in this sense means nothing more than rapid recall and synthesis of known information: a productive step up from querying Google or a hard drive, certainly, but not much more. At the other end, futurists describe “AI” as a non-organic system that can reason autonomously, beyond the limitations imposed by its programmers. This view of AI implies originality: intentionally generating thoughts that have not yet been articulated, with an implied ability to violate the system’s own rules and guidelines, unburdened by the limitations of code.

Which of these two views, efficient synthesis or autonomous thinking, is correct? Mathematical logic, and Gödel’s incompleteness theorems in particular, offer clues. They demonstrate that any sufficiently powerful formal system cannot prove its own consistency from within. For example, to “prove” that mathematics is universally true, one would have to use mathematics itself, assuming the very universality one set out to establish. In other words, a system with encoded limits can only evaluate the world as a reflection of itself. In computing, those limits are imposed by the Boolean assertion of truth and falsity: the reduction of all propositions to true and false, open and closed, zero and one. That structure places fundamental limits on what such a system can think.
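
For reference, here is the formal statement the argument above leans on, in the standard textbook form (our addition, not original to this article):

```latex
% Gödel's second incompleteness theorem (standard formulation):
% for any consistent, effectively axiomatized theory T that
% interprets elementary arithmetic,
\[
  T \nvdash \mathrm{Con}(T)
\]
% i.e., T cannot prove the sentence asserting its own consistency;
% establishing Con(T) requires reasoning from outside T, which is
% the "evaluating the world as a reflection of itself" limit above.
```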

All AI platforms are constrained by the limitations of the computer systems they run on. At the most fundamental level, Boolean logic gates evaluate every claim to a zero or a one. Anything that does not resolve to one of those two values, whether vague, contradictory, or counterfactual, must be forced into true or false or rejected outright. But “thinking,” “thought,” and “intelligence,” as humans understand them, tolerate indeterminacy, vagueness, unclear results, and indecision.
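
To make the contrast concrete, here is a minimal, purely illustrative sketch (ours, not a claim about any particular AI system): Python’s `bool()` forces every value into one of two buckets, while a logic that tolerates indeterminacy, such as Kleene’s strong three-valued logic, admits an explicit “unknown.”

```python
# Two-valued evaluation forces a verdict; three-valued evaluation can defer.

UNKNOWN = None  # sentinel for "indeterminate"

def kleene_and(a, b):
    """AND under Kleene's strong three-valued logic."""
    if a is False or b is False:
        return False        # one false operand settles the question
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN      # otherwise indeterminacy propagates
    return True

# Boolean evaluation must commit:
print(bool("maybe"))                # True: the nuance is discarded

# Three-valued evaluation can leave the question open:
print(kleene_and(True, UNKNOWN))    # None: still undecided
print(kleene_and(False, UNKNOWN))   # False: already decidable
```

Three-valued and fuzzy logics do exist, of course, but they are themselves implemented on binary hardware; the machine’s native vocabulary remains two-valued.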

The limitations of today's AI systems mean that they cannot, in the near term, supplant the ethical hacker’s role in evaluating the security of systems and networks. Offensive security specialists combine scientific rigor with creative license to question the assumptions of the systems under examination, even to the point of questioning the testing methodologies themselves. Hackers constantly engage in critical discourse: they probe assumptions, cut across logic, and violate core tenets. It is this intentional pushing against convention that secures our systems.

If AI is rigid and hackers are creative, how can the cybersecurity industry best use AI? We assert that the industry’s use of AI should be intentional and limited. AI cannot replace a hacker’s ethos: the willingness and ability to break rules and systems creatively. Nonetheless, AI can provide tremendous value by automating activities that are time-consuming to research and explore. For example, LLMs can suggest potentially relevant attack vectors that a tester may want to investigate, as in the sketch below. But LLMs are no substitute for human hands on keyboards.
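
As a sketch of that division of labor, here is one way a tester might ask an LLM for candidate attack vectors to triage by hand. It assumes the OpenAI Python SDK (v1 or later) with an OPENAI_API_KEY set in the environment; the model name, target description, and prompts are illustrative assumptions, not a prescribed workflow:

```python
# Illustrative only: ask an LLM for attack vectors worth manual review.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; substitute any capable model
    messages=[
        {"role": "system",
         "content": "You are assisting an authorized penetration test."},
        {"role": "user",
         "content": "Target: an Nginx reverse proxy fronting a Java REST "
                    "API that uses JWT authentication. List attack vectors "
                    "worth manual investigation, one line of rationale each."},
    ],
)

# The output is a starting checklist for a human tester, not a verdict.
print(response.choices[0].message.content)
```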

More generally, AI can bring significant economic benefits to people whose roles depend on rapidly assimilating and analyzing information, freeing up time they can use to think creatively. In cybersecurity, a paradigm shift is underway: early adopters, having exploited AI’s raw processing power and capacity for vast datasets, are beginning to integrate it into more practical, customer-centric models tailored to particular audiences and workflows.

At Leviathan, we envision AI augmenting the work of our cybersecurity consultants, enhancing efficiency and effectiveness across the value chain from sales to executive reporting. For example, during presales activities, we will use AI to analyze market trends: AI-enhanced research will let us sift through large datasets to identify businesses at elevated risk of cyber threats that may need cybersecurity services, helping us deliver more targeted and personalized selling motions.

During engagements, AI will enhance the creation of customized risk assessments. AI-augmented systems can quickly and comprehensively evaluate past incidents, analyze current vulnerabilities, and identify emerging threats, and machine learning models can surface new threats and weaknesses faster than traditional methods (see the sketch following this paragraph). This will allow us to develop deeply customized cybersecurity recommendations for each client. During the reporting and analytics phase of engagements, AI aids in generating reports based on a client’s security incidents, the vulnerabilities found during the engagement, and the client’s overall cybersecurity health. AI-driven analytics also aid regulatory compliance by accurately reporting and archiving all necessary data.
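
As a toy illustration of the kind of machine-learning triage described above, the sketch below trains a classifier on hypothetical historical findings and scores new ones for follow-up. The features, data, and library choice (scikit-learn) are our assumptions for the example, not a description of Leviathan’s tooling:

```python
# Illustrative only: score findings so a consultant can prioritize review.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical findings: [CVSS score, days since disclosure,
# public exploit available (0/1)]; label is 1 if later exploited in the wild.
X_train = np.array([
    [9.8,  12, 1],
    [5.3, 400, 0],
    [7.5,  30, 1],
    [4.0, 900, 0],
    [8.8,   5, 1],
    [6.1, 200, 0],
])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score new findings; higher probability -> earlier manual attention.
new_findings = np.array([[9.1, 3, 1], [3.7, 600, 0]])
for features, risk in zip(new_findings,
                          model.predict_proba(new_findings)[:, 1]):
    print(f"features={features.tolist()} exploitation_risk={risk:.2f}")
```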

In conclusion, the appropriate use of AI allows cybersecurity professionals to work more efficiently and make better-informed decisions. AI provides consultants with the data and insights needed to fuel their creative problem-solving and, in so doing, enhances their ability to foresee, respond to, and reduce cyber risks. As cyber threats grow more sophisticated, defenders can combine human ingenuity with AI’s computational power to build more robust and adaptable security strategies, and ultimately more secure and resilient systems.