How AI could provide us with the security cyborg
The term ‘cybernetic organism’ – biological life combined with artificial technology to confer almost superhuman powers – was coined back in the 1960s, but the concept is fast becoming a reality. Technology is no longer an auxiliary tool but one we are highly dependent upon, and while it may not be physically embedded in our bodies, it is becoming ever more assistive.
In cybersecurity, we’ve seen localised automation transform the sector by replacing and then augmenting previously manual threat detection and response processes. When a threat comes into the Security Operations Centre (SOC), it’s now enriched with threat intelligence and investigation context, alerts are correlated so that connections are made between events originating from the same user, device or IP address, and playbooks are used to automatically investigate, remediate and close the case.
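To make that correlation step concrete, here is a minimal sketch of entity-based alert grouping, assuming a simple in-memory alert feed; the field names (user, src_ip) and the sample data are illustrative rather than any particular SIEM’s schema.

```python
from collections import defaultdict

alerts = [
    {"id": 1, "user": "jdoe",   "src_ip": "10.0.0.5", "signature": "brute-force"},
    {"id": 2, "user": "jdoe",   "src_ip": "10.0.0.9", "signature": "impossible-travel"},
    {"id": 3, "user": "asmith", "src_ip": "10.0.0.5", "signature": "port-scan"},
]

def correlate(alerts, keys=("user", "src_ip")):
    """Bucket alert IDs by each correlating entity they share."""
    buckets = defaultdict(list)
    for alert in alerts:
        for key in keys:
            buckets[(key, alert[key])].append(alert["id"])
    # Only entities seen in more than one alert represent a connection
    return {entity: ids for entity, ids in buckets.items() if len(ids) > 1}

print(correlate(alerts))
# {('user', 'jdoe'): [1, 2], ('src_ip', '10.0.0.5'): [1, 3]}
```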
This is all made possible through the application of machine learning (ML). ML analyses vast sets of data to identify patterns, anomalies and potential breaches and is able to ‘learn’ and make inferences to anticipate attacks and improve detection. In fact, 90% of a security investigation can now be performed automatically using ML, from qualifying the alert through to telling the SOC analyst how to deal with it.
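As a flavour of the pattern-and-anomaly spotting described above, here is a minimal sketch using scikit-learn’s IsolationForest; the features (login hour, megabytes sent) and the synthetic data are assumptions for illustration only, not a production detection.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Baseline behaviour: office-hours logins with modest data transfers
normal = np.column_stack([rng.normal(11, 2, 500),    # login hour
                          rng.normal(50, 10, 500)])  # MB sent
# Points to test against: 3 a.m. logins moving unusually large volumes
suspicious = np.array([[3, 480], [2, 510]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # [-1 -1] -> both flagged anomalous
```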
ML has proved such a success that organisations have begun focusing on hyperautomation, which automates as many business and IT processes as possible to deliver end-to-end efficiencies. Hyperautomation combines ML with other technologies such as robotic process automation (RPA) and, of course, Artificial Intelligence (AI). But where ML provides localised, systematic automation, AI is more expansive and exploratory, delivering different outputs.
What AI adds
Within the SOC, AI can, for instance, translate free-text questions into syntactical queries against the Security Information and Event Management (SIEM) platform or Endpoint Detection and Response (EDR) data, effectively lowering the skillset needed to carry out threat hunting. It can also provide human-friendly explanations of alerts, investigations and findings, as well as case summaries, speeding up investigations and producing intelligible outputs that require less technical understanding. Such summaries can make a huge difference in a SOC investigating upwards of 150,000 alerts per month, where the vast majority will result in similar investigation notes. Finally, AI can support detection engineering through the faster development of playbooks and detections by generating code and detection syntax.
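As a rough illustration of that free-text-to-query translation, here is a minimal sketch using the OpenAI Python client; the model name, the Splunk SPL target and the prompt wording are all illustrative assumptions, not any vendor’s actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = ("You translate SOC analyst questions into Splunk SPL. "
          "Return only the query, with no explanation.")

def to_query(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
        temperature=0,  # favour repeatable output
    )
    return response.choices[0].message.content

print(to_query("Show failed logins for jdoe in the last 24 hours"))
# e.g. index=auth action=failure user=jdoe earliest=-24h
```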
However, while AI is now capable of rivalling ML, there are a number of caveats surrounding its use. The first is the commercial viability of running high volumes of AI prompts. The cost is prohibitive: using AI for threat detection and response would run upwards of $14m per annum to cover those 150,000 monthly alerts – twice the cost of ML used in conjunction with detection engineering, a 24x7 SOC and Security Orchestration, Automation and Response (SOAR). That makes it expensive to both experiment with and deploy, which is why demonstrable use cases remain scarce.
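A back-of-the-envelope check makes the per-alert economics clearer; the only inputs are the figures already cited above.

```python
alerts_per_year = 150_000 * 12   # 1.8 million alerts a year
annual_ai_cost = 14_000_000      # the ~$14m estimate above

print(round(annual_ai_cost / alerts_per_year, 2))  # ~7.78 dollars per alert
```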
Nobody can afford to implement these tools with gay abandon, but nor can they trust them to run completely autonomously. Which brings us to the second issue: reliability. Run the same detection through AI and it will often respond differently each time, because it takes a subjective stance. Such variability is unacceptable within a SOC: you can’t have an attack classed as malware one day and given a free pass the next. What’s more, adjustment options are currently extremely limited, reducing the ability to fine-tune detection.
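That variability is easy to demonstrate. A minimal sketch, assuming the same OpenAI client setup as in the earlier translation example: classify one alert repeatedly and count the distinct verdicts; anything other than a single answer is drift a SOC cannot tolerate.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()

def classify(alert_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content":
                   f"Classify as 'malicious' or 'benign', one word only: {alert_text}"}],
    )
    return response.choices[0].message.content.strip().lower()

# Run the identical detection ten times and tally the answers
verdicts = Counter(classify("winword.exe spawned an encoded powershell.exe")
                   for _ in range(10))
print(verdicts)  # more than one key means the verdict is unstable
```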
During a recent assessment of AI’s capabilities, we found that one particular AI agent didn’t just misinterpret the threat, it went on to produce a fictitious kill chain and mitigation advice, all of which would have taken the SOC analyst down a rabbit hole they didn’t need to go down. That wastes valuable resource and calls into question whether the technology is currently mature enough.
There is, however, a middle ground between the precision and rigidity of ML and a fully-fledged AI analyst drawing its own conclusions. Think of investigations as blackjack hands: the dealer doesn’t need to tell you whether to hit or stand, they just need to deal the cards cleanly so you can see what you’re holding. In the same way, LLMs can curate and frame investigation data, presenting it coherently so the analyst arrives at a verdict in seconds rather than minutes, without asking AI to formulate the conclusion itself.
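A minimal sketch of that ‘deal the cards’ pattern, assuming the same OpenAI client as earlier: the prompt instructs the model to organise the evidence and explicitly forbids a verdict; the field names and wording are illustrative.

```python
from openai import OpenAI

client = OpenAI()

evidence = {
    "alert": "Suspicious PowerShell execution",
    "process_tree": "winword.exe -> powershell.exe (encoded command)",
    "network": "outbound 443 to 203.0.113.7, first seen today",
    "user_context": "jdoe, finance team, no admin rights",
}

PROMPT = (
    "Summarise the evidence below for a SOC analyst in four short bullets: "
    "what fired, what the process did, where it connected, who was involved. "
    "Do NOT offer a verdict or a recommendation.\n\n"
    + "\n".join(f"{k}: {v}" for k, v in evidence.items())
)

summary = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,
).choices[0].message.content

print(summary)  # the analyst reads the hand and makes the call
```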
This is where the cost model shifts. Instead of burning through expensive multi-stage AI prompts to reach a conclusion that may not be reliable, you’re using AI for what it does well: processing and presenting. Within a small context window, that comes at a fraction of the cost. ML detects, AI lays out the hand, the analyst makes the call. It’s not the autonomous SOC of tomorrow, but it delivers a cost-effective middle ground that accelerates conclusions.
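Some illustrative token arithmetic shows why the curation pattern is cheaper; the token counts and the blended price below are assumptions for the sake of the comparison, not measured figures.

```python
price_per_1k_tokens = 0.01   # assumed blended input/output rate, in dollars
summary_tokens = 1_500       # single-pass "lay out the hand" call
agentic_tokens = 60_000      # assumed multi-stage investigate-and-conclude chain

per_alert_summary = summary_tokens / 1_000 * price_per_1k_tokens
per_alert_agentic = agentic_tokens / 1_000 * price_per_1k_tokens
print(per_alert_summary, per_alert_agentic)   # 0.015 vs 0.6 dollars
print(per_alert_agentic / per_alert_summary)  # ~40x cheaper per alert here
```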
A glimpse of the future
With very few willing or able to stand up both AI tools and the human analysts required to review their outputs, most are holding tight, waiting for the industry to move in a coherent direction. So how is this likely to play out?
ML does have the precision that AI lacks: it provides a consistent outcome every time. So utilising the benefits of both – ML to automate graphs and detection engines, and AI to assist with the processing, understanding and interpretation of log data – makes sense. Eventually, the two will combine forces, leading to the ‘autonomous SOC’, which will see agentic AI used to build and run playbooks on the fly, determine malicious or suspicious verdicts from investigations, and remediate and contain threats automatically.
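A minimal sketch of that division of labour, with detect() and explain() as stand-ins for an ML detector and an LLM summariser like those sketched earlier, stubbed here so the flow runs on its own:

```python
def detect(features: dict) -> bool:
    """Deterministic ML stage: the same input yields the same verdict."""
    return features.get("anomaly_score", 0.0) > 0.8  # illustrative threshold

def explain(raw_logs: str) -> str:
    """LLM stage: frame the evidence for the analyst, offering no verdict."""
    return f"Summary for analyst review:\n{raw_logs[:200]}"

def triage(features: dict, raw_logs: str) -> str:
    if not detect(features):
        return "auto-closed: within learned baseline"
    # Only anomalous cases pay the LLM cost of interpretation
    return explain(raw_logs)

print(triage({"anomaly_score": 0.93},
             "winword.exe spawned powershell.exe with an encoded command"))
```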
Yet, even then, the SOC will require human oversight or a human in the loop (HITL). But the SOC analyst of tomorrow will look very different to that of today. A cyborg equipped with both automated and intelligent computing powers, they will no longer have to deal with the alert fatigue associated with processing enormous alert volumes or perform alert triage by determining true and false positives. Nor will they have to remediate low complexity threats and instigate containment measures.
Instead, these SOC cyborgs will be able to focus on complex, proactive threat hunting and incident response, doing what humans do best: looking at the wider business context to evaluate threats and impacts, and developing security strategy. In this way, these technologies promise to emancipate analysts by taking on routine tasks and reducing the stresses and anxieties associated with the role. By removing the monotony of these repetitive tasks, job satisfaction is likely to increase, reducing the propensity for high staff turnover in the SOC.
Of course, AI is something of a double-edged sword. While it will see threat detection and response increase in scope and speed, it’s also likely to allow attackers to develop novel attack methods and pivot more quickly, which means SOC teams will need to evolve. The analyst of tomorrow, then, will likely need those superhuman skills to fend off AI-powered adversaries.