Business Fortitude
    🔥 Trending
    Autonomous AI Attacks Redefine Cybersecurity. UK Firms Lag Behind.
    Tech & Innovation

    Autonomous AI Attacks Redefine Cybersecurity. UK Firms Lag Behind.

    Ross WilliamsByRoss Williams··6 min read
    • Autonomous AI systems handled 80-90% of intrusion tasks in the GTG-1002 cyber-espionage campaign affecting approximately 30 organisations
    • 89% of organisations experienced risky prompts to enterprise AI systems within an average month
    • One in every 41 prompts submitted to enterprise AI tools was classified as high-risk
    • Campaign demonstrated structured autonomy with AI agents conducting reconnaissance, credential harvesting, privilege escalation and lateral movement independently

    British boardrooms have spent the better part of three years worrying about whether AI will make phishing emails more convincing. That concern now looks quaint. The real inflection point came in September 2025, when security researchers analysing a cyber-espionage campaign designated GTG-1002 observed something unsettling: an attack that appeared to execute itself, with minimal human intervention at each stage, across multiple organisational targets.

    According to Check Point Software's 2026 Cyber Security Report, autonomous AI systems handled between 80 and 90 per cent of intrusion tasks in this campaign, which affected approximately 30 organisations. Human operators remained involved in setting strategic objectives and reviewing outcomes, but the tactical work—reconnaissance, credential harvesting, privilege escalation, lateral movement—was conducted by software agents operating with what researchers describe as "structured autonomy." The distinction between automation and autonomy matters. One follows a script. The other adapts.

    For UK businesses still calibrating defences against human-speed adversaries, this represents a fundamental mismatch in operational tempo. The threat model has shifted beneath our feet.

    Enjoying this article?

    Get stories like this in your inbox every week.

    Cyber security analyst monitoring threat detection systems
    Cyber security analyst monitoring threat detection systems

    When the attacker doesn't need to sleep

    Traditional incident response operates on human timescales. Security teams detect anomalies, investigate their scope, contain the breach, then remediate. That cycle assumes the adversary operates at roughly the same speed—stealing credentials one evening, probing the network the next day, exfiltrating data when an opportunity presents itself.

    Autonomous agents compress that timeline dramatically. They can map an environment, identify vulnerabilities, test multiple pathways, and escalate privileges within hours rather than days. More troubling, they maintain operational "state" across sessions, resuming activity where they left off without needing to reorient themselves. This isn't just faster crime. It's qualitatively different crime.

    When adversaries adapt tactics in real time based on what they encounter, sequential investigation struggles to keep pace. Defence architectures designed around perimeter security and periodic audits become less relevant when the breach is already inside and moving laterally at machine speed.

    The implications for dwell time—the period between initial compromise and detection—are stark. When adversaries adapt tactics in real time based on what they encounter, sequential investigation struggles to keep pace. Defence architectures designed around perimeter security and periodic audits become less relevant when the breach is already inside and moving laterally at machine speed.

    What's interesting here is how this inverts conventional wisdom about cyber defence. The security industry has long emphasised threat intelligence and signature-based detection. But when the attacker is an adaptive system rather than a human following a playbook, historical patterns matter less than real-time containment. Zero Trust architecture and strict identity controls—long promoted as best practice—suddenly look less like compliance checkboxes and more like essential infrastructure.

    The risk already inside your network

    Whilst autonomous agents probe from outside, a parallel threat is emerging from within. British organisations have embraced generative AI tools with remarkable speed, deploying them across operations from customer service to code generation. That enthusiasm has outpaced governance.

    Check Point's report cites data suggesting 89 per cent of organisations experienced risky prompts to enterprise AI systems within an average month. Perhaps more concerning: one in every 41 prompts submitted to these tools was classified as high-risk, frequently involving exposure of personally identifiable information or proprietary source code. The methodology behind these classifications isn't detailed in the report, and independent verification of these thresholds would strengthen the claim. But the directional signal is clear.

    AI systems processing enterprise data and prompts
    AI systems processing enterprise data and prompts

    This creates an asymmetric vulnerability. Employees using AI assistants to draft contracts or analyse data may inadvertently feed sensitive information into systems with unclear data retention policies. Worse, prompt-injection attacks—where malicious instructions are embedded in data that AI systems later process—can turn internal tools into vectors for data exfiltration or privilege escalation.

    Organisations must defend against autonomous attackers trying to break in whilst simultaneously managing their own AI agents that may be leaking data outward or executing unintended instructions. The perimeter has become porous in both directions.

    The dual-threat scenario for 2026 becomes apparent: organisations must defend against autonomous attackers trying to break in whilst simultaneously managing their own AI agents that may be leaking data outward or executing unintended instructions. The perimeter has become porous in both directions.

    The governance gap

    For UK businesses, this presents a challenge that technology alone cannot solve. When the adversary operates at machine speed and your own productivity tools create new attack surfaces daily, the response must be systemic rather than tactical.

    Identity and access management becomes central. If an autonomous agent gains initial access to your network, the damage it can inflict depends entirely on what privileges it can obtain and where it can move. Strict access controls, continuous authentication, and micro-segmentation limit the blast radius of any breach—whether executed by humans or algorithms.

    Internal AI governance requires similar rigour. Treating AI agents like human employees means establishing clear access boundaries, monitoring their behaviour for anomalies, and assuming any agent could be compromised. This isn't a technical specification; it's an operational posture.

    The strategic question facing boards is whether they're prepared to match the tempo of algorithmic adversaries. Quarterly security reviews and annual penetration tests make little sense when threats evolve in real time. Continuous monitoring, automated response capabilities, and adaptive defence systems shift from nice-to-have to table stakes.

    Corporate boardroom discussing cyber security strategy
    Corporate boardroom discussing cyber security strategy

    What comes next

    The GTG-1002 campaign, assuming its characterisation as largely AI-operated is accurate, likely represents an early iteration rather than a mature capability. The technology will improve. Autonomous agents will become more sophisticated in their reconnaissance, more effective in their lateral movement, and better at masking their activity within normal network traffic.

    For British businesses, particularly mid-market firms without dedicated security operations centres, the calculus is uncomfortable. Competing against adversaries with algorithmic advantages requires either matching their capabilities with defensive AI systems or fundamentally rethinking security architecture around containment rather than prevention.

    The regulatory environment will eventually catch up. UK policymakers are already contemplating frameworks for AI governance, though cyber-defence applications remain relatively underspecified compared to consumer-facing AI regulation. Expect that to shift as incidents accumulate and attribution becomes more complex.

    The immediate imperative is recognition. Autonomous cyber-operations are not a theoretical horizon risk. They are operational today, documented across multiple targets, and improving with each iteration. Organisations that continue to defend against human-speed threats whilst algorithmic adversaries probe their networks are fighting the wrong war entirely.

    • Defence architectures must shift from periodic audits to continuous monitoring and real-time containment to match the operational tempo of autonomous threats
    • Zero Trust architecture and strict identity controls are no longer compliance checkboxes but essential infrastructure against algorithmic adversaries that compress attack timelines from days to hours
    • Internal AI governance requires treating AI agents like employees with clear access boundaries, as organisations now face threats from both autonomous attackers breaking in and their own AI tools potentially leaking data outward
    Ross Williams
    Ross Williams

    Co-Founder

    Multi-award winning serial entrepreneur and founder/CEO of Venntro Media Group, the company behind White Label Dating. Founded his first agency while at university in 1997. Awards include Ernst & Young Entrepreneur of the Year (2013) and IoD Young Director of the Year (2014). Co-founder of Business Fortitude.

    More articles by Ross Williams

    Comments

    đź’¬ What are your thoughts on this story? Join the conversation below.

    to join the conversation.

    More in Tech & Innovation

    View all →