
Jim Reavis
CEO and co-founder, Cloud Security Alliance
As AI agents gain autonomy, attackers are exploiting the gap between machine decision-making and human oversight.
Imagine receiving a text message confirming a massive payment from your corporate accounts payable system to a vendor you don’t recognise. You call finance — they never authorised it. You check the audit trail — it was approved by the AI agent your company deployed to streamline procurement. The agent did what it was designed to do — just for an attacker who knew how to ask.
Autonomy without accountability
As organisations race to deploy AI agents that can execute real-world actions — approving invoices, scheduling resources, managing clouds — attackers are developing sophisticated techniques to hijack autonomous systems through manipulated inputs, poisoned training data and compromised model supply chains.
In CSA’s 2025 State of AI Security and Governance survey, 72% of security professionals lack confidence in their organisation’s ability to secure AI1, even as ENISA reports AI now powers over 80% of social engineering attacks.2 These are contextually perfect messages crafted from data scraped across your digital footprint — professional networks, corporate filings, social media — delivered at precisely the moment you’re most likely to act.
72% of security professionals lack confidence in their organisation’s ability to secure AI1
Attack asymmetry
Attackers are weaponising AI across the entire kill chain: reconnaissance tools that map organisational structures in minutes rather than weeks, vulnerability scanners that prioritise exploits by business impact and malware that adapts in real-time to evade yesterday’s defences.
Organisations must assume AI-generated threats will bypass traditional defences and invest in behavioural detection, zero-trust architectures and identity verification that doesn’t rely on knowledge factors attackers can harvest or fabricate.
Just as we learned to scrutinise software dependencies after high-profile breaches, we must now inventory our AI models, training data and inference pipelines. You cannot defend what you cannot see.
The asymmetry between attackers and defenders hasn’t disappeared — it’s evolved. The organisations that thrive will harness AI for defence as aggressively as adversaries weaponise it for attack.
The agent economy is here, and the stakes are higher. The question is who’s really giving the orders.
[1] CSA. (2025). The State of AI Security and Governance.
[2] ENISA. (2025). ENISA Threat Landscape 2025.