Agentic AI Security: Navigating Trust, Autonomy and Resilience
Cambridge Wireless’ Security, Privacy, Identity & Trust (#CWSecurity) Special Interest Group hosted an event on Agentic AI Security at CGI’s Fenchurch Street offices in London. Alongside the stimulating discussions, attendees were treated to panoramic views of the city from the Sky Garden. The event offered valuable insights into the evolving relationship between AI, autonomy, agency and security at a time when AI systems are increasingly acting with purpose and independence.
The event was opened by Michaela Eschbach, CEO of Cambridge Wireless, followed by Kunle Anjorin from CGI, who co-chairs the Security SIG. He outlined CGI’s extensive cybersecurity expertise and the importance of industry-wide collaboration in managing complex threats in an era of agentic AI.
The first keynote came from Dr Madeline Cheah, Associate Technology Director at Cambridge Consultants, who explored the concepts of autonomy, agency and action. She drew attention to the difference between autonomous systems that follow predefined parameters and agentic systems that can make adaptive, context-aware decisions. This distinction, she explained, introduces new layers of complexity in both assurance and control.
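To make that distinction concrete, consider a deliberately simple sketch (our illustration, not an example from the talk): the first controller below follows fixed, predefined parameters, while the second adapts its decision to goals and context.

```python
# Hypothetical sketch contrasting autonomy and agency (illustrative only).

def autonomous_thermostat(temp_c: float) -> str:
    # Autonomous: fixed, predefined parameters; behaviour never adapts.
    return "heat_on" if temp_c < 19.0 else "heat_off"

def agentic_thermostat(temp_c: float, context: dict) -> str:
    # Agentic: weighs a goal against context and adapts its decision.
    target = 16.0 if context.get("room_empty") else 21.0
    if context.get("energy_price_high") and temp_c > target - 2.0:
        return "heat_off"  # trades comfort for cost: a context-aware choice
    return "heat_on" if temp_c < target else "heat_off"
```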
Dr Cheah discussed how threats to AI are expanding beyond data poisoning and compromised toolchains to include deceptive or manipulative behaviours by models themselves. From unintentional hallucinations to strategic deception, agentic AI can exhibit behaviours that challenge our assumptions about reliability and intent. Her talk also examined the implications of embodied AI, where software agents act on the physical world, raising new concerns around anthropomorphism, emotional cues and misplaced human trust.
She highlighted four key themes emerging in AI security: uncertainty, authenticity, weaponisation and social impact. As AI systems interact, learn and make decisions independently, ensuring authenticity of information and safeguarding against epistemic erosion become central challenges. She emphasised that AI security is not just about protecting systems, but about maintaining societal trust and resilience. Rethinking system design, reimagining verification methods and embedding human accountability were identified as essential next steps.
Simon Thompson, an independent consultant, followed with a session titled Trust Me, I’m an Agent. Drawing on decades of experience in multi-agent systems, he revisited the perennial challenge of trust. Trust, he explained, is not the same as security or authentication; it’s the belief that an entity has both the competence and willingness to perform a task. He demonstrated how failures of trust, whether in software supply chains, data management or credential verification, can undermine entire systems.
Simon presented new experiments with language-model-based agents in simulated marketplaces to explore how they assess and act on trust. His findings showed that current models can handle concepts of trust only when these are made explicit in their prompts or structures. Without this guidance, they tend to make unreliable or arbitrary decisions. He concluded that explicit trust management infrastructures will be essential to ensure dependable multi-agent ecosystems.
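As a rough illustration of what such explicit trust infrastructure might look like, the sketch below gates delegation on an auditable trust score built from observed task outcomes. The class, threshold and agent names are all hypothetical, not taken from Simon's experiments.

```python
# Illustrative sketch of an explicit trust ledger for a multi-agent
# marketplace; all names and thresholds are hypothetical.
from collections import defaultdict

class TrustLedger:
    def __init__(self, threshold: float = 0.7):
        self.history = defaultdict(list)  # agent_id -> list of outcomes
        self.threshold = threshold

    def record(self, agent_id: str, fulfilled: bool) -> None:
        # Competence and willingness are observed through task outcomes.
        self.history[agent_id].append(fulfilled)

    def score(self, agent_id: str) -> float:
        outcomes = self.history[agent_id]
        if not outcomes:
            return 0.5  # no evidence: a neutral prior, not blind trust
        return sum(outcomes) / len(outcomes)

    def may_delegate(self, agent_id: str) -> bool:
        # Delegation is gated on an explicit, auditable score rather
        # than whatever a language model happens to infer.
        return self.score(agent_id) >= self.threshold

ledger = TrustLedger()
ledger.record("supplier-a", True)
ledger.record("supplier-a", False)
print(ledger.may_delegate("supplier-a"))  # False: score 0.5 < 0.7
```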
The second session, chaired by Dr Bob Oates of Cambridge Consultants, began with Colin Selfridge, Director of Cyber Security at CGI, discussing Securing AI in the Age of Agents. With decades of experience as a security architect, he noted that while AI technologies are advancing rapidly, many of the foundational principles of security remain applicable. Security must be integrated from the earliest design stages, not bolted on later.
He cautioned against the tendency to rush into AI adoption without understanding the risks or putting proper assurance frameworks in place. Concepts like secure by design, zero trust and continuous verification are as relevant to AI systems as they are to traditional software. Colin emphasised the need for resilient architectures, model registries, integrity checks and strong governance frameworks aligned with emerging standards such as ISO 42001 and the EU AI Act. He concluded by reminding attendees that AI security is not purely technical; it is a shared responsibility that requires collaboration between security, legal and data disciplines, with humans kept firmly in the loop.
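As a simple illustration of the model-registry and integrity-check idea, the hypothetical sketch below refuses to load any model artifact whose SHA-256 digest does not match its registered value; the registry contents, model name and file path are placeholders.

```python
# Hypothetical model-registry integrity check: never load an artifact
# whose hash does not match the value recorded at registration time.
import hashlib
from pathlib import Path

REGISTRY = {
    # In practice this mapping would live in a governed model registry.
    "fraud-detector-v3": "<expected sha256 digest recorded at release>",
}

def verify_model(name: str, artifact: Path) -> bool:
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return digest == REGISTRY.get(name)

# Usage (placeholder path):
# if not verify_model("fraud-detector-v3", Path("model.bin")):
#     raise RuntimeError("integrity check failed: model not loaded")
```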
The final session of the day was delivered by Jonathon Wright, Chief AI Officer and Head of R&D AI Labs at Eggplant, who spoke about Securing Agentic AI: Beyond Model Context Protocol (MCP) to Agent2Agent (A2A). Wright explored how AI agents are beginning to collaborate and communicate autonomously, leading to a new wave of agent-to-agent interactions that will require novel security approaches. He drew parallels between the emerging AI landscape and earlier transformations in software automation, urging the industry to focus on designing protocols that ensure transparency, traceability and secure communication between intelligent agents.
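To illustrate the traceability point in miniature, the sketch below signs each agent-to-agent message so the receiver can verify provenance and detect tampering. This is a toy example under an assumed shared key, not the A2A or MCP protocol itself; real deployments would use per-agent keys and proper key management.

```python
# Illustrative signed agent-to-agent messaging; names are hypothetical.
import hmac, hashlib, json, time

SHARED_KEY = b"demo-key"  # assumption: real systems use per-agent keys/PKI

def send(sender: str, receiver: str, payload: dict) -> dict:
    body = {"from": sender, "to": receiver, "ts": time.time(), "payload": payload}
    raw = json.dumps(body, sort_keys=True).encode()
    body["sig"] = hmac.new(SHARED_KEY, raw, hashlib.sha256).hexdigest()
    return body

def verify(msg: dict) -> bool:
    sig = msg.pop("sig")
    raw = json.dumps(msg, sort_keys=True).encode()
    ok = hmac.compare_digest(sig, hmac.new(SHARED_KEY, raw, hashlib.sha256).hexdigest())
    msg["sig"] = sig  # restore the signature for downstream audit logging
    return ok

msg = send("planner-agent", "booking-agent", {"action": "reserve", "id": 42})
print(verify(msg))  # True; any tampering with the message breaks the check
```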
Across all sessions, a consistent message emerged: agentic AI is reshaping how we think about security, trust and autonomy. As systems gain greater independence, the traditional boundaries between human oversight and machine action are blurring. Ensuring that these systems remain safe, trustworthy and aligned with human values will demand a combination of robust engineering, ethical foresight and collaborative governance.
The discussions made clear that cybersecurity in the age of agentic AI is no longer confined to protecting networks and data. It extends to protecting reasoning, intent and trust itself.
Dr Bob Oates brought the day to a close with a concise summary that neatly tied together the themes of autonomy, trust and resilience. He reflected on how the speakers’ perspectives collectively highlighted both the opportunities and the urgent challenges that come with increasingly agentic systems. His remarks underlined the importance of continuous collaboration across disciplines to build the frameworks and safeguards needed for the future of AI security.
After the formal sessions, attendees took the private lift up to the Sky Garden on the top floor, where the discussions continued informally over drinks. It provided a fitting end to a thought-provoking event that combined expert insight, lively debate and a memorable view of London’s skyline.
Author: Zahid Ghadialy, Principal Analyst & Consultant, 3G4G Limited