Securing Agentic AI: A Discussion Paper
24 October 2025

Agentic AI systems can plan, take actions, and interact with external tools or other agents semi-autonomously, without constant human prompting. This autonomy magnifies both the benefits (e.g. efficiency and productivity gains) and the risks (e.g. security failures) for business, government, and society. AI security must therefore extend to these agentic features to protect the confidentiality, integrity, and availability of underlying systems and infrastructure. The threat landscape is evolving, and attackers are exploiting the autonomy of agentic AI systems to compromise them.
This discussion paper by the Cyber Security Agency of Singapore (CSA) and FAR.AI provides an exposition of the key security issues for these systems. It explains how agentic AI systems differ from traditional/generative AI systems, where new attack surfaces arise, and why conventional controls are necessary but not sufficient. It surveys existing structures and frameworks relevant to agentic AI security and identifies important open problems where further investment should focus.
The discussion paper emphasises that securing agentic AI is a shared responsibility across the ecosystem – developers, vendors, enterprises, users, regulators, and researchers – who must collaborate to address these challenges. Turning high-level ideas into coherent, practical safeguards will require focused, coordinated work across technical, operational, and policy domains.
Read Securing Agentic AI: A Discussion Paper [PDF, 1.69 MB]
