Securing the next generation of AI requires a new security paradigm. As agentic AI and AI-native applications grow in use, they expand the attack surface and introduce a host of novel risks, from prompt injection and toxic flows to the unpredictability of non-deterministic behavior. Traditional rules-based security tools, designed for deterministic, parameterized systems, simply won’t suffice. To enable secure AI application innovation, we need a new breed of security team: AI Security Engineers who leverage their own agentic security orchestration systems. This session will explore the core tasks these systems need to cover, and the opportunity orchestration presents to see, understand, and address complex threats against AI-native apps.