Why ANZ Technology Leaders Are Rethinking How AI, Speed, and Security Intersect
June 15, 2025
0 mins readThe pace of technological change is always fast, but with AI everywhere, things have gone into overdrive. In Australia and New Zealand, businesses plan to spend heavily on generative AI—about $15 million on average, more than the global average. This puts immense pressure on technology, security, and engineering leaders. They must innovate quickly, but they also face complex risks from AI. This is forcing them to rethink how speed and security can work together.
Australia and New Zealand organizations are moving ahead with both ambition and caution. While AI brings new possibilities, it also brings clear security challenges. Leaders wrestle with real-world dilemmas. They want to harness AI without widening security gaps. Local challenges include:
Major talent shortages (36% of occupations in Australia are in shortage).
Changing regulations.
Managing cloud-native environments.
Compounding this is deep skepticism about AI security. In Australia, 74% rank security failures as their top concern, much higher than globally.
Key pressures are emerging as leaders try to balance innovation and security:
1. Secure AI adoption
The dream of AI building code and powering software is exciting, but AI-generated code often has vulnerabilities. As organizations scale AI and use more open source, they must govern risk carefully. AI needs built-in safety and transparency. Leaders must ensure AI adoption does not introduce new supply chain risks. They also face a trust challenge: Australians are skeptical of AI.
2. Speed without increased risk
AI tools like GitHub Copilot, Google Gemini, and Cursor promise faster coding. Early tests at ANZ Bank showed engineers were up to 55% faster. But this speed can lead to less secure code. AI-generated code can be risky, and developers may trust it too much. Studies show a lot of AI-generated code is vulnerable. Leaders need to build security checks directly into development—at the same speed as AI. Relying on periodic audits is not enough and leaves big gaps.
Here’s a chart that shows how AI-related vulnerabilities (CVEs) have surged from 2020 to 2024.

3. Visibility and risk prioritization
Modern apps, especially those using AI, are complex. They add new attack surfaces. With many security tools, leaders face too many findings and hidden blind spots. They need a clear view of risks, including AI-generated elements. They must also cut through the noise and focus on the real threats. This means using context like how reachable the risk is, what environment it runs in, how critical it is, and what data is involved. Shadow AI (unsanctioned AI tool use) adds to the challenge. AI Security Posture Management (AI-SPM) helps by managing risks across AI deployments.
4. Simpler security workflows
Platform teams already juggle many tools. Security can’t slow them down. Security must be easy and fit naturally into developer workflows. It should be “invisible until it matters.” Automating security tasks and giving real-time feedback and fixes is key.
5. Tool consolidation
Many organizations use too many security tools, creating data silos and overhead. Leaders want to bring security together on a single platform. This single view helps teams work better and close security gaps in development and deployment.
Practical examples: AI for security and security for AI
A dual approach is needed:
Use AI to boost security.
Secure AI systems themselves.
AI for security:
AI is being used to power security platforms. It helps find, correlate, and de-duplicate issues. It classifies data and sees if vulnerable functions can be reached. AI can even write fixes to speed up patching. Beyond development, AI speeds up threat detection and incident response by analyzing data in real-time. It also helps translate security findings into plain language for teams.
Security for AI:
AI systems and the apps they power need protection. This involves securing AI code and apps. Key areas include:
Prompt Injection: Attackers trick AI systems with crafted prompts. OWASP says it’s the top risk for LLM apps.
OWASP Top 10 for LLMs: A framework that identifies key security risks in AI apps.
Threat Modeling: Understand and reduce risks in the AI lifecycle.
LLM Red Teaming: Ethical hackers test AI for weaknesses.
AI-SPM: Scans AI environments for vulnerabilities.
Model Context Protocol (MCP): Connects AI workflows within LLMs but may expand risks.
AI-BOM (Bill of Materials): Lists AI components, libraries, and data to manage supply chain risk.

New attack vectors in AI Native Development: OWASP Top 10 for Large Language Model Applications
Building AI Trust in ANZ
ANZ leaders know that to thrive in AI-driven development, they must foster AI Trust. This means moving fast and staying secure in an AI world. It’s about reducing human effort while boosting security and governance.
AI Trust has three core pillars:
Full visibility into risks.
Intelligent prioritization of real threats.
Scalable policy enforcement to fix and prevent issues.
For ANZ, this means solutions that understand local talent shortages, regulations, and pressures. It means giving developers tools that scan and fix AI-generated code at the same speed it’s made. It requires platforms that give security teams a single, clear view across traditional and AI-native apps, focusing on real business risk. And it means enforcing guardrails that scale without slowing development.
The challenges are big. But by acknowledging these tensions and using the right strategies, ANZ leaders can build trust in this new AI world.
Start securing AI-generated code
Create your free Snyk account to start securing AI-generated code in minutes. Or book an expert demo to see how Snyk can fit your developer security use cases.