Navigating the AI-powered development era in financial services

August 26, 2024

Australian and New Zealand financial services institutions (FSIs) are facing pressure to innovate quickly while maintaining robust security and regulatory compliance. Many, like ANZ Bank and Commonwealth Bank, are exploring generative AI to accelerate software development, but is it a silver bullet?

The AI code explosion

AI coding assistants like GitHub Copilot, Google Gemini Code Assist, and Amazon Q Developer have dramatically increased the speed at which developers can produce code. What once took days or weeks can now be accomplished in hours or even minutes.

However, this increased velocity comes with a catch. As Danny Allan, CTO of Snyk, explains:

"Generative AI has intensified vulnerability management challenges. Security teams were already overwhelmed with issues from traditional development, and AI-assisted coding has multiplied this problem tenfold. We can now write and deploy code much faster, but this has also dramatically increased the potential vulnerabilities we need to manage."

A Stanford study found that developers using AI assistants wrote significantly less secure code and were more likely to believe their code was secure when it wasn't. The study concluded that AI assistants should be viewed with caution as they can mislead inexperienced developers and create security vulnerabilities.

  • 62% of developers already use AI tools, typically to write code.

  • Only 43% are confident in the accuracy of AI tools.

  • 66% don’t trust the output.

Developers continue to depend on LLMs to write code because they help them move from concept to feature faster. However, this can introduce issues such as compromised quality, security vulnerabilities, and replication of existing flaws. For example, research has found that 29.6% of Copilot-generated code snippets are vulnerable.

This puts developers on the front lines of the application security battle. CTOs need to be aware of the false sense of security that AI-generated code can foster. They need to recognize the inherent risks and build systems that ensure every line of AI-generated code is tested for vulnerabilities and security implications.
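
To make the risk concrete, here is a sketch of the kind of flaw these studies flag. The snippet is purely illustrative, a hypothetical account-lookup helper rather than real assistant output: the first version builds SQL by string interpolation, and the second shows the parameterized form a security check should push developers toward.

```python
import sqlite3

# Hypothetical AI-generated code: builds the SQL string with an f-string,
# so a crafted account_id like "1 OR 1=1" leaks every row (SQL injection).
def get_balance_unsafe(conn: sqlite3.Connection, account_id: str):
    query = f"SELECT balance FROM accounts WHERE id = {account_id}"
    return conn.execute(query).fetchall()

# Hardened version: a parameterized query keeps user input out of the
# SQL text entirely, which is exactly what SAST tools flag the first
# version for missing.
def get_balance_safe(conn: sqlite3.Connection, account_id: str):
    query = "SELECT balance FROM accounts WHERE id = ?"
    return conn.execute(query, (account_id,)).fetchall()
```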

Shifting security left

Smart CTOs empower developers to own security and embed it into their code. Security teams set the guardrails, and development teams take ownership of the actions within that policy or framework: threat modeling, software composition analysis (SCA), static application security testing (SAST), and application security posture management (ASPM). Integrating these checks into the CI/CD pipeline ensures constant vigilance.
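
As a minimal sketch of what that CI/CD integration can look like, the wrapper below runs a SAST scan as a pipeline gate. It uses the Snyk CLI as the example scanner (`snyk code test` and the `--severity-threshold` flag are real CLI features, and a non-zero exit code on findings is the CLI's documented convention), but the script itself is illustrative and any scanner with a CLI would slot in the same way:

```python
import subprocess
import sys

# Illustrative CI gate: run a static analysis scan on the current
# commit and fail the pipeline if high-severity issues are found.
# Assumes the Snyk CLI is installed and authenticated in the CI image.
def run_security_gate() -> int:
    result = subprocess.run(
        ["snyk", "code", "test", "--severity-threshold=high"],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    # The CLI exits non-zero when issues meet the threshold, so
    # propagating the exit code blocks the merge until they are fixed.
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_security_gate())
```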

FSIs in Australia and New Zealand should take several steps to adopt AI-powered coding safely: 

  • Enhance LLMs (like ChatGPT or Gemini) with your organization's proprietary data to improve their effectiveness for your specific use cases (a minimal prompt-grounding sketch follows this list).

  • Rigorously test AI-generated code. Before deploying, confirm that the code is reliable and free of errors and potential vulnerabilities.

  • Keep human experts involved in overseeing AI operations. They should be able to review, understand, and explain the actions of AI code.

  • Implement automated tools to review code, analyze it for potential issues, and predict vulnerabilities before going live (see the pre-commit sketch after this list).

  • Adopt governance frameworks and guidelines such as APRA CPS 234, NCSC, or NIST and establish detailed rules and policies to manage AI use.

  • Appoint security champions or mini-CISOs within development teams to integrate and enforce security measures, ensuring security stays a priority throughout the development process.

  • Provide developers with practical tools and training to understand how vulnerable code can lead to real-world security issues, helping them to write more secure code.

  • Stay alert to emerging cybersecurity threats by collaborating with the Australian Cyber Security Centre (ACSC) and New Zealand’s National Cyber Security Centre (NCSC) for threat intelligence.
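
On the first point above, grounding prompts in proprietary data can start as simply as retrieving the most relevant internal guideline and prepending it to the request. The sketch below is hypothetical throughout: the guideline corpus, the keyword-overlap retrieval, and the `call_llm` stub are all assumptions rather than any specific product's API.

```python
# Illustrative prompt grounding: retrieve the most relevant internal
# guideline by naive keyword overlap and prepend it to the LLM request.
# The guideline corpus and call_llm stub are hypothetical placeholders.
GUIDELINES = {
    "database access": "All queries must be parameterized; never build SQL from user input.",
    "secrets handling": "Credentials come from the vault service, never from source code.",
    "logging": "Never log account numbers or personally identifiable information.",
}

def retrieve_guideline(request: str) -> str:
    def overlap(topic: str) -> int:
        return len(set(topic.split()) & set(request.lower().split()))
    best_topic = max(GUIDELINES, key=overlap)
    return f"{best_topic}: {GUIDELINES[best_topic]}"

def build_grounded_prompt(request: str) -> str:
    return (
        "Follow this internal coding standard when answering.\n"
        f"Standard: {retrieve_guideline(request)}\n"
        f"Task: {request}"
    )

def call_llm(prompt: str) -> str:
    # Stub standing in for whichever model API the organization uses.
    return f"[model response to a {len(prompt)}-character prompt]"

if __name__ == "__main__":
    print(call_llm(build_grounded_prompt("Write a database access helper for user lookups")))
```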
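
On the automated review point, a lightweight complement to a full scanner is a pre-commit hook that blocks the most obvious dangerous patterns before code ever reaches CI. The sketch below is illustrative only; the pattern list and file handling are assumptions, and no short regex list substitutes for real SAST:

```python
import re
import subprocess
import sys

# Illustrative pre-commit hook: scan staged Python files for a few
# obviously dangerous patterns and block the commit if any are found.
DANGEROUS_PATTERNS = {
    r"\beval\(": "use of eval() on dynamic input",
    r"\bpickle\.loads\(": "deserializing untrusted data with pickle",
    r"(?i)(password|secret|api_key)\s*=\s*['\"]": "possible hard-coded secret",
}

def staged_python_files() -> list[str]:
    # --diff-filter=ACM limits the listing to added/copied/modified files.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [f for f in out.splitlines() if f.endswith(".py")]

def main() -> int:
    findings = []
    for path in staged_python_files():
        with open(path, encoding="utf-8") as fh:
            for lineno, line in enumerate(fh, start=1):
                for pattern, reason in DANGEROUS_PATTERNS.items():
                    if re.search(pattern, line):
                        findings.append(f"{path}:{lineno}: {reason}")
    for finding in findings:
        print(finding)
    return 1 if findings else 0  # non-zero exit blocks the commit

if __name__ == "__main__":
    sys.exit(main())
```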

To implement these security actions, CTOs and engineering leaders need to ask two pivotal questions: 

  1. Does their security solution scan code, identify vulnerabilities, and suggest fixes as quickly as AI generates code? 

  2. Can it be embedded into the existing CI/CD pipelines without disrupting workflows? 

FSIs that don’t have the tools to embed in-process, real-time checks are forced to rely on periodic analysis. When security audits happen only every three, six, or nine months, there is a massive window of risk: developers may check in code that remains insecure for months before the next audit uncovers the issue. And when a late audit flags a problem close to release, teams face an impossible choice between knowingly pushing vulnerable code and delaying the feature.

Prioritizing speed and security

Developers need tools that analyze code at the speed of generation and surface vulnerabilities before the code is pushed to production. They can’t check code in and then wait days or weeks to learn whether it’s clean or needs fixing.

Enter Snyk. The developer-first platform provides 2.4x faster scans and automated one-click remediation that takes the manual work out of fixing vulnerabilities. Snyk employs multiple AI models trained on security-specific data and curated by top security researchers, delivering the power of AI with a layer of security.

Published in: AI, Code Security
