Skip to main content

Infosec Europe session: 4 tips for safer AI adoption

Escrito por:
Gerald Crescione

Gerald Crescione

feature-customer-mettle

1 de agosto de 2024

0 minutos de leitura

AI adoption continues to move at a breakneck speed for businesses across the globe, as shown in a Coatue report comparing AI adoption to early internet adoption and found that AI growth is happening at two times the speed of the former. 

As companies move forward with AI-driven software development solutions, security teams face the challenge of driving innovation while also “pumping the brakes” when necessary to prevent the risks that come with AI tools, such as insecure code suggestions and library hallucinations. 

Liqian Lim, Senior Product Marketing Manager at Snyk, spoke with Snyk customer Kevin Fielder, CISO at NatWest Boxed & Mettle at Infosecurity Europe 2024. They dove into many of the risks and opportunities that come with AI and offered some advice for CISOs who want to adopt these powerful technologies safely. Here are a few of their biggest suggestions:

1. Classify AI usage by business impact criticality 

Each area of your business has a different level of risk and criticality. Adopting AI in a non-critical area might not impact security levels, but if adopted in an essential area of the business, it could change your entire risk posture and compromise sensitive assets. In managing security for AI-related matters, CISOs can save time and resources by honing in on the areas that pose the most potential risks.

According to Fielder, “For example, [let’s look at using AI] to improve customer help. That is actually pretty low-risk in terms of AppSec-related harm it might cause…but depending on your vertical, there will be higher-risk areas. With things like that, go really slowly and be very careful.”

Don't rely on LLMs alone to tell you if the code is secure — rely on AI guardrails instead 

Because large language models (LLMs) learn from past prompts, it has an uncanny ability to pick up on what humans want to see and produce similar information. Amazingly, people have even seen AI offer deceptive rationale on why it came to a particular conclusion because our prompts have taught it what we want to hear. So, companies can’t rely on the LLM themselves to tell you about its security level, rationale for making a particular decision, etc. 

Instead, you need to use other different resources to gauge the security and quality of AI-generated code. To illustrate this point, Fielder mentioned the importance of testing AI-generated information in the right ways and introducing a human-in-the-loop in high-risk areas. Although testing AI-generated code looks similar to testing other types of code, there are some important differences in the former, like the ability to keep up with the speed and scale of AI-generated code. Fielder explained, “When you've got an AI producing the code, make sure that you've got the right guardrails around it like you should have already. That way, the output gets properly tested and secured.” 

He broke these key guardrails down into three steps:

  • Perform code reviews

  • Add a peer validation

  • Scan the code and do unit tests

An AI-fast, expert-accurate SAST tool like Snyk Code can help you to detect and automatically fix both human- and AI-created code in real-time — all from within the developers’ IDE, to minimize context-shifting and workflow disruption.

Pay attention to the training model and protect it from attacks.

The speakers also mentioned the importance of knowing which training models your AI tools use to produce outputs and encouraging the teams in charge of training to take precautions. Exercising caution around your AI tools’ training models is essential, as attackers can take advantage of your system using attack methods like prompt injection if you aren’t selective with training data. These attacks can occur when malicious users strategically prompt LLMs to perform tasks outside their intended scope. You can protect your training model from ingesting and using malicious prompts by keeping a skilled human in the loop and using input validation to block off prompts with tell-tale signs of malicious activity. 

In Fielder’s words, “Security teams aren’t necessarily involved in the training, but they can guide people on how to do the training and things to consider: What happens if someone asks the wrong thing? What happens if the wrong information gets into it? How do we manage that? How do we get it to unlearn information if it's learned the wrong things?”

In addition, these suggestions can help you ask questions about your third-party vendor’s AI solutions and dig deeper into their training models. Generally speaking, a “black box” approach to training AI models is a red flag.

Know where AI exists within your current tech stack.

It’s also important to realize that there are probably a lot of AI solutions already in use within your organization. However, it can be challenging to pinpoint which AI-powered solutions are already inside your business because many third-party applications roll out AI features without adequately informing their customers. 

Fielder said, “AI is just creeping into business apps. Vendors are rolling it out, and not all of them are necessarily giving you the option to not use it. Or maybe you can [opt out]... but you've got to go through a convoluted process to not take part in the AI features. For example, one of our terminal emulators was using an LLM to offer better prompts…you’d think a terminal emulator is the least risky thing you use, then suddenly, it's got AI in it.”

So, the ongoing challenge that enterprises must tackle will be inventorying all of these third-party applications and understanding how and where AI is used within them. Again, starting with the most business-critical areas of your organization and going from there can help.

Snyk: A security companion for your AI tools

While it can feel like a competition to advance your technology and win the “AI race,” moving cautiously for security’s sake is worth it. However, you don’t need to pause innovation or step on the development teams’ toes to do so. Using security solutions that work with existing development workflows and move at the speed of AI can make all the difference.

Snyk complements today’s AI coding assistants with AI-fast SAST scans and automatic, one-click vulnerability fixes. We also offer:

  • Pull request checks as an additional safety net to accommodate the faster speed at which AI code is generated and committed;

  • Suggested actions and explanations upon detection of a vulnerability, all within the developers’ existing IDEs;

  • Context-driven insights based on your organization’s specific business and risk profile.

To learn more about responding to today’s rapidly changing AI landscape, check out our cheat sheet for developing securely with AI.

Publicado em:
feature-customer-mettle

Quer experimentar?

Hear firsthand from Snyk customers on how implementing developer first security helped them reduce risk and increase developer productivity.