SAS and Snyk discuss the future of AI for development and security teams

Written by:
Brian Piper

October 9, 2023


Composing song lyrics, writing code, securing networks — sometimes it seems like AI can do it all. And with the rise of LLM-based engines like ChatGPT and Google Bard, what once seemed like science fiction is now accessible to anyone with an internet connection. These AI advancements are top-of-mind for most businesses and bring up a lot of questions:

  • How should we harness these AI tools to be more productive?

  • How can we incorporate these AI engines into our own products?

  • Will they bring more help or harm to our business?

  • How do we best prepare for the future of AI technology?

At a recent fireside chat, Jared Peterson, SVP of Engineering at SAS, and Ravi Maira, VP of Product Marketing at Snyk, discussed the future of AI for security and development teams. Jared’s team at SAS focuses on transforming data into intelligence with AI, boosting the productivity of development and security teams without introducing new risks. Throughout their conversation, Jared and Ravi touched on four predictions about using AI in secure software development:

  1. AI will become even more critical for developer productivity.

  2. Teams should prioritize avoiding hallucinations.

  3. Prompt engineering will continue to be a significant threat.

  4. Bias detection for AI will become more prevalent.

AI will become even more critical for developer productivity 

According to Jared, the organizations that leverage AI for developer productivity will come out on top in the next few years. Development teams can use AI for several use cases, such as automating repetitive tasks (e.g., generating boilerplate code). Taking these small steps now will lead to greater success down the road. In Jared’s words: “My advice to anybody in the world of software development would be: ‘I understand why you would be cautious… But, I would encourage you to explore AI and figure out its pros and cons in your specific domain’.”
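For a concrete flavor of what that boilerplate handoff can look like, here is a minimal sketch. It assumes the OpenAI Python client (v1+); the speakers don't endorse any specific tool, and the model name and prompts are placeholders.

```python
# Minimal sketch, assuming the OpenAI Python client (openai>=1.0).
# Model and prompts are illustrative, not prescribed in the talk.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You generate concise, idiomatic Python boilerplate."},
        {"role": "user", "content": "Write a dataclass for a User with id, email, and created_at fields."},
    ],
)
print(response.choices[0].message.content)
```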

Teams should prioritize avoiding hallucinations 

While AI can benefit development teams, LLMs are prone to hallucinations: generating false, but oftentimes very believable, information. To successfully integrate AI into their pipelines over time, development teams must fine-tune their models to behave reliably. Jared recommends training them on smaller datasets tied to your team’s specific use cases.
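As a rough illustration of what that fine-tuning step could look like, here is a sketch using the OpenAI fine-tuning API. The file name and base model are placeholders; the talk doesn't prescribe a particular toolchain.

```python
# Illustrative only: fine-tuning a base model on a small, domain-specific
# dataset via the OpenAI fine-tuning API (openai>=1.0). File name and
# base model are placeholders.
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted examples drawn from your domain.
training_file = client.files.create(
    file=open("domain_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job against a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```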

Jared also suggests using back-end prompt engineering when developing user-facing functionality. He said, “The user’s text input is rarely going to be exactly what you provide to the large language model. That text will become part of a template, so to speak, which is the prompt you're actually providing to the large language model. And this is where the whole world of prompt engineering is very interesting. You can limit hallucinations by massaging the prompt to target a very specific area of the large language model.”
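Here is a minimal sketch of that templating idea, with a hypothetical “Acme API” assistant standing in for a real product: the user's raw text is never sent to the model directly, but slotted into a server-side template that narrows the model's scope.

```python
# Sketch of back-end prompt templating. The "Acme API" wording is
# hypothetical; the point is that user input lands inside a template
# that scopes the model to one domain.
PROMPT_TEMPLATE = (
    "You are a documentation assistant for the Acme API. Answer only "
    "questions about the Acme API; if a question is out of scope, say so.\n\n"
    "User question: {user_input}"
)

def build_prompt(user_input: str) -> str:
    # A real system would also sanitize the input (strip control
    # characters, cap length) before templating it.
    return PROMPT_TEMPLATE.format(user_input=user_input.strip())

print(build_prompt("How do I paginate results?"))
```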

Prompt engineering will continue to be a significant threat

However, prompt engineering in the wrong hands can seriously threaten your software. Because prompts are essentially templatized configuration files, a bad actor could use a man-in-the-middle attack to tamper with them.

Jared explained, “There is this interesting new threat vector: getting access to that prompt and changing the values in the prompt, which are, by definition, going to change the output of a large language model. Then, that's going to come back to the software system.”

Because LLMs are prone to threats like malicious prompt engineering, teams that use AI must move forward cautiously. Organizations should establish checks and balances, such as regular code reviews and other quality assurance measures.
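One way to put such a safety measure in code, as an illustration rather than anything prescribed in the talk, is to verify a prompt template against a known-good hash before it is used, so tampering is caught instead of silently changing model output.

```python
# One possible safeguard (our illustration): check a prompt template
# against a hash recorded at deploy time before sending it to the model.
import hashlib

EXPECTED_SHA256 = "<sha256 of the approved template>"  # placeholder

def load_trusted_template(path: str) -> str:
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() != EXPECTED_SHA256:
        raise RuntimeError(f"Prompt template {path} failed integrity check")
    return data.decode("utf-8")
```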

Bias detection for AI will become more prevalent

As organizations start to use AI, they should also be on the lookout for implicit bias perpetuated by these new tools. Because AI trains on data from the past, it can bring historical issues, such as racial biases, into the future. 

Fortunately, tools for mitigating these issues are on the rise. Many companies and academic communities are working on bias detection and trustworthy AI technologies. But avoiding bias starts with awareness and ownership of the problem. Jared explained, “There are several ways to help customers realize, ‘Do you have some sort of skew in your data that you need to be concerned about?’... when you get the magic output, it's easy to be tempted by the magic and not realize that you've got some unfortunate things that you're perpetuating in the future.”
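As a toy illustration of the kind of skew check Jared describes (our own example, not a SAS tool), you can compare how often a positive label appears for each group in the training data; large gaps can signal historical bias the model would learn and carry forward.

```python
# Illustrative skew check: positive-label rate per group in training data.
from collections import Counter

def positive_rate_by_group(rows: list[dict]) -> dict[str, float]:
    totals: Counter = Counter()
    positives: Counter = Counter()
    for row in rows:
        group = row["group"]              # e.g., a demographic attribute
        totals[group] += 1
        positives[group] += row["label"]  # labels assumed to be 0 or 1
    return {g: positives[g] / totals[g] for g in totals}

sample = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
]
print(positive_rate_by_group(sample))  # {'A': 1.0, 'B': 0.5}
```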

Watch Jared and Ravi’s whole conversation

To stay ahead of the curve, more and more software development teams will need to dive into the world of AI. But safely adopting AI technology takes time, strategy, and caution — gradually testing tools and proactively establishing safety measures. To hear more about the future of AI in secure development, check out Ravi and Jared’s entire fireside chat below. In addition, learn how Snyk’s own DeepCode AI helps development teams find and fix vulnerabilities in their applications.

Posted in: AI

