
3 tips from Snyk and Dynatrace’s AI security experts

Written by:

Sarah Conway


January 22, 2024


McKinsey is calling 2023 “generative AI’s breakout year.” In one of their recent surveys, a third of respondents reported that their organizations use GenAI regularly in at least one business function. But as advancements in AI continue to reshape the tech landscape, many CISOs are left grappling with this question: How does AI impact software development cycles and the overall security of business applications?

In a recent fireside chat, our speakers dove deep into AI and its implications for today’s businesses. Speakers included: 

  • Simon Maple, Snyk’s Head of AI Advocacy

  • Craig Charlton, Dynatrace’s Chief Information Security Officer and Vice President of Employee Digital Enablement, with around 10 years of experience in information technology and security

  • Don Ferguson, Chair of the Privacy Office at Dynatrace, who has worked with Dynatrace’s AI product, Davis AI, for a decade

They covered several topics throughout their conversation, including today’s AI landscape, considerations for using AI in software development, and actionable tips for balancing the opportunities and risks of AI. According to these experts, using AI securely ultimately comes down to cross-team governance, devising thorough strategies when testing and implementing new technology, and maintaining focus on security throughout development.

3 key takeaways from Snyk and Dynatrace’s generative AI security conversation

Although businesses have been leveraging AI for several years — including Dynatrace’s 10-year-old Davis AI and Snyk’s 4-year-old DeepCode AI — the conversation has radically changed over the past year. 

In Maple’s words, “If we talked about this two or three years ago, there wouldn't be that inherent trust about what AI is capable of and whether it's ready for use in products…and it's very interesting how now, if you don't talk about AI, people question whether you're forward-thinking enough.”

During their conversation, Maple, Charlton, and Ferguson discussed this balance between AI innovation and security. A few of their biggest takeaways included the following words of advice.

Prioritize AI governance

AI governance is vital at this point in businesses’ generative AI journeys. Charlton said, “One of the most important things to us was not waiting to put together an AI governance organization. And it needs to include security, privacy, business apps, legal, development of your software, etc… it’s not just governance. It's ethics, it's request ingestion, it's review, it's all of it, as well as a whole communication piece.”

Take the next steps with patience and careful consideration

Maple, Charlton, and Ferguson also emphasized the importance of moving slowly and cautiously into new technologies. Ferguson said, “There's a lot of pressure to do things quickly and to act on things quickly…It's really important to take the time and get a feel for [generative AI], really understand how you can fully benefit from it, and how you want to take care of the risks…. You need to have that time to think.”

Charlton brought up two big priorities to focus on as you move forward cautiously:

  • Gauge the level of explainability and transparency in your chosen tools. Can you map data flow and understand which systems can view the data you input into your AI tools? In his words, “Where does the data come from? How can it be trained? Where does that data flow to…how does that then exfiltrate to other types of systems? Does it feed other types of systems down the road, which ultimately might have exposures that you hadn't planned on…It's a lot of following the data, in terms of who owns it, how it gets trained, and if there’s any inherent bias.”

  • Put code security in place when using AI for development. He recommends asking the following questions: “Are we putting our AI code through the normal threat management scanning process that we typically do? Are we making sure that GenAI-developed code is well documented within our code base, so we know what was done by humans and what was ultimately done by GenAI?”
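To make that second point concrete, here is a minimal sketch, in Python, of one way a team might track GenAI-assisted code so it can be routed through its usual threat management scanning process. The “AI-GENERATED” marker comment, the file extensions, and the script itself are illustrative assumptions, not a Snyk or Dynatrace tool:

# A minimal sketch of one way to track GenAI provenance in a code base:
# the team agrees on a hypothetical "AI-GENERATED" marker comment for
# AI-assisted files, and this script lists those files so they can be
# routed through the usual security scanning and review pipeline.

import os
import sys

MARKER = "AI-GENERATED"  # hypothetical provenance tag agreed on by the team
SOURCE_EXTENSIONS = (".py", ".js", ".ts", ".java", ".go")  # adjust to your stack

def find_ai_assisted_files(root: str) -> list[str]:
    """Return paths of source files that carry the provenance marker."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(SOURCE_EXTENSIONS):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as handle:
                    if MARKER in handle.read():
                        flagged.append(path)
            except OSError:
                continue  # unreadable file; skip it rather than fail the scan
    return flagged

if __name__ == "__main__":
    repo_root = sys.argv[1] if len(sys.argv) > 1 else "."
    for flagged_path in find_ai_assisted_files(repo_root):
        print(flagged_path)

Running the script against a repository prints the flagged paths, which a team could then feed into whatever scanner and review workflow it already applies to human-written code.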

Balance AI opportunity and risk, especially in development

AI presents both opportunities and risks for teams of all kinds, and developers in particular must keep security and privacy front of mind when they use AI tools. AI holds promise for development teams, from Snyk’s AppSec solutions leveraging AI for code fixing to LLMs writing hundreds of lines of code in milliseconds. But misusing AI tools in development can introduce serious risks, such as poorly written and insecure code, data overexposure, and the unintentional introduction of insecure third parties.

Balancing risk starts with transparent, two-way conversations on AI tools and their proper uses. According to Charlton, businesses must establish “a constant feedback loop, where we have a group focused on how to properly do this within development specifically, and which products we're going to use, how we're training our language models, etc. It's like any cultural development, where you must have clear, consistent regular communication to develop that.”

Developing securely with AI

As we heard in this fireside chat, teams must weigh many factors when leveraging AI, from each tool’s data flow to its ethical implications and beyond. Fortunately, there are plenty of resources to help development teams get the best of AI without compromising security. Tuning into Maple, Charlton, and Ferguson’s whole conversation is a great place to start.


Best practices for AI in the SDLC

Download this cheat sheet today to learn best practices for leveraging AI in your SDLC, securely.