Season 9, Episode 153

Implementing A DevSecOps Program For Large Organizations With David Imhoff

Hosts:
Danny Allan
Guests:
David Imhoff

Episode Summary

In this episode of The Secure Developer, David Imhoff, Director of DevSecOps and Product Security at Kroger, shares insights on implementing DevSecOps in large organizations. He discusses balancing regulatory compliance with business objectives, fostering a security culture, and the challenges of risk mitigation. David also explores the importance of asset management, security champions, and the potential impact of AI on cybersecurity practices.

Show Notes

In this episode of The Secure Developer, host Danny Allan speaks with David Imhoff, Director of DevSecOps and Product Security at Kroger, about implementing security programs in large organizations. David shares his experience transitioning from blue team operations to engineering and back to security, emphasizing the importance of understanding both security and engineering perspectives to create effective DevSecOps programs.

The conversation delves into the challenges of starting a security program in a large retail organization, with David highlighting the importance of understanding regulatory requirements, such as HIPAA, and aligning security measures with business objectives. He discusses the use of the NIST Cybersecurity Framework for measuring and reporting security posture to the board, and the process of balancing security needs with business risk appetite.

David explains Kroger's approach to building a security culture, including the implementation of a security champions program and the use of Objectives and Key Results (OKRs) to drive security initiatives. He details the company's strategies for centralizing security policies while allowing flexibility in implementation across different engineering teams. The discussion also covers the integration of security tools into the development pipeline, including the use of GitHub Actions for vulnerability scanning and management.

The episode explores various security technologies employed at Kroger, including Software Composition Analysis (SCA), Static Application Security Testing (SAST), API security, and secrets scanning. David shares insights on the challenges of prioritizing security alerts and the ongoing effort to provide a cohesive view of risk across multiple tools. The conversation concludes with a discussion on the potential impact of AI on security practices, including the new challenges it presents in areas such as data poisoning and model management, as well as the potential for AI to improve threat modeling processes.


David Imhoff: "We've done a lot of those actual business conversation activities to where we said, 'Okay, what if this did happen, right? What does this really do to our business? What does that mean to us, right? How much would that cost us either reputationally, actual dollars, what have you?' So it's a tough subjective balance on where oversecuring really lies. But, to me, I think that really comes down to those answers from those business leaders on, 'Okay, in this situation, if this did happen, right, tell me what happens to our company.'"

[INTRO] 

[00:00:36] Guy Podjarny: You are listening to The Secure Developer, where we speak to industry leaders and experts about the past, present, and future of DevSecOps and AI security. We aim to help you bring developers and security together to build secure applications while moving fast and having fun. 

This podcast is brought to you by Snyk. Snyk's Developer Security Platform helps developers build secure applications without slowing down. Snyk makes it easy to find and fix vulnerabilities in code, open source dependencies, containers, and infrastructure as code, all while providing actionable security insights and administration capabilities. To learn more, visit snyk.io/tsd.

[INTERVIEW]

[00:01:16] Danny Allan: Hello and welcome back to another episode of The Secure Developer, where we dive into all things application security and development in a modern world. I am delighted today to be joined by David Imhoff, who is the Director of DevSecOps and Product Security at Kroger. David, welcome to the show. How are you?

[00:01:35] David Imhoff: I'm great. How are you doing, Danny?

[00:01:37] Danny Allan: I am doing awesome. I always love talking security and software development. I know you've been in this space for a while, so maybe you could just introduce yourself to the audience, and we can start with that.

[00:01:48] David Imhoff: Sure, yes. I'm Dave Imhoff. I've worked at a few different companies. I kind of got my start at General Electric, doing a lot more of the blue team-type things, right? Intel, detection, response, things like that. I really love cybersecurity. Interestingly, I took what I call a walk in the wild and ran an engineering organization for a couple of years to help get a couple of initiatives off the ground at another company. I've lived in the engineering world and really got a taste for what it is to deliver software.

With that knowledge, I came back into the security world where I plan to stay for a long time, hopefully, until I retire. Trying to use that knowledge of engineering and engineering practices to make sure that as we implement DevSecOps and any of the tooling, things like that, that it's actually useful and helpful and enabling for developers and engineering teams to reduce our risk versus just throwing some things over the wall, and maybe they work, maybe they don't. 

I’ve been at Kroger for a little over a year at this point and just really excited at some of the things that we're doing. 

[00:03:01] Danny Allan: That's awesome, and you touched on something that's interesting to me. Sometimes, the guests that I have on the show have spent their entire career in application security, and you have, obviously, spent time on the software development side. How important do you think it is to have that software development or software engineering experience in order to have a strong AppSec program?

[00:03:23] David Imhoff: That's a great question. Honestly, I think it's critical. I don't know that you would have needed to run it for several years. But having real exposure, not just superficially being around and adjacent to engineering teams but actually being embedded in and understanding their problems, I think is super critical. I had a lot of reflection moments as I was sitting on the other side of the table in the engineering role, moments of, "Oh, my gosh. I don't like this at all. That's being done to me right now, and I definitely did this to teams in my other role."

I think like anything else, right? It's really good to understand the nuances of those worlds, the things that they are fighting, the things that they see, and the priorities that are pushed upon them, and to really understand how you can fit in there in a way that's going to be sustainable and helpful versus just making an assumption, thinking, "Oh, it's security. They just have to do it. I'll just send them this list of things to do, and I'll just put them on a scorecard if they don't do it, right?" That gets some results but not great results. Long answer to your question, but I think it's absolutely critical.

[00:04:39] Danny Allan: Yes. I love that part about the empathy and understanding what they've gone through. I've been in software development myself, and I know that, historically, there's often friction there. Something is imposed upon you. If you can understand that, making things frictionless, obviously, is a solid way of bringing them onto your side. That's awesome.

I guess what I really wanted to dive into in this episode is implementing a security program for large organizations, because this is something that we deal with every day at Snyk. We have customers like yourself, or we have partners that we work with. Often the biggest challenge is knowing where to start. When you first come into a role – and I know that you've been in regulated industries, while this is, I guess, more of a retail or grocery industry – where do you start in trying to implement or roll out a DevSecOps program?

[00:05:33] David Imhoff: Yes, good question. This may just be my defence background or compliance background, but I always start with the regulations, right? What are the must-haves? What are the things you absolutely have to do? Those are the things that could potentially get you shut down or fined or things like that, right? If you're not doing those must-dos, obviously, it's a massive problem.

First, I'm making a point to really understand the compliance frameworks that are applicable to the space. This one was a little new to me. I hadn't faced HIPAA regulatory requirements before, so the first thing I did was close that gap, right? Really understand HIPAA, what's covered, what are the systems that could come into that scope, things like that, and really trying to understand the controls that we have to have in place for those environments.

Luckily, I knew the rest of the compliance frameworks by then, but that's definitely number one: what does it take to be in business, right? Then I think the next thing is really understanding the culture piece, right? I would probably do both of those things in tandem, to be honest, because it depends on the culture, right? In a regulated environment, it's a little easier, I would say, to have a culture of security because you just have constant reminders of it, right? It's a lot more challenging when it's a lot less regulated or not necessarily regulated in a lot of areas.

Culture became incredibly important here, and really understanding who are the people that really lead things, right? I don't just mean positions of leadership. I mean indirect leadership and influence. Who are the people that make the decisions? Who are the people that are going to be making architectural guidelines, right? Best practices, things like that. Then also understanding there's always nuance to organizations, and there's always conflict between what some of the different organizations are asked to do.

I think it's incredibly important, especially in large companies, where you tend to have a lot of bigger organizations and you get more drift as you get more scale. Understanding their goals and objectives and, probably most importantly, how they interact with each other or don't interact with each other, because the most challenging thing from a culture standpoint, for me at least, has always been trying to figure out how to make a security standard or a program that is companywide, while also making sure that it is adaptable and will be adopted by these different organizations that may or may not like what you have to say or may or may not play nice with each other.

[00:08:23] Danny Allan: I want to put a pin in the culture because I think that's super important, and I know that or I believe that you have a security champions program, and you've rolled that out. But if I go back to the regulatory side of this, is there a framework that you have to collect that data for to push upward in the organization? Is there a stick downward in the organization? Is that something that you have to influence, or it's mandated from the top and you have to communicate the regulations upward? 

[00:08:53] David Imhoff: It's a bit of both. We basically use NIST, right? NIST CSF as the guidance framework. That is what we communicate as the health metric to our board and to our stakeholders. We do annual NIST assessments, right? Really trying to check the health of the organization, how things are going. There's a lot of understanding and buy-in of what that metric means, though I'm sure not the nuances at the second and third level as you get into what the controls are and things like that.

Especially at the board level, I wouldn't expect them to understand that. But they are very interested in how we're doing from a score perspective and how we're doing against our peers. They want to make sure that we've got the right amount of security, right? As do we, right? We don't want to oversecure or especially undersecure things. But that definitely is near and dear to their hearts.

The other thing that they're very concerned with, and that everybody in the leadership team is concerned with, is vulnerabilities in general. They are surprisingly, or sadly not surprisingly, well-versed about supply chain security here, right? Especially third-party software supply chain security. They are very well aware of a lot of the third-party and upstream supply chain attacks and all of the consequences that they've seen, so they ask about that quite a bit. We get a lot of top-down support on those things.

Now, on the other side of the coin, a lot of our spaces aren't regulated at all. There's a lot of things that we really should do. Obviously, I'm biased as a security professional on things that we should do, and they don't always share that sentiment. But there's a lot of things that we should do that I really do have to push up, right? I do a lot more of that work here than I had to do in the financial industry and the defence industry, for obvious reasons, right? The way we touched on. But definitely a lot more influence in this role, a lot more educating and pushing security upward and getting that buy-in.

[00:10:57] Danny Allan: That makes sense. For those on the podcast who aren't familiar, the NIST CSF is the Cybersecurity Framework. I think it has five components, David, if I'm right, which are Identify, Protect, Detect, Respond, and Recover. I think it's scored out of five, and I think the industry average is 2.89 if I'm not mistaken. I'm going by memory here.

[00:11:17] David Imhoff: Yes. Yes, you’re right. 

[00:11:17] Danny Allan: Do you compare yourself to the industry average or to the vertical that you are situated within? How do you compare yourself? Is it peers or industry?

[00:11:28] David Imhoff: Both. We look at industry and overall, right? So company size, right? Other enterprise, large organizations, obviously giving some leeway to super heavily regulated industries where they have to be a four or five, right? But we're especially very aware of how we're doing against other retail companies, especially retail companies our size, and just the industry in general. We don't want to be the slowest gazelle, right? The soft target that gets hit because they know, "Oh, well, yes, they don't have to be secure in a lot of these areas." You don't have to be regulated to need security, right? But, yes, to answer your question, we compare against our peers within the industry, and generally companies overall.

[00:12:21] Danny Allan: That aligns with what I've been seeing in the industry. We actually at Snyk do the same thing. We compare ourselves with the NIST Cybersecurity Framework as well. Like you say, you don't have to outrun the bear. You just have to be faster than your nearest neighbour in the industry, which is a crass way of looking at it. But it is the reality. 

[00:12:40] David Imhoff: It’s true. 

[00:12:40] Danny Allan: It is a return on investment. You made an interesting comment. You don't want to be oversecured. I don't think a lot of people pick up on that. Sometimes, people think they need to be completely secure, and I always tell them it's almost impossible to be completely secure. There is a point. There's a law of diminishing returns. Is that a documented law of diminishing returns? Or is it really just you aligning against the NIST Cybersecurity Framework and saying, "Well, we want to be 10% higher but not 30% higher"?

[00:13:08] David Imhoff: Yes. It's not necessarily documented. It comes through conversations, through baselining against peers, things like that, right? There are certain areas – I will caveat it, right? There are certain environments where we do want to be as secure as possible, right? Health data, card data, right? Those we lock down as much as we possibly can. For the environments that aren't in that bucket, that's when we really start having the cost-benefit conversations of, "Okay. Well, what's the real risk, right?"

We've done a lot of risk appetite conversations with our business leaders. Our BISOs, our business information security officers, are aligned to those folks. We've done a lot of those actual business conversation activities to where we said, "Okay, what if this did happen, right? What does this really do to our business? What does that mean to us, right? How much would that cost us either reputationally, actual dollars, what have you?"

It's a tough subjective balance on where oversecuring really lies. But, to me, I think that really comes down to those answers from those business leaders on, “Okay. In this situation, if this did happen, right, tell me what happens to our company. Does it make sense to you from a business and finance aspect to pay X to avoid that issue? Or is it better to take that risk, right?”

Ultimately, they're the ones running the business, right? I'm a big believer that we're here to consult, inform, really make sure that they understand the ramifications of the decisions that they're making, and make sure they're really well-armed for making those decisions, right? Then they can make the decisions as they see fit.

[00:14:57] Danny Allan: Yes. That's a very pragmatic way of looking at it. It really is risk identification and then risk mitigation. You just touched on one aspect that I always think about in implementing a program: the people side. There's, obviously, people, process, technology. On the people side, you mentioned business information security officers, I think I heard you say. Talk to me about the people side of this. What are the things that you have done to roll out a program that supports people, both in terms of identifying champions or education? How did you approach that side of rolling out a program?

[00:15:32] David Imhoff: Yes. I did all the things that everybody would do at first, right? Consult Gartner. Look at best practices, things like that to get a baseline. What do people generally do? Everybody will tell you to create a security champions program. The thing is that means a lot of different things, depending on who you are, what you're doing, and what you are leading. I'll touch on that in a second. 

Outside of that, it's really making contact with those engineering teams to understand where their problems are right now, right? You take the highs and the lows, right? We took a handful of teams that were really on the cutting edge, really automating everything. They wanted everything baked into their pipelines, and we really looked at, "Okay, what is working here?" So we can point to that, right? Like, "Hey, look." Because there's always a big variance, right? We have thousands of developers, so there's always going to be one team on this end of the spectrum and one team on that end of the spectrum.

We took the advanced teams and said, “What's working, right? What do you do? What can we proliferate here,” and took a lot of lessons there, wrote those down. They were doing a lot of great things. Also worked with them to figure out like, “Hey, here are some gaps that we see. What are your thoughts on that, right?” Really including them in those brainstorming processes is important, right? It's like anything else. If you can make your idea their idea, it's a whole lot easier on everybody, right?

[00:17:01] Danny Allan: Yes. 

[00:17:02] David Imhoff: That's kind of the play: helping educate those teams that were in that pilot, on that end of the spectrum, to say, "Okay, what would you guys do?" We got a lot of good results out of that, a couple of things that, honestly, I didn't think about right out of the gate that we implemented.

It was an easy sell because that was ideation for engineering, and they had their own communities of practice. They pushed it. We helped educate, and we find ourselves a lot of times saying like, “Hey, we'll help make it a policy if people start to push back because everybody has a different idea on how to engineer something, right?” They're all the best ideas, right? Of course. 

Then for teams on the other side of the spectrum – because there's always a shorter-staffed team that has applications that, while important, aren't the sexy applications at the time, right? A lot of times, those applications are very business-critical, or can be, but haven't necessarily gotten the love because they don't have the new shiny thing. Understanding from their standpoint, "Okay, what isn't working here? What are your difficulties, right? Where do you have issues?"

Then pairing those groups together to figure out whether it's technically feasible to bring them up to those cutting-edge teams from a technology perspective. But then also figuring out, okay, there are definitely things that won't work, right? I can't put something in that's only going to work on a really advanced pipeline that is out in AKS, running on the latest and greatest tech, and then go to somebody that's got some on-prem server on VMware that hasn't been touched in 20 years, right?

That delta is not going to work. Understanding what from an overall program standpoint could work for everybody, while not excluding those folks that have critical applications but maybe haven't gotten that far. Yes. To answer your question, it's really about making contact and working with those teams and really understanding and walking in their shoes for a little bit to see what the situation is here. 

[00:19:19] Danny Allan: Are those security champions or business information security experts, are they self-selecting? Or do you go and assign them across the teams, across the thousands of developers it sounds like?

[00:19:33] David Imhoff: Yes. We went to the engineering leaders, the engineering VPs, and told them, "Hey, here's the –" Because they're part of the engineering organization, right? I had to make sure that they had buy-in, showed them the value, right? Hey, we did a pilot. Here's what happens when you actually implement the right tooling and processes. You're going to spend X amount less time fixing things in production, and look at all the things that we've avoided actually getting into production, the whole shift-left piece, right?

Showing them the results while you're selling them on the program, but then asking them for candidates, right? We did have to do it a couple of different times because sometimes people were multitasking, things like that, right? Like, "Oh, yes. Security – yes, I'll get you two names," right? But we asked the engineering leaders for folks that they thought would be best for those roles. We went at that a few different times, had a couple of different iterations.

But really, for the most part, they ended up where it works best. They made it part of their HR and career-pathing discussions with the folks in their organizations, where they said, "Okay, maybe you're ready for a next step. Or maybe you want to show that you're ready for a stretch assignment, and you can do more. Or maybe you're looking for that half step in between lead engineer and architect. Or maybe you might just want to be in security, and you want to test it out."

They started having those HR discussions with their people and walked through that and said, “Okay, here's the handful of people from my organization that I think would make sense here.” Then we have a feedback loop on how things are going. 

[00:21:17] Danny Allan: That makes sense, and I actually like the idea of the badging or certification because it gives them career progression. It shows career progression and understanding of their space. On the process side, how centralized is it in your organization? In other words, do you set a policy that this is the way it has to be: it goes through the pipeline, and at this point, X happens? Or is it less formalized than that? I guess I'm interested in how centralized or decentralized the process implementation of the product security program is.

[00:21:52] David Imhoff: Yes. Like anything, it varies a little bit, right? But generally, right? We have companywide OKRs, right? Objectives and key results that are measured weekly. That helps quite a bit. We use those to really anchor on the outcome, right? The value and the outcome of what we're trying to do.

[00:22:15] Danny Allan: What's an example of an OKR, like specific?

[00:22:18] David Imhoff: Yes. One of our OKRs is vulnerabilities introduced within the first 0 to 30 days. The objective is the whole shift-left security piece, right? Save money and time. Make the engineering easier. Reduce vulnerabilities by implementing tools like Snyk in the pipelines. That OKR is essentially the measurement that we constantly review and go over with the leadership team to see how we're doing toward it.
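One plausible reading of that metric, sketched below purely for illustration, is tracking what share of open findings were introduced within the last 0 to 30 days. The record shape and field names are assumptions for the sketch, not Kroger's actual data model.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Finding:
    severity: str
    introduced: date  # when the vulnerable change landed

def new_vuln_share(findings: list[Finding], today: date) -> float:
    """Fraction of open findings introduced within the last 0-30 days."""
    if not findings:
        return 0.0
    window = timedelta(days=30)
    fresh = sum(1 for f in findings if timedelta(0) <= today - f.introduced <= window)
    return fresh / len(findings)

findings = [
    Finding("high", date(2024, 5, 20)),
    Finding("medium", date(2024, 2, 2)),
]
print(f"{new_vuln_share(findings, date(2024, 6, 1)):.0%} of open findings are new")
```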

It's kind of a carrot-stick hybrid, right? Everybody wants to hit their OKRs because they're very visible, they roll up, and people do care. That gets people motivated to reach that outcome. That's very centralized. The other piece that's very centralized is that I own the application security policies, standards, and guidelines. That's your formal, "Hey, you have to do it because here's the policy. It's Kroger policy to do X, Y, and Z."

Now, no policy actually works without a good exception process and being able to talk through and understand where you might have exceptions, nuance, and things like that. We definitely have a formal process for when folks can't for some reason meet that policy or they need more time, right? An example of that would be you have 30 critical vulnerabilities on this system, and they tell me, "We need an exception for a year because this is a massive modernization effort, and it's going to take this much time to do this whole piece, right?" That's when they centrally get an exception for that.

Outside of that, when you're getting into things that are more – I guess, a Snyk example, right? For Snyk alerting and blocking on vulnerabilities, we use GitHub Actions. In our GitHub Actions, you can either alert or block based upon the severity of the vulnerabilities that you're about to introduce, right? That is where we do give the leeway to the engineering teams and say, "Okay, you have to at least alert, right? We want you to be aware of things. We would really like you to block if that makes sense for the product that you're working on, your workflow." Or that business's appetite for risk is another factor there, right?
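As a rough illustration of that alert-versus-block choice, a minimal workflow using Snyk's published GitHub Action might look like the sketch below. The continue-on-error line is the toggle, and the job names and threshold are placeholders, not Kroger's actual setup.

```yaml
name: security-scan
on: pull_request

jobs:
  snyk:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Snyk open source scan
        uses: snyk/actions/node@master
        # "Alert" mode: surface findings without failing the check.
        # Remove this line (or set it to false) to "block" the merge instead.
        continue-on-error: true
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
        with:
          # Only fail the step on high or critical severity issues.
          args: --severity-threshold=high
```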

I don't want to mandate things across the board that get into that deep level of engineering process because you'll have so many exceptions, a lot of times for very valid reasons. But we try to set, "You have to do this baseline, which is achievable by everybody." That's the art of it: figuring out what baseline is achievable by everybody. Then there's the guidance of, look, we really want you to get here. We will, hopefully, shift that baseline when everybody can get there. Our goal is to get everybody there by X date. But if you can do that now, we would really like it.

By the way, we're going to publish the teams that are already making that second stop, really doing the celebration piece, so teams get a pat on the head for doing those things. It's a little bit of both. You have to have the centralization. But at the scale we're at, you also have to have the acceptance, right?

[00:25:52] Danny Allan: I've met companies that implement the process both through the pipeline and through the source control management system. I heard you mention GitHub Actions there, so it sounds like you do more on the pull request, on the source control side, as opposed to the pipeline where you're doing that type of assessment. Is that accurate?

[00:26:11] David Imhoff: Yes. 

[00:26:12] Danny Allan: On the process for documentation, you said you publish centralized documentation. Do you require the developers to attest to reading it or certify on it? Or is it just published and available? What's the process of pushing that information and the policy out to the development teams?

[00:26:31] David Imhoff: Yes, good question. We have a learning management system where you can assign required courses or optional courses. We have to pick and choose what goes in there, right, across all of security, so we all try to work as much of our areas as we can into the security awareness training that everybody has to take, right? Then there are a couple of different tiers, right? If you work here, you have to take this awareness training. If you're an engineer, you have to take this additional training.

It touches on all that. In a couple of the areas, it goes a little deeper, depending on what the subject matter is. But it also has you download the link to a reference guide, so they know where to find policies, procedures, things like that, right? That's the tough part: making sure people find their way to one of the millions of things they need to reference as they do their jobs. But, yes, we do have a universal secure developer [inaudible 00:27:32].

[00:27:34] Danny Allan: That's great. On the technology side, I know that you're a customer of Snyk, and I don't really want to dive into that. That's not the intent. I guess, just from a category perspective, what are the categories of technologies that you're using? You talked about supply chain, so I presume you're looking at open-source components. What other categories of technology? I'm thinking in my mind of things like DAST and SAST and secrets and APIs. What types of tools and technologies are you using in implementing the program?

[00:28:05] David Imhoff: Yes. We definitely started with the SCA and the SAST piece, right? Really trying to get a handle on our code assets themselves and the asset management related to that, right? The application security posture management space has been incredibly important here. The asset management piece is probably the most critical out of this program, I think, right? It's not necessarily a piece of traditional security tooling. But if you don't know what your code is and what the actual data classification and regulatory flags are on the application that has that code, then you can't really apply the right controls, right?

It's ServiceNow, [inaudible 00:28:45], that we use for our CMDB, making sure that that is really intact. Then we're constantly iterating, but we've got API runtime security going right now. Because we have a developer portal, right? Third-party developers externally can use our APIs. We want to be an ecosystem that people can freely use, and it's part of our business model. But with that, obviously, we have to have a lot of API runtime security because it is open to everybody that can register and get a token. So API runtime security there and the inventory pieces.

Then DAST, we have a mix. Depending on certain criteria, we might do an actual, say, red-team app scan and more of a human consult, like for those PHI, PCI environments. They get a little more white glove. Otherwise, we let a development team just review their generated DAST reports. But we do have both of those.

Then secrets has been a really big one. We have secrets scanning, and I'm sure, like for most everybody else, secrets scanning is really hard from a true- and false-positive perspective. We're constantly looking to figure out how to increase the signal and drop some of the noise in that world. Secrets scanning is definitely a big one.
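A common way to chase that signal-to-noise problem is to layer a randomness check on top of pattern matching, so a dummy value like "changeme" doesn't fire while a long random token does. The toy sketch below shows that idea; the regex, thresholds, and names are illustrative only, not how any particular scanner works.

```python
import math
import re

# Illustrative pattern for assignments that might hold a credential.
CANDIDATE = re.compile(r"""(?:api_key|secret|token)\s*=\s*["']([^"']+)["']""", re.I)

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, a rough proxy for randomness."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def likely_secrets(source: str, min_len: int = 16, min_entropy: float = 3.5):
    """Yield matched values that are long and random-looking enough to report."""
    for match in CANDIDATE.finditer(source):
        value = match.group(1)
        if len(value) >= min_len and shannon_entropy(value) >= min_entropy:
            yield value

code = 'api_key = "zX9fQ2mL7pR4vN8sT1wY6bK3"\nsecret = "changeme"'
print(list(likely_secrets(code)))  # reports the random token, drops "changeme"
```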

It's interesting where the worlds blur, right? We have a cloud security team. Where the world of infrastructure as code and container runtime security exists, we have some tooling, and we are looking to step up our game and close some loops both organizationally and from a tool and reporting standpoint. Like everybody, we have a lot of tools that we really need to work to make into a cohesive story and one view of risk, so a lot of things are going into that. I don't want to say single pane of glass. That's an overused term, but a more cohesive view of all the alerting that is happening.

The thing that is near and dear to my heart is that we are sending nine different things to an engineering team and saying, "Hey, these are all the important things from security," and they aren't prioritized against each other, right? I want it to be one view where we say, "Okay, you own this application and platform. Here are the security things we care about, and they're risk-ranked, so if you can only get to the top five, these are the top five to do," right? And so –

[00:31:32] Danny Allan: Is that cloud team that is doing the cloud security assessment, are they integrated into the ProdSec and application security team? Or is it a distinct team over on the operations side?

[00:31:42] David Imhoff: It's a distinct team, but we do talk pretty much every day. We're not technically in the same org, but we're both in security, right? My peer runs that. But as things just become code, as the technology goes, so goes the organization. I think we're really finding ourselves acting more like one organization in a lot of cases.

[00:32:14] Danny Allan: I know that everyone is talking AI these days, and you didn't mention any AI tools. Are you using, or what are your thoughts on using, generative AI tools on the development side? Does that impact security, and how do you think about that? Because I hear every day about code assistants, CodeWhisperer and Copilot, AI, AI. How do you think about those from an application security perspective?

[00:32:39] David Imhoff: We went through [inaudible 00:32:39] and a couple of those – NIST has a framework, right? We're really trying to stay up with the frameworks and the guidance and everything that's getting created. The first thing we did was say, "Okay, AI. Big scary thing. But what does that actually mean for us?" We did an exercise to look at the delta: okay, software development has these attack vectors. What are the additional attack vectors that we need to add on as part of AI preparedness?

One of the biggest things that stuck out was really dependency management on steroids, right? Pulling in the right models, right? Most people are not creating an LLM from scratch, so you're pulling in the models that are available and making sure they're actually the models that you think they are, right? Just like pulling in a bad library that's typosquatted, things like that. Same thing with LLM libraries: you pull in a bad one, and it's not great. So we're really making sure that we've got the dependency management piece buttoned up around that.
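Conceptually, that "is this the model I think it is" check works like lockfile pinning for any other dependency. Here is a minimal sketch, assuming you keep a known-good SHA-256 digest for each approved model artifact; the path and digest are placeholders.

```python
import hashlib
from pathlib import Path

# Pinned digests for approved model artifacts. The entry is a placeholder;
# a real registry would hold the actual SHA-256 of each vetted file.
APPROVED_MODELS = {
    "models/summarizer-v3.bin": "0" * 64,
}

def verify_model(path: str) -> Path:
    """Refuse to hand back a model file unless its digest is pinned and matches."""
    expected = APPROVED_MODELS.get(path)
    if expected is None:
        raise ValueError(f"{path} is not an approved model")
    actual = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if actual != expected:
        raise ValueError(f"{path} digest mismatch; refusing to load")
    return Path(path)

# model_file = verify_model("models/summarizer-v3.bin")  # raises unless pinned and intact
```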

Then data poisoning is a new thing that we're honestly still brainstorming on, iterating on, working with a lot of folks on. We definitely have detection for a lot of that stuff, but it does present a lot of very new challenges that you really have to think about differently, and you have to make sure you're not assuming that something is as easy as you might want to think it is, right? On paper, poisoning is pretty straightforward: just make sure that we limit the places people can get training data for those LLMs and make sure that those are not malicious. But when you really look at how to do that, and how fast that space is expanding – it's like the Wild West – that's a pretty scary proposition and –

[00:34:29] Danny Allan: Yes. It’s an interesting – go ahead. Sorry. 

[00:34:30] David Imhoff: AI firewalls, everything else, right? We're just really trying to make sure that as things present themselves and as people find new and innovative ways to break it, we're in that perpetual fight that all security people are in, where we're trying to catch up and prevent them from breaking the thing that's new.

[00:34:49] Danny Allan: Well, AI is definitely changing the space. We see it being used in everything from, obviously, code generation but also in security assessment and in fixes and in reducing attack surface. I do think it's going to change the industry in very significant ways.

I guess two more questions for you because I know our time is almost up. You're a year into this, and you've rolled out a program. What's the best advice you would give someone? If they're tackling rolling out a program similar to what you've done, across thousands of developers in a huge organization, what is your best advice or first advice for that individual?

[00:35:25] David Imhoff: Network and make friends. That's the number one thing. You can't do this sitting in this office, right? It'll never happen. So really spend a lot of time investing in making sure that you are seen as a friend and [inaudible 00:35:42] that is legitimately trying to understand their problems and help them with their deliverables in a secure way. Without that, this program or any program will never go anywhere, right? It's too easy to just ignore people, especially in a company this size.

[00:36:01] Danny Allan: It's actually an interesting point that people overlook. There's historically been so much friction between security and development because security has slowed them down. Ultimately, it's a relationship business of making friends and allies and partnerships within the organization. I like that, and I'm a big believer in the same thing. I always say the technology is easy. It's the people and process where you find your challenges.

[00:36:25] David Imhoff: Yes. People are tough. 

[00:36:27] Danny Allan: People are tough every time. Last question, I always end on this one. AI is obviously changing the world. If you could take AI and automate one part of your job that you're doing right now – and it doesn't have to be security-related. I'm just curious. I'm always looking at how we can actually use technology to make our lives better. What part of your job would you take AI to solve or smooth out or make better?

[00:36:53] David Imhoff: Honestly, so it's interesting. I literally had this conversation this morning. It would be finding a way to use AI to implement threat modelling. To me, that is the hardest thing in the world to do because it's so manual for teams to do. It takes so much education, and then they have to actually care enough and think about it in time to do threat modelling and to think about possible attack vectors. 

If there was some way to use AI to just magically help people at the right time, in the right place, to perform threat modelling, or at least derive and pull up the potential threats that I should be designing for as we talk about this product. Hands down, that would be it.
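One naive way to prototype the idea David describes is simply handing a design description to a general-purpose LLM and asking for STRIDE-style threats. A minimal sketch, assuming the openai Python client and an API key in the environment; the model name and prompt are placeholders, and this is a starting point, not a substitute for a real threat modelling practice.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

design = (
    "A public REST API behind a developer portal; third parties register, "
    "receive a token, and call endpoints that read customer order data."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[
        {"role": "system", "content": "You are a threat modelling assistant. "
         "List the likely STRIDE threats for the described design, with one "
         "suggested mitigation per threat."},
        {"role": "user", "content": design},
    ],
)
print(response.choices[0].message.content)  # draft threat list for human review
```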

[00:37:34] Danny Allan: Well, you've touched on a point that I loved. We could go for another hour on threat modelling because that's a discipline, as you know, that's been around for 20 years, and no one has really solved it. I do think that AI has a massive part to play there in addressing something that fundamentally is the furthest shift left. We talk about shifting left. Threat modelling is really the furthest left that you can possibly get because it's identifying the systems and then the mitigations that you should have in place. I love that answer, and maybe we have to have you back on to talk specifically about threat modelling. 

Is that a practice, David? Sorry, I said that was the last question. Is that a practice actually that you employ across the organization? Is that part of the process for the development teams?

[00:38:16] David Imhoff: We try really hard to. To your point, it is the furthest left. The hardest thing is just getting embedded in their minds enough that they bring you up while they're actually in the ideation phase, right? Because it's impossible to know where to be if you don't know when they're thinking about building a new thing, right? We have some successes.

Like every company I've ever worked at, and I'm sure every other company in the world, some people are just not programmed to think about you as security when they're trying to think of the next thing they're going to build. But we certainly try to push that and do everything we can to spark some memory of, "Oh, I should bring in security. They'll make this easier, right?"

[00:39:01] Danny Allan: Yes. I always say the win of threat modelling is not the fact that you come out with the artefact of a threat model, which is important. The win of threat modelling is simply just the awareness early in the cycle. You're really building a culture. To your first point, a culture of security within the engineering organization. 

Well, that is fantastic, David. Thank you for joining us today and talking a little bit about deploying the program. I know that that's something that a lot of organizations struggle with. Understanding your experience and what has been successful has been very valuable, so thank you for joining us. 

[00:39:33] David Imhoff: Thanks for having me. It's been a great time. 

[00:39:35] Danny Allan: Excellent. Thank you, everyone, for joining us on the podcast. We look forward to having you next time. Until then, stay secure, keep coding, keep delivering. Thank you. 

[END OF INTERVIEW]

[00:39:48] Guy Podjarny: Thanks for tuning in to The Secure Developer, brought to you by Snyk. We hope this episode gave you new insights and strategies to help you champion security in your organization. If you like these conversations, please leave us a review on iTunes, Spotify, or wherever you get your podcasts, and share the episode with fellow security leaders who might benefit from our discussions. We'd love to hear your recommendations for future guests, topics, or any feedback you might have to help us get better. Please contact us by connecting with us on LinkedIn under our Snyk account or by emailing us at thesecuredev@snyk.io. That's it for now. I hope you join us for the next one.
