Season 3, Episode 20

Using ThreadFix With Dan Cornell

Guests:
Dan Cornell

"Dan Cornell: I've always had an interest in security and now I think everyone's learned that everybody writing code is responsible for security. It's all evangelist. You've got to convince these people that the way they're viewing the world is not comprehensive enough and that's a big uphill climb.

Guy Podjarny: Developers, fundamentally, are building functionality, they're building scale. They don't set out to build something secure; they set out to build something that does something.

Dan Cornell: There are a couple of different reasons why an organisation isn't addressing security risk. And that kind of goes along the maturity spectrum, where at every level of maturity, you have an excuse not to address security."

[INTRODUCTION]

[0:00:38] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter, @thesecuredev.

[INTERVIEW]

[0:01:09] Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer. Thanks for joining us. Today, we have a great guest with us. We have Dan Cornell from Denim Group. Welcome, Dan.

[0:01:18] Dan Cornell: Thanks for having me on.

[0:01:20] Guy Podjarny: It's great to have you join us, Dan, for many, many reasons. There are a lot of interesting topics I want us to talk about today. But a big starting point for it is that you have been in this world of application security for a long, long stretch. I'd love to pick your brain and hear a little bit about the evolution of it. Before we dig into it, can you share a little bit about who you are, what you do, and a little bit of your background leading up to today?

[0:01:45] Dan Cornell: Yes. I'm Dan Cornell, I'm the CTO and one of the founders of Denim Group. I'm a software developer by background, computer science degree, math minor, because I was really cool when I was in college. On the software development side, I did a lot of server-side Java in the mid-to-late nineties, and some early server-side .NET stuff in the early 2000s. But really, what I've spent the majority of my time in my career doing over the last 15 years is working to bridge the gap between security teams and development teams. Trying to help organisations understand the risks that they are exposed to because of the software that they're building, and to help them, from a systematic standpoint, create development life cycles that let them create secure software reliably. I'm a software developer who came into the world of security, as opposed to being someone with a more traditional security background in systems administration, network penetration testing, IT audit, or something like that.

[0:02:48] Guy Podjarny: Yes, that's definitely, I think, a very valuable perspective today. A lot of the conversations I've had the pleasure of having here on the podcast show that the brightest minds in AppSec today have had some good investment in dev in their background. Tell us a little bit about your work in application security over the last, what is it, 15 years now, just to set some context for what you've seen.

[0:03:19] Dan Cornell: I've always had an interest in security, even going through my university education. After that, I followed the stuff that the L0pht folks were doing. Again, I was writing e-commerce websites at the time, not directly responsible for security, or so I thought. Now, I think everyone's learned that everybody writing code is responsible for security, but it wasn't a primary concern of mine back then, though it was certainly an interest of mine.
Again, my background was in custom software development, running a consulting company that did that, or consulting companies that did that. About 15 years ago, I was introduced to John Dickson, who's now also one of the principals at Denim Group. John's background is much more that of a traditional security guy. He came out of the Air Force, where he was an intelligence officer and information warfare officer, and had worked in information risk management at KPMG. His resume looks much more like the traditional security practitioner's.

We were introduced at a tech networking event and got to talking. As we were talking, I said, "Well, here's my background as a software developer. I'm interested in security stuff." He said, "Well, everybody in the security space has a resume that looks like mine, but I think the really interesting problems, the really scary and challenging problems right now, are all around applications. It's not that the network and infrastructure layer has been solved, but everybody at this point has pretty much figured out they need to put up a firewall. But then they poke holes in the firewall at ports 80 and 443."

The real security challenges we're seeing right now are at the application level, but none of the existing set of professionals, none of the installed base, if you will, of security people understand software development, understand IDEs, understand web programming. All the folks in security that have a background in programming did COBOL or Fortran way back whenever. That's really the genesis of our security practice at Denim Group: saying, let's look at what we're doing on the custom application development side of things. There were things we knew we had a responsibility for, security-wise; again, encrypting credit card data as it's in transit and at rest. Really, that was about the limit. But let's look beyond that, and start to look at the security implications of the code that organisations are writing. Out of that set of conversations is really how, both for me personally and for what we were doing at Denim Group, the consulting services we were offering expanded when we started working with John.

[0:06:02] Guy Podjarny: Maybe let's dig a little bit into some examples. You come in, and you work with companies to fix either existing, explicit security flaws, or how they handle security in their process. Can you give us some examples of common blockers? I mean, why aren't these companies addressing these security risks? What changes do you start by making inside a company, and what keeps them from being self-sufficient here in the first place?

[0:06:33] Dan Cornell: Right. There are a couple of different reasons why an organisation isn't addressing security risk. It kind of goes along the maturity spectrum, where at every level of maturity, you have an excuse not to address security. At the base level, the entry level, a lot of organisations haven't thought of their applications as being a conduit for security attacks. When people think security, they think, "Oh, well, that's a specialised subdivision of the IT genre. Those are the guys that do antivirus, they do firewalls, and they make us watch the training videos every year about not clicking on bad links." That's what security is.

Those organisations at an exceptionally low level of maturity don't even know that it's a problem. When you ask them if their applications are secure, they say, "Yes, we've got a firewall, and I bought the fancy SSL certificate from our provider." Those people don't even know they have a problem, and in a lot of ways, that's the most challenging kind of organisation to work with, because it's all evangelism. You've got to convince these people that the way they're viewing the world is not comprehensive enough. That's a big uphill climb.

Then, you get into organisations that are more sophisticated, where they know it's a problem; probably, they've done some assessments, they've identified some vulnerabilities. But the challenge there is that a lot of times, the security team and the development team don't communicate well. They don't speak the same language, they don't use the same tools, and in a lot of cases, if you look at the short term, they have competing aims. The challenge we see with security in more mature organisations is, they know they have a problem, but that problem is prioritised below other things that are being done. Hey, we've got all this other work to do; now you're adding this security stuff on top of that, which I maybe knew I needed to do, but didn't really recognise.

Like I said, at every stage of an organisation's development, they can always find something else to do, because security isn't the only thing that organisations need to care about. I think that's something a lot of people in the security industry need to wrap their heads around: what they're doing is important and valuable. Risk management is a key component of what organisations need to do if they want to survive. But in a lot of cases, that is defence, or certainly, in most organisations, it's seen as defence, not as an enabler. I think security people need to understand there's a whole lot of other stuff going on that is much more directly generating business value. So, figuring out how to incorporate security as a component, but ultimately making it more of an enabler, is where I think a lot of security teams need to place their focus if they want to stay part of the conversation.

[0:09:35] Guy Podjarny: Yes, absolutely. I talk a lot about why humans make insecure decisions, and maybe specifically why developers or security people make insecure decisions. The first bullet in there is around motivations; it's around what it is that you're doing. Developers, fundamentally, are building; they're building functionality, they're building scale, and security is a necessity within that. But it is not their primary motivation. They don't set out to build something secure; they set out to build something that does something.

[0:10:02] Dan Cornell: Exactly.

[0:10:03] Guy Podjarny: And they need it to be secure in that process.

[0:10:05] Dan Cornell: Right. Just think career path. Think of your reputation on your team. If your reputation on the team is, that guy writes really secure code, and he's never met a deadline that he signed up for, versus the guy who, hey, that guy's a little bit of a cowboy, but every time he delivers, on time, on target. Which of those two individuals is going to experience greater career success? Just from a competition standpoint, the person that can never quite get it done is, in most cases, less valuable for teams than the person who finds a way to get the job done. Obviously, those are two extremes. But if you think about what characteristics, what qualities, what accomplishments are given credit, and are recognised, appreciated, and rewarded, it's important to see those motivations and the incentive factors that are in place.

[0:11:02] Guy Podjarny: Yes. Actually, let's double-click a little bit into this communication. You come into a team at, let's say, that second level of maturity. They have a development team and a security team not communicating well. You mentioned that they might even have competing aims. What are the different aims that you see, and what are a few practices that you try to instil to help fix that?

[0:11:28] Dan Cornell: From the security side, typically, the aim, or what the aim should be, is: let's bring an appropriate level of risk management to what we're doing in the organisation, so that we're supporting innovation, we're supporting progress, but we're not putting the organisation in a situation where it's too exposed. There are different ways to look at that, but that's how I view most security teams. At the base level, let's make sure we don't do anything that's going to get us breached in an unacceptable way. Better yet, let's look at how, by providing this risk management, we can enable the organisation to do even better things; like having a net under your tightrope act.

On the development side, their goal is to provide new capabilities, to provide innovation that is going to allow the organisation to be successful in the marketplace and to provide value to its various stakeholders. Again, in a lot of situations, it's the development teams that are saying, "We need to go, go, go, because the business told us go, go, go." Security is perceived as being the Department of No. Like, "Hey, we want to do this." "Hey, fill out this form. No, we're not going to let you do that, but fill out this form instead." It's not that either group is right or wrong. But looked at in the most basic way, one group is trying to move forward, and one group is perceived as trying to slow that forward progress. That creates a lot of problems.

[0:12:56] Guy Podjarny: How do we fix that? What are some sample pieces of advice that you give to these organisations, or that you work with them on applying, to get them to a better place?

[0:13:06] Dan Cornell: The attitude in security, I think, can change, and it's like improv comedy. In improv comedy, if somebody says, "I'm a guy who forgot to take my umbrella out in the rainstorm," the next person isn't supposed to take over and say, "No, no, actually, we're going to do this other thing." In improv comedy, whatever the other person says, you're supposed to say, "Yes, and." I think security teams need to do a little bit of improv comedy, where when somebody comes up and says, "I've got this crazy idea where we're going to have a website where you don't have to log in. It doesn't make you log in, but it's still going to give you access to your account," it's not security's job to say, "No, that's the dumbest idea I've ever heard." It's their responsibility to say, "Yes, that's a very interesting idea, and here are some things that I think we could do to address some potential problems with that."
That means changing the attitude away from, "Oh, my users are stupid. I wish they'd stop doing this. The developers are stupid; I need the executives to help me shut them down." A perspective of "I need to stop these folks" is going to be pretty self-defeating for the security people, because in organisations, the value you derive from moving forward, and increasingly, the value you create from being able to move forward and innovate quickly, in a lot of cases outstrips the short-term risks that you're exposed to. Again, organisations are going to ask, "Where are our incentives?" Certain organisations may need to be the most secure. But I think even organisations that believe they need to be secure know, at the end of the day, that they really need to be serving customers, that they need to be driving forward, that they need to be innovating. Those are the forces that are going to win in the marketplace.

For security folks, I think it's really important to view themselves as enablers and ask, how can we help the organisation move more quickly and safely? As opposed to the default answer to anything being, "Well, no, we can't do that," or, "How can I slow this down or put some additional control in place?"

[0:15:15] Guy Podjarny: Yes. I'm always struck by the analogies to the DevOps movement, or how the word security there could have been swapped for the word ops at some earlier point in time, a decade ago, or not even that. That has changed, and I think the most successful businesses are indeed the ones where operations is the business enabler: look, we do ops so amazingly well that it allows us to move faster, and still do it in a high-calibre ops environment. I guess the goal would be to have security work in the same fashion.

I also love that, a lot of times when you talk about developer security, about mobilising a more secure or modern dev environment, a lot of the conversation revolves around what developers should do. But I very much relate to the fact that there are just as important changes, sometimes more so, that the security team needs to make, that the security industry needs to make, to get to that better place. Do you see that acceptance changing? I mean, you've seen the AppSec world, the OWASP world, all this evolution of application security over this whole stretch. Do you feel like there's acceptance or reluctance to this right now? Is it becoming known wisdom that they need to change, that we, as an application security industry, need to change? Or do you think people are still pushing back against it?

[0:16:41] Dan Cornell: I think if you look at the people in the industry, they exist on a spectrum of that understanding. Same thing if you look at organisations; they exist on a spectrum of that understanding. With my experience in OWASP, and the type of people that are involved in OWASP, I think they very strongly have that understanding, that point of view, which I believe is right, and we'll ultimately see if it's right. But I think a lot of the leaders in OWASP very much have that view, which is saying, this is, per Steve Ballmer, all about the developers, developers, developers.

The way that I look at it, at the end of the day, the developers are going to have to change their behaviour and change their actions if we want to see more secure code. Obviously, there are external things that you can bolt on or insert into the process. There are WAFs, there's RASP. I think those certainly have a place in a program and in a protection scheme. But at the end of the day, developers are going to need to change their behaviour to stop introducing new vulnerabilities into code, and developers are going to need to change their actions to fix the vulnerabilities that are already out there.

Security certainly has a role in advising, in building awareness, and in providing direction. But at the end of the day, if you want more secure applications in your organisation, it's the developers that are going to have to do something different tomorrow than what they did today. As a result, per my view, and I think per the focus of OWASP and a lot of the leading voices in the industry, the way they view this is very much: how do we support and enable developers with their training and education, with the processes they work in, with the tools they're using? How do we change these factors so that we get better security outcomes?

[0:18:35] Guy Podjarny: Yes, that makes perfect sense. At the end of the day, there's work that needs to get done; the security actions, the security activities, need to happen. You want to enable and empower developers, but they need to embrace that responsibility. Let's indeed double-click a little bit on how to make it easier around tools. Maybe it's a good opportunity to bring up ThreadFix. This is a product offering, I guess, from your space as well, that I find really exciting as I dig into it. Do you want to tell us a little bit about what ThreadFix is and what brought it about?

[0:19:12] Dan Cornell: Right. First, it's not a product, it's a platform.

[0:19:17] Guy Podjarny: Okay. I already stand corrected.

[0:19:20] Dan Cornell: I've told a lot of people the story, but I'll tell you the story of how ThreadFix came about. It came about from us watching the interactions between security teams and dev teams. We were working with a financial services organisation, helping them set up their software security assurance program, their secure development lifecycle. One of the security analysts needed to do testing on a web application, one of their important line-of-business web applications. He took one of the commercial scanners, ran a scan, generated a 300-page PDF with a colour graph on the front, and handed it to the development team. And said, "I'm from security. I'm here to help you. We did some testing of your application. We found a number of vulnerabilities. Because security is really important, we need you to fix this stuff." The developer, playing along, said, "Okay. This is a pretty big report. Which of these vulnerabilities do we have to fix?" Security said, "Well, this is security; you have to fix all of them. This is the most important thing. There are hackers out there." The developer said, "Okay. Well, how do I fix them? We didn't put these vulnerabilities in there intentionally. How do we fix the vulnerabilities?" The security person said, "Well, I'm pretty sure there are some instructions in the report on how to do that."

The security person wanders off; the developer takes the report, puts it in the bottom drawer of their desk, and forgets about it. A couple of months later, security was doing some perimeter scanning, so they had a different service that they turned on and pointed at this application. They ran another scan and generated a 200-page PDF with a different colour graph on the front. A security representative went to the dev team lead and said, "Hey, we did some more security testing. Here's the report of additional stuff that you need to do. By the way, how did fixing all that other stuff turn out?" The dev team representative said, "Well, how is this different from what you gave me before? Is this the same vulnerabilities, different vulnerabilities? I don't understand." The security representative says, "Well, I don't know. There might be some overlap; there might be some same things, different things. Not sure."

The development team lead went up to their line-of-business management and said, "Hey, the security guy came around again. And this time, he's actively wasting my time, because he's got this report that's 300 pages and this report that's 200 pages. He can't tell me where there's overlap; he can't tell me what I'm supposed to fix. I've got these features that this hotshot VP promised to an important customer, and I've got to get those out the door. We've got these performance bugs that are really aggravating some people, and we have non-security-related bugs that are making customers angry. How do I prioritise all of this stuff?"

A rock got dropped from on high on the security team that said, "You can't speak to another developer until you can provide them with a single list of what needs to be fixed, until you can provide a justification for why these vulnerabilities need to be fixed rather than us continuing to implement the features we've promised to customers or doing the other things we need to do, and until you can provide specific instructions on how to fix these vulnerabilities. Until you can meet those criteria, you don't have the authorisation to speak to another person on a development team." The security representative does what is natural: fires up Excel and starts cutting and pasting the results from the different reports, trying to deduplicate them.

We watched this interaction, and a couple of things struck us. Number one, no organisation feels like their security team is overstaffed or underworked; at least, I haven't met one. If there's a team out there like that and you have a job opening, please let me know; I would love to be your colleague. Every security team has very limited resources, and this is obviously a misuse of those resources. But what I also realised and noticed from this was that neither of these groups was working in bad faith. Nobody was trying to be a jerk. Nobody was playing crazy politics or anything like that.

The security analyst was trying to test the website, find vulnerabilities, and hopefully get the risk reduced by getting those vulnerabilities resolved. That was that person's job, and they were doing it the best they knew how. Looking at the development team, it's not like they wanted to write code that had security vulnerabilities in it. It's not like they were maliciously saying, "We're going to show that security guy. Let's see if we can get 10 more SQL injections into our application." They were doing what they needed to do, which was to build features that allowed for innovation and made customers happy, and to address the most glaring issues that were degrading their customers' experience: performance problems, non-security-related bugs, and whatnot.

Both groups were acting in what they thought were appropriate ways. But the communication pattern was so horrible that neither group was ever going to get anything done, and they were just destined to be in conflict with one another. It was great to be a fly on the wall, if you will, to watch these interactions, because they showed us a pattern that we saw over and over again in organisations: the security teams and the development teams are speaking different languages. They have, at least in the short term, very different motivations and incentives. And because they're using different tools, in a lot of cases they're talking past one another in the way that they communicate.

Watching that interaction actually led us to build out ThreadFix, to say: how can we make it easier, on the left side of the equation, for security teams to manage all the different stuff they're doing to identify vulnerabilities; dynamic scanning, static scanning, component lifecycle management, open-source vulnerabilities, things of that nature, along with all the manual stuff? Then, on the right side of the equation, how do we turn these vulnerabilities that the security team cares about into software change requests, or software defects, that the development teams care about? Because if you think about it, it's like going to a development team and saying: 90% of the time, when you're doing your work, you use Jira, or Bugzilla, or whatever defect-tracking system you're using; that's where you manage your workload. But 10% of the time, when you're doing magic security stuff, you work off this PDF that we've printed out and put sticky notes on.

If you take a step back and describe it that way, it's a crazy way to communicate with dev teams, but that's still the way that security communicates with these development teams: "Hey, we're going to do some testing, and we're going to shoot over a PDF. We're going to do some testing, and we're going to shoot you an Excel spreadsheet. We expect you to work down through that in order to address these issues that we found." It's those data management challenges, but more importantly, the communication challenges, that led us to put together ThreadFix.
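
To make that deduplication exercise concrete, here is a minimal sketch of what the analyst's Excel cut-and-paste amounts to in code. This is not ThreadFix's implementation; the CSV field names (cwe, url, parameter) and file names are hypothetical stand-ins, since every scanner uses its own export schema:

```python
import csv
from collections import defaultdict

def load_findings(path):
    """Load one scanner's CSV export into a list of finding dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def dedupe(reports):
    """Merge findings from several reports, keyed on what makes two
    findings 'the same' vulnerability: its type (CWE) plus its location."""
    merged = defaultdict(list)
    for report in reports:
        for finding in report:
            key = (finding["cwe"], finding["url"], finding["parameter"])
            merged[key].append(finding)
    return merged

# Two hypothetical exports: one dynamic scan, one static scan.
reports = [load_findings(p) for p in ["dynamic_scan.csv", "static_scan.csv"]]
for (cwe, url, param), sources in dedupe(reports).items():
    print(f"{cwe} at {url} ({param}): reported by {len(sources)} scan(s)")
```

The merge key is the whole trick: two findings count as the same vulnerability when they share a type and a location, which is exactly the judgment the analyst was making by hand across two PDFs.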

[0:26:35] Guy Podjarny: That's an amazing kind of cautionary tale. And you're right, it resonates, also painfully, when you hear it; it just happens so often. We see it often in the world of open-source security, where you ask somebody how they handle open-source vulnerabilities, and you'll get a 10-minute answer about all the wondrous ways; well, hopefully. In some cases, you get a fairly alarming "No, we don't" answer. But if there's good progress, they'd say, "We're finding them here, and finding them there, and finding them here." Then, you ask them, "Well, so you're finding them, but how do you handle those vulnerabilities? What do you do next?" Then, suddenly, there's a disastrous story about multiple triage committees, and they fall into these buckets and those buckets, into these custom processes, the sub-security team under the top security team.

At the end of the day, the path to remediation is nowhere near as good or as concrete as it should be, even though everybody agrees that that's the end goal. It's not just to find them; it's find them first, do the risk management, but then subsequently fix them and improve the risk posture. ThreadFix is aimed at that communication channel. Is it a tool that makes the developers smile, or the security people, or both? Who's typically taking to it?

[0:27:53] Dan Cornell: Our goal is that, if we're successful, the average developer doesn't even necessarily know that ThreadFix is being used in their organisation. Because from a philosophical standpoint, ThreadFix is really targeted at the application security team. Find all the teams developing software in your organisation and all the applications they're responsible for. Then, load in the results of all the testing you're doing: static, dynamic, IAST, open-source management, manual pen testing, code reviews, all that stuff.

Let the security team manage the data inside of ThreadFix to determine which of the vulnerabilities we think are the most serious, which have compliance implications, service-level agreement implications, whatever that might be. But then, what we want to do is reach out to developers in the tools that they're already using; in this case, most specifically, defect-tracking tools. Let's bundle up the vulnerabilities we consider sufficiently important to merit developer attention, bundle them up in a way that's going to make sense, and create defects based on that. Maybe that's grouping things by vulnerability type, maybe by where in the application they're located, whatever that might be. But how do we bundle these things up and make that transition between vulnerabilities that the security teams care about, and bugs, or backlog, that the development teams care about?

Most developers don't need to log into ThreadFix. They don't need to learn a tool, they don't need a login, and you don't need to train them. Bugs just get assigned to them in their scrum meeting, or whatever meeting tempo the organisation has. Those bugs are just going to show up in their defect-tracking system saying, "Here's the problem, here's how to fix it. Let us know when you're done." That's really what we strive to do: how do we make it as easy as possible for developers to get the information they need to fix these problems? How do we take friction out of that process? Because when you take friction out of the process, what we found is that developers fix more bugs faster.
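
As a rough illustration of that handoff, the sketch below bundles all findings of one vulnerability type into a single defect using Jira's REST API (a plain POST to /rest/api/2/issue). The server URL, credentials, project key, and finding fields are placeholder assumptions, not ThreadFix's actual integration:

```python
import requests

JIRA = "https://jira.example.com"   # placeholder server
AUTH = ("svc-appsec", "api-token")  # placeholder credentials

def file_defect(vuln_type, findings):
    """Create one Jira issue covering every instance of one vulnerability type,
    so the dev team gets a single actionable bug instead of a report."""
    instances = [f"- {f['url']} (parameter: {f['parameter']})" for f in findings]
    payload = {
        "fields": {
            "project": {"key": "APP"},       # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": f"Fix {len(findings)} instance(s) of {vuln_type}",
            "description": "Instances found by security testing:\n"
                           + "\n".join(instances),
        }
    }
    resp = requests.post(f"{JIRA}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "APP-1234"
```

From the developer's point of view, the result is just another bug in the backlog, with the instances listed inline; no security tool to log into.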

I want to say, in one organisation we worked with, the mean time to fix went down by, I think, 46%, which is great. Because just like you said before, finding vulnerabilities is an important part of the process, but getting them fixed is where the world actually gets better. Again, I've been doing app security testing for 15 years or something like that, and finding vulnerabilities isn't the problem. That's never been the problem in any of the testing engagements, in any organisation that's rolled out static analysis, dynamic analysis, or IAST. In all my experience doing testing and helping organisations set up testing programs, finding vulnerabilities isn't the issue.

Actually, finding vulnerabilities in a lot of cases is the problem, because you stack up this mound of vulnerabilities that is just monotonically increasing in size, because things aren't getting fixed. Where organisations really get value, the win for organisations, is to figure out which are the actually important vulnerabilities that need to be addressed, and to get those resolved and pushed into production by the development and operations teams. That is where organisations struggle. Again, we've seen so many static analysis rollouts where, with each app you go through, you're stacking up more and more vulnerabilities, especially looking at a lot of the untuned static analysis engines that are just stacking up a bunch of info or low-level stuff.

Same thing with dynamic analysis, and any type of automated analysis you're doing; it's going to generate a lot of stuff. Some of it is false positives; some of it is stuff you maybe don't need to worry about. But again, you get this mound of vulnerabilities that just increases in size over time. More attention needs to be paid to the other side of that process, which is: of the vulnerabilities we've identified, which of these are we actually going to fix? How do we get them in front of the developers to get them fixed? It's a lot less sexy. You're not going to be speaking at DEF CON about your sweet remediation hacks. Although, I think Black Hat has some more blue team stuff this year, which is great. But again, if you look at the industry, the InfoSec rockstars are not the ones that are fixing the most stuff.

[0:32:14] Guy Podjarny: No. They're the ones hacking the Jeeps, and finding whatever, breaking into your brain.

[0:32:20] Dan Cornell: Exactly. That's not to say that that isn't valuable, but it's something very discrete: I went in, I did a test, I found this stuff. It's a challenging intellectual endeavour to do testing and find new things. The real challenge comes on the other side. I'm going to butcher the quote, but I think [inaudible 0:32:41] says something like, level seven in the OSI model is applications, and level eight is politics. I'm sure I'm butchering that quote, but that's where you see: okay, we've done all the technical stuff that we need to do to find the vulnerabilities. Now, we've got to hack humans and systems in order to get these vulnerabilities resolved. For better or for worse, that is not as cool as finding the stuff in the first place.

[0:33:07] Guy Podjarny: Yes. But no less, if not more, important. Sounds awesome. Just as a follow-up, if somebody wanted to check out ThreadFix and try it out, where do they go?

[0:33:17] Dan Cornell: They can go to threadfix.it. We've got a free trial download. The easiest way to get up and running: if you just submit the contact form, we can share an Amazon image with you. Given an account number and a region, we can just shoot that over to you, and you spin it up. It's already pre-configured.

[0:33:33] Guy Podjarny: Awesome. I think, Dan, there's a whole bunch of other questions that I have, but we're running a little bit out of time. It's been a really, really fascinating conversation around the evolution of not just the dev-to-security channel, but also the security-to-dev channel, including the conversation at the start, and ThreadFix and what it represents. At the end of the day, communication cannot be one way. It has to be both channels, and we need to adapt and create those communication channels. Awesome conversation, and thanks for sharing your experience.

Before I let you disappear, I have one question that I like to ask every guest on the show. If you had one pet peeve, or one top piece of advice around security, to offer a team looking to level up their modern dev security posture, what would it be?

[0:34:25] Dan Cornell: I think my pet peeve; well, I don't know if I'd call it a pet peeve, but one of the things that I think has tremendous promise is the security champions model. You still have a central security team that provides certain functions, but you start to embed security knowledge into dev teams, so that every dev team can say, "Hey, here's the security person I can go talk to if I've got a question about a vulnerability, or a question about authentication and authorisation." I think the model of having a monolithic security group that does everything is destined to fail. Again, that's not to say that central security groups don't have an important role. But I think the question is, how do we embed some security knowledge? Whether that's taking someone on a given team who has an interest in, or an aptitude for, security and providing them with some training and development, or taking someone from the outside and embedding them in that team.

I think that goes along with the DevOps cultural transformation saying that development and operations, at the end of the day, have the same goal: how do we generate shareholder value for this business? Or, how do we generate stakeholder value? I think, similarly, breaking those barriers down between security teams and development teams is critical for success. How do we make security knowledge local, so that every team has the ability to easily reach out for it? Again, that security champions model of having embedded expertise and knowledge is one that I've found to have a tremendous amount of success and value.

[0:36:01] Guy Podjarny: Excellent. That's a great tip and bit of advice. Dan, thanks a lot for coming on to the podcast.

[0:36:07] Dan Cornell: Well, thank you very much for having me. I really had a great time.

[0:36:10] Guy Podjarny: Thanks, everybody, for tuning in, and join us for the next one.

[OUTRO]

[0:36:13] Announcer: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter, @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over 100 videos about building developer tooling companies, given by top experts in the field.
