
Season 2, Episode 13

How New Relic Does Security With Shaun Gordon

Guests:
Shaun Gordon

In the latest episode of The Secure Developer, Guy is joined by Shaun Gordon, Chief Security Officer at New Relic. Shaun tells us how he got into a career in security and explains how the role of security has evolved at New Relic. He reveals their philosophy of adapting security processes to fit the way developers do their job and emphasizes the importance of exception alerts, scorecards, and automation to support a rapidly scaling organization.

The post Ep. #13, How New Relic Does Security appeared first on Heavybit.


“Shaun Gordon: I was never the sort of traditional hacker who liked attacking things, and I really approached security as a developer, as a defender sort of from day one. We have tools that sort of automatically monitor that, alert us, and then we can go and ping those developers and decide sort of how much we need to get involved in the process. We try to be as lightweight as possible with the developers. I have a philosophy. I want to change the way we do security to fit in with the way the developers perform their job.”

[INTRO]

[00:00:34] Guy Podjarny: Hi. I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow. The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev.

[INTERVIEW]

[00:01:05] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, we have with us Shaun Gordon from New Relic, who's the Chief Security Officer at New Relic. Thanks for coming on the show, Shaun.

[00:01:14] Shaun Gordon: Thank you.

[00:01:15] Guy Podjarny: Before we get going, do you mind giving the audience a bit of an introduction about yourself? What do you do? What's your background?

[00:01:22] Shaun Gordon: Sure. Well, I actually started my career way back when as a software developer, doing IBM networking to HP minicomputers many years ago, but I moved through a whole bunch of roles and sort of ended up in security almost by accident about 12, 13 years ago, when I was helping deal with a security incident at Intuit, where I was working back then. It turned out to be something I was really interested in. I moved into security, worked in security at Intuit for almost 10 years, I think, and then joined New Relic about five years ago as basically its first security hire, when the company was pretty small, about 140 people. I basically built up the entire security team to the point we are now, with about 15 people focused on security.

[00:02:06] Guy Podjarny: Cool. Interesting. The switch from Dev to security, was that well-embraced in Intuit at the time?

[00:02:11] Shaun Gordon: Yes. I actually made the shift. It was almost to an operations role, where I was sort of looking at the health of the overall website for Intuit. That's sort of – we had a security incident, and there weren't a lot of security people, and no one knew how to handle this. So I sort of took it on, and it just became a passion for me.

[00:02:28] Guy Podjarny: Cool. I think companies tend to embrace anybody moving into security today. First of all, most organizations would have a whole bunch of open recs in security because talent shortage is a problem. But maybe 10, 15 years ago, that would have been a little bit more novel.

[00:02:45] Shaun Gordon: Yes. It's actually interesting. When I talked to a lot of my peers, they come from different directions. I mean, I was never the sort of traditional hacker who liked attacking things. I never got those skills formally anywhere. Just sort of backed into it and really have approached security as a developer, as a defender sort of from day one.

[00:03:03] Guy Podjarny: Yes. Makes perfect sense and, in fact, spot on for this show as well. I guess while we're at it, can you tell us a little bit about how does security work at New Relic, just sort of the headlines? Clearly, a lot of the focus is on sort of Dev and how is sort of the Dev process involved with security.

[00:03:21] Shaun Gordon: We have a central security team. As I mentioned, it sort of started with me as the single security person five years ago. I added an AppSec person, which was something I needed to do, but it was actually a sad day for me because that's always been my passion, so I was handing that off to somebody else. Since then, I've built the team up into really three areas. I've got my AppSec team, which I'm sure we'll get into a lot more in terms of what they do. I've got a compliance team that does sort of traditional compliance activities, our SOC 2 certification, FedRAMP, SOX, all those sorts of things. Then I've got sort of the traditional infrastructure, IT security team that focuses on both the security of our product itself, the infrastructure, data centre security, as well as our corporate IT security.

[00:04:06] Guy Podjarny: Okay. It's a 15-person team right now. You're split, right? You're distributed to that team or?

[00:04:11] Shaun Gordon: Yes. When I joined the company, it was fairly evenly split between San Francisco, where we're sort of headquartered, and Portland, where we had most of our development and support. I assumed I'd build the team here, that most of my hires would be here. It turns out I only have myself and two other people in San Francisco, and I've really built up the team in Portland primarily. A large part of that is because our development organization is there, and it really makes sense to have the developers working very closely with my application security people. We've also had really good luck finding really good talent there versus in the San Francisco area, where there's a lot of competition.

[00:04:46] Guy Podjarny: Yes, definitely. I think that's true on all fronts in security. It's no different on that front.

[00:04:50] Shaun Gordon: Yes. Really, right now, the majority of the team, the AppSec team, the infrastructure team are pretty much primarily located up in Portland working with the developers, which is – that's been a great thing for us because they act as if they're developers. They attend the same meetings. They tend to follow the same processes. Although we're a completely separate organization, from all outward appearances they are developers.

[00:05:16] Guy Podjarny: Yes, interesting. I mean, I think that probably brings us into sort of a good opportunity to dig into the AppSec process, but I like the starting point of basically prioritizing security's physical proximity to Dev as part of it. Okay, cool. That's sort of the overlay team. How does application security work?

[00:05:36] Shaun Gordon: We try to be as lightweight as possible with the developers. I have a philosophy for my team, which is I want to change the way we do security to fit in with the way the developers perform their job, versus trying to get them to adapt the way they work to what we're doing. That means a lot of what we do is as transparent as possible. We're trying to do it in the background. We do a lot of lightweight processes, I guess I would consider them. One example: when they create a new ticket, basically when they're going to build a new product, a new feature, they generally create a [inaudible 00:06:13] for that. We have tools that sort of automatically monitor that and alert us. Then we can go and ping those developers, ask them a few basic questions about what they're doing, and decide sort of how much we need to get involved in their process.

In some cases, we just have a short conversation with these developers, and we say, “We're done. You're good. Here's a few things to worry about, but don't come and bother us.” Other cases, we say we have to dig deeper. We have to go and do some sort of threat modeling and really understand what they're doing, provide a lot of guidance during the development process. We also do a lot of monitoring of their commits and things. We have things that we do, so automated tools that look at the commits and try and find things that might be a risk, methods that we know have caused us problems before, comments that mention we're doing encryption or passwords, that sort of thing. Monitor that sort of thing. When we need that, we dig in further with them as well.
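The commit monitoring Shaun describes, flagging methods that have caused problems before and comments that mention encryption or passwords, can be sketched in a few lines. This is a hypothetical illustration, not New Relic's actual tooling: the patterns and the `flag_risky_lines` helper are invented for the example.

```python
import re

# Illustrative patterns only: method names that tend to cause problems,
# plus lines mentioning credentials or homegrown crypto.
RISKY_PATTERNS = [
    re.compile(r"\bMD5|\bDES\b", re.IGNORECASE),           # weak crypto primitives
    re.compile(r"password|passwd|secret", re.IGNORECASE),  # credential handling
    re.compile(r"\bencrypt|\bdecrypt", re.IGNORECASE),     # crypto being done by hand
    re.compile(r"\beval\(|\bexec\("),                      # dynamic code execution
]

def flag_risky_lines(diff_text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs from a commit diff worth a security look.

    Only added lines (starting with '+') are scanned, so alerts fire on
    new code rather than on diff context or removals.
    """
    hits = []
    for n, line in enumerate(diff_text.splitlines(), start=1):
        if not line.startswith("+") or line.startswith("+++"):
            continue
        if any(p.search(line) for p in RISKY_PATTERNS):
            hits.append((n, line))
    return hits
```

In practice something like this would run against the diff of each push, for example from a repository webhook, and ping the AppSec team whenever it returns any hits.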

[00:07:10] Guy Podjarny: Yes. That's pretty cool. Basically, you're monitoring developer activity automatically so that they don't need to worry about that. They might proactively initiate something. But you can track those, and then that alerts you to decide whether you need to intervene.

[00:07:28] Shaun Gordon: A lot of it. I mean, we do want to train our developers to reach out when they need to. We've done a lot of things to make it easy for them to do that. But we also want to create the processes, so we know when to get involved. Once again, so the developers don't have to spend a lot of time thinking about, “When should I get security involved? What do I need to do?” We want triggers that cause us to go off and say, “Hey, you're doing this thing. We should talk a little bit.”

[00:07:56] Guy Podjarny: Yes, cool. I think it's really cool. It reminds me a little bit that at Snyk, we try to surface vulnerabilities from the world of open source GitHub, and we track all sorts of activities, probably, given the volume of traffic, slightly more specific mentions, like somebody opening a GitHub issue that mentions a potential vulnerability. But it never really occurred to me to apply that in the context of a specific organization, as an alerting mechanism of sorts for the security team to intervene. How often does this catch something? I mean, how –

[00:08:30] Shaun Gordon: I mean, I think we have a pretty high level of engagement with the developers. With most of our products, we are seeing early on what they're doing and getting engaged. I think from that standpoint, we're catching most of the products that we think have a security risk. There are very few times when we're actually surprised by something going out the door. I won't say we're never surprised, but it's very few times, generally.

[00:08:56] Guy Podjarny: Cool. So this is security monitoring and maybe proactively engaging, sort of assessing the risk. What about the Dev side? What do you do to empower or educate the devs themselves on security?

[00:09:11] Shaun Gordon: We do have a certain amount of training. We try to push a certain number of resources out to them. We have our website, our [inaudible 00:09:20] site, where we basically put guidance for developers in general. We're not doing a huge amount of formal developer training. I actually have mixed opinions on developer training. I've seen a little bit of it done well, but for the most part, I haven't seen anybody be really successful with it. Put them in a classroom, have an instructor teach them about all the secure coding practices, and within two weeks, it's basically all gone.

My real focus is how we can catch those things without having to train the developers: static analysis tools, that sort of thing. I don't want the developers to have to think about security that much. Static analysis tools are actually a real challenge for me. We do a certain amount of that. But one of my concerns is, if I want to be transparent, I have to make sure I'm not scaring developers by giving them huge lists of unactionable results, that sort of thing. The challenge is figuring out how to implement this sort of tool, this sort of monitoring, to produce actionable results I can get to developers quickly without them having to become security experts.

That's always been a challenge for me, and that's why we do some of the very lightweight things like I may monitor the GitHub commits, that sort of thing that I mentioned before.
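The noise problem Shaun raises, huge lists of unactionable static-analysis results, is often handled by filtering findings before a developer ever sees them. A minimal sketch, where the `Finding` shape, the severity names, and the confidence threshold are all invented for illustration rather than taken from any particular scanner:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    severity: str      # "low" | "medium" | "high"
    confidence: float  # scanner's 0.0-1.0 confidence this is a true positive
    fix_hint: str      # empty string when the tool offers no remediation

def actionable(findings: list[Finding], min_confidence: float = 0.8) -> list[Finding]:
    """Keep only findings a developer can act on: high enough severity,
    high enough confidence, and a concrete fix to point at."""
    keep = ("medium", "high")
    return [f for f in findings
            if f.severity in keep
            and f.confidence >= min_confidence
            and f.fix_hint]
```

The design choice mirrors Shaun's point: it can be better to suppress a plausible finding with no clear fix than to hand developers something they cannot act on.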

[00:10:45] Guy Podjarny: Yes. No, I think it makes sense. I think that's been sort of the bane of existence for sort of static application security tools which are –

[00:10:50] Shaun Gordon: Yes. I'm not going to mention any names but –

[00:10:52] Guy Podjarny: Yes. But it's been – I built one, AppScan Source. Well, that was an acquisition. But then before that, Developer Edition. It's a challenge doing this static analysis. It's just fundamentally false-positive prone, and the hard part is finding ways to make it not so. There are gems in there, but there's a lot of noise around them.

[00:11:13] Shaun Gordon: I'm using that sort of as a just poster child for a problem I see in general, which is tools that can't really decide if their target audience is a security professional or a developer. I think looking at the industry and the tools around, I see that challenge all over the place. Are you trying to produce very detailed results that can be used by the professionals but then interpreted for developers? Or are you trying to produce something that's aiming at just the developers?

That latter half is where I think we really need to move as an industry and where I haven't seen a lot of people do it really well yet. It's very easy to turn a developer off of a tool very quickly by giving them unactionable information, by calling them out on something that they don't understand what it is and more importantly how to fix it. That's been a challenge I've seen. It’s something I see we're getting better at, and I think people are starting to decide. But every time I see a product pitch for something, I can tell they're trying to be two things, and they need to just choose one.

[00:12:25] Guy Podjarny: Yes. I am 100% on board. It's kind of my philosophy all around. I think it even goes beyond that. It's not just about being actionable and avoiding false positives. It's also about onboarding and how well it integrates with your workflow, not just the cost of ownership. The effort level of using it has to be on par, competitive with other developer tools, which is a pretty high bar. I mean, those are good tools, and you have to invest in developer UX, not just build an auditing tool that happens to run in the context of a developer tool.

[00:12:59] Shaun Gordon: Yes, I agree. That integration is key. If I'm adding another step, two steps, three steps, whatever it is, to my development process, I'm going to fail, because people are either going to get really upset at the fact that I'm making them do all these extra things, or they're not going to do it. Either way, I'm not getting any value out of this. I'm either ruining my reputation within the company, as a team that just keeps imposing things on us, or, like I said, I bought this tool and nobody's going to use it.

[00:13:29] Guy Podjarny: Right, because everybody knows. Yes. Fully aligned there. Moving kind of further up that stack, so this is developers building hopefully secure applications. What happens on the – who owns the alerts if there's something? You're in New Relic. You’re sort of well familiar with alerting mechanisms and dashboards. Who looks at those dashboards?

[00:13:52] Shaun Gordon: That is actually my team, the AppSec team primarily, as well as the infrastructure team. But, yes, we are the ones who are sort of looking at those alerts, triaging when we're seeing things, and then basically translating for the developers. Once again, that's because I don't think the alerts we're getting through any tool, no matter how good the tool is, are clean enough to send directly to developers. We have a lot of alerting. We actually use a lot of our own software for doing that, our own Insights product. A lot of the alerts go in there and flag us for various things. A lot of them are feeds from sort of traditional tools and scanners that we're either using as SaaS services or have brought in-house. In some cases, they're very custom-built monitoring tools that we've created ourselves.

[00:14:43] Guy Podjarny: Cool. Yes. I think that very much aligns with the previous point, right? If it's fully actionable and it doesn't require security knowledge, you can put it in the hands of the regular Ops team. As long as you need security expertise and/or there's too much noise, you have to sort of bear that burden in the context of AppSec.

[00:14:59] Shaun Gordon: I mean, that's always sort of one of my mantras. Something I always push my team towards is, make sure that if we're telling anybody anything about the security of their product, it's actionable. That applies to those sorts of alerts that we're sending around security vulnerabilities. It even applies to things like executive-level dashboards. If I'm going to create an executive-level dashboard that says that this business unit or this group is red because they've done something bad, there had better be a very clear path for them to go from red to green. Otherwise, I'm just pissing them off.

[00:15:34] Guy Podjarny: Yes. This is all very practical in sort of the logical world of software development and security. Compliance and regulations don't necessarily always sort of follow common logic. How do you find these practices work with needing to be compliant? You're a big public company now.

[00:15:52] Shaun Gordon: Yes. I mean, we've been SOC 2-compliant for four years or so. Now, we're a public company, so we have to be SOX-compliant. We're going through FedRAMP compliance, so we're pretty familiar with the compliance process. It's always a struggle in my mind, and probably one of my biggest struggles is exactly what you asked. How do we be compliant without interfering with all these processes? I actually did an RSA talk a few years back about how to do this in a SOC 2 world. I think my major points are really that our job is to educate and manage the auditors, and help them understand how the industry is changing and how some of the things we're doing now are not only equivalent to what we used to look for in the old world but are in some cases a lot better.

It's not simple to do. Basically, the [inaudible 00:16:49] process in my experience is they come in with a big long list: these are the requests we have, this is the way companies generally deliver this, this is the sort of evidence you provide, provide us this evidence. A lot of times, what we need to do then is understand what the real question is. What is the real control they're trying to satisfy, not what is the evidence they're asking for? Where I've seen audits fail a lot is when the security team takes that list the auditor is asking for and just hands it to the developers, hands it to IT, hands it to HR, rather than taking a look at it and saying, “Wait, this is what they're really asking for. This is what we're doing in our environment.”

One of the big challenges I've seen in that is that in order to do that successfully, you sort of have to have a foot in both worlds, you have to have a foot in the compliance world, and you have to have a foot in the development technology world. In order for me to understand what the auditor is really asking for, I need to understand the compliance world. But then to understand what we can actually deliver that will meet that, I need to understand our development processes. I need to understand our technology stack. That's been one of the biggest challenges that I've seen. I don't know that there's great answers for that, but it's definitely, I think, what we need to do.

We've been honestly pretty successful with this. There are very few controls that I can think of right now, and I'll give you one example in a minute, that we have implemented purely because of audit.

[00:18:17] Guy Podjarny: For compliance.

[00:18:18] Shaun Gordon: Yes, purely because of compliance, where I don't think it's actually adding any real value. The big one, and I actually mention this in my security awareness training, is password resets and password requirements. Passwords must be reset every 90 days. Passwords must consist of eight characters, a mix of character types. That basically doesn't hold up anymore if you look at the latest NIST guidance. They're saying this doesn't make sense anymore. We're using multifactor. There's no real reason. This one we just haven't been able to push back on yet. Not that I won't keep trying, pushing on the auditors and saying, “Hey, look. This is better than what you're asking for.” But we're not there yet.

[00:18:56] Guy Podjarny: Yes. Well, I hope everybody after that talk went up and bought you a beer, because at the notion of educating an auditor, just those words, people get a shiver.

[00:19:07] Shaun Gordon: Yes. To be honest, it really depends on the audit. I mean, with SOC 2, we've been very successful because a lot of companies that are sort of DevOps, very agile, fast-moving are doing SOC 2. The auditors get that now. Other audits are a little more challenging. I mean, with SOX, they've got you a little more over a barrel than they do for SOC 2. SOC 2 we're doing because it's voluntary, and we want to do it for our customers. SOX, we're doing because we have to do it. FedRAMP is even a different story. In that case, we're doing it because we want to do it, but you're also dealing with the government, so there are challenges there, and there are rules that have to be followed.

[00:19:43] Guy Podjarny: Yes. I think, hopefully, the process here is indeed an education exercise. Auditors might have a challenge with something the first or second time they encounter it, but not when it's the 15th company that deals with it in a certain capacity. I think we've seen this with cloud security, where running something in the cloud used to be a no-no. You would have to educate the auditors to allow it. Now, it's fairly well-understood, to the extent that that's also maybe a frustration. They might even have the specific AWS IAM thing that you need to set up in their list of prescribed steps.

[00:20:20] Shaun Gordon: Yes. One of the other challenges we're facing, or have faced, with audits is that just like the development world, the DevOps world, it's all about continuous improvement. We're doing the same thing with the security team. We're changing, in some cases, the way we do things. In theory, we're doing that because it's going to improve our processes. Every time we do this, we're now going to have to re-educate the auditors and perhaps even take a risk that they're not going to buy into the new process. But I honestly believe in doing this. I'm not going to let compliance drive what I do for security. I'm going to do what I need to do to make us secure, to make our customers' data secure. We'll figure out how to make the compliance follow.

[00:21:00] Guy Podjarny: First of all, I love the notion. Even the terminology you use all comes back indeed to your background around being a developer and coming into security from the development approach and how you build there. One thing that stayed with me, though, from your intro was that New Relic was a small 140-person company when you were the first security hire. Can you tell us a bit about the evolution there? On one hand, that's entirely typical and normal. We've had Kyle at Optimizely here, and we had the PagerDuty team. It takes a while until a full-time security person gets hired.

On the other hand, 140 people is also, to many people, not that small. You're already processing a pretty substantial amount of money and data at that time. How was security handled before? Can you tell us a little bit about how was the evolution of security at New Relic?

[00:21:51] Shaun Gordon: I was actually surprised when I came in. I mean, definitely nobody had security in their job. I don't think anybody was really even consciously thinking about security, sort of top of mind. But I was surprised at how security-conscious people were in general and how little, for lack of a better term, cleanup I had to do when I got there. It's actually an interesting story. When I was hired, one of the impetuses for hiring me was the company was growing and realised we needed to start thinking more about security. But I think probably the biggest impetus was we had promised as a company to get a SOC 2 certification. I joined the company, I think, about a month before the auditors came, basically to get us through the audit.

[00:22:37] Guy Podjarny: That’s a good trigger. Yes.

[00:22:38] Shaun Gordon: It turned out that I was able to take a lot of what we were already doing, which were good security practices, and frame it in the terms that the auditors needed. We were actually able to get through that audit with no findings, basically an unqualified report. That's always a terrible term. People hear unqualified and it sounds bad, but an unqualified report for SOC 2 is a good thing. We were able to do that because we were basically doing the right things before: people were focusing on good development practices, which in a lot of cases are good security practices.

A lot of what we ended up having to build was just sort of more formality around a lot of what we were doing: being able to actually formally check that we're doing these things and continuing to do them, actually starting to look for vulnerabilities instead of just assuming we weren't creating them, that sort of thing. Once I joined, I think a big part was just being able to actually sit down with the developers and really help them as they architect the product and get in early, versus before, when it was just sort of hoping they had done the right thing through the process.

[00:23:49] Guy Podjarny: Yes. I wonder how much there is even – there's a lack of expertise, but there might even be some sort of deflecting of the responsibility when there is a security person. When there's no security person, there's nobody else to, I don't know if blame is the word, but expect to have assumed this responsibility. So you have to take it on.

[00:24:11] Shaun Gordon: That's a really interesting point, because that gets back to two perpetual arguments I always have with other people and in my own mind in security. One is, where does the security team sit, the application security people? Do they sit on a security team, or do they sit with developers? Two, should the application security team be developing security code? Should they be coding? Should they be helping fix the vulnerabilities? One of my arguments against having them help fix the vulnerabilities, do that coding, is it starts saying that security is the security team's responsibility and not the developer's responsibility. So it's a fine line.

[00:24:53] Guy Podjarny: Yes. No, agreed. I fully agree with that. I think vulnerability is a bug, and it should be fixed. It should be prevented. It should be regressed. It should all of that, just like any other bug. The difference between it and a functional bug is that it's an implication of a control that you did not have more often than not versus maybe the functional bug, which is like it's intentional behaviour that you are now not supporting. With security, it's the unintended behaviour that you're looking for.

[00:25:21] Shaun Gordon: Exactly, yes.

[00:25:23] Guy Podjarny: Yes. I think the size of it. I must say, when I started having some of these conversations, if you had asked me what size company, for startups, for instance, should hire a security person, I would have recommended a smaller size. But that rough size of company, in the sort of 100 to 200 range, though it clearly depends on the nature of the business as well, seems to be where a security person comes in. Maybe it comes back to that tooling: the better the tools get, the more they allow Dev teams to build security practices without necessarily that focal point.

[00:26:00] Shaun Gordon: Yes. I would absolutely echo that, and I get asked this question a lot: when should we hire a security person? A lot of it really does depend on the nature of what you're doing. I mean, if you're a company that's collecting some really sensitive data, handling credit cards, Bitcoin, whatever it might be, you should be hiring somebody as your 10th, 12th hire, something like that. But for a larger company that's handling less sensitive data, you want to focus on a lot of the other things: the stability, the feature set. Just provide the developers the tools. As we talked about earlier, a lot of these tools are becoming more and more developer-friendly, so they can handle a lot of this themselves.

[00:26:38] Guy Podjarny: Yes. I think this is, to an extent, an evolution of the whole DevOps and DevSecOps revolution. Maybe on that topic, New Relic is, at least in my mind, one of the pioneers of the DevOps notion, right? It's definitely one of the enablers of it. In fact, many, many teams today, even fairly good-size companies, don't have a dedicated Ops team. They might have DevOps as a concept, and they clearly do Ops work, but that Ops work is done embedded within the application teams. How do you see, maybe in practice but also where we should strive to get to, the evolution of security? Is it along those lines? Is it different?

[00:27:24] Shaun Gordon: No, I do believe that. I mean, the two things are getting security close to the developer, both in terms of the tools and processes they're using and in terms of time, making security information available to them in real time. As soon as I do my commit, I get some sort of information about what I might have done incorrectly, where I might be introducing security vulnerabilities. As soon as my code goes to staging, it's scanned immediately, and I get real-time notification that there's something there. That's absolutely the way we're going to need to scale our team. Automation is how I see my team scaling. I see it as the only way we can continue to keep up with these very rapid development cycles and the speed at which the company itself is growing, the number of developers is growing. We're never going to be able to scale with them.

[00:28:21] Guy Podjarny: Yes, without them. Pushing back on that a little bit, I mean, when people talk about DevOps, automation and maybe controls and visibility were sort of key to it, for sure. But many people would come back and say, “Well, DevOps, first and foremost, is a cultural shift.” Yes, automation was sort of a key enabler for it, but it's a cultural shift about developers understanding that they need to build operable software, and Ops people understanding that they can't just point a finger at developers, but that they need to be yay-sayers, not naysayers, in the company. What comes first, the chicken or the egg, here in security? I mean, is our primary barrier tooling and automation? Or is it more culture?

[00:29:03] Shaun Gordon: I guess I haven't thought a lot about this one. But, I mean, I think the tooling has to be there in order to be successful here. The reason this is hard for me to answer is I'm torn between wanting security to be as transparent as possible and wanting developers to have to think about this. I do want developers to understand that they are responsible for the security of their code. Just like you can't throw it over the wall to the Ops team and say, “Support this,” you can't throw this over the wall to the security team and say, “Secure this.” They have to understand they are responsible for that security.

At the same time, I don't want to have to make them security experts because that's taking away from their day job. Security skills take a long time for a lot of us to learn, and building them isn't the best use of developers' time.

[00:29:55] Guy Podjarny: Yes. I guess that does come back both to what you mentioned previously about having a path to green, and to the DevOps side: you can demand and try to set up developers to own security, but that only works if you give them a path to green.

[00:30:13] Shaun Gordon: Exactly. I'm a big fan. We've been discussing a lot within my team recently about things like scorecards, dashboards, that sort of thing. Maybe we're not explicitly saying, “You do this, this, this.” But we're going to show you: this is your security scorecard. This is how well you are doing with your product, whether that's based on the number of vulnerabilities they're introducing or something else. I don't know yet, but I think that's a path to helping them understand that they are responsible for security, that what they do impacts the security of their product and the company in general.
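A per-team scorecard of the kind Shaun is describing might look something like this. The severity weights and grade thresholds below are entirely invented for illustration; any real scheme would be tuned to the organization's own risk appetite.

```python
from typing import Dict

# Invented weights: how much each open finding counts against a team.
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def scorecard(open_findings: Dict[str, int]) -> Dict[str, object]:
    """Turn counts of open findings by severity into a weighted score and letter grade."""
    score = sum(SEVERITY_WEIGHTS.get(sev, 0) * n for sev, n in open_findings.items())
    if score == 0:
        grade = "A"
    elif score < 10:
        grade = "B"
    elif score < 25:
        grade = "C"
    else:
        grade = "D"
    return {"score": score, "grade": grade}

# Two hypothetical teams' open-vulnerability counts.
team_a = scorecard({"critical": 0, "high": 1, "medium": 2})
team_b = scorecard({"critical": 2, "high": 1})
```

The value of a scorecard like this is less the exact number than the feedback loop: a team that watches its grade slip from B to D sees the consequence of its choices long before an audit or an incident would surface them.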

[00:30:53] Guy Podjarny: Yes. I think these types of mechanisms are critical in my mind. Security is naturally invisible. I mean, you used the word transparent in the sense that it's not behind a curtain or something. You want it to be a natural part of their flow so that they do it by default. But at the same time, security is too invisible, right? There's no natural feedback loop for security, except at best actually failing an audit and at worst getting hacked. There's no medium pain that you feel before the big pain that would push you toward the right behaviour. We have to give some mechanisms that give you that visibility over time.

[00:31:31] Shaun Gordon: Yes.

[00:31:31] Guy Podjarny: Cool. This is fascinating. I actually have a whole slew of other topics to dig into, but I think we're running out of time. Before I let you go, I always ask my guests: if you're thinking about a dev team that's trying to upgrade their handling of security, what's your one tip, or maybe pet peeve, for something they should do better?

[00:31:54] Shaun Gordon: I think the thing I'll talk about is the one thing I'd teach them to do, if I could. This goes back to some of the things I talked about before, particularly the transparency point. I don't want them to be security experts. Well, sure, I want them to be security experts, but I don't expect them to be, and I'm not going to try and make them security experts. I want developers to understand risk, to understand that every decision they make is a risk decision, and to understand what is the appropriate level of risk that they can take on as they develop their code.

What that means is that in some cases, I'm going to take a shortcut, or I'm going to do something that might introduce a little bit of risk for my one feature. In other cases, I might be creating a new feature or a new product that's going to expose the entire company to an enormous amount of risk by, let's say, collecting sensitive information. Developers need to understand what their risk is, why it matters, when they are empowered to make those risk decisions themselves, and when they need to bring in experts from the security team or somebody more senior in the company to help them make those risk decisions.
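The escalation rule Shaun sketches, where small localized risks stay with the developer while sensitive-data or company-wide exposure goes to the security team, could be captured in something as simple as this. The criteria and names are hypothetical; this is a thinking aid, not New Relic's actual process.

```python
from dataclasses import dataclass

@dataclass
class FeatureRisk:
    """A developer's self-assessment of a feature's risk profile."""
    collects_sensitive_data: bool
    company_wide_exposure: bool

def escalation_path(risk: FeatureRisk) -> str:
    """Decide whether the developer can accept the risk alone or must escalate."""
    if risk.collects_sensitive_data or risk.company_wide_exposure:
        return "escalate-to-security-team"
    return "developer-decides"

# A local shortcut on one feature stays with the developer;
# a feature that starts collecting sensitive data gets escalated.
small = escalation_path(FeatureRisk(collects_sensitive_data=False, company_wide_exposure=False))
big = escalation_path(FeatureRisk(collects_sensitive_data=True, company_wide_exposure=False))
```

Even a crude checklist like this does the job Shaun wants: it forces the developer to ask the risk question at all, and makes the fork between "I can own this" and "I need help" explicit.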

[00:33:09] Guy Podjarny: I like that. Understand the fork in the road: just being able to assess how big a deal the decision you're making right now is, and act on it.

[00:33:17] Shaun Gordon: Exactly.

[00:33:18] Guy Podjarny: Cool. Well, hopefully, people take that to heart. Thanks a lot, Shaun, for coming on the show.

[00:33:23] Shaun Gordon: Thank you.

[00:33:24] Guy Podjarny: Thanks to everybody who tuned in.

[END OF INTERVIEW]

[00:33:27] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over 100 videos about building developer tooling companies given by top experts in the field.