
Season 2, Episode 11

Keeping PagerDuty Secure With Arup Chakrabarti, Kevin Babcock, And Rich Adams

Guests:

Arup Chakrabarti

Kevin Babcock

Rich Adams


In the latest episode of The Secure Developer, Guy is joined by Arup Chakrabarti, Kevin Babcock and Rich Adams from PagerDuty. They discuss how they put into practice their security vision of “making it easy to do the right thing”.

This involves picking the right tooling and designing a security experience that doesn’t force people to do things, but rather provides insight into how vulnerabilities can be exposed. Giving people the opportunity to break things also creates a strong desire to want to then protect those things.



“Arup Chakrabarti: A lot of the security tools make an implicit assumption that, “Oh, you have an army of security analysts.” And look at this room, there's not an army of us. Unfortunately, 100% of the PagerDuty security team is present right here in this room.

Rich Adams: We have a phrase we like on the team, which is we're here to make it easy to do the right thing.

Arup Chakrabarti: There's this entire class of security problems that only get harder as companies get bigger and your teams get bigger.

Rich Adams: To be successful in security, you need to work with other people. Security can't be solved by yourself. If you try, you will fail.

Rich Adams: We're all in this together.”

[INTRODUCTION]

[0:00:37] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev.

[INTERVIEW]

[0:01:08] Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer. Today, we have three guests, three awesome guests from the awesome PagerDuty company to talk to us about security and how it's handled in PagerDuty. Thanks for coming over. Can I ask you first to introduce yourself? Maybe Arup, we’ll start with you.

[0:01:26] Arup Chakrabarti: Sure. My name is Arup Chakrabarti. I head up our infrastructure engineering teams at PagerDuty, which, of course, includes security. I've been at the company for about four-plus years now, and I've been involved in security, whether I liked it or not, in one way or another that whole time.

[0:01:43] Kevin Babcock: My name is Kevin Babcock. I'm Principal Security Engineer at PagerDuty, and I like working to secure software-as-a-service systems. I think it's an exciting challenge. Before PagerDuty, I worked at Box, and prior to that, I was at Symantec for quite some time building security products.

[0:01:59] Rich Adams: Hi, my name is Rich Adams. I'm a Senior Engineer on the security team. Originally, I have an ops and software developer background, and I got interested in security by playing CTFs, getting into breaking things and realising just how easy it sometimes was. That got me excited to work on the other end of it, trying to stop those things from happening.

[0:02:19] Guy Podjarny: Got it. Cool. CTFs are always fun; that's worth an episode all on its own. How big a percentage of the PagerDuty security team are the people in this room right now?

[0:02:34] Arup Chakrabarti: This is the entire PagerDuty security team.

[0:02:37] Rich Adams: One hundred percent, 100% of the PagerDuty security team is present right here in this room.

[0:02:42] Guy Podjarny: Okay. Excellent.

[0:02:43] Kevin Babcock: Yeah. I do want to say, just because the security team is in the room doesn't mean security stops. An important aspect of our philosophy is that everyone ends up being involved in security, and we're going to talk more about that later.

[0:02:56] Guy Podjarny: Yeah. Kevin, actually, that's a great segue into it. We had a bit of a chat here about how you work, and a lot of the emphasis you pointed out was around collaboration and security. Kevin, can I ask you, how do you see security, and what's your philosophy around security and how to handle it?

[0:03:19] Kevin Babcock: I see the security team as the subject matter experts within the organisation. That doesn't mean that team is the only team that will work on it. In fact, to be successful in security, you need to work with other people. That's why there are three of us here today. Security can't be solved by yourself. If you try, you will fail. Having that collaboration and that ability to work effectively with others outside your team, from a security practitioner's perspective, or with others on a different part of an engineering team, from a developer or DevOps practitioner perspective, is very important, because you need to be able to approach the threats and risks of your business from a holistic perspective. Otherwise, you won't be able to defend against them effectively.

[0:04:03] Guy Podjarny: I definitely subscribe to that perspective. But unfortunately, oftentimes we hear the whole conversation about builders versus breakers and how they have different mindsets. How do you see that, or even when you talk to security people, how do you screen for that different approach? How do you break through the concern, or the mindset, that developers just don't understand security, or that these security guys are just naysayers? How do you connect the two?

[0:04:35] Kevin Babcock: I feel it's an important part of my role to be a resource and someone who can educate and train other people. I'm here to help them make better decisions. If people don't feel they are able to do that, that means I'm not doing my job. That's on me.

[0:04:50] Rich Adams: One of the things we like is that as a security team, we don't just say no to everything. You can't have a team that just sits there, and when someone says, “Oh, I want to work on some production data, somewhere that is in production,” goes, “No. Absolutely not.” You figure out what it is they're trying to accomplish, work out the goals around that, and work out ways for them to be able to do their job properly, while at the same time keeping your data secure.

We have a phrase we like on the team, which is, “We're here to make it easy to do the right thing.” If we build any tooling, the intent is not to hinder developers, or hinder anyone, from being able to do their job; it's to make it easy for them to do the right thing naturally, without even thinking about it.

One of the things we've done is our own training internally. At previous companies, I've always been frustrated with security training, because it was the two-hour unskippable video with obtuse use cases that never really come up, or things that are just common sense. You don't pay attention. You keep it in a background tab, keep it muted, and then answer some questions at the end where you get an unlimited number of chances, and you just keep going until you get through. It's usually there to check some compliance checkbox somewhere.

One of the things we've done at PagerDuty is our own internal security training, where we made it a bit more engaging, a bit more fun, and tried to teach people about real threats. One example is passwords. People are generally pretty bad at using passwords, and it's usually a hard sell to get people across the company to use password managers. Rather than giving people all these rules like, “You must do this, this, this, and this,” we framed it as, “Here's what attackers do. Here's how you break passwords.” We demonstrated it with some fancy animations: this is how easy it is to break passwords.
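As an aside, the kind of demo Rich describes is easy to sketch. Below is a toy dictionary attack against an unsalted MD5 hash; the “leaked” password and wordlist are made up for illustration, and real attackers run this on GPUs at billions of guesses per second, which is exactly why weak passwords fall instantly.

```python
import hashlib

# A "leaked" unsalted MD5 hash of a weak password, for demonstration only.
stolen_hash = hashlib.md5(b"sunshine1").hexdigest()

# Tiny illustrative wordlist; real ones contain millions of entries.
wordlist = ["password", "123456", "qwerty", "sunshine", "letmein"]

for word in wordlist:
    # Try each base word plus the suffixes people commonly tack on.
    for suffix in ("", "1", "123", "!"):
        candidate = word + suffix
        if hashlib.md5(candidate.encode()).hexdigest() == stolen_hash:
            print("cracked:", candidate)
```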

People get more engaged that way; they focus more, pay more attention. Then you find that they actually come to you afterwards and say, “Hey, that was really interesting. I've actually started to use a password manager now.” The idea is we've made it easy for them to do the right thing, and they've made the choice themselves. It's not something we forced on them by saying, you must do this.

[0:06:53] Kevin Babcock: My favourite part of this training, which Rich delivered wonderfully, is that afterwards, we had someone come back and say, “Now that I understand how the attackers are working, I just spent three hours over the weekend going and changing all my passwords.” To me, that is real impact, because you're not only making people better and safer for the company, but you're improving the security in their own lives, and really, that's why we're here.

[0:07:17] Guy Podjarny: Yeah, that's excellent. I always find that security, in the boring sense, is all about risk reduction, and that's not a very exciting notion. The one advantage it has is that hacking is cool. You can leverage that to your benefit, and I use that a fair bit when I give talks. If you show an exploit, a live exploit, or if you let somebody do it themselves, the educational value is dramatically higher than sitting down and talking about the bits and bytes.

[0:07:46] Rich Adams: That's why CTFs are popular as well. I think that's what originally roped me into security, seeing it happen: “Oh, this is –”

[0:07:53] Guy Podjarny: Which is a form of training almost by itself.

[0:07:55] Rich Adams: Yeah, definitely.

[0:07:57] Guy Podjarny: I guess, Arup, how do you see this? Because you cover security, but you also touch a bunch of the different elements, or different functions, in your team. How do you see the division of responsibility for security between the different groups, between ops and security?

[0:08:13] Arup Chakrabarti: Yeah. I'm responsible for other teams as well at the company. I firmly believe that security is becoming more of an operational problem, as opposed to a purely security problem. I look at a lot of the practices we've adopted in that ops, DevOps phase of the last 10, 15 years around automation, monitoring, metrics, learning, telemetry, all these wonderful things. From a security aspect, that's where we as a team keep investing a lot more. We invest a lot more in telemetry. Why? Because we want to be able to react quickly to problems when they come up. We invest a lot into automation and making sure we have the right tooling there.

It's very easy for us to figure out, hey, do we have a set of servers that aren't covered by a certain rule set? If there are, well, okay, run Chef again and it's going to get rid of that anomaly. That's really important. One thing that's been really interesting to watch is security engineers changing their habits over the last couple of years. Just as I believe operations engineers had to change the way they work, security engineers are now changing the way they have to work, too, which is very fun.
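To make that concrete, here is a minimal sketch of the detect-and-reconverge loop Arup describes, assuming a workstation with the Chef `knife` CLI configured. The recipe name is hypothetical, not PagerDuty's actual run list.

```python
import json
import subprocess

# Hypothetical hardening recipe every node should have in its run list.
REQUIRED_RECIPE = "recipe[base_security]"

def drifted_nodes() -> list[str]:
    """Return names of nodes whose run list lacks the required recipe."""
    out = subprocess.run(["knife", "node", "list", "-F", "json"],
                         capture_output=True, text=True, check=True).stdout
    missing = []
    for name in json.loads(out):
        node = subprocess.run(["knife", "node", "show", name, "-F", "json"],
                              capture_output=True, text=True, check=True).stdout
        if REQUIRED_RECIPE not in json.loads(node).get("run_list", []):
            missing.append(name)
    return missing

if __name__ == "__main__":
    for name in drifted_nodes():
        # Re-run chef-client on the drifted node to remove the anomaly.
        subprocess.run(["knife", "ssh", f"name:{name}", "sudo chef-client"],
                       check=True)
        print(f"re-converged {name}")
```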

[0:09:19] Guy Podjarny: Yeah. Very much bringing the DevOps revolution, or the learnings from the evolution of the ops world, into the security world.

[0:09:26] Arup Chakrabarti: Yeah. I think it's the learnings. I don't view these problems as the same problems, of course, right? They're very different.

[0:09:32] Guy Podjarny: Agree.

[0:09:33] Arup Chakrabarti: Very different ways to approach them and everything. I do see that in the security industry, there's a lot of opportunity to look at what a lot of companies went through in their DevOps transformations and look at, “Hey, what can we take from that and apply that towards security problems as well?”

[0:09:49] Guy Podjarny: I entirely agree. The learnings, you need to adapt them, not apply them wholesale. Many security teams today are still very gate-driven, all about stop here and wait, which just doesn't work in a DevOps world that tries to stop as little as possible. Fundamentally, when you look at the activity of people, do you see engineers having explicit goals, like OKRs, that are security related, or are those still central? How do you manage the ownership, if you will, of tackling a security problem? Who would have that ticket sit on their plate?

[0:10:33] Rich Adams: I think it ranges depending on the problem. We have some tickets that would be, let's say, company-wide, things that are far-reaching. Those would belong to the security team, and we would liaise with other teams and get things into their agile cycle to flesh things out.

[0:10:52] Kevin Babcock: These tend to be broader-reaching projects that are more strategic, where we're building tooling, or other infrastructure, that will be used by other teams, and we'll be supporting that or providing a service. But it's really something that everyone needs to be able to use and that will help us as an organisation operate more effectively.

[0:11:09] Guy Podjarny: Which comes back to making security easy, making it easy to do the secure thing at the right point.

[0:11:14] Kevin Babcock: Yeah. That's right.

[0:11:15] Rich Adams: Then, yeah, at the other end of the scale, there are little security changes, or not even little, but more narrowly focused security changes. In those cases, the team that's responsible for that particular area of the system would take ownership. Sometimes, depending on the type of change, they would come to us on the security team and request help. Maybe we would embed ourselves with that team for a week, or for their next sprint, to help them through it, but they would ultimately be responsible for owning the change. It ranges depending on the scale of the security problem, or the change that we want to make.

[0:11:52] Kevin Babcock: We do this as well for reactive and responsive security. For example, we have some tools that will be scanning for vulnerabilities in open-source software, and that will trigger a notification in PagerDuty that then will be dispatched to the on-call person for the appropriate team. This is a great way to expand the number of people working on security and caring about security in the organisation. If you're listening to us today, I know that you care about security, but there's probably someone sitting next to you who doesn't care yet or doesn't know.

One way that we find you can get the entire team involved is by using this rotation and dispatch, where when a particular problem comes in, whoever is up on call is going to have to understand and take care of the problem. Living through that experience is a great way to get people to start asking questions and learning more about, why is this important? Why do we have to fix this quickly? What happens if I don't do this?
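As a sketch of that scanner-to-on-call flow, the snippet below raises a PagerDuty alert through the public Events API v2, which then follows the service's escalation policy to reach the on-call person. The routing key and finding details are placeholders, not PagerDuty's internal tooling.

```python
import requests

EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"

def page_owning_team(routing_key: str, package: str, cve: str) -> None:
    """Trigger a PagerDuty incident for a vulnerable open-source dependency."""
    event = {
        "routing_key": routing_key,            # integration key of the owning service
        "event_action": "trigger",
        "dedup_key": f"vuln-{package}-{cve}",  # collapse repeat findings into one incident
        "payload": {
            "summary": f"Vulnerable dependency: {package} ({cve})",
            "source": "dependency-scanner",
            "severity": "error",
        },
    }
    requests.post(EVENTS_URL, json=event, timeout=10).raise_for_status()

# e.g. page_owning_team("<integration key>", "lodash", "CVE-2019-10744")
```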

[0:12:47] Arup Chakrabarti: One thing you talked about was, does everyone have OKRs, or goals, against security? One thing that I think is actually unique about our security team is that we work very closely with our sales team. We look at what we are doing from a security standpoint to support our sales team, and we have goals that are jointly tied between our sales engineering team and our security team.

The fun part there is that it gives you that sense of, wow, security really does have an impact not just on the engineering teams, but across the entire company. I'm always torn on where security should sit, where the goals should lie, and all those things. I always err on the side of, when in doubt, add another security goal for your non-security teams. I think that is a good habit to have, because it encourages the right behaviours across the organisation.

[0:13:42] Guy Podjarny: Yeah, excellent. One of the challenges with security is that it's quiet as long as it's working, right? It's one of those things you only hear about when it goes bad. I'm a big fan of trying to find opportunities to surface it when it helps, when it's a positive impact. The not-always-fun security questionnaires are maybe a good example of that, where you can demonstrate how awesome you are. At Snyk, we do this badge that shows the number of vulnerable dependencies you have on your repo. It's been growing; there are hundreds of these on GitHub. A lot of the premise is to say, “Hey, if you care about this problem, and you've bothered checking whether you're using vulnerable dependencies, and you bother maintaining that, you're awesome. Why don't you show it off?” You help show the world that you care and that they should care, and you win a point for it. That's okay, because you've made this effort and you've moved it forward.

It's great to hear. I love the mentality. Unfortunately, there aren't a ton of those that are so easy to point to. When an issue actually does come up, or when there's a problem, what's the process there? Who gets involved? You mentioned before a bit about an on-call page, but what happens after?

[0:15:01] Rich Adams: Sure. Let's take the example of an external report. Some member of the public has emailed security at PagerDuty saying they've found a bug in our system. That pages the security on-call engineer; 24/7, they'll get paged if a vulnerability report comes in. The first thing they'll do is, obviously, read the report and see what it's about. If it's something that's a known issue, something we've accepted the risk of, or something that is not an issue, we can kill it and move on.

If it looks legitimate, we will try to reproduce it in some test accounts. If we're able to reproduce the vulnerability and it's real, we'll start engaging a response team. We'll pull in the on-calls from whichever teams are affected. Again, they'll get paged 24/7, even if it's 2am, which has happened before. We'll page them: this security report's been raised, we've replicated it, it's valid, we need to fix it ASAP. They'll work on it and deploy the fix as quickly as they can. Once it's deployed, we'll get back to the person who reported it to us and say, “It's fixed. Can you confirm from your side as well?”

Maybe there was some nuance to the way they'd done it, some edge case we've missed that they didn't let us know about, so we always find it important to ask them, can you confirm as well that it's fixed? Sometimes they get back to us; sometimes they don't. Then generally, once that's fixed, we'll consider it closed. We'll also kick off a post-review task to see if there are potentially any other similar vulnerabilities elsewhere in our codebase. Let's say, I don't know, it's cross-site scripting on a particular field that got missed somewhere, or wasn't covered by automation. We'd kick off a review process: okay, we need to scan everything and make sure this same bug didn't get introduced elsewhere in the system as well. That's usually done during business hours the next day. We wouldn't keep people –

[0:16:54] Guy Podjarny: Yeah. It might not be rushed. Just to confirm, you mentioned a bunch of “we”s in there, like we do this, or we do that. Does the vulnerability report today still go to the security team to assess, or is that the on-call ops person?

[0:17:08] Rich Adams: It's the on-call security person. The three of us are on a security on-call rotation, and we essentially triage all of the inbound security reports. If it is something that is operationally based, or let's say, it's something we don't know how to reproduce ourselves, maybe we don't have the technical expertise, it's something very deep in a particular system, we'll page the on-call responsible for that system. If it's operational, that would be the operations team's on-call.

[0:17:31] Kevin Babcock: This often ends up being a collaborative effort. Something may come in and I don't understand the other system well enough to know exactly what the impact is, but I've seen this class of vulnerability 10 times before, and I know the ways it might manifest and what the actual impact to the organisation would be. I'll bring that knowledge, which is: here's how bad it could be, here are some other ways this might be exploited. I'll share that with the system owner, who then will tell me, “Here's how our system works,” and oftentimes say, “Oh, I can do that here, and I can also do it in these three other places. Let's make sure they all get fixed.”

[0:18:07] Arup Chakrabarti: The important thing here is that the security team is not the one responsible for resolving the issue. We're responsible for triage, for initially assessing: could this get worse, what's the attack vector, and all that. But then, as Kevin said, it's that collaborative piece that's super important to us. We've actually been very fortunate. I can't think of a single instance in the last couple of years where one of our engineering teams said, “No, you deal with it instead. I cannot.”

[0:18:34] Rich Adams: I don't think that's ever come up. At least, not while I've been here.

[0:18:36] Arup Chakrabarti: Yeah. I honestly can't remember a single time. Maybe it's the shared-misery piece of, “Well, Rich, you're up at 2am. Fine, I'll be up also.” But I do think it creates that shared ownership. It's really hard to do that well, and it's something we're constantly trying to find the right balance on. For us right now, the right balance is that the security team triages it and assesses the vulnerability, and then immediately starts dispatching and getting additional people involved.
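For that triage-then-dispatch step, a hypothetical helper might open an incident directly on the affected team's service via PagerDuty's public REST API. The token, requester email and service ID below are placeholders; this is a sketch of the pattern, not PagerDuty's internal process.

```python
import requests

def dispatch_to_team(api_token: str, service_id: str, title: str) -> None:
    """Open a high-urgency incident on the owning team's PagerDuty service."""
    headers = {
        "Authorization": f"Token token={api_token}",
        "Accept": "application/vnd.pagerduty+json;version=2",
        "From": "security-oncall@example.com",  # must be a valid PagerDuty user
    }
    body = {
        "incident": {
            "type": "incident",
            "title": title,
            "service": {"id": service_id, "type": "service_reference"},
            "urgency": "high",
        }
    }
    requests.post("https://api.pagerduty.com/incidents",
                  headers=headers, json=body, timeout=10).raise_for_status()

# e.g. dispatch_to_team("<api token>", "<service id>", "Replicated XSS in billing form")
```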

[0:19:05] Kevin Babcock: I firmly believe that collaboration comes from a conscious effort to be a teammate who can support your colleagues. For example, I've gone and embedded myself with an engineering team and worked with them for a number of sprints to help them deliver their projects, because that allows me to have the right context for how that team works and to understand the problems they're facing. Now I have knowledge that I can use to design better security tools that fit right into that team's workflows.

Similarly, they get a sense of me and how I work. I ask them questions about security, and they start having a different perspective than they may have had about some of the challenges the security team is looking at. Now I have relationships, and people will come to me with questions, and I can use that as a way to identify security problems I might never have known existed.

[0:19:50] Rich Adams: Yeah, and it's definitely an approach, or a feeling, of: we're all in this together. I never feel bad about paging someone on another team, even at 2am, if I'm not sure how their system works and can't accurately determine whether a security threat is valid or not. Again, I have no qualms about paging these two either, if I'm not convinced I've replicated something properly, or anything like that. We have a motto that I like: “Never hesitate to escalate.” Always hit the button if you're unsure. I've never had anyone on any team complain about that. It's always been, “Oh, yeah.”

[0:20:24] Kevin Babcock: This goes both ways. I recall a time when an engineer started our security incident response process, just because he found something suspicious. He wasn't sure how bad it was, but he knew it looked suspicious and he wanted to make sure that it was covered. I was very happy that he made that decision and that we were paged and brought in to respond quickly, so we could look at the issue and determine what we should do.

[0:20:47] Guy Podjarny: Yeah. I love that approach. First of all, that one is very much the “if you see something, say something”, right? That's almost better than being willing to be woken up in the middle of the night, because it means that, unsolicited, they've considered security, which I think is maybe an even bigger achievement. I like it. The way I'd maybe echo it back is that it's less about educating developers about security; it's about collaborating with development for security. That does imply learning on both sides. It's not something that comes down from security into dev. You have to absorb knowledge on the other side and adapt your own knowledge into their context.

Let's maybe shift down the stack, because we talked a lot about, first, the philosophy, and then the practices you use in the team, which seems super, super useful. Let's talk tools. Practically speaking, as you run all this, what are some notable tools that you use in your security stack?

[0:21:45] Arup Chakrabarti: Rich, I’ll hand this one off to you –

[0:21:47] Rich Adams: Sure.

[0:21:47] Arup Chakrabarti: – since you led that work.

[0:21:50] Rich Adams: Let's talk about two-factor authentication. It's a long-running project we've had going. The specific tool we use for two-factor on SSH is Duo, Duo Security, using their pam_duo module. That is specifically tied to YubiKeys, which are the nice, little USB hardware tokens. We went through a few different options on methods of two-factor, starting with basic TOTP, the six-digit, Google Authenticator-style codes. There was a lot of friction with that. If an engineer wants to log into a server to debug an issue, they've got to pull out their phone and type in a six-digit number, and it was quite a painful process.
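For context on those six-digit codes, here is a generic RFC 6238 TOTP derivation in Python, the same scheme Google Authenticator uses. It's a self-contained sketch of the mechanism, not Duo's or PagerDuty's tooling, and the example secret is a placeholder.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time code from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // step          # 30-second time window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# e.g. totp("JBSWY3DPEHPK3PXP") -> a six-digit code that rotates every 30 seconds
```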

[0:22:37] Guy Podjarny: Just to clarify, these are two-factor authentications for internal systems, as in your own –

[0:22:43] Rich Adams: Yeah, to access our own systems internally. We went to Duo Push, which is where they send a push notification to your phone and you have to approve it. Better, but not great. We worked with a few beta testers on our engineering teams, people who SSH a lot, to try and find the pain points and how they use it. There was a lot of negative feedback on using push and TOTP and things like that. We tried YubiKeys, and that was a much smoother approach. Everyone really liked that it's just a simple tap of the button.

[0:23:13] Kevin Babcock: What’s a YubiKey, Rich?

[0:23:14] Rich Adams: I explained that. It's a USB hardware token that you stick in and press a button on, and it does stuff. It does magic that just works; a line that should work for Apple. Well, we had a lot more positive feedback once we started to roll out YubiKeys instead. That's when we decided to just get YubiKeys for everyone and pre-enrol them. We've had a lot of success with that now. All of our engineering organisation is using this method, plus support engineers and sales folks. Anyone that could possibly access our infrastructure in any way, whether they're jumping through a gateway host or anything, uses YubiKeys and two-factor authentication with Duo.

That's been really good for us in strengthening access to our infrastructure in a way that doesn't impact people too negatively. Obviously, you've still got to tap the YubiKey, which is one extra step than you had before, but I think everyone recognises that we're getting a huge security benefit for not too much extra hassle.

[0:24:17] Guy Podjarny: Fundamentally, security does imply introducing some extra work, so putting in the effort to make it as usable as it can be, to make it simple, as you pointed out earlier on, to make it easy to do the right thing, is a big deal. Just so everybody understands and can maybe mimic how this works in their org: it sounds like the exploration was an initiative driven by the security team to be enterprise-wide, but the application of the security control, the use of YubiKeys, is now company-wide, beyond tech, including, as you pointed out, sales, support and the like.

[0:24:57] Rich Adams: The way we rolled it out, I think, was important as well. It wasn't “everyone gets a YubiKey today, off you go.” We trialled it with a few power users first. Obviously, we didn't go to them and say, “You will use this from now on.” We solicited volunteers who were excited about trying it out. They tried the painful methods first as well, and that's how we got the feedback. It hasn't been an entirely painless process. There are some issues where certain tools don't work well with it, and we're having to find workarounds for those; it's all been a learning process. Rolling it out in stages, with some key users first, meant ironing out the kinks before getting to the non-engineering teams, people who perhaps don't know how to use an SSH tunnel to work around some tool, where we have to find easier approaches to any pain points.

[0:25:41] Guy Podjarny: Okay. Cool. This is great for two-factor auth. What are maybe some other tools you use that people might care to consider themselves?

[0:25:51] Arup Chakrabarti: Going back to my point earlier about treating security problems as operational problems, we have that full suite as well to help us there: things like Chef, Splunk and the AWS tooling. The audit tooling we use for operational problems, we use for security challenges as well. We have monitors in Splunk constantly running, looking for malicious behaviour in our audit logs and in our access logs as well. That whole suite, too.
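A hedged sketch of what such a monitor might look like, assuming the `splunk-sdk` Python package; the host, credentials and query are illustrative, not PagerDuty's actual searches.

```python
import splunklib.client as client
import splunklib.results as results

# Connect to the Splunk management port; connection details are placeholders.
service = client.connect(host="splunk.internal.example", port=8089,
                         username="svc-security", password="<redacted>")

# Hypothetical detection: accepted root SSH logins in the last 15 minutes.
query = ('search index=audit sourcetype=linux_secure '
         '"Accepted" user=root earliest=-15m')

job = service.jobs.oneshot(query, output_mode="json")
for event in results.JSONResultsReader(job):
    if isinstance(event, dict):  # skip diagnostic messages from the reader
        print("suspicious login:", event.get("_raw"))
```

In practice, a hit here would feed the same PagerDuty dispatch path sketched earlier, rather than printing to a console.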

[0:26:20] Guy Podjarny: Chef is an interesting one. I mean, it's very much an ops tool. How do you use Chef for security purposes?

[0:26:27] Kevin Babcock: It's important for a security team to be able to react quickly and move quickly. Automation like Chef, or Puppet, or whatever else you're using gives you that benefit. You already have it in place for your infrastructure to improve operations; take advantage of that to allow security to work faster and more effectively as well. For example, if you want to roll out a patch across the entire infrastructure, you can configure Chef and push out that change, and be confident that it gets everywhere, that it's been applied universally, and that it's not something you have to worry about anymore.

[0:26:59] Guy Podjarny: Yeah, I like that. In general, in continuous deployment, or in fast-moving environments, a lot of the pushback from those claiming DevOps hurts security is that there's a lot of change, and that change introduces risk. I think one of the best claims on the other side is to say, alongside that faster change comes faster response, the ability to respond to issues quickly and across the entire system. I like Chef. Adam Jacob was on the show, and we talked about InSpec and how there are some tools built into it that really try to do this. I'd love to see more security features coming into those tools as easy checkboxes and easy-to-use capabilities.

[0:27:46] Arup Chakrabarti: One thing you just said is that the increased rate of change introduces more risk. I do agree with that. But one thing a lot of these tools do support is auditability: that ability to go back through and figure out, hey, on this day at this time, what changes were being made? While, yes, that risk is increasing over time, and it's very hard to keep up sometimes, when you do have to respond quickly, when you do have to react, it's actually much easier if you have the automation in place that allowed you to move faster in the first place.

I think a lot of security teams make the mistake where they insert friction and they'll reduce the amount of automation sometimes, again, with the wonderful intent of reducing risk. But a lot of times, they actually end up creating more risk in the long run, because they've lost that auditability, because they don't have that automation in place.

[0:28:37] Guy Podjarny: That's a really good point. How do you, in general, see the trade-off? What's your view on prevention versus response, on putting something in as a blocker, as opposed to responding quickly to issues?

[0:28:53] Arup Chakrabarti: Well, I'll just ask Kevin. What should I do here?

[0:28:56] Kevin Babcock: Right. Well, I have an answer for you. Security fundamentally comes down to risk assessment. In a corporation or an enterprise, you need to enable the business to make the right decisions around security. If you are shutting down operations and you have no ability to change, because everything's locked down to the point where you're very confident you know the state of everything and that it's running correctly, but you haven't shipped any new products, you haven't updated your product, your customers are complaining, the business is not going to be successful.

Security has to be about understanding the context of the business and the risk that it's willing to take on and making the right decisions for where you put in place controls and protection to reduce your risk and make sure that you're always operating right at that brink of what you're willing to accept, but no higher.

[0:29:45] Guy Podjarny: That's an excellent statement. I like to use the phrase that you can be secure all the way to bankruptcy, which is not very helpful as a business methodology, even though you might pass all the audits along the way. Now, we talked about a bunch of tools that you use. Maybe before we close this section off, talk a little bit about what would disqualify a tool for you. You talked about some of the good things. What properties have you seen in security tools that made you say, “If this tool behaves this way, or if I'm seeing this property, I'm not going to use it”?

[0:30:23] Rich Adams: We've had tools that were very difficult to integrate because they might not play nice with other tools we've already integrated. It might be bad luck on the part of that vendor that we implemented the other tool first and the two don't play nice with one another. Generally, if we can't figure out a way to get a tool integrated into our systems within a week, we pretty much just cut our losses and move on, because it's not worth investing additional time there.

The other one, especially with security tools, is the false positive rate. If things are paging us saying you have a critical issue, and a lot of the time we find out we don't, that introduces a lot of on-call burnout for us, and it's something we try to avoid as much as possible. Any tool where maybe 90% of what it produces is noise just isn't useful to us if we can't filter out the noise in an easy way. Again, we've had tools in the past that were great, but there was too much noise and we couldn't find a way to filter it out properly, which reduces their usefulness. It goes from, when you see an alert from that tool, “Oh, great. I must get on this immediately,” to, “Oh, it'll be that thing again,” and you ignore it. At that point, especially for a security tool, it's lost all its usefulness.

[0:31:38] Guy Podjarny: Yeah. The boy who cried wolf.

[0:31:39] Rich Adams: When you lose trust in the tool, then you have to move on.

[0:31:43] Kevin Babcock: We've also encountered some challenges as an early adopter. There are some very good tools out there for DevOps-type organisations, like you mentioned earlier, Twistlock, Signal Sciences. We've also evaluated some other tools very early in their product cycle, and there's an advantage in looking at those, because you may get a new protection that's not broadly available. But you're also taking on some risk, because that company is still new, still establishing the product. In some cases, we definitely saw the potential and we wanted the functionality, but as Rich was saying, the time to integrate was too high. We ended up pushing it off and saying, “Well, we're going to keep an eye on this technology and re-evaluate six months down the road, a year down the road, but it's not something we can do today.”

[0:32:31] Guy Podjarny: It's just an ROI type of calculation, right? You just anticipate more investment being necessary in it.

[0:32:37] Rich Adams: Yeah. I think another important thing is the responsiveness of support as well. We've certainly had tools where we've hit a roadblock. The documentation isn't telling us what we need to do, and it's not obvious how to continue with something. We'll reach out to a support team, and we won't get a response for a week. At that point, we've moved on. It might turn out the response is, oh, just flip this configuration setting. Oh, that would have been easy. But once the week's gone, well, we've moved on to other things. There's definitely sometimes a missed opportunity there if the support isn't very responsive. That can affect whether we end up using a tool or not.

[0:33:14] Guy Podjarny: I love that all of these criteria are bread and butter for any dev or ops tooling out there. Unfortunately, they're not at all the default, or a given, for the security tools that are out there, so that's maybe another evolution the ecosystem needs to go through.

[0:33:28] Arup Chakrabarti: Yeah. I think it's interesting, because we were talking earlier about accountability in our environment, how we have individual teams accountable for the code that they ship. A lot of the security tools make an implicit assumption that, “Oh, you have an army of security analysts looking at this,” right? They make that assumption. I don't know, you can look at this room. There's not an army of us, unfortunately. It's always interesting when I see a tool out there making some bold claim. I look at it like, “Oh, this is fantastic. Wait a second. You're expecting me to have an army of 20 people watching these screens constantly? That doesn't work for our organisation.”

One of the lessons I've learned the hard way, unfortunately multiple times, is that you don't look at which audience the tool was built for when you go to buy it. You end up buying it and then realising after the fact that you were not the audience the tool was built for. You end up, again, with integration challenges and beyond. That's something that, at least for me, I've tried to be more mindful of going forward: was this tool I'm buying built for our audience? That audience is different for each company. It's different for each team.

[0:34:40] Kevin Babcock: There's a promising new set of tools out there that I think are very interesting, tools that enable people who may not be full security specialists to do security work. These are the security orchestration products, like Phantom, or Exabeam, and others that have emerged in the last few years. I think there's a lot of promise in being able to implement these to get higher leverage out of your security organisation, by enabling people without a security background to effectively do security tasks.

[0:35:09] Guy Podjarny: Yeah. I love hearing this. For me, when I founded Snyk, the whole definition was to say, Snyk is a dev tooling company that does security. No matter what it does, it needs to be operated on a daily basis by developers, by DevOps teams. If it's being used by security, we've lost. It needs to provide the guidance, it needs to provide the expertise, because you want developers to engage with security, but you can't expect developers to be experts in security every time. When we don't have expertise, we revert to tools, and the tools should bundle in some of that expertise and make it accessible for us. I love hearing the philosophy and seeing it working in action. This was super useful. Before I let you go and continue securing PagerDuty... well, actually, you don't need to, because the whole team, the rest of the company, is doing that, right?

[0:36:01] Rich Adams: It's fully secure. There is nothing left to do.

[0:36:06] Guy Podjarny: Before I let my guests go, I like to ask for one tip. If you're talking to a dev team, or an ops team, that's trying to up-level their security calibre, their security fu, what's the one tip, or the one pet peeve, that you would highlight right now? Maybe Rich, I'll start with you.

[0:36:29] Rich Adams: Sure. For a development team, I think it's key to get the team excited about security. If a team just sees it as a hindrance, something like, “Oh, we have to do this security thing,” it's never going to take off. I think these things work best when people take it up on their own initiative, then pitch the idea to other people who take it on in turn, and it grows that way. One of the things I always like to pitch to teams is: work from the side of an attacker.

I mentioned CTFs at the beginning. Play a CTF. Try to execute a buffer overflow vulnerability. See just how easy it is to do these things. Try some cross-site scripting if it's a web application, some cross-site request forgery, just to see how simple these things are to break. At least with engineers and development teams, I always think it's very exciting when you break that first thing and you're like, “Oh, wow. It was that easy? I just did this one little SQL injection, and now I've got all that data? Maybe I should fix that.” That journey gets people excited. Especially with movies and TV, there's this hacker mentality, and people want to do the cool thing. Seeing that work in real life, seeing things being exploited, always gets people excited and wanting to protect those things and defend against those attacks.

[0:37:46] Guy Podjarny: Excellent. How about yourself, Kevin?

[0:37:48] Kevin Babcock: To do security well, you need to take it in context. You need to know what your valuable assets are and what's at risk. It's not enough to say, we need to have strong passwords, we need to use encryption, we need two-factor authentication. Unless you understand why you are implementing those controls, you're missing the point. The reason we went and implemented two-factor authentication for SSH is that we're concerned about a very common attack vector, where a phishing email comes in, someone deploys malware on a machine, and then there's lateral movement into the production network.

We know that all of our most sensitive data is inside that production network, and so we're interested in putting additional controls in place, so that if and when there's malware operating inside the corporate network, it's very, very difficult to move laterally and get at the most valuable assets.

[0:38:38] Arup Chakrabarti: There's this entire class of security problems that only get harder as companies get bigger and your teams get bigger. Having seen multiple companies now go through these crazy growth stages and then bolt security on as an afterthought, you're signing up for an uphill battle there. Starting early doesn't mean dedicating 50% of your workforce now. What it might look like early on is that you have a single engineer who cares about this early in the company's history; let them spend part of their time on it. Enable them and let them be successful. It just pays dividends down the line. If you think of security as, oh, we're going to go out and buy a security product, we're going to go buy a security team, we're going to bolt this on, it rarely works. If you're starting to think about it, chances are you should have been doing it yesterday. Just do it today and keep investing in this stuff as best as you can.

[0:39:40] Guy Podjarny: Arup, Kevin, Rich, thanks for joining us today. This has been super insightful. Before you disappear, if one of our listeners has questions for you, or wants to follow up and get some of your further advice out of band, how can they reach you?

[0:39:55] Rich Adams: I am @r_adams on Twitter. That's one D.

[0:40:00] Arup Chakrabarti: I am @arupchack, A-R-U-P-C-H-A-K on Twitter.

[0:40:04] Kevin Babcock: I'm not on Twitter, but I'd be happy to entertain conversations if you reach out to me on LinkedIn. You can find me under my name, Kevin Babcock; just make a connection.

[0:40:13] Guy Podjarny: Perfect. Okay. Well, thanks a lot. For all those joining us online, I hope you enjoyed the episode, and join us in the future.

[0:40:19] Arup Chakrabarti: Thanks.

[END OF INTERVIEW]

[0:40:21] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over a hundred videos about building developer tooling companies, given by top experts in the field.
