
Season 7, Episode 120

How To Build A Successful Bug Bounty Program With Sean Poris

Guests:
Sean Poris

A successful bug bounty program can play a pivotal role in the security strategy of a company, but defining and running such a program requires structure and maturity within an organisation. Sean Poris, Senior Director of Cyber Resilience at Yahoo, knows all about the anchor elements that you need in a bug bounty program and how to drive the maturity of such a program. In this fascinating conversation, Sean goes deep into how bug bounties fit into Yahoo's security philosophy, and how the program has been developed and adapted over time. From there, we turn to the actual structure of the security team, with our guest shedding some light on what is required from the different roles on the teams. He explains what the Deputy Paranoids stay busy with, and how they approach hiring and educating for this position.



“Sean Poris: There are a handful of anchor elements that you need in a bug bounty program, and then you could wrap the rest around and drive maturity. The first thing that's key is you've got the scope. What are you bringing to the researchers for them to hack on? Is it interesting? Is it compelling? How big is it? How wide is it? What business problem space is it in?”

[INTRODUCTION]

[00:00:23] ANNOUNCER: Hi. You're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd. That's S-N-Y-K.I-O/T-S-D.

[INTERVIEW]

[00:01:13] Guy Podjarny: Hello, everyone, thanks for tuning back in. In this episode, I have the pleasure of hosting Sean Poris of Yahoo. Actually, if you look at Sean's LinkedIn, you'll see that he has been at four different companies since 2016, but in practice, it's been Yahoo in all sorts of shapes and forms, leading and building up their product security and application security programs, and much more, into his current role as Senior Director for Cyber Resilience. That role includes product security and application security under it, but also an aspect that he calls community driven security, which I find fascinating. So we're going to be talking about bug bounties, and about security champions, and about how to take a community approach to all of those. I hope you enjoy the episode. Let's tune in.

Sean, thanks for coming onto the show.

[00:02:02] Sean Poris: Thank you so much for having me.

[00:02:03] Guy Podjarny: Sean, you've been in security for a decent amount of time. I don't know if you're like me, a little bit uncomfortable with saying just how long. There's a lot that we can talk about, whether it's in your current Yahoo role or previous positions. But I know that one topic you're very big on is bug bounties. I'm a fan conceptually, but I always find it interesting how people operate them and the lenses on it. So let's drill down into that a little bit. For starters, can you give a little bit of your view of the purpose of bug bounties? Why bother at all? What's the role you see them serving in our world?

[00:02:42] Sean Poris: Sure. I mean, there's a tremendous body of capabilities and skills that exist out in the marketplace in security in general. Typically, a company can only afford X number of FTEs and consultants. So you get that level of knowledge and skill. When you expand it to something like bug bounty, you bring in a diverse community of not just security skills, but just ways of thinking about problems and ways of thinking about being destructive. I say destructive in a healthy way, like a destructive mindset. How do I break apart a system –

[00:03:20] Guy Podjarny: Find the gap.

[00:03:21] Sean Poris: – and find the gap, find the vulnerability, exactly. That is tremendously valuable for us: leveraging that community, partnering up, and ultimately building security skills around the industry as well, so that these folks can either continue to research at other companies or eventually work their way into becoming security engineers. Or they're already security engineers, and they're expanding their skill set through their work on bug bounty programs.

[00:03:45] Guy Podjarny: Yeah. Is it purely a budgetary thing? If, in a theoretical world, you could hire all the auditors, and red teams, and pen testers into your own company, would you see that as better? Or do you actually feel like the program gives you something that is more than just practical, results-based payments?

[00:04:02] Sean Poris: That's a great question. Actually, you're dovetailing into something, and we haven't talked about this, but there's a talk I'm working on around bug bounty and the partnerships that you have. Because I've had questions from finance departments and other areas of the business about the investment in bug bounty, and how that compares to bringing on those roles instead. The thing is, with those roles, once you join a company, there's a certain groupthink, a certain mindset. We think about the products a certain way, we think about our infrastructure, the way we do business. Over time, you can lose just a little bit of that freshness that I love, that I get from the bug bounty community. I think that, and finding one great vulnerability, makes the investment well worth it.

[00:04:46] Guy Podjarny: Yeah. It's basically a diversity statement. I like to tell everybody that joins Snyk, in a little founder-type intro that I do, one of the things I say is, "You have something I don't have, which is a fresh perspective. You come in and you're saying things for the first time. You haven't drunk the Kool-Aid yet." I think that's a great lens, especially when you think about something that is ongoing.

[00:05:07] Sean Poris: Yeah. I was just going to say it's beautiful, because even when I bring in any new hire, within the first month, I try to get time with them on a one-on-one basis for the same exact reason. What have you observed? How's the onboarding been going? Are you enjoying it? Is there something that we've missed that we thought we got, that we can continually improve upon? Getting that dialogue with folks in the organisation is huge.

[00:05:29] Guy Podjarny: Yeah, absolutely. So say you're sold on bug bounty, and it's not just a budget thing, it also gets results. What are the best practices around a bug bounty? And maybe I can even bother you for almost a pseudo maturity model, which I know we all love and hate in the DevOps and security world. When you think about a successful bug bounty program, what does successful mean, and maybe a little bit of, what's the journey that an organisation might go through?

[00:05:53] Sean Poris: There are a handful of anchor elements that you need in a bug bounty program, and then you could wrap the rest around and drive maturity. The first thing that's key is you've got the scope. What are you bringing to the researchers for them to hack on? Is it interesting? Is it compelling? How big is it? How wide is it? What business problem space is it in? So that's the first thing that's exciting. The second thing is the policy for your program. Those four corners are the rules of engagement: how you will interface, what is acceptable testing, what is the appropriate way to report, what are the things that we don't want the researchers to be doing, and how do we point you towards that scope that we've identified. That's another key element. You've got scope, you've got policy.

The next thing is you have the community and the researchers themselves. So how are you investing in the conversation you're going to have with the researchers? We really do our best, and we often do it well. Every once in a while, we just have a challenge. But in working with the community, we want to understand what the researchers are interested in, what's working, what's not, get feedback, and incorporate that into the program, and treat them almost as like this extension of the security program itself, that they're playing a role in protecting users on the internet and our data. 

The last piece is, you've got to have a platform to ingest the findings from the researchers. You've got to have a vulnerability management program. If you don't have a programmatic ability to respond to the vulnerabilities you're getting, it's going to be very, very painful. It's going to be a bad experience for all. So you got to have a way to ingest, you have to have a vulnerability management program to take the bug from identification to closure. Then you have to have a way to get payments safely into the hands of the researchers in exchange for those bugs. Those are sort of the core elements. 
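To make that ingest-to-closure-to-payment flow concrete, here is a minimal sketch of a report's cradle-to-grave lifecycle. It's a hypothetical illustration in Python, not Yahoo's or any platform's actual data model; every state and field name is an assumption:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ReportState(Enum):
    """States a bounty report moves through, from ingestion to payout."""
    NEW = auto()        # received from the researcher via the platform
    TRIAGING = auto()   # analyst attempting to reproduce the finding
    CONFIRMED = auto()  # reproduced; ticket filed with engineering
    DUPLICATE = auto()  # already known internally or previously reported
    FIXED = auto()      # engineering has shipped a fix
    PAID = auto()       # bounty released to the researcher


@dataclass
class BountyReport:
    report_id: str
    researcher: str
    asset: str      # which in-scope asset the researcher hacked on
    severity: str   # e.g. "low", "medium", "high", "critical"
    state: ReportState = ReportState.NEW
    history: list = field(default_factory=list)

    def transition(self, new_state: ReportState, note: str = "") -> None:
        """Record every state change so the full lifecycle stays auditable."""
        self.history.append((self.state, new_state, note))
        self.state = new_state


# A report moving through the pipeline, identification to closure to payment.
report = BountyReport("RPT-1024", "researcher_42", "api.example.com", "high")
report.transition(ReportState.TRIAGING, "assigned to vuln ops analyst")
report.transition(ReportState.CONFIRMED, "reproduced; ticket filed")
report.transition(ReportState.FIXED, "patch deployed to production")
report.transition(ReportState.PAID, "bounty released via platform")
```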

The other thing that I've found at a larger company, and I'm curious if it's as significant at a smaller company, is stakeholder engagement and getting executive buy-in. We have a lot of governance and oversight that we provide for the program. When I first took over bug bounty, I found it's a very sexy thing. I think bug bounty and red team are like the two super – everybody's excited about them.

[00:08:13] Guy Podjarny: You're sort of breaking things, and you get to just find the flaws and throw them over to somewhere else to do something about them.

[00:08:19] Sean Poris: That's exactly it, so everybody gets really excited. They want to know what's happening with the program. All of a sudden, you spend a lot of your time communicating out to people and managing the message of the bug bounty program. Is it successful? Is it not? What's happening with it? Are things spinning up around it even internally, not just externally? Another element of a large company's bug bounty program is making sure that you've got the right executives coming to bear at the right time to get the right information about the program. You have to manage it like any other program. What are the objectives? What are the metrics? Are we succeeding or not? What dials are we turning as a result? Is it providing the amount of value that we thought it would provide? You have to communicate that to legal, to finance, to engineering, to security, to the CTO. I mean, there's a lot of players who are interested in it. That's another key critical success factor, I think.

[00:09:07] Guy Podjarny: What you've described right now is kind of the maximum. The working, high-functioning bug bounty program has these five elements in it. What would you say is the minimum if someone doesn't have a bug bounty program at all? I guess the most tempting thing is to go buy one of the existing bug bounty platforms and say, "I have a bug bounty program." Do you think you can be successful that way? What else, beyond what you can purchase, do you really have to have to be successful?

[00:09:35] Sean Poris: I think you can start with the basics of a vulnerability disclosure program. So, just how can anybody report a vulnerability to you, and the mechanisms to ingest that successfully: that consistent way to bring the information in, then process it within the company, and communicate out, in a full cradle-to-grave cycle. You need that. If somebody were interested in the next step, what is the baby step for bug bounty? I'd probably say, go through one of the several platforms that are out there, do a private program on a limited set of scope, and test out your processes and your capabilities. Before you do that, exercise your AppSec or ProdSec program on that limited scope, so that any of the low-hanging vulnerabilities you can probably find with a scanner or a quick review, you can address, find any gaps, and determine whether you even want to head down that path. You might want to pause and work on some hygiene before you unleash your program to a handful of researchers who – they get very excited about new scope.

When a new scope shows up, it's a fun playground for them, and they will find things, so you don't want to unnecessarily pay for things that maybe your own program should be finding. For instance, you have a scanner running, but you haven't looked at it in a while. You might find some of those cross-site scripting things that you're going to end up paying your researchers for. But that's it. So: small program, private, handful of researchers, test out the program, and then kind of fail fast.

[00:10:58] Guy Podjarny: The sentence that comes to mind is, you have to clean the house before you invite the guests over. You have to sweep around a little bit and make sure that at least those things have been taken care of. I think that's a good starting point. How much trust do you put in the tester? When you think about that internal program, you have access to assets that the bug bounty hackers don't necessarily have. For instance, source code, or maybe you've scanned it with some sort of SCA tool and identified open-source vulnerabilities that you might now be playing with. The red team or others might have access, or even the AppSec team itself might have access to that information. Do you ever share, or do you see people sharing, that type of information with the bug bounty hackers? Do you see yourself getting to that point, if not today?

[00:11:47] Sean Poris: There are a handful of different scenarios there. I mean, the first one is, no, it's more like runtime: what can you do from the outside, blind. Well, we have done that, actually, just recently, in our most recent live hacking event, which is where we bring our security staff and a subset of researchers to a physical location. We spend time together, getting to know each other, building the community, but then also hacking on specific scope. We'll have certain swag, and bonuses, and just a lot of fun together. But this go-around, we tried exactly what you just mentioned, and I think some other companies have as well.

We brought our engineering teams to that live hacking event, because we realised that we actually had some scope that was a little tricky to hack on without being pointed a little bit more in the right direction. We brought our engineering team up to talk a little bit about the architecture of the solution. These were things that can be found in the public domain, but there was just such a large body of documentation, it would have been hard within the context of a few weeks of hacking. We shared some interesting areas that we wanted to get the hackers to focus on, and then also talked a little bit with them about the system architecture. I think that was a really great test for our program: to bring those engineers and engineering leaders to this function, get them up in front of the hackers, making the hackers real to them, and them real to the hackers, and then also helping point the hackers to places that would drive better research for the time that we had.

[00:13:19] Guy Podjarny: Yeah. I love the exposure. Again, kind of DevOps roots a little bit, thinking about the analogy of breaking down the barriers and walking a mile in their shoes. A lot of times, it's just about seeing the person on the other side and what they do. But still, to challenge that a little bit: in theory, these hackers are on your side. You probably have the problem that everybody does, which is, there's a whole bunch of local issues that you have that you would be very keen to know if they're exploitable or not, and you're kind of getting free time, if you will, unless there was a find. To get those, what would you need to see, or to have, to share not just information that's available online and is hard to digest, but actually internal information to help support it? Do you think that's a journey you'll take, and it's just that you haven't yet?

[00:14:05] Sean Poris: Well, we have in certain contexts. We have in the live hacking environment. We also have an elite program. As we're talking about private programs: we have a public program, which is open to all, and then we have this thing called an elite program. It's kind of like a VIP hacking group, based on performance on the public program. They receive invites to join this program, and they get access to some of the things that are closer to what you're talking about: some inside information, some tools. There's an NDA in place with them. But there's that fine line: there's the black and white of open and closed, and then there's kind of, where do you cross over and start leaning more towards open, and how far do you go?

We're continuing to explore that, and I think the whole bug bounty community is too. I'm trying to think, maybe 10 years ago, people were talking about bug bounty and immediately, lawyers would say, "Absolutely not." If somebody reported a bug, you'd send a cease-and-desist order. All the way to now, where it's kind of the norm to have a bug bounty program, or at least to be considering it. Where will we be in five or 10 years? Maybe there will be that embracing of bringing hackers in more closely to the scope and bringing more information to bear. We do share in our policy. I mean, we provide a lot of information, like: here are the places you can go, here are the domains, very specifically, where you might want to focus, and where we don't want you to focus for a variety of reasons. We're open there. We don't make the hackers kind of guess on that front. But in terms of that border between intellectual property and sharing so much that the hackers can really identify interesting things, like maybe your internal red team does, that's going to be a journey.

[00:15:42] Guy Podjarny: Yeah, it makes sense. That's also, I guess, a gap of trust. It's interesting to hear about the elite thing, which I've heard about before as well. I guess it's just another step in between the trust you put in an employee and the trust you put in an open community; there might be a more vetted step in between. Who internally is the conduit to this world and this community? Who is responsible? I mean, I think it's you in the meta, at the leadership level. But organisationally, is it the AppSec team? Is it sort of SecOps? Who owns the bug bounty program?

[00:16:15] Sean Poris: There's a partnership that we've established, and we also have a new team that I'm really pretty proud of. Let me talk about both. Bug bounty officially sits inside of the product security function. It can sit in different places in different orgs, but we felt like there's a nexus between the vulnerabilities identified by researchers and the work done in the product security team around vulnerability management and around architecture and design guidance from a security standpoint. Really, we felt like there's some good synergy there. We do pull in another team: that's our vulnerability and control operations team, which sits inside of a bigger team called technical security services. This team helps do the analyst work in analysing the bugs that are coming in, confirming that we can reproduce them, understanding whether we knew about them before or not, and they partner with the bug bounty team.

In some cases, that function reports directly into a product security team, or a product security incident response (PSIRT) team. In this case, we have the two, and they collaborate really well together. So they'll hand off: they'll validate the vulnerability, and product security knows that particular product really well. At that point, bug bounty and ProdSec will partner up and understand what's happening, make sure that the ticketing that we do is clear and crisp for our engineers, and product security might work with the engineer on the solution. Then we also have something that I'm really – actually, let me back up. The team that does that on the ProdSec side, we call CDS, or community driven security. We built a whole community around this. It includes bug bounty, our deputy paranoids, and some other facets of how we engage the internal and external communities to drive security.

But another key element that we had talked about earlier for successful bug bounty programs is something we call the bug bounty lifecycle, or BBLC. This is meant to take a bug that we receive from a researcher and then analyse, across all of our people, process, and technology, why we were not able to find it ourselves, and how we can improve as a result. We might find a bug at point A, and then identify, "Oh, it's actually sitting at points B and C as well, and we didn't know about that. Great, let's go ahead and address that." We might realise there's a rule we can add to a scanner. We might realise there's a check we need to do, a baseline we need to adjust, a variety of things that we can do to avoid that in the future. This BBLC program is something we've talked about with our internal engineers and product folks, to let them know that we're trying to drive down the overall cost of each bug that comes through bug bounty.

There's what we pay the researcher. But then, if we can find this rule and eradicate this moving forward, fantastic. That's X number of instances of that particular vulnerability that we will no longer have in our organisation, right? The nice thing about bug bounty is you can codify that and quantify that, so you can actually indicate a potential specific savings, which is kind of hard to do in security. It's more about risk avoidance than ROI.
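As a rough illustration of that quantification, here's the arithmetic in a short sketch. Every number below is made up purely to show the shape of the calculation:

```python
# Hypothetical figures for quantifying savings from the BBLC approach.
bounty_paid = 2_000             # paid to the researcher for the original finding
variants_found = 5              # same flaw found at points B, C, ... internally
expected_future_instances = 10  # recurrences a new scanner rule now prevents

# What those variants would have cost if each came in as a separate bounty.
cost_if_each_bountied = bounty_paid * variants_found

# Total spend avoided by eradicating the vulnerability class once.
avoided_spend = cost_if_each_bountied + bounty_paid * expected_future_instances

# One bounty payment, spread across every instance it eliminated.
cost_per_instance = bounty_paid / (1 + variants_found + expected_future_instances)

print(f"Avoided spend: ${avoided_spend:,}")                       # $30,000
print(f"Effective cost per instance: ${cost_per_instance:,.0f}")  # $125
```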

[00:19:17] Guy Podjarny: Yeah, because you can almost rest assured that once a bounty was successfully paid off for one entry point, that hacker is going to try it on all the entry points they can find.

[00:19:25] Sean Poris: Absolutely.

[00:19:26] Guy Podjarny: Might as well get ahead of that. By the way, I love the community driven security, and we'll get back to the deputy paranoids in a bit. So, maybe just a couple more questions on bug bounty. Again, I think of a lot of things through the DevOps lens, as I think many listeners know. One of the analogies in my mind is to think about bug bounty as continuous monitoring of security. In theory, you're going to have a constant stream and a sufficient volume of pings, if you will, from a bug bounty program, because the findings accumulate and you keep generating vulnerabilities for the researchers, because you change code and use new technologies. Then in theory, that represents a steady state, almost like uptime. It constantly bothers me, and I'm not alone, that you never really know: is security getting better? Is it getting worse? Not whether your security program is better or worse, but your actual security posture. Does that resonate? Do you do anything of that nature? Is it a viable analogy at all?

[00:20:25] Sean Poris: One hundred percent. We're always thinking about a couple of things. Number one is, how many researchers do we have on our program, and how much time are they spending, so we can get a sense for what level of poking and prodding is happening against the infrastructure in our products. The second thing is, by engaging with the community, we also hear some of the qualitative feedback that just shows up observationally, where folks say, “Hey, compared to X number of years ago, wow, you're a lot harder to hack on.” 

This is sort of that ability to report back to those who are investing in our security program and say, “We're getting some information from out in the industry that, yeah, it's getting tougher. Things are continuing to harden and go in the right direction.” As you invest more in your security program, your investment in bug bounties could go down, unless you decide to raise your pay tables, or include promotions that are pretty exciting. Hey, $100,000 for this bug kind of thing.

That's one of those situations where we want to look at the investment in the overall security program, and potential reduction at steady state, and investment in bug bounty if we have the same level of testing. There's a lot of variables you have to hold steady, but same level of testing, similar quality of testing. Then you can realise the fruits of your investments in the security program as a whole.
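A small sketch of the steady-state metric Sean is describing: holding the level and quality of testing roughly constant, track bounty spend per unit of testing effort over time. All figures are invented for illustration:

```python
# Hypothetical quarterly data: (active researchers, testing hours, bounty spend).
quarters = {
    "Q1": (120, 900, 150_000),
    "Q2": (125, 950, 130_000),
    "Q3": (130, 1_000, 110_000),
}

# With similar levels and quality of testing each quarter, a falling
# spend-per-testing-hour suggests the attack surface is getting harder.
for quarter, (researchers, hours, spend) in quarters.items():
    print(f"{quarter}: ${spend / hours:,.0f} paid per testing hour "
          f"({researchers} active researchers)")
```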

[00:21:42] Guy Podjarny: You're kind of forcing yourself to a certain bar, right? You're saying, "I'm going to pay X, whatever, say a million dollars a year in bug bounties." Then in theory, if you manage to improve your security posture through that investment, and that million kind of goes down, then you have some real savings. Granted, you're paying it one way or the other; you're paying the dollars either way, and you're probably better off renting that expertise. Then from there, you can decide again to ratchet up the prices, which I guess is what the tech giants of the world have often done, right?

[00:22:09] Sean Poris: Yes.

[00:22:10] Guy Podjarny: Then maybe one last question before we move off this topic. Just curious: we've talked about all the goodness of bug bounties, but has it ever backfired on you? Are there any stories of a hacker that went out of line and caused some problem, or where you maybe ignored a potential breach because you thought it was bug bounty activity, and that allowed it to go a little bit further than it should have?

[00:22:33] Sean Poris: Yeah, certainly not the latter scenario. The former, we're dealing with humans in the end, and so there's personalities, there's certain experience levels, there's modes of communication. We are a diverse community, and so there's going to be moments when there are potential clashes. We invest in the community so that we can minimise those. We can have the conversations before they get out of hand. We're always taking a look at how we can get better and how we can make the interactions more standardised and reliable, so that there's a mechanism to deal with these things. 

We have certain policies and runbooks that deal with when a researcher goes outside of those four walls of the policy. What do we do? How do we make sure that analyst A from the bug bounty program will treat that the same way as another analyst would, so that there's not any preferential treatment? That it's very fair if you end up on this pathway of, "Hey, we need to work with you to get you back in the fold, operating in bug bounty, above board, in alignment with the policy." So we make sure that's pretty consistent. We have a system for identifying that, tracking that, and closing those things out in a rigorous kind of way.

[00:23:45] Guy Podjarny: Yeah, very interesting. It sounds like you're basically thinking about these hackers as customers; to an extent, you're servicing them. They're coming along, they're finding problems, and doing a service for you. So maybe vendor is a better analogy for them, but you're managing them, and it's important to have a long-term outlook on how you treat them. Or maybe a hiring pipeline, maybe that's it: they're like prospective hires, and I guess sometimes they literally are, right?

[00:24:08] Sean Poris: Yeah. Yeah, they are. Actually, we did have one example of that, and I'm always looking out for those opportunities. But that's exactly it. I mean, there's so many more programs now coming online every week, and so the researchers have a lot of places they can go. So they want to go places that are fun and treat them well, and this is what we're trying to achieve. It was a really great observation that we do treat them a little bit like customers, and make sure that we're providing what they need, and understand the pulse of the customer, and how are they doing, and are they still enjoying hacking on our program and staying engaged?

[00:24:44] Guy Podjarny: Yeah. The more I think about it, the more I feel like a hiring pipeline is indeed a good analogy. If you're a good employer, then you make sure the candidate experience is a great experience, even for the majority of those people you end up choosing not to hire. How you treat them greatly impacts how likely someone else is to come try and work for you, or whether those people will come back at a later time when you might want to hire them.

[00:25:11] Sean Poris: A lot of them are engineers at companies, on security engineering teams. Even if they're not interested in moving on and joining our team, they have colleagues all over the place, both in the marketplace and right there in their job, who might be interested. So it's a great opportunity to say, "Hey, don't forget about the paranoids."

[00:25:28] Guy Podjarny: Indeed. So maybe let's backtrack a bit. You talked about having a team that does community driven security. I love that; I haven't heard that acronym before. And you have these deputy paranoids. Tell us a bit more about that. What is that program?

[00:25:41] Sean Poris: Sure. That's the classic pillar of an AppSec program: to have your satellite program or your champion program. This is a federated model, where we partner with our engineering teams, who have individuals identified as deputies, and we have a thriving community where they're helping us engage in security. We identify the engineers who have a proclivity and an interest towards not just constructing good code, but also looking at code with a destructive mindset and thinking about how security needs to be applied, and then being our partner to bring in our services and tools. Then we bring this robust community together where they can discuss various security topics and keep it interesting and lively.

[00:26:24] Guy Podjarny: Yeah. Just for context by the way, deputy paranoids is because the security team at Yahoo is referred to as the paranoids, right?

[00:26:31] Sean Poris: Correct, correct.

[00:26:32] Guy Podjarny: Okay. What is the division? I mean, tell us a bit about this program. Is it a community? Is it an official – maybe for starters, is it an official assignment when someone is a deputy paranoid? Does their manager know that? Is there some percentage of their time that they're allowed to spend on security?

[00:26:49] Sean Poris: Yes, and yes. The same level of rigour that we put into the bug bounty program, we've put into deputy paranoids, where we have certain activities that we would like them to perform over time, and they go through some training and get endorsed for those activities. Then they can perform those activities on behalf of the security team, or in partnership with the security team. We're constantly evaluating how we apply this program. Right now, it's primarily with our different engineering teams on our products. So you can figure, for whatever Yahoo product we have, there will be one to N engineers who are also deputy paranoids.

Their managers are aware they participate and they represent – they're almost like an ambassador, who goes back to their program and says, “Hey, listen. I'm aware of this policy, I'm aware of this tool, I'm aware of this service before you launch, I'm aware of whatever the case may be, or I can provide more context to a bug bounty vulnerability that just came in” or even provide us insight about the compensating controls or other controls that are in place that we may not be aware about. 

So it may look like a potential vulnerability, but actually, it's not really, because we have these compensating controls in place that would address it. The engineer sometimes helps us understand that. They know their code better than we know their code, so we'd love to leverage them. Then, we're picturing expanding that out. That's currently on the external-facing product side; next is the internal-facing, corporate products, getting those engineers engaged as well, and then expanding the program out as far and wide as makes sense, as long as we can keep it viable.

[00:28:24] Guy Podjarny: Yeah. I'm pretty much a fan of security champions programs. A couple of questions: what's your sense of the rough percentage of time that a deputy paranoid – I love that there's a name for it now – spends on this?

[00:28:39] Sean Poris: Yeah, it varies. Gosh, it's been a few years since I've looked at the inner workings of the program. The gentleman who runs the community driven security team for us now, he's got that. But we came up with a rubric where we said, "Hey, we think an engineer would spend about X amount of time based on this number of endorsed activities. As that goes up, they would spend Y amount of time." Then we described it as savings on the back end. For every vulnerability you identify as a result of being a deputy paranoid, you will not get the Friday at 6:00 PM call where you have to drop everything and fix something that's been identified by bug bounty or through another means. So there's been general appreciation for that.

Then also, the fact that we can almost convince the engineering leaders that we will be shifting left as a result of the deputy paranoids being embedded. Kudos to the team. I'll mention a few folks: Rob Hines and Will on the team have done a wonderful job of building this out, and finally getting it from the point of, "Why would I spend my valuable engineering time working with this? What is this program called? What does this do?" to now, "How do I sign more people up? Because this is a really valuable program." I've tried to stand programs like this up in the past, and I have a lot of lessons learned on why they didn't work. We've applied almost all of those here, to the point where it's now more successful than any other program that I've personally been involved in. So it's working really well, in partnership with our engineering friends.

[00:30:09] Guy Podjarny: That's awesome. That's great to hear; it's not a small accomplishment. For the deputy paranoids themselves, as individuals, what's the carrot for them, above and beyond, hopefully, their natural interest in security? Is there some standard training you give them? How do you incentivise them to join?

[00:30:30] Sean Poris: First of all, if you identify people's passions and strengths and connect them with that kind of work, there's a natural pairing. 

[00:30:37] Guy Podjarny: They just want to do it, yeah. 

[00:30:38] Sean Poris: In fact, we don't take the most senior person on the team. We take the person who has shown a really good level of work in the team. They know a lot of people, they're connected, they can explain concepts really well, and they're interested in this area, the security area, so they bring all that to bear. We select those kinds of folks, who are going to be influential on their team, understand it, and can explain it in a way that's palatable. Then, we give them these trainings and endorsements, so that they are endorsed to perform these activities.

We had talked about belts: your white, blue, yellow, green, all the way up. We talked about different levels, but we decided that was a lot of cruft that's hard to manage and keep track of. We do offer swag, which everybody loves, right? The last thing, and I think this was part of it when we were looking to rename the team and came up with community driven security, is that we have built a community. The deputies come to periodic sessions, where they get exposure to security professionals, or the CISO, or somebody high up in the business they normally wouldn't get a chance to talk to. And we'll bring some of that programming directly to them; it's more intimate, more interesting programming that they can't get through any other vector.

[00:31:56] Guy Podjarny: Yeah. That sounds involved and in tune with what might motivate them. Maybe looking a little bit at this team, the CDS team: what's the profile of a hire in that team? What's the skill set you would expect someone in that team to have?

[00:32:11] Sean Poris: Oh, that's a really good question. I mean, there's a variety of things that they do. There's one piece that's sort of the typical core AppSec engineer, but there's an element of soft skills that is necessary to build the relationship with the various partner teams and the community. One of the key things we hone in on is empathy: that ability to understand what it is like to be an engineer trying to deliver a product at breakneck pace with tremendous complexity, and on top of that, doing it in a secure fashion and absorbing some of the security work that has to happen throughout that lifecycle. So ideally, somebody who has been involved in software development in the past and has that ability, or someone who's an innately great communicator, both verbally and in writing, who has that empathy flavour and can understand the pain someone is going through, and then talk about our services within that context.

[00:33:07] Guy Podjarny: Yeah. It feels like it's almost more important to have program management and community management skills here than necessarily deep security expertise or deep software skills. You need some of those, because you need to understand the substance. But is it right to say that for this role specifically (not to speak to the individuals inside of it, who might be awesome all around), those are actually the secondary skills, versus the management of the program and the community?

[00:33:37] Sean Poris: Yeah. If you look at the mission statement for the community element, absolutely. To manage that community successfully, there's a certain personality type, a certain set of skills that you need that are not hard skills. And you can tunnel back into other areas of the organisation to bring those hard skills to bear. This particular group has got not just those strong community elements; there are other traditional AppSec things that we're doing inside of that function, too. So there's good cross-pollination happening between the different resources.

[00:34:11] Guy Podjarny: Yeah. I think that's also true when you think about someone organising conferences: if they're proficient in the subject matter, they'll make better conferences. Or a program manager in a development organisation: it helps if they are also a software developer, or have a deep understanding of software development. Here, you do that by mixing in the skills, which is great.

[00:34:29] Sean Poris: And it comes back to your comment about understanding your customer, right? I mean, these again are customers. In the past, I felt like regardless of the product, people would just come and join this program. I'm talking not about deputy paranoids, but about my past efforts at satellite and champion programs. That is not the case, right? You have to keep understanding your customer and providing something that they can do with the content. The other thing was, we'd do a deep dive on a vulnerability and then they'd go, "So what? How do I fix it?" And I'd say, "Oh, we didn't provide that module." We describe it conceptually, but we haven't handed you, "Hey, here's the library that you can use that will address this from now on." That's more of what I think deputies are looking for as well. How can they, by doing some of these things, make sure they're not getting pulled in Friday night at 5:00 PM? And also, are we giving them value? Are we giving them things they can actually use and that are practical? Kudos to Rob and Will again for doing that.

[00:35:25] Guy Podjarny: I think that's great. In general, my view is that security needs to shift from the hero syndrome, the person who tackles the problem, finds the bug, or triages and assesses everything, to platform teams: teams that have more of a service mentality and build a platform that enables developers. It's interesting that you were looking at the attackers, the hackers, on the other side; I guess that's even a platform approach to the red team side. It doesn't exclude that they remain a centre of excellence and an escalation point for complexity and such. But the majority of success comes from unlocking the ability of others, of the 100 times, 1,000 times more people in the organisation. So I think this addresses it very well.

So there's a million more topics that I want to talk to you about, but I think we're about out of time here. Before I let you go, I like asking an open-ended question at the end of every episode. If you had unlimited budget and resources to tackle a problem in and around the security industry, what would that problem be? And if you have any thoughts, how would you approach tackling it?

[00:36:37] Sean Poris: Gosh, there's so many to choose from. I mean, one of the ones that is still so difficult for teams is just the deluge of vulnerabilities showing up from all the different sources that we have. How do you focus limited time and energy, both on the security team and on the business and product teams, on addressing the right ones? I think if the marketplace out there had a way to really intelligently, maybe leveraging true ML and/or AI once we get there, help identify which vulnerabilities have the highest likelihood of being true positives within our organisation and having the biggest impact, and being able to have a SWAT team that focuses on those, that would be fantastic.

We could talk about what products we've seen out there; I haven't seen that yet, and so it remains a major effort to stay on top of that deluge. It takes a lot of time, causes a lot of distraction, and impacts the relationships security has with engineering teams, where engineering teams are responding to things they may not necessarily need to respond to, when they could be creating more value for the business and for our customers. This is something I would love to see us solve.
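The prioritisation Sean is wishing for boils down to ranking findings by expected risk: the likelihood a vulnerability is a true positive in your environment, weighted by its impact. A toy sketch of that ranking, with made-up scores (a real system would derive them from reachability analysis or a trained model):

```python
# Toy vulnerability queue. Each entry carries an estimated probability of
# being a true positive in this environment, plus a business-impact score.
vulns = [
    {"id": "VULN-A", "p_true_positive": 0.9, "impact": 3},
    {"id": "VULN-B", "p_true_positive": 0.2, "impact": 10},
    {"id": "VULN-C", "p_true_positive": 0.7, "impact": 8},
]

# Rank by expected impact so a "SWAT team" works the right queue first.
ranked = sorted(vulns, key=lambda v: v["p_true_positive"] * v["impact"],
                reverse=True)
for v in ranked:
    score = v["p_true_positive"] * v["impact"]
    print(f"{v['id']}: expected impact {score:.1f}")
```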

[00:37:54] Guy Podjarny: Yeah. Great. That's definitely a great one. We talk about it in security almost like in quality: you're never going to fix all the bugs. It's not a viable aspiration in quality, so why would it be a viable aspiration in security? But the problem is that it's still a very hard, labour-intensive decision to figure out which bugs, which vulnerabilities, are the ones that actually matter. We're doing some stuff on that front on the Snyk side; hopefully we deliver on a portion of it. I don't know if it's the whole thing, but I agree with the aspiration.

Sean, this has been really, really great. Thanks for coming on to the show.

[00:38:27] Sean Poris: Thank you for having me. It was so fun. It went so fast. Appreciate it. 

[00:38:30] Guy Podjarny: Thanks, everyone for tuning in. I hope you'll join us for the next one.

[END OF INTERVIEW]

[00:38:38] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]