
Season 7, Episode 108

A New And Improved Risk Assessment Model With Garrett Held

Guests:

Garrett Held


Today’s guest is the CISO at Carta, a software company that helps other companies manage their valuations, investments, and equity plans. Garrett Held has many years of experience in many different arenas within the security space, as well as a degree in business and economics; the combination of these passions led him to develop the program which forms the basis of today’s conversation. Frustrated with the traditional risk assessment model, Garrett came up with a new one, built around the idea of credit card balances and credit scores. In this episode, he explains how the model works, why it is beneficial, the process that went into creating it, and how you can do something similar in your own organization. Tune in today to hear from a true security pioneer!


[00:00:39] Announcer: Hi, you're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators, and security people to share their views and practices on DevSecOps, Dev and Sec collaboration, cloud security, and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source containers and infrastructure as code. To learn more, visit snyk.io/tsd. That's snyk.io/tsd.

On today's episode, Guy Podjarny, founder of Snyk, talks with Garrett Held, Chief Information Security Officer at Carta. Garrett has been working in information security for more than 15 years as a managing application security consultant, instructor, principal product security engineer, and director of security. He has also led the information security strategy and maturity process for dozens of acquired companies across multiple organizations. We hope you enjoy the conversation. And don't forget to leave us a review on iTunes if you enjoy today's episode.

[INTERVIEW]

[00:02:02] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today we're going to talk about various things, but notably about a really interesting model of how to measure risk. And to guide us through that and tell us about the great system that he built is Garrett Held, who is the CISO at Carta. Garrett, thanks for coming on to the show.

[00:02:18] Garrett Held: Great to be here, Guy.

[00:02:19] Guy Podjarny: So Garrett, before we dig into the details, tell us a little bit about what it is that you do, and maybe a bit about your journey into security and into your current role.

[00:02:27] Garrett Held: So currently leading security at Carta. I have been a part of several large companies and small companies throughout my career, as well as doing a lot of red teaming and building red teams. I was also a developer in my early days, and that was during the dot com days, which is when I dropped out of college to move out to California. When the dot com bust happened, I went back to college and finished up my degree in, actually, business and economics, which influenced a lot of this program we're going to be talking about today.

[00:02:57] Guy Podjarny: Very cool. And it's interesting, the moving out. There are all these myths about college dropouts, and then coming back to finish it has sort of been the reality for you. I think you've also done a stint on the vendor side, right? Like you were at tCell for a little bit, right? If I remember correctly, that's a security provider itself?

[00:03:19] Garrett Held: That's right. That was a security vendor with a RASP product.

[00:03:22] Guy Podjarny: And was that a different experience to be on the seller side? Were you on the seller side or on the internal security of that company?

[00:03:28] Garrett Held: I was on both of those. So it’s a small company. I was employee number five at that point. So everybody did a lot of different things. But it was definitely a different experience than being on the attacker side or the buyer side of the security environment.

[00:03:42] Guy Podjarny: Yeah. Two different lenses. And I've mentioned this a few times on the podcast, that it's interesting to think about which one is the dark side. I'd guess the vendor is the dark side, but either way, one of them is moving over to the dark side. And you've also spent a stretch of time at Twilio, right? Working on security there.

[00:03:58] Garrett Held: That's right. Saran was leading up security at Twilio, and he had been doing a lot of risk quantification work. It was an area that really interested me, and I saw a great opportunity to build a team there. I started off building the product security team, expanded to the cloud security team, and then expanded to the corporate security team as well, and took on security automation. It was just a really great time to join that company, and I had really awesome people to work with there.

[00:04:25] Guy Podjarny: Yeah, very cool. Twilio, I think, is one of those sources now where you see a lot of great security people building and coming out of that group. Well, thanks for that. And I think Carta is an amazing company in its own right, and I think a lot of companies are happy that you're handling its security.

Let's take a moment maybe to talk a little bit about this security organization. So you came from dev, and you came from Twilio, working on security engineering there. How do you structure your team today at Carta, now that you're the CISO and you head security? What are the key pillars?

[00:04:58] Garrett Held: Sure. We divide it up. We have IT, a corporate security and GRC function. And then we have what we call security assurance. This function assures the company that we're doing the right thing, so they'll handle customer compliance questionnaires, collateral for customers, and also auditing the other parts of the security organization to make sure that they're performing their actions. Then we have security operations, which handles everything defensive. So this would be intrusion detection, [inaudible 00:05:30] detection, handling incident response, and any investigations we need to do with legal. And then we have security engineering, which is really about building security into the processes and working with the engineering teams. This includes building internal tools, building features into the product that improve security, the security pipeline, and of course, the more traditional threat modeling and code reviews that we do with various engineering teams.

[00:05:57] Guy Podjarny: Very cool. So if I drill a little bit into that last part, it sounds closer to development and to what we touch on here. When you think about the division of work, you build some security-related capabilities, so can you give us some examples of what the engineering team, or the application team, would need to write versus things that maybe your organization would?

[00:06:21] Garrett Held: So we need to support our customers. Sometimes they need something changed in their account, or they need to look up what happened. In order to control access to those accounts, we don't want everyone, all of support, to have access to those accounts. So we have controls in place where, in order to gain that access, you need to be attached to a support ticket, or there may be some other controls in place, or there's a PIN you have to provide to a support agent. And then we log that access to make sure that we know who did what, and that that access is not being abused.

[00:06:54] Guy Podjarny: So that's the type of system, for instance, that you would build, like your team would build and maintain.

[00:06:58] Garrett Held: Correct. We build that into the product to make sure that access to that data is controlled, and that there is no god mode that can be abused.

[00:07:06] Guy Podjarny: Got it. Yeah. Okay. Sort of, I guess, the security conscience, or capabilities in which security is the primary tenet, or the primary value proposition in that capability.

[00:07:17] Garrett Held: That's right. And it demonstrates least-privilege access to our auditors and our customers.

[00:07:23] Guy Podjarny: When you hire, you have these different groups. I'm on a bit of a kick of asking people about their favorite hiring traits, or hiring traits to avoid. What profiles do you primarily look for when you hire someone, especially into that more product security, sec eng type of role? Who do you look for?

[00:07:45] Garrett Held: So we look for a few things. We're not looking for somebody who's done it all and seen it all before. We are looking for someone who has, number one, a passion for security, because I view security as sort of a calling. Otherwise, you'll quickly get burnt out in this field if you don't feel a passion for it. And then really good problem-solving abilities. So we don't give brain teasers or anything like that in interviews. We give actual problems, which may have multiple creative solutions, and see if the candidate comes up with one or more solutions. They're not trick problems, but it shows that they can think in different ways, and a lot of security is that. It's that gray area where you need to come up with different solutions to the problems.

[00:08:30] Guy Podjarny: That's awesome. So you mentioned two things, and neither of them was a security background or software engineering skills. How important, if at all, do you find those two?

[00:08:40] Garrett Held: We expect some basic level of security proficiency, depending on the level you come in at, at Carta. But a lot of that can be taught. So if they have the passion for security, they'll generally already know what cross-site scripting is. They might not know all the different ways to exploit it. But a lot of that is not as important as being able to explain how to defend against it, really.

[00:09:03] Guy Podjarny: So because you're looking for these traits, versus necessarily the existing skills, is that helpful when you look to hire for diversity? Does it make the top of the pipeline maybe a bit more diverse than typical?

[00:09:16] Garrett Held: I definitely think that's a necessity for diversity and just to open up the talent pool. There are so few people out there that do have 10 years of experience in just application security, know 10 different programming languages and will fit all the bullet points on some of these job descriptions. What you're really looking for is somebody that can adapt, somebody that can learn fast, and somebody who knows the fundamentals.

[00:09:42] Guy Podjarny: Cares about it. I love a very healthy and forward-looking approach there. So I can see how you can build a great team that way. You just need to build up the technical skills sometimes for some individuals. So I think maybe let's get to the meat of it. What triggered this conversation, or gave me an excuse to pull you in here, was this great article that you wrote around the approach you've taken to owning security risk and supporting security risk in an empowered team, in a distributed fashion. So I'd love to spend much of the next chunk here understanding a bit about what it is that you've built. Tell us a little bit about maybe the problem that you saw, and maybe just tee up the solution that you put together in this process model.

[00:10:26] Garrett Held: Sure. So I've always been sort of frustrated with the high, medium, low vulnerability classification. And I think a lot of software engineering teams that I've worked with have been too, in the past. It's really tough for them to know what to do first, how much they should really care about it, and which one is the real high vulnerability. I've also had some exposure to risk quantification at previous companies, but mostly for determining cyber insurance levels or other executive-level reports. And you're trying to talk in a financial sense to the other executives, but they challenge the model a lot to see, “Hey, is this really worth a million dollars per year loss? Do I really believe your estimates here?” Because when we deal with security, you're dealing with proving the risk of this thing that just never happened. So we wanted to steer clear of these financial numbers for that reason. Plus, we're getting better at estimating. And it's a lot easier to move to a point system than dollars. And so I wanted some model of communicating this to not only executives and the board, but also the engineering teams.

[00:11:35] Guy Podjarny: I like the points approach. It reminds me of the velocity measure when you think about agile and those methodologies. A lot of the approach was to try to disconnect from giving exact estimates and think about it in relative terms. So say this task is seven points. Is that an apt analogy? Is that what you feel you're creating here as well? Just moving to think about this in relative terms as opposed to actual dollars?

[00:11:59] Garrett Held: Yeah, not only relative terms, but we were able to shrink some of the numbers. So instead of 1 million, it became 1,000. Instead of 1,000, it became one point. And those are a lot easier for people to digest when you talk about those things. So instead of something being 30 million versus 1 million, it's 30 versus one, and it's a lot easier for people to see the difference.

[00:12:22] Guy Podjarny: Cool. Okay, great. So, strong start there. Move away from a financial model for starters, but measure it, care about it. High, medium, low is not enough. And two is humanize it, maybe with a point system, for the reasons you said. What comes next? So that's step one.

[00:12:38] Garrett Held: So this program combines a lot of the programs you've talked about on this podcast before, and which likely already exist in a lot of organizations. So you'll start with a risk register. You'll have vendors that may be risky to use. A list of security exceptions. And you'll go through a process of quantifying those things. What are the risks of those? Which quantification method you use really doesn't matter. You find one that works for you. We used a rather simple one based on 90% confidence intervals and the odds of that happening in any given year.

Then we found representatives from the security team and engineering leaders across the organization to calibrate, to say, “Okay, here's some non-security examples of quantifying risk or quantifying the odds of something happening,” just to get their minds into that state of being able to estimate well. Then we gave them the security problems, the risks, and had them answer those questions. We averaged them out, and we came up with our numbers for each of the risks.
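
For listeners who want a concrete feel for that calibration step, here is a minimal sketch of how averaged estimates could be turned into relative point values. It assumes a simple approach: take the midpoint of each estimator's 90% confidence interval on loss, multiply by their annual probability, average across estimators, and scale dollars down to points. The risk names, the numbers, and the 1,000-dollars-per-point scale are illustrative assumptions, not Carta's actual implementation.

```python
from statistics import mean

# Each estimator gives a 90% confidence interval for the loss if the risk
# materializes (low, high, in dollars) and the odds of it happening in a year.
estimates = {
    "stale-service-accounts": [
        {"low": 50_000, "high": 400_000, "annual_probability": 0.10},
        {"low": 80_000, "high": 300_000, "annual_probability": 0.15},
    ],
    "unsupported-third-party-lib": [
        {"low": 20_000, "high": 150_000, "annual_probability": 0.25},
        {"low": 10_000, "high": 200_000, "annual_probability": 0.20},
    ],
}

DOLLARS_PER_POINT = 1_000  # shrink big dollar figures into digestible points


def risk_points(answers: list[dict]) -> int:
    """Average the estimators, then convert expected annual loss to points."""
    expected_losses = [
        ((a["low"] + a["high"]) / 2) * a["annual_probability"] for a in answers
    ]
    return round(mean(expected_losses) / DOLLARS_PER_POINT)


for name, answers in estimates.items():
    print(f"{name}: {risk_points(answers)} points")
```

A real program would likely use a more careful treatment than the interval midpoint, but as Garrett notes, the relative ordering is what matters, not the absolute value.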

[00:13:43] Guy Podjarny: And these numbers, do they need to be consistent across the organization, or do you actually feel this could even work if different teams use different numbers?

[00:13:51] Garrett Held: They're attached to dollar values, so they'll be pretty consistent across the organization. So we just have one set of estimators across the organization.

[00:13:58] Guy Podjarny: Got it. So this exercise was to kind of establish the currency, I guess, or to understand the base prices: risk X is worth this many points?

[00:14:10] Garrett Held: Right. And it's not so much – Again, we don't care if it's actually only worth 20 points. We only care about its relative value compared to the others. So if you rerun this, and inflation hits, and everything goes up 20%, it really shouldn't matter because everything went up like 20%.

[00:14:29] Guy Podjarny: Yeah, yeah, yeah. Makes sense. Okay, cool. So you have this mechanism now. You have this quantification of the different individual risks, and you've engaged them – At this point, you've engaged the development teams or the engineering teams in the creation of this barometer, right? Of this sort of price list.

[00:14:47] Garrett Held: That's right. So we've built it from a variety of sources. The risk register was built from things we know are wrong, things that came from a penetration test, things that came from a traditional maturity assessment, things that have happened to other companies. We've worked with the teams to find these things. And now we have this list of quantified risk. We divide this risk into three categories. The first category is risk that the team or product line can fix themselves. So this may be something that's available to them, a control, or a feature they could add to the product. And if they add this, then that goes away from their risk total. We also divide it among things that only the platform team can fix. So it's not up to the individual team to do. It's something that needs to be fixed across the organization.

And then the third one is things that only the platform team can fix, but that the team can opt out of. So if there's some bad process that a lot of teams use, but a team starts saying, “I don't use that. I don't care if it goes away. If it disappeared tomorrow, that's fine,” they can then remove that from their risk. And if enough teams do that, it makes it a lot easier for the platform team to get rid of that risk. And then we measure those things, and we compare the product lines against each other.

In this program model, that makes up the risk score. And at this point, we're thinking, “Okay, we've measured this. We've measured them against each other. Where do we want them to be?” And we thought, “Okay, we need a limit somewhere.” So we need a goal. And that's when the credit card model became apparent as the one we were going to use. And that may be just because we're in FinTech, and so we started thinking that way, too. But we added this idea of a credit card limit. So they could be over their limit, they could be under their limit. And they have these three things that they can manipulate to change their score.
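
As a rough illustration of the balance-versus-limit idea, the sketch below models a product line's risk statement with the three categories Garrett describes. The team name, item names, point values, and field names are hypothetical; the only point is that items sum to a balance, opted-out items stop counting, and the balance is compared against a limit.

```python
from dataclasses import dataclass, field


@dataclass
class RiskItem:
    name: str
    points: int
    category: str          # "team", "platform", or "platform_opt_out"
    opted_out: bool = False  # only meaningful for "platform_opt_out" items


@dataclass
class ProductLineStatement:
    team: str
    credit_limit: int
    items: list[RiskItem] = field(default_factory=list)

    @property
    def balance(self) -> int:
        # Items the team has opted out of no longer count against them.
        return sum(i.points for i in self.items if not i.opted_out)

    @property
    def over_limit(self) -> bool:
        return self.balance > self.credit_limit


statement = ProductLineStatement(
    team="payments",
    credit_limit=120,
    items=[
        RiskItem("missing rate limiting on admin API", 60, "team"),
        RiskItem("shared legacy build pipeline", 45, "platform"),
        RiskItem("deprecated internal file-transfer flow", 30, "platform_opt_out"),
    ],
)

# The team declares it no longer relies on the deprecated flow.
statement.items[2].opted_out = True
print(statement.balance, "points against a limit of", statement.credit_limit)
```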

[00:16:49] Guy Podjarny: Yeah. So before we dig into that key piece, just curious. When you did the initial modeling and you looked at these three categories, risks that the team could address, risks the platform can address, or, I guess, risky capabilities the platform can eventually eliminate once teams opt out or stop relying on them, how did it end up spreading out roughly? Did it end up being a third, a third, a third, in terms of the counts? Was there an obvious bias in favor of one of these areas?

[00:17:20] Garrett Held: In our pilot program, we did about 15, and we did five of each. So obviously, it was very easy to say a third, a third, a third. But as we've expanded the risk catalog, and have done this a couple of times now, we try to focus a lot on stuff that they can fix themselves. The reason is they have a lot more control over it, so that makes the program a lot more effective.

[00:17:44] Guy Podjarny: Right. Drive to the actual leader. Got it. Cool. Okay, so with that, you teed up the credit card score. So that's interesting. So tell us more about it.

[00:17:53] Garrett Held: Sure. So we have these three numbers presented as credit card scores. We total those up for each of the teams. And it sort of shows which teams are taking on the most risk, whether it be through lack of controls or risky behavior, and compares them against each other. And almost instantly, when we were showing this to teams, they started to get this sense of competition and wanted to know what they could do to reduce their score. So we knew we were on the right path at that point.

Then we came up with this idea, as I mentioned, of the credit limit. That is, we set this goal for them. And we really took this metaphor pretty far, and I'll explain that in a little bit. It gives them an idea of where the executive team and the security team think their risk should be. And within that, they can control what they want to go after. So we set guidelines on what they can do within these risks. A lot of things in the risk catalog are things that they can handle, or acceptable risks that they may be able to take. There's no risk in there like storing data somewhere really insecurely. It's all risks that are within an acceptable range.

And so that allows them to manage it: “Hey, do I want to just do stuff that's for my BU? Do I want to just work on opting out of stuff? Or do I want to work with the platform team and reduce everyone's risk across the organization? And which of these do I want to do first? That's really up to me.”

[00:19:19] Guy Podjarny: It's interesting to think about how you calibrated the risk, and how much it stays flat. So I guess a couple of questions here. One, over time, for instance, a lot of times when we think about known vulnerabilities, a new vulnerability will get disclosed. And practically speaking, if you were to assess it, now that team has more risk. Does that happen here as well? When a new vulnerability comes out, does it charge on your credit card here? Does it increase your balance?

[00:19:45] Garrett Held: So as long as they fix the vulnerability within the SLA, we're not going to count it against them. So things that are standard risk. So if we're hosting on a cloud provider, that has some risk on it, but we're not going to count that against the team, because that's just the normal operating risk. Things that have vulnerabilities in them are going to have the normal operating risk. But if you're using a third-party library, which has a bunch of vulnerabilities or has no support on it, that may be an issue that becomes a piece of risk debt.

[00:20:18] Guy Podjarny: Got it. So if it's within the SLA, you don't count it against them. But if the SLA was, whatever, call it a week or a month, for you to have remediated an issue that came up and you didn't, you do start accumulating debt on it.

[00:20:31] Garrett Held: Or if they file for a security exception because they need another month or something. Yes, that would count against them. Yeah.

[00:20:37] Guy Podjarny: Yeah. Got it. And then the second question is, how do you choose how much of a credit limit to give every team?

[00:20:43] Garrett Held: So we look at a few things. We're going to be looking at, really, the risk tolerance of that particular product line or organization. So it may depend on what data they hold, how much data they hold, the revenue that they produce. And then we're going to look at another concept which you've discussed on this podcast, this gamification of leaderboard points and things like that. So we take proactive security and give points for that, for going and taking extra training or finishing stuff well before SLA.

[00:21:15] Guy Podjarny: And do these different changes, do they add to the limit or to the balance? I'm trying to understand, does it reduce your actual risk, or does it kind of allow you, afford you, to accumulate more risk because you seem to know what you're doing?

[00:21:28] Garrett Held: So these leaderboard points are going to act like your credit score. So it's like a FICO score. Again, going back to our metaphor, abusing our metaphor, it'll guide the executive risk committee on how to set these things. So if you have been really proactive, you take on risk but fix it really fast, or you take security exceptions and fix them before the exceptions expire, and generally you're doing stuff within SLA, the committee may say, “You know what? This team can take the risk.” Just like a lender may say, “I will lend you money, because you've shown yourself to be worthy of taking this risk on for us.” And so we use that same sort of model to get the teams to act responsibly and be rewarded for that, building trust between us.
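
Here is a hedged sketch of how leaderboard points could feed into limit setting, in the spirit of what Garrett describes. The thresholds and adjustment factors are invented for illustration; in the program as described, this is a judgment call made by the executive risk committee rather than a fixed formula.

```python
def adjusted_limit(base_limit: int, leaderboard_points: int) -> int:
    """Raise or lower a team's risk "credit limit" based on proactive behavior.

    leaderboard_points accrue from things like fixing issues well before SLA
    or taking extra training; the thresholds below are purely illustrative.
    """
    if leaderboard_points >= 100:   # consistently proactive team
        return int(base_limit * 1.2)
    if leaderboard_points < 20:     # little demonstrated proactivity
        return int(base_limit * 0.8)
    return base_limit


print(adjusted_limit(base_limit=120, leaderboard_points=140))  # 144
print(adjusted_limit(base_limit=120, leaderboard_points=10))   # 96
```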

[00:22:15] Guy Podjarny: Yeah, interesting. So that's like a measure of trust, you're basically saying, which I guess is very much a credit score, “How likely are you to repay this debt?”

[00:22:24] Garrett Held: That's right. And how likely are you to know what you're getting into as well? So by the time [inaudible 00:22:28] your history.

[00:22:30] Guy Podjarny: So this system sounds great, very manageable, and I love the gamification and the leaderboard. How did you kick it off? I mean, at the beginning, I imagine different teams had different levels of debt within them. You explained how you kicked off the original scoring system. But what was the experience of rolling this out at the beginning, in terms of the differing kind of debt realities?

[00:22:56] Garrett Held: Sure. So we put together these reports and talked them through with the teams. And we got a lot of feedback. One piece of feedback was, “I want to fix this stuff right now. Do I have to wait for the next round before I get my new points?” And so we're like, “Wow! We have to come up with a way of automatically updating this.” So we started writing some custom code around the portal, which I'll talk about in a second. And the other one was, “Alright, this has a really high score. But which one should I do first, do you think? I appreciate you leaving it up to me, but do you have recommendations?” And so we added this ROI factor as well, where we started working with the teams to estimate how long it would take to fix, how much money it would take to fix, and which ones we thought had the best ROI for them.

And again, it's not, “You have to do this first and that second.” It's, “Here's our recommendation for the best ROI for you and your team.” And just giving that sort of guidance to the various teams has helped a lot.
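
To make the ROI recommendation concrete, here is a small illustrative sketch that ranks open risk items by points retired per week of estimated effort. The item names and numbers are made up; only the ranking idea comes from the conversation.

```python
open_items = [
    {"name": "add static analysis to CI", "points": 40, "effort_weeks": 2},
    {"name": "rotate long-lived API keys", "points": 25, "effort_weeks": 0.5},
    {"name": "migrate off legacy auth flow", "points": 90, "effort_weeks": 8},
]

# Recommend the items that retire the most risk per unit of effort first.
ranked = sorted(open_items, key=lambda i: i["points"] / i["effort_weeks"], reverse=True)
for item in ranked:
    roi = item["points"] / item["effort_weeks"]
    print(f'{item["name"]}: {roi:.1f} points per week of effort')
```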

[00:23:54] Guy Podjarny: Very cool. So let's talk indeed about tech a little bit. So we've described a great model here. You gave some great guidance on how to get going. How did you build this up?

[00:24:04] Garrett Held: Well, initially, we built this with a spreadsheet, and it quickly became really complicated. But it was a good way to build a data model, and work with teams, and modify it during the pilot. But we're building this into sort of a portal now where they can go in and see their current scores, how they stack up against the other teams, and submit a report and say, “Hey, I fixed this thing. Will you review it?” Our team gets a chance to approve it, and then their score gets updated. And so it's a lot more real time around the risk. And we're working on something that will publish these results to Slack every week and keep people in that sort of competitive mode across the board. The leaderboard also exists in this portal, so we can tie the two together.

[00:24:48] Guy Podjarny: So, I mean, it's great that you don't actually have to engineer it right away to get going. Although, yeah, clearly, if you want the social dynamics, you need a certain amount of speed and instant gratification to be satisfied as well. What did you feel drove the most action? The whole model here drove it, but as you were building these tech capabilities, were there some key points where you said, “Well, once we built this, we really saw them kick into higher gear”?

[00:25:15] Garrett Held: I think once we were giving them the reports, and we were able to allow them to make changes and to modify those reports quickly so they didn't have to wait like half a year before they got a new risk score and they could see that change happen, that's when we saw things happen a lot more quickly.

[00:25:33] Guy Podjarny: So, yeah. I guess at the end of the day, the tech stack does win when you build it to be live. So you've built a risk register, and you have a bunch of those different elements. You mentioned that over time, as you iterated, you emphasized the risks that the teams themselves can control. When you think about the list right now, for listeners thinking about where to start, do you have some favorites there? Would you say, if you were to do this again, you would start with these X things? I know risk differs from company to company. But there are definitely a lot of common patterns.

[00:26:05] Garrett Held: Sure. So you probably have vulnerability scans. You probably have penetration tests. And there are benchmarks out there. I think those are really good places to start. One of the things we also want to use this program for is driving the things we want in the near future. So start thinking about your two-year roadmap for your team and what you want the teams to do, and add those things. So if it's, “Hey, we don't have a static analysis tool yet,” then once you have one that you want the teams to implement, maybe for the first six months that becomes leaderboard points. They get points for adding it into their stack. And after six months, it becomes debt. So they get a six-month chance to add it as points. After that, if they haven't added it, it becomes debt. And it's a good migration path for adding controls into the organization.

[00:26:53] Guy Podjarny: Interesting. So you actually have two scores here. You have the credit card balance, and you have the credit score. The credit score is really the proactive score. How trustworthy are you? So you want to win points there, but it's entirely in your control; you can invest more ahead of time. The credit limit is more the other way around, right? How close are you to bankruptcy? Are you at the point at which the bank will come in and commandeer control? Is that the right way to think about it?

[00:27:18] Garrett Held: That's right. The risk is the mandatory stuff. So you can use leaderboard points while it's still optional, or while you're introducing your program. And once things become mandatory, if they don't have them, then it's debt.

[00:27:29] Guy Podjarny: Yeah, yeah. So you mentioned before that some of the security risks are not included in the score, because they're table stakes. You can't just publish PII data on some open buckets and get away with it. What was the guideline? Is it compliance? What were the lines that decide whether a risk is in or out?

[00:27:50] Garrett Held: It was really, are we punishing them for creating new products or being successful and storing more user records? We didn't want to do that. And so we just sort of go over the risk and make sure that we're incentivizing the teams correctly. And that is to build good controls, build a mature program, and not to keep the product small, or try to game the system itself.

[00:28:13] Guy Podjarny: Yeah. Very cool. So what would you say, how do you measure success here? It sounds like you've already given all sorts of examples of how this drove action. Do you use the same score? What impact have you seen? And how do you try to kind of measure it over time?

[00:28:29] Garrett Held: I think as you get better at estimating your numbers, you're going to get a lot more consistent, however often you do this. Doing it every half year is probably a good place to start, and then move to quarterly. And you'll be able to see if your debt's going down. Can you start moving people's credit limits down? Not to completely eliminate risk in the organization, that should never be the goal, but to have that continuous movement down, so that you're moving towards that target.

[00:28:59] Guy Podjarny: Yeah. Got it. And do you use the same? Do you feel like you can measure your actual risk and your actual program success with the same barometer within the program? Or do you feel like you need to have some other, a little bit more centralized, risk measure that you use in parallel to counter it?

[00:29:19] Garrett Held: Internally, we don't use the points to measure our success. We measure the success of the program more by how well it encouraged teams to adopt security controls and implement different programs, because the teams have a choice of what they do with that debt. So if they're ignoring one of those pieces of debt constantly across the board, we know something's wrong with the adoption. Whereas if a bunch of teams are saying, “Oh, I see this. It's easy to fix. I'll go after this,” we know we've built a good program at that point.

[00:29:53] Guy Podjarny: Yeah. Very cool. I've got to say that the more you describe it, the more it does feel like the velocity metrics and things like that for engineering. The more you do it, the better you become at estimating the number of points a thing is worth, and the more that becomes an actual predictor of how well or poorly you're doing. Every time you build something, there are iterations where you say, “Well, if I was to do this again, I would do it differently,” or maybe cases in which you said, “Oh, we got that wrong.” Can you think of one or two things that you tried when you embarked on this that turned out to be different, or things you would do differently if you were to do this again?

[00:30:29] Garrett Held: I think there were several things we learned. One of them was, like I said, that ROI and analysis section, where they wanted us to really give our thoughts on the process. It was actually surprising that the teams wanted all that information, and we were happy to give it. But designing that in from the beginning would have been nice.

The other one is how we designed the questions to the teams. So we sat down with some of the pilot teams during the first phase and asked some questions that were vague at best. And we got a lot of different answers. And, “What about this? What about that?” So we narrowed it down to a “Yes, no, not applicable, or I don't know” answer format. So, “Do you have this in your product? Yes or no?” We sort of had to move to that format, because otherwise it's really difficult to gather some of those answers.

[00:31:21] Guy Podjarny: Yeah, the codification of it is just too hard otherwise.

[00:31:25] Garrett Held: Yeah, yeah.

Guy Podjarny: What's next? It sounds like quite an evolution. It's a great program you've already defined. Give us a glimpse as to what you haven't built yet here, or what you're thinking about.

[00:31:37] Garrett Held: We're going to continue work on the portal and some of the automation and the reports. And the metaphor can be extended to things like minimum payments that they need to make on their debt. So even if they're under their debt limit, just like when you're under your credit card limit, you still need to make minimum monthly payments towards that debt. And that is to encourage them not to be complacent if they're under their limit, and to always be moving the ball forward.

[00:32:01] Guy Podjarny: Yeah. Yeah, that sounds useful. I guess, would there be interest as well? Though with interest, you're starting to get people digging themselves into a hole, and that might be a bit problematic.

[00:32:08] Garrett Held: Like I said, you can take this metaphor and really stretch it. And I think you could do that as well. Yes.

[00:32:14] Guy Podjarny: Yeah. Very cool. This is a great program here, Garrett. I feel like it's really useful to just translate, and I guess that's what you've done, translate risk, which is such a murky topic, into a concept that people can relate to and act on. If someone was to get going now, what would you say? Someone's listening right now, some CISO, some head of a part of security. If they want to implement this right now, what would you say is the minimum viable product for them? What would you say are the key steps they have to take to get started?

[00:32:45] Garrett Held: I would say start with that pilot risk register, get some buy-in from people in the organization who will be your estimators, and come up with that first set of quantifications. Don't worry about the credit card model right now. Get that list of prioritized risks and see how it compares to your priorities. Because the other thing this can be used for is a gut check against your priorities and how important they are to your organization. And then categorize those, and you can start thinking about questions to the teams after that. And go from there.

[00:33:21] Guy Podjarny: Yeah. Very cool. Super useful. Thanks, Garrett, for walking us through it. And I'm sure we'll check in. The way I found out about this is from a great blog post that you wrote about it, which I think was very well written, so we'll publish that link in the episode notes as well. So these were great insights. Before I can let you go, you get the dubious honor of pioneering a new closing question. I like to ask somewhat open-ended questions at the end of every podcast, just to get a bit of a perspective from the smart guests that come on to the show.

So I don't know, apologies, or congrats, for being the first one to answer this new question and being the guinea pig for it. But here's my question to you: if you had unlimited budget and resources, you could start a company, you could build a team, you could buy something incredible, and take on a problem that's somehow related to your job or to security, what problem would you take on to solve? And maybe even, how would you go about starting to solve it?

[00:34:27] Garrett Held: So the vulnerability I loved when I was red teaming, and now hate being on the product security side of things, is business logic vulnerabilities. Just adding one to an identifier and getting some other person's records, or changing something in a URL and being able to access things that I shouldn't be able to access, mostly because it's easy to write database queries that don't take all the information into context.

There have been some companies out there that develop custom things that do these sorts of checks, and it's usually custom changes to ORMs. So if I were to try to fix one thing in security, it would probably be to have some sort of product that works for all different companies and really allows you to build that authentication and authorization easily into your ORM and your queries, to make sure that that logic is enforced across the board.

[00:35:20] Guy Podjarny: Yeah. Oh, that's awesome. That's definitely a great one to solve. It's also one of the most complicated, because it's about business logic and modeling. That is really quite hard. Yeah, that's a great one. Do you have any quick thoughts on how you would go about tackling it?

[00:35:34] Garrett Held: If I did, I probably wouldn't share it. I'd be selling it.

[00:35:38] Guy Podjarny: You'd go off and found a company to figure that out. That's good. I think you've teed up an idea there, and we'll see if people rise to the occasion. Garrett, thanks a lot for coming onto the show and for sharing all these great perspectives on the credit card risk model that you've pioneered here.

[00:35:51] Garrett Held: Thank you so much for having me, Guy.

[00:35:53] Guy Podjarny: And thanks, everybody, for tuning in. And I hope you join us for the next one.

[OUTRO]

[00:36:01] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show, or get involved in the community, find us on Twitter at @devseccon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]
