
Season 7, Episode 117

Shifting Security Left With Rupa Parameswaran

Guests:
Rupa Parameswaran

In this episode, we are digging into Shift Left, what it really means, and how to accomplish it successfully. Sharing her insight is Rupa Parameswaran, head of security at Amplitude, and a security and privacy expert with 20 years of knowledge behind her.
She works closely with business leaders to create relevant secure-by-design and secure-by-default controls that help businesses run efficiently while staying secure. She shared with us how she has successfully transformed the security mindset of the engineering teams at Amplitude.

Learn why Rupa wants developers and business owners to grow their understanding of security, and which metrics she uses to assess security success. Tune in to learn all about the evolving list of capabilities essential to security teams, and to hear Rupa's thoughts on the future of security and standardisation.


EPISODE 117

“Rupa Parameswaran: I would rather have an application engineer with a good bend on security than a security engineer who knows nothing about application infrastructure development. The issue is being fixed, so the security issue is being addressed. It's difficult to find these individuals. Hiring a good security person can take anywhere from six months to a year, but it's important to have those traits to be successful.”

[INTRODUCTION]

[00:00:28] ANNOUNCER: Hi. You're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open-source dependencies, containers and infrastructure as code. To learn more, visit snyk.io/tsd. That's S-N-Y-K.I-O/T-S-D.

[INTERVIEW]

[00:01:17] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Today, we're going to dig into shift left, and what it really means and maybe how to accomplish it successfully. To do that, we have Rupa Parameswaran, who is the head of security at Amplitude. Rupa, thanks for joining us here as a guest to share your views.

[00:01:37] Rupa Parameswaran: Hey, Guy. Thank you very much for having me here. Looking forward to our conversation today.

[00:01:41] Guy Podjarny: First, to tell you a little bit more about Rupa. Rupa is a security and privacy expert, has more than 20 years of research and industry experience, and has been building and driving trust in security programmes at start-ups and big billion-dollar brands, so we'll have a lot to learn here. She works closely with business leaders to create relevant secure-by-design and secure-by-default controls that help businesses run efficiently, but also be secure. Right now, she is, as I said, head of security at Amplitude, where she is responsible for engineering security, incident response programmes and a lot more. We'll hear about those learnings in a moment. Rupa is focused on shifting security left, giving engineering and DevOps teams relevant training, timely alerts and just the right recommendations for remediating security flaws and misconfigurations as they develop. She has really successfully transformed the security mindsets in the engineering teams at Amplitude and how they work. We'll dig a lot into the secret sauce here, the learnings and how that has actually transpired.

Just to note that in a while, we're going to focus maybe on some Amplitude examples and activities. A lot of the learnings come from Rupa's previous experience as CISO at Demandbase, and in general from working as a security and privacy architect at Pinterest, Microsoft, EMC and more. Again, Rupa, thanks – this is tons and tons of experience for us to tap into here. Let's dig in maybe with some terminology. Shift left – you have to sort of love and hate buzzwords. When you hear shift left, at least in the context of security, what does that mean to you? How do you define it?

[00:03:20] Rupa Parameswaran: Thanks for the introductions. Shift left to me means bringing security closer and closer to the business owners, making them more aware of what needs to be done, why it needs to be done, and how it can be done. It's almost like bringing a horse to the water and showing them how to drink. You can put the water into the mouth and make sure that they're thirsty, they know their appetite for thirst, they know when the water is not good enough for them, but you bring them close enough so they can make the right calls. Because to me, the business owner is the best suited to make a decision on whether a risk is acceptable or not. What we as security leaders need to do is to teach them or let them know what the risk is, what the issue is, how it can manifest itself, and give the business owner the wherewithal to address the risk if they agree with our risk assessment. If they don't, then we learn, and we help them put in controls so that if the risk does show up, we know it's a real risk.

It's where you say, "Okay, it's not a risk right now, but if A, B and C come together, then it becomes a risk." That's what I mean by shifting security left. That ties in very nicely with my perception of security, which is that it is a continuous process. Security is never a, "I'm done with security, let's move on." Security is a journey. It's a continuous process. You keep looking at things, new threats will keep coming in. You keep assessing them. You keep evaluating where the risks are and you keep implementing controls – vulnerability identification, detection, protection – but incidents will always happen. You remediate them, you identify what the indicators of compromise are. Then you push it back: now it's a normal vulnerability at this level, so it goes back into the pipeline. As you can see, what I'm defining is really a circle, if you will, right? You can start at the very right, you can start at the very left, you go through it. Every time, you may have to look back, and that's what shifting security left is. The more you can move security left, get people trained with an understanding of what these risks are, the earlier they can accommodate it into their learning process and their design process, and reduce the number of hurdles or obstacles you have in your path to delivery.

[00:05:52] Guy Podjarny: There's a bunch of things I like about that definition. One of them is "business owner" – you didn't say developers, you said the business owners; it's the people who make the decision. I really like that. I relate to it. It really boils down to bringing the security knowledge closer to the decision maker, to the person, in practice. But I guess, to question it a bit – we talk about understanding the risk. How much is it about understanding the risk, and how does that compare to the ease of becoming aware of the risk, or the ease of doing something about it?

[00:06:28] Rupa Parameswaran: Beautiful. Love that question. Thanks for catching on to the fact that I call it out as a business owner, because there are different ways to showcase the risk to the owners, at different levels, right? If I was speaking to an engineering manager, or a leader, or a director, or even a CTO for instance, I would call it out as, "Hey! This is your crown jewel." For most small businesses, it's typically the production environment, the production application. The risk vectors, I would call out at a generic level, right? How someone can get spear phished, bringing malware onto a device. That device belongs to an engineer, that engineer is hitting production, the malware gets into your EC2 instances, and voilà, you have ransomware there.

An engineer will not be able to take that and translate it into, "Hey! There is an issue. I need to make sure my application, my containers and my container images are scanned for malware." With them, I put together a demo – a malicious web application – and I'll walk through it with an engineering person to showcase it and make sure that the demo I put together triggers an alert. Triggers an alert on your container scanning, triggers an alert when the individual is in his IDE writing his code. Without any other context, the engineers are going to say, "Hey! I have all these other security groups, I have all of these other container image vulnerabilities. What's this new one? I'm just going to suppress it and move on." The important part is to have that one meeting or that one demo with the engineering team, where you're showcasing, "Hey! Before you switch that off, look at this demonstration that I have. See if it's applicable. Just read that alert a little more. Is it nuanced? Is it something new or is it something existing?" If it's something existing, call it out and then suppress it. Otherwise, see what could happen. But you need to build that trust, you need to get to the point where the engineer now thinks, "Hey! Could this be a problem? Should I look at it?"

Then, in the initial stages, chances are they could get it wrong, so have ways in which they can reach out to you, talk to you about it, and let you help them differentiate between, "Hey! We can suppress this," and, "Hey! We cannot." Which will require a little bit of monitoring on the security side, where we'll have to see whether they're doing the right thing – so maybe an alert when things are suppressed. But once that becomes a well-oiled machine, you can let them go, and they will thank you for that. They will thank you for the learning because you're not hindering their processes. They know what to do. It's only when there's something new that they don't understand, or a new initiative that they're working on, that they will bring you in, because they know you're going to help them shift security left, so they don't have to either avoid things and be stuck with having to pay a bounty, or be stuck with you having to review things after the 10-day SLA that you have because you have a whole pile of issues to take care of.
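
To make that "alert when things are suppressed" monitoring concrete, here is a minimal sketch in Python of a suppression workflow with an audit trail. It is an illustration only; every name in it is hypothetical rather than a reference to any specific product's API.

    # Minimal sketch: engineers may suppress a finding themselves, but every
    # suppression is logged, and high-severity suppressions ping security.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Finding:
        id: str
        severity: str   # "critical", "high", "medium" or "low"
        rule: str       # the scanner rule that fired
        suppressed: bool = False

    @dataclass
    class SuppressionLog:
        events: list = field(default_factory=list)

        def record(self, finding: Finding, engineer: str, reason: str) -> None:
            self.events.append({
                "finding_id": finding.id,
                "rule": finding.rule,
                "severity": finding.severity,
                "engineer": engineer,
                "reason": reason,
                "at": datetime.now(timezone.utc).isoformat(),
            })

    def notify_security_team(finding: Finding, engineer: str, reason: str) -> None:
        # Stand-in for a real Slack or ticketing hook.
        print(f"[review] {engineer} suppressed {finding.severity} finding "
              f"{finding.id} ({finding.rule}): {reason}")

    def suppress(finding: Finding, engineer: str, reason: str, log: SuppressionLog) -> None:
        """Self-serve suppression that always leaves an audit trail."""
        finding.suppressed = True
        log.record(finding, engineer, reason)
        if finding.severity in ("critical", "high"):
            notify_security_team(finding, engineer, reason)

    log = SuppressionLog()
    f = Finding(id="F-42", severity="high", rule="container-image-cve")
    suppress(f, engineer="dev@example.com", reason="not reachable in prod", log=log)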

[00:09:33] Guy Podjarny: Yeah. You're very much highlighting the importance of contextualising – almost like risk context. I sometimes talk about application context, which oftentimes security people forget to offer developers; they just think about risk. But when you think about the developer, showing some sort of end-to-end understanding of the problem – I guess, just to clarify, do you aim to have this type of end-to-end simulation for every alert you're providing developers? Or is the intent more to open their eyes to potential risks, and show them that on a sample set of issues, so they just generally are not quite as readily dismissive of the alerts that come in?

[00:10:14] Rupa Parameswaran: Yes. That's an interesting problem to solve, I would say. It's important to feed security in a little at a time, and then make sure that the team has an appetite – understand what kind of appetite the team has, right? Different teams will have different appetites for security. Someone who's a senior engineer who just thinks about cranking things out, who's been taking risks until today, who has like 10 years of experience, for instance, already has a mindset in terms of what's an acceptable risk, what's not, what my business is like, what my customers are like. Their risk acceptance, or the amount that they can be hindered or asked to take a look at new risks or new alerts that come up, is very small. So you need to pick your battles as a security person and do that initial level of triaging – which is another topic that we can talk about, alert fatigue – where the security team needs to make sure, at the time of investing in building out that programme and shifting security left: what's the easiest way to get into the good books of the engineers? Only allow the alerts to reach the engineers if they pass that threshold – if it's critical, if it's high.

For that, define what critical and high mean, define why they're critical and high. Therefore, the engineers know what to look for, they understand this completely, and they're able to take care of them. Once you realise these little things are well understood, then you open up the mouth. What I mean is, increase the diameter of the hole – add more items to that bucket a little at a time. It has to be slightly linear at first, and then you can expand it exponentially. Once you realise they're at the same level of security, you can then open up the pipe and let the whole thing flow in on the IDE, where people are getting warnings. Once it gets into the production pipeline, you tighten the controls a little bit in terms of what you can suppress and what you cannot. That's where the security team needs to work: understand the appetite of the engineering teams, understand how many people there are, how much they can take in, the maturity of the teams, and then work on that sweet spot in terms of how much is acceptable at this point in time and what the roadmap should be. It's important to have a roadmap for introducing security left as well.
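
As a rough illustration of that gradual widening, here is a minimal Python sketch of a severity gate that only forwards alerts above the current rollout threshold. The stages and thresholds are assumptions for illustration, not a prescribed rollout plan.

    # Minimal sketch: only alerts at or above the current stage's severity
    # floor reach engineers; the floor drops as the team's maturity grows.
    SEVERITY_ORDER = {"critical": 4, "high": 3, "medium": 2, "low": 1}

    ROLLOUT_THRESHOLDS = {
        "initial": "critical",  # criticals only, hand-triaged by security
        "ramping": "high",      # criticals and highs
        "mature": "low",        # everything, including IDE-level warnings
    }

    def alerts_for_engineers(alerts, stage):
        """Filter raw scanner alerts down to what the team can absorb now."""
        floor = SEVERITY_ORDER[ROLLOUT_THRESHOLDS[stage]]
        return [a for a in alerts if SEVERITY_ORDER[a["severity"]] >= floor]

    alerts = [
        {"id": "A-1", "severity": "critical"},
        {"id": "A-2", "severity": "medium"},
        {"id": "A-3", "severity": "high"},
    ]
    print(alerts_for_engineers(alerts, "initial"))  # only A-1
    print(alerts_for_engineers(alerts, "ramping"))  # A-1 and A-3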

[00:12:45] Guy Podjarny: Got it. I think that makes a lot of sense. I think implied a little bit in what you were saying is that the severity needs to be fairly consistent. So, if I'm echoing it correctly, you would have taken the engineering team and demonstrated to them a couple of bad scenarios, end-to-end, about critical vulnerabilities. Then when you tell them about critical vulnerabilities – which hopefully is a small enough number – you're reasonably consistent, so that when they see a critical vulnerability, they address it with that level of severity. Then the second point you make is, you can't drown them right away. You have to indoctrinate them a bit into the process as you build those out. You've done this a bunch of times now. Are there magic numbers in your mind of how many is too many that you use as a guideline?

[00:13:30] Rupa Parameswaran: The first thing to do is that first level of triage. At the onset of the programme, the security team needs to do that initial triage, customise the language and – there's no sweet number, right? Hopefully a company is at a stage where the number of criticals that come in would be at most one a week. I mean, that's huge too. But at most one a week in terms of infrastructure, once it's triaged. Once you set up any kind of scan tool or vulnerability management tool, it's going to puke out a whole bunch of issues. Are they all really critical? Are they all critical in your environment? Maybe not. Maybe they are. But that's where it's important, for all of these different tools – whether it's your dynamic scanner, your static scanner, your IAST, or your cloud infrastructure scanner – it would be nice to have the information ingested from all of these tools put in context. For the security team to come up with: these three are critical because of all of these dependencies; we define critical as X, Y and Z. In which case, if there are 10 in the beginning, so be it. The teams will get the 10. The security team will work in concert with the engineers, maybe even drive those, lead those and help fix them. Get that buy-in, and then make sure that, just for the criticals, the SLAs are met, and then move into highs.

[00:15:00] Guy Podjarny: Yeah. I really like it, because I feel there's often a challenge here. On one hand, the engineers are doing a lot of things and you want the security team to simplify security. One way to do that is by taking on the labour of triaging vulnerabilities for them. On the other hand, there aren't enough security people out there, so that's not a viable solution. You can never really keep up, and so you want engineers to do it. I like the model that you're proposing here, which is: at the beginning, take the burden on, do some things manually – in start-up lingo, start by doing things that don't scale. That's okay when you're getting started; you make the model work, you build those out. But you need to always do that with an eye to, and I guess an expectation setting in the engineering team, that you're spoon feeding them right now, but they need to start preparing to do some of that finessing themselves. If I'm correct about that, how do you get ready for that phase two of the programme?

[00:16:02] Rupa Parameswaran: Right. There's the part where you're still spoon feeding them at first. That's where a good security engineer will learn to automate himself or herself out of that, and will start building the tooling for transferring what he or she is thinking – why am I considering this to be a high or critical risk – and translating that into code. If I'm able to build a tool that says, "Hey! I'm taking input from all of these other tools. If the thresholds pass – if an attack could be mounted, or my impact, likelihood of impact and my exposure are larger because of these three things – can I codify it? Or is there a tool out there, open source or proprietary, that's doing that? If there is, can I bring that in and improve that tool?" So that the tool is now the one that's, I wouldn't say spoon feeding, but at least helping set a little additional context for the engineer, and the engineer is able to take it from there and run with it. Make myself dispensable and then look for another challenge to work on.
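
Here is one hypothetical way that kind of triage logic could be codified, as a small Python sketch. The scoring formula, weights and thresholds are illustrative assumptions, not a standard and not the specific tooling described here.

    # Minimal sketch: combine likelihood, impact and exposure signals from
    # several scanners into one contextual severity for this environment.
    def contextual_risk(likelihood, impact, exposure):
        """All inputs in [0, 1]; higher means riskier in this environment."""
        return likelihood * impact * (0.5 + 0.5 * exposure)

    def triage(finding):
        score = contextual_risk(
            likelihood=finding["likelihood"],  # e.g. is an exploit available?
            impact=finding["impact"],          # e.g. does it touch crown jewels?
            exposure=finding["exposure"],      # e.g. internet-facing service?
        )
        if score >= 0.6:
            return "critical"
        if score >= 0.3:
            return "high"
        return "backlog"

    # A raw scanner "critical" can triage down when the service is internal.
    print(triage({"likelihood": 0.9, "impact": 0.9, "exposure": 0.1}))  # high
    print(triage({"likelihood": 0.9, "impact": 0.9, "exposure": 1.0}))  # critical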

[00:17:11] Guy Podjarny: Got it. Okay.

[00:17:13] Rupa Parameswaran: That's what makes a good engineer, right? The retention for the security team comes into play as well. Give them something to do that they can showcase, that they can be great at, then go present at Ibiza, or do a Black Hat, or what have you. There's a learning curve, and that's where the retention and building of the security team comes in.

[00:17:32] Guy Podjarny: Yeah, got it. Okay. That's a great clarification, because you're saying the application engineer – that is, one of the application developers – you actually don't expect them to encounter the fire hose at any stage. You expect the security team to initially manually triage, and then increasingly automate that triage. But even when the team matures, is there a point at which you need the application development team to be able to cope with larger volumes of issues?

[00:18:03] Rupa Parameswaran: The thinking there is that, as the maturity level of the application engineers or the infrastructure engineers increases, the solutions that they build will be addressing the legacy critical or high issues, which plays in very nicely with resolving the fire hose of issues that hasn't been sent to them already. That fire hose, or the pool, will automatically come down. So now, unless there's a new Log4J, there wouldn't be another fire hose that comes down their pipe.

[00:18:41] Guy Podjarny: Got it. Okay. This is an actual improvement. You're correlating the actual improvement of security with the readiness to take on security from the team's perspective. In your mind, what are the timelines? Say you're coming into an organisation that's, let's say, not in a disastrous state, but the development team hasn't embraced security, and you're looking to go through this process. How quickly do you think it's reasonable to get to a state in which there isn't heavy manual triage by the security team?

[00:19:20] Rupa Parameswaran: It depends, is what I would say. It's easy to get a good jumpstart if you're able to piggyback, or strike while the iron is hot. Things like Log4J, for instance, or an issue like that, where the security team comes up as a leader, working with everyone, creating awareness of why it's an issue and what the issue is, and going and resolving it. There's an appetite there, and the understanding that security is something we need to think about comes in as important. Having said that, some teams will be more amenable, even within your engineering team, depending on how each business organises its engineering teams – if there's 10, if there's 15, if there's one. There will be some that are more amenable to taking on security upfront, some that are not. It's important to build your allies. This is not just technical: build your allies, show key wins, create those champions in the teams that are your allies, and help them spread the security awareness and the wins along. I would say it could take up to a year to get that security buy-in across the company and get everyone to understand that it is important. Again, it depends on the maturity of the development lifecycle as well.

If someone's working on a new feature in a brand-new product, that's an easy way for security to come in and try to get security all the way to the left. Yes, it's always challenging with timelines and deliverables, but that's an easy way to try to build a relationship with everyone from the product manager, to the engineering manager, to whoever else is a stakeholder in driving that development culture, and get the trust. Rather than with something that's completely baked – a feature that's just an underlying backdrop, or a middleware feature, or a component that 10 teams are using, where one small change to that framework or to that platform is going to cause issues which nobody knows about, because it's legacy and no one wants to touch it. Those are areas where security will have to work a little closer with the engineers or the infrastructure leads to understand the timeline to fix those things.

[00:21:35] Guy Podjarny: Yeah. I like how you suggest changing it, because it maps a bit more to how dev tools are adopted versus how security controls are rolled out. In security, there tends to be a bias in favour of rolling out breadth-first, just getting everybody to a threshold before you go off and invest in having certain teams get to a higher level of competence. I think you're suggesting something that is more akin to how dev tools get adopted, which is: you have certain teams that lead the charge. They succeed, the organisation sees they succeed, and so other teams lean into that as well. It's easier for them to accept it. Is that right?

[00:22:12] Rupa Parameswaran: Yes. That's where these champions come in. We want engineers and other business owners, business teams, the doers, to become champions, to be able to speak the security language, and to bring new issues and new concerns or considerations to the security team, because the security team cannot scale as much as the engineering team.

[00:22:34] Guy Podjarny: Actually, that's a great segue. I love the model. Maybe we can get a bit more specific to Amplitude and some of the things you're doing there. Indeed, one of the things I know you're doing is this notion of security champions. Tell us a bit about that framework, that programme – what it is.

[00:22:50] Rupa Parameswaran: Right. That's a topic that's a pet project for me wherever I go, because it's very difficult to hire security talent, and the security team is often strapped for resources. We're often the ones that are the bottleneck, which is why the engineering teams – or whichever other business unit – want to avoid having to go to security, because it's not scalable. How do we solve the problem? It's about building that relationship. As we shift security left, it's important to win over individuals, and understand and identify which individuals have that appetite to learn a little bit more security, or are able to appreciate security. That can be done in multiple ways. It could be done by security initially, right? Security lunch-and-learn sessions, brown bag sessions, and having security office hours to say, "Hey! You have a question, come to us. If there isn't anyone, we're going to talk about pixie." Then speak to them a little bit more in terms of, "Hey! Would you be interested in doing this once a month, or every other week, where you're learning something new?" Just like, "Come in, let's do a tournament, see whether you're good at it," like a competitive board. Start off a little bit, making it fun and an experience. At the same time, start getting buy-in from the leaders or the management teams in terms of: here are your engineers who are interested, who would be able to serve as security leaders within your team, so you don't have to reach out to security. They know a thing or two, and they'll have another career path if they choose to get into security. But at the same time, they're learning.

By sharing that or mentoring across the entire engineering team, we're winning the security battle, but at the same time making the engineers more aware. A full stack engineer should have security on his or her resume too. I mean, that's the way I would look at it, but I think that's where the world is going today. Every engineer should know a thing about security. That's what the security team can provide to an engineer. At the same time, the ask is that the engineer is a true representative of the security team. He or she is able to participate in those meetings or in those forums, is able to bring in new issues, is signing up for reviewing all of the designs in the team for security issues, and is having those discussions: "Hey! We're building out a new authentication layer. What do we need to do? We're building a new OAuth service. What do we need to do?" They bring that up to security, and then we talk about the important things.

That's how you scale. You have at least one security champion in each engineering team, and then going beyond that, in each product management team, and so on, and so forth. So far, we've talked about engineers. It's important for the product leaders, the visionaries of the product, and people who are in the growth sector as well to keep thinking about security – what's out there, what's new in the market, what's important for our customers, and so on, and so forth – and tie that in. That's how you expand and roll out a really successful security champions programme.

[00:26:08] Guy Podjarny: The concept makes perfect sense to me, and definitely security champions is one of the better and fairly well-adopted approaches to scaling security. Do you run it as a programme? You seem to have clear definitions about what it is you expect of the individuals who do it. I guess I have a couple of questions. One is, for the individuals that sign up, do they need to get manager approval? Do they have an actual percentage of their time allocated to it? Then, related to that: is there anything official – you have those meetings you mentioned and all of that – do the security champions do things together, or are they purely extensions of the security team?

[00:26:46] Rupa Parameswaran: It's, again, a crawl, walk, run state. In a fully functional security champions programme, yes, that would be the case. That's like the North Star. Manager buy-in is required – manager buy-in is a no-brainer. If you bring in people and say, "Hey! Come sit in a meeting. This is what you get out of it," they will do it for maybe a month, maybe two. Then, end of quarter, deliverables come into play, and no one's thinking about anything else. The manager is not aware that the individual has committed to reviewing things, so everything will break down within two months of rolling out the programme. It's good to get exposure, to get visibility, to get a quick understanding of who's interested in security at all, in terms of the programme rollout, saying, "Hey! We're going to roll out this programme. Here are the asks. Anyone who's interested, come in, we do some fun stuff." For month one, that works great.

Then once you identify those individuals, that's when you need to start putting together: here are the expectations of a security champion. Get that understood and agreed upon by the managers as well, and understand how we can quantify or measure it. Being a data person, it's important for me to be able to measure whether the individual has been serving as a champion, and therefore whether there has been value given back to the team. It's a bi-directional bucket, right? We need metrics to quantify what success looks like from a security perspective for me and my team. Examples I would take are: for these key deliverables, did each of them go through a design review; has the individual been involved in code reviews for each of them; did we identify which ones could be more important from a security standpoint; how well was the individual able to capture the security requirements; how much time was put in by the security team versus by the individual on the security deliverables.

Was the individual able to deliver security tests by himself or herself, or teach others in the team to build things in? And therefore, when we expose these features to either internal bounty programmes, or external bounty programmes, or a pen test, how many issues were identified? Those are some of the high-level metrics. I mean, of course, there are others that are internal to the teams, but those are high-level metrics you could start off with.
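
A minimal Python sketch of how those high-level champion metrics might be computed; the field names and inputs are hypothetical, and any real programme would define its own.

    # Minimal sketch: design-review coverage, how self-sufficient the champion
    # is relative to the security team, and issues that escaped to pen tests.
    def champion_metrics(deliverables, security_hours, champion_hours, escaped_issues):
        reviewed = sum(1 for d in deliverables if d["design_reviewed"])
        coverage = reviewed / len(deliverables) if deliverables else 0.0
        total = security_hours + champion_hours
        self_sufficiency = champion_hours / total if total else 0.0
        return {
            "design_review_coverage": round(coverage, 2),
            "champion_self_sufficiency": round(self_sufficiency, 2),
            "escaped_issues": escaped_issues,  # found later by bounty/pen test
        }

    print(champion_metrics(
        deliverables=[{"design_reviewed": True}, {"design_reviewed": True},
                      {"design_reviewed": False}],
        security_hours=4, champion_hours=12, escaped_issues=1,
    ))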

[00:29:08] Guy Podjarny: Yeah. Perfect. Great metrics. We're quickly running up on time here, so I want to squeeze in one more question, and then we have our closing question to deal with. You talked about all these great processes and changes in how you interact with development teams. One thing we didn't talk too much about is the people on the security team, and the skills – or the temperament, the mental style – that they need to apply those. What is it that you look for in people you hire into the security team to apply this approach? How does that differ, in your mind, from who might have been hired a decade or so ago?

[00:29:46] Rupa Parameswaran: The security team has evolved over time in terms of what the requirements are for a security individual. A security individual today needs to also be a good salesperson. Unfortunately, that's the truth of lean security teams. In addition, of course, there's the no-brainer that the security engineer needs to know security: understanding what the security best practices are, what the big risk factors and threat factors are, and having a background and experience in security. Other than that, and progressively so, a security engineer working closely with engineers needs to be a full stack developer as well – at least have experience as a full stack developer, at least have learned the ropes. I would rather have an application engineer with a good bend on security than a security engineer who knows nothing about application infrastructure development. The issue is being fixed, so the security issue is being addressed.

It's difficult to find these individuals. Hiring a good security person can take anywhere from six months to a year, but it's important to have those traits to be successful. Someone who's interested can be trained, but people who are getting into the security profession should be mindful of the fact that patience, soft skills, and a good understanding of application development are as key to being successful as knowing security is.

[00:31:15] Guy Podjarny: Yeah. Just to challenge a bit – it's hard to find these unicorns. I mean, it sounds like a great hire. But if you think of the soft skills and sales skills on one hand, the application development skills on another, and maybe the security knowledge on the third: if you had to train someone on a team on one of these three, where would you compromise?

[00:31:33] Rupa Parameswaran: I'd say the sales pitch and the soft skills, I think, are easier to teach on the ground. You could always have mentors, you could always bring in external help to teach and train these individuals, as long as the individual is willing.

[00:31:48] Guy Podjarny: Yeah, and they need to have some disposition.

[00:31:49] Rupa Parameswaran: Willingness and drive are what we would look for.

[00:31:51] Guy Podjarny: Yeah. Got it. Yeah, that makes a lot of sense. This has been a super helpful conversation. I think there are tons and tons of learnings for someone coming in and trying to apply this change – how do you successfully engage the dev team, successfully shift left, or whatever your term of choice is. There are great, great insights there. Before I let you go, I have one open-ended question for you to share a view on. If you had unlimited budget, unlimited resources that you could apply to solve a problem in the industry, what would that problem be? And maybe, if you have it, which direction would you take to try and solve it?

[00:32:25] Rupa Parameswaran: If I had unlimited resources, I would focus on building out a tool that reduces alert fatigue – something that would bring in all of my alerts that are screaming and kicking from everywhere, put them in perspective, and pipe out numbers in terms of: here's the real risk, here's the approach, this is A, B and C, in that order, that I need to fix. It's easier said than done, because there are so many dimensions, but we're getting there. The industry is getting there; it's not there yet. I'm not sure whether one company's tool would fit another company today, but we should be able to get to some level of standardisation. Yes, there may be customisations needed, but we should be able to get to some baselines that are better than where we are.
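
As a rough sketch of the kind of aggregation and prioritisation described here, this minimal Python example merges alerts from multiple tools, boosts findings corroborated by more than one source, and emits an ordered fix list. All field names and the scoring heuristic are assumptions, not a real product's behaviour.

    # Minimal sketch: deduplicate alerts that describe the same issue on the
    # same asset, reward cross-tool corroboration, and rank the result.
    from collections import defaultdict

    def prioritise(alerts):
        by_key = defaultdict(list)
        for a in alerts:
            by_key[(a["rule"], a["asset"])].append(a)
        merged = []
        for (rule, asset), group in by_key.items():
            merged.append({
                "rule": rule,
                "asset": asset,
                "sources": sorted({a["source"] for a in group}),
                # Agreement across tools raises confidence in the finding.
                "score": max(a["score"] for a in group) * (1 + 0.1 * (len(group) - 1)),
            })
        return sorted(merged, key=lambda m: m["score"], reverse=True)

    alerts = [
        {"rule": "log4shell", "asset": "svc-api", "source": "sca", "score": 9.8},
        {"rule": "log4shell", "asset": "svc-api", "source": "dast", "score": 9.0},
        {"rule": "open-security-group", "asset": "vpc-1", "source": "cspm", "score": 7.5},
    ]
    for rank, item in enumerate(prioritise(alerts), 1):
        print(rank, item["rule"], item["asset"], round(item["score"], 2))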

[00:33:15] Guy Podjarny: Perfect. Thanks. That's definitely a worthy problem to address in order to reduce some of this noise. There are too many alerts – probably even real security problems, but with an impact that's just a bit too small – and so we have to pick our battles. Rupa, this has been super insightful. Thanks again for coming on to the show.

[00:33:32] Rupa Parameswaran: It's my pleasure. Really great speaking with you, Guy.

[00:33:35] Guy Podjarny: Thanks, everybody for tuning in and I hope you'll join us for the next one.

[END OF INTERVIEW]

[00:33:43] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]