
Season 6, Episode 104

Implementing DevSecOps In Regulated Versus Unregulated Industries With Rohit Parchuri

Guests:
Rohit Parchuri

Welcome back to another installment of The Secure Developer, where we have another fascinating conversation lined up! Today your host Guy Podjarny sits down with Rohit Parchuri, Chief Information Security Officer at Yext, to pick his powerhouse brain about DevSecOps frameworks. Rohit is an accomplished security leader with an established record of building, structuring, and institutionalizing security principles and disciplines in the cloud hosting, network hardware, cloud software, and healthcare domains. In this episode, the listener gains a comprehensive understanding of the differences between a health platform and a tech platform, the crucial work of building a culture and security mindset across a company, and the challenges of weaving security into fast-paced, leading-edge organizations. We then touch on the three frameworks Rohit delineates before he starts building a program, before diving deep into the different approaches needed for heavily regulated industries versus less regulated spaces. Plus, you'll get a sneak peek into Rohit's favorite interview question, and his hard-won take on the need for dual skills in security and programming. Finally, we look to the future and hear some exciting projections about what cybersecurity will look like in five years' time. Press play to hear all this and more!


[0:00:17.5] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the
DevSecCon community, a platform for developers, operators and security people to share their
views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check
out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build
secure applications without slowing down, fixing vulnerabilities in code, open source,
containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

On today’s episode, Guy Podjarny, Founder of Snyk, talks to Rohit Parchuri, Chief Information
Security Officer at Yext. Rohit is an accomplished security leader with an established record
building, structuring and institutionalizing security principles and disciplines in the cloud hosting,
network hardware, cloud software and healthcare domains. We hope you enjoy the
conversation, and don't forget to leave us a review on iTunes if you enjoyed today's episode.

[INTERVIEW]

[00:01:55] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today we have with us the VP and Chief Information Security Officer of Yext, Rohit Parchuri. Rohit, thanks for jumping onto the show.

[00:02:06] Rohit Parchuri: Of course, thanks for having me.

[00:02:08] Guy Podjarny: Rohit, there's a lot, you're the CISO there at Yext, but we talked about all sorts of interesting conversations that we can have over here. But today, we're going to dig primarily into this notion of DevSecOps frameworks and how you think about it, how do you think about it from a people's perspective, research, maybe we get a little bit into measurements. That sounds good, ready to go?

[00:02:29] Rohit Parchuri: It sounds perfect.

[00:02:31] Guy Podjarny: Before we dig in, can you please tell us a little bit about what is it that you do today, and maybe a bit about the journey into security into where you are today?

[00:02:40] Rohit Parchuri: Sure, I would love to. Currently, like you just talked about, I'm the VP and Chief Information Security Officer at Yext, and this is my fifth month in the company, so I'm a little new, still trying to get my bearings. Primarily, I'm responsible for building the cybersecurity program, along with the GRC and privacy components. The journey has been fantastic so far, and I'm looking forward to all the great things the company is going to do, and to how my role fits into that.

Before this, I was with a company called Collective Health. So a little bit of healthcare experience, and it certainly was interesting; that was one challenge I wanted to take upon myself, and I'm super glad I did. Healthcare is a completely different domain, and when it comes to cybersecurity, it's just all over the place. I'm sure we'll get into more specifics and details around that. I was there for a little over a couple of years, and before that I was with ServiceNow for about eight years. I started in systems and network security, and then made my way into app and product security, and a little bit of security operations.

I was basically responsible for building a few teams and disciplines within ServiceNow, before I felt I was maybe getting a bit jaded with the things I'd been doing and wanted to take up a new challenge. That's when I shifted roles and took the CISO role at Collective. You asked about the journey. For me, I think the cybersecurity concept itself dawned on me unconsciously during my undergraduate degree. My undergraduate degree was in Electronics and Communications, so I was not in the computer field. I also used to attend a night college, which was more towards the software side of things. One was satisfying my family, the other was satisfying myself.

So there was this project we were working on in the electronics domain, given that was the major, and we stumbled upon something called a rootkit. I'm sure everybody knows what a rootkit is now, but back in the day, and I'm talking about 1995 here, rootkits were pretty new, and it was a very fancy and shiny thing to work on. We stumbled upon it, and I was really thrilled by how it could actually tamper with programs and manipulate computer processes. I was pretty interested to look a little deeper into that. But my journey really began not so much from a threat standpoint, but more from asking how do I actually think about defending against something like this happening to the victims, the people who are interacting with it unconsciously, for the most part.

So that's really how it began. Then I applied to a few universities to see if cybersecurity was even a thing, or if people just talked about it. Well, I soon figured out it was actually a thing and there were a few universities giving out degrees, and from then on, I took that up.

[00:05:35] Guy Podjarny: Got it. Very cool. Well, thanks for the reverse journey there. You just need to get the bug at some point. We're going to dig a bit more into the specifics of it, but you've gone from a tech platform to a health platform, which I guess is also tech, but with a different level of regulation, and now into Yext, which probably resembles ServiceNow more from a regulatory perspective. Is it a stark difference? I mean, how significant is it to change those worlds?

[00:06:11] Rohit Parchuri: That's a great question. I think it is. I wouldn't have responded the way I'm doing right now if you'd asked me a couple of years ago, because I wasn't expecting that to be the case. I felt cybersecurity was more commonplace, and that people would actually embrace the concept and start working on it. But that's not really the case. I'm sure the listeners here coming from a security background can relate to that. It's always a journey; there's a certain friction, you have to navigate the space, etc. But for me, the stark difference I was really able to see is between product delivery versus healthcare delivery.

So ServiceNow and Yext are more on the product side of the house, whereas Collective Health is completely on the healthcare side. It also has to do with the way I would think about security programs within any healthcare space: it's more inside out than outside in. What I really mean by that is, you're not really delivering a lot of features, I mean security features or security enhancements per se, but you're managing your partner and vendor ecosystem a little more than you typically would on the product side of the house.

The reason for that is healthcare has a number of antiquated processes and vendors we have to work with, and a lot of that work has to do with paper, physical paper. You would think that in this day and age you wouldn't still have to deal with paper, but it's absolutely true, you still have to deal with it. The problem is there's a lot of PHI. Protected Health Information happens to be one of the most sensitive and confidential pieces of information you can find in this country right now. With that in mind, we're not only talking about the paper, but also about how you transform it, convert it into ePHI, the electronic form of PHI, and then you have to have a program built around that.

So you're capturing the physical aspect and the digital aspect of things, which is where it gets a little complicated, I felt. More importantly, when I talked about antiquated, I'm not just talking about the processes themselves, but also about the mindset of the people you're partnering with. When we partner with these folks, they hardly know what the difference is between security and anything else. A lot of people are still catching up to the digital world. A lot of people are still going through migrations from physical to digital. It's really difficult for us to even have a straight conversation about security, let alone talk about programs and maturity frameworks, things like that. That's never going to happen. That's a non-starter.

For me, that's a stressful place to be. At the same time, I can say I learned a lot about how you build the culture, not just with the people who understand the concepts, but with the people who don't, where you actually want to empower them with the right things. This is not in terms of tools, right? You are actually trying to build the mindset. The most difficult terrain you can find is the landscape of the human mind. That was one of the biggest things I learned about how a technology firm operates versus a healthcare firm, for example.

[00:09:17] Guy Podjarny: No, I think there's definitely a lot there. For context, Collective Health itself is a more technology-driven company in the healthcare industry. Did you encounter a lot of those challenges within Collective Health and the reality they're in, or was it more about Collective Health's partners and customers in the ecosystem that was the challenge?

[00:09:37] Rohit Parchuri: Yeah. That's a great point you brought up. So Collective Health is cloud-first, microservices architecture, everything you can think of in terms of how we operate infrastructure and applications at scale in this day and age. That's what Collective Health is, and they're pretty focused; they have a great vision for how they want to build products that are adopted, used and beneficial for people. The problems and the friction that I've seen are not with the leaders, not with the company itself; the culture has been pretty strong, right? When you're in healthcare, HIPAA is table stakes, right? HIPAA is something you have to do, and more. It's actually much easier for us to talk about GRC in healthcare or a heavily regulated space than in the product space, for example.

The problem is more, how do I take that HIPAA journey, for example, with my vendors and partners? That's the difficult part, which is where the external-facing interactions happen.

[00:10:34] Guy Podjarny: Yeah, yeah. No, that makes a lot of sense. I guess, indeed, let's talk DevSecOps. Talk a little bit about how you implement it. I think one of the great perspectives you have here is indeed the healthcare surrounding on one side and the Yext surrounding on the other, but in both cases, your own companies, Yext and Collective Health, are agile, DevOps, fast-moving companies. How do you think about weaving security into development or into those activities? What are some of the core principles or approaches that you use in such a fast-moving space? How do you tackle it?

[00:11:15] Rohit Parchuri: Yeah. It's a difficult question to answer, I will definitely start with that disclaimer. There is no one size fits all for every fast-paced organization, every leading-edge organization, for example. You have to start, or lead, with the why: why are we even doing this in the first place? Then assess the other attributes of how you want to build the program. But I'll talk about what worked for me, and maybe we can expand from there. There are a few key ingredients that I focus on when I'm taking up the role, when I come into a company and I'm trying to assess what cybersecurity program I need to build, both from a culture standpoint and also from a staffing standpoint.

The first ingredient I want to talk about is culture. First and foremost, how do you build a culture or a security mindset across the company? Are you starting from ground zero, or did somebody else do the work for you and you're starting at level one or level two, whatever that may be? The culture is so important, I can't stress its importance enough, and a lot of people don't take it seriously. I definitely am not an exception to that; I've learned this the hard way myself. In order for you to have a successful program, a successful interaction, and to influence people to take the right decisions from a cybersecurity standpoint, because that's what you're responsible for in a company, you need everybody rowing in the same direction. And culture plays a critical role in that.

However, it's still a combination of things. You can't just focus on the culture; there's also automation and enforcement. When you establish the culture as the foundational principle, then you take it a level up and say, "Okay, this is how we want to define the practice within the company, and these are the tools or the frameworks that we would enable to get things done at scale, and also optimize as we go along." I would start there, and then, depending on how we work as a company, or what direction we're taking from a product or sales strategy, I would pivot and course correct as needed.

[00:13:19] Guy Podjarny: Are there specific frameworks that you start from? I mean, you've worn a similar hat in three companies now. Are there some go-tos?

[00:13:27] Rohit Parchuri: Oh, absolutely, absolutely. I won't pretend that I start with something of my own, because there are a lot of great frameworks and a lot of industry experts who have actually worked on them. So why reinvent the wheel when you already have a trove of information available? It's not so much about the framework itself, but the way I approach it and what I feel comfortable with, and I'll talk about the frameworks in a minute. But I would delineate this into program-level frameworks and control frameworks.

A program-level framework for a CISO is so important, and even for folks in internal audit. It answers, "What direction are we taking as a company when it comes to cybersecurity staffing, spend, the program objectives, etc.?" So that's really setting the narrative or the dialogue to begin with. This is what I would use to talk to my board, talk to my executive team, and figure out what direction we need to take in terms of compliance certifications, regulatory requirements, etc. The extension to that is the control framework itself. The control frameworks are how you manifest the program framework into practice: simple things like how you build the security principles, the architecture, the principle of least privilege, architectural considerations, threat modeling, scanning, the whole gamut, right?
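
As an aside, here is a minimal sketch of how one item from a control framework, threat modeling, might be manifested in practice in the spirit Rohit describes. The component names, STRIDE coverage, and mitigations are purely illustrative assumptions, not anything discussed in the episode.

```python
# A minimal, illustrative sketch: walk each component of a system against the
# STRIDE categories and report which threat categories have no recorded control.
# Component names and mitigations below are hypothetical.

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service", "Elevation of privilege",
]

# Hypothetical system model: component -> STRIDE categories that already have a control.
mitigated = {
    "api-gateway": {"Spoofing", "Information disclosure"},
    "orders-service": {"Tampering"},
    "postgres": {"Information disclosure", "Elevation of privilege"},
}

def open_threats(model: dict) -> dict:
    """Return, per component, the STRIDE categories with no recorded control."""
    return {
        component: [t for t in STRIDE if t not in covered]
        for component, covered in model.items()
    }

if __name__ == "__main__":
    for component, gaps in open_threats(mitigated).items():
        print(f"{component}: {len(gaps)} open threat categories -> {gaps}")
```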

A few examples: for me, in terms of a program framework, NIST CSF has been super helpful. I'm sure most of your listeners or folks in cybersecurity can attest to that. It's very comprehensive, very holistic, but you also have to make a judgment call in terms of how much it aligns with your company objectives. When you're trying to select a framework, you don't have to be married to a certain one; what it all comes down to is how you align it with your company objectives and goals. Because having a framework by itself is not really going to help anyone, including yourself.

How you align with it, and then show the direction you want to take both from a cyber standpoint and from a company standpoint, is super helpful. For a controls framework, what I've used in the past is the CIS Controls. I think they used to have the Top 20, but now they've actually come out with the CIS Controls. I've used that in the past, and I feel comfortable recommending it myself, because there are a few things you want to start off working on, and they give step-by-step guidance on how to achieve that.

The last one, I would say, is the behavioral framework when it comes to DevSecOps specifically. A couple of examples: one is BSIMM, which was developed by Synopsys, I believe. The other is OpenSAMM, and I'm sure many of you know about OpenSAMM; it was developed by OWASP and I think has had multiple iterations. So those are a few things I would start with.

[00:16:10] Guy Podjarny: Yeah, no. That's a great outline there: you have your program and the framework for that, the framework for the controls, and the ones for behavior. You said that BSIMM and OpenSAMM specifically are especially useful for DevSecOps. What makes them so? I guess, what's different about what you would look for in a framework when you think about a DevSecOps surrounding versus not?

[00:16:35] Rohit Parchuri: Yeah. Maybe let me take a step back and help you understand how I would approach the whole DevSecOps model, or the maturity in general. It all depends on what product and software development life cycles you have as a company, how you're building your products, how you're deploying your products, the post-deployment monitoring, things like that, right? You have an engineering team, you have technology operations, and they do follow certain disciplines and functions.

So it's not that they should align with how you're building things out; rather, it should be the opposite, you have to align with theirs. This is such a great example of what Snyk does; what you guys do is exactly going there, right? Because you're trying to integrate with the workflows and processes that the dev teams have, and for me that's been the holy grail. That's really how I got a lot of things done: really influencing people on the why, and then trying to pivot my direction towards how the PDLC and SDLC operations really happen.

[00:17:41] Guy Podjarny: Yeah, yeah, it makes a lot of sense. Really it's all about empathy for the job and, I guess, adapting to where you want security to be applied, for instance within development. Is there a difference - I guess you've worked mostly in these types of fast-paced surroundings, but are there any notable failures, if you will, things that you tried to apply that didn't seem to work in this fast surrounding?

[00:18:06] Rohit Parchuri: Oh, man, a lot of failures, a lot of failures. I mean, I would say 75% failures and maybe 25% successes. Maybe I'll talk about a few examples. One is that this whole practice of getting closer to the developers or developer ecosystems only happened because I failed the other way. I failed because I was creating my own dashboards; I was asking folks to work on the security tools that I had built and the processes that I was managing, right? When you have that many silos and you're working in a vacuum, it gets really difficult, not just for the people within your team, but also for the people who are interacting with you on a day-to-day basis.

Our customers, of course, happened to be developers, at least when you talk about DevSecOps, and you should be catering to them, really understanding the pain points and the challenges they've gone through. I did not do that. I can say this loud and clear: I failed on that. This definitely goes back to my ServiceNow days, when I was trying to establish this practice, when I was trying to create the DevSecOps function, which was pretty new still.

I think anytime I try to take people out of the workable processes they're used to and place them in a different channel, it may feel like it's going to work for some time, but it's not a long-term accomplishment. You're never going to keep up with that. There's always going to be something coming up, and there's more friction than actual work being done. That, I would say, is definitely the biggest failure I know I experienced myself.

[00:19:36] Guy Podjarny: Yeah, well, I think that's a great lesson. I think we dove a bit deep into this rabbit hole. Maybe if we zoom back out a little bit and talk about the different surroundings, Yext versus Collective Health, the more regulated versus the less regulated. Now, you have these different frameworks, you have the desire to adapt to developers. Is that different in the highly regulated space - does the same approach work?

[00:20:07] Rohit Parchuri: It's a good question. I'm trying to think. I'm sure it's different. So, you know how I spoke about how highly regulated industries start off with the premise that you have to comply with certain regulatory obligations. It's not so much about contractual obligations, I'll talk about that in a minute, but more about what you need to satisfy for the government or in the legal space before you can start operating as a business. With that in mind, I think as part of your new hire orientation, it's injected into employees' minds that this is the bare minimum we need to have. This is the table stakes, right? So work towards that.

So you're not starting at ground zero when it comes to the GRC aspect of it. I'm not going to say the same thing applies to all the other cyber practices. DevSecOps is a very specialized skill and a very specialized domain, and not a lot of people understand exactly what the benefits are, or sometimes even know how to operate it. At the highest level, people would know, developers or any employee at a company would know, that this is what we need to achieve, this is a checkbox, right? Just like you would tick a performance checkbox, you need to tick a security checkbox, so that you're HIPAA compliant or PCI DSS compliant or FedRAMP compliant, whatever that may be. But that's not really going to take you to a place where you're truly secure in your own nature.

That's one thing I've found: there's a difference in how you approach the discussion, approach the dialogue with folks, when you're in a heavily regulated space versus not. On the product side, though, in the discussions that you would have, you need to lead with the why, right? What's the benefit for me to do this? Everybody wants to do the right thing; it's not that they hate security. I get this a lot. People feel that security is a policing agent: they come in, they throw a bunch of things at our heads, and they leave.

A lot of effective security leaders don't do that; they're actually the opposite. We enable the teams to do the right work at the right times. At the same time, we want to defend ourselves from certain things: bad actors, regulators, etc. So on the product side of the house, at Yext and at ServiceNow, we talk about the benefits to begin with, and then we showcase what models have been driven to support the case. This could be an enterprise security assessment at the highest level, or, bringing it down a notch, threat modeling and the use cases we came up with, and then we drive the discussion downwards.

In both cases, though, one thing is common: you need to have the executive team or board weigh in on the decisions you're taking as a company when it comes to cyber. If that doesn't happen, the bottom-up approach is never going to work. Top-down is something you want to start with, and then you include the operations and practice at the line manager or individual contributor levels, for example.

[00:23:02] Guy Podjarny: Yeah, got it. Let me echo this back. You're saying that in a regulated surrounding, achieving the needs of compliance is easier than in an unregulated one, because it's hardcoded into people that you have to do X, Y, Z, and they understand, I mean, it's healthcare, I'm going to worry about it. But it almost means that, by virtue of that, they need to unlearn it a little bit when you talk about slightly less compliance-oriented, pure security mindfulness or awareness of security concerns. But either way, whether it's regulated or unregulated, you lead with the why, and the why is not just compliance, or sometimes it's compliance but it can be a fair bit more. Did I get that correctly?

[00:23:45] Rohit Parchuri: That's accurate. Also, I'll touch on this: the why is a combination of the obligations you as a company have to meet for your customers and the government, but also your due diligence, right? What do you as a leader, what do you as a security person, want to defend your organization from? Because that technically is how it should start, but unfortunately, we can't have that dialogue without satisfying the first part, because the company leadership is looking for: how is this going to enhance my bottom line?

How is this going to push my expenditure and revenue as a company? How do you think about this? You have to start with a business mindset and then go into the technical mindset. But for me, being a technologist myself, I definitely touched more on the threat aspect of things, which I shouldn't have, and which again is another lesson I've learned recently. So yeah, those are a few things I would also touch on.

[00:24:40] Guy Podjarny: Yeah, yeah, for sure. We'll get back - I'm actually tempted to go to measurement here, but let's put a pin in that and come back to it after. Instead, let's talk people. We talked a little bit about developer engagement, but for the security team itself, when you talk about building a team and staffing up for it, what do you look for in a hire? And maybe what are the warning signs? What are you looking to avoid, if you're talking about this being a different and difficult, specialized mindset?

[00:25:11] Rohit Parchuri: Yeah, that's a great question, by the way. I think security, or anything that has to do with cybersecurity right now, is a niche element in the market. A lot of people tend to look at this in different ways. My way of thinking about this: you start with the baseline. For me, irrespective of the specialty, you start with a few things. You have to have a threat analysis skillset, combined with programming and systems knowledge - not experience per se, but an understanding of the concepts, the basic programming concepts, the basic systems concepts.

Given we're in cloud-first and microservices architectures right now, you see this more and more, because you're not only dealing with a single layer of defense, you're not talking about the application by itself. Even though you have an application security engineer, they have to do much more than look at the application threats. They have to go into Docker, for example: what orchestration services are we running? How is this all combined? How does this all pull together into a threat vector, or what threat landscape are you looking at?

I would say security folks need to have holistic security coverage, no different than a full-stack developer would. In the past, we never had the concept of full-stack, but now we do, because there's a reason for it; otherwise, we wouldn't actually be able to build the apps we're developing right now. That's the first thing, that's the baseline I would start off with.

[00:26:34] Guy Podjarny: It's a cool concept. I don't know that I've heard of a full-stack security person. It's interesting, because you do indeed hear about, and I can relate to, the dual need of understanding security on one side, but then having some programming mindset or understanding of the platform. How deep do you require people to go into that programming side? Do you expect the people you hire to know how to code? Do you expect them to show some Terraform proficiency or Kubernetes proficiency? How do you think about the weights, if you will, in terms of what's more important? Can you teach security more easily than programming or infrastructure, or vice versa?

[00:27:16] Rohit Parchuri: Yeah, and also it's not supposed to be a single kind of discipline we're looking for, right? There are different specialties within security. Security is a pretty broad topic. We need different people; we need really good coverage across the realm in terms of how we view these people. There are really strong non-technical people, and there's a really good benefit we would get out of them, and then there are really strong technical people, but there is a middle ground too.

The way I would think about this is, and I'm not going to credit myself for this one, Phil Venables from Google, I think he's the Google Cloud CISO right now, actually spoke about a model where you combine a few elements of skills within a security group. He talks about risk advisors, subject matter experts, and operational analysts. What that means is, when you're trying to build a well-rounded security team, you need a good, healthy combination of people coming from different disciplines. Think about an application security engineer: what's the easiest way for me to have a person come in, hit the ground running, and actually get to the things I need him or her to do?

Coming from a software background is super helpful, because it's easier for me to teach them security. Security, in my opinion, and not a lot of people agree with this, is more common sense than actual skill. When I say common sense, I'm not talking just about the technical part of it, but also the people skills, the influencing skills. I talk about this in other discussions too: when you're thinking about security, you're talking not in terms of enforcing something, but about how you bring people along on the journey, in terms of getting things done.

So you're reporting the risk, you're not really executing on it; somebody else is executing. So you've got to influence people to do the right thing and, of course, you've got to show the benefits and whatnot. But the way I would position it is: if you're coming from a developer background, it's so much easier for me to teach that security element, as opposed to the other way around, where you're solely on the risk side or maybe the threat side of things. For example, you're a security engineer, you've been working on this for a little while, but your focus on programming is limited, yet your role dictates that you need it in order to move forward, or you can't take the function within the company forward. I would position a background in software as better.

The same thing would apply to network security engineers or cloud security engineers, for that matter. You spoke about Terraform, right? What's Terraform? Just a different form of programming. You have the concepts, you have the fundamentals built in. In the past, I used to have a template just for object-oriented programming. I would ask a few questions to see what expertise they have in that, and then I would move on to threat modeling or whatever the relevant security topic is, right? I would say that having a strong skill set in one of the functions outside of security, and then coming into security, makes it much easier for me to actually bring those people up to speed.
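
For illustration, here is a minimal sketch of the kind of object-oriented warm-up exercise Rohit describes using before moving on to the security questions. The class names and scenario are hypothetical, not his actual template.

```python
# A hypothetical interview warm-up: the candidate is given a small abstract base
# class and asked to implement a concrete subclass, then explain the design
# choices (encapsulation, inheritance vs. composition, and so on).

from abc import ABC, abstractmethod

class Scanner(ABC):
    """Base class the candidate is given."""

    def __init__(self, target: str) -> None:
        self.target = target

    @abstractmethod
    def scan(self) -> list:
        """Return a list of finding identifiers for the target."""

class DependencyScanner(Scanner):
    """Candidate implements a concrete subclass and justifies the design."""

    def scan(self) -> list:
        # Real logic would inspect a dependency manifest; this is a stub result.
        return [f"{self.target}: no known vulnerable dependencies (stub)"]

if __name__ == "__main__":
    print(DependencyScanner("billing-service").scan())
```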

[00:30:20] Guy Podjarny: Yeah, yeah, that makes sense. That's consistent with what you've been describing around the tooling, that you have to relate to that tooling, and around the centricity: the idea that you have to align yourself to the model of the team whose work you're looking to help secure, and know how they actually operate. So having the core skills over there is probably better than coming in with an "I know what to do, just listen to me" starting point.

[00:30:47] Rohit Parchuri: That's the worst. Oh, my God. In security, I think you've got to be people smart, right? Otherwise, you won't go far, especially in security. It's true for other fields too, but given that you're trying to get other people to do the work, you've got to know how to pull the strings, you've got to know how to talk about the right things at the right time. Otherwise, it's just going to be a broken concept; it won't really manifest itself into a true benefit, for example.

[00:31:13] Guy Podjarny: Yeah. If we're maybe thinking a little bit about tips and tricks, what's a favorite interview question that you use to try and gauge people's disposition?

[00:31:22] Rohit Parchuri: Oh, I'm glad you asked. I think I do have one. Maybe I'm spilling the beans now, because everybody listening to this will have the answer ready. One thing I would ask for is to threat model. I wouldn't have a certain scenario in mind; I would build up a scenario while I'm talking to the person, whatever their expertise is. I would request specifics as they're performing the threat modeling, and how it applies to their specific role. So I would start off with the non-technical aspects of threat modeling and then go into the technical aspects. This is for me to assess the thought process, how people think about security. The reason for that is, it's a certain skill, putting the thought into practice when it comes to security, right?

Everybody can think about security, right? But how do you carry that torch to a place where you're not only identifying the issues, but also making sure you have the right defenses in place? This is where thinking out of the box comes into play, because academically you're taught about the top 10: if there's a cross-site scripting issue, okay, what's the fix? Whitelisting or escaping? That's not the reality, right? Reality comes in multiple forms and sometimes you don't even have an answer. So how do you adapt and course-correct depending on what situation you're faced with? That's what really blends the non-technical and technical aspects of threat modeling. That is something I've had much success with in the past in differentiating between a successful candidate and an unsuccessful candidate.
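
To ground the textbook answer Rohit alludes to ("whitelisting or escaping"), here is a minimal sketch of both in Python. It is purely illustrative; in practice you would lean on your templating framework's auto-escaping rather than hand-rolling this, and the function names are made up.

```python
# Two textbook XSS mitigations: output escaping and input allowlisting.

import html
import re

def render_comment(user_input: str) -> str:
    """Escape user-controlled text before embedding it in an HTML context."""
    return f"<p>{html.escape(user_input)}</p>"

USERNAME_ALLOWLIST = re.compile(r"^[A-Za-z0-9_-]{3,32}$")

def is_valid_username(value: str) -> bool:
    """Allowlist validation: reject anything outside a known-good pattern."""
    return bool(USERNAME_ALLOWLIST.fullmatch(value))

if __name__ == "__main__":
    print(render_comment('<script>alert("xss")</script>'))   # tags are neutralized
    print(is_valid_username("rohit_p"))                       # True
    print(is_valid_username("<img onerror=alert(1)>"))        # False
```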

[00:32:53] Guy Podjarny: That's great. I don't think you're hiding anything, because I think the beauty of such a question is that it's broad enough that it shows you how people approach it. When I was a CTO at Akamai, that was the web space, and one of my favorite questions was: you open a browser, you put in a URL and hit enter, and a webpage shows up. What happened in the process? Then people would take it apart; some people would break down more of the rendering of the webpage, some would be more network-minded. Some would describe it at a high level, some would go low-level. Oftentimes these types of open questions are useful just to show where people's biases are, right?

[00:33:31] Rohit Parchuri: I like that. That's, yeah, that’s a really nice question to ask for sure.

[00:33:35] Guy Podjarny: So we're starting to run out of time a little bit here. We did talk a little bit about the methodologies and the frameworks, and we talked about people and some of the hiring. Let's talk KPIs and measurement. There are a lot of different activities here: choices, frameworks, hiring. How do you know if you're doing it correctly? What do you use to measure the success of yourself, of your team, of the program?

[00:34:02] Rohit Parchuri: Yeah, oh God, you just opened a can of worms now. KPIs are a never-ending dialogue when it comes to security, right? Because you're always faced with questions about what the return on investment is when I'm trying to sign this off for you. I'm just going to say this: there's no one size fits all. We have to tailor the metrics, the telemetry, whatever the KPIs, or key risk indicators as some leaders now call them, may be. It really depends on how you're trying to build the program out and how you see yourself accomplishing that.

Ultimately, it boils down to a few things: what do your customers need from you? What does the government need from you? What do you yourself need from the program? I would focus on those three elements to begin with and let them be a guiding pathway to build out your respective KPIs. A few things that worked for me, and I'm just going to talk about my own experience: governance metrics. When you have GRC programs, you've got to focus on them at the highest levels. How are you trying to define the program? How do you show the success, or the pathways you're taking, in order to be successful?

Basic expected-controls conformance, for example: patching, identity and access, configuration assurance, any deviations that you capture. This could be quantitative, this could be qualitative; it really depends on how you're building the program out or how specific you want to get with the information. There's also the audience, right? If you're conveying this information to the board or the executive team, then of course it has to cater to that audience, which means it has to be at the highest level, and you don't go into specifics. But if you have an ISOC, for example, or a charter committee, or somewhere you have a group of people making decisions about what security means to the company, you have a different way of approaching that.

Then there are also service- and asset-risk-oriented metrics. The way I would think about this is as the measurement of risk to specific groups of assets or business services in relation to, say, attacker motivation or attacker capabilities. Some of the factors you could use here would be the layers of complementary defense between an attacker and a target, the degree of pressure on controls, for example, are you making it difficult for an attacker, even after they compromise the first layer of defense, to come in and get to the final target, and the dwell time.
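
As a concrete illustration of the asset- and service-oriented metrics mentioned here, below is a minimal sketch that computes mean dwell time and counts layers of complementary defense. The incident records and layer names are made-up assumptions, not data from the episode.

```python
# Illustrative risk-oriented metrics: mean dwell time per incident and the
# number of complementary defense layers in front of a target.

from datetime import datetime
from statistics import mean

# Hypothetical incident records: first compromise vs. detection/containment.
incidents = [
    {"compromised": datetime(2022, 3, 1, 9, 0), "detected": datetime(2022, 3, 4, 17, 30)},
    {"compromised": datetime(2022, 5, 12, 22, 0), "detected": datetime(2022, 5, 13, 6, 15)},
]

def mean_dwell_time_hours(records) -> float:
    """Average time an attacker sat undetected, in hours."""
    return mean(
        (r["detected"] - r["compromised"]).total_seconds() / 3600 for r in records
    )

# Hypothetical defense layers between an attacker and a crown-jewel data store.
defense_layers = ["edge WAF", "network segmentation", "service authn", "database authz"]

if __name__ == "__main__":
    print(f"Mean dwell time: {mean_dwell_time_hours(incidents):.1f} hours")
    print(f"Layers of complementary defense: {len(defense_layers)}")
```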

Things like these. What you could also do is run simulation exercises; not every team will actually have realistic metrics, so you can always do simulations and figure out how these metrics evolve and how they manifest themselves. The last thing I'll talk about, which typically is most important for many companies, is the commercial outcomes, and also the competitive advantage you have when you compare yourself with others in the same market space. So what direct or adjacent benefits are expected, in addition to loss avoidance, for example, risk reductions, security outcomes, right?

A few factors you can think about: maybe customer sign-ups or upgrades because you have seamless authentication processes as a product feature, or maybe increased collaboration and support of customers, or a reduction in employee overhead, or maybe the time spent managing controls, because every time you build a control, there's an operational lifecycle to it. So if you're trying to reduce that, and still be able to optimize it, that's a great metric to talk about.

[00:37:29] Guy Podjarny: Yeah. For this last one, is it around measuring the avoidance of something? So when we think about the authentication example, do you mean here that you assume some baseline of what's bad, or maybe what's normal, and you're trying to measure how far away you are from it? Just trying to understand.

[00:37:49] Rohit Parchuri: Yeah, so it's twofold, right? One is, what are the table stakes for you when you're trying to define the security controls for your customer space? What is the difference between yourself and your competitor? Are you making it easier for people to bootstrap security principles or controls as part of your product, or are you making it difficult for them to actually go through that journey?

In some cases, it has to be secure defaults, right? No questions: you're going to get this by default, you don't have to lift a finger. But in some cases, it's much more complicated than that; you have to have a customer action before you can actually realize the benefit of a security control. That's one part, and the other part is how you manage your own workforce from a security standpoint when managing or pushing these patches or security features out. Are you trying to reduce the unit cost of a control?

If you're doing that successfully, that means you're actually going in the right direction, because when you start off with a control, it's going to take a while before it starts providing benefits, or before you can reap the benefits of it. But over time, if you're doing the right things, it becomes more of an automated enforcement. The things you're building out, the investment you put in, the expense you put in, is actually going to come down. So those are a few things; I would say that speaks volumes to how you're building the program.
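
To make the "unit cost of a control" idea concrete, here is a minimal sketch of one way it could be computed: total control spend divided by the assets it covers, compared period over period. The formula and all figures are illustrative assumptions, not a method described in the episode.

```python
# Illustrative unit-cost calculation for a security control.

def unit_cost(licence_cost: float, build_hours: float, ops_hours_per_month: float,
              hourly_rate: float, months: int, assets_covered: int) -> float:
    """Cost per covered asset over the given period (made-up cost model)."""
    total = licence_cost + (build_hours + ops_hours_per_month * months) * hourly_rate
    return total / assets_covered

if __name__ == "__main__":
    # Year 1: heavy build effort, manual operations, smaller coverage.
    year1 = unit_cost(50_000, 400, 40, 100, 12, 800)
    # Year 2: automation in place, less operational toil, broader coverage.
    year2 = unit_cost(50_000, 40, 10, 100, 12, 1_200)
    print(f"Year 1 unit cost: ${year1:,.0f} per asset")
    print(f"Year 2 unit cost: ${year2:,.0f} per asset")
```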

[00:39:13] Guy Podjarny: Yeah, okay. Yeah, I think it's interesting. You're right that it's a can of worms here; we could probably have gone deep on each one of these elements. I think it's a great overview of the different types of KPIs you would have. Clearly, oftentimes the challenge is consolidating them. Maybe let me squeeze in one final question on this before I ask you my typical closing question. In passing there, you mentioned KPIs, but you also mentioned key risk indicators. When you're in a security leadership role, do you see the two as the same? Is a KRI, I guess I haven't actually seen that in acronym form, the same as a KPI?

[00:39:55] Rohit Parchuri: I wouldn't say that. Some people do conflate them, but for me, I treat them as separate. Performance, in my opinion, is about building elements within the program, looking at what that accomplishment looks like, and then deriving the metrics from it. A good example would be, let's say we're trying to build single sign-on. That's a performance metric, clearly, because you're building a project. Yes, there's a risk you're trying to address, but more importantly for me, when you talk about the KPI, the direction is more important than what we're trying to address.

The KRI, on the other hand, and you're right, they are called KRIs, is about the risk. Maybe the best way to position it is: let's say you have a top 10, or top five, risks that you're trying to present on your work. Those happen to be the highest-level KRIs. Each KRI can have multiple KPIs, but not the other way around. The risk is something you're trying to address; the KPIs are what you're trying to perform or achieve so that you actually address that key risk indicator, for example.
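
As an illustration of the KRI-to-KPI relationship Rohit describes, where each KRI can roll up multiple KPIs but not the other way around, here is a minimal sketch. The risk, KPI names, and targets are hypothetical.

```python
# Illustrative KRI/KPI data model: one key risk indicator aggregating several KPIs.

from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str
    current: float
    target: float

    @property
    def on_track(self) -> bool:
        # For these coverage-style KPIs, higher is better.
        return self.current >= self.target

@dataclass
class KRI:
    risk: str
    kpis: list = field(default_factory=list)

    def status(self) -> str:
        met = sum(k.on_track for k in self.kpis)
        return f"{self.risk}: {met}/{len(self.kpis)} supporting KPIs on track"

if __name__ == "__main__":
    account_takeover = KRI(
        risk="Account takeover of customer admins",
        kpis=[
            KPI("Workforce SSO coverage (%)", current=92, target=95),
            KPI("MFA enrollment (%)", current=99, target=98),
        ],
    )
    print(account_takeover.status())
```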

[00:40:58] Guy Podjarny: Yeah, yeah, that makes sense. The performance is really around the execution
towards reduction of risk.

[00:41:03] Rohit Parchuri: Yes. Good way to put it.

[00:41:05] Guy Podjarny: Rohit, this is such a broad topic, and there are so many more things we could dig into, but we've run out of time. So before I let you go, one final question that I like to ask all guests coming on the show. If you imagine someone sitting in your seat five years from now, not necessarily in your company but in your type of role, what would be most different about their reality? What would be higher, lower, harder, easier, most different?

[00:41:34] Rohit Parchuri: Oh, man, it's a tough one. There are so many things I could talk about; I'll just maybe talk about a few. I think inflation in security staffing, and I think we spoke about this a little while ago. Right now, there's a lot of demand for security folks. Supply, not so much, right? Which is why it's getting really difficult for us to find quality security engineers, and when I say quality, I'm talking about all the attributes that go into quality engineers. Finding quality engineers is a difficult task. I think the inflation of these specialties in cybersecurity in general is only going to grow, both from a role diversity standpoint.

Also, you see now that the proliferation of solutions out in the market for cybersecurity keeps growing, and I think that's going to keep growing; I'm not sure what the upper limit is going to be. I think the person sitting in this position five years from now has to navigate through all this noise and weed out what exactly they need to have in order to build the program, mature the program, maintain the program, whatever that may be, rather than dealing with the shiny or the beautiful thing out there.

I'm not saying everybody does that, but it's easy for us to look at what's new. We already have a solution in place, but there's a new one that's cutting edge and something we want to implement. So it's easy for us to take that and refactor around it, but not a lot of people think about the total cost of operations when you do that: migration costs, deployment costs, it just adds up. On the one hand, and no offense to the vendors out there, you have vendors pushing you towards the solutions that are out there, saying this solution is better than this one, please consider it, and you do a POC, you do a POE, and end up adopting it, but you fail to see what's really there for the long term.

That's one thing I feel is only going to grow in size, and you have to put critical thought into it before you start establishing something. I, for one, don't do shelfware. If I'm using something, I'll make sure we squeeze every penny out of it before I go to the next one, but that's just me. I feel that space is definitely going to get more complicated and inflated.

[00:43:52] Guy Podjarny: Yeah, yeah. It sounds like basically the fragmentation that aligns with, I guess, what we've seen with DevOps and, to an extent, with decentralization. All these motions create a reality where a lot of teams will pick different tools, and they will need different tools. That's a reality that the DevOps world, I can't say is thrilled with, but has made peace with, maybe learned to accept a bit more. So security would be facing that as well. Rohit, this has been great. Thanks a lot for coming onto the show.

[00:44:26] Rohit Parchuri: Of course. Yeah, it's been really fun. Thanks for all the beautiful questions.
[00:44:30] Guy Podjarny: Thanks everybody for tuning in. I hope you join us for the next one.
[00:44:34] Rohit Parchuri: Thank you guys.

[OUTRO]

[00:44:39] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have
time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com.
If you'd like to be a guest on the show or get involved in the community, find us on Twitter at
@DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye
for now.

[END]

