Season 6, Episode 99

The Role Of Flexibility In Success With Geoff Belknap

Guests:

Geoff Belknap

An initial passion for networking and telecommunications led today's guest on a journey into the world of security. After gaining experience building security from the ground up at a few companies, he is now working as the chief information security officer (CISO) at LinkedIn. Geoff Belknap, in his second appearance on The Secure Developer, dives into the elements that he believes are key to a successful security organization, and to a successful company as a whole (hint: flexibility and adaptability are non-negotiable!). We discuss the process of identifying security problems, who owns the risks, and why security is such a difficult thing to measure. Geoff also shares his perspective on changes that he expects to see in the CISO realm in the future, and offers some advice for any CISOs trying to decide which company to work for.

[00:00:22] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community. Join us June 23rd for DevSecCon24. It's a free global vendor-neutral community-driven conference that connects developers, security, and operations teams to learn and enable the integration of security into their development practices. Find out more at devseccon.com.

This podcast is sponsored by Snyk. Snyk is a dev-first security company, helping companies fix vulnerabilities in their open source components and containers, without slowing down development. To learn more, visit snyk.io, S-N-Y-K.io.

Welcome back to The Secure Developer. On today's episode, Guy Podjarny, Founder of Snyk, talks to Geoff Belknap, Chief Information Security Officer at LinkedIn, with more than 20 years of experience in security and network architecture. Previously, Geoff was the Chief Security Officer at Slack, responsible for physical and information security. He has held numerous technical leadership roles in the financial services and telecommunications sectors. Currently, Geoff serves on the board of the Bay Area CISO Council. Geoff has a Bachelor of Science in Business Management from Western Governors University, and is actively involved as an advisor to a number of startups on cybersecurity and policy. He is a member of the CSIS Cyber Policy Task Force for US and international cybersecurity policy. We hope you enjoy the conversation, and don't forget to leave us a review on iTunes if you enjoyed today's episode.

[INTERVIEW]

[00:02:04] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Today, we're going to talk about security at scale and how do you make it work, with someone who I'm fortunate to actually have for the second time on the podcast, which is a rare profile. That's great and that's Geoff Belknap, who is the CISO at LinkedIn. Geoff, thanks for coming onto the show. Again, clearly I haven't sort of chased you away sufficiently in the last session.

[00:02:28] Geoff Belknap: Thanks for making the mistake of having me on twice. Let's see if we can go for a third one after this.

[00:02:33] Guy Podjarny: Let’s see. One step at a time. Geoff, we have a whole bunch of kind of interesting topics to talk about from scale security and how do you measure it and know you're doing it right and how does that work in a large organization? Before we dig into all of those, can you tell us a little bit about what is it that you do and maybe like a little bit of the path that got you into security? I believe you kind of came in through the network routes and I don't know if it's evolved or devolved into security after that.

[00:03:03] Geoff Belknap: Yeah. I think, well, I sort of got here accidentally, but where is here? I am the chief information security officer at LinkedIn. I'm the senior most executive responsible for information security at LinkedIn. So everything is my fault, although in actuality, not quite everything. But I'm responsible for product security, application security, security operations organization. So hunting and finding bad people doing bad things on our environment, governance risk and compliance, and customer vendor security, things like that; cloud security, infrastructure security, etc., which is interesting because I got here because I was really passionate about networking and telecommunications.

Before I started in security, I had a career for, I don't know, 10, 15 years doing sort of network and telecommunications architecture, and that stuff was always really interesting to me. But, like a lot of people, when I was young, I wanted to be a cop or a firefighter or I think a helicopter pilot or something like that. Sometimes, we never let go of our sort of base little childhood instincts, and there was a point in my career where I had an opportunity to join a network security startup. What they were doing was a company called Solera Networks. They were recording packet traffic off the wire and sort of slicing it and dicing it and doing security stuff with it.

That was very interesting because I was doing a lot of that stuff with a network perspective, and applying security to it was obviously very appealing to me. I got an opportunity to learn what the security space was all about, what the problem space was, what the industry was all about. That was really fascinating for me and really just sort of drew me in. From there, I ended up in a role where I started the security program at a company called Palantir and was the chief information security officer there. I went to another company called Slack, which some of you may know, and helped, again, build a security team there.

Then after Slack, I was sort of looking for a different kind of opportunity. I'd built up security teams from the ground up for organizations that were growing very, very fast and had eventually reached scale. So I'd gone from zero to one, but I'd never really gone from one to N, and I felt like that would be a really interesting challenge. As it happened, there was an opportunity at LinkedIn, and here we are today.

[00:05:19] Guy Podjarny: That's a great journey and actually a sharp perspective that maybe we can even kind of use this opportunity a little bit to dig in. Before we go, though, to the companies, I want to talk a little bit about titles. I know titles do and don't matter, but what I would find curious is your title says CISO and VP Engineering. The engineering and security duo there is the thing that caught my attention. How do you think about your role? Are you sort of in the security role? Are you in an engineering role? How do you think about your place in the org?

[00:05:49] Geoff Belknap: Sure. I think the thing I've learned from doing security in tech, and I've primarily done security in tech organizations, is you really have to either be directly part of the engineering organization, or you have to have credibility with the engineering organization. Because if you don't, a lot of your job is going to be identifying problems and risks that need to be addressed, or at least need to be accepted and managed through the organization. Two things are important. One is establishing that credibility, both with the board and the executive business leaders of the organization, that you're identifying and prioritizing things that are actually important to the business. It's not actually useful to just be the guy who's chicken-littling everything, saying the sky is falling. Although it can be fun, it's not actually as useful to the business.

The other part is, how are you going to address that issue? I think it certainly would make my job much easier if it was my responsibility to sit back and go, "See that over there? Fix that. That's broken. Fix it." As much as I yearn for some eventual end state where that is my responsibility, the reality is a lot of what a successful security organization needs to do is be able to help: if you can't identify precisely how to fix the problem you've found, you need to at least identify how to prioritize and approach fixing it, right? You need to give people sort of a guided path towards where they need to invest their efforts.

Because when you find a problem, especially in a piece of tooling or infrastructure or a product that has already shipped, that team that you need to engage with, that partner team is almost always part of product engineering or engineering and R&D in general. The reality is they're part of the business. They can't just stop everything they're doing to immediately focus all their efforts on fixing this thing. They all think security is important, but the business is important too, and you have to help find a path that is the most technically correct way or the most technically proper way to fix that problem, that also is good for the business. That's also good for your customers and your members, and you have to take all of that into consideration while you're identifying a path to fixing that.

I think some of that can come from just, like I found at other organizations, just if you've been there long enough, you're going to get that credibility, sort of understand how the organization works. But sometimes, it's helpful to signal that like you're an engineering leader, that you have some default credibility. Then let people either just make the decisions on their own, and you can burn those bridges. Or you can sort of move on from there.

[00:09:05] Guy Podjarny: Yeah. I think that's a really sharp perspective, and I relate to the credibility or affiliation point. It's also quite consistent. When I looked back at our episode from roughly three years ago, episode 14 of the podcast, you were CISO at Slack at the time, and you were mentioning the impact of moving the security organization from what was more of a side aspect of the organization, I think privacy and compliance if I remember correctly, to reporting into the CTO and into sort of the engineering organization. So it sounds like this is well-aligned. At the time, you were talking about how that was a positive move, using words not that dissimilar about sort of the value of the engineering affinity or the relationship with engineers.

[00:09:05] Geoff Belknap: Yeah. I think there was quite the journey at Slack. Like I said, they grew very, very fast. So when I started, I was part of the privacy and policy organization. I eventually became part of the CFO’s organization and then eventually landed working for Cal Henderson under the CTO. That worked really well for Slack. Slack is a very product-focused but technically very competent organization. So things that security was working on were almost always in the technical or the CTO organization. The upside of being part of the engineering or the R&D organization is, it sort of gives you default permission to go fix things that you find yourself. Or like I said, if they're bigger than something, that you can just submit a code change for, or a pull request for. It sort of gives you that default set of relationships you need to connect with the people that are going to be working on or going to be impacted by that problem that you've identified or sort of a structural change that you think needs to happen in the platform of the product.

I think at LinkedIn, because they're obviously a much more mature organization, they'd already figured that out, right? When I walked in, security was already part of the engineering organization. It's organized a little bit differently than a startup, although I think in broad strokes it looks very similar, I'll just say, in that the CTO who I originally worked for when I started at LinkedIn is not the head of engineering. He's just the guy who is responsible for sort of charting the strategy and direction of technical strategy that they want to have for the platform.

We now just recently have moved under a different part of engineering that also owns some of the things like developer productivity, things that you would traditionally identify as IT but are still, because of the maturity of the organization, things like IT are not just IT, it's not just people that give you your laptops and your phone and send you on your merry way. LinkedIn is a fully integrated organization, where the people that you would traditionally think about as IT are also making things that touch and work in production, are building things that help drive the business. Like I said, it is a perfect place for us to be because we have direct access to the people that we need to influence to make that change in the organization.

I think, where the last time we talked on that vintage episode 14 (go look it up, go take a listen, everybody; go listen to young, not-quite-burned-out Geoff) I had a very firm position on where a CISO or a security organization should report, I've since softened my perspective. I've sort of evolved my thinking to: it doesn't matter. I've reported to CEOs and CTOs and CFOs and CIOs. What I found is what matters most is that person's influence in the organization and that person's interest in security. If that person is interested in security, and they recognize the value that security brings to the success of the business, to the technical success of the product and platform, that is the only thing that matters. That will, more than anything else, guide and govern your success as a security leader in the organization.

[00:12:05] Guy Podjarny: That's a great statement. You find the paths to sort of create that empathy to create those relationships, and the organizational structure is just a side effect to it.

[00:12:12] Geoff Belknap: Yeah. It's all about the relationships. But if you can jumpstart it by having the executive reporting line that you have be sort of already in the middle of those organizations, you're well on your way to success.

[00:12:23] Guy Podjarny: Yeah. I mean, I think that makes total sense to me. As someone kind of running a hyper growth startup and seeing all sorts of functions, this notion of affinity through a reporting line is just a repeating theme. As a company goes through different sizes, different organizational structures make sense. But at any given time, the organization structure that makes most sense is the one that sort of drives the affinity or strengthens the affinity between the two parties that currently don't have a great affinity. Actually, ironically, once they have great affinity, they actually don't need to report into the same place anymore. They can kind of be anywhere in the organization, and they'll work well together.

[00:13:00] Geoff Belknap: Yeah. I think that's a really important point. Maybe 10 or 15 years ago, the way organizations were structured really mattered, and it made a lot of sense on paper. But especially in tech, and especially in high growth, the size, structure, and sort of organizational architecture needs to change, it needs to be fluid, it needs to adapt to the needs of the business, and you need to take a look at that regularly.

[00:13:23] Guy Podjarny: Yeah. Let’s indeed dig into that structure. You mentioned that you joined LinkedIn. They were already running at scale. Can you give us a bit of a snapshot of how is the security organization split up and maybe contrast that even with some thoughts from how Slack or Palantir, minding that there's also a time difference, how was that structured?

[00:13:45] Geoff Belknap: Yeah. I think what I've learned is, again, like we just talked about, the organizational architecture is probably just as important as your infrastructure's architecture. How you're organized really is important because it reflects you as a CISO. It reflects your understanding of what the business needs to be successful, and what the products and the platform and your customers need from you for you to have a successful organization. So I think at Palantir and Slack, where you're starting from scratch and you have nothing, the organization really doesn't matter until you get to a critical mass of, say, 25 or 50 people. Then it's helpful to organize around one of two things: what you're delivering to the organization, or what your customers are asking of you the most.

So I find that in most organizations, you have about three key components, and that varies depending on what the organization's phase of growth is. One is security operations and engineering. So people, first and foremost, that are doing the detection and response work, that are already responding to telemetry [inaudible 00:14:51] from the organization, whether that be whatever you're using for endpoint detection in the corporate environment or how you're collecting telemetry in the production environment. I think it's helpful to put all those people together because their work is dissimilar from everybody else's. Everybody else sort of falls into one of the other pillars, one of which is product security or application security. Especially if you're in tech, you are shipping a product. Even if you're not in tech, if you are, say, an insurance company, you're still in tech and you're still shipping your product. I think at this point, most insurance companies and financial services companies understand that and have started to think about that.

But I see application or product security as a key element because those people are focused on how you ship the most secure product that you possibly can. Not how you ship a product completely free of flaws, because that's not technically the objective, unless you're working for an organization whose customers and investors and board members agree that you should never ship a product with a flaw, regardless of the cost. I think that second part is really important. Everybody listening to this can ship a product that has zero flaws. It just might never ship, or it might ship on a once-a-decade cycle, and even then.

Anyway, so I think you've got security operations and engineering. You've got your product security. Then everything else for me is sort of risk and compliance, or things I might call governance, risk, and compliance. That might be people that are looking at the customers that are buying from you and asking you questions about how you represent the security of your environment, how you're sure that you're doing well at your job. They might be looking at third-party vendors or third-party tooling that you're using in the environment and making judgments on whether those are secure or whether you've mitigated the risk from those things.

Then generally, they're operating the governance of your security environment. So if you've got audits, if you've got regulatory requirements, if you have made contractual commitments to customers, you're making sure that you're meeting all those from a security and risk perspective. Then, I think most importantly, you're taking all the risks that the other parts of your organization (whether it be security engineering and operations, app sec, your bug bounty, the other teams building software for the organization) are identifying, bringing those all together, and helping identify and prioritize which are the most critical ones to the business and how you should prioritize fixing those problems.

For me, it's those three core components. Then I think you look at whatever organization that you’re a security leader of and you say, “What are we trying to do? What's important to us? What's important to our members and our customers? How do we leverage those things or adapt those three key pillars into what's going to work best for that organization?” I think it's key to identify that you cannot run into each organization with a firm structure that you're like, “This is what works for everywhere, and that's what's going to work here,” because it's going to fail.

For me, I walk in with like, “I like these three key things, but they're like Legos. I can put them together differently if we need to build something on over here.” For example, in the application security space, sometimes you need partner-focused teams. Sometimes you need teams that are distributed and living forward or – I guess people call it 'shift left' these days. But you are working very closely with the people that are consuming those needs or building that software. Some people don't need that. Some people just need a centralized place where you do all the review like, “Right before you ship something, send it to me, and we'll run it through a scanner. We'll do a manual review. We'll tell you a thumbs up or a thumbs down.” That is what some organizations need, not every organization.

The same thing on security, engineering, and operations. Sometimes, if you're part of a small growing startup, you just have a few people that are looking at things. Sometimes, if you're part of a $2 trillion organization, you have lots of different business units that have security, engineering, and operations. You're partnering and collaborating together there, so you structure things sort of relative to what your organization's needs are.

[00:18:36] Guy Podjarny: I like the split and I'm aligned with it. I've got two or three questions that come out of that. I'll start with a tactical one, which is, where do you place cloud security in that mix? It's clearly not in the risk and compliance piece, but is it on the SecOps side of the fence? Or is it on the product security side?

[00:18:53] Geoff Belknap: I'll say what we're doing at LinkedIn is it lives in the organization that includes application security and product security. At LinkedIn, we have one leader that leads that space, and the reason we do it like that here, where we did it sort of differently at Slack, is what we find is the cloud, you can't see my air quotes but in air quotes "the cloud", is so integral to where we're growing as a business and how we're delivering LinkedIn that it is just application security: all the principles that you might bring to application security, but applied to infrastructure. Since infrastructure is code now in most places, you're working in kind of a similar mindset, where you're saying, "Here are the guardrails. Here are the guidelines, requirements, and standards that you need to meet when you're shipping software, and the tooling that's going to help you sort of stay on that path."

Well, the same thing is true when you're writing infrastructure as code, when you're writing software-defined networks, or when you're building a policy that's going to sort of govern how you deploy products at scale. It is not that different. The difference, though, is in product security, you've got these experts that understand what are the common flaws and, we'll say, common 'gotchas' that get you in application security. We can think about how we build automatic tooling around that to help elevate the visibility of that to the developers so that as they're developing it, they can understand the risk.

The same thing is true for infrastructure. It's just a different set of experts, right? They're looking at the network, and what does the network mean in the cloud context, and how do we write policy that governs decisions we might make there, so that we're not running into problems as people push products out the door while being sort of completely insulated from how the infrastructure works and where the risks are.
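
To make that guardrails mindset concrete, here is a minimal sketch of the kind of automated infrastructure-as-code check being described. This is not LinkedIn's actual tooling; the resource format and all names are invented for illustration.

```python
# Hypothetical infrastructure-as-code guardrail: flag security-group rules
# that open a non-web port to the whole internet. All names are invented.
from dataclasses import dataclass

@dataclass
class Finding:
    resource: str
    message: str
    severity: str  # "warn" acts as a guardrail; "block" would be a hard gate

def check_ingress_rules(resources: list) -> list:
    """Scan parsed IaC resources and flag overly permissive ingress rules."""
    findings = []
    for res in resources:
        if res.get("type") != "security_group_rule":
            continue
        if res.get("cidr") == "0.0.0.0/0" and res.get("port") not in (80, 443):
            findings.append(Finding(
                resource=res.get("name", "<unnamed>"),
                message=f"port {res.get('port')} is open to 0.0.0.0/0",
                severity="warn",  # early in the pipeline: surface, don't block
            ))
    return findings

if __name__ == "__main__":
    demo = [
        {"type": "security_group_rule", "name": "ssh-anywhere",
         "cidr": "0.0.0.0/0", "port": 22},
        {"type": "security_group_rule", "name": "https-public",
         "cidr": "0.0.0.0/0", "port": 443},
    ]
    for f in check_ingress_rules(demo):
        print(f"[{f.severity}] {f.resource}: {f.message}")
```

In a real pipeline, a check like this would run as a warning early on and only harden into a blocking gate closer to production, mirroring the policy-as-guardrails approach described above.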

[00:20:34] Guy Podjarny: Yeah. I mean, that makes sense to me, and we talk about cloud native app sec, which is a similar concept. But just curious, what do you call that team that includes the app sec and the cloud sec sides?

[00:20:47] Geoff Belknap: I think that's very much in progress. But today, we call it platform security. I don't know if that's the best thing to call it because certainly the word platform is loaded and means lots of things to different people. But calling it cloud security felt sort of limiting, right? I think those people, while certainly they're doing a lot of cloud work and things that are adjacent to cloud, they're also working on things like identity and network security. They're working on things like how on-prem systems communicate with the cloud and vice versa. A lot of the policies that we're writing get applied to on-premise systems. LinkedIn has an incredible number of systems that are in data centers that we own and operate, just as we are also in the midst of a massive transition to a cloud that our parent company owns and operates.

[00:21:30] Guy Podjarny: Yeah, makes sense. So I guess the other question, which gets me to a question that I got for you with my LinkedIn post when I told people that I have the pleasure of having you on as a guest is sort of how does the rollout work? So your organization is structured with these three different aspects, and maybe we focus a bit on sort of this product and cloud security, this platform security area. How do you have that organization engage with the broader engineering infrastructure, the rest of the engineering organization?

[00:22:03] Geoff Belknap: Well, I sort of alluded to this earlier. The model that we are chasing, and we're not fully done with this yet because I think it takes time, is twofold. One, we're trying to change the culture around how people think about this. So certainly, LinkedIn started at a place where, because of their growth, they were relatively small and lean. I still think of us as lean, but a relatively smaller organization. Today, it's about 20,000 people. When LinkedIn started, it was very much in the position of, "Let's review everything before it ships. Then that way, we don't have to burden the developers or the infrastructure engineers with thinking about security because we'll just take care of it centrally."

What we've begun to shift towards is we're going to embed partners or assign partners, depending on which part of the organization you're in, so that you can do a couple of things. One, you can have a security engineer that is effectively forward deployed with your organization, that is there to really well understand what you're doing, what your goals are, what your current objectives are for your fiscal year or for the next five years, and help sort of problem-spot and steer you in a good direction, whether that be cloud security or app sec or just risk in general, to make sure that you have that resource at the ideation phase, or the earliest phase possible.

I think the other side of that is, we're looking at, what if you want to hire your own person and you want to identify a champion in your organization? So a security champion that you want to invest security training in, and maybe this person is the best in the world at, we'll say, machine learning. Maybe it's better for you to just give them some security skill set, instead of teaching a security engineer everything there is to know about machine learning. In that case, that person can accelerate reviews. They can do the ideation. They're a qualified machine learning engineer or an AI engineer, whatever it might be. But now they have some security skill set, and they're very tied in, and they've built a default relationship with the security organization, so they understand, as policies change, what's going on, what the current threats are, what the big issues in security are. They can be the person that has a direct tie to the security organization.

I find that that works really, really well. That's very impactful, but it's also very expensive. I think not every organization can afford or even warrant having an embedded security engineer or a security champion. In that case, what we have is we have engineers that are organized as partner-focused engineers that will be sort of assigned to your part of the organization and be responsible for driving success and security outcomes for you.

[00:24:33] Guy Podjarny: So if I go back, you're saying you have this hybrid. It might not be binary. It might not just be these two options but a path between, “Hey, if you are an organization, as in a business unit within LinkedIn or a group within LinkedIn and you want to hire your own person, I'll support you. I, the CISO, will support you in skilling up that person and having them be a part of my governance exercise and kind of knowing that security is being done. If you don't have the budget or are unwilling or otherwise choose not to go down that path, then I will assign you a security engineer that would accompany your team to the degree that it makes sense.” Is that correct?

[00:25:11] Geoff Belknap: Yeah. I think that that's perfect. Thanks for making sense of my long-winded answer. I think the most important thing I always come back to is there's no one strategy that works perfectly for everybody, other than this: you have to have strong relationships between the business units that you support and security, and you have to demonstrate that you understand the unique aspects of their part of the business. If you're in a small business, it's really easy to understand what's going on everywhere. If you're in a large organization, you have to build that web of relationships so that you don't get disconnected from what they're trying to do. Not every product, not every part of the platform needs the same scrutiny or the same sort of solution for them.

Like you said, some people don't need a dedicated security engineer, and some people need lots of them. So just being flexible and adaptive, because at the end of the day, all you really want is to make sure that all the right parts of the organization get the right attention from security that they need to be successful, that our customers and our members need for us to protect their data.

[00:26:11] Guy Podjarny: Yeah. Okay. I think that makes sense from an operations perspective: you run it, and you adapt to the business. Maybe challenging the point at which these things sometimes do need to come together, there's this notion of measuring risk, or understanding how you manage risk, govern risk, make statements about the risk level to prioritize. So how do you think about that? You have this system. It's disparate, it's large, and you work in different ways. How do you track risk across the org?

[00:26:40] Geoff Belknap: I think this is the one thing that I will say is important to keep consistent across the organization. You have to have a common vocabulary for risk. You have to have a common way that you talk about it. That common way has to be fairly objective. What you don't want, a worst case outcome, is you're sitting down with a couple of engineers or executives, having a discussion about something risky that you've identified needs to be fixed, and the conversation veers towards, "Well, you said this is a high. But maybe it's a medium." Like, "I don't agree with how you measure this," or, "What do you mean by this ratio or this metric or this finding?" That is a worst case outcome because now you've veered off the path of, "What should we do to fix this?" to arguing about how you've measured it.

What I found is, and I know this will be a very unpopular opinion, that things like CVSS are actually useful for eliminating this conversation. I think many people will agree, including me, that CVSS is imperfect and is not great. What it is very useful for, though, is being effectively an open-book measuring tool. So I can open the book. I can say, "Hey, great. Engineering leader, I'm going to pick somebody." If you're listening to this podcast and this is you, I don't mean you're a problem. But if you are the head of data engineering and you don't understand why we rated something as a critical risk, I can go, "Great. We marked this as CVSS 9 or whatever it might be. You can go look at what went into that. We can show you the back-of-the-napkin math on how we came to that calculation, and you can disagree with it."

But the chances are you're not going to disagree with it to the point where it goes from a nine to a two. You're going to disagree with it where it goes from a 9.6 to a 9.4. Great, I'm happy to have that conversation. But the end result of that conversation is the same. This is a high risk. This is a critical risk. We need to address this, so let's talk about how we're going to address it, what your plan for addressing it is. I find tools like CVSS, when we're talking about risks of bugs in software, something like that, are really useful to steer that conversation to where it needs to be, which is, "What are we doing about it?" Not whether we agree it's a risk or not. Because you probably agree it's a risk, and people's first reaction, as humans, is to go, "Ah, it's not as big a deal as you're saying it is." Like, "Well, sure. But let's not have that discussion. Let's have a discussion about how fast you should fix it or what you should do or what resources you might need to be successful."
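
For reference, CVSS v3.x defines fixed qualitative severity bands, which is why the 9.4-versus-9.6 debate doesn't change the outcome. Here is a minimal sketch (plain Python, function name invented) of mapping a base score to its rating per the published bands:

```python
# CVSS v3.x qualitative severity bands, per the FIRST.org specification.
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS base scores range from 0.0 to 10.0, got {score}")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Arguing 9.4 vs. 9.6 lands in the same place: both are Critical.
assert cvss_severity(9.4) == cvss_severity(9.6) == "Critical"
```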

[00:29:04] Guy Podjarny: Yeah, that's a very large-scale and quite astute perspective on it. Who does the governing? Is it you who needs to have that conversation? Do the different businesses within the company need to track it themselves, with you holding them to a shared model? Does that work?

[00:29:22] Geoff Belknap: I think if the organization is healthy and you've done a good job of building those relationships throughout engineering or R&D, most of the orgs will do that themselves, right? Your tooling can spit something out. We've got some tooling. Even if it's a human that's done the review, it will go into the issue. It'll go into JIRA or whatever tooling you're using and say, "Okay, we found this bug. Here's the CVSS score. Here's a link to the math behind it if you really need to see it." If you've built a strong relationship, and you've invested in training and educating people on how the process works, that's usually the end of it. They go, "Oh, it's a nine or whatever. We're going to address this. Here's what we're doing." In some cases, it's a systemic risk that you've identified. You're like, "Oh, we've identified that the way we've built this architecture, I'm going to make something up, means that maybe our access controls aren't as strong as we need them to be." Obviously, one product engineering team is not going to stop everything they're doing for the next three years and address that systemic risk, but it needs to be addressed. So now, you have an escalation path towards engineering leadership or product or even executive leadership, if it needs to be, to get someone, or a group of someones, to be on the hook, to be accountable for addressing that issue.

I think there, again, it really just comes down to like you have to build those relationships. A, security has to have strong relationships. B, the company has to care, right? There are a lot of companies that would look at that and go, “Sorry. Too expensive. Not going to fix it. We have insurance, and customers will probably forgive us.” I think what's the most important thing you can do as a security practitioner, and certainly if you're a CISO looking for a role, is identify and only work for those companies that actually care about fixing those problems. Because, spoiler alert, there are companies out there that do not care. They're really good at saying, “Security is very important to us or your privacy is very important to us,” but they're not really good at putting their money where their mouth is. I think if you have a strong organization, if you have a strong culture that's oriented towards doing the right thing for your customers and your members, the rest sort of takes care of itself.

[00:31:18] Guy Podjarny: I think, so from a practicality perspective as you run through this, there's a bit of a question about the role of the central organization versus, not just the caring, but actually the practice of finding vulnerabilities. In a couple of recent conversations, I heard a perspective that says, "The security team's job is to help you be aware of a vulnerability. Give you the tools, give you the tech so that you're aware of a vulnerability. But the businesses own the risk." Do you agree with that statement? How do you think about, basically, whose ownership it is? I'm intentionally just giving you that framing to relate to, but how do you think about who owns the tooling or the actual discovery of the vulnerabilities, and then subsequently who owns the risk? You've already answered who owns maybe tackling that risk.

[00:31:18] Geoff Belknap: I think it's a great question. It's certainly something I grapple with on a regular basis because the answer is there's no one way to think about this yet. Certainly, there are a lot of people that would very much like security to own all the risk and to just go away and fix it themselves and leave them alone to build whatever product they're building. The reality in most companies is that, yes, the business, at the end of the day, the CEO owns the risk or owns the risk decisions for the business.

But thinking about it in such simplistic terms sort of belies our understanding of how businesses actually operate, right? The CEO is not the main person that shoulders risk for the organization. If you interviewed 100 CEOs, I don't think any of them would say, “It's my job to own all the risks for the organization.” They would say it's their job to clear all the obstacles and make sure that people have what they need to succeed. I think about security in a similar way. I want to remove roadblocks for people. By doing so, I want to make the organization more secure, more safe. I want to make sure that your data is safe, trusted, private, and secure.

But every organization is different. So I will say, at LinkedIn, certainly, I own a certain slice of the risk. But I certainly own the responsibility and am accountable for raising visibility of technical risk; there are other people that cover sort of business risk or financial risk. I make sure that the right people in the organization are hearing about the important risks that we need to address. Again, those are usually long-running things that need either a horizontal initiative or some long-running program to fix, where we might need to shift the way that we do business to adjust.

Thankfully, there are not lots and lots of those to address. But every person who's either part of the executive team or part of leadership in a product organization, you own risk, right? If you really are an executive leader in that organization, you own that product or that function or that platform. That means you own the risk that is generated by how you've built that or how you operate that or how it interacts with its customers and members. I think you're only a true leader when you internalize that ownership, that accountability for that problem space. Now, I think it'd be great if every organization could afford to dump massive amounts of money into a centralized security function that would just take care of that for you. But the reality is that's not how it works in most places.

[00:34:24] Guy Podjarny: Double clicking one more and then I've got a different question for you. Say there's whatever, some static analysis, some network scanner, whatever security solution that finds vulnerabilities in this model. Does your team own it, buy it, and give it or like enable the teams to be able to use it? Or does every team choose what they want to use?

[00:34:47] Geoff Belknap: No. At least at LinkedIn, we own the tooling that might detect or automatically flag security problems. We partner very closely with the people that own CI/CD and the people that own the entire pipeline of how a product goes from bits of code to actually getting shipped. So there are a couple of checkpoints or milestones along that path to prod, whether it might be something that's as easy as a linter or something that might show up in your IDE that's like, "Ah, you didn't want to do that. That's bad." But at the same time, early in the process, it's going to be a guardrail, not a gate. It's going to be like, "You probably didn't mean that, and this is going to throw an error. But if you really want to commit it, go ahead and commit it."

Then we're building additional tooling that sort of, I would say, in the middle of the process, as it's baking or as it's building, that will throw errors and identify problems that need to be fixed before it can move on to being shipped to production. It might be able to go to test. It might be able to go to an earlier staging environment but it can't go to production. There are other gates you can build. We do have some gates on ramp to prod. You might be able to ramp a certain amount. But before you get to full global ramp, you've got to have effectively a flag or a ticket that says, “This has been reviewed by security,” right? What that means, even though it says reviewed by security, is it's gone through a trusted review process.

We consider trust to be privacy, safety, and security. That means that privacy and legal have looked at it and said, "Great, you're doing the right things. You're minimizing customer data or you're using customer data in the right ways." Security has looked at it and said there are no obvious immediate flaws that would generate a security issue where we'd be concerned about mass exfiltration or something like that. Then safety looks at it and goes, "Ah, great. You've sort of checked the boxes here, either automatically or through a manual review, where we're making safe decisions about how customers' interactions might go on the platform." Then the end result is it's going to ship.

Now, that sounds very onerous, and sometimes it can be if it's a brand new product. It’s going to get lots of scrutiny. But the ideal, the perfect end state is it can kind of fly through all those guardrails and fly through the couple of things that have to be hard gates automatically, without a human needing to be involved through, sort of, policy or whatever technical process might exist, and then get to prod. We do that zillions of times per day. So I think the most important thing we can do, again, is get out of people's way and sort of develop all these things that a human might be looking for into a policy that can be written and executed by a machine, looking at all that code, understanding all the context of the product, understanding all the context of our regulatory privacy and safety requirements and get an amazing product at the end of it. We own delivering on that as much as possible, but there are a bunch of partners that are involved.
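
As a rough illustration of the guardrails-versus-gates flow described here, consider the sketch below. The stage names, the findings list, and the trusted-review flag are all invented for illustration; this is not LinkedIn's actual pipeline.

```python
# Hypothetical promotion policy: guardrails early, hard gates near production.
def may_promote(target_stage: str, open_findings: list,
                trusted_review_complete: bool) -> tuple:
    """Decide whether a release can move to target_stage."""
    if target_stage in ("commit", "build"):
        # Guardrail territory: surface findings, never block.
        return True, f"{len(open_findings)} warning(s) surfaced, not blocking"
    if target_stage in ("test", "staging", "partial_ramp"):
        # Mid-pipeline: still allowed, but findings must be fixed before prod.
        return True, "allowed; open findings must be resolved before full ramp"
    # Full global ramp is a hard gate: no open findings, and the trusted
    # review (privacy, safety, and security) must be complete.
    if open_findings:
        return False, "blocked: open security findings"
    if not trusted_review_complete:
        return False, "blocked: trusted review (privacy/safety/security) missing"
    return True, "allowed: full global ramp"

print(may_promote("build", ["weak TLS config"], False))
print(may_promote("global_ramp", [], False))
print(may_promote("global_ramp", [], True))
```

The point of the shape, per the conversation above, is that most changes fly through the guardrails automatically and only the hard gates ever require a human.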

[00:37:24] Guy Podjarny: Yup. Making security sound easy, right? You're sort of partnering and kind of helping highlight those things to the rest.

[00:37:30] Geoff Belknap: Yeah. I think the end goal is to make the easy thing the secure thing, right? Just by default, make it easy to be secure.

[00:37:37] Guy Podjarny: One simple parting question, how do you measure all of this? How do you know that you're doing the right thing? How do you kind of quantify everything you've just discussed to measure your security program?

[00:37:48] Geoff Belknap: The short answer is poorly, and I'll elaborate on that by saying there's no real good way to measure security. I think there are a lot of ways to measure individual changes you make in a security program to see if they gave you the return you were looking for, in terms of either, "It's quicker to review this product," or, "We're turning around risk decisions faster," or, "We've eliminated an extra gate that wasn't strictly necessary in the ship-to-prod step." But the reality is there just isn't a good set of metrics out there that definitively tells you where you are, because the number one thing I get from peers or from people that are new to security leadership is, "Hey, can I look at your board deck? What are you telling the board? What are you reporting to the executive team?"

While there are a bunch of common themes, which, just real quickly, for everybody who's going to send me a message after this: the things I look at are, what are the current threats or current top-of-mind things, what are we doing about them, and what are the long-term things that we've made progress on since the last time we talked? There's a lot that fits in there. There's a lot of nuance there. But beyond that, you have to figure out what the right way to measure things is for you. Your organization already has a set of metrics that it uses to decide whether the business is being successful or not. Security needs to be able to represent itself, and whether it's being successful, in those same terms and that same language that the business is using to decide whether it's being successful or not, whether it's hitting its growth metrics or not. Otherwise, you're sort of this nerdy kid on the side yelling about some esoteric thing while the business is looking at how it's growing and how it's meeting its commitments to its customers and its members.

[00:39:20] Guy Podjarny: Yeah, great answer, despite the fact that there's no sort of absolute answer. I think it's a good perspective. Geoff, I can probably grill you with questions here for a couple more hours easily, but I think we're kind of out of time. Before we part ways, just one final question I like to ask every guest coming on the show. If you take out your crystal ball and you imagine yourself, or someone sitting in your chair in your role, five years from now, what do you think would be most different about the reality? Not so much about LinkedIn's evolution, but rather the world, the ecosystem.

[00:39:53] Geoff Belknap: Yeah. I think the abstract position I have on this is I expect within the next five years that there will be more regulation around this role. If you had asked me this when we did this before, I probably would have said, "Ah, maybe in 15 or 20 years, there'll be sort of a Sarbanes-Oxley or something like that for CISOs." But I think now, as things are changing really rapidly, and we're seeing direct impact from things that used to be nuisance attacks on critical infrastructure, and the way that you use customer data being so heavily regulated or so under scrutiny, I suspect in the next five years you're going to look at something not unlike the CFO's role or the general counsel's role, where the CISO has some regulatory accountability for how the security program operates in an organization.

I don't know what that's going to look like or where it's going to come from first, but I can't help but look down the road and see that like that's going to come. A CISO is going to be on the hook for these things directly and directly accountable to regulators and lawmakers. I don't know how I feel about that but I definitely feel like it's coming.

[00:40:56] Guy Podjarny: That's a really interesting perspective. You expect this to come with transparency as well? Kind of reporting on your method of handling vulnerabilities, just like you do for accounting?

[00:41:06] Geoff Belknap: I think, look, Alex Stamos talks about this a lot, and it really resonates with me. Pre-Enron, CFOs didn't really have skin in the game. Your auditors didn't have the same skin in the game that they do now, at least in the US markets. I think for the CISO today, there are a lot of companies that hold them accountable. Certainly, in the financial services space, the State of New York is definitely on a path to making the CISO somebody that has to sign off on audit reports, or has to have a transparency report about their cybersecurity activities that comes out on a regular basis.

I see that all as foreshadowing towards, especially for public companies or companies of a certain size, they're going to be required to have a CISO, probably required to have somebody that has cybersecurity experience on the board, and then be required to report something either in their regular quarterly filings or sort of an annual report as to the state of security in their organization. I don't see how we, as a society, leaning so heavily on technology and everybody's data being something that is part of their daily lives, get to avoid something like that. I think transparency is good. I think that makes us all better.

[00:42:07] Guy Podjarny: Yeah. I fully agree with that. Geoff, this has been a pleasure. Thanks a lot for coming onto the show and sharing all these great insights and learnings.

[00:42:15] Geoff Belknap: Thanks for having me, Guy. I appreciate it.

[00:42:17] Guy Podjarny: Thanks, everybody for tuning in, and I hope you join us for the next one.

[END OF INTERVIEW]

[00:42:24] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode.

Bye for now.

[END]
