
Season 3, Episode 14

How Slack Stays Secure During Hyper Growth With Geoff Belknap

Guests:
Geoff Belknap

In the latest episode of The Secure Developer, Guy is joined by Geoff Belknap, Chief Security Officer at Slack. Geoff discusses what drew him into security and reveals why it's critical for the security team to be recognised as a full-fledged part of engineering. He explains why it makes sense for companies to develop a track record of transparency and actively encourage community participation through bug bounty programs. Geoff also concludes that companies should encourage basic security hygiene rather than seek a silver bullet that does not exist.



Geoff Belknap: "The reality is, people want to move fast, and people want to change fast, and develop things fast. That's just completely counter to how enterprise software was delivered five, maybe 10 years ago. The faster you deliver something, the better you better be at understanding the risk, and visualise it, understand it, and impact it. I was very lucky in that I started out with four engineers, probably finish out this year at about 30 people full-time in the security team. We are focused on the things that reduce harm to our customers and ensure that they can sort of have this foundational trust with us."

[INTRODUCTION]

[0:00:36] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter, @thesecuredev.

[INTERVIEW]

[0:01:07] Guy Podjarny: Welcome back, everybody, to the show, The Secure Developer. Today, we have on the show, Geoff Belknap, the Chief Security Officer for Slack. Geoff, thanks for joining us.

[0:01:17] Geoff Belknap: Yes, thanks for having me.

[0:01:19] Guy Podjarny: There's a lot of conversations, a lot of questions that I sort of have when you think about security at Slack. I personally, sort of at Snyk, we heavily rely on Slack for many things. Therefore, heavily rely on the security of Slack.

[0:01:33] Geoff Belknap: Thanks for being a customer.

[0:01:35] Guy Podjarny: Always. Thanks for building the product. With that, there's a lot of goodness and importance around the security of Slack as a whole. But specifically, the reason that we're sort of having this chat, that I thought would be really interesting is that Slack is a very fast-growing company. It's definitely a company that moves quickly, that develops software quickly, that sort of ship software quickly. I'd love to sort of dig through as we have the conversation here is, chat a little bit around how you build and how you handle security in such a sort of tremulous environment.

[0:02:11] Geoff Belknap: Great. Yes.

[0:02:11] Guy Podjarny: Can you sort of spend a couple of minutes just sort of telling us about your background, and how you sort of got to Slack and what you do there?

[0:02:19] Geoff Belknap: I've been at Slack, just about two years now. Before that, I was at Palantir for almost six years and worked on some really interesting, challenging data security problems there. Really, this is the second half of my career. For about 15 years before that, I did a bunch of network engineering and telecommunications architecture work at startups, and banks, and telcos. This is really the extension of me finding the magic and connecting two things together with cable and the lights blinking on either end. That, extending to – well, I guess that means bad things can happen as a result of lights blinking on either end of a cable as well.

As for my journey to Slack: in my career, I've really appreciated opportunities that allow me to have an impact. Certainly, over the last few startups, the measure of whether something is going to be interesting to me is whether a good decision I make is just as impactful as a bad decision. For people in security, you're not really having an impact unless a bad decision can also contribute to a terrible outcome. While nobody wants that, for somebody who really wants to have a big impact, I think you need to understand that it matters.

[0:03:35] Guy Podjarny: Yes, very much. I think it's one of the areas that definitely contribute. Was there like an “aha” moment that made you see the light or the dark? I don't know how you sort of refer to the security side of things.

[0:03:48] Geoff Belknap: No, I think it's something I've always been interested in security in general. I think I was definitely one of those kids that wanted to be – I think, firstly, it was a pilot, then it was a cop, or a firefighter, or something like that. I think as I grew up and matured, I was always very interested in the legal system, and law, and justice, and security sort of naturally played to those ideals. I think in the mid-90s when I was getting into telecommunications, and originally, the industry was just getting deregulated, and this was taking off. The natural gravitation of somebody who's smart and likes engineering problems was, go work on that, go build the Internet. Certainly, I had the opportunity to work on some of the first cable internet deployments, and we built some of the first pre-standard broadband Internet networks. That was that was a lot of fun.

Fast forward from that: at the time, in the nineties, nobody really thought about how you could abuse that. It was the time of Kevin Mitnick, when abusing a telecom network meant making free long-distance phone calls and getting information from it. Now, the telecommunications industry has matured to a point where our entire world economy is built on top of it. Not only is that how those economies are driven, but entire businesses are built on top of this fabric that we've added to the world. It's really interesting to be able to contribute to how you manage risk, and how you make sure you can enable technological change and economic innovation, but do it in a way that manages the risk and makes sure we're enabling that change and not enabling something negative.

[0:05:28] Guy Podjarny: Yes, for sure. Table stakes are definitely higher.

[0:05:30] Geoff Belknap: Yes, for sure.

[0:05:31] Guy Podjarny: The game has gone up a few notches. You joined Slack when, how long ago?

[0:05:36] Geoff Belknap: I think it was January 2016, so not quite two years yet.

[0:05:39] Guy Podjarny: Okay. Cool. Were you going to tell us a little bit about the evolution of the security team at Slack? I mean, we spoke to Shaun Gordon here from New Relic. He talked about being kind of the first security hire at 140 people in Optimizely. It was maybe like a similar story. What was Slack's trajectory around sort of size and for security hires?

[0:06:01] Geoff Belknap: I think when I joined Slack, I want to say, we were about 300 people, depending on sort of what kind of startup you are. That's a reasonable time to really focus on executive-level leadership for security. Especially for Slack, as we try to really compete in the enterprise space that we're in, with some really big names, it becomes a critical part of the business. That was what was really interesting to me. I was very lucky, and that I started out with four engineers, which is probably the biggest team that I've started with. Once or twice before this one, I've started security teams. It's usually one or two engineers that are on loan from some other team. Slack had already had four engineers that were full-time dedicated to security.

We've built from that. In not quite two years, we'll probably finish out this year at about 30 people, full-time, in the security team, which is a full-fledged part of the engineering organisation at Slack. The idea is that we are focused on the things that reduce harm to our customers and ensure that they can have this foundational trust with us. Because, as you said, your whole business is built on this. I'm looking across at an engineer right now and watching him use Slack on his phone. It's both a thrilling and a sobering thing when I ride the Caltrain into work every day: I look around the car I'm sitting in, and I see people using Slack all over the place. That's certainly the first time I've worked somewhere that is ever-present in people's lives and business, and influences how they do everything.

We're focused on how we ensure that we have that, like I said, foundational trust. We have our application and platform security team, we have our operational security teams, and we have our teams focused on incident response and management. Then, of course, we have our teams focused on how we handle risk and compliance, how we de-risk the Slack environment, and how we make sure customers can align whatever compliance programs they have to what we're doing. All towards the end goal of making it so that, if you're developing on the platform, or if you're using the platform to run your business, you don't have to think about those things.

You can develop an app or build something on top of the platform, and understand that you can plug right into all these things we've built. You can bring your Fortune 100 organisation to Slack, and understand that we've already provided all these touch points and mapped everything to your program. Eventually, you understand that any of the apps you pull out of the app directory are suitable and make sense to use in your environment, and are not just toys or funny, shiny things for your devs to play with, but actual parts of your business.

[0:08:43] Guy Podjarny: You came in and said you had four engineers, and then you talked about security as part of the engineering team. First of all, maybe let's sort of unpack that a little bit. It's not always a given that you consider security to be a part of the engineering team. Is that the way it's structured like security as a whole is a part of the engineering organisation?

[0:09:02] Geoff Belknap: That's certainly the way it is now. I think, when you start a security program, and you invite somebody in to sort of be the leader of that security program, you're never really sure exactly where that's going to go. In fact, I just had coffee with someone this morning, who's looking at, "Hey, we're almost at the same size Slack was when they started their security program. What do you recommend we do?" There's really no one common path that everybody can follow, other than, say, you start with generalists, and you sort of set them to work on your highest priority problems, and you build from there.

Slack, when we started the security team, I think we were part of the privacy and policy organisation, which flowed through the business side of the organisation. Now, I report directly to Cal Henderson, our CTO, and we're a full-fledged part of engineering, a first-class citizen in engineering, which really helps people understand that security matters to Slack. In this field, I feel so cliché saying, "Security is important to us and security really matters." But to some extent, that's the only term I can use to describe it. It matters, it matters deeply to me, and it matters deeply to people at Slack.

We do treat security in a first-class way, and make sure that the platform security teams and the application security teams are involved in reviewing new products that the product and engineering teams are working on. They're involved in customer integration and customer engagement. When we're talking to an enterprise customer who wants to bring their entire environment over to Slack, we're involved in that discussion, making sure that we understand their risk model, and that they understand how we address it. We're involved when it comes to talking to developers who might be developing an app, or an integration for a customer, or trying to build a business on top of the Slack platform, helping them understand how to do that in the most secure way possible, how to make the most impact, but also be able to offer products to our enterprise customers on top of the platform.

I think, everywhere that we can possibly touch security in the organisation, it's represented in a really positive way. I think, ultimately, that should be your goal if you really do take security seriously in your organisation. Whether you're building a startup or you have an established organisation, that's what you have to do, and you have to do it in a first-class way.

[0:11:21] Guy Podjarny: I wonder if this type of perspective, which I totally relate to. I agree that one security engineering – to me, the choice of using the term and putting the team in engineering conveys. I don't know if I'm wrongfully reading into it. A few things, one is, it seems to prioritise building security systems, sort of building things that are a little bit more engineered to work well, as opposed to maybe relying on sort of manual efforts or actions. That's maybe one. Definitely, an aspect of an engineering culture.

The second is, it talks about building security in versus bolting it on, or whatever cliché you want to choose there. But I wonder how much of this comes from being a startup of the DevOps-revolution era: an environment in which the mandate of breaking down walls between teams and sharing responsibility, at least in the development and ops world, permeates into security.

Well, if you go to an organisation that has 10 years more lifespan than Slack, let alone 20, 30, 40, the structure is not necessarily like that. I think, again, this is all positive, I'm just wondering a little bit out loud. Do you think this is the prerogative of an early company? Or can you envision a 20-year-old company having its security team led from within the engineering organisation?

[0:12:57] Geoff Belknap: I think to your point, I'm definitely the very fortunate beneficiary of being able to start a security program and have the ability to make some of these foundational changes in how we think about it culturally. I think it really comes down to just that right. How does your organisation think about security culturally? How do they think about security? Are problems secrets? Are these things to discuss, and analyse, and really engineer solutions to? Or they're things that you sort of bring up at your audit committee meeting, and then never speak of again?

Specifically, to your question of whether I can see a 20-year-old or older established organisation running security from within engineering: I think the answer is yes, I can see that. But that organisation would have to work a lot harder at it than I have. Again, I'm very fortunate in that I'm starting with a clean slate. There's always this really tempting ability, certainly for people on Twitter, to go, "Well, if this company had just done X" or "Why don't they just X?" It's a great example of not understanding how complex some of these problems are, and how complex large businesses are.

I think you've seen a lot of large organisations pivot recently. You've seen leadership changes at Microsoft and a wholesale shift in that organisation. You've seen places like IBM make dramatic changes to how they operate their environment. I think that's an example of: if some of the largest organisations in the world can make that change, then your organisation can make that change too. It just has to be a high priority for that organisation.

[0:14:38] Guy Podjarny: On that note a little bit, and talking a little bit about the pace. I would say that the premise of – the motivations, the reason that you would want to shift your organisation as a whole, or your security organisation seems to fundamentally come down to pace. The Internet is moving at an increasingly fast pace, and a business needs to catch up. You need to iterate, you need to ship stuff quickly, you need to try out ideas and fail quickly, so you can kind of switch on and find the one that wins. Everything needs to be fast, fast, fast. Fast as is a scary thing sometimes for security.

I guess, on that note, let me dig a little bit into that speed element. That's one of my curiosity points. How do you see it? At least from the outside, it seems like Slack is shipping stuff very quickly. What's your philosophy, and how does it come into practice, around balancing this speed of delivery?

[0:15:43] Geoff Belknap: The faster you deliver something, the better you better be at understanding the risk and controlling it. If not just controlling it, being able to visualise it, understand it, and impact it, be able to have some levers or some controls to adjust it. I think, in Slack's case, just like everybody else is trying to be – they're either trying to play agile, or actually be agile. I don't know if that's probably a terrible cliche way to describe it. But, the reality is, people want to move fast, and people want to change fast, and develop things fast. Slack pushes hundreds of – we could push hundreds of changes a day, that are constantly improving the product in the platform, and that's just completely counter to how enterprise software was delivered even five, maybe 10 years ago.

Certainly, that's something we talk with our auditors and our customers about. But you can move at that speed, and you can do that even if it confounds the auditors and scares more traditional risk managers. Because the reality is that your customers want that: they want you to go fast, they want the product to improve fast. The reason people sign up for SaaS products and subscription-based services is that they expect the product to constantly improve. Certainly, I think everyone, just like Slack, wants to meet that expectation.

Well, that makes my job both very exciting and very stressful, in that I have to constantly be building a program, controls, and visibility: the tools that can tell us what's happening and alert us if something's going wrong. It means we have to respond fast when we do find something that's gone wrong, or when we think something's veering off in a direction that's less than safe. But it also means we have to have, culturally, the ability to flag problems inside the organisation, to escalate them, to have discussions with comms, or general counsel, or your CEO, or your CTO about what these problems are, how they impact the business, and what we should do. Do we change a high-level, top-level strategy? Is this a simple logistical change?

You need the ability to quickly surface problems, discuss them and make decisions, and then disseminate an action plan inside your organisation. All of those things contribute deeply to whether you are secure or not, and whether your business is going to survive.

[0:18:09] Guy Podjarny: How does that play out in reality? Share the concepts. You want to be able to communicate, you want to talk about risk without sort of some zero-tolerance mindset. Like sometimes, you need to move it. How does that manifest? Can you share some best practices, some tools, or maybe it's principles that you apply that we can adopt elsewhere, as well in kind of daily life?

[0:18:37] Geoff Belknap: I think a great example here of how these manifests, and how to measure whether this is working well in your organisation is. If you're at a maturity phase, where you can have a bug bounty, this is a great sort of test for, A, is your organisation mature in terms of how it handles risk and how to response to things? B, are you mature enough that you can accept random people on the Internet, telling you how bad your product is, other than your customers? I think it's really a big step.

For us, we have a bug bounty. I was very lucky in that, when I walked in the door, Slack had already established this. A bug bounty is really something that, from the outside, seems like an interesting idea and valuable. But to do it well, it has to become part of the fabric of your technical and business culture. You're putting something out there where you tell people, "Please find flaws in my product, tell me about them, and we will fix them and compensate you for the trouble." That seems really straightforward. But what it means is, we get submissions all day long from the bug bounty program, and we scrutinise each of those, look at them all with a very serious eye, and triage them. Then we have an internal discussion with the team that looks at that, and they make a decision: is this really a problem or not? Do we need more information?

Then the stuff nobody ever sees kicks into gear. We file a ticket, and we might go over to that product team, or the product management team, to confirm that this is a problem and understand it. What priority is this? How fast do we think it needs to be fixed? Is this something where we think customer data is at risk immediately and we need all hands on deck to fix it? Or is it something that can just go into the next release cycle?
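To make that triage flow concrete, here is a minimal sketch in Python of the kind of severity rubric and fix-timeline mapping Geoff describes. The severity levels, SLA numbers, and report fields are illustrative assumptions, not Slack's actual process.

```python
# Hypothetical triage sketch: the severity levels, SLA numbers, and report
# fields below are illustrative assumptions, not Slack's actual process.
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1  # customer data at risk right now: all hands on deck
    HIGH = 2      # exploitable, mitigations exist: fix this week
    MEDIUM = 3    # confirmed bug, limited impact: next release cycle
    LOW = 4       # hardening opportunity: backlog

# How fast each severity must be fixed, in days (illustrative policy).
FIX_SLA_DAYS = {Severity.CRITICAL: 1, Severity.HIGH: 7,
                Severity.MEDIUM: 30, Severity.LOW: 90}

@dataclass
class BountyReport:
    title: str
    customer_data_at_risk: bool
    exploitable_without_auth: bool
    confirmed: bool = False

def triage(report: BountyReport) -> Severity:
    """Map a scrutinised report to a severity, which sets the fix SLA."""
    if report.customer_data_at_risk:
        return Severity.CRITICAL
    if report.exploitable_without_auth:
        return Severity.HIGH
    return Severity.MEDIUM if report.confirmed else Severity.LOW

report = BountyReport("IDOR on file download", customer_data_at_risk=True,
                      exploitable_without_auth=True, confirmed=True)
severity = triage(report)
print(f"{report.title}: {severity.name}, fix within {FIX_SLA_DAYS[severity]} day(s)")
```

The point of encoding the rubric, even this crudely, is that the "hard discussion" Geoff mentions next happens against an agreed policy rather than ad hoc each time.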

That's a hard discussion to have if you don't have a strong engineering, strong security culture because people generally don't like to fix bugs. People don't like to fix bugs. I'm sure nobody listening to this podcast hates to fix bugs, but everybody else would rather sort of –

[0:20:42] Guy Podjarny: I think it includes the listeners.

[0:20:43] Geoff Belknap: I think everybody else would sort of like to wait until next quarter, or maybe the next release, or something else that's going to make money. It's very easy to see the friction between the need to address risk, and the need to sort of drive the business forward. Your security program has to act in a credible way to represent that risk to the business and get priority on that. Now, we've established that there's a priority there, and now, we get down to fixing it, confirming it. Then, you have to go out and tell that researcher like, "You know what? You're right, that's a bug. We fixed it. Can you please confirm whether it's fixed?" Then, you're giving that researcher permission, like, if you want to go tell people about this bug, if you're going to write a blog post or post it on Twitter, whatever it is, you have our permission. There are some reasonable constraints on that. But at the end of the day, you're agreeing as part of this sort of social contract, that they can tell people about this flaw that they found.

You have to take a moment and let that settle in: you're agreeing to let someone tell other people about something you fucked up. I don't know any other industry – well, I think there are very few industries that are willing to let people tell them what they're doing wrong, take that feedback very seriously, fix it, and then let people tell other people about it. Could you imagine being in a relationship where somebody posts all your flaws online? Well, maybe there are relationships like that.

[0:22:07] Guy Podjarny: They do exist, yes. I think the bug bounties are an amazing thing. We have one at Snyk. From an early age, I see them as such a massive boon and advantage. You basically get a bunch of auditors of varying skills, going off, and testing, and finding issues. Whatever it is that you pay them, it's probably nowhere near as much as you would have paid hiring those auditors in. We love those audits. Even if people sort of submit the wrong thing, we send them schwag. We send them some stickers, the magic ones that we have. Definitely successful.

I like the idea; I'd never really considered it that way: thinking about bug bounties as a transparency vehicle. I think when DevOps happened, a lot of the movement, a lot of the revolution, maybe came from people acknowledging failure, embracing failure. Internally, talking about blameless post-mortems and things like that. But even externally, somebody getting up on stage and talking about this massive outage they had, how they screwed up multiple times, how they handled it, how they learned from it, and how they're doing better today.

That didn't really happen in security. It's risky, it's scary to do this. It's really scary to stand up on stage and describe your existing security process, because you feel like everybody's going to find the gaps. It definitely is scary to get up and talk about security mistakes that you've made. But bug bounties are kind of the safe way to do it. Because it was a mistake, somebody found it, but it wasn't a breach.

[0:23:45] Geoff Belknap: Yes. I think it's a safe way for organisations to dip their toes in the waters of transparency. I think, the reality is, we're not headed towards a future where there's less transparency. I'm looking at a future where people need to have more information and consumers need more information about what the privacy impacts of your products are, what security features you have. I'd love to see an environment we're operating in where there was like, an Energy Star logo, or just like you have nutrition facts on the side of the cereal box. So people can sort of evaluate one SaaS provider versus another, and go, "Oh, this one's got worse SSL or TLS certificates than this one. Or this one handles my data in a different way."

We're going to move towards a time like that. Maybe five years ago, you could try to sue somebody into not presenting about your product at Black Hat or DEF CON, or not writing a blog post. The reality is, you can't. Going that way, trying to keep a lid on how you're approaching these things and how slowly or quickly you're fixing them, is the wrong way to go. The right way is to treat these things seriously, give them the right priority, and do a consistent job of addressing them, even if you're not fixing them as fast as someone else might like, as long as you're fixing them and doing it in a transparent way.

I think what companies, especially startups, are starting to figure out is that having a track record of transparently and consistently addressing these kinds of security problems improves the brand. You're improving the trust you have with your customers and your prospective customers. Without getting too far into the weeds of the news, if you look at breaches and other incidents involving companies that historically have not been transparent and consistent in how they handle these things, the fallout is much more dramatic than if people know you've been doing the right thing, or making an honest attempt to take security seriously, all along. It makes it much less of a TV trope to say you take security seriously when you're in front of Congress.

[0:26:05] Guy Podjarny: But once again, it's sort of the high table stakes there. Let's sort of maybe continue down this line of laying blame. I remember, I'm not sure if I'm remembering my facts correctly, but I think maybe it was when clouds bleed happens, or when one of the Slack team, security team sent flowers, or some box of chocolates, or was the other way around. I seem to recall some showing some love from the sort of Slack security team to another breach or vice versa. Well, I don't remember the details. These types of occasions stick to mind because they're so unusual in the security chatter.

Typically, I mean, we just had a massive breach at Equifax, and we've had massive ones before, maybe not quite that size. But each time one of those happens, some big breach, some big data leak, the finger-pointing begins. It's all about laying blame. Part of it is washing your hands: it's not my fault. Part of it is gloating, sorry to say, over somebody else's failure. I find, from the perspective of somebody providing security tools and talking about reducing risk, and I suspect you share this as well, that it's really hard to talk about security in a positive tone, to educate for security, to educate about reducing risk, without having the narrative hype up the risk or feed these fear-induced, blame-induced environments: if you don't do this, you're going to lose your job; if you don't do this, you're going to be breached; the world will come to an end. What are your thoughts on this? How do we advance on that path?

[0:27:53] Geoff Belknap: I think the answer is definitely more cake.

[0:27:57] Guy Podjarny: That's always good.

[0:27:58] Geoff Belknap: I think what you're thinking of is, we sent Atlassian some cake or some cookies recently at the launch of their product. I think in the past, we've also sent cake or pizza when friends are having a bad day. Because the reality is, even though we're all in this market, and we're competing against each other, whether it be Microsoft or Atlassian, or anyone else, we all rise and fall together. The tide comes in and out. We all go up and down together. A breach at one cloud provider is not a cause for joyous celebration. It's a time for us to all reflect on, “That could have been us. What are we doing to make sure that doesn't happen?” Quite frankly, for people in my position, and a lot of people on my team its a, “Do those guys need any help? Is there anything that we can do to help them out? If there's any information that we have, whether it be threatened diligence, or smart people that are working on a problem?”

Quite frankly, we're often very ready to help, and I've seen that go both ways. People have reached out and offered assistance across the community. That's one of the parts of the community I really like. The alternative is, we also live in an environment where people are very ready to pitch their product on top of whatever the latest breach is. I think that's because security is very hard to understand: hard to grasp the ROI of, hard to know when you should be buying versus building, and what you should be buying.

There are a ton of things to buy. There's something like 1,700 different startups being covered by different analysts right now, all of them in this market trying to sell a security-relevant product, and there's a ton of money floating around. They all need an opportunity to market. So anytime there's some sort of news about security, I get a flood of emails, as does everybody else in the industry, I think, about how vendor X's product would have stopped this. It's both infuriating and unhelpful to the industry as a whole, because that's not what we're here to do. Quite frankly, I don't think I've ever received an email where that was true, where vendor X's product would have prevented whatever I just read about in the news.

It makes it harder for our industry, our discipline, our engineering practice to be better recognised, because of this sort of ambulance chasing. I think, eventually, between M&A and just the natural momentum of the market, that will settle down and this will make sense, but I think it's going to take longer. People will realise that security, cybersecurity, risk management, information security, whatever you want to call it, has to become a core part of how you operate a business.

The thing you should spend most of your time on, when you're deciding what to buy or what to build, is: what are the highest-priority things for your business, what's strategic, what's important, and what are you spending the most time on in terms of problems to solve? Then maybe those are the things to spend your money on. Otherwise, like we talked about before, you really should only be building things that are directly going to enable your business to achieve its strategic outcomes.

If you're not focused on those things because you're distracted by something else that's taking all your time, that's a great thing to spend money on to make go away, either with people or technology. Everything else should be focused on giving you the best visibility and the best ability to control or influence those outcomes.

[0:31:37] Guy Podjarny: I definitely kind of relate to the vision or sort of this goal, shall we say. Maybe, I'm a tiny bit more pessimistic around whether consolidation of the security market would lead us there. I feel like there's more salesmanship or sort of fear-mongering is kind of not really gotten away from our culture at any point in time. But I think that it's something that, if we don't do anything about it, definitely it's going to get worse, and we need to work on it. We need to push for it and try to incentivise it. I guess, maybe that's – we kind of have time for one more topic of conversation. How do you see incentives in this world? Because you talk about the counter-example of chasing an ambulance, chasing hack. How do you celebrate security success? How do you reward good behaviour and good achievements in security?

[0:32:32] Geoff Belknap: I think the way I think about incentives is that transparency helps a lot here. The economic incentives have really been misaligned for security for a long time. That, if you look at breaches historically, and I think if you look at the current breach du jour everyone's talking about, it's probably too early to tell for that one. But if you look at the numbers, historically, you'll find that the cost of being breached is very short-term. Especially if you're a public company, you'll find that your stock price might take a hit, at least temporarily. But the reality is like, that will come back, and people will buy their diapers, or hammers, or whatever it is that you sell, and things will stabilise again.

Right now, the economic incentive is to scare people into buying your thing, or spending money in a certain place. It's not directly aligned with actually making things better, because people, by and large, have not spent much time studying, and putting information out in a broad way about, what "better" is and what it takes to get there. Somebody I'm a big fan of, Bob Lord, talks about this in the sense that we all know we should eat less and exercise more. Certainly, I'm well aware of that fact, personally. But occasionally, you skip a day. You don't go to the gym, or you skip leg day, or whatever it is, and that's fine.

But you understand that there are consequences to that, and that making different choices about what you eat, or having a cheat day, is different from deciding, "Well, I know I'm supposed to eat less and exercise more, but I really like eating a full sheet cake and drinking an entire bottle of whiskey every day." While that sounds wonderful, you can't sustain it on a daily basis. Something bad is going to happen, and chances are, more bad things than wonderful ones.

You have to manage risk in a way that leads you down a path where you're constantly improving. That's not any fun. It is fun to buy the brand-new APT dark-web threat-intelligence machine-learned something-or-other, and to wrap yourself in this comfortable blanket of, "Oh, good. Now all the threats will be found by whatever this thing is." But the reality is, the things that make you safer every day are the really boring things: managing inventory, managing your risk, understanding what your risk is, understanding –

[0:34:55] Guy Podjarny: Security hygiene.

[0:34:55] Geoff Belknap: Yes, understanding the hygiene. While it's easy to say, "Oh, if you just keep up to date on your patches, then you'll be fine." Well, in a complex environment, understanding what is there to be patched, what the current patch date of it is, how many machines do you actually own or VMs are you actually running at any given time. That's a really complex problem, and it has a straightforward solution. It is easy to get distracted by buying that magical silver bullet, versus doing that hard hygiene work, or that hard sort of self-care work.

I think more transparency puts the incentives in the right place. If you had to disclose what your security status, your hygiene, was as part of your quarterly filings, or if the market was incentivised in a way where you had to have, like we talked about, a nutrition-facts kind of label on the side of your product, you would be incentivised to make sure you're constantly improving in these areas. You'd be less excited about buying silver bullets, and more focused on making steady improvements and listening to what your consumers want, or giving consumers what you think they need, versus just trying to defend yourself against an inevitable lawsuit.

[0:36:08] Guy Podjarny: Definitely a complicated equation to have. It's indeed fun, it's fun to build this sort of advanced APT machine learning, dark web thing, as well.

[0:36:17] Geoff Belknap: To be honest, I think there are certainly – there is a need for some of these, and many of them are valuable.

[0:36:22] Guy Podjarny: There's no silver bullet.

[0:36:24] Geoff Belknap: Yeah. Having the best algorithm is not going to solve any problems for you if you aren't doing any of these basic hygiene things. It's not that the hygiene things are easy. It's just that they – if you look at sort of Maslow's hierarchy of need applied to security, you need to address some of these things first, before you're spending time on self-actualisation, or making sentient AI.

[0:36:46] Guy Podjarny: This was fascinating. I have all sorts of other questions for you, but I think we're sort of out of time. Before we let go, I'll ask you a question I ask all of my guests here. If there was sort of one tip, one advice, one sort of, maybe the other way around, some pet peeve that you have that people are not doing that could help a security team, a development team, a company level up their level of security, what's your tip?

[0:37:07] Geoff Belknap: I think I was just ranting about this on Twitter, which honestly, I have to be more specific about. But I think the best tip I have is if you only have one place to focus, focus on people. Focus on the really hard, non-instantly gratifying thing of like, invest in your people, invest in hiring great people, invest in giving the people that you have hired the things that they need. Listen to them, give them your time, give them your support, your trust, your respect, and they're going to do great things for you.

Putting great people on your team is going to be way better than spending double or triple that amount of actual hard money on some security product. In fact, some of the best security products we have are the least expensive things we spend money on. Spending money, but also just investing time, in your people is the best thing you can do. It's also, quite frankly, the least expensive, easiest thing you can do.

[0:38:12] Guy Podjarny: Yes, excellent tip. Definitely. If you invest in people, I think the rest will come. You will choose the right tools, you will build the right practices. Fully, fully agree. Well, this was a super great conversation. Thanks a lot for coming. If people want to kind of keep up with your set of inputs, want to follow you on Twitter, or sort of contact you in some of other way, can you sort of share how we can find you.

[0:38:34] Geoff Belknap: You could can follow me on Twitter, which is probably a terrible idea if you want to have a rational conversation, But I'm @geoffbelknap on Twitter. I don't know, come work at Slack. That's the easiest way to spend a bunch of time with me.

[0:38:46] Guy Podjarny: That works. Cool. Well, thanks a lot, Geoff for coming on the show.

[0:38:49] Geoff Belknap: Thanks for having me. This was super fun.

[0:38:52] Guy Podjarny: Thanks for everybody that tuned in and join us for the next one. Thanks.

[OUTRO]

[0:38:58] Announcer: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter, @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over 100 videos about building developer tooling companies, given by top experts in the field.
