Season 3, Episode 18

Collaborative Security With Marten Mickos

Guests:

Marten Mickos

“Marten Mickos: Bad news is good news. Tell me about all the shit and I'll fix it. Criminals don't wait for an invitation. They hack whatever they like. The Internet Bug Bounty, or IBB, is a non-profit. We collect donations from wealthy companies and institutions. Then we turn around and use that money to sponsor bounties for open-source projects who can't pay for the bounties themselves. We engineers and nerds always think that technology is the solution. That's incorrect. In security, humans are the solution.”

[INTRODUCTION]

[0:00:35] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a programme dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev.

[INTERVIEW]

[0:01:06] Guy Podjarny: Welcome everybody. Thanks for tuning back in. Today, we have an amazing guest, Marten Mickos from HackerOne. Thanks for joining us on the show today.

[0:01:14] Marten Mickos: Thanks for the invitation, Guy.

[0:01:17] Guy Podjarny: Marten, you've done a lot in your career before HackerOne. I have a lot of questions and topics I want to talk to you about, but before we dig in, give us a brief history of time. What were your key activities, and how did you get to being CEO of HackerOne?

[0:01:32] Marten Mickos: Yes, I'll start with the present, meaning HackerOne. We hack for good. We organise bug bounty programmes, vulnerability coordination programmes, and crowdsourced pentesting. That's what we do. I've been CEO for nearly three years now. Many people know me more from what I did before that, which was in open-source software. Most people know me from having been the CEO of MySQL – the first and the last, the only CEO, from 2001 to when the company was acquired by Sun in 2008, and then I stayed another year. I come to the whole topic of security from the world of developing in a collaborative way, developing infrastructure software.

[0:02:16] Guy Podjarny: Yeah.

[0:02:17] Marten Mickos: I could admit that when I joined MySQL and even before that, I didn't care about security. I couldn't spell the word security. I'm sure it was not on my radar, so when we now look around and find all this software that's not secure, I know where it came from. It came from people like me. Now, I'm here to fix that problem, repent and get everything in shape and ask for forgiveness for whatever I've done wrong in my life.

[0:02:45] Guy Podjarny: It's always good to have empathy for your users, right, for the people coming in; you've been in that position. You probably will be again, right? Even the most expert security people working at security companies: oftentimes you'd find that in their development process, there's still not enough attention given to the security practices themselves.

[0:03:00] Marten Mickos: Not nearly. I mean, it's terrible. It's completely terrible. But we don't care whether it's terrible or not. We know we can bring positive change and we will do a little bit of change or a lot of change, whatever it takes, but time will cure every problem.

[0:03:15] Guy Podjarny: Yup. It starts with caring, right?

[0:03:18] Marten Mickos: It does. But we must know, and this is true both in open-source and in security, that those people who care the most can also be the most difficult to work with. The power of a collaborative model is that you can collaborate without agreeing. You agree on the mission, but you may disagree on a lot of details, yet you can collaborate. That is actually a passion of mine. It's something that drives me to figure out how you get people to work together who don't really agree on anything.

[0:03:48] Guy Podjarny: A lot of what you do today with HackerOne is this world of bug bounties, right? You deal with bringing together this community of people looking to find vulnerabilities, an activity that has generally been frowned upon on the legal side and that the world is now evolving to accept: finding vulnerabilities in someone's code and reporting them. You're trying to make all of that a positive action.

I think a lot of that comes down to collaboration around a fairly finicky topic. It's almost like receiving feedback. Let's dig into that a bit. So, first, can you tell us a little bit about what a bug bounty is? Then we can talk about these complexities.

[0:04:22] Marten Mickos: Yeah. Bug bounty is about paying somebody for finding the weaknesses in your own software. It's emotionally very hard to get there because you have to tell the world that you are not perfect and that you would like to hear the bad news. You must have this mindset of saying bad news is good news. Tell me about all the shit and I'll fix it. It's not nice to say that, like many people don't go for their medical checkups for that reason. They don't want to know.

[0:04:51] Guy Podjarny: Yeah. Yeah.

[0:04:52] Marten Mickos: You have to have the readiness to want to know, but once you do that and you tell the world that you are interested in input, then you'll get it. There are hundreds of thousands of white hat hackers in the world. If you tell them that you would like to know what's wrong with your system, they'll tell you. Fortunately, we have a model here where we pay money to those who find it and we pay based on the severity of the find. There's a baked-in business model that works beautifully so that the best hackers get the best payments over time. Not in every single instance, but it's a very fair system where I pay for you to tell me what's wrong with me. The worse the problems are that you find, the more I'll pay you, but I pay nothing if you find nothing.

[0:05:39] Guy Podjarny: Right. Very much the success model there.

[0:05:42] Marten Mickos: Yeah.

[0:05:43] Guy Podjarny: Bug bounties are an interesting model. I'm a fan. We use it here at Snyk, as well. How do you find people's approach to it? When you have those first conversations with somebody that has not run a bug bounty programme, what types of questions or objections come up?

[0:06:00] Marten Mickos: The first question typically is, okay, how do I know that I'm not inviting criminals if I open a bug bounty programme? When we get that question, we look at our customer and we say, how do you know that you don't have criminals attacking you right now? Then we discuss the notion that criminals don't wait for an invitation. They hack whatever they like.

So, what you do in a bug bounty programme is you additionally invite the good guys to hack, but for many, it still feels awkward to go out and say, “Please come and hack me.” But then when you think about it rationally, you realise that it can only bring positive change. It can only improve the situation. It's true, the bad guys may be hacking me right now. That's question number one. It's a philosophical and emotional event.

[0:06:56] Guy Podjarny: It's a really good point that's not intuitive, which is like you're saying, come hack me, but really, again, the people that weren't waiting for the invitation would have been doing that already.

[0:07:03] Marten Mickos: Exactly.

[0:07:04] Guy Podjarny: It's not really that much of a change.

[0:07:06] Marten Mickos: We don't give any benefit to the hackers other than if they find something. A criminal would never sign up with us because they can't gain anything. On the contrary, we will know who they are. We'll know from where they hacked. We'll know their identity if we're paying them a bounty.

[0:07:20] Guy Podjarny: Is there a difference around the monitoring of it? If I was playing devil's advocate to that, I would say, well, but like when you're monitoring your system, then you're looking for these types of attacks and you try to block them and get alerted and respond to it. When you turn on a bug bounty, do you lower those defences? Is there anything that makes you more susceptible?

[0:07:39] Marten Mickos: Oh, that's a great question. Technically, yes, some companies do that. If they have very strict monitoring of their attack surface and they see every attempt coming in, when they run a bug bounty programme, they may actually whitelist some IP addresses. They may ask the hackers to come through a VPN, so that they can see it happening. That's technically elaborate and it works.

What we must remember, however, is that the security hacking we do works best when it's diverse and free. You get the best hackers if you don't put such restrictions in place, because the best hackers don't want to mess with a VPN. So, it turns out that the best programmes are actually open to everybody and don't require the VPN, but that's up to our customer to decide.
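To make that monitoring arrangement concrete, here is a minimal sketch of how a defender might label sanctioned bounty traffic arriving from a known researcher VPN egress range, so it stays visible to monitoring without paging the on-call team. The CIDR range, function name, and labels are illustrative assumptions, not anything HackerOne prescribes.

```python
# A small sketch of the monitoring arrangement described above: sanctioned bounty
# traffic arrives from a known VPN egress range, so alerting can label it instead of
# paging the on-call team. The CIDR range below is a placeholder, not a real one.
from ipaddress import ip_address, ip_network

RESEARCHER_VPN = ip_network("203.0.113.0/24")  # hypothetical bug-bounty VPN egress range

def classify_source(src_ip):
    """Label traffic so bounty testing is visible but not treated as an incident."""
    if ip_address(src_ip) in RESEARCHER_VPN:
        return "bug-bounty-research"   # log and watch, don't page
    return "untrusted"                 # normal detection and response applies

print(classify_source("203.0.113.42"))  # bug-bounty-research
print(classify_source("198.51.100.7"))  # untrusted
```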

[0:08:25] Guy Podjarny: How often do these programmes run on the production sites, versus running on some sample site or some side site?

[0:08:34] Marten Mickos: Nearly all of our programmes run on production code and production sites, and there's a good reason for that: we want to find the exact vulnerabilities that otherwise could have been used by a criminal. There's no point in finding vulnerabilities that are not there in the production state.

[0:08:49] Guy Podjarny: Yup.

[0:08:50] Marten Mickos: Best to hack production, but we also have exceptions here. For instance, one of our customers is Panasonic Avionics, the world's largest maker of in-flight entertainment systems. We hack them on a test device, not in an aircraft, not while in flight. Yes, there are many situations where –

[0:09:08] Guy Podjarny: Safety concerns trump those advantages.

[0:09:12] Marten Mickos: Exactly. But most of it is live production, websites, mobile apps, APIs, typically.

[0:09:19] Guy Podjarny: Okay, cool. This is the – I cut you off a little bit, right? We talked about the first objection.

[0:09:23] Marten Mickos: The question, yes.

[0:09:24] Guy Podjarny: That one. What’s next?

[0:09:26] Marten Mickos: The next one, which is a very valid one. It's sad, but valid. We have companies who say, “Okay, maybe you can tell me about my vulnerabilities, but guess what? I already have a hundred that I'm unable to fix. Why would I ask for more?” That's a more serious question in the sense that you're right. If you can't fix them, why would you even bother to know about them?

The way we discuss it is, we say, okay, if you can fix only a hundred, or only fifty, or whatever your capacity is, make sure you fix the most severe ones. You should run a programme and state in the programme that you're focused only on the very severe vulnerabilities. Make sure you're not fixing some low-severity stuff that doesn't matter.

[0:10:12] Guy Podjarny: Right.

[0:10:12] Marten Mickos: That's one. Secondly, if you are unable to fix the bugs in your code, then you have a bigger problem that you need to fix anyhow. You need to deprecate the code. You need to get rid of your vendors. You need to hire more software engineers. I don't know what you have to do, but if you are unable to fix vulnerabilities and bugs in your code, you are not on a path to success. I believe that soon enough, governments will pass laws stipulating that every company must be capable of receiving vulnerability reports and capable of fixing them. Otherwise, they will not be allowed to carry consumer information.

[0:10:55] Guy Podjarny: Yeah.

[0:10:56] Marten Mickos: We're not there yet, but I think we are heading that direction.
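The prioritisation Marten describes, fixing only the most severe findings when remediation capacity is limited, might look roughly like the sketch below. The report fields, CVSS scores, and capacity figure are made up for illustration.

```python
# Hypothetical sketch: triage incoming reports by severity when fix capacity is limited.
# The field names ("id", "cvss") and the capacity figure are assumptions for
# illustration, not HackerOne's actual data model.

def pick_reports_to_fix(reports, capacity):
    """Return the most severe reports, up to the team's fix capacity."""
    ranked = sorted(reports, key=lambda r: r["cvss"], reverse=True)
    return ranked[:capacity]

incoming = [
    {"id": "R-101", "cvss": 9.8},  # critical: remote code execution
    {"id": "R-102", "cvss": 4.3},  # medium: minor information leak
    {"id": "R-103", "cvss": 7.5},  # high: authentication bypass
]

for report in pick_reports_to_fix(incoming, capacity=2):
    print(report["id"], report["cvss"])  # R-101 9.8, then R-103 7.5
```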

[0:10:59] Guy Podjarny: That's an interesting direction. First of all, I fully agree. You want to have the information. You want to know about all the vulnerabilities, so you can prioritise accordingly, even if it's a little bit harder to prioritise a long list versus a short list. Maybe let's get a little bit more into that remediation process. Who do you see engaged? Somebody comes along and gets a bug bounty report: a vulnerability has been reported. What kicks in in the organisation? Does it go to development? Does it go to some security triage? Who receives it? What's the path to remediation that you see most often?

[0:11:33] Marten Mickos: In the most beautiful case, the report comes into HackerOne, you click one button, and it moves over to Jira, if Jira is what you're using. It's passed over to software engineering with a high priority, and they'll start fixing the bug. When they've fixed it, they mark it as fixed, and it comes back into HackerOne. We know that the vulnerability has been removed. That's the most beautiful execution here.

There are many other aspects of it. You don't need to use Jira. You can use whatever tool you like. Once you've fixed it, you should collect information about your fixes and go back to the software design stage, and make sure you don't create the same problems again. If you can loop back into your software architecture, your software design, your choice of libraries, your choice of frameworks, then you can say, okay, if we are getting these vulnerabilities all the time, let's change something in how we code. That's how an organisation will evolve to a new level of security.
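As a rough illustration of that "one click to Jira" hand-off, the sketch below forwards a generic vulnerability report payload into a Jira project as a high-priority bug. The payload shape, project key, instance URL, and credentials are hypothetical; this is not HackerOne's actual integration or API.

```python
# A minimal sketch of the report-to-ticket hand-off described above, assuming a
# generic webhook payload with "title", "severity", and "report_url" fields. The
# Jira instance, project key, and credentials are placeholders for illustration.
import requests

JIRA_URL = "https://example.atlassian.net/rest/api/2/issue"  # hypothetical instance
AUTH = ("bot@example.com", "api-token")                      # placeholder credentials

def forward_report_to_jira(report):
    """Create a high-priority bug ticket from a vulnerability report."""
    issue = {
        "fields": {
            "project": {"key": "SEC"},
            "issuetype": {"name": "Bug"},
            "summary": f"[VULN] {report['title']}",
            "description": f"Severity: {report['severity']}\nReport: {report['report_url']}",
            "priority": {"name": "Highest"},
        }
    }
    resp = requests.post(JIRA_URL, json=issue, auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "SEC-123", tracked until engineering marks it fixed
```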

[0:12:35] Guy Podjarny: Do you see that learning in practice? In concept, for sure: issues get reported, they get remediated. But do you see a decreasing trend? I mean, if somebody got a dozen reports about cross-site scripting vulnerabilities and fixed them, they're hopefully already in a better state than before. Do you see the frequency of these cross-site scripting type issues decreasing over time?

[0:13:01] Marten Mickos: We do. We do see a difference. It's a little bit like when you make popcorn in a microwave oven: in the beginning it pops a lot, and then less and less, which is a good sign. You know that it has all popped. It's a little bit the same with vulnerabilities: you will have more in the beginning, and then it will slowly start shrinking. Of course, you push new code, so that brings them back, but still, with well-behaved programmes, we do see it going down. Some of our most security-conscious customers who really pay attention to this show a clear trend of cross-site scripting bugs going down, because they are systematically eradicating them from their frameworks or from their software development environment, essentially.

[0:13:45] Guy Podjarny: I guess the beauty of it is that if it is harder for the hackers participating in the bug bounty programme to find a vulnerability, it will be equally or similarly harder for an actual attacker to find it. Those bugs that remain are harder to find.

[0:14:02] Marten Mickos: Exactly.

[0:14:03] Guy Podjarny: They're indeed harder for a real-world criminal to –

[0:14:06] Marten Mickos: What do you do then if you run a programme? You start increasing your bounty values. You can start a programme by saying, for the highest severity, we pay $5,000. Then after a while, you say, no, $10,000, $15,000, $20,000, and you keep going up and up in price as the number of vulnerabilities goes down. This is how you ensure attention from the hackers, because they know that there are fewer possible bugs to find, but they also know the price is increasing. As the bounty increases, they stick with you.

[0:14:37] Guy Podjarny: Yeah. Crowd economics in action there, right?

[0:14:39] Marten Mickos: Exactly. Now we take the next logical step. It means that when you look at bug bounty programmes, the highest bounty they pay is a measurement of their security posture and their security hygiene. It's only the ones with good security that can afford to pay high bounties.

[0:14:56] Guy Podjarny: Right. That assumes, at the end of the day, a small number of valid reports; otherwise, you're going to go bankrupt.

[0:15:03] Marten Mickos: Exactly.
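The escalation Marten outlines, raising the top bounty as valid critical findings dry up, could be expressed as a simple policy like the sketch below. The thresholds, step size, and cap are invented numbers, not a HackerOne recommendation.

```python
# Illustrative sketch of the escalation logic described above: as valid critical
# findings dry up, raise the top bounty to keep hackers engaged. The thresholds and
# step size are made-up numbers for illustration only.

def next_top_bounty(current_bounty, critical_reports_last_quarter, step=5_000, cap=100_000):
    """Raise the top bounty when a quarter produces few or no critical findings."""
    if critical_reports_last_quarter == 0:
        return min(current_bounty + 2 * step, cap)  # nothing found: raise aggressively
    if critical_reports_last_quarter <= 2:
        return min(current_bounty + step, cap)      # findings are drying up: raise
    return current_bounty                           # still plenty to find: hold steady

bounty = 5_000
for found in [6, 3, 1, 0, 0]:          # criticals found in successive quarters
    bounty = next_top_bounty(bounty, found)
    print(bounty)                      # 5000, 5000, 10000, 20000, 30000
```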

[0:15:05] Guy Podjarny: Cool. So this is the bug bounty; as I said, I'm a fan. I think they're working well, and people are starting to open up to them. This concept, I guess, came from the worlds of Google and Facebook, right? Originally it happened at the top tier, and companies like HackerOne make it accessible, allowing the non-giants to, at a reasonable effort, go off and open that up and expand it. I recommend everybody use it. But what if you're not a commercial entity? What if you are MySQL? Although that one at least had a financial entity behind it. What if you are an open-source project, a not-for-profit? What do you do then?

[0:15:42] Marten Mickos: Well, as the former CEO of MySQL, I have to say, MySQL was a commercial entity. We figured out one of the most fantastic business models for open-source back in the day, which we're very proud of. We managed to combine the good ethos with making money. But of course, there are many open-source projects that have no budget. Fortunately, the world is seeing the value of this now, so there are several initiatives in play now to support them in their bug bounty programmes.

We have started, together with other companies, an initiative called the IBB, or the Internet Bug Bounty. It's a non-profit. We collect donations from wealthy companies and institutions, and then we turn around and use that money to sponsor bounties for open-source projects who can't pay for the bounties themselves. That's a way of getting security hackers to focus on open-source projects.

Actually, the situation is even better. Many hackers have such a strong belief in transparency, and therefore in open-source software, that they'll do it for free. Many of them say, “I don't need any bounty. If it is an open-source product, I'll hack for free. I'll report my vulnerabilities for free. If you pay me a bounty, I'll give it back to you, or I'll give it to charity.” There's a lot of goodwill in that space. IBB is just one; there are other similar initiatives.

The problem is not the funding. The problem is typically finding the project maintainers who will take the time and have the discipline to actually fix old technical debt, because we know it's much more fun to develop new stuff than to fix the old. Having the discipline to go back and fix something that you created with good intent, but that just wasn't perfect: that's the real bottleneck.

[0:17:35] Guy Podjarny: I fully agree. We've seen this with our report on the state of open-source security, where we surveyed a bunch of maintainers. Generally speaking, most people just have no idea how to approach security. They would not have a disclosure policy on their project. They've never audited their code. Most of them claim that if an issue was reported, they would reply, and granted, there's probably a bit of a selection bias among those who chose to even answer the survey. I forget the stat, but a substantial number of them talked about replying within a week or within a month. I think none of them really consider that in volume; they don't consider how they would approach it.

[0:18:14] Marten Mickos: That's where I would say the Linux Foundation is doing wonderful work. They have the CII, the Core Infrastructure Initiative, which is there to help open-source projects do the maintenance of the code and have people on staff who are ready to fix things, because we should never make it unattractive to produce new open-source code. If we put too many obligations on people, they'll just not do it. We have to let it be really happy and fun and positive, but then we must also have the discipline side of saying, “Okay, this library or this product is now so common and so important that we need to hire full-time maintainers who will take joy in maintaining the code.” It's doable. It's absolutely doable.

[0:18:58] Guy Podjarny: For those open-source projects that don't address an issue, do you make those vulnerabilities public? Do you think we should publish vulnerabilities that have been found via a bug bounty programme and responsibly disclosed to a maintainer who, for potentially good reasons, including lack of bandwidth, did not address them? Do those bugs make it to the public eye?

[0:19:22] Marten Mickos: I think it's a question not just about open source, but in general: if a vulnerability has been found and the owner of the software does not take action, what should other people do? I used to think that you have no right to impose anything on anybody, so you should just keep it secret. Now, I've changed my own viewpoint. I do believe it's in society's interest that we publish it. I would point to Google's Project Zero. They do security research. They find vulnerabilities. They report them to the owners of the systems, but if the owners don't do anything, they will go and publish it unilaterally.

There you see a commercial entity like Google deciding to do that because they think it is in the interest of our digital society. I tend to agree today. If you produce code and it's used by many people on the Internet, then you have a responsibility for the collective. If you're not ready to take that responsibility, then keep your code to yourself.

[0:20:25] Guy Podjarny: Yeah, I agree. I think fundamentally security through obscurity just doesn't work. If you're going to hide it, it's not going to stay hidden for long, and the attackers or the bad guys are well incentivised and invest much more in finding those issues and exploiting them, versus the defenders, who oftentimes are looking for easy pickings. So, by making an issue known, it becomes something that is much more within reach for somebody to protect themselves against: ideally because there's a fix and you just need to embrace it, but sometimes by other means if it hasn't been fixed, or at least by assessing its impact on your systems.

[0:20:58] Marten Mickos: Yeah. You and I will agree that openness is great and collaboration is great and it's the only way to achieve security, but the world doesn't agree with us yet. The cyber security market is $100 billion a year. The majority of those dollars go into products and services that are secret, not collaborative, not sharing anything, and not having any transparency in what they do. The world is wasting a lot of money on old-school security practices and products that just don't cut it in today's digital world.

[0:21:32] Guy Podjarny: Let's talk a little bit about the world, right? About this evolution of the world to accept, at first, bug bounties, and subsequently transparent security. There's a whole desire, right? The whole DevOps evolution, or revolution, came about aspiring to a transparent environment, right? Where people have these blameless environments, these blameless cultures. We talk about it, and yet, every time there's a security incident, every time there's a breach, the first two lines are, so-and-so got fired, right? It's almost like the knee-jerk reaction is to fire somebody.

Then it's scary to stand up on stage and talk about a breach that has happened, or even one that has nearly happened, right, and share those results. The bug bounty element of it, vulnerability disclosure, I feel, and maybe I'm a little bit biased, is on a positive trajectory. People are increasingly embracing it. What drivers do you see pushing us forward in this momentum, getting this world of cyber to embrace openness, to embrace transparency? Do you see that coming about?

[0:22:40] Marten Mickos: I think you said it. You said blameless. It is so important. We can learn it from the airline industry. In airline safety, they have a blameless attitude. They will never blame anybody for any mistake. They'll share the information across all competitors. That is why flying is so safe today. We, in our ignorance or stupidity in the software world, didn't apply the same rules. We should. I'll give you a concrete example. It's not security, but it's about being blameless.

GitLab, the company. They had an outage. No, more than an outage. They mistakenly deleted their production data. When that happened, they made a decision to go completely open about what was going on. They created, I forget what it was, maybe a live Google Doc, where they shared with the whole world how they were dealing with the issue. They were completely blameless. They talked about developer number one and developer number two. One of them had deleted the data. They never put blame on these people. They never said who it was. They just said, “This happened. These are great people.”

It was terrifying to watch it unfold minute by minute, hour by hour, but afterwards, when they got everything back and resurrected the site and people were back online, they had so much goodwill from their audience because they hid nothing and they blamed nobody. If we can take that single example from GitLab and apply it to software security and other aspects of the software development life cycle, we will be in much better shape.

[0:24:16] Guy Podjarny: Challenging that a little bit: security is still different, right? They deleted the data. They didn't expose it to somebody else. Do you think this notion of trust through transparency would extend far enough to win points for an entity that has, not lost, but exposed our data to an attacker?

[0:24:37] Marten Mickos: I actually think yes. Blame may be a natural instinct, and punishment is very typical; some countries punish more than others. I don't think it helps. If you fire a security person every time something goes wrong, you'll have to fire a lot of people and you'll have to hire a lot of people. Do you really want to do that? The ones you hire are people who got fired from some other job.

[0:25:01] Guy Podjarny: Yeah. Otherwise, they won’t take –

[0:25:01] Marten Mickos: You won't find a blameless person anywhere. If we could just settle and say, “Okay, guys and girls, we have all screwed up, we are all fallible, we are all vulnerable, nobody's perfect, let's not blame each other, let's do our best.” Of course, you must have good intent. If somebody fails with bad intent or true sloppiness, I mean concrete, really bad sloppiness, then it's a different case. That's a case of negligence and we have to take action. But if somebody makes an honest mistake, we have to know that that's how human beings work. If we don't want honest mistakes anywhere, we should employ robots and just have AI-produced software, and then we won't be needed.

[0:25:44] Guy Podjarny: Yeah. That's a different story there. I agree with you. I think fundamentally trust is the only asset, the only currency we'll eventually have. Breaches will happen. Granted, you want them to not happen, or to happen to only a few, and you don't want them to happen to you, but –

[0:25:58] Marten Mickos: Maybe trust is the only asset we ever had, like how do you know? It could be the trust was the number one thing 3,000 years ago and it still is.

[0:26:08] Guy Podjarny: I think today, as there are more and more of these breaches of major, presumably trusted brands leaking or exposing their data, making security mistakes because everybody's fallible, it's not necessarily that they've done it. The way they've responded to it is massively important in whether people will subsequently go back to them and trust them with their data again: because they dealt with it, and you know that if they leaked your data, first, they will tell you. Second, they had all the good intent to prevent it from happening in the first place, and they had your interest in mind and not theirs. Then lastly, they will learn from it and they will do a better job.

[0:26:43] Marten Mickos: Look at Equifax. It's sad to have to mention a particular company, but they had many vulnerabilities reported to them. They refused to take action. Then when they got breached, they refused to take responsibility. Then when they took responsibility, they said, “We've fired the people in charge.” Then it turns out that it's even worse than what they originally said.

There you see step after step after step of ignorance, arrogance, negligence, all these bad things. You realise then that it would be much better, when bad things happen, to admit it to everybody and say, “Okay, we've completely failed. Here's where we are. Please help us.” People will help. We have 200,000 hackers signed up for HackerOne. They're ready to help if somebody asks them to.

[0:27:33] Guy Podjarny: I think the Uber example and its previous CEO were probably a good example there as well, or a bad example rather. A lot of times things are hidden for a year. I think a lot of that mistrust is coming up, and as the new CEO states, part of their goal is to improve security, but fundamentally to improve trust in those elements. What type of role does regulation play? GDPR and a lot of these new regulations are coming into play, but really GDPR, as the big sledgehammer, is driving a lot of protection and some constraints around exposing data, right, and around disclosing to customers that data has been leaked.

It doesn't necessarily state exactly how you need to do it. Again, a bug bounty being one of those means, right, some acknowledgement. Do you see legislation or governmental activity promoting these types of practices, being a bit more prescriptive? Like, you need to have a bug bounty programme for you to qualify or score at this level.

[0:28:30] Marten Mickos: We're getting there. Yeah, we're getting closer to it. It's not good if legislation micromanages things, so it needs to give a broad enough mandate, but I do believe that governments need to say that any organisation owning or holding consumer information must have the ability to protect it in an appropriate way. Part of that includes receiving vulnerability reports from the outside and then enacting software fixes. We should mandate it for everybody.

Have governments done it today? No, not fully, but we're getting there. In the US, the Department of Justice has published a framework for vulnerability disclosure programmes, saying, here's how you do it; if you're interested, do it like this. They've saved a lot of time and money for customers. NIST has published their security framework, which is excellent. The FTC is recommending this to every consumer-facing company. We're getting there.

Now, in the US, they are passing laws that the Department of Homeland Security must test a bug bounty programme. Not all of these laws are perfect in how they were written, but they all drive in the right direction. I agree with you on GDPR: although it's in a way a monster and people are afraid of what will happen, it was the right mechanism. Here, a bunch of governments, the EU states, are saying, we're done with this thing where vendors don't take responsibility. You are responsible. You must notify if you have a breach. If you don't, we will take a percentage of your revenues. It's harsh, but I think that's what we need in today's world.

Although I have to say that cybersecurity is in a really sorry state right now, I do believe the ship is already turning. It will take time to fix everything, but we can see how decision-makers on the public and private side are agreeing that we must take resolute action.

[0:30:27] Guy Podjarny: Yeah, I agree. I love that a lot of these elements fundamentally boil down to transparency. They boil down to accepting that you're imperfect and being able to have people attack you and report issues. They boil down to, when there has been a breach, which is reasonably likely to happen, you have to own up to it, you have to share it, and you have to inform the people whose data has been leaked, and all those components.

[0:30:51] Marten Mickos: But, Guy, those of us who drive transparency and promote it in the world know we have to keep doing it forever. Transparency doesn't survive on its own. It has to be supported. You have to bring it to new worlds. We had open-source software, which was a huge movement. Now, we need to take transparency into security. We need to take openness from open-source into open APIs. We have to go to open data. All of this requires pioneers to drive it, demand it, rally people around it, because if we stop, if we get complacent –

[0:31:24] Guy Podjarny: Yeah. The natural state is to just not talk about it, right, and just hide it.

[0:31:26] Marten Mickos: Yeah, exactly. So, we must – at HackerOne, we defined it as one of our company values. We say default to disclosure. Meaning, unless there's a very good reason not to disclose, we will disclose.

[0:31:39] Guy Podjarny: Yeah. Be open.

[0:31:40] Marten Mickos: Whatever it is. I mean, not just security things, but anything in the company. We are driving a culture of openness. I know it takes daily discipline and commitment to keep it that way.

[0:31:52] Guy Podjarny: I think we started from the practical side of bug bounties and those components, then went a little bit high into the stratosphere to talk about how society changes to address it. Coming back down a little, let's maybe take a moment to talk about the other side of this. We talked about the recipients of these reports, how companies evolve and own it. Who's on the other side? Who are the people that you see coming in and trying to hack –

[0:32:15] Marten Mickos: The hackers.

[0:32:16] Guy Podjarny: The hackers that are participating in finding the vulnerabilities.

[0:32:20] Marten Mickos: We now have 200,000 individuals signed up on our network.

[0:32:23] Guy Podjarny: That’s a good number.

[0:32:25] Marten Mickos: Saying I am ready to hack. Of course, not all of them will hack and some of them may be fake accounts or I don't know what, but it still shows a huge interest in the world to be an active white hacker. We look at that group and say who are they? Where did they come from? Because we have never really published a recruitment ad for this. We just say, “If you're hacking sign up with us.” Now, we have 200,000 of them.

Many of them are young. The youngest are 14 years old. They can be older as well, but half of our hackers are between 18 and 25. They are all over the world, wherever there is a good level of basic education, mathematical and STEM education, and a reasonable understanding of the English language. That's where we get them from, typically from big cities where young people don't have that much else to do.

They typically have security as an important part of their life. They may be studying it. They may be working as a security person in a company. They may be a pentester somewhere. They do it as a day job, and then additionally they hack to maintain their skills, get the thrill of finding a bug, and enjoy the social aspect of talking to other hackers.

[0:33:44] Guy Podjarny: Do you see bug bounties used as an education vehicle? Do you see developers wanting to get into security register with HackerOne to try it out, to get some real experience? Or, even better, companies that build some form of training programme that revolves around finding vulnerabilities through these bug bounties?

[0:34:04] Marten Mickos: That would be the best. We sometimes say that some of the best hackers are also developers. They understand how software is developed. Vice versa, the best software developers also understand hacking. We would very much encourage and welcome software developers to try out hacking on our platform, and hackers who are on our platform to learn about software development, because it just increases their skill.

Then when it comes to education, we work with universities today. For instance, at UC Berkeley, they have a course called Cyber War, I think is the name of it, where every student, in order to graduate, must sign up with HackerOne and submit real vulnerability reports. Otherwise, you can't pass. We're seeing now a great advancement in the learning and the blending of the two because, and I'm sorry I'm getting philosophical again, HackerOne and bug bounty programmes aren't so much about security as they are about the software development lifecycle.

In the ideal state when all of this works, a bug bounty programme is just the logical last step of the software development lifecycle and it feeds back into the beginning. When we get there, it will be beautiful. Nobody will be 100% secure at any point, but we will be much closer to 100 than we are today.

[0:35:26] Guy Podjarny: I think of bug bounties oftentimes as continuous monitoring. One of the problems with security is that it doesn't have a natural feedback loop. There's no bar that shows you, like your performance or your CPU cycles, where you can see degradation. You can see how those become worse over time, you can anticipate a problem, you can set some alerts, and it can go back. Security tends to not hurt until it hurts really bad. There's no natural element.

I like, in the world of DevOps and the concept of continuous everything, that you want some ongoing monitoring that shows you whether you're getting better or getting worse. I think an active bug bounty programme is one indication of that. Not as live as a CPU cycle –

[0:36:08] Marten Mickos: You’re right.

[0:36:09] Guy Podjarny: But an indication of like how many reports are you getting at a certain period of time. Once you've established some status quo, it should be some red flag when you deteriorate, right? You need to be able to explain I shipped new software. My CPU cycles went up. I understand it. I accept it. But if you haven't shipped some major new functionality, you have an uptick in new vulnerabilities that are being discovered, maybe something's wrong. Maybe you need to go back and invest in security training or security controls in your system.

[0:36:37] Marten Mickos: Very true. You said it: the natural feedback loop for security in software. That's absolutely true.
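One way to turn that feedback loop into a concrete signal is to treat the monthly count of new valid reports as a metric with a trailing baseline and flag a sudden uptick. This is a rough sketch under assumed numbers; the window and threshold are arbitrary illustrative choices, not a prescribed practice.

```python
# A rough sketch of the feedback-loop idea above: treat the monthly count of new
# valid reports as a metric, establish a baseline, and raise a flag when the rate
# deteriorates without an obvious cause (such as a major new release). The window
# and threshold below are arbitrary choices for illustration.
from statistics import mean

def report_rate_alert(monthly_counts, window=6, threshold=1.5):
    """Flag the latest month if it exceeds the trailing-average baseline by 50%."""
    if len(monthly_counts) <= window:
        return False  # not enough history to establish a status quo
    baseline = mean(monthly_counts[-window - 1:-1])
    return monthly_counts[-1] > threshold * baseline

history = [12, 10, 9, 8, 8, 7, 6, 14]   # sudden uptick in the latest month
print(report_rate_alert(history))        # True: time to look at training or controls
```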

[0:36:44] Guy Podjarny: Yeah. This was fascinating and we could go on and on, but we ran out of time. Before I let you go, I want to ask you one last question that I like to ask every guest. If you had one pet peeve around security, or one word of advice around security that you would give a team looking to level up, what would that be?

[0:37:04] Marten Mickos: It's not one; I'll package many. First of all, we engineers and nerds always think technology is everything, that technology is the solution. That's incorrect. In security, humans are the solution. Not just the hackers who find things, but humans who take security seriously. I always tell people that I find there are two things that build security, and maybe only those two things. One is discipline. You must be disciplined about what you do. It's not about whether you did it once. It's that you did it every single time and you never failed to do it. The second thing is agility, doing things quickly, because when shit happens in security, it's all about how fast you can respond.

[0:37:48] Guy Podjarny: Yeah.

[0:37:49] Marten Mickos: When you have those two principles, you don't need to worry about all the technology that the vendors are trying to sell you, because you will be able to build a very strong security posture based on those practices, which are grounded in what human beings do. That's the good news here. We don't need all that hardware to make ourselves secure. We just need human beings who are passionate and committed to it.

[0:38:15] Guy Podjarny: Well, Marten, it's been great having you on. Thanks for coming on.

[0:38:18] Marten Mickos: Thank you, Guy. This was wonderful.

[0:38:20] Guy Podjarny: Thanks, everybody, for tuning in. Join us for the next one.

[END OF INTERVIEW]

[0:38:25] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show, or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over a hundred videos about building developer tooling companies, given by top experts in the field.
