
Season 4, Episode 38

You Own It, You Secure It With Andy Ellis

Guests:

Andy Ellis


In episode 38 of The Secure Developer, Guy speaks with Andy Ellis, CSO of Akamai. They discuss streamlining customer assurance, the role of an incident coordinator, and the value of transparency between a security company and its associates.



Andy Ellis: “In the early days as a security person you're told, ‘You want to be the gatekeeper. Nobody should be able to launch a product without your approval.’ That's horrible advice. You have to understand how humans make decisions to really grasp why transparency is so valuable. The goal here is to get a little bit better every day, not to be perfect. I think that's where many security professionals lose sight of how to operate. Humans have a set point of risk that they're willing to take. If you take a risk away from them, they'll engage in more risky behaviours.”

[INTRODUCTION]

[0:00:38] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow. It is a part of the Secure Developer community. Check out thesecuredeveloper.com for great talks and content about developer security, to ask questions, and to share your knowledge. The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com.

[EPISODE]

[0:01:12] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, we have a good friend, and a CISO I've had the pleasure of being a colleague to: Andy Ellis, who is the Chief Security Officer at Akamai. Welcome to the show, Andy. Thanks for joining us.

[0:01:25] Andy Ellis: Thanks for having me, Guy. I really appreciate it.

[0:01:27] Guy Podjarny: Andy, before we dive into the deep details over here, can you just tell us a little bit about yourself, the history, and how you made your way into security in the first place?

[0:01:37] Andy Ellis: Sure. I'm the Chief Security Officer of Akamai. I have been here for just over 19 years now. I got my way into security by a very roundabout method. I was at MIT, working on my degree in theoretical computer science, and I was in the Air Force ROTC program. For those who aren't familiar with that, that's an officer training program where you get your commission to go into the US Armed Services once you have your degree.

I really wanted to be what's called a weapons systems officer. You might think of it like the navigator on a large plane. It's the person who sits in the back seat and provides guidance to the pilot in the front; maybe you get to drop some bombs once in a while. I couldn't be a pilot because I didn't have the vision for it. One summer (you get to go do these exciting jobs in the summer), I was down at Luke Air Force Base sitting in the back seat of an F-16, and I got a phone call from this major down in South Carolina, and he's telling me about this new squadron that does information warfare.

I'm like, “That sounds really cool.” Recognise this is 1996, and this sounds almost like a job interview, which, if you've never been in the military, you have to understand you don't really get. You just get assigned somewhere. I was being assigned, and he was just calling to make sure I wasn't completely crazy. I didn't have a choice about whether I took this job or not. They'd already decided –

[0:03:06] Guy Podjarny: Different world.

[0:03:06] Andy Ellis: A very different world. They were told they could have anybody they wanted in the Air Force, and they said, “We want everybody graduating from MIT with a computer science degree.” That was me, so they had to fill in with other folks. I showed up, and this was working with commercial intrusion detection systems, for people who remember NetRanger, which Cisco bought, and BorderGuard, which came from Network Systems. We were deploying those systems all across the Air Force, and I was responsible for building, configuring, and designing the defences, and working with our ops team on how we were going to make this work.

I learned an awful lot about security in those years, and then came to Akamai a few years later and I've been doing security here ever since.

[0:03:53] Guy Podjarny: Interesting. You basically got assigned into security?

[0:03:56] Andy Ellis: I did.

[0:03:57] Guy Podjarny: Well, it worked out well.

[0:03:58] Andy Ellis: It has. When I came to Akamai, we weren't a security company at the time. We were a CDN. We were just barely starting to talk about performance instead of just offload. So, there's been a transition over these 20 years to a company that is security first, where performance and offload are still part of the value proposition, but from a go-to-market perspective we lead with security now.

[0:04:21] Guy Podjarny: Yes. That definitely is a transition. Also, the size of the company: roughly how many people were there in the early days, if you go 18, 19 years back?

[0:04:29] Andy Ellis: When I came in, it was around 500 people or so. We grew to about 1,200 right before the dot-com bubble crashed, and we came down to 550. That was not an exciting time. I was in charge of doing all the technical coordination for every layoff, and now we're back up to about 7,000 people.

[0:04:52] Guy Podjarny: Okay. That's also quite a difference. It's not the same job being the CISO for 550 versus 7,000 people.

[0:05:01] Andy Ellis: It's amazing. There are things that happen now where I'm like, “I used to be the person who did that,” and I'd be pulling out my hair to get it done. Now, I don't even see it happen, and it's running smoothly with folks in different organisations. Seeing that maturity grow has actually been really nice when I notice it. The trick is to go back and actually notice it.

[0:05:23] Guy Podjarny: Yes. See the change. So, let's do that a little bit. Let's talk a little bit about the security organisation and maybe a little bit about its evolution, so people get a glance at what it looks like in different sizes. Can you give us a bit of a rundown of how your – you're the CISO, how does your security organisation get divided? Who does what?

[0:05:42] Andy Ellis: At Akamai, we have a very different structure than I think a lot of other organisations do. We embed principal operational security responsibility into the operational teams. Our enterprise security, the folks who are securing our corporate network, actually work with the CIO. We spun that off to the CIO organisation about 10 years ago on the premise that if you're responsible for securing someone's infrastructure and dealing with the alerts coming off of a system, you should be part of that operational organisation.

[0:06:17] Guy Podjarny: Is this an incident response element, or do you mean more the enterprise side, like endpoint security and the like?

[0:06:24] Andy Ellis: Yes. The endpoint detection and response: selecting what tool is going onto that device and then dealing with the incidents that come out of it. That's happening in our enterprise security team inside the CIO organisation. They have a dotted line to me, but their solid line is directly up to the CIO, who's really responsible for ensuring he's staffing that correctly. What's amazing is that it actually gets more money now that it's not under me than it did when it was under me, because the CIO really does care about that and now has direct control over it. It's not just in this nebulous global InfoSec pot.

For our production services, we do very similar things; we often don't have dedicated security teams, as you alluded to. We have a GNOCC, our global network operations command centre, and they do tier 1 and tier 2 security work. If there is an alert that says, “There's a problem with the machine,” they're the ones who are going to try to take care of it first, and they'll do the preliminary investigation. If they discover something interesting, of course, we're going to escalate that into an incident. Incidents are all now coordinated out of the operations team as well. That's actually a change for us in the last few years. The incident management program at Akamai historically was very engineer-driven.

When there was an incident, we would tap a relevant engineer, usually an architect or maybe an interim manager, and say, “Look. You're the incident manager. Build your incident team. You're responsible for managing and running the incident end to end.” That worked really well in our early days as a small organisation; it was the same 30 people that you'd tap all the time. But now that we have so many different products and technologies, you'd have people who'd be tapped to run an incident maybe once or twice a year. We found that they didn't know how to run an incident. They'd been through the training, but it was their first time, so you get things wrong.

What we did was create a new role in our operations team called the incident coordinators. All that they do is run incidents. Now, they're not supposed to be the technical experts. They're not the decision-makers who say, “Should we do X or should we do Y?” But they're the ones who make sure that when that question comes up, it's being decided by the right person, that they're thinking about the consequences, and that they know who to coordinate with. So, that gives us the professionalism, and then my team backstops them.

The incident coordinators manage almost all of the incidents, my team still handles a few, but we're the ones who are responsible for governing that whole incident process. Whether it's a security incident or an incident for anything else. My team still does the governance, but more and more we've pushed into the operations teams for the follow-up work, because that's really operational work. Our job is to make sure the safety of the platform is robust, and as the incident process has become less ad hoc and more robust, we're doing less and less of it, which makes my team very happy, frankly.

[0:09:21] Guy Podjarny: Yes. I'm sensing a theme over here. Enterprise security sits with the people running the enterprise infrastructure. The people who handle incidents and operations also handle the security incidents and operations, and you overlay and support them on top of that.

[0:09:37] Andy Ellis: Right. My organisation now has really split once, and then one of those halves split a second time. So, one split is the product and system security side versus the go-to-market side of the house. We have a small team that does research on attacks against customers and publishes it through our State of the Internet report. I'll do a small plug for that: we finally got it out from behind a registration wall, so you can just read it directly. They also do a number of different things like press engagement, and the goal there is to say, “As a security business, we are part of the go-to-market function of the business.” Think of it as the product marketing of Akamai the security company, not associated with a specific product. So: do research, publish, sell, et cetera.

[0:10:30] Guy Podjarny: In many cases, that's more of an advocacy, education, and outreach group that sits under you.

[0:10:39] Andy Ellis: Under me.

[0:10:39] Guy Podjarny: Because that's the home for security expertise in the company, as opposed to it sitting in an advocacy or a pure marketing organisation, or the like.

[0:10:47] Andy Ellis: Right. But we partner closely with the marketing and the sales teams. We backstop our sales organisation. If a customer says, “Why should I trust you?” I send somebody to talk to them and say, “Here's what our controls are.” They're just believed more than if a sales engineer tries to tell them the exact same words, partly because they do know more; they can go more deeply. If somebody says, “We need special contract terms,” I actually have my own lawyer who works for me and partners with our legal team.

The legal team are the ones who can approve language, but we're the ones who can translate between what reality is, what a customer is looking for, and what that language will be. It's been fascinating and so successful because what we find is customers just want the assurance. They often don't know how to ask for it. If we put somebody who is in the same role on our side in the room with them, we can often get them to assurance and then they're comfortable, and now we're just negotiating language. Whereas if you try to get assurance through a language negotiation, you end up in some really dark, ugly corners.

That's one piece of it. The other piece is really the product and system safety teams. Their job is ensuring our systems are robust and safe, that they operate in the way that we want them to. We split that into one team that focuses on assurance, so they're responsible for a lot of our compliance activities; they'll partner with specific engineering teams on bringing systems up to standards, whether it's PCI, FedRAMP, or SOC 2. It's security with a focus on compliance.

The other piece we call system safety and resilience. The job of that organisation is really to focus much more deeply on given technologies and say, “Okay, we're launching a new system. How do we know that it's safe?” The evolution we've gone through there is fascinating, because we can't scale our safety and security architecture as fast as the rest of the organisation. While we'll still do very deep partnering on specific products, we've actually built a system for development teams to self-certify, for them to come in and say, “I need a security review.”

What we do is say, “Look. Give us one architect from your organisation.” We'll hand them a guide that says, “Here's how you do a security and safety review. They'll do the review with you, you write down your results,” and I'll come back to that in a moment, “and we'll just vet what you wrote down. So, when you go to product launch and everybody looks to us and says, ‘Hey, was the security review done?’ we're just going to answer based on the review you did.”

Here's the dirty reality that I didn't really understand until much later in my career, because in the early days as a security person, you're told, “You want to be the gatekeeper. Nobody should be able to launch a product without your approval.” That's horrible advice, because only one person can choose to launch a product or not: the CEO of the company, who has delegated that to a product vice president or president. It's not really delegated much below that, and you can't split that authority.

I had it for a while, and I remember there was a product that was launching that used MD5 as a checksum on one of their messaging things. Now, this was at a time when nobody should have been using MD5 anymore, at least not for anything new; it was right around when the deprecation notices came out, but we'd had better options for quite a while. We weren't quite in the Poly1305, ChaCha20 world yet, but we were close. I was like, “This shouldn't launch, because they didn't know enough to even use the right library here, so I'm worried about other things.” Literally, the product team says, “This is maybe a billion-dollar product.” It wasn't, but that is always what a product engineering team will claim. They're saying, “Andy, are you really going to hold it up based on this one thing?” I'm sitting there and everybody is focused on me.
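To make the concern concrete, here is a minimal sketch, in Python, of the difference between an MD5 checksum and a keyed HMAC for message integrity. This is purely illustrative, not Akamai's code; the key and function names are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared key, for illustration only; a real system would
# load this from a secrets store.
SECRET_KEY = b"replace-with-a-real-key"

def weak_checksum(message: bytes) -> str:
    # MD5 is collision-broken: an attacker can craft two different
    # messages with the same digest, so it gives no integrity guarantee.
    return hashlib.md5(message).hexdigest()

def message_tag(message: bytes) -> str:
    # HMAC-SHA-256 binds the digest to a secret key, so someone who
    # tampers with the message cannot recompute a valid tag.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_tag(message: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(message_tag(message), tag)
```

The point of the anecdote stands either way: picking the wrong primitive is often a signal about the quality of the rest of the design.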

So, afterwards I sat down with the president of products. I said, “Look. Here's what I'd rather do. I'd rather we tell you whether we think they're doing good risk management or not, and you have to decide if the risks are okay, but they have to write them down for you.” He was like, “Okay. We can try this.” The very next review came in, and they had done what every other review had done, which was to show up and tell us three days beforehand that the review was happening: can you please review the product?

Before we made this change, I would show up at the launch and say, “I don't know enough to tell you if this is safe or not.” Everybody would jump on me: “How fast can you do your review? We really need to launch this.” And it became my fault. Afterward, I said, “Look. I'm going to fail this, because they don't know what their risks are.” The change was instant. The person making the decision said, “Great, they're not launching.”

[0:15:41] Guy Podjarny: Yes. Because they haven't really done the review on it.

[0:15:45] Andy Ellis: They hadn't done the review, and they were like, “Wait, what?” People don't come back now unless they've done this review. Now, we let them do the review themselves, and everybody I tell this to thinks there's a fox-guarding-the-henhouse problem. They say, “Oh my God. Of course they'll hide things.” The reality is developers aren't really interested in hiding things.

If I tell you, “Hey, tell me the unacceptable losses that your system could incur and the hazards,” almost every developer is like, “Yes. Let me write these down, because I've been wanting to tell somebody what the problems are.” It's not saying you can't launch with risk. We're just saying you have to write down what you know your risk is. Then all of a sudden people are like, “That was really bad. I didn't like writing those words down. How do I change this in the next release?”

So, this iterated engagement around making people accept their own understanding of risk, that “It's not my problem, it's yours. You're the one who wrote it down,” has dramatically shifted how people operate.
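As a sketch of what “writing it down” might look like in practice, here is a hypothetical self-review record; the structure and field names are invented for illustration and are not Akamai's actual system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Hazard:
    description: str         # e.g. "auth token written to debug logs"
    unacceptable_loss: str   # the outcome the business cannot tolerate
    mitigation: str          # planned or implemented control, or "none yet"
    accepted_by: str         # named decision-maker who signed off

@dataclass
class SelfReview:
    system: str
    reviewing_architect: str
    review_date: date
    hazards: list[Hazard] = field(default_factory=list)

    def ready_to_vet(self) -> bool:
        # The security team only vets reviews where risks were actually
        # written down and each one has a named owner who accepted it.
        return bool(self.hazards) and all(h.accepted_by for h in self.hazards)
```

The key design point is the one Andy makes: the record doesn't block launching with risk; it forces the risk to be owned in writing.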

[0:16:50] Guy Podjarny: This is very much in line with the “you own it, you secure it” element. How do you overcome the continuous nature of changes, or more in-line security controls? Or maybe the level of expertise? Because this is the security review of a new system, where there is an opportunity for a more comprehensive review. But many changes don't happen that way. Many changes happen day to day to that system, in version 33.3. How do they get embedded over there?

[0:17:21] Andy Ellis: A piece of the goal is that, by being part of the original launch process (and major changes will go through that) and by forcing people to write down their own risks, when they're the ones doing that 33.3 release, of course they're not going to tell us everything in it, but they have now gotten practice at self-reviewing. Even if the normal model is “I write it, you review it,” where you're my partner architect and when you write something I review it, now when I write something in 33.3, I'm like, “Maybe I can do a little bit better.” Because the goal here is to get a little bit better every day, not to be perfect. I think that's where many security professionals lose sight of how to operate.

Our goal is not perfection. Our goal is not risk elimination. Our goal is to enable our business partners to make wise risk choices, because that's what we're in business to do. We spend money; that's a risk. We might not make any money on the other side of it. That's like the very definition of risk. If I can see that you're spending the money badly, that you just grabbed the wrong open-source library, let me help you grab a better one, even if it's not perfect. We do restrict some things. Around crypto, we have rules that say, “We don't care if you think you can review it yourself. You don't get to write your own crypto.” There are some code lines where we're very careful about who's allowed to do things, because we're also contributing a lot of our stuff back into open source in that area.

So, it's not that we're completely, “Hey, go run it yourself.” But our goal is to empower the organisation. Because first of all, we just can't afford as a company to keep trying to buy specialist security architects. I obviously need more than I have, but that's a true statement every CISO will make.

[0:19:16] Guy Podjarny: Across the organisation, across the industry.

[0:19:17] Andy Ellis: Yes. And they don't exist. But I can take every software architect and enable them to do a better review than would have happened if we tried to centralise it, because the thing we have to remember is that my architects have to build up a model of your system before they can review it. Your architects already have that model. I just have to teach them how to apply security and safety thinking against the model they already have.

[0:19:46] Guy Podjarny: Yes, absolutely. I think that's the advantage of any security function embedded with the owner of a system, above and beyond the natural scale element of it. There's just an intrinsic understanding of the thing that is being protected, be it an app, or an organisation, or a set of servers that have an incident on them.

Drilling one step deeper: you have that dev team, and they now understand they need to build the competencies. Oftentimes, there are at least two recurring components over here, which are tooling and training. Who runs those? Who owns ensuring that people have the right tools at their fingertips and know how to use them, above and beyond the questionnaire or the review of knowing what question to ask?

[0:20:31] Andy Ellis: We own that strategically, but very rarely do we own the implementation. I'm responsible for identifying, “We need a better tool around vulnerability management tracking,” to pick a thing that's a hot topic right now. I'm pushing that across the business. I'm partnering with our development tools team; we have a team inside engineering that builds tools for the rest of the developers. They're going to own whatever we settle on, but I'm the driver for them to say, “This is important. We're making this a strategic priority so that we can support all of the development teams across the organisation in that one central spot.”

So, I'll drive that. For training, we actually went through this, because I had to do secure coding training across the business. We bought different training packages from a couple of different vendors. I will tell you, they were universally panned by our engineers. They were not happy with the training. So, what I did was turn to one of the people who works for me, Eric Kobrin, who runs our go-to-market functions. I think you know Eric?

[0:21:39] Guy Podjarny: Yeah, he's awesome. You've got quite an asset inside who can do a lot of that type of education.

[0:21:44] Andy Ellis: Right. So, Eric put together a secure coding training, and then we've augmented it with different people on the team. We built our own secure coding training, and our engineers love it. We deliver it through our engineering learning organisation, so it's not this huge resource drain on my team; they don't have to worry about doing individualised training and scheduling things. We have another org that does it; we just became, basically, the internal vendor for that training. Not only does it save the company some money, but it's actually better training that targets our development practices, instead of something more generic from an outside company.

[0:22:21] Guy Podjarny: Yes. Definitely. That maps onto your setting, and I like making the most of the different types of skill sets you might have, because, maybe unlike some security teams, you have people in the company whose job is to educate the world about security. So, you can use that and aim them internally.

[0:22:40] Andy Ellis: Exactly.

[0:22:41] Guy Podjarny: Got it. So the tooling is run by the dev teams, and it's still somewhat central, but not necessarily in the security teams. It's aligned with dev tooling, but dev tooling isn't team by team; it's somewhat central, and those teams run the security tooling. It still follows the principle of, you own it, you secure it.

[0:23:02] Andy Ellis: Right. We'll help you do that. Now, I do have my own tooling team, but that is mostly for things that we operate ourselves. Our compliance system actually runs on a dashboard we built ourselves, because we went and looked at the whole industry of compliance tooling and we did not like any of the tools that were there. Most of them are not designed for people who build their own software-as-a-service platforms; they assume you're just integrating other people's technology and then turning around to support it. That just really wasn't our model, so we built our own.

It's fantastic, because every document has owners, so we can just say, “Hey, you as an interim manager: here are the five documents that describe what your system does and how your organisation operates. Once a year, please review these.” Then we plug them into all of the different compliance frameworks that we support, so when our auditor comes in for PCI, maybe they get four of those five documents, but when they come in for SOC 2 they get a different set, based on what those regimes want.
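As a rough illustration of that document-to-framework mapping, here is a hypothetical sketch; the document titles, owners, and structure below are invented examples, not Akamai's actual inventory.

```python
from dataclasses import dataclass

@dataclass
class ComplianceDoc:
    title: str
    owner: str            # the manager who reviews this document annually
    frameworks: set[str]  # the compliance regimes that cite this document

docs = [
    ComplianceDoc("Access Control Policy", "alice", {"PCI", "SOC 2", "FedRAMP"}),
    ComplianceDoc("Incident Response Runbook", "bob", {"PCI", "SOC 2"}),
    ComplianceDoc("Data Retention Standard", "carol", {"SOC 2"}),
]

def docs_for_audit(framework: str) -> list[ComplianceDoc]:
    # When the PCI auditor arrives, hand over only the documents
    # that the PCI regime actually requires.
    return [d for d in docs if framework in d.frameworks]

def review_queue(owner: str) -> list[str]:
    # Each owner sees the documents they must re-certify this year.
    return [d.title for d in docs if d.owner == owner]
```

For example, docs_for_audit("PCI") would return the first two documents, while the SOC 2 auditor would see all three.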

[0:24:06] Guy Podjarny: A lot of it is about managing the data and the compliance bits on it. A lot of the action is still handled by the rest of that org, but the consolidation of the status is done by that internal –

[0:24:17] Andy Ellis: Yes.

[0:24:18] Guy Podjarny: So, let's switch gears a little bit and talk about visibility. I think you're well-known for being quite transparent, and I've experienced that first-hand being at Akamai. You oftentimes tout visibility and transparency in security, which is not the default stance for many people in security. There used to be a lot of, “Hey, don't tell them. Don't share anything.” Can you share a little bit of your perspective? How do you balance transparency, and what value do you see there, against the risks associated with showing the weaknesses or showing where the holes are?

[0:24:55] Andy Ellis: Yes. So, it's a really fascinating topic, because you have to understand how humans make decisions to really grasp why transparency is so valuable. It's not about trust, although trust is a wonderful thing that you get when you're open and transparent.
Humans have a set point of risk that they're willing to take. This is sometimes called risk compensation or the Peltzman Effect named after an economist, Sam Peltzman.

It basically says, “If you give someone more risk, they will automatically retract from it and do things to compensate and reduce their risk. If you take a risk away from them, they'll engage in more risky behaviours, because there's an aggregate amount of risk that they want to take.” This really came into the popular imagination in the eighties in the United States when we were debating national seat belt laws, and Sam Peltzman said, “Look, if you make everybody wear a seat belt, it's a safety device that will make them feel safer, so they will drive more dangerously, which means that while they'll be safer in an accident, accidents will happen at higher speeds and they'll kill more pedestrians.”

If you want to have fun, go look up the National Highway Traffic Safety Administration data for the last 40 years and look at the fatality rates per mile driven for drivers, passengers, pedestrians, and motorcyclists. It turns out Sam Peltzman was wrong. Motorcyclists bear the brunt of increased safety systems, because they can't get safer, but cars have shifted from normally driving 60 miles an hour to driving 90 miles an hour.

Once you really grasp that humans react to knowledge and awareness of risk, it explains why you want to be transparent. If you have a system and you say to me, “Hey, Andy, is my system safe to launch?”, first of all, this is a setup, because you're launching no matter whether I say yes or no. But let's say you hand it to me, and I go and spend however much time and create a laundry list of a thousand things that are wrong with your application. They'll range from minor details to massive things: data breaches waiting to happen, or worse, lives that you might actually lose if this gets tickled. But if I know about them and you don't, and you're the decision-maker, then I'm holding all of this risk knowledge that you should have. Maybe it would be simple if you said, “Look. I'm just not going to sell this to healthcare, because I can't afford the life-safety issues.”

If you knew all these risks, you might say, “This is a great app that I'm going to try out with my startup, but I'm going to target whatever my niche market is where if I fail nobody's life is lost.” I would tell my sales reps, “You are not allowed to sell to a medical provider or to critical services,” or whatever it is. But if I know those risks and you don't, you'll just go ahead and sell to them, and it's not that you're being reckless. You don't know.

So, my being transparent is all about enabling the people who are making these decisions day to day, these choices about risk, to make them in the context of the world they live in. They shouldn't walk through the world with blinders. Security professionals who are not transparent are literally putting blinders on their business partners and then getting angry when the business partner walks into a dangerous situation.

[0:28:28] Guy Podjarny: Fascinating. I love it when human philosophy is what drives activities and drives that reasoning. You see this a lot in the DevOps scene. A lot of thinking around incidents and responses really boils down to: how do you expect humans to react when you put them in this situation?

[0:28:43] Andy Ellis: The thing I love about DevOps, and I argue that Akamai was doing DevOps before DevOps was a cool thing, is that when a system breaks, the person who built it is on the hook to handle the incident and fix it. The value that gives you is that they now have a personal incentive to reduce the safety risk of the system. They don't want to be called into an incident.

If you have somebody who can push a change at 5 PM and walk out the door, and somebody else has to absorb the cost of that breaking, what's ever going to stop them from pushing a change at 5 PM? But if they push it and walk out the door, and I call them and say, “Get back in the building, you have to fix this, or get on your laptop. I don't care that you're having dinner; you broke it, you have to help fix it,” they will decide on their own to stop pushing changes at 5 PM and walking out the door. Or they'll figure out how to make those changes more safely. Or they'll implement controls that will automatically roll back. I don't have to tell them how to solve the problem. I just have to expose them to the real risk.
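That last option, automatic rollback, can be as simple as watching a health check after a deploy. Here is a minimal, hypothetical sketch; the health endpoint and deploy script are invented stand-ins for whatever tooling a team actually uses.

```python
import subprocess
import time
import urllib.request

# Hypothetical health endpoint; swap in your service's real check.
HEALTH_URL = "http://localhost:8080/healthz"

def healthy(timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def deploy_with_rollback(new_version: str, old_version: str) -> bool:
    # "./deploy.sh" is a placeholder for the real deploy mechanism.
    subprocess.run(["./deploy.sh", new_version], check=True)
    # Watch the service for a bake period; revert automatically on failure,
    # so a 5 PM push can't leave someone else holding the breakage.
    for _ in range(30):
        if not healthy():
            subprocess.run(["./deploy.sh", old_version], check=True)
            return False
        time.sleep(10)
    return True
```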

[0:29:44] Guy Podjarny: Yes, absolutely. I think, again, this aligns very much with the perspective of: own the risk, see the risk, own the remediation. Be able to do something about that risk.

[0:29:54] Andy Ellis: Yes. I think you're really onto something with “own the remediation.” This is something I learned from one of the people who works with me, Kathryn Kun, who runs our adversarial resilience team. She pointed out a long time ago that there is a never-ending list of good work available to be done, and that we shouldn't pick for someone. If I have twelve things that I would love for you to do, and I know you can only do one of them, I shouldn't pick the one that matters the most. Maybe I trim it down and say, “Hey, Guy, here are four big problems that your organisation is wrestling with. You really need to solve one of these. You can pick any one of the four and I am happy.” Now it's your project, not my project.

Whereas if I walked in and picked one, you are just going to resist it naturally, no matter how much you care about the organisation, because you're not part of the decision. You didn't own it; I owned it. So, I give you four. You pick one. And look how transparent I am: I'll tell you about a whole bunch of problems in your organisation in a way that is non-confrontational. I'm not saying, “Here are four things. You have to fix them all right now.”

[0:31:00] Guy Podjarny: You give me a choice.

[0:31:01] Andy Ellis: I give you some choice.

[0:31:03] Guy Podjarny: Not to belittle it, but I'm always astounded by how the same philosophies apply to a 7,000-person organisation and to your kids, because I feel like it's basically the same. It boils down to the fact that these are human principles. My daughter is awesome and amazing and quite opinionated, and that same strategy works wonders: saying, “Here are four options. Pick one.” It boils down to the fact that kids are just the same humans that we are, only 40 to 50 years apart. We think we're a little bit different, and maybe we are a little bit more rational about it, but the same incentives are at the core.

[0:31:39] Andy Ellis: The way that I think about it, and it's not belittling, is this. People often say, “Look. I learned how to manage people from parenting, from raising my children.” What people often hear is, “Oh, I'm treating my employees like children.” No. If you're a great parent, you recognise that your job is to create amazing adults. It's treating your children like children that causes problems, and it's treating your employees like children that causes problems. I run into it all the time when my kids are hungry, and I'll fall into the trap of saying, “Why don't you eat X?” “I don't want X.” “Why don't you eat Y?” “I don't like Y.” For each thing, there's a reason they don't like it.

But if I say, “You're hungry; well, you could have X, Y, Z, or W, or go pick something else, I don't care,” all of a sudden they're like, “Y sounds great. I'll just go do that.” That's the exact same philosophy. I actually treated the child like an adult. I said, “You have to feed yourself. Here's your list of choices; make your decision.” Whereas if you only give them one option at a time, you're treating them like a child, and they reject it. Why are we surprised that adults reject that, when even a 13-year-old is smart enough to reject it?

[0:32:52] Guy Podjarny: Indeed. This is an awesome conversation, and I think we're going long, which is not bad or atypical. Let me try to squeeze in just a short answer on something that I think is a specific perspective at Akamai. As I run this podcast, and in conversations in general, a lot of the good practice that emerges is this desire to turn good security into a business value proposition and to promote it. We've had Slack on, for instance; they somewhat recently launched some good security features, and they talked about how they made a lot of business noise around it, almost as a commercial.

For Akamai, that's at its core. Being secure is very much a key selling feature at its roots. Just at a high level, how well aligned do you feel the internal security is, the things that you see and want to prioritise for risk reduction internally, versus the commercially friendly side, promoting the business value of being secure?

[0:33:50] Andy Ellis: I think they are really well aligned. I'll talk about the business value. There are two different things: there's being secure, and there's selling security. I think we often mix those up. There are things we do where the only reason we do them is to protect the customer, or to protect the customer's end user, or to protect the whole platform. There are never really significant arguments about the importance of doing that work; like any important work, you obviously always face trade-offs. I'm not saying we do it perfectly. But I very rarely see people say, “That's not important.”

In fact, it's really easy to say, “Look. We have customers who care about PCI, or SOC 2, or…” and I could keep listing them. That's why we're even having this conversation. It used to be that people would say, “Maybe I'll sell this to the customers who don't care about PCI.” That conversation used to happen; it very rarely happens anymore. Now, when we're doing an acquisition, we talk about, “How fast can we bring it into that secure envelope?” I don't love using that “compliance is the –” conversation starter every time, but it's a great conversation starter and it works. It gets people in the door.

Now, there's a pivot, which is: how do you sell security? At the end of the day, the fact that we're PCI compliant makes certain sales processes easier, but it's not really bringing in new revenue. It might give us a small bump in the retail price per unit, on average. But then there are products that we sell that are about security. What's interesting to look at there is that, if I look back at some of our most successful ones, they often are ones that we originally thought of to protect ourselves. We said, “We have customers under DDoS attack. How do we make sure we never go down from a DDoS?” At some point we looked and said, “We have a better DDoS defence platform than anybody who's selling one commercially. Why don't we just sell it commercially?”

Now, you take that and sell it as a security product. Or our VPN replacement that we built for ourselves: we were breached in 2009, built a VPN replacement service over the next 10 years, and cleaned up a lot of our authentication. Now, we're selling that on the market. This was a technology built on our platform because we had it available. When we first started building it, we had no intention of selling it; it wasn't even a dream. But it was aligned to our own needs, and it turns out many companies have the same problem.

[0:36:29] Guy Podjarny: I think that's great. You know what's tried and true, and you know it works. I don't know how much of this is urban legend, but presumably AWS, and maybe the inception of the cloud, came from that same premise: Amazon using its own underlying systems and just adjusting. I'm sure there was significant work involved in that, as there is in converting any internal tool into a commercial one, but still, you know the premise works. You're not guessing.

[0:37:01] Andy Ellis: Yes.

[0:37:00] Guy Podjarny: Fascinating. I love this positive conversation-starter perspective on compliance and regulations. There's a lot of goodness, there's a lot of badness, and there's a lot of complexity in there. But what you can be sure of is that it starts conversations, especially GDPR recently. Some people hate it, some people love it. But what you can't argue with is that it rocked the boat. It triggered a whole bunch of conversations that would not have happened otherwise, and for that, I love it. I love the fact that it happened.

[0:37:31] Andy Ellis: I think we're going to see the same thing with CCPA. It's going to drive a lot of those conversations too.

[0:37:37] Guy Podjarny: Absolutely. So, before I let you go, just one quick last question I like to ask every guest on the show. If you had one bit of advice or one tip, and it could just be a pet peeve that people do that's currently annoying you, to give a team that wants to level up their security foo, what would that be?

[0:37:54] Andy Ellis: I have one simple rule, which is: nobody is the villain in their own story. If you're having an encounter with a business partner, whether you're the development team or the security team, and you're telling a story that says, “This person is bad. They have bad motivations, they're trying to hurt me,” whatever it is, just stop. They have different motivations than you. They have a different model of the world, but odds are they have the same ultimate goal you have, which, if you work in a for-profit company or in a startup that would like to be for-profit, is to make money. That's their goal. They might see different risks than you do. But you can't learn from somebody who's a villain.

If you tell the story that they're a villain, you have just prevented yourself from learning what matters to them. Once you start learning what matters to them, you can channel them. I was just talking to one of my staff who was in a meeting, and he said, “There was this really weird thing: the person on the business side was making these continuous jokes about the places we were going to disagree. He would make a statement, and then he'd say, ‘Here's where so-and-so is going to chime in and say this is unsafe.’” I said, “This is perfect. It means they have a mental model of you that's accurate, that they actually saw the unsafe thing before you pointed it out. That's great. It means they don't see you as a villain. They might see you as somebody they're struggling with, but you're struggling together. You both care, and they want you to be part of it.”

Don't tell the story in which the other person is the villain, because it's too easy to do. I can take almost any interaction and tell a story about the other person being a villain. But as soon as I do, I lose the opportunity for my own improvement, because instead I could ask, “How do I act like they do? How would other people see me in that same light? Maybe I can improve based on my own gut reaction that was negative.”

[0:39:52] Guy Podjarny: That's brilliant advice. I love both the catchphrase and the whole substance behind and around it. I think it's useful probably well beyond security. This has been spectacular, Andy. Thanks a lot for coming on the show.

[0:40:06] Andy Ellis: No problem. Thanks for having me.

[0:40:07] Guy Podjarny: Thanks to everybody who tuned in, and I hope you join us for the next one.

[OUTRO]

[0:40:12] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show, or get involved in this community, find us at thesecuredeveloper.com, or on Twitter, @thesecuredev. Visit heavybit.com to find additional episodes, full transcriptions, and other great podcasts. See you next time.
