
Season 3, Episode 23

Automation With Zach Powers

Guests:

Zach Powers


In episode 23 of The Secure Developer, Guy speaks with Zach Powers, CISO of One Medical, to discuss the evolution of security at One Medical, what he looks for when hiring for his team, and why automation is a must.


"Zach Powers: You can take a really smart software engineer and teach them security. But it's hard to take an older school security engineer who's mainly in for focus, and teach them software engineering. We have a belief in security here at One Medical, that if it can be automated, it must be automated. Part of the way that I'm scaling is by hiring engineers who are interested in security, but are really good at automation. If the security team is not engineering automation today, they will not scale, and they will not be able to play ball with the type of threats we face today.

[INTRODUCTION]

[0:00:39] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter, @thesecuredev.

[INTERVIEW]

[0:01:10] Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer. Thanks for joining us again. Today, we have with us Zach Powers from One Medical. Zach, welcome to the show.

[0:01:18] Zach Powers: Yes, thank you for having me.

[0:01:19] Guy Podjarny: Zach, we have a whole bunch of topics to cover. But before we dig into that, can you tell us a little bit about yourself? Just sort of the background, how you got into security, what you do these days.
[0:01:29] Zach Powers: Absolutely. Like many people who've been in security for quite some time, that is not what my initial career was. I was studying materials engineering, and then I got into different types of technology. I fell into security out of a passion for it, and more and more became the go-to guy. Looking back at where I really cut my teeth in security, it was on a global scale at salesforce.com, where I was the Vice President of Enterprise Security. I managed a lot of the internal application security and infrastructure security, but also mergers and acquisitions, and a vendor security program that did appsec testing on a thousand-plus vendors a year.

A really big, meaty program. From there, I've come to One Medical to take on improving security in healthcare, not just for One Medical, but across the industry, and to influence some of the other groups in healthcare in the United States. It's a big mix of how I got here, just like most security leaders.

[0:02:30] Guy Podjarny: Yes, indeed. Well, I guess the different background is actually what gives you an opportunity to think about the problems in a different way and, hopefully, do it a little bit better. Within One Medical, what was the context of the security team when you joined? What was the rough company size? Were you the first security hire? How was that?

[0:02:51] Zach Powers: Security, when I first joined, was mainly looked at as, I would say, point solutions and some infrastructure security hardening. There were a couple of people doing security work, some in the product engineering team, some in the IT team. But there was really no core security function, not like we have today. When I came in, some very good things had been done, but there were still a lot of things that needed to be done. We formed that core function and started to hire a lot of industry talent, pulling from some bigger tech companies that I believe have a much better angle, or approach, to security today. For example, where infrastructure is code, rather than thinking about devices and servers that you plug in. Thinking cloud first, app first, automate everything. Those are the types of organisations we're pulling security talent from.

[0:03:48] Guy Podjarny: Got it. You're coming in, you're structuring this team. Just for context, because we've had a bunch of these conversations on the show. What's the sort of the rough company size when you joined?

[0:03:56] Zach Powers: When I joined, it was around 1,300. It's a much smaller company; I went from a global environment, 60 countries around the world, to 1,300 staff in the United States, operating across nine cities.

[0:04:08] Guy Podjarny: Cool. You have those people, and you're coming in to build the team. We talked about this a little bit in the tee-up: you come in and you're hiring these people who come from infrastructure as code. In general, when somebody listens to you talk about security, you often tout that relationship between an understanding of the DevOps practice, if you will, and security. How do you see that? How do you see the intersection, or the interaction, between security and those kinds of application or operations teams?

[0:04:39] Zach Powers: At many, many companies that I've had experience with, or that I advise, there's an older style of security team member who really does understand infrastructure, proprietary configurations of this vendor's infrastructure or that one's, and point solutions. Those skills were very useful at one point. But I find that those security engineers have a very hard time relating with and influencing software engineers. Where I see a lot more camaraderie happen, and honestly, a lot more collaboration and influence, is when the security engineers themselves were at one point software engineers, or they have their own chops. They know how to develop; they're not just scripting. They actually have some solid coding skills. That goes much, much further.

What I often see at companies is two camps. If the security team has a lot of software engineering talent in and of itself, usually there's tighter integration with the product engineering teams in those style organisations. If the security team is mainly hardware-focused, with a bunch of layer 3 firewall stuff from the early 2000s, I don't see that tight integration whatsoever. There's a lot of room for improvement there.

[0:06:00] Guy Podjarny: When you build up your teams, do you find a software engineering background to be, I guess, equally important to security experience? How do you weigh the two? Because unfortunately, in today's world, there's still only a small group of people who have both on their resume, both software engineering and security practice.

[0:06:22] Zach Powers: Absolutely. I was having this conversation last night with a bunch of security leaders: how do you scale this? A common belief that a few of us have is that you can take a really smart software engineer and teach them security. But it's hard to take an older-school security engineer who's mainly infra-focused and teach them software engineering.

Part of the way that I'm scaling is by hiring engineers who are interested in security, but are really good at automation, really good at handling more of a DevOps lifecycle, more of a continuous delivery environment. Those are the types of individuals we're scaling with and succeeding with at One Medical. It's not that we don't have the tried-and-true security veterans; we do. But we're scaling the team by teaching security to, essentially, engineers who had an interest, but who understand technology. We find that to be much more important right now.

[0:07:24] Guy Podjarny: Yes. I fully relate to that. I would even amend it with the fact that software engineers, as they mature and gain experience, typically build a natural, better appreciation for security; hopefully, at least a subset of them appreciate the role of security as part of the quality of software. To an extent it depends, of course; different people vary. But in the world of security, oftentimes as a security person's career grows, they might grow further away from the software side of things and more into the risk aspect of the business. So maybe even the trajectory is a little bit different. It also sounds like, with the security talent shortage we have right now, and not that it's easy to hire engineers, the opportunity to bring somebody in from the software side and train them up is a good path; it builds options.

[0:08:17] Zach Powers: Yes, absolutely right. At the end of the day, it really comes down to, "Can someone code?" No matter what the position is on our security team, you've got to pass an in-person coding challenge, and that's more than just a Fibonacci series. It also comes down to critical thinking. You don't have to be in security to be able to perform an adequate threat model; you just have to think critically. We evaluate really hard on how intelligent and creative the candidates are. If they have that, they can learn security. If they don't have that, if they don't have the coding background, they're not going to be able to move at the speed of an organisation like One Medical, or at the speed of the many tech companies out there that have moved, or are moving, to a DevOps or continuous integration, continuous delivery environment.

Guy Podjarny: Yes, agreed. Within that context, that's an interesting and forward-looking model. You hire people into your security organisation with some coding skills, and maybe an engineering background. What does that do? How do you see the responsibility now split between that team, which has some software engineering background, and the software engineers themselves building the application? How do you divide the responsibility or activity?

[0:09:35] Zach Powers: Yes, it varies a lot from company to company. But the first thing I would say is, there is some degree of embeddedness that we do at One Medical (it varies by company whether this can scale or not), where the security team members take part and sit in design reviews. We move security up as a discussion, and have that discussion take place with the software engineers. So, not having someone in security look at the product after it's been developed, after it's been designed, and find holes in it. We take part from project initiation.

Not for every single feature, but for larger-scale projects or sensitive sections of code, the security team sits right with the engineering team responsible for that project. At initiation, and if you think in terms of the 20% and 80% design reviews, they're all there. That's before anybody has started writing anything; that's just at the design stage. That's how we do it and integrate at One Medical. It goes very well, because the software engineers tend to know that the security people who are embedded are software engineers in their own right; they understand. Everybody has a common language, and there's a mutual respect there.

We expect software engineers to learn security, whether they're on the security team or not, and we expect them to provide valuable input and make decisions. We need to be able to empower them. If they're not familiar with security, we provide custom training for them. If they want to understand threat modelling more, we go through custom training on that. It's really a mutual respect, not a big-stick policy. That works well at One Medical. At some other companies, that doesn't scale as well, to be honest. What I see people do there is develop a questionnaire; you can build a real quick app that engineering teams go through to find out whether they should go to a security review. Not all sections of the product or the code are actually that sensitive, but teams need a way to find that out. That works out well for other companies.

There's some nuance there; it's about what's culturally appropriate for your company. But either way, I believe security has got to start at the very forefront, at project initiation, when you're talking design. It needs to be collaborative there. It can't just be a series of requirements tossed over the fence without any context. You'll hear me mention this a lot: security within the context of your product, your application, your company is very important to us.
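
To make the routing idea above concrete, here is a minimal sketch of what such a self-service review questionnaire could look like. It is illustrative only; the questions, weights, and threshold are hypothetical, not One Medical's actual criteria.

```python
# Minimal sketch of a security-review routing questionnaire (hypothetical
# questions, weights, and threshold; not One Medical's actual tool).

QUESTIONS = {
    "handles_phi": "Does the change touch patient health information?",
    "auth_changes": "Does it modify authentication or authorization logic?",
    "new_external_dependency": "Does it add a new third-party service or library?",
    "internet_facing": "Is the component exposed to the public internet?",
}

# Weight the more sensitive answers more heavily.
WEIGHTS = {
    "handles_phi": 3,
    "auth_changes": 3,
    "internet_facing": 2,
    "new_external_dependency": 1,
}

REVIEW_THRESHOLD = 3  # hypothetical cut-off


def needs_security_review(answers: dict[str, bool]) -> bool:
    """Return True if the project should go through a security design review."""
    score = sum(WEIGHTS[q] for q, yes in answers.items() if yes)
    return score >= REVIEW_THRESHOLD


if __name__ == "__main__":
    answers = {q: input(f"{text} [y/N] ").strip().lower() == "y"
               for q, text in QUESTIONS.items()}
    if needs_security_review(answers):
        print("Book a security design review before the 20% design mark.")
    else:
        print("No dedicated review needed; standard guidelines apply.")
```

The point of the sketch is the routing logic, not the scoring details: most changes self-serve past it, and only the genuinely sensitive ones pull a security engineer into the design review.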

[0:12:15] Guy Podjarny: I think it's great to embed the security team and engage, and I love the common language. I always enjoy drawing analogies to DevOps. Oftentimes in the world of DevOps, one of the things that helps break down the walls between developers and ops is indeed some shared background. If you carry a pager, even for a day or a week, you have a much higher appreciation for making sure that your system doesn't go down. Similarly, if you know how to build code, you have an appreciation that it's not that simple to keep it from going down as you build the software. That said, there's still a challenge around scaling security, and you can't involve everybody in everything. In our conversations, you were talking about how software engineers should be empowered to make security decisions; I'm quoting a little bit literally here. How do you draw the line? Maybe you can give us some examples: what types of decisions do you think should be made within software engineering? How do you draw them in?

[0:13:18] Zach Powers: Yes. A real classic example, and I'm bringing this up because we integrate various tools at different stages: static code analysis, dependency analysis, or whatnot. At many organisations, and I've talked to thousands of software companies over the last 10-plus years, if they have a security team, the security team will scan some code after the pull request, way after the fact, once it's already in production. They'll find a bunch of problems with it and toss it back over the fence without any understanding of where in the product or the app those vulnerabilities or logic problems are, or what the context of the situation is. So, they really have no firm understanding of the risk there.
I'm a firm believer that if you just provide information like that upfront to the software engineers who are responsible for that service or that section of code, they're going to understand the context. They'll realise, "Wait a minute. This vulnerability, maybe it's not a false positive, but it's very low risk, and here's the contextual reason why." Let them make the decision around how to treat that situation. Or, they may see something else and say, "That is far more serious than your security scanner told me. We need to actually hit pause, have another commit, go through another round of testing." Why I say this is that I think most software engineers, especially given some training and some partnership with the security team, can begin to do a lot of this on their own.

Give them the right tooling, give them the right data upfront. There's no way that my security team, or any that I'm aware of around the US or in other parts of the world, can review every line of code. It won't scale, right? So, we introduce automated tools. Then there's the classic griping that the automated tools don't understand the products. Well, you've removed the software engineer from the equation. Let's put the software engineer back into the equation and have them do their job. I think they can absolutely make risk-based decisions. Most times, they're going to know better than a security team how to remediate a given vulnerability or code quality issue. Caveat: that's if they've had appropriate training.

You're always going to have software engineers who might not know how to fix this classic vulnerability or that one. But given appropriate training, they will. I think that their contextual knowledge and their desire to produce quality code, and maybe that's the optimist in me, will result in a better outcome. But they do need to be empowered to make those decisions, and not feel like there's this big-stick policy, where they spend their time and creative effort developing software, and some other person they never talked to is just going to bash holes in it and tell them it's not good enough. That doesn't work anymore. I don't know if it ever worked. But it certainly doesn't work today; it's not how fast software delivery happens.
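
As a rough sketch of what "context upfront" could look like in practice, the snippet below adjusts raw scanner findings by service context before routing them to the owning team. The finding shape, the service metadata, and the severity-adjustment rule are all invented for illustration; real scanners and risk models will differ.

```python
# Hedged sketch: attach service context to scanner findings so the owning
# engineers get a risk-adjusted view, rather than a raw dump over the fence.

from dataclasses import dataclass


@dataclass
class Finding:
    rule_id: str
    severity: str      # "low" | "medium" | "high" | "critical"
    file_path: str
    description: str


# Hypothetical per-service metadata a team might maintain.
SERVICE_CONTEXT = {
    "billing-api": {"owner_team": "payments", "internet_facing": True, "handles_phi": True},
    "internal-reports": {"owner_team": "data", "internet_facing": False, "handles_phi": False},
}

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}


def effective_risk(finding: Finding, service: str) -> str:
    """Nudge severity up or down based on where the code actually runs."""
    ctx = SERVICE_CONTEXT[service]
    rank = SEVERITY_RANK[finding.severity]
    if ctx["internet_facing"] or ctx["handles_phi"]:
        rank = min(rank + 1, 3)   # exposed or sensitive: treat as worse
    else:
        rank = max(rank - 1, 0)   # internal-only: often lower risk
    return next(name for name, r in SEVERITY_RANK.items() if r == rank)


def route(findings: list[Finding], service: str) -> None:
    """Hand each context-adjusted finding to the team that owns the service."""
    owner = SERVICE_CONTEXT[service]["owner_team"]
    for f in findings:
        risk = effective_risk(f, service)
        print(f"[{risk.upper()}] {service} -> {owner}: {f.rule_id} in {f.file_path}")


route([Finding("SQLI-01", "medium", "app/db.py", "possible SQL injection")], "billing-api")
```

The owning engineers still make the final call, as Zach argues; the automation just puts the finding, with context, in front of the right people early.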

[0:16:28] Guy Podjarny: Yes, entirely agree. I think one of the challenges in this model, so, you come in and you have your engineers, hopefully educated about it. There's always going to be variety; frankly, that happens within the security team as well. You entrust them, you tell them, "Hey, you're allowed to make decisions here." Echoing back some of the things you said, here's a set of criteria, whether you've discussed it ahead of time, or it's a questionnaire, or whatever, about when you should seek professional help, when you should bring in somebody from the security team to help decide with you.

What do you do about incentives? That's one of the challenges that oftentimes comes up: in their daily work, developers are not incentivised for security. They're there to build new functionality. If they don't deliver a feature, somebody comes knocking. But if they build in a security flaw that gets discovered a few weeks later, maybe it's the security team that gets thrown under the bus. Hopefully, nobody gets thrown under the bus and it's all positive. But how do you incentivise or encourage the dev team to embrace this ownership, amongst all the many others they have?

[0:17:37] Zach Powers: It's a good question. At most companies, to be honest, there is no positive incentive; there's just finger pointing, right? At Salesforce, we definitely tried a range of positive incentives, and I carried that on to One Medical. Part of it is simply high-fiving somebody for doing the right thing. Part of it might be swag; everybody loves swag. You want an awesome hoodie? Security teams know all about awesome hoodies. For individuals who continually follow good security practices and make great decisions, we've had them do a rotation, or work on a special project (we call these coalitions) to step out of their day-to-day routine. Most engineers love doing that, because they don't like looking at the same section of code all day.

In a coalition, we get a cross-functional group together and say, "Hey, we've got a really hard problem to tackle, and we want you to help us tackle it." Giving new opportunities is a good way to do that. We've done things as silly as teaching lock-picking classes, things like that. So, it's about finding something fun and memorable to positively recognise, in a public fashion, that this engineer is rocking it with security, and here's why. Give examples. But then give them something fun and meaningful in return. It does go a long way.

The security team at One Medical will often invite software engineers to happy hours or whatnot, where we're not just having a drink; we grab a whiteboard and we discuss things. When security is talked about, or experienced, in a more positive manner, I do believe it goes a long way. People will sometimes call this a security champions program. Some of those do work, for sure. But I would say this is more about publicly and positively recognising when people exhibit good security behaviours. A little bit of swag goes a long way. Some really nice socks, a coffee mug, things like that go a long way. But I don't see it happen at many companies, to be honest.

At many companies, if you slow your work down to produce better code, you're penalised for that. That's definitely not the case here. You need some executive alignment to be this positive about it. At One Medical, and at companies like Salesforce, and I can name a bunch of them here in the Bay Area, we have a common philosophy that it is better to produce quality code than to go back and have to fix it later on. Because it usually takes longer, and it usually involves some angry customers. It's way more thoughtful to do it upfront.

[0:20:24] Guy Podjarny: Yes. I love pretty much everything about that model. You gave a whole bunch of examples, and none of them included bonuses or financial motivations, because I don't think that's really what sticks. You have these hoodie-driven security incentives, the swag.

[0:20:41] Zach Powers: It just goes way better. At other companies, we've tried this and experimented, and the cash bonuses don't really work that well.

[0:20:49] Guy Podjarny: They almost create a cognitive dissonance, where people think that they're doing it just for the cash. Whereas if you're giving them something fun, clearly they're not doing it for that, but they're still enjoying it, and it still has the positive association that comes with it.

[0:21:02] Zach Powers: It's key, though, to change it up, so you don't always give the same hoodie, the same sticker, the same t-shirt. Change it up. Because if people expect, "Hey, if I do this, I'm going to get this thing," it cheapens the experience. So, there's somewhat of an unexpected surprise; they don't know when they're going to be rewarded, but they've realised that there's a culture of recognition.

Every couple of weeks, the software engineering team at One Medical gets together for an all-hands. We will sit down with the security team and call out, and publicly thank, people for very specific actions. They're not asking us to do that, but it definitely goes a long way, and it promotes a cultural momentum that these are good things to do, and that it is okay to take the time to produce better quality code. I really am an evangelist about that. I think empowering software engineers, letting them make decisions, but also recognising them for good decisions and very good work, produces way better security than not.

[0:22:07] Guy Podjarny: Yes, fully agreed. I love that, and I feel like the teams that have the best handle on this do indeed do this. I've had the PagerDuty security team come on the show, and they talked about awards that they have for this; they're not monetary, they're just recognition. I forget who mentioned this, but somebody talked about giving explicit security training elements to it, a [inaudible 0:22:30] certified hacker, [inaudible 0:22:31] type course, just so that they have something to add to their resume in terms of formal, "Hey, you've invested in it, we can develop those skills." Because at the end of the day, that helps your career as well, in the long run. But fundamentally, it's all around getting that positive sentiment. The world of security uses the term shame a lot, and uses the term pride very little. We need more of that pride in it.

We talked a lot about the software engineering background within the security team. Then you have the engineering team; you train them up, you give them this kind of positive recognition, and hoodies, to drive the right behaviour. You tee up and define, whether it's questionnaires, or practices, or processes, or whatever, something to help them understand when to pull in the security experts, who advise and add to the context the application developers have. How do you, on the other side, structure the security team? You talked about the software engineering background, but maybe you can share the org structure, or the staffing, that you think is needed on the security side to help deliver on this.

[0:23:39] Zach Powers: I think it changes a bit as you scale a security team. There's a phrase we often toss around, the rule of three and ten. If you're a security team of three people, the way you do things once you're at 10 people is not going to work; you need to change. Then again at 30, and so on. From a broad level, the way we are structured today at One Medical is partly due to the size of the security team and the size of the company. That will be slightly different from, for example, how we structured multiple teams back at Salesforce, where I think the teams are well above 500 people by this point. Part of it is the scale of the company and the security team.

At One Medical today, we have a software security, or application security, team that handles all things code, whether it's our product or internally developed applications. We have a lot of different teams internal to One Medical that develop code, not just the product team, whether they're doing that for data analysis on the back end, or for enhanced productivity with this business unit or that business unit. We have an application security team that works with software engineering.

Even in finance, we have people who are coding. It's really a tech company at heart. We do have a lot of doctors, but we're a tech company through and through. As a result, we need a group of people able to partner with all these different teams. Granted, the way that we do that can vary from team to team. We're much more embedded with software engineering, or product engineering, if you will, than we are with the other internal business units. But we provide the same services to them.

The application security team really helps focus on some of the infrastructure as well. When you wholeheartedly believe in infrastructure as code and that philosophy, drawing the line between what is your product and what is your infrastructure gets a little blurry sometimes.

[0:25:46] Guy Podjarny: The definition does blur, yes.

[0:25:48] Zach Powers: Right. The team handles a broad set. A subset of that team also handles what we call vendor security, which is nothing but a game of risk analysis upfront, followed by classic application security activities. We have a gated process at One Medical, like many other really security-conscious companies, where you can't introduce software into our environments, whether you're an internal business unit or a software engineer. You can't bring new software into the company, or integrate it with us, without us going through some form of testing on it. You need appsec people for that. This team handles a broad set of activities, the highest priority being partnership with product engineering. But like I said, wherever we develop code: I've got a finance partner who I think is totally awesome, and he develops modules, and he's a great guy. We need to be able to partner with teams like his as well, not just product engineering.

The other real big focus in the way we structure security is SecOps. Part of that is incident response: classic IR folks, analysts who know how to do forensics and whatnot, who have been through multiple breaches of varying scale and multiple incidents, and who understand threat actors. The other part of our SecOps team really is software engineers. There are a whole lot of guys and gals on this team who can build at scale. They build the security engineering back end for us to consume and analyse data from a wide variety of sources, and to automate security functions. Here's a good example. God, I don't want to pay highly talented security professionals to go out and manually quarantine a machine that downloaded commodity malware. That is a complete waste of money.

We automate as much as we can. We have a belief in security here at One Medical that if it can be automated, it must be automated. Whether it's inbound email analysis, file analysis, configuration analysis, detection of events, first-stage triage, whatnot. All of that we have automated, or are aiming to automate. The SecOps team is part classic security IR professionals, but part tried-and-true, very senior DevOps guys and gals who know how to build cloud, know how to build apps, know how to integrate things together. That's security at scale today, in my opinion. A big part of it is data analysis as close to real time as possible, followed by automated actions and whatnot. It allows you to keep your team smaller: scale the technology, not the team. I don't want to throw bodies at everything.

In the picture I just described, there's not, for example, a security team member whose job is to manage anti-virus. That doesn't exist on our team; we automate a lot of those things. Everybody on the team, except for the security program managers, codes. It's just part of the job. You must know how to automate the mundane work. That's where we're at right now. Ask me again when we're five times the size, and I probably will need some analysts who don't code. I probably will need some DevOps, or DevSecOps, however you want to refer to them, folks who focus solely on the security of the infra side of the house, even though it is infra as code. But right now, we're at a size, and it's not that small, where we're primarily focused on two broader areas, and the teams handle a lot of cross-functional and multidisciplinary work.
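
As a concrete illustration of "if it can be automated, it must be automated", here is a minimal sketch of first-stage triage with automated quarantine. The alert shape, the malware-family list, and the EDR endpoint are hypothetical; a real implementation would call your EDR vendor's actual API.

```python
# Hedged sketch of automated first-stage triage: commodity malware gets the
# machine quarantined with no human in the loop; anything novel escalates to
# an analyst. All names and endpoints here are hypothetical.

import requests  # third-party; pip install requests

EDR_BASE_URL = "https://edr.example.internal/api"          # hypothetical endpoint
COMMODITY_FAMILIES = {"emotet", "agenttesla", "redline"}   # illustrative list


def is_commodity(alert: dict) -> bool:
    """First-stage triage: is this a known, well-understood malware family?"""
    return alert.get("malware_family", "").lower() in COMMODITY_FAMILIES


def quarantine(host_id: str, dry_run: bool = True) -> None:
    """Isolate the machine from the network via the (hypothetical) EDR API."""
    if dry_run:
        print(f"[dry run] POST {EDR_BASE_URL}/hosts/{host_id}/quarantine")
        return
    resp = requests.post(f"{EDR_BASE_URL}/hosts/{host_id}/quarantine", timeout=10)
    resp.raise_for_status()


def handle_alert(alert: dict) -> None:
    if is_commodity(alert):
        quarantine(alert["host_id"])  # no analyst time spent on commodity malware
        print(f"auto-quarantined {alert['host_id']} ({alert['malware_family']})")
    else:
        print(f"escalating {alert['host_id']} to an analyst")  # novel threat


handle_alert({"host_id": "laptop-0042", "malware_family": "Emotet"})
```

The design choice matches the team structure Zach describes: analysts keep the judgment calls on novel threats, while the software-engineering side of SecOps turns the repetitive response into code.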

[0:29:48] Guy Podjarny: Hopefully, that automation keeps you a little bit further away from having a 500-person security team, because, as you said, you can scale with automation more efficiently than by hiring those numbers. It doesn't preclude the need for some kinds of manual assessment, design reviews being a good example, but you still need those. What are your key distinctions around building in-house versus using external solutions, or software, for these things? Is there any guideline for it?

[0:30:23] Zach Powers: Not a super good guideline. What I would say is, use an external solution if one exists that does a good job. For example, colleagues of mine in the past have written their own static code analysis tools that did a far better job than some of those on the market; that's great, but not every company is going to be able to do that. If you find a tool on the market that can do the job, by all means use that tool. Where I see building in-house is usually when you're building something you can't go buy off the shelf. For example, there is no turnkey security data platform, or engineering back end, that can handle large amounts of data at scale and analyse it. You need to build that out yourself, and you need people who are very adept at working in a containerised, microservices world, or classic AWS, or whatnot.

You can't just buy that off the shelf, so you're going to have to build it yourself. But if you can buy it off the shelf, there are a lot of good security tools out there, a web app firewall being an example. Why build that yourself when there are a couple of really good ones on the market? I would rather use the talent on my team to do something that's not easily solvable by someone else.

[0:31:44] Guy Podjarny: Right. I guess the security tooling on the market also needs to adapt to this mindset, if it hasn't already; the more extensible, the better. Because sometimes you come across tools that subscribe to a certain discipline, and that discipline doesn't work for you, but the tool is not sufficiently flexible to be a part of your automation flow. It's my way or the highway, in which case you choose the highway; you go and you build your own car, and that's fine.

[0:32:14] Zach Powers: Most security tools out there, and this is where I'm not the optimist, aren't that good. The way vendors try to sell them to us is via scare tactics, and that doesn't work. The good security tools I see today are often coming out of, to be honest, smaller companies that are just a bunch of software engineers. They understand software development today, in more nimble companies. They often have had experience on security teams reacting to real-world threats, not the kind of marketing threats people talk about. Some of the products we see today are not traditional, in that they haven't been around for 30-plus years. You could call them startups, you could call them smaller companies or whatnot, but they're people who really understand DevOps. They really understand where the tech stack is moving in most companies: it's app first, it's cloud first. They understand the types of languages people use.

Those are the products we find a lot of good in, and they're also the type of companies that collaborate with us. They sit down with us and say, "What do you need?" We tell them, we give them a feature request and say, "Give us two months, or give us six weeks, or whatever," and they come back and they've done it.

[0:33:36] Guy Podjarny: They actually implemented it.

[0:33:38] Zach Powers: Exactly. Other security vendors, if I give them a feature request, they say, "Give me two years." You can go pound sand.

[0:33:45] Guy Podjarny: There's a whole bunch of questions I still have, but I'm looking at the clock, and we've been at it for a while, so we might save those for some future episode. Before I let you go, though, I do want to ask you what I ask most guests, or all guests, on the show. If you have one piece of advice, or one pet peeve, around security these days, some words of wisdom for a dev team or a security team looking to level up their security, what would that one bit of advice be?

[0:34:17] Zach Powers: The best advice I can give is this: if a security team is not engineering automation today, they will not scale, and they will not be able to play ball with the type of threats we face today. It cannot be done manually. There are some things, some types of security testing, that still need to be done manually. But so much of security, especially the world of SecOps, must be automated. Ask yourselves: is your team capable of automation, and is it prioritised? Are you setting time aside for them to engineer automation? If the answer is no, take a step back and think about that, because that is where most security teams are going today, at least at the companies that really understand the threats and are trying to respond to them.

[0:35:09] Guy Podjarny: Yes, got it. Go out and get automating, if you're not doing that already. Zach, if somebody wants to ask you some further questions, or pester you on the Internet, how can they find you? On Twitter, or elsewhere? How can they reach you?

[0:35:25] Zach Powers: Easiest way to reach out to me nowadays is on LinkedIn. I've slowly peeled myself off of most social networking over the years, for good reason. I get to spend more time with my daughter that way. But yeah, reach out to me on LinkedIn, happy to collaborate. I meet up with security leaders around the country, and engineers, happy to grab a cup of coffee.

[0:35:43] Guy Podjarny: Perfect. And I guess, if it's the right person, maybe apply for a job at One Medical; I'm sure there are some open roles, as everyone is always hiring. Zach, this has been a pleasure and fascinating. I'll probably get you back on the show to talk about some other aspects in depth. But thanks a lot for your time today.

[0:36:00] Zach Powers: Yes, absolutely.

[0:36:01] Guy Podjarny: Thanks, everybody for tuning in, and join us for the next one.

[OUTRO]

[0:36:07] Announcer: That's all we have time for today. If you'd like to come on as a guest on this show, or want us to cover a specific topic, find us on Twitter, @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over 100 videos about building developer tooling companies, given by top experts in the field.