Season 6, Episode 85

DevOps Versus Security With James Turnbull

Guests:
James Turnbull

Welcome back to The Secure Developer. On today's episode, Guy Podjarny, President and Founder of Snyk, is joined by James Turnbull. James is an engineering leader, author of 11 books, and open source developer, and is currently the VP of Engineering at Timber, working on the open source observability platform, Vector. He was formerly the CTO-in-residence at Microsoft, CTO and Founder of Empatico, and CTO at Kickstarter. He has held leadership roles at Docker, Venmo, and Puppet and was the chair of O'Reilly's Velocity conference. As someone who has been a core part of the DevOps journey, James is especially qualified to discuss how it's similar or different to security. Tuning in, you’ll hear about James’ journey, why he made the transition from security to operations, and why he considers people a key part of DevOps solutions. You’ll also find out where the lines between the two worlds meet and how one can benefit from the other. Tune in today!

[00:00:17] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators, and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk is a dev-first security company, helping companies fix vulnerabilities in their open source components and containers, without slowing down development. To learn more, visit snyk.io, S-N-Y-K.io.

[INTERVIEW]

[00:01:31] Guy Podjarny: Hello, everybody. Thanks for tuning back in to The Secure Developer. Today, we're going to have a DevOps-flavored episode here. I’m really happy to have with us James Turnbull, who is the VP of Engineering at Timber and also kind of a long-time DevOps thought leader. He has been the chair of Velocity and has sort of been very much a core part of the DevOps journey, making him especially qualified to discuss a little bit about how it's similar or different to the DevSecOps journey. James, thanks for coming on to the show.

[00:02:01] James Turnbull: Thank you for having me.

[00:02:02] Guy Podjarny: James, before we dig into the topic of substance, tell us a little bit about what it is that you do, and maybe your journey and how it touches security?

[00:02:10] James Turnbull: Sure. I lead engineering product teams. At the moment, I’m working on a product called Vector, which is an open source observability tool. I’ve been an engineer for about 25 years. I think it's too many years to start citing that. I started my career off largely in sort of enterprise, banking and finance, as a security engineer and a security architect. For a few years in a retail bank, I built perimeter systems and ATM networks and stuff like that. In the last 10 years or so, I’ve largely worked in high-growth startups. I was an early employee at Puppet, the configuration management tool. I was an early employee at Docker, obviously doing containerization. I was the CTO at Kickstarter, and I have done similar sorts of roles over the last few years. As you said, I was the chair of the Velocity conference for a couple of years, and I also write technology books. I’ve written 11 books about engineering and infrastructure.

[00:03:00] Guy Podjarny: 11 books is impressive. I have a few under my belt but 11 books is definitely a notable number there.

[00:03:06] James Turnbull: No kids I think is probably an advantage.

[00:03:08] Guy Podjarny: It makes for some free time. So, I’m curious a little bit. Quite a few guests that came onto the show have gone into security from, maybe, the operations side, and you actually were in security but then you moved outwards. Do you remember that transition, that decision point of whether you want to leave security and go to a role that doesn't have security in the title?

[00:03:29] James Turnbull: Yeah. I was managing a CERT team, a Computer Emergency Response Team, at a retail bank, and one of the things I’d been thinking about for the last three or four years before that was sort of large-scale infrastructure management. One of the very frustrating things about being a security person in a large organization is you don't control the infrastructure. You are largely dependent on infrastructure teams to provide you with information about vulnerabilities, about the state of the infrastructure. You're largely reliant on them to do remediation for you. They would obviously come to me and say, “We don't have any tools that allow us to mass maintain, mass patch all of these systems. We don't use large-scale package management. We don't have any configuration management tools.”

I sort of started to look in that area, and about the same time I found Puppet. I started working on the open source Puppet tool. When they raised some money, I thought, I can see the viability of this tool; I’ve worked in the environments where this is a desperate need. So, I came on board at Puppet to try and bring some of that experience, and I think I was probably our first pre-sales person, as well as being head of community and professional services, and a bunch of other things along the way. I tried to inject that flavor of: this is what a large enterprise looks like. This is their experience. This is the empathy I have for their problems. The very first customer I was involved with was the New York Stock Exchange, customers of that sort of flavor where my experience resonated.

[00:04:47] Guy Podjarny: That's a great story, and it's interesting around this, taking ownership basically or being frustrated about not being able to control and address those issues. You very much scratched your own itch with the move to the solution itself in the world of Puppet. That's pretty good.

It's actually interesting that Puppet, I think, recently, maybe in the last couple of years, have actually launched a remediation-focused solution that indeed focuses, it sounds, maybe on the same problem space that you've just been describing as [inaudible 00:05:15].

[00:05:15] James Turnbull: Yeah. I’ve had a couple of stabs at it. [inaudible 00:05:17] was a product manager there for a while, originally built a sort of patch management, vulnerability management system around 2012 or 2013, but it proved to be – it’s a non-trivial problem to solve. But, yes, it's definitely one of those areas where we have a lot of information about vulnerabilities. We have a lot more ability to interrogate hosts and work out what's wrong. We still have the challenge of how do you remediate those in a safe and reliable way? How do you ensure that there's no downtime?

I think those problems are the ones that Puppet is trying to work out how to solve: how do we take these feeds of data and turn them into actions or execution that organizations can undertake?
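
To make that feed-to-action idea concrete, here is a minimal sketch, in Python, of one way it could work. Every name in it is hypothetical, and it is not a description of how Puppet actually does it; it simply filters a vulnerability feed by severity and carves out a canary batch so that remediation stays safe and reliable:

```python
# Hypothetical sketch: turn a vulnerability feed into patch actions,
# patching a small canary group first so the rollout can be verified
# before it touches the whole fleet.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    package: str
    severity: str        # "low" | "medium" | "high" | "critical"
    fixed_version: str

def plan_remediation(findings, min_severity="high", canary_fraction=0.1):
    """Group findings into patch actions, carving out a canary batch."""
    ranks = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    actionable = [f for f in findings if ranks[f.severity] >= ranks[min_severity]]
    hosts = sorted({f.host for f in actionable})
    canary_count = max(1, int(len(hosts) * canary_fraction)) if hosts else 0
    return {
        "canary": hosts[:canary_count],   # patch these first, then verify health
        "fleet": hosts[canary_count:],    # roll out after the canary looks good
        "actions": [
            {"host": f.host, "upgrade": f.package, "to": f.fixed_version}
            for f in actionable
        ],
    }

plan = plan_remediation([
    Finding("web-01", "openssl", "critical", "3.0.13"),
    Finding("web-02", "openssl", "critical", "3.0.13"),
    Finding("db-01", "bash", "low", "5.2.21"),
])
print(plan["canary"], len(plan["actions"]))  # ['web-01'] 2
```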

[00:05:55] Guy Podjarny: Yeah, very much. I mean, there's also been some surrounding changes with the adoption of Kubernetes, as you pointed out, the ability to interrogate environments and ask for more information.

Let's indeed maybe go down this path of the DevOps analogy, and maybe a reverse path of your journey, and talk a little bit about the analogies in the DevOps journey and in these changes of tools. Where is it that we can learn from them in security, versus where do they differ and should be defined differently?

I thought maybe a helpful model for us to try and follow here is this people, process, technology type triad. Let's start from that first one there, people. Maybe I’ll start by having you describe, because people have different definitions of the people aspect of DevOps: when you think about what people change is required to do DevOps successfully, how do you answer that?

[00:06:45] James Turnbull: I mean, I think the people aspect of DevOps to me is probably the most important aspect of it. My best interactions or my best successes in building things or working on things or operating things were because I had good relationships with all the folks who were part of the process, whether they be software engineers, DBAs, or security people, or audit risk people. It’s a relationship-driven process.

To me, the DevOps thing is really about building a bridge between folks who previously perhaps didn't have a conversation. There wasn't necessarily any antagonism there. It was literally like, I do a job. I build an artifact. I give that artifact to some other people. Those other people do something with it, and sometimes they come and tell me that it's broken.

I mean, obviously there's a little bit of friction in there with regards to, like, the Ops side of things – you've got other people that live with the outage and so forth – but it's really about, okay, the Ops people have a set of expectations and constraints, and they have some needs. If they had a conversation with the people building the thing in the first place, at first principles, would some of that process be smoother? Would the friction be reduced? Would the potential for outages be reduced?

To me, having a conversation, having a good relationship, understanding everybody's needs is the key to that sort of DevOps thing. Obviously, there are lots of tools and bits and pieces that automate a number of those processes and sort of reduce the human effort required, but it all stems from having that conversation in the first place.

[00:08:12] Guy Podjarny: Yeah. I very much relate to this but I guess maybe let me ask you, from a security change perspective. Now, when you think, okay, we're going to do DevSecOps, and I’m using this buzzword fairly loosely over here, it's just a simple way of saying embed security into the DevOps existence if you will, or processes, do you see the same? Do you also think really the most important piece is the people piece?

[00:08:35] James Turnbull: I think so. I think there's a couple of levels of this. When I started out in security, there was a generation or even a couple of generations before me who saw their role as saying no to things. They really saw themselves as guardians of the business and the enterprise. The easiest way to safeguard those organizations was to say, “No, we shouldn't do this.”

I probably belong to the next generation that came along. I left security about the same time as this was becoming more sort of status quo, but a bunch of people really said that our job is not to say no. Our job is to present the risk, present options about how you might mitigate the risk in some way, with a view to allowing the business to do what it needs to do to make money in as fast and efficient and safe a way as possible. It's their risk appetite and not our risk appetite. Our job is to explain to them the decisions they're making and what the impact is, but it's really their ownership. They own the business risk in a way that we don't.

I feel like this resonates really strongly with me in this sort of DevOps world: ultimately, the outcome is you want to move fast and ship things, right? You want to be able to take the time from code not earning you any money on a developer's desktop, to code running in production and making your business money. One of the sort of caveats in there, one of the things in there, is you want to do that in a secure way, as well as in an efficient way. Security becomes another stakeholder in that DevOps life cycle, and their job is really to say, “Okay. We have policy, or we have process, or we have information we need from you that bubbles up and allows us to do our jobs. If we give you that information upfront, before you build the things or before you deploy them, then hopefully that reduces the friction or reduces the last-minute conversations.”

You get a lot of projects where the security team rocks up and says, “Look. We just looked at your encryption, and it doesn't meet our standards. You can't deploy this.” That's not a conversation you want to have 20 minutes before you're pushing the button to go live, with the PR ready and the business going, whatever is going on. We have customers waiting. To me, there's a very strong overlap between the sort of needs there.

[00:10:33] Guy Podjarny: Yeah. I echo that sentiment, and I think people are, at the end of the day, what needs to change, and I guess removing friction – you used that word – removing friction between the different groups. I’m trying to say, today, when you think about DevOps, I think there's already an acknowledgement that a high-performing organization or DevOps organization, an organization that does DevOps well, performs better from a business perspective. There are all these studies that demonstrate how that is happening, but also I think it's internalized in the day-to-day and the community narrative. People understand that there's pride in being a good SRE or a good DevOps practice. Even businesses, I think, increasingly understand that being great at that implies being better at business, being more competitive. But that's a bit of hindsight. This is already like celebrating the success. I’m sure there are a lot of challenges still in DevOps adoptions as well.

But harking back a little bit to the beginning, what are some tips and tricks – or not just tricks but core practices – that helped break that mould, that helped the Ops department stop being the department of no, not giving you a server, or just caring about mitigating change to mitigate the risk of a system going down? Security is oftentimes perceived in the same way. Are there some learnings about how we managed to turn the tide in the Ops space that we can apply to security?

[00:11:56] James Turnbull: I think so. A big thing for a lot of Ops people, and for a lot of developers too, is that their experience of what the business actually does, what the customers experience, was secondhand or removed. In the case of software engineers, that was probably mediated by program managers and business analysts, and they often never spoke to someone who actually dealt with customers. For Ops people, it was even further removed. Maybe there were support people between them. But if they were sort of getting things third-hand from software engineers, or getting them from business analysts, or getting them from product owners, that's a fair way down the chain there.

One of the big things for me was to say, when we kick off a project, when we want to actually do something differently, we need to understand, we need to be able to track both the sort of technology aspects of it but also the business benefits, the metrics we care about. How is this going to move the needle? If I’m the CFO and I walk in the door and say, “You want to spend a million dollars on this security remediation project,” what do I get from that? How do I measure the success? I think a big thing is helping people understand this has a tangible impact. Reducing latency or helping us ship faster has a tangible impact on customer satisfaction, on retention, on reducing churn, on conversion, all those sorts of things.

When you start to see your actions tied to an actual business outcome, you feel stronger and you feel more empathy for the folks that are actually on the front line and talking to the customers. You get the resulting adjacent benefit of being able to articulate your needs, in terms of budget or people, in a way that the business understands. They're not interested in widgets. You tell a CFO, “We're going to upgrade from widget 2.0 to 2.3, and it's going to cost a million dollars,” and the CFO goes, “I don't care. What does that mean?” You say to them, “Actually, we're going to upgrade this part of our infrastructure, and as a result we think we're going to be able to ship 20% faster, or we're going to reduce the downtime we have on Saturdays from an hour to 15 minutes.” That’s a tangible business impact that the CFO can walk away from and go, “Okay, that's a worthwhile investment.” I think that benefit is sometimes understated at sort of first principles.

[00:13:52] Guy Podjarny: Yeah. I love that approach because it's very applicable to security, if you just ask the business question and make sure that there's an understanding there. Arguably, oftentimes the business question is even easier for security than Ops in many occasions. Still digging a little bit into a specific item there, which was reducing the risk of downtime – again, there are pretty elaborate practices right now around measuring, and I think I’m slipping a little bit into process here. But, again, in DevOps, there are some practices today that measure error budgets and maybe quantify them. But I think a thing that is still reasonably an art is saying, “Okay. If I make this change, I will reduce the risk of an outage by a certain amount.”

When you think today about the world – and you've been in observability and those kinds of worlds that really try to help improve resilience and uptime – what are your sort of favorite means of mapping a change or an effort to an anticipated outcome, of whether it will actually reduce the risk of outage? I guess I’m thinking clearly about how that maps into security.

[00:14:55] James Turnbull: I think this is an unsolved problem. You have SLOs and error budgets and mean time to recovery. It's sort of like, if you improve the resilience of something and therefore uptime adds a nine to the 99.9 or whatever, to some extent it's hard to draw a line between the thing we did and that sort of resilience outcome. I think error budgets and SLOs are probably a better way to think about things than MTTR and uptime, just because of distributed systems – I don't know that there's any such thing as uptime there. Monoliths lend themselves better to that MTTR and uptime view, but with distributed systems it's more a budget of, like, “We can cope with 0.001% 5XX HTTP errors,” or something like that.
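
To put rough numbers on the error-budget framing, here is a minimal sketch in Python, assuming a simple request-success SLO; the traffic figures are made up for illustration:

```python
# Minimal error-budget arithmetic: an SLO of 99.9% success means the
# budget is the 0.1% of requests that are allowed to fail; you track how
# much of that budget has been burned over the measurement window.
def error_budget(slo=0.999, total_requests=10_000_000, failed_requests=4_200):
    allowed_failures = total_requests * (1 - slo)  # budget, in requests
    burned = failed_requests / allowed_failures    # fraction of budget spent
    return allowed_failures, burned

allowed, burned = error_budget()
print(f"budget: {allowed:,.0f} failed requests; burned: {burned:.0%}")
# budget: 10,000 failed requests; burned: 42%
```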

Yeah. I don't think this is an easy problem to solve, and the same thing has existed in the security world for a long time: how do you measure security? I think [inaudible 00:15:45] once described Securitons like, “If I spend a million dollars, do I get more Securitons? What Securitons per million is the correct measurement for an enterprise to be secure?” We have a ways to go before we can sort of translate it into risk terms that our business understands. At this stage of the game, my thought is: collect all of the data you can, measure the behavior, measure the state of things, and then work out, when you pull a lever, what metrics change. A lot of folks are already reverse engineering their monitoring system or their observability system on top of their application, because previously they may have only monitored disk, memory, or CPU and uptime or something like that, rather than some application throughput or performance. That sort of SRE view of the application's health being the key to observability is a much more recent phenomenon.

Yeah, I don't think I have a good answer, sadly. There needs to be some middleware in there between the sort of concept of risk – which security people, audit people, and risk analysts have a pretty clear idea of what that looks like – and our frameworks and tools and our view of sort of infrastructure and applications.

[00:16:56] Guy Podjarny: I mean, I think, first of all, yeah, you saw right through my question. I was really asking, how do you measure security? I was just trying to get at it from the DevOps side, and I’m always in search of that Holy Grail of how do you measure that. And it is indeed, I think, as you described; I agree. I think it's one of the key differences, though, between the change in Ops and the change in security, because security doesn't have as good a feedback loop as Ops.

We're fortunate enough that systems go down more often. We have a certain ability to know whether our decisions were right or not, as compared to breaches, which – I guess we're also, again, fortunate enough that there aren't enough of those, or [inaudible 00:17:35] of those that we know of, to serve as the same type of feedback cycle. It’s just harder to know if you're doing something right or wrong.

However, when you mentioned SLOs – maybe, potentially, SLOs and error budgets being a better measure for the world of microservices, where systems are not holistically up or down but rather have different pieces – that actually aligns with what is probably the most prevalent answer I get when I try to sniff around this area, which is that people look for efficacy of a security control. Or, sorry, not so much efficacy of it but rather: how well applied is it? Is it in place? Is it surveying X amount of the activity, X% of the activity that is going through it? If your security controls are applied properly, you basically now assume a certain leap of faith here, or trust in your team's analysis, that your security risk is mitigated accordingly, which I guess is not that dissimilar to choosing your service level indicators and service level objectives and then just sort of seeing how close you get to them.
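
Treating control coverage as an SLI-like number is straightforward to sketch. The following is a minimal, hypothetical illustration in Python – the control names, counts, and the 95% objective are all invented for the example:

```python
# Hypothetical sketch: report each security control's coverage (the share
# of in-scope assets it actually surveys) against an SLO-style objective.
controls = {
    # control name: (assets covered, assets in scope)
    "dependency scanning": (182, 200),
    "waf on public endpoints": (45, 60),
    "mfa on production access": (97, 97),
}

OBJECTIVE = 0.95  # coverage target, chosen like a service level objective

for name, (covered, in_scope) in controls.items():
    coverage = covered / in_scope
    status = "OK" if coverage >= OBJECTIVE else "GAP"
    print(f"{name}: {coverage:.0%} covered [{status}]")
# dependency scanning: 91% covered [GAP]
# waf on public endpoints: 75% covered [GAP]
# mfa on production access: 100% covered [OK]
```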

[00:18:35] James Turnbull: Yeah. I wonder if there's a difference in the unknown unknowns that sort of exist in the infrastructure world. I think there are probably a smaller number of those. I think it's perhaps easier to reason about infrastructure – and I’m probably going to get a bunch of infrastructure people who tell me I’m a horrible person for saying this – but I think it is easier to reason about infrastructure failure modes than it is to reason about security failure modes. I guess someone like John Allspaw would probably argue that the human aspect of it is the uncertainty element in reasoning about sort of infrastructure and applications.

In the security world, the fact that you have the human element of someone not understanding their security education, or being catfished, or being spearphished, or something like that, combined with the fact that you also have malicious actors, ranging from script kiddies to nation states, means that there are a lot of unknown unknowns in the security world that make it harder to reason about the state of security than it is to reason about the state of infrastructure. I haven't thought too far down that path, but it feels like that resonates with me, based on living in both those worlds.

[00:19:33] Guy Podjarny: I like the unknown unknowns, or the kind of degrees-of-freedom element to it almost, which is: you have the human element, but you also have the adversary, which doesn't exist in operations. People are not intentionally – well, I guess if they are intentionally trying to take you down, it becomes a security problem at that point.

[00:19:46] James Turnbull: Yes. There's an overlap there obviously – a Venn diagram there – but definitely in terms of information disclosure, and there are different categories of adversary in the security world.

[00:19:55] Guy Podjarny: Maybe we've already kind of slipped a little bit into process I guess, if you think about metrics as more a measure of process and how do you measure success. But when you think about DevOps processes, there's definitely a lot of practices that have evolved, and maybe the right question to ask here is which of these practices that evolved in DevOps should basically now be applied to security as well?

I think the process area, the overlap, maybe sits a little bit more between DevOps and SecOps than necessarily taking this back into the development process. Maybe I’ll throw a few processes at you, and I’m probably going to slip into technology because I’m bad at these delineations. But when you think about these processes, let's maybe talk through when should they be the same for DevOps and when should they be separate?

The first one that comes to my mind is incident response. Incident response at the moment is separate, I think, in many cases. Is that right? Is that wrong? How do you think about those two merging or not?

[00:20:52] James Turnbull: Certainly, in the last bank I worked in, the security incident process was separate from the ops incident process. I think that technology is now so co-mingled that having two separate incident processes, or having no bridge between the two processes or no easy transition, is probably the wrong mode of operation. I think almost any remediation action that a security person is going to take is almost certainly going to have an infrastructure impact or require infrastructure engagement.

Those people need to be an integral part of your incident process – and a lot of people treat them like a service provider, like our telco, and our ops team, and maybe our application development team. They're not a subordinate sort of thing here; they're a partner in bubbling up the information you need, in ensuring that the decisions you make from a security perspective don't have an adverse infrastructure impact, and that changes you make to remediate a security incident also won't have some sort of adverse infrastructure impact.

I think that the parties need to be at least a lot closer together. To some extent, I feel like having a dual or parallel incident process is probably going to create friction in that sort of collaboration, community kind of way. Obviously, not everybody is going to need to be there for every call, but I can't think of tremendously many security incidents that don't have some sort of infrastructure overlap. Somebody has used a vulnerability to compromise a thing, we have a breach, a distributed denial of service.

All of those things require people to be pulling the levers and turning the knobs in the infrastructure world as well as the security world. I think that they will ultimately have to merge together, and by extension, I suppose, SOCs and NOCs – are they two different things or one thing?

[00:22:34] Guy Podjarny: I think it's a tough one. I relate to your flow. In fact, I think in some conversations I’ve had here, people talked about using PagerDuty, for instance, for security incident response. I don't think that's what we use at Snyk, but we do actually use the same tooling to trigger that alert. It's probably a little bit different when you're trying to ascertain if it is indeed a breach or not.

I think you might have touched on, potentially brought up, one that sits not just in the twilight zone between security and operations, but also between product security and enterprise security, because when an employee potentially leaks information, that might also trigger a need to resolve it. So, it's interesting to see whether, like a few others, product security incident response maybe aligns more, as you point out, with the remediation aspect of it, which is more infrastructure-oriented or sort of more DevOps-aligned, while the enterprise security parts might find themselves veering further apart and having the SOCs serve mostly that purpose.

[00:23:36] James Turnbull: I guess there's some overlap there with, for example, what we described in banking as sort of global security, which would probably be described as physical security in a lot of places, with an entirely separate organization with its own incident response. They would often engage us because information disclosures were often the results of physical security breaches. But, yeah, there are definitely some more things intertwined there that perhaps aren't so much in the infrastructure world.

[00:23:59] Guy Podjarny: I guess you've opened this a little bit with the SOC and the NOC, but you're also building an observability tool. So, maybe just to add a little bit in this potentially process, potentially technology section here, let's talk about security observability. Do you see people using, whether it's your own solution or other observability tools, for security monitoring as well, to identify something that is a suspected security issue?

[00:24:25] James Turnbull: Yeah, we do. Certainly, in the case of Vector, two of our largest users are both security companies. Something that's always actually slightly frustrating to me about the sort of security DevOps, DevSecOps sort of thing is that, if you look at the signal that security teams collect using their tools or processes, and you look at the signal that infrastructure teams collect and application development teams instrument their applications to emit, the Venn diagram is – there's a big overlap there. Perhaps you're looking at a slightly different facet of that information, but it's essentially the same information you're looking at: the state of the asset in some way.

Maybe a security person cares about patch level and an Ops person cares about an HTTP code metric or something like that, but you're mostly looking at the same assets and collecting information on them. It seems to me that there's definitely an opportunity here for people to say, “How about we have one tool collect all of this information, and we route the right information to the right people with the right decoration attached to it – enrichment or otherwise – and we then stop having to have five security agents installed on a host, as well as the infrastructure team's APM agent and their logging agent and their –”

I’ve literally walked into customers where they have two Splunk agents running on the same box, one for the security team and one for the application team. I’ve certainly seen environments where a fraud team is getting feeds from security as well as infrastructure, who are running entirely separate and parallel event pipelines to provide this one team with two facets of the same information about the same assets. How much money would we save if we just built one pipeline that happened to be a bit more sophisticated or a bit more flavorful than two, both in terms of CapEx and OpEx? This is one of the aspects where I think that security can learn a lot from the sort of – I think it's an underlying DevOps desire to monitor everything or observe everything or track everything. Security can greatly benefit from that.
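
The one-pipeline idea can be sketched in a few lines. This is a hypothetical illustration in Python, not Vector's actual model or configuration; the event shapes and routing rules are invented for the example – collect each event once, then route copies to whichever team cares about that facet:

```python
# Hypothetical sketch: a single collection pipeline that routes each event
# to the ops view, the security view, or both, instead of running parallel
# per-team agents on every host.
def route(event, sinks):
    if "http_status" in event:                        # ops-flavored signal
        sinks["ops"].append(event)
    if event.get("kind") in {"auth", "patch_level"}:  # security-flavored signal
        sinks["security"].append(event)

sinks = {"ops": [], "security": []}
events = [
    {"host": "web-01", "http_status": 502},
    {"host": "web-01", "kind": "auth", "user": "alice", "result": "failed"},
    {"host": "db-01", "kind": "patch_level", "package": "openssl"},
]
for event in events:
    route(event, sinks)

print(len(sinks["ops"]), "ops events;", len(sinks["security"]), "security events")
# 1 ops events; 2 security events
```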

[00:26:23] Guy Podjarny: I love the Splunk analogy because my perception is that Splunk and Sumo Logic, when they started, were perceived primarily as Ops tools. It was really about log aggregation, aggregating data. It was about monitoring. Today, I believe – I haven't sort of tracked them – but I think they are primarily security tools. They are primarily SIEMs. Definitely Splunk, I think a lot of their revenue comes from the SIEM space. If I’m not mistaken, they're the leading SIEM vendor, which is interesting. It’s sort of not discussed often enough. To an extent, it's a bit of a success story in terms of Ops to Sec, even if it doesn't quite involve the DevOps.

[00:27:01] James Turnbull: Yeah. I mean, that was definitely the Splunk journey – it was originally an Ops tool, and I think they struggled to get traction for a number of reasons, mostly because there wasn't a huge level of maturity in the operations world about what logs are useful for. More often than not, they were post-incident sort of analysis tools. If you're getting a log entry that tells you something's wrong, you're probably a step behind the problem anyway, whereas security people are looking at trends and they're looking at patterns – fraud detection people are like, “Oh, someone's used this ATM at four o'clock in the morning three times in a row, and they've used this.” There's a pattern you're identifying, like a bad actor doing something, or an anomaly of some kind in the information, where the thought about the use of the data was more mature.

Yeah. Them sliding into that niche was obviously very successful for them, and now I see them actually coming out of that niche, and you can see the same thing happening with folks like Datadog and stuff like that, who’ve just introduced a logging tool of their own. They're like, “Okay, logs are actually of value to us from an infrastructure point of view, now that we're more mature about thinking about it.” We are looking for anomalies in the same way that security people are, but we're looking for performance anomalies, or we're looking for outage anomalies or some abnormal behavior that is causing us a problem with latency or a job of some kind or something like that.
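
The four-o'clock ATM example boils down to flagging events that are both out-of-hours and repeated. Here is a minimal, purely illustrative sketch in Python – the data, the "usual hours" window, and the burst threshold are all invented for the example:

```python
# Hypothetical sketch of simple pattern-based anomaly detection: flag an
# actor when the same event repeats outside their normal operating hours.
from collections import Counter

withdrawals = [  # (card_id, hour_of_day)
    ("card-7", 13), ("card-7", 18),
    ("card-7", 4), ("card-7", 4), ("card-7", 4),
]

def anomalies(events, usual_hours=range(7, 23), burst_threshold=3):
    by_hour = Counter(events)  # count repeats of each (actor, hour) pair
    return [
        (who, hour, count)
        for (who, hour), count in by_hour.items()
        if hour not in usual_hours and count >= burst_threshold
    ]

print(anomalies(withdrawals))  # [('card-7', 4, 3)]
```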

[00:28:18] Guy Podjarny: I really like that. First of all, you’re right that, at the end of the day, today, Ops and security are both looking for deviations from the norm. They might just be looking at something different, but being able to define norms – and maybe this is with the advent of machine learning, the ability to define norms in more complete terms than ever – it's nice. I also like how, in your sentence, you basically described a scenario where, to an extent, security was ahead, and a practice went from security into the world of Ops, or maybe it was previously adopted there, where clearly I think most of our conversation goes the other way around. Most of the conversation talks about how security should adopt practices where maybe the Ops and DevOps world has gone ahead. But I think both are very relevant.

[00:28:58] James Turnbull: I think anomaly detection certainly is the – security is significantly more advanced in the use of anomaly detection, and all of the early examples of folks who were building anomaly detection tools on top of infrastructure, like to identify patterns there, were basically lifting papers from security people. I'd be like, “I saw that at RSA, and now I can see someone actually using that algorithm, trying to pump all that metric data through it and work out what they can detect.” Obviously, it's not a perfect one-for-one, and so many of those things were failures because they weren't looking for precisely the right thing. But the fact that they took from that well was pretty telling to me.

[00:29:36] Guy Podjarny: Yeah, for sure. I mean, a little bit personally, my personal journey kind of had both of those interpretations happen. I went from the world of application security, where I built a DAST tool, a SAST tool, and then left security. My thought was, “Hey. In security, we have this deep application analysis that we know how to do. Where else would that be useful?” I found myself in performance, building front-end optimization tools, saying, “Well, you can use that intelligence to make things faster.”

Then, in that world of performance, I learned about and got into DevOps. That's when DevOps was growing, and the practices said, hey, there's a different way to engage development, which we've always wanted to do. There's a different path. Then Snyk really came out of that, saying, “Okay, let's apply the DevOps way of implementing security,” and that worked out pretty well. I guess that didn't occur to me until now, but that journey happened in my own personal journey as well.

[00:30:26] James Turnbull: I must ask Steve Souders this one of these days. Obviously, he’s one of the leading exponents of front-end application performance, and basically of collecting data and metrics and measuring things, and obviously a lot of that thinking was happening at the same time as Google and sort of the SRE folks came around. I’ve never asked him, but I presume there must have been some cross-pollination there. Certainly, Google cared about the sort of front-end customer experience to some extent – largely because of ads rather than anything else – but I wonder if there was some sort of cross-pollination that happened there. I think it'd be an interesting thread to pull on at some point.

[00:30:58] Guy Podjarny: Yeah, absolutely. I think even at the core of the Velocity conference which started with [inaudible 00:31:01] and while the two evolved into sort of their own two subconferences within it, the people interaction was significant.

[00:31:09] James Turnbull: Yes, it was.

[00:31:10] Guy Podjarny: I’m totally scrapping my kind of people, process, technology flow, which I don't think was right to choose anyway. But I have one more question for you before we sort of do the conclusion question here, which is really around – back to people – the people themselves.

When you look at today's Ops community, if you look at DevOps, if you look at the skill sets and the attitude, even the typical background of a person that is – I guess I’ll call it a DevOps person or an SRE, I guess the better term because DevOps shouldn't be a title – you look at [inaudible 00:31:41] today, it’s very different from what an Ops person, an IT Ops type person, typically looked like a decade ago.

Many people in security are sort of worried about a similar type of change happening here and what happens to them. Are there any learnings you have to share about how either individuals or organizations successfully navigated that sort of skill set change that was required in the Ops world? Is it literally like a turnover of the teams? Do you need to hire different people? Can you get re-skilled? Is there a different approach? Any insights for that?

[00:32:15] James Turnbull: Yeah. I think that there is a very traditional view of a [inaudible 00:32:18] Ops person in the basement playing around with Gentoo Linux or something. It’s probably going to get me killed by a number of people, but I think it's about acceptance, and I think the security world has probably been through a cycle of this, which is moving from that, “My job is to say no, and my job is to be the sword and shield of the company,” to being like, “My job is as a business enabler, and my job as a risk manager is to help the business move faster in as safe and secure and risk-friendly a way as possible.”

I don't know that this is that different a cultural change. It's sort of like, in order to do that thing, I need to have a different skill set. If I think about the Ops communities that I was involved with when I was in security, the vast majority of them would be considered traditional sysadmins. They were more scripters than they were programmers or software engineers. In the security world, I was very unusual, outside of the application security community, in the sense that I wrote code. Most of the people I knew were more analysts and technologists and infrastructure people and architects.

Outside of that application security community, very few of the infrastructure security people wrote code; even scripting was sometimes not amongst their skill sets. So, it may be a slightly harder bridge to cross, but I think what we've learned is that you can be open to seeing that there's a new way of doing this that actually benefits you personally, in terms of learning new things but also in terms of getting paid more money. That was pretty much my rallying call to any Ops people that I talked to sort of in the early 2010, 2011 period when that was coming. I was like, “This is a better job.” You are no longer being paged at three o'clock in the morning for stuff that you don't understand, where you can't work out what's going on, or where you have to get someone else out of bed, and it's an antagonistic process. You're still on call, but the process is smoother. Your skills are more in demand. You have set yourself up for a better career.

I think that that same message will resonate with the security community, which is: you need to have some of these skills. The boundary between applications and infrastructure, the black box, the abstraction layer, has moved up such that you can no longer be a practicing security engineer unless you understand how the application works, how it's wired to the infrastructure, and then how that communicates out to the customer. If you can't sort of get your head around, okay, there's a JavaScript thing here and there's a [inaudible 00:34:27] thing here and there's a bit of Kubernetes stuff over there, I think your job prospects will suffer.

I think that there's a definite message there of you can uplift yourself. You can actually have a more fun job with a better skill set and a better salary.

[00:34:41] Guy Podjarny: I love the “this is a better job” type message. I think it's a correct one and a very motivating one at that. Just one parting question, which I’ve been asking all guests this year. Take out your crystal ball. Imagine someone doing your role today – which I understand is leading an engineering group, not a security group. But still, if you look at someone doing the job you're doing today in five years’ time, with regards to security, what do you see changing?

[00:35:11] James Turnbull: I will actually say that a lot of it – what people are only starting to touch on – is privacy, data compliance, and data security. I think that, for a lot of folks who've lived in the PCI DSS world, this is something that we've known for a long time: the most valuable asset at our company is the data we have about our customers and about the product itself. A lot of the time, people are starting to see GDPR as being something they have to care about, and it's amazing to talk to sort of our engineering leaders who are like, “We've just been asked about GDPR, and I’ve never even thought about a compliance regime before. I don't even know where all our data is or how to classify it.”

I think security people have an opportunity here. There are a lot of hard lessons learned in the security world about data classification, about data management, that have sort of parallels in this new world. I think that's going to be exacerbated further by the fact that distributed systems inherently distribute information too. The days of, like, monolithic databases are slowly disappearing, both in terms of individual services having their own data stores, and in terms of sources of truth – where are those sources of truth now? Mobile is coming in, people being like, “Okay, we want to operate closer to the customer.” They expect real time or near-zero latency. Does that mean we have to migrate their data closer to them? In which case, how do we protect it?

All that stuff is where I’m thinking there's going to be a – I see the sort of edges of it right now. If you've had anything to do with GDPR, like the tooling around it, the process, the government interactions, they're all hideous. I don't know how sustainable that will be for long, but this will become an increasingly big problem, and I think there's an opportunity and a bunch of tools that will have to come out of that opportunity.

I’m thinking that that's somewhere I’m going to point my crystal ball, where I can see some big movement, and certainly, in a similar way to the sort of DevOps scene, the DevSecOps scene, it's about reducing friction – like data security without the friction. I’m not sure where that's going to go, but that's something that I’m sort of tracking, I guess passively at least.

[00:37:08] Guy Podjarny: No, for sure. Definitely a big concern. I guess that would be DevSecDataOps – or I hope not.

[00:37:13] James Turnbull: No, I hope not.

[00:37:16] Guy Podjarny: No, I hope not. Well, James, this has been a pleasure. Thanks a lot for coming on to the show.

[00:37:20] James Turnbull: Well, thank you so much for having me. It’s been a lot of fun.

[00:37:22] Guy Podjarny: Thanks, everybody, for tuning in, and I hope you join us for the next one.

[END OF INTERVIEW]

[00:37:30] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show or get involved in the community, find us on Twitter at @devseccon. Don't forget to leave us a review on iTunes if you enjoyed today's episode.

Bye for now.

[END]