
Season 10, Episode 170

Autonomous Identity Governance With Paul Querna

Hosts

Danny Allan


Episode Summary

Can multi-factor authentication really “solve” security, or are attackers already two steps ahead? In this episode of The Secure Developer, we sit down with Paul Querna, CTO and co-founder at ConductorOne, to unpack the evolving relationship between authentication and authorisation. In our conversation, Paul delves into the difference between the two, why authentication is only “solved-ish” for organisations that invest properly, and why that progress has pushed attackers toward session theft and abusing standing privilege.

Show Notes

In this episode of The Secure Developer, host Danny Allan sits down with Paul Querna, CTO and co-founder of ConductorOne, to discuss the evolving landscape of identity and access management (IAM). The conversation begins by challenging the traditional assumption that multi-factor authentication (MFA) is a complete solution, with Paul explaining that while authentication is "solved-ish," attackers are now moving to steal sessions and exploit authorization weaknesses. He shares his journey into the identity space, which began with a realization that old security models based on firewalls and network-based trust were fundamentally broken.

The discussion delves into the critical concept of least privilege, a core pillar of the zero-trust movement. Paul highlights that standing privilege—where employees accumulate access rights over time—is a significant risk that attackers are increasingly targeting, as evidenced by reports like the Verizon Data Breach Investigations Report. This is even more critical with the rise of AI, where agents could potentially have overly broad access to sensitive data. They explore the idea of just-in-time authorization and dynamic access control, where privileges are granted for a specific use case and then revoked, a more mature approach to security.

Paul and Danny then tackle the provocative topic of using AI to control authorization. While they agree that AI-driven decisions are necessary to maintain user experience and business speed, they acknowledge that culturally, we are not yet ready to fully trust AI with such critical governance decisions. They discuss how AI could act as an orchestrator, making recommendations for low-risk entitlements while high-risk ones remain policy-controlled. Paul also touches on the complexity of this new world, with non-human identities, personal productivity agents, and the need for new standards like extensions to OAuth. The episode concludes with Paul sharing his biggest worries and hopes for the future. He is concerned about the speed of AI adoption outpacing security preparedness, but is excited by the potential for AI to automate away human toil, empowering IAM and security teams to focus on strategic, high-impact work that truly secures the organization.


Danny Allan: “I'm purposely being provocative. I thought multi-factor authentication was supposed to solve all this? Isn't that supposed to solve authentication and authorisation?”

Paul Querna: “I mean, there's a thing here, and you said the right words: authentication, prove who you are, and then authorisation, right? What's tricky is, more and more, authentication itself is getting pretty good, if you care about it and you make the right investments, right? Passkeys on the consumer side; WebAuthn, YubiKeys, hardware tokens more on the enterprise side. With TLS 1.2 and 1.3, the encryption layers are better than they were; the authentication tokens are more secure than they ever were. Authentication itself is becoming both commoditised and, relatively speaking, solved-ish, if you're willing to make certain investments. That leads you to the, well, if you're an attacker, what do you do now? Like, ‘Oh, I can't steal someone's password in an enterprise context,’ like you could 10 years ago, or 15 years ago.”

[INTRODUCTION]

[0:00:56] Guy Podjarny: You are listening to The Secure Developer, where we speak to industry leaders and experts about the past, present, and future of DevSecOps and AI security. We aim to help you bring developers and security together to build secure applications while moving fast and having fun. This podcast is brought to you by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down. Snyk makes it easy to find and fix vulnerabilities in code, open-source dependencies, containers, and infrastructure code, all while providing actionable security insights and administration capabilities. To learn more, visit snyk.io/tsd.

[INTERVIEW]

[0:01:37] Danny Allan: Hello, and welcome to another edition of The Secure Developer. I am your host, Danny Allan. And I am super excited to be here today with Paul Querna from ConductorOne. Now, Paul has been in the identity space, the security space, for a long time. I think he has some background at Okta, but Paul, maybe you want to introduce yourself for the audience?

[0:01:55] Paul Querna: Sure. Thanks, Danny. Yes, Paul Querna, CTO at ConductorOne, one of the co-founders. I started out my career in infrastructure development, things underneath the cloud, and building cloud services, and then saw a lot of data breaches that were not awesome, saw security issues that were not awesome, and eventually founded a cybersecurity company in the zero-trust space that was later acquired by Okta, and then started ConductorOne a couple of years ago now as my second at-bat at building a security company.

[0:02:33] Danny Allan: Would you recommend starting a company? This is your second one. I'm just curious what your recommendation is for the audience.

[0:02:39] Paul Querna: Not everyone should do it, but I think it's one of the best jobs in the world. If you're excited by it, and if you gain – you have to gain energy from the ambiguity, the chaos, the learning. But I love it. Every day is different. You're always learning something new. If that energises you, yeah, think about it. Explore what it would take.

[0:03:01] Danny Allan: Yeah. It's a lot of work. I've never done it, actually, interestingly. I've been in a similar role, but never a founder, and I know how much energy goes into that. What led you to identity? I'm going to group identity and access management together. I know authentication and authorisation are different, but what led you to this industry? Were you always interested in it, or just fell into it?

[0:03:20] Paul Querna: I mean, I was always interested in security, I think, just from a, I want to make secure products and software. I was always in infrastructure and things underneath systems. Security is an element there, but it was never my job per se, originally. It was seeing the security incidents, and that actually led to identity very quickly. I grew up in an era of firewalls, like, we're going to use the source IP address as the mechanism of trust in a network. You're like, well, what if someone comes from your work? If you have an office and people come in and plug into a port underneath a desk, should they have access to production now, right? It was wild that that was our threat model, that this physical world, where if you plug into a port, you have access to production data, right? That's where I started. My starting point was like, “Oh, wow. These firewalls are wild. We buy more of them, and they don't work, and then everyone has VPNs now, and you can log in from anywhere in the world.”

That whole network layer was just broken to me from a security perspective. That got me into, well, what's the better solution? It is identity: who you are, what device you have. That is the right way to build attestation to make authorisation decisions, versus the network-based authorisation decision. Quickly, in that 2015 era, I started a company in that space, the zero-trust space, because those were the inspirations: these network firewalls are the wrong way. You need to attest to someone's identity to make authorisation decisions. I got into identity via trying to solve security problems.

[0:04:56] Danny Allan: Yeah. I’m very familiar with those issues. I remember back in the 90s, I used to do pen testing. Literally, you would go plug into the Banyan VINES network, and there was no authentication. You were already on the network. You had access to everything. So, I'm purposely being provocative. I thought multi-factor authentication was supposed to solve all this? Isn't that supposed to solve authentication and authorisation?

[0:05:17] Paul Querna: I mean, there's a thing here, and you said the right words: authentication, prove who you are, and then authorisation, right? What's tricky is, more and more, authentication itself is getting pretty good, if you care about it and you make the right investments, right? Passkeys on the consumer side; WebAuthn, YubiKeys, hardware tokens more on the enterprise side. With TLS 1.2 and 1.3, the encryption layers are better than they were; the authentication tokens are more secure than they ever were. Authentication itself is becoming both commoditised and, relatively speaking, solved-ish, if you're willing to make certain investments. That leads you to the, well, if you're an attacker, what do you do now? Like, “Oh, I can't steal someone's password in an enterprise context,” like you could 10 years ago, or 15 years ago.

Then you move to wanting to steal their session. I'm going to let them log in with their hardware MFA, and then I'm going to steal their Chrome session and go act as if I was that person. That leads you very quickly to authorisation, which is: what are you allowed to do, given your identity? I think one of the underplayed things, if you look back to the original zero-trust movement and what people were trying to do there, one of the pillars was actually least privilege. You should have the lowest privilege you need to do your job today, and not more. I think that's been lost on our generation. We fixed a lot of the authentication stuff, but the authorisation stuff is still, like, you get hired, you get access to everything in the company on day one, or week one; you get access to all kinds of stuff. The reality is, most of the time your job isn't doing all those things.

That standing privilege is also what attackers go after now. There's the Verizon data breach report, all kinds of industry evidence you could point to, that says, really, you steal someone's identity, basically, and you log in as them. They look like an insider threat, which is a whole different – I think a lot of SaaS and enterprise companies don't think about insider threat that much. It's thought of more as the financial system's problem, like, “Oh, we don't want someone to defraud us and steal some money.” That's changed a lot now, I think, and I think it'll continue changing over the next decade, where insider threat is not just a model of someone trying to commit fraud and steal from the company. It's also how attackers are going to appear to you, more and more.

[0:07:41] Danny Allan: Yeah. One of the interesting things, if you track, for example, the Verizon data breach investigations report you mentioned, is that there may be fewer internal breaches, but they're far more damaging, because of the authorisation that the individual has within the organisation. That becomes even more true in the age of AI, where everything's pooled into central repositories and agents get access to them.

[0:08:02] Paul Querna: Yeah. Yeah. I mean, it's wild to think about. We're using coding agents, or agents that help you write emails. You think about it in an enterprise: I log in to my IdP. I might have hundreds of internal applications, right? Everything from Salesforce, to GitHub, to my internal support portal, to your spelling-correction app, and everything in between, from highly sensitive apps to less sensitive apps. When I log in with SSO, I have access to all of those.

If I'm writing an email, should my agent have access to all my data, as me? You think about these personal productivity agents; it's scary. They shouldn't necessarily have access to all your data, right? The proverbial Coca-Cola secret recipe locked in a deep vault of a database, or something, right? Like, I might need that for part of my job that I need AI to help me with. But when I'm logged in, writing emails at 10am on a Monday morning, I probably don't need all that data. I probably need a subset of where I am. I think this is one of the challenges in the AI space. We don't even have the terminology yet to describe sub-users, or shards of my own identity, where an AI that lives for an hour, or five minutes, to help me draft an email – what should it have access to?

At the same time, I don't want a productivity barrier, right? If I am writing an email about my secret Coca-Cola recipe, I may want it to cite data from that, right? We don't even have the terminology to describe the ephemerality of that authorisation.
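The scoped, ephemeral delegation Paul describes could be sketched roughly like this. All of the names here (`mint_agent_grant`, the scope strings) are illustrative, not any real ConductorOne or OAuth API: the agent receives a short-lived grant carrying only a subset of the user's scopes, so its access expires on its own.

```python
import time
import secrets

def mint_agent_grant(user_id, scopes, ttl_seconds=300):
    """Issue an ephemeral grant: a subset of the user's scopes, expiring quickly."""
    return {
        "token": secrets.token_urlsafe(16),
        "on_behalf_of": user_id,
        "scopes": set(scopes),          # e.g. {"email:draft"}, NOT the user's full set
        "expires_at": time.time() + ttl_seconds,
    }

def is_allowed(grant, scope):
    """An action is allowed only if the grant is unexpired and carries the scope."""
    return time.time() < grant["expires_at"] and scope in grant["scopes"]

# An email-drafting agent gets five minutes of access to two scopes, nothing more.
grant = mint_agent_grant("paul", {"email:draft", "calendar:read"})
print(is_allowed(grant, "email:draft"))   # True: within the delegated subset
print(is_allowed(grant, "crm:export"))    # False: never delegated to the agent
```

The point of the sketch is the shape, not the mechanism: the agent's identity is a shard of the user's, bounded in both scope and time.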

[0:09:39] Danny Allan: Well, even before we get to AI, let's talk about that ephemerality of authorisation. Is it possible, is it practical to do just-in-time authorisation for the majority of actions that an individual takes? Like, for what you would do on a daily basis?

[0:09:55] Paul Querna: Sure. I think there's a crawl, walk, run here. You can look at it from a corporate security or production security perspective. I think a decade ago, people said, “Hey, I'm at least going to build a role-based access control ecosystem in my company.” It's going to say, “Look, if you have this job code, or you work for this manager, or you have these certain attributes, we're going to provision access to all these things when you get hired, or if you work on a specific project.” It's very much the crawl stage, frankly, and it tends to be more birthright access.

You could move to a more dynamic access control mechanism, which I think is stage two for companies: “Oh, well, I don't need access to everything all the time.” Think about even a developer, or an SRE/DevOps engineer. You don't actually need God mode in AWS all the time. I mean, let's be real. Most of the time, you're going to log in and look at a CloudWatch graph, or maybe click a button during an incident. Those are few and far between, partly because there's so much Terraform and infrastructure as code. A lot of the world evolved around that.

In an incident, you might need to be able to press a button very quickly, right? There's a thing here where we see people doing things like, when you're on call, you get elevated privileges. When you're not on call, you don't. Or if there's an incident, there's a super-easy way to escalate privileges, and then they go away at the end of the incident. That's dynamic access control. Whereas, if you think about the things you have access to in a lot of companies, it turns into an up-and-to-the-right graph, where the set of things you have access to only grows.
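The on-call elevation pattern Paul describes could be sketched as follows. This is a hedged toy model under assumed names (`ON_CALL`, `request_elevation`), not any vendor's API: elevation is only granted to whoever is on call, and it carries its own expiry, so privilege decays back to baseline automatically.

```python
import datetime as dt

ON_CALL = {"alice"}     # in practice this would come from a paging/rota system
GRANTS = {}             # user -> datetime when their elevation expires

def request_elevation(user, now, duration=dt.timedelta(hours=1)):
    """Grant time-boxed elevated privilege, but only to the current on-call."""
    if user not in ON_CALL:
        return False
    GRANTS[user] = now + duration
    return True

def has_privilege(user, now):
    """Privilege exists only inside the granted window; it expires on its own."""
    expiry = GRANTS.get(user)
    return expiry is not None and now < expiry

now = dt.datetime(2025, 1, 6, 10, 0)
print(request_elevation("alice", now))                          # True: on call
print(has_privilege("alice", now + dt.timedelta(minutes=30)))   # True: inside window
print(has_privilege("alice", now + dt.timedelta(hours=2)))      # False: expired
print(request_elevation("bob", now))                            # False: not on call
```

The design choice worth noting is that revocation requires no action: the grant is a lease, not a standing role.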

There's a guy I used to work with, named Jay. He'd been at the company for 10 years. He started in support, moved to be an SRE/DevOps engineer for a while, and later moved to be a full software engineer. He had access to everything for over a decade, because he worked on different teams and different projects, so his roles kept accumulating. He had an up-and-to-the-right graph of privileges in the company.

I think the mental model for a stage-two maturity curve here is that you need to move to more of a camel curve, where you have some baseline privileges – everyone gets email; email is a core service for your business – and privileged things go up and down. You have them for a use case, and then they go away. That leads you to: standing privilege is actually a risk, if you're talking to your CSO. The more things you have access to, the more risk under the curve. From a CSO perspective, you want to minimise risk. If you think about what least privilege is actually getting you, it's lower standing risk. If any given identity is compromised in your business, they can do less, which I think gets to – you mentioned the Verizon data breach report – some of these breaches are really impactful. It's the crown jewels. It's everything. I think that's the mental model to move to: “Well, how do we do more dynamic access control?”
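Paul's "risk under the curve" framing can be made concrete with a deliberately tiny model, using made-up numbers purely for illustration: treat each day's held privileges as risk, and sum over time. Birthright access is a flat high line; the "camel" model is a low baseline with a short hump during an incident.

```python
DAYS = 30

# Birthright model: 10 standing privileges held every day of the month.
birthright = [10] * DAYS

# Camel model: 2 baseline privileges, plus a one-day elevation of 8 more
# during an incident on day 13, which then goes away.
just_in_time = [2] * DAYS
just_in_time[12] += 8

# "Risk under the curve": total privilege-days of exposure.
print(sum(birthright))     # 300 privilege-days of standing risk
print(sum(just_in_time))   # 68 privilege-days, despite the same peak capability
```

The same person could still do the same job on the incident day; what shrinks is the window in which a compromised identity is dangerous.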

[0:13:02] Danny Allan: Are you expecting that AI would control that authorisation? If the answer to that is yes, because I don't know what you're going to say, if the answer to that is yes, are we ready for that model, where AI is able to make that governance decision? Culturally, are we ready? Technically, I presume we are. But culturally, are we ready for it?

[0:13:21] Paul Querna: Well, that's a great question. I mean, it's necessary, maybe, is the first statement. You can use this email example: I'm writing an email. Everyone remembers the old-school Windows User Account Control pop-ups: do you want to let this thing access your thing?

[0:13:41] Danny Allan: The modal dialogue. It drove me crazy.

[0:13:43] Paul Querna: The modal dialogue that prompted with permissions. I think we've proven that's a bad user experience, right? We all know. I do think an AI model can evaluate, “Hey, you're writing an email. The To line is only internal people. I'm going to allow access to internal Salesforce data, because you're writing an email and trying to pull stats on some customer, or something.” That in-context access management, I think, is inevitable. It's going to be driven by user experience as well, right? You can't have things that break the user experience, where it takes two weeks to approve an AI agent identity having ephemeral access to a piece of data. I think out of necessity, from a user experience point of view and a speed-of-business point of view, you will have AI making IAM decisions.

Now, culturally, are we ready for that? Not at all. It's going to be a major challenge over the next couple of years. I think the reality is, it's like self-driving cars as an analogy, right? They have, at this point, lower crash rates than human drivers. I think for access control, you can pretty quickly show very similar outcomes, where the AI doesn't really make mistakes, just like, for the most part, autonomous drivers don't make the big mistakes. If there's a case where they can't handle it, the flow is, you eject out, and it's like, “Hey, I need your help.” If you think about the constituents of access control, it's like, 99.99% of the time, we can understand the context of what's going on, understand whether you should have access to something, and make a very quick decision.

If we don't, we escape out, like, “Hey, we need a user's help, or we need your manager to approve,” whatever the exception policy might be. I think that's a good mental model of the trust building over time. It's still going to be gradual, especially when you think about the compliance side of this, the other side of this. I think you'll have a lot of dual-mode things where the AI makes recommendations, but you have to wait for the others to catch up, like, “Oh, yeah. They're right all the time.” That's just building trust.

[0:15:44] Danny Allan: Yeah. Where does the accountability lie in that? Because I can see one of three things pushing this. You could push it through regulation, like PCI DSS pushed security standards into the payment card industry, right? You could have insurance regulation saying, if you want to get insurance, then you've got to do this. Or you could have just experience pushing this. Regardless of the model, where does the accountability lie? Is it whoever implemented the AI, or is it the algorithm?

[0:16:10] Paul Querna: That's the other thing. It is risky here. Authorisation is generally what we call a binary decision. It's true or false at the end of the day. Do you get the thing you wanted access to? Which is different than summarising an email, like, did you forget a period at the end of a sentence? That's a spectrum of outcomes; you can grade them. Here, the outcome is a yes/no. Then who takes responsibility for that yes/no? I think there will be – there are already more and more regulations on the audit and compliance side encouraging least privilege. I think that's going to drive, well, we need a least-privilege compliance mechanism. But we need to do that in a way that doesn't drive all users nuts.

That, I think, is going to be the driver of, like, oh, well. Look, we can prove this AI considered factual things and was policy-driven. Maybe use the AI for orchestration and all those other things. At the end of the day, maybe your highest-risk entitlements are much more policy-controlled, versus some other things that are a little more loosey-goosey AI. I think both: least privilege is the driver at the end of the day. Then it's, “Well, how do I improve user experience?” Then, I still think there's a layer of: not all entitlements, or grants, in our identity parlance, are the same. There are some that are truly your crown jewels, and others that are really productivity enablers and insider-risk mitigation enablers. Along that spectrum, you also have different choices of how much automation you have.

[0:17:44] Danny Allan: Yeah, I expect it will be heavily policy-influenced originally, but over time, obviously, probably go more towards AI.

[0:17:51] Paul Querna: That's just one of the interesting things, I think, when we think about identity in a business. It's a multi-party concern, right? This is one of the challenges that I think this industry has faced historically. You have everything from HR and people-management issues of job titles and job codes and roles; you have IT concerns around the help desk and getting people access to the things they need for productivity; and you have engineering and security concerns of minimising access and who should have what, and also productivity, even from the engineering side.

The VP of engineering wants their team to be able to ship stuff, so you can't lock it down so hard that they can't do anything. I think what’s interesting about AI is you can actually encode more and more of those personas, almost as different AI agents, across a business. Whereas, previously, it was really hard, because it was like, well, I could buy an IT-oriented access management solution, but it ignored all the other people. Now, I think with AI, you can start encoding those different people and personas and their concerns and their goals into independent agents that can all act on a request and work together to make a decision. If you think about some of the AI agent architectures, you have a planning agent and then sub-agents. I think you'll see each of your sub-agents be a different business concern. Then you have an overarching architecture for making a decision.

[0:19:15] Danny Allan: Where do you see those agents living? I can envision a world where they're inside the code of the software, the application and envision another world where they're operating over on the runtime, on the network side. The answer, of course, in the long term will be all of them talking to one another, but is there an easier or better place to start?

[0:19:35] Paul Querna: I mean, look, this is like – it's predicting the future of a nascent world. Yeah. I mean, I think it is going to be a little of all of the above, eventually. I think, pragmatically, you have to look at where protocols exist to understand where decisions can be more easily injected, right? I think a lot about the SSO layer. At that layer, it's pretty easy to inject, “Hey, let's do some extra work here and figure something out.” But when you think about row-level access to a database, a PostgreSQL database, there are products out there that can sit in the middle and read what you're doing, but they become more invasive. I think you'll see it start at the easier places to implement. Then, there will be some companies that build a proxy that has an AI in the middle making decisions. I think those will work, too, but they're just harder to adopt, generally. I think you'll see a spectrum of deployment architectures and goals and outcomes.

[0:20:37] Danny Allan: In an AI world, you prompt something and say, “I want to know something,” that particular AI thing that you're talking to, how does it delegate down your authorisation into the massive set of data that it probably has access to in the vector databases, or in the model itself? How do you mitigate that across agents?

[0:20:57] Paul Querna: Yeah. This is an area where, I think, there are actually multiple draft RFCs driving this question. Maybe just to back up for a second, for terminology: in my mind, there are a couple of agent types in the world. There are company agents; you can almost think of them as another employee. They'll have inputs and requests. Do they take on my identity when I ask them a question? Or does that agent inherently have its own roles and permissions and everything else? I think you'll see both answers there. I think there are also personal productivity agents, which are, again, helping me write an email, helping me write my code. They're really my agents. They get a subset of what I want at any given time.

If you split it there, the subset question, I think, is a little easier, right? It's really an identity propagation question. In OAuth, we're going to have extra claims on the access tokens and the ID tokens that say, “Oh, this is an agent acting on my behalf; treat it as if Paul was querying the database,” right? I think that'll be the pattern for personal productivity. On the other side – if you think about an agent almost as a service account, or an employee of the company – you'll see a mixture of both, where sometimes that agent is impersonating me, and other times, it has its own roles and permissions and things.
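The "extra claims" pattern Paul mentions already has a standardised form: OAuth 2.0 Token Exchange (RFC 8693) defines an `act` (actor) claim for exactly this delegation case. The decoded payload below is a hand-written example, not output from a real authorisation server, and the helper function is an illustrative sketch: authorise as the human subject, but keep the agent visible for audit.

```python
# Decoded (example) access-token payload for an agent acting on Paul's behalf.
# "act" is the actor claim from OAuth 2.0 Token Exchange (RFC 8693).
decoded_access_token = {
    "sub": "paul@example.com",                  # the human the request is attributed to
    "scope": "db:read",
    "act": {"sub": "agent:email-drafter-42"},   # the agent actually making the call
}

def effective_identity(claims):
    """Authorise as the subject; surface the actor (if any) for the audit trail."""
    actor = claims.get("act", {}).get("sub")
    return claims["sub"], actor

subject, actor = effective_identity(decoded_access_token)
print(subject)   # paul@example.com -> permissions evaluated as Paul
print(actor)     # agent:email-drafter-42 -> logged, so it never looks like a bare human login
```

This is the impersonation half of Paul's split; the service-account half would simply issue the agent its own `sub` with no `act` claim.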

I think it will depend on the use case, right? For broad database access, I think you want more of an impersonation mechanism; especially if you have very role-oriented permissions, it will seem more like impersonation. When you think of, we want a lot of summarisation, or other attributes, it's going to feel more like, well, that's almost a service account, where you manage permissions to all the downstream data.

[0:22:48] Danny Allan: Well, I see this world getting super complex, because I can imagine a customer calling up ConductorOne. Say, they want support. There's probably general documentation that the LLM has access to all of it. Then there's probably a subset of support data that you want them to get access to, but it's not their cases, so it needs to be anonymised in some way, because there's relevant information in there, but it's not their data. Then, of course, they might be asking about something specific related to a case that they opened three years ago, and you want them to have that specifically. So, there's this weird dynamic of there's public, there's pseudo-public, but anonymised, and then there's delegated access. Gets complex.

[0:23:24] Paul Querna: It's really super complex, but I think the developer side of this, and the products you can build inside of this, is exciting at the end of the day. It means even that idea – that I can understand previous tickets, understand anonymised data, mesh in public reference documentation for the deep link to, here's how you configure X, Y, or Z – that's pretty cool. Two years ago, I would have said, “That's wild. There's no way.” As much as, yes, there's hard work to be done – and I think you'll see this, like I said, on the OAuth side, extra claims – some of this stuff is happening in real time, even the Model Context Protocol and how it works, and the authentication and authorisation of it. It's changing every month. That'll slow down eventually. Then, I think it gets to the hard work of, like, well, cool. MCP makes more sense now. I can understand how to impersonate a user using OAuth claims. Now what? Now I have to figure out how to anonymise my previous tickets, right?

I do think it's moving really fast. In fact, I would say it's moving faster than standardisation can keep up. It's not far off. It's not going to be a five-year thing. I think, even today, you can get pretty far. You just have to figure out the core principles: are you impersonating, or is it a service account of sorts? Then from there, you can build your architecture and figure out how to manage the authorisation.

[0:24:48] Danny Allan: Yeah, I've seen some of the proposals – not the RFCs, anyway, the proposals – to extend OAuth to MCP. I am encouraged by what I see. Two questions, because I know we're running out of time here. What worries you the most? You're in this world of autonomous identity management. With my next question, I'm going to end on an optimistic note, but what worries you the most in what you're seeing within the customer base in the market today?

[0:25:13] Paul Querna: Look, identity was already hard. If you think about the explosion of SaaS and cloud, all these things, they already had really hard problems. I work at a company, and my company has several hundred SaaS apps. Each one of those apps may or may not have customer data or other sensitive data in it. Those SaaS apps might have local accounts and API keys and service accounts and non-SSO users and all kinds of other concerns. That was already a hard problem statement. Honestly, even when we started ConductorOne, that was the problem statement. That's enough. There's enough complexity there to deal with it.

I think the general InfoSec world, even the developer world, does not understand the volume and the ephemerality change that's going to happen, right? It's 100X, especially when you're thinking about ephemeral agents that have limited scope for limited time periods. It is exactly like going from physical machines that you bought from Dell, to an AWS Lambda that existed for eight seconds while it serviced four requests.

The thing is, that transition took the cloud world 15 or 20 years – to go from understanding physical servers to products that are optimised for ephemerality, right? This is going to happen in a very, very small number of years. I think that's the hardest part: the complexity, the ephemerality, all the things about agents are going to make this identity world even harder. I just don't think most businesses are ready – they're still catching up with cloud growth. This is also going to be very fast. It's not optional. That's the other thing. If you talk to the leaders of companies, there is a board-level mandate to use AI.

How aggressive that mandate is varies by industry and by board and different things, but it is inevitable that every successful business will be leveraging AI in many, many use cases. That means, on the security side, it's happening. You have to deal with it. I just think we're generally not prepared. It's going to happen really fast. We're not going to have the amount of time we had with cloud, and the pain points are going to be really big.

[0:27:29] Danny Allan: It's AI or die. I mean, you're not going to survive in this industry unless you're using AI in a meaningful way. What makes you most excited? You're obviously on the cutting edge of this non-human identity and ephemeral identity. What makes you most excited?

[0:27:42] Paul Querna: I mean, I think there's a real opportunity, actually, in our world, to remove human toil. The things you had humans in the loop doing on access management were ridiculous, right? I'm going to submit a ticket to the IT help desk. The IT help desk is going to go look at a spreadsheet, to look up who my manager is, to find out, am I allowed to get Office 365, or something ridiculous. That is 100% solved at this point, even just using AI. I think there's a lot of automation that's coming and already being built. I think part two of that is you're going to move to a world where exception handling is what humans do. The everyday of it is going to be very automated, AI- and policy-driven. That, to me, is really cool. Because that means the humans – especially in the identity world, where we have identity administrators and a whole cast of characters – can do projects you just couldn't do before, because you were bogged down with all this day-to-day.

Hey, we hired 50 people on Monday; fix their stuff. Or, we're doing M&A; our identity world just tripled, and we don't know how to deal with it. These projects now become your focus, instead of the toil of everyday access management. That's where I think the world's going. It's generally a positive thing.

[0:29:06] Danny Allan: Yeah. If I'm reading you correctly, you don't believe that autonomous identity governance is going to displace IAM teams. It's simply going to empower them to do the things that they really should be and could be focused on.

[0:29:17] Paul Querna: I think so, overall. I mean, you could say the same thing about software developers. Is Cursor and all these things zeroing out the field? No. No, no, no. In fact, I talk with my engineers about this: your job is evolving. Your job now needs to involve more product vision, more understanding of what customers want. The hardest thing is knowing what to build, not necessarily the act of coding; that is getting faster and easier. I think it's the same thing in the identity space: your important skills are what to build, why to build it, what creates business outcomes for your company, how to make your company materially more secure, and how to reduce the standing risk of your company. That's what you need to focus on. Not the toil of help desk tickets and ServiceNow tickets, right? It's just different goals. I think, similar to software developers, who I work with every day, this identity world is also evolving.

[0:30:13] Danny Allan: Yeah, I completely agree. I tell that same story internally, like, where developers are not going away in any way. We should embrace it. We should use this technology. We can't ignore it. In fact, we should look at it as a superpower for us.

Well, Paul, it was great to have you on the podcast with us on The Secure Developer. I love the perspective on autonomous identity governance. For everyone who is watching this or listening, thank you for joining us again on another episode of The Secure Developer.

[END OF INTERVIEW]

[0:30:43] Guy Podjarny: Thanks for tuning in to The Secure Developer, brought to you by Snyk. We hope this episode gave you new insights and strategies to help you champion security in your organisation. If you like these conversations, please leave us a review on iTunes, Spotify, or wherever you get your podcasts, and share the episode with fellow security leaders who might benefit from our discussions. We'd love to hear your recommendations for future guests, topics, or any feedback you might have to help us get better. Please contact us by connecting with us on LinkedIn under our Snyk account, or by emailing us at thesecuredev@snyk.io. That's it for now. I hope you join us for the next one.
