Season 10, Episode 166

Open Authorization In The World Of AI With Aaron Parecki

Hosts

Danny Allan

Watch on YouTube

Episode Summary

How do we apply the battle-tested principles of authentication and authorization to the rapidly evolving world of AI and Large Language Models (LLMs)? In this episode, we're joined by Aaron Parecki, Director of Identity Standards at Okta, to explore the past, present, and future of OAuth. We dive into the lessons learned from the evolution of OAuth 1.0 to 2.1, discuss the critical role of standards in securing new technologies, and unpack how identity frameworks can be extended to provide secure, manageable access for AI agents in enterprise environments.

Show Notes

In this episode, host Danny Allan is joined by a very special guest, Aaron Parecki, the Director of Identity Standards at Okta, to discuss the critical intersection of identity, authorization, and the rise of artificial intelligence. Aaron begins by explaining the history of OAuth, which was created to solve the problem of third-party applications needing access to user data without the user having to share their actual credentials. This foundational concept of delegated access has become ubiquitous, but as technology evolves, so do the challenges.

Aaron walks us through the evolution of the OAuth standard, from the limitations of OAuth 1 to the flexibility and challenges of OAuth 2, such as the introduction of bearer tokens. He explains how the protocol was intentionally designed to be extensible, allowing for later additions like OpenID Connect to handle identity and DPoP to enhance security by proving possession of a token. This modular design is why he is now working on OAuth 2.1—a consolidation of best practices—instead of a complete rewrite.

The conversation then shifts to the most pressing modern challenge: securing AI agents and LLMs that need to interact with multiple services on a user's behalf. Aaron details the new "cross-app access" pattern he is working on, which places the enterprise Identity Provider (IDP) at the center of these interactions. This approach gives enterprise administrators crucial visibility and control over how data is shared between applications, solving a major security and management headache. For developers building in this space today, Aaron offers practical advice: leverage individual user permissions through standard OAuth flows rather than creating over-privileged service accounts.

Danny Allan: “I always find the RFCs difficult to dive into, and I always wonder about the people who are doing that 100% of the time, because my brain would explode.”

Aaron Parecki: “The RFCs are not really meant to be tutorials on the topic, because they are the reference material, right? It's kind of like programming: you're essentially giving instructions to people, and they're going to interpret how to build something. And it needs to be rock solid and very specific. Every word is very carefully chosen. But it ends up where, like you're saying, these things are hard to read and they're not necessarily a good intro to the subject. So, that's also why I try to do a lot of that kind of complementary work of developing videos, being on podcasts, doing live trainings and workshops on the topics, where I can bring in more of the narrative structure around what these things are for and how they're intended to be used, so that when you do go to actually read the RFCs, things make a little bit more sense.”

[INTRODUCTION]

[0:01:01] Guy Podjarny: You are listening to The Secure Developer, where we speak to industry leaders and experts about the past, present, and future of DevSecOps and AI security. We aim to help you bring developers and security together to build secure applications while moving fast and having fun.

This podcast is brought to you by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down. Snyk makes it easy to find and fix vulnerabilities in code, open source dependencies, containers, and infrastructure as code, all while providing actionable security insights and administration capabilities. To learn more, visit snyk.io/tsd.

[EPISODE]

[0:01:41] Danny Allan: Hello and welcome everyone to another episode of The Secure Developer. I am Danny Allan, your host, and I'm very excited to be with you here today with a very special guest, Aaron Parecki. Well, you know what? I could introduce him, but I think maybe I will pass it over to Aaron to introduce himself. Aaron, maybe you could just share with the audience some of your background and how we ended up talking today?

[0:02:05] Aaron Parecki: Yes. Thanks for having me. I'm Aaron Parecki, Director of Identity Standards at Okta. I work on a lot of OAuth specs, if my background didn't give it away already. Working on OAuth 2.1 and a couple other extensions and related work, and generally just trying to keep pushing the industry forward on its use of OAuth-related protocols.

[0:02:24] Danny Allan: That's awesome. Now, when I hear about working on standards bodies and OAuth and those things, it sounds like a very difficult challenge that you're facing. Are you really spending most of your time working with standards bodies, or are you working inside Okta helping to implement product capabilities and features?

[0:02:43] Aaron Parecki: Yes, I mean, it's definitely a mix of both. A lot of what I'm trying to do is establish that two-way communication between the standards bodies and the internal product teams: both helping Okta implement the product according to the best practices of the standards, and also using our experience implementing the standards to bring it back into the standards bodies. So, it's really a two-way street. And that's, I think, what it really means to be part of the standards community. It's not just writing the RFCs, right?

[0:03:16] Danny Allan: Yes. I always find the RFCs difficult to dive into. I always wonder about the people who are doing that 100% of the time, because my brain would explode, I think.

[0:03:29] Aaron Parecki: Well, I mean, there is a good reason for it, which is that the RFCs are not really meant to be tutorials on the topic, because they are the reference material, right? It's kind of like programming: you're essentially giving instructions to people, and they're going to interpret how to build something. It needs to be rock solid and very specific. Every word is very carefully chosen, but it ends up where, like you're saying, these things are hard to read and they're not necessarily a good intro to the subject.

That's also why I try to do a lot of that kind of complementary work of developing videos, being on podcasts, doing live trainings and workshops on the topics, where I can bring in more of the narrative structure around what these things are for and how they're intended to be used, so that when you do go to actually read the RFCs, things make a little bit more sense.

[0:04:25] Danny Allan: That makes a ton of sense. So, let's just set the background for some of our audience. I don't think that authentication and authorisation as concepts will be a challenge for anyone listening, because even if you look at the OWASP API Security Top 10, I think the number one is broken object-level authorisation. So, people are probably aware of the concepts, but maybe you can just set a foundation for people around what authentication is, what authorisation is, and how OAuth plays into that, for the general developer community.

[0:05:02] Aaron Parecki: Yes, well, let me start with a bit of background on how OAuth really got started and where that came from, because it's actually very related to the main topic we're going to talk about today. It was originally created specifically for third-party app access. What I mean by that is when a user is using an application that is trying to access an API run by a different party than the application. So, two different parties involved, rather than a first-party scenario.

So, in the early days of Web 2.0, this would be all your Web 2.0 services launching APIs, think Flickr or Twitter. And then third-party developers building clients on top of those APIs to do various things like upload photos or post tweets or read timelines, whatever it is. So, it's really about that third-party access. The reason for the OAuth pattern was because we didn't want to have users give the credentials for their account to the third-party application. Because, again, third-party app, you don't necessarily trust the developer of that application to do only what they say they're going to do when it says connect to your account.

So, we had examples of new apps that would launch and ask for your Google credentials because they wanted access to your Google address book. But once an app has your Google password, it has access to everything, and you kind of just have to cross your fingers that it's going to do the right thing, which is not a good situation to be in from a security perspective.

So, the OAuth pattern was set up to allow the user to type in their credentials at the first party, where the account lives, and then delegate that access through this access token pattern to the third-party application. That's really where the motivation came from, and we've gone a long way with that pattern. It turns out that there's a lot of reasons why that pattern also works in the first-party scenario, which is why you see it so much even for first-party apps now, including adding the OpenID Connect extension on top of it, where you get into even things like using it for enterprise single sign-on.
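To make the delegation pattern concrete, here is a minimal sketch of an OAuth 2.0 authorization code flow. All URLs, client IDs, and scopes are placeholders rather than any real provider's API, and a production client would also use PKCE and verify the returned state:

```python
# A minimal sketch of the OAuth 2.0 authorization code flow.
# All URLs, IDs, and scopes are placeholders, not a real provider's API.
# A production client would also use PKCE and verify "state" on return.
import secrets
import urllib.parse

import requests

AUTHORIZE_URL = "https://auth.example.com/authorize"
TOKEN_URL = "https://auth.example.com/token"
CLIENT_ID = "my-client-id"
REDIRECT_URI = "https://app.example.com/callback"

# Step 1: send the user to the provider; credentials are typed there,
# never into the third-party app.
state = secrets.token_urlsafe(16)  # CSRF protection, checked on return
params = {
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "contacts.read",
    "state": state,
}
print("Send the user to:", AUTHORIZE_URL + "?" + urllib.parse.urlencode(params))

# Step 2: the provider redirects back with ?code=...; exchange it for
# an access token. The app ends up with delegated access, no password.
def exchange_code(code: str) -> str:
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]
```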

[0:07:07] Danny Allan: So, what is the OpenID Connect that you're referring to?

[0:07:10] Aaron Parecki: Yes. So, OpenID Connect is an extension of OAuth. It's actually developed in a totally separate working group, in the OpenID Foundation, and it adds a couple of things onto the OAuth flow. So, forget about OpenID 1, that's not relevant anymore. But OpenID Connect is the idea of adding an ID token alongside the access token to communicate information about the user's login event. That's really the main thing that it adds. It also has a couple of little protocol bits that are different here and there, but the main thing is that it's adding identity information into the OAuth flow. Because again, if you think about API access, which is what OAuth was created for, you actually don't need to solve the identity problem in the OAuth protocol layer.

Because the goal of granting a third-party application access to an account is that it can go and actually just make API calls. It doesn't need to know who the user is. So, I like to use this analogy of checking into a hotel, where you go to the front desk of the hotel, you check in, you show the person your ID, you get this key card, and the key card gives you access to things in the hotel. So, that key card is like an OAuth access token. You can use it to access the room, access the gym, the pool, whatever. But that key card doesn't represent any identity information about you. It just represents access. That's really the model that we use in OAuth, too: the access token represents delegated access to data. It doesn't necessarily represent a user identity.

So, if the application, whether third-party or first-party, does want to know information about the user, you need to do something in addition to the base OAuth protocol. And that's what OpenID Connect does. OpenID Connect adds in this ID token that actually says things about the user, like their name or their unique ID or things like that.
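A quick way to see the distinction: the access token stays opaque to the client and is simply attached to API calls, while the ID token is a JWT whose payload carries identity claims. Here is a small self-contained sketch using fabricated placeholder tokens; a real client must verify the ID token's signature rather than decoding it blindly:

```python
# Access token vs. ID token, sketched with only the standard library.
# Both tokens below are fabricated placeholders, not from a real provider.
import base64
import json

def b64url(obj: dict) -> bytes:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).rstrip(b"=")

# An access token is opaque to the client: just a string attached to API
# calls ("Authorization: Bearer <token>"), never inspected locally.
access_token = "opaque-access-token-string"

# An ID token is a JWT (header.payload.signature) whose payload carries
# identity claims about the login event.
id_token = b".".join([
    b64url({"alg": "none"}),  # a real token names a real signing algorithm
    b64url({"iss": "https://auth.example.com", "sub": "user-123", "name": "Ada"}),
    b"",                      # a real token carries a signature to verify
]).decode()

# Decode the payload segment. (A real client must verify the signature
# first; that step is skipped here for brevity.)
payload = id_token.split(".")[1]
payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
claims = json.loads(base64.urlsafe_b64decode(payload))
print(claims["sub"], claims["name"])  # identity info, distinct from access
```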

[0:09:08] Danny Allan: That makes total sense. When OAuth was first rolled out, what was not considered? So, I know we've gone from OAuth 1.0 to 2.0 to 2.1 that we've iterated through. What things did we miss? And the reason I'm asking this is because I want to relate it to AI in a moment. We're about to do similar things in an AI world. But what did we not consider in the first implementation of OAuth that we learned over the course of the implementation?

[0:09:34] Aaron Parecki: Yes. I mean, it's a big question. I will give you an example of the move from OAuth 1 to OAuth 2, because that was pretty significant. With OAuth 1, the goals were fundamentally the same, this delegated third-party app access pattern, but the way that it worked required a credential, like a secret, to be provisioned into the application, and that had the unfortunate side effect of essentially making it not usable for single-page apps or mobile apps. You can't really ship a credential in a mobile app or a single-page app and expect it to be a secret, because you're giving a copy of it to everybody who's running the app. That was essentially a pretty major limitation of OAuth 1 that made it not workable for the future.

There were other aspects of it that were also annoying, like the signing mechanism, but annoying is different from fundamentally not workable. So, the move to OAuth 2 redesigned things in a way that didn't rely on apps having a secret. And in doing that, there was a very big limitation that came out of it, that was not solved at the beginning of OAuth 2. And that is that access tokens are bearer tokens, meaning the only thing you need in order to use the access token is the string itself. And it's just a string. So, you can copy it around, you can share it, you can accidentally log it, you can steal it. All of these things that you can do with bearer tokens are inherently dangerous, right? That wasn't the case with OAuth 1 tokens, because they required using that credential. It's just that in practice, you then couldn't use it for mobile apps. So, it kind of defeated the purpose, right?

That was a pretty big change. But even though bearer tokens are inherently less safe, and everybody knew that at the time, it at least meant that you could push that problem down the road and solve it later. What we have seen is that over the years, there have been several different attempts at solving that problem and adding in this idea of proof of possession, where you need something in addition to the token to use it. Most recently, the DPoP RFC is a solution to that. And that is, again, an example of how this kind of works: even though it wasn't solved at the beginning of OAuth, because the protocol was modular and extensible, we were able to add these things in later and slowly grow the protocol rather than having to completely rewrite it. That's why we're not working on OAuth 3 right now. That's why we're working on OAuth 2.1, which is a consolidation of the good parts of OAuth 2, because we don't need to fundamentally change everything again. I think that was an example of some good design decisions made early on in the OAuth 2 protocol, allowing it to be extended and tweaked into what works, so we're still using it today in much more secure ways than OAuth 1 could be.
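For a sense of what proof of possession adds over a plain bearer token, here is a minimal sketch of a DPoP proof as defined in RFC 9449. It assumes PyJWT with its cryptography extra installed (pip install pyjwt[crypto]); the request URL is a placeholder:

```python
# A minimal sketch of a DPoP proof (RFC 9449): "bearer plus possession".
# Assumes PyJWT with the cryptography extra; the URL is a placeholder.
import json
import time
import uuid

import jwt
from cryptography.hazmat.primitives.asymmetric import ec

# The client holds a private key; only the public half goes in the proof.
private_key = ec.generate_private_key(ec.SECP256R1())
public_jwk = json.loads(jwt.algorithms.ECAlgorithm.to_jwk(private_key.public_key()))

proof = jwt.encode(
    {
        "htm": "GET",                               # HTTP method being signed
        "htu": "https://api.example.com/resource",  # target URI being signed
        "iat": int(time.time()),
        "jti": str(uuid.uuid4()),                   # unique ID, prevents replay
    },
    private_key,
    algorithm="ES256",
    headers={"typ": "dpop+jwt", "jwk": public_jwk},
)

# The request carries both the token and the proof, so a stolen token
# alone is useless without the private key:
#   Authorization: DPoP <access_token>
#   DPoP: <proof>
```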

[0:12:46] Danny Allan: Well, that certainly sounds like the right foundations are in place, and I'm always encouraged when you can go from 1 to 2 and kind of pause there; it tells us that the foundations are solid. I want to draw a parallel then to the world that we're going into right now in AI, because I've been standing on a soapbox for the last little bit and saying, “Hey, it's not APIs anymore, it's LLMs. We have all these LLMs that are backing applications.” And there's no concept baked in, in the early days, at least that I have seen, of: I've authenticated into one system, and then that system talks to a secondary system, substitute LLMs for APIs, and what should I have access to in terms of the large language model in the second system? When you look at MCP servers and some of the things that are being rolled out, there is no concept of authentication or authorisation baked into those.

I guess my question is this: would you agree with that statement? And secondly, is what we have built for OAuth 2.0 and are building for 2.1 broadly applicable to this next world? What are the steps that are being made to enable that to take place out in the market?

[0:13:55] Aaron Parecki: Yes. I generally agree with that assessment, and this again goes back to the idea that OAuth is very extensible and has a lot of active work ongoing in the space. A lot of these kinds of questions, some of them do have answers, and some of them don't have answers yet but are being worked on. If you go back again to the original goals of OAuth, it was very straightforward that it was about this third-party app access. And if that's the entire scope of the problem, there's a lot of things that are out of scope of the protocol that don't need to be solved in a protocol, because they can be solved in an implementation. But as soon as you start applying it to first-party use cases or these kinds of more complicated things where you're chaining things together, a lot of the problems do get brought in scope. And that's where a lot of this work is happening right now, which is then directly applicable to these kinds of new use cases.

Some of them are OAuth specs, some of them are in related working groups. A lot of the same people participate in a lot of the different working groups as well. There's a relatively new IETF working group called WIMSE for workload identity, and in the OpenID Foundation, there's also the AuthZEN working group, which deals specifically with the authorisation model problem. And yes, I think in a lot of those new spaces, again, a lot of it is extending the stuff that we have and building on the building blocks that we have, with OAuth as the foundation. So, it's not like it's entirely new things to learn, which is good. But yes, a lot of new stuff happening there, for sure.

[0:15:35] Danny Allan: Do you believe that the foundations that we have in place today are sufficient for LLMs? Or will we need new extensions on top of those foundations, like cross-application access, that do not exist yet? Do you think we're going to have to add new things, or are they broadly applicable?

[0:15:54] Aaron Parecki: Yes. I think what we've seen with MCP so far is a good example. It's a very new way of using tools, but fundamentally the model of a thing talking to a server, whether that's an intelligent LLM or just a simple software client, that part is actually the same. And because of that, we can apply a lot of the OAuth building blocks to the MCP ecosystem in the same way that they have always been used. There is a resource server, there is an authorisation server, and we can use access tokens to delegate access. That's one of the things I was working on with the MCP folks, helping update the MCP spec to name these things in OAuth terminology so that we can apply the building blocks in the right spots.

That said, there are definitely a few new ideas, even just in bootstrapping authentication, that did rely on some new work that came out of the OAuth group. Most recently, the protected resource metadata RFC helps bootstrap this, where you can discover an authorisation server from a resource server URL rather than having to put the authorisation server URL into the software itself. So, in a typical OAuth flow, you'll end up having to add these configuration lines, the OAuth server, the token endpoint, the authorisation endpoint, into your client code, but that's not a good user experience, or even developer experience, for the way that the MCP tools are intended to work.

So, for that, you want to be able to just put in the URL of the server you're trying to talk to, and everything should flow from there. That uses a very new RFC; it was published just a couple of weeks ago.
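A sketch of that discovery step, using the well-known paths defined by OAuth 2.0 Protected Resource Metadata (RFC 9728) and Authorization Server Metadata (RFC 8414); the server URL here is a placeholder:

```python
# Sketch of discovering an authorisation server starting from only a
# resource server URL, per OAuth 2.0 Protected Resource Metadata
# (RFC 9728) and Authorization Server Metadata (RFC 8414).
import requests

resource = "https://mcp.example.com"  # the only thing the user types in

# Step 1: the resource server publishes metadata at a well-known path,
# including which authorisation server(s) protect it.
rs_meta = requests.get(
    f"{resource}/.well-known/oauth-protected-resource").json()
auth_server = rs_meta["authorization_servers"][0]

# Step 2: fetch the authorisation server's own metadata to find the
# endpoints that would otherwise be hard-coded into the client.
as_meta = requests.get(
    f"{auth_server}/.well-known/oauth-authorization-server").json()
print(as_meta["authorization_endpoint"], as_meta["token_endpoint"])
```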

[0:17:48] Danny Allan: Yes. Otherwise, the developer would have to authenticate every single time they wanted to have an interaction, if I'm understanding the traditional way correctly.

[0:17:58] Aaron Parecki: Yes. Then there's also the client registration problem, which is interesting, but we probably don't need to go too much into that. But yes, even with that kind of stuff, there are some new things, and there are definitely some new questions arising. The authorisation model inside the language model is a huge unsolved problem right now, but I don't want to get into that because there's just too much there right now. But I think one of the more practical concerns is what you mentioned with cross-app access: when you are rolling these things out in an enterprise, in a workforce environment, where employees use single sign-on to sign into apps. I'll sign into my Gmail account, into the calendar, into Zoom, Slack, whatever, right? And I'm always going through the enterprise IDP as me, sitting in front of the computer.

So, what does that mean when the thing that is trying to get access to these resources is no longer software on my laptop, but actually an LLM, right? And that's where there are some interesting new problems to solve. So, cross-app access is the friendly name for a spec with a very long name that I've actually been working on for a couple of years: the adopted spec is called identity and authorisation chaining across domains. That can be used for a number of different things, one of which is this pattern of using an AI agent inside of an enterprise context. I'm working on defining a more specific profile of that for this specific use case: you want to sign into your AI tool through the enterprise IDP, and then you want to grant that AI tool access to all the other enterprise apps. That way, I can just chat at it and it already has access to all the Google Docs, it already has access to my Zoom calendar, and everything, right?

Without that, it's a terrible user experience, because I have to go and, one by one, connect all these tools, right? We're seeing an explosion of these tools, so it's only going to get worse. But the other huge problem is actually on the enterprise admin side. Think about what happens with these tools today, even just with something like dropping a Google link into Slack. Slack will want to preview it, right? It'll try to go fetch the document from the Google API and show you the preview inside of Slack. To actually make that work today, what ends up happening is the user has to click through and grant that consent, right? Now, for that example of Slack trying to get data from Google, it's not a good user experience, but it's not that bad, because it's really just once, right? They do it once and then you're done. But once you apply this to the LLM world and the AI tools, it's not just once. Now, you log into your chat software and it's like, please connect these 20 tools one by one.

But when it's that direct connection between these applications, that connection happens without any visibility for the enterprise admin. They can't see it happening. The only way they could know it happens is if the server that is granting access has some sort of internal admin screen that shows logs of what's been going on; some of these do, and some of them absolutely do not. So, a lot of this communication between applications is essentially happening behind the backs of the admins in the enterprise, and they can't see that this data is being shared between applications, which is not a good place to be. So, they can't see it. They can't shut it down. They can't monitor it, right?

That's one of the other goals with this cross-app access pattern: to actually put the IDP in the middle of the flow, so that when these two apps want to talk to each other, whether it's Slack to Google or your chatbot to the Google Calendar, they first have to negotiate through the IDP in order for that to be visible and controllable. And then the admin can actually see and control that access.

[0:22:21] Danny Allan: So, in that world, the administrator is saying these two applications can talk to one another, but then each application controls at a data level, what the individual is going to get access to. Am I understanding that correctly?

[0:22:35] Aaron Parecki: Yes, exactly. Because again, that kind of internal data model is a very large problem to solve. Trying to do that in a way that is then federated is even harder. Again, there is active work going on in standards groups to solve that. But I don't think I will be able to convince the top 20 enterprise software companies to change their authorisation models to use the same model so that they can federate the authorisation itself, right?

[0:23:09] Danny Allan: Yes.

[0:23:10] Aaron Parecki: So instead, what I do think we can do is get those 20 companies to add a new way to issue the access tokens that they already issue. They already have an OAuth API. They already issue access tokens. Those access tokens are scoped to a user, and they already have their own internal authorisation model: when an access token comes in, they know what it can access for that user's data.

So, what we're doing is saying, here's a new way to issue access tokens. Instead of going through a redirect flow with the user, you can do it all in the back channel, talking to the IDP and negotiating it that way. But the output of that is the same access token that you would have issued anyway. That way, you get to leverage your existing authorisation model, and you don't really have to touch any of that code. That part stays the same.
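The cross-app access profile is still a draft, so there is no settled wire format yet, but the general shape follows OAuth 2.0 Token Exchange (RFC 8693): the requesting app presents an assertion obtained from the IDP and receives back the same kind of access token the resource app already issues. A rough sketch under those assumptions, with all URLs and token values as placeholders:

```python
# Rough shape of a back-channel token issuance, modeled on OAuth 2.0
# Token Exchange (RFC 8693). The cross-app access profile is still in
# draft, so treat parameter choices and URLs here as illustrative
# assumptions, not the final spec.
import requests

# An assertion obtained from the enterprise IDP, representing the user
# and the IDP's approval of this app-to-app access (placeholder value).
idp_assertion = "...jwt-from-the-idp..."

resp = requests.post("https://calendar.example.com/oauth/token", data={
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": idp_assertion,
    "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
    "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
})
resp.raise_for_status()

# The result is the same kind of access token the API already issues,
# so its existing internal authorisation model is reused unchanged.
access_token = resp.json()["access_token"]
```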

[0:24:03] Danny Allan: That makes sense. So, what does an organisation have to do to roll this out? If you think of this organisationally, and they're trying to enable LLMs and MCP servers and do all of this, what are the one, two, three steps they need to take? Obviously, they need an IDP in there, but what is the priority of steps they go through?

[0:24:24] Aaron Parecki: Yes, I would say the top priority, of course, is single sign-on to all the apps. You don't really have a story unless you have single sign-on in the first place. Once you have single sign-on, that lets you assign groups of users to applications. You can then control at least which users are able to use which applications, instead of just having it all be a free-for-all. This new idea of the apps actually talking to each other, the cross-app access pattern, is new; it is in development, and it is not available in any software yet because it is based on in-progress specs. We are working on it. We are looking for partners to work on it with us, and hopefully we will get to the point where that is the next requirement for the enterprise use of these tools: being able to have the control of apps talking to each other managed at the IDP as well. But really, you have to start with single sign-on, because that's the foundation.

[0:25:25] Danny Allan: Yes. I mean, you're working at the perfect place for this at Okta, because obviously single sign-on is fundamental to what you're doing. How have you seen industry adoption help this or impede this? Do you find, as you roll this out, that organisations say, yes, let's all get on board with this particular protocol or this particular way? Or what are the roadblocks you've seen in the past to this being successful?

[0:25:49] Aaron Parecki: Yes, I mean, doing new things is always hard, but that's why one of the design goals of this has always been that the new spec tries to add as little as possible to what already exists. So again, I'm not trying to say everybody should use the same internal authorisation model and then federate those objects around and everything will be solved, right? That is a possible path, but it's not a practical one.

So, I think more practically, it's about trying to find the smallest change you can make to your product to enable this very powerful use case. That makes it much more palatable for people to actually start playing around with it and start building.

[0:26:35] Danny Allan: Who's on board with the cross-app access right now? Do you have big companies on board? Or are these individuals like yourself that are promoting it? What is the power behind it right now?

[0:26:46] Aaron Parecki: Yes, we are working with some companies that you've absolutely heard of. I can't share the names yet, but there will be some announcements soon about that, and we're looking for more. If you are building an application that is used by enterprise companies and you want to provide your customers that level of control that they want, get in touch with me. I'm happy to help you work on this and explain it.

A lot of the work that we're doing with this is public. I can put links in the show notes to the spec itself and to a GitHub repository with a complete implementation of an IDP and two applications that talk to each other through that IDP. So, it's there. It's not like it's a secret, but hopefully we'll have some actual names we can share soon.

[0:27:35] Danny Allan: We will. We'll put that in the show notes for the session. What about where we are right now? How do you think about that? Clearly, the spec isn't in place yet, and we don't have the common IDP connections taking place. Would you caution people against rolling out MCP servers? I'll give you a very specific example that I think about every day. Snyk has an MCP server, actually. By the time this episode rolls out, you can go to GitHub and download the MCP server that wraps our CLI, which can obviously secure code.

The reality is that we could talk to an agent over here that is generating code, but you might not want the individual who is generating code to see all the other code in all the repositories; there might be things that you shouldn't see on either side of that. Would you say in that circumstance, “Well, hold off, don't implement this until everything is in place and all the security is in place?” Or would you say, “No, go ahead, it's going to be solved within a reasonable amount of time?”

[0:28:35] Aaron Parecki: I mean, everyone has their own different risk profile for this stuff, right? But I think one of the important things, if you are either building MCP servers or rolling them out, is that the safer way to do it right now is to make sure it leverages the individual user's current authorisation model. That is, don't create a service account at some API, wrap that in the MCP server, and then let everybody just use it, because then you're essentially letting everybody have the same level of access that that service account has.

So instead, in the current world, you can go through an OAuth flow for the MCP servers. And if you do that, then the access granted to the MCP client is scoped to that one user. That way, it leverages the existing authorisation model of whatever you're talking to at the other end, right? Assuming they did it well, which you kind of have to assume if you are using that API in the first place.

So, I think that's probably the most practical advice right now. But yes, it's definitely missing a lot of that enterprise IDP context right now.
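To illustrate the difference between the two patterns Aaron describes, here is a small sketch of an MCP-style tool handler. The endpoint, token names, and function are hypothetical; the point is that each call is made with the caller's own OAuth token, so the downstream API's per-user authorisation model decides access:

```python
# Two ways to wire an MCP server to a downstream API. The endpoint and
# names here are hypothetical, for illustration only.
import requests

# Anti-pattern: one over-privileged service account shared by everyone.
# Every caller inherits the service account's full level of access.
SERVICE_ACCOUNT_TOKEN = "shared-service-account-token"

# Safer pattern: each request carries the individual user's own OAuth
# access token, obtained via a per-user OAuth flow, so the downstream
# API enforces that user's existing permissions.
def list_repos(user_access_token: str) -> list:
    resp = requests.get(
        "https://api.example.com/repos",
        headers={"Authorization": f"Bearer {user_access_token}"},
    )
    resp.raise_for_status()
    return resp.json()  # only what this user was already allowed to see
```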

[0:29:50] Danny Allan: Yes, the AI world, obviously, and the MCP world especially, is moving at a pace right now that I have not seen in the past. It has moved faster than any previous technology that I have ever witnessed, which is why I ask the question. We have customers asking us every day for these types of capabilities, and I always kind of wonder in my head, is it a good idea to give them those capabilities knowing that the controls for it are not in place? But I guess your answer is right: it comes down to risk profiles.

[0:30:20] Aaron Parecki: Yes. Again, everybody has their own level of risk, but definitely take it into account. Don't just blindly roll these out. You need to look at what kind of access is being granted and whether you have the ability to turn it off, if you want to, all those kinds of things.

[0:30:38] Danny Allan: Well, Aaron, I love that you are looking at this. I actually noticed a post that you had on LinkedIn, I think I mentioned earlier, and it gave me hope, because I actually have been ranting about this for a while. Everyone's rolling out MCP servers, and no one's actually thinking about the authorisation, which is an issue. What gets you most excited for the future?

So, you've seen this before. Do you think we're learning from the mistakes, that we're building it in at the very beginning in the right way? Like, what gets you excited?

[0:31:04] Aaron Parecki: Yes. I'm relatively new to the MCP world as well. I didn't really know about it until March, which is when I started diving in and looking at their OAuth spec. But I was really excited that the update to the spec that came out in March did actually use a lot of the modern OAuth building blocks. That gave me a lot of hope that people are thinking about this. They are thinking about it, looking at the more recent documents the OAuth working group has been publishing, and trying to do the right thing, using the work that exists in the best way possible.

I helped guide it along a little bit to tighten things up here and there. But generally, it was very much heading in a good direction. That gives me a lot of hope, because I have also seen the opposite happen with other projects and communities: let's just abandon the 20 years of history and start from scratch, and clearly it'll work differently this time. But fundamentally, what the OAuth world has done is built on a long history of working on these interesting problems, solving them incrementally, and slowly making things better and better. It's nice to see that applied in a practical application like this.

[0:32:24] Danny Allan: It's definitely the case that we're thinking about it from the very beginning and using the learnings from the past. In fact, credit to the founders, MCP looks an awful lot like the Language Server Protocol. If you look at the actual text, the data being exchanged, it's JSON-RPC. It's basically the same format that LSP uses, which is an existing standard. And I like that, to your point, they've extracted out the identity and authorisation as an extension, much better than trying to build it into the core protocol itself, standing on the shoulders of giants and what people have implemented in the past. I don't know whether you've dived into the MCP protocol, but that was my TL;DR when I looked at it: this is very similar to LSP.
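For readers who haven't looked at the wire format: MCP messages are JSON-RPC 2.0, structurally the same shape LSP uses. A sketch of what a tool invocation looks like, with the tool name and arguments being hypothetical:

```python
# The wire format being described: MCP requests are JSON-RPC 2.0
# messages, the same structure LSP uses. The tool name and arguments
# below are hypothetical.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # MCP's method for invoking a named tool
    "params": {"name": "scan_code", "arguments": {"path": "./src"}},
}
print(json.dumps(request))
```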

[0:33:13] Aaron Parecki: Yes, I'm not familiar with LSP, actually. But yes, I mean, it's a JSON-RPC protocol. It makes sense for the space they're working in, and there are a couple of interesting things they've done as well with SSE and things like that. But yes, I'm glad they didn't try to reinvent authorisation from scratch, right?

[0:33:32] Danny Allan: Yes. So, any last words, Aaron, just for the audience before we leave? You're speaking with a lot of developers who are interested in security. If you could give them one word of wisdom for moving forward, what would it be?

[0:33:46] Aaron Parecki: I guess, don't reinvent the wheel. I mean, it's an old saying, but it's very true. There's a lot of work out there. Even if you think that the current wheel is very broken and too hard to use, do take a look at the context for it. If you look at OAuth, I agree, a lot of it is messy and confusing and hard to understand, but it's there for a reason, and it was not accidental. There are a lot of people who've spent a long time writing these words and choosing them very carefully. It's at least helpful to understand why they're there. Then you can make a decision about changing or improving things. That's frankly a lot of what we're doing with the specs as well. OAuth 2.1 is going back and updating some of the language used in the original spec, because we talk about things differently 15 years later, right? So, yes, build on the work that's out there.

[0:34:44] Danny Allan: Yes, couldn't agree more. Whether it be cryptography or authentication, don't reinvent the wheel; use what's out there. There are people who've been through this before, and let's learn from them.

Well, Aaron, it was great to have you on The Secure Developer. Thank you for joining us today. If people want to reach out to you and find more, what's the best way to contact you?

[0:35:01] Aaron Parecki: Find me on LinkedIn. That's a great way to contact me. And also, my website, aaronpk.com. You can find recent articles I've written about OAuth and MCP on there, and you can find my contact info there as well.

[0:35:13] Danny Allan: Excellent. Well, thank you for joining us, Aaron, on The Secure Developer, and thank you to everyone for joining us today. We look forward to seeing you on the next episode of The Secure Developer. Thanks.

[OUTRO]

[0:35:26] Guy Podjarny: Thanks for tuning in to The Secure Developer, brought to you by Snyk. We hope this episode gave you new insights and strategies to help you champion security in your organisation. If you like these conversations, please leave us a review on iTunes, Spotify, or wherever you get your podcasts and share the episode with fellow security leaders who might benefit from our discussions. We'd love to hear your recommendations for future guests, topics, or any feedback you might have to help us get better.

Please contact us by connecting with us on LinkedIn under our Snyk account, or by emailing us at thesecuredev@snyk.io. That's it for now. I hope you join us for the next one.
