Episode 160

Season 10, Episode 160

Rethinking Secure Communication With Mrinal Wadhwa

Hosts:
Danny Allan


Episode Summary

In this episode of The Secure Developer, Danny Allan sits down with Mrinal Wadhwa, CTO at Ockam, to explore the evolving landscape of secure communication in distributed systems. They discuss the challenges of securing microservices, IoT networks, and Kubernetes environments and how traditional TLS-based security models may no longer be sufficient. Mrinal shares insights into Ockam’s approach to end-to-end encrypted, mutually authenticated channels and the impact of WebAssembly, passkeys, and modern cryptographic identity management on security. Tune in for a deep dive into how organizations can rethink security at runtime to minimize risks in today’s complex digital ecosystems.

Show Notes

Security in modern applications is more challenging than ever, with microservices architectures, IoT deployments, and distributed computing environments introducing new risks. In this episode, Danny Allan welcomes Mrinal Wadhwa, CTO at Ockam, to discuss how secure communication models need to evolve beyond traditional TLS and perimeter-based defenses.

Topics covered include:

  • The challenges of securing microservices and Kubernetes clusters
  • How end-to-end encryption and mutual authentication can minimize risk
  • The importance of cryptographic identities and key rotation at scale
  • How Ockam enables secure channels across multiple transport layers (TCP, Bluetooth, Kafka, etc.)
  • The role of WebAssembly and passkeys in rethinking security models
  • Shifting from perimeter-based security to secure-by-design communication

Mrinal shares key insights on how organizations can rethink risk at runtime, considering the number of people and systems involved in data flow rather than just static build-time dependencies. Whether you're a security leader, developer, or architect, this episode provides actionable insights on building trust in your infrastructure without compromising performance or agility.

Links


Mrinal Wadhwa: "If you take people as a proxy for risk, the number of people in your dependency chain at build time is an order of magnitude, if not two, lower than the number of people in your dependency chain at runtime. Think about the risk at runtime to the flow of the data. Then, think about removing that risk in various ways. That's one way to do it. There are other ways. Encrypt your data when you're storing it, et cetera. There's all sorts of ways to approach it. Don't send your data to certain destinations. That's also a way to remove that risk. I guess my tip here is, think about the risk at runtime to data flows."

[INTRODUCTION]

[0:00:43] Guy Podjarny: You are listening to The Secure Developer, where we speak to industry leaders and experts about the past, present, and future of DevSecOps and AI security. We aim to help you bring developers and security together to build secure applications while moving fast and having fun.

This podcast is brought to you by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down. Snyk makes it easy to find and fix vulnerabilities in code, open-source dependencies, containers, and infrastructure as code, all while providing actionable security insights and administration capabilities. To learn more, visit snyk.io/tsd.

[EPISODE]

[0:01:22] Danny Allan: Welcome to another episode of The Secure Developer. We're super excited to have you back, and I am very excited to be with you today because we're going to delve into a topic that's always been near and dear to my heart. I started in security in the networking space and doing some hacking on Banyan VINES Networks. I'm going back now about 30 years. I'm joined today by someone who has a tremendous history in this. He's the CTO at Ockam, and I would like to welcome to the show, Mrinal Wadhwa. Mrinal, how are you?

[0:01:53] Mrinal Wadhwa: Very good. Thank you, Danny. I also spent a lot of time in networking systems in the beginning of my career. So, it's very, very exciting to be here and it's lovely to have this conversation with you.

[0:02:04] Danny Allan: Yes, likewise. Now, maybe we can just kick off. Why don't you give an introduction for yourself to the audience, and how your career started, and how you ended up where you are today?

[0:02:13] Mrinal Wadhwa: I started my career at EMC, the data storage company. Right at the get-go, the problem I was sort of solving in that team was around high availability of very large-scale data. This was 20-ish years ago. We were thinking about how do you make data available across geographies in a way that is highly available, even in cases of failure, et cetera. So, that kind of started this thinking about large-scale distributed systems for me.

Then, very quickly, that led me to other types of distributed systems problems. I became very interested in Erlang in the early-to-mid 2000s. That taught me a lot about thinking about distributed systems, and messaging, and fault tolerance, and things like that. The types of systems I got involved in then naturally became that; I spent a bunch of time in the Hadoop ecosystem, for example.

Then, about 10-ish years ago, maybe a little bit longer, 11 years ago, someone asked me to be the CTO of a hardware company that wanted to become an IoT company. This company was called Fybr. They made sensors and controllers for city infrastructure, but these things weren't connected. My job was to come in and design a system that would connect all of them. So, in designing that system, and then deploying it all around the world, I started to realise that securing these systems is a pretty challenging problem. Because now, in a typical deployment, we had like 100,000 entities that needed to be trusted to not only send us information, but also to take actions in the physical world.

The application domain was fairly critical; it was city infrastructure, water systems and red-light systems. Then, we had some customers that were airport deployments and things like that. I spent a lot of time thinking about how we secure this large-scale distributed system that's sending lots of messages at a very high rate. That led to my current job, because I started thinking the capabilities we built in that IoT system should actually be general purpose. We should be able to create secure, mutually authenticated connections from anywhere to anywhere in any kind of system, not just in IoT. That was the genesis of our open-source project, which is called Ockam, and also the company, which is also called Ockam.

[0:04:53] Danny Allan: Well, that is awesome. I'm glad that you have that start in the application software space, because most of the audience, I believe, is very much involved in software. When I think of security in these systems, I go back to my computer science days. This is going back a while again. But I think of the seven-layer stack, where you start down low with IP, then you have some transport management on top of that with TCP, then TLS, and secure messaging on top of that. I guess my question is this: when people are building software, do they need to think about the security between the transport layer and the application? Do you find that software developers are thinking about that, or are they all just opening a socket and not worrying about it?

[0:05:37] Mrinal Wadhwa: I run into a mix of those. There are people who are not thinking about it, but their systems are such that they should be considering it. I also run into people who have thought about it extensively and have tried various things to minimise their challenges. But you mentioned the seven-layer stack. The challenge with the seven-layer stack is the security protocol there, typically is TLS, and it's sitting on top of TCP. The problem you encounter is, in a very large number of modern application use cases, it's impossible to create a single TCP connection from a client to an actual server of that information. What you end up doing is creating TCP connections to intermediaries that are usually exposed to the internet.

Then, stuff that's elsewhere is still not getting the security properties of TLS. And that's where things, I think, become very risky. So, Ockam, specifically, is a solution where, instead of a secure channel sitting right on top of a transport layer connection, where the guarantees hold only as long as the underlying transport layer connection does, you can have a secure channel that sits on top of a route described by three TCP connections, or 10 TCP connections, or a TCP connection, a UDP connection, and a Bluetooth connection. You can mix and match these routes of transport layer connections. But over this entire route, you can have an end-to-end encrypted, mutually authenticated, secure channel. That unlocks a whole series of capabilities and changes the way you can then think about risk in your application. To answer your question, I encounter both types, and it's always an interesting conversation depending on where someone's starting.

[0:07:39] Danny Allan: Yes, it used to be that your connection was kind of point-to-point. Well, I don't even know if that was true 20 years ago. But certainly, today it's not. In fact, most of the applications we're working with are Kubernetes-based, in a cluster, and there's a gazillion microservices that are all communicating with one another. Forget about one hop or two hops; I find it's often a thousand hops in applications. Do you find those microservices behind the gateways typically are encrypted inside the organisation, inside the Kubernetes cluster, or no?

[0:08:14] Mrinal Wadhwa: No. Usually not. People consider it, but then realise it's complex to do in Kubernetes environments, and they end up going, "Oh, we've got a boundary, we're okay." So, most often, we encounter Kubernetes deployments with not enough security in the microservice-to-microservice communication piece. There's also usually termination of TLS at some front-end load balancer. Beyond that, the traffic is kind of moving in the clear, in the typical deployment I encounter. In the better cases, they'll terminate the internet-facing TLS at the load balancer, and then try to set up a new TLS. But then, you get into the problem of, how do you do mutual authentication in TLS, how do you do it in a way that scales to your microservice architecture, and all sorts of complexities around that.

[0:09:05] Danny Allan: Do you think they're not doing it because of the complexity of setting it up? Is it a security versus simplicity trade-off, or is it just that they're not concerned about it? Because most of the developers I talk to, or at least the development community that I speak with, do care about security. But the complexity of setting these things up is what holds them back most of the time.

[0:09:30] Mrinal Wadhwa: That's true. That's been my experience as well. I rarely encounter someone who goes, "I don't want to reduce the chances that our system gets compromised." But, then, they trade it off against other things on their plate, and all the other things that they could be doing. I think security, privacy, performance, these are all features of a product someone is working on. Unless you can tie back the features to some sort of value proposition to the end customer, it can be hard for people to justify the work involved, especially if the work is complex.

[0:10:11] Danny Allan: Yes. Thankfully, most of our audience, the developers would care about security, and would want that security. How do you think about it in the context of offline items? You mentioned IoT earlier. Can you establish a handshake with something that is not immediately available? Or, you talked about multiple protocols. I'm interested in communication that goes over both Bluetooth, which is a radio transmission versus UDP and TCP, which are also very different. How do you do that? Well, I'm just curious on how you do something that's offline or how you do something that crosses multiple protocols.

[0:10:49] Mrinal Wadhwa: The first part of the question is fairly simple. It is possible to do handshakes with things that are offline. The way you do that is, the thing that is intermittently online is usually the initiator of the channel, and the thing that can be highly available, some sort of service, can be the responder in that channel. Once you have that mechanism in place, those initiators can come in, set up a secure channel, and oftentimes cache the state of the channel, so they don't have to re-establish the channel. In that way, we can save battery life, et cetera.

Then, once that channel is initialised, you can keep benefiting from that channel over a long period of time and do rotations in the life cycle of the channel itself. That's important, because you don't want to end up with all of your devices communicating using the same API token, hoping no one gets that token. Surprisingly, that seems to be a very common deployment, where people are relying on the security of one API token that many, many clients actually possess. Which is unfortunate, but it's real.
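The initiator/responder pattern Mrinal describes can be sketched in Python. This is a toy model, not Ockam's API: the class names, the handshake abstraction, and the cache layout are all assumptions for illustration.

```python
import secrets

class Responder:
    """Highly available service side; accepts handshakes and resumptions."""
    def __init__(self):
        self.sessions = {}          # session_id -> session key

    def handshake(self):
        # Full (expensive) channel establishment, abstracted to key agreement.
        session_id = secrets.token_hex(8)
        key = secrets.token_bytes(32)
        self.sessions[session_id] = key
        return session_id, key

    def resume(self, session_id):
        # Cheap resumption: reuse previously negotiated state if still valid.
        return self.sessions.get(session_id)

class IntermittentInitiator:
    """Battery-powered device; initiates once, then caches channel state."""
    def __init__(self, responder):
        self.responder = responder
        self.cached = None          # (session_id, key)
        self.full_handshakes = 0

    def connect(self):
        if self.cached:
            session_id, key = self.cached
            if self.responder.resume(session_id) == key:
                return key          # resumed; no new handshake needed
        session_id, key = self.responder.handshake()
        self.full_handshakes += 1
        self.cached = (session_id, key)
        return key

responder = Responder()
device = IntermittentInitiator(responder)
k1 = device.connect()   # first contact: full handshake
k2 = device.connect()   # device wakes up again: resumes from cache
```

The expensive key agreement happens once; later wake-ups resume from cached state, which is what saves battery and latency on constrained devices. Rotation would periodically replace the cached key through a fresh handshake.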

[0:12:02] Danny Allan: It makes me feel like I'm going back to the days of SSH tunnelling, where I'd establish one SSH tunnel, and then push everything through that one connection inside it. But, okay, that's interesting. Obviously, the encryption and security of these things is super important. How do you deal with the performance of it? Because I have to believe one of the things that held us back historically was that, if you terminated something and re-established, there was a performance implication. That's always held software developers back from trying to do this at scale. Are there ways to mitigate the performance implication of these installations?

[0:12:44] Mrinal Wadhwa: Great question. And I also feel like I didn't fully answer your first question, so I'm going to try to combine those answers. The second part of the previous question was around how you do this over multiple protocols, right?

[0:12:56] Danny Allan: Yes.

[0:12:56] Mrinal Wadhwa: I think this leads nicely into how we think about performance in our system and these use cases. Traditionally, secure channel protocols have been designed on top of transport layer protocols. So, TLS is on top of TCP. Bluetooth has a secure channel protocol that's on top of Bluetooth. Other systems, like satellite communication protocols, have their own secure channel implementations.

What you'll notice, if you look back at the history of papers published around security vulnerabilities, is that TLS over the last 30 years has incrementally gotten a lot better. The latest versions, especially, are fantastic designs that consider all sorts of possibilities. But the security protocols in the other transport layers usually got left behind, right? So, they aren't as safe as TLS 1.3, far from it, usually. And they have this problem that software in a Bluetooth stack gets shipped over very, very long cycles, so you can't even fix problems.

With Ockam, we took a slightly different approach, which is, we were like, secure channels should not sit directly on top of a transport layer connection. Instead, we inserted an application layer routing protocol just above the transport layer connection, which means that I can take messages from one transport connection and hand it to another transport connection.

[0:14:26] Danny Allan: I got it.

[0:14:27] Mrinal Wadhwa: So, I can take a message from TCP and hand it to Bluetooth. I can take a message from Bluetooth and hand it to Kafka, et cetera. So, this routing protocol is like a relay race. It takes a packet, and then it hands it over. Once you have this, and you place a secure channel protocol on top of it, I can have a secure channel that goes end to end over a Bluetooth hop, then a TCP hop, then another TCP hop into my Kubernetes cluster, then another TCP hop to my actual microservice pod. The secure channel is then terminated at that location. The end result is, my tiny IoT device gets an end-to-end encrypted channel all the way back to my pod in my Kubernetes cluster. And all the parties in the middle are no longer adding risk to my channel. So, this basic mechanism, it turns out, unlocks a whole series of possibilities.
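A minimal Python sketch of that relay-race idea. The hop names, the SHA-256 counter-mode stream cipher, and the routing structure are illustrative stand-ins, not Ockam's actual wire protocol: each hop only forwards opaque bytes to the next address in the route, and only the far end holds the key.

```python
import hashlib, hmac, itertools, secrets

def keystream(key, n):
    # Toy stream cipher: SHA-256 in counter mode (illustration only).
    out = b""
    for ctr in itertools.count():
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

def seal(key, plaintext):
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()
    return ct, tag

def open_(key, ct, tag):
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(key, len(ct))))

network = {}   # address -> handler; each hop speaks its own "transport"

def relay(name):
    def handler(route, ct, tag):
        nxt = route[0]
        return network[nxt](route[1:], ct, tag)   # hand the baton to the next hop
    network[name] = handler

e2e_key = secrets.token_bytes(32)   # known only to the two ends

def endpoint(name):
    def handler(route, ct, tag):
        return open_(e2e_key, ct, tag)            # decrypt only at the far end
    network[name] = handler

for hop in ["bluetooth-gw", "tcp-ingress", "k8s-svc"]:
    relay(hop)
endpoint("pod")

ct, tag = seal(e2e_key, b"sensor reading: 42")
received = network["bluetooth-gw"](["tcp-ingress", "k8s-svc", "pod"], ct, tag)
```

The relays never see plaintext, and because the HMAC tag is checked only at the destination, an intermediary cannot tamper with the payload undetected.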

I started with this in IoT land, but think about it this way. Let's say I have a server in Ockam's Kubernetes cluster, and I want Snyk to access that server. What is the traditional approach? I will take my server and put it on the internet, and then Snyk will make an outgoing connection to that server, and there'll be some communication. The challenge is, for everything behind my internet-exposed server, all the Kubernetes infrastructure, et cetera, I now have to design other ways to protect the data. But with the Ockam approach, I don't have to expose my server on the internet. You don't have to expose your client to the internet. Instead, we can both make outgoing TCP connections to an encrypted relay service on the internet. Ockam offers a paid SaaS offering, but it could be someone else running an Ockam relay.

So, you run this relay, both sides make an outgoing TCP connection to it, and we then set up an encrypted channel over the relay. Both our client and server are entirely private, yet Snyk can consume data from Ockam's services. That means my server, and all the risk it would be exposed to by sitting on the internet, is no longer a challenge I have to think about. So, this is a different kind of use case that gets unlocked with that same capability.
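The rendezvous pattern can be sketched the same way. The names and the HMAC-only "channel" are simplifying assumptions (a real secure channel would also encrypt and run a proper handshake); the key point is that both sides dial out, and the relay forwards bytes it can neither read nor forge.

```python
import hashlib, hmac, queue, secrets

class Relay:
    """Internet-facing rendezvous point. Forwards opaque blobs; holds no keys."""
    def __init__(self):
        self.mailboxes = {}

    def register(self, name):
        # Each party makes an *outgoing* connection; nothing private listens publicly.
        self.mailboxes[name] = queue.Queue()
        return self.mailboxes[name]

    def send(self, to, blob):
        self.mailboxes[to].put(blob)

def seal(key, msg):
    # Integrity protection only, for brevity; a real channel also encrypts.
    return msg + hmac.new(key, msg, hashlib.sha256).digest()

def open_(key, blob):
    msg, mac = blob[:-32], blob[-32:]
    assert hmac.compare_digest(mac, hmac.new(key, msg, hashlib.sha256).digest())
    return msg

relay = Relay()
server_inbox = relay.register("ockam-server")  # dialed out from the private cluster
client_inbox = relay.register("snyk-client")   # dialed out from the private client

e2e_key = secrets.token_bytes(32)  # assume the two ends agreed on this in a handshake

relay.send("ockam-server", seal(e2e_key, b"GET /metrics"))
request = open_(e2e_key, server_inbox.get())
relay.send("snyk-client", seal(e2e_key, b"200 OK"))
response = open_(e2e_key, client_inbox.get())
```

Neither endpoint ever accepts an inbound connection; compromising the relay yields only ciphertext-like blobs it cannot alter without failing the end-to-end check.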

Then, a third one, before I get into performance, a completely different one: there's this company, Redpanda, that launched a zero-trust Redpanda using Ockam. Redpanda offers Kafka-like brokers as a service. They have a special protocol for it. But at the end of the day, you have a Kafka producer and a Kafka consumer wanting to send each other data. That data goes through a cloud service offered by Redpanda. The traditional approach with all the other Kafka vendors currently is that the data gets TLS encrypted at the Kafka client, and gets TLS decrypted at the front-end load balancer of the Kafka broker.

Then, who knows what happens? But it's in the clear, before someone decides to encrypt it just before storing it. Which means that, if I am a consumer of such a service, I have to think about the risk added to my data by this third-party service I am relying on, right? But in the Redpanda case, because Ockam can move data through any transport, it can move data through a Kafka topic. So, at the Kafka producer, the data gets encrypted. It goes over a TCP connection, stays encrypted, gets written into a Kafka topic, stays encrypted, gets read back out of the Kafka topic, stays encrypted, goes down another TCP connection, gets read by a consumer, and that's when it gets decrypted.

All of these topologies and all sorts of use cases get unlocked to create end-to-end trust, end-to-end mutual authentication, end-to-end guarantee of data integrity, and also, end-to-end confidentiality.
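That producer-to-consumer flow can be sketched in a few lines of Python. The SHA-256 keystream is a toy stand-in for a real AEAD cipher, and a list stands in for the Kafka topic; what matters is that the broker only ever stores ciphertext.

```python
import hashlib, hmac, secrets

producer_consumer_key = secrets.token_bytes(32)  # shared by producer and consumer only

def xor_cipher(key, data, nonce):
    # Toy keystream from SHA-256 (illustration; real code would use an AEAD).
    stream, ctr = b"", 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def produce(topic, record):
    nonce = secrets.token_bytes(12)
    ct = xor_cipher(producer_consumer_key, record, nonce)
    tag = hmac.new(producer_consumer_key, nonce + ct, hashlib.sha256).digest()
    topic.append(nonce + ct + tag)      # the broker stores only ciphertext

def consume(topic, i):
    blob = topic[i]
    nonce, ct, tag = blob[:12], blob[12:-32], blob[-32:]
    expected = hmac.new(producer_consumer_key, nonce + ct, hashlib.sha256).digest()
    assert hmac.compare_digest(tag, expected)   # end-to-end integrity
    return xor_cipher(producer_consumer_key, ct, nonce)

topic = []                               # stand-in for a Kafka topic
produce(topic, b'{"order": 7, "card": "redacted"}')
broker_view = topic[0]                   # what the cloud service can see
plaintext = consume(topic, 0)
```

Only the producer and consumer hold `producer_consumer_key`; the broker's view of each record is opaque, so the third-party service adds no confidentiality or integrity risk to the payload.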

[0:18:41] Danny Allan: You're addressing kind of two parts of the triad of confidentiality and integrity. So, that brings us to availability or performance of this. I'm just curious how you solve that at scale when you're talking – well, just doing this in the complexity of organisations today and at the scale that they need to address. How does that work in this model?

[0:19:05] Mrinal Wadhwa: Yes, that question almost directly leads to the third leg in the triad, which is authenticity. The complexity in secure communication infrastructure usually comes from the problem of trust. The problem plays out like this. Let's say I have one entity wanting to talk to another entity, which means two things have to trust each other. But if I have 500 things wanting to talk to other things in that cluster of 500 things, I now have to set up mutual trust in a 500-squared relationship. Everything has to trust everything else. That's where it gets really complex. How do you set up this trust?

The traditional approach is, let all 500 trust one authority, and that authority will give us some sort of, in TLS's case, certificate. In Ockam's case, a credential. Then, we do authentication and authorisation based on that credential. That's the traditional approach, but all of this creates a whole series of complexity problems. How do I do mutual trust? How do I manage private keys? How do I rotate those private keys? How do I get credentials? How do I rotate and revoke those credentials? Et cetera, et cetera. At scale, I gave the example of 500, but our typical user is thinking of millions of such entities. So, how do I do this at a million-entity scale, where anything might need to trust anything else? That's where all of this complexity compounds in very hard-to-deal-with ways.

In Ockam, we step by step thought about each piece of that puzzle. For example, the first step was, everything needs a cryptographic identity. This cryptographic identity will be rooted in a cryptographic key. But we never want forever cryptographic keys, because secrets tend to get leaked. So, we need to think about how this identity key will be rotated over time. In Ockam, we use a cryptographic key, an Ed25519 signing key, but a key idea was that each key can sign its successor into existence. So, let's say I trust your service based on its identity key. If that service decides to rotate its private key, my trust in that service is still rooted in the first identity key's public part, which we call an Ockam identifier. So, it allows your service to continue to rotate its private key over time. That means I have eliminated a very important complexity challenge, which is, how do I re-establish trust when you decide to do a rotation. So, that's one example.
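The "each key signs its successor" idea can be sketched with a hash-based one-time signature, which keeps the example in the standard library. Lamport signatures stand in for Ed25519 here, and the identifier and chain layout are assumptions for illustration, not Ockam's actual change-history format.

```python
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Toy Lamport one-time signature keys (stand-in for Ed25519).
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(digest):
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    return [sk[i][bit] for i, bit in enumerate(bits(H(msg)))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][bit] for i, (bit, s) in enumerate(zip(bits(H(msg)), sig)))

def pk_bytes(pk):
    return b"".join(a + b for a, b in pk)

# Identity = hash of the *first* public key; it never changes across rotations.
sk0, pk0 = keygen()
identifier = H(pk_bytes(pk0))

# Rotation: key 0 signs key 1 into existence, key 1 would sign key 2, and so on.
sk1, pk1 = keygen()
attestation = sign(sk0, pk_bytes(pk1))
change_history = [(pk0, None), (pk1, attestation)]

def verify_identity(identifier, history):
    # A relying party checks the chain is rooted in the identifier it trusts.
    first_pk = history[0][0]
    if H(pk_bytes(first_pk)) != identifier:
        return False
    for (prev_pk, _), (next_pk, att) in zip(history, history[1:]):
        if not verify(prev_pk, pk_bytes(next_pk), att):
            return False
    return True

ok = verify_identity(identifier, change_history)
```

A relying party only has to remember the identifier; the service can rotate keys as often as it likes, and each new key is re-rooted in that original identifier by the signed chain.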

Another example is, if you have this attribute-based access control layer that's rooted in credentials. The traditional approach to credentials is X.509 certificates. The problem is, X.509 over so many years has gained so much complexity. The trust infrastructure around X.509 is typically Web PKI. That also has over the years gained so much complexity. But the use case that browsers are going after when they're doing Web PKI is different from the use case an enterprise is going after when it controls all the entities that are participating in a communication.

When you're trusting all the root authorities in Web PKI, you're trusting 2,000 different entities. You don't need to in a typical enterprise application. In Ockam's case, we significantly simplified how we do credentials. There are authorities, but there's no one true authority. We can have many authorities attesting to many different attributes in our system. The data structure that an authority signs is very simple. It's not a several-kilobyte-long X.509 cert; it's a tiny, 300-byte-long little credential, and it has attributes in it, so you can define attribute-based access control policies.
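A sketch of such a small attribute credential in Python. The HMAC stands in for the authority's signature (a real system would use asymmetric signing so verifiers don't hold the issuing key), and the field names are hypothetical.

```python
import hashlib, hmac, json, secrets

authority_key = secrets.token_bytes(32)   # the authority's signing key (stand-in)

def issue_credential(subject, attributes):
    # A tiny credential: subject, attributes, and the authority's MAC over both.
    body = json.dumps({"sub": subject, "attrs": attributes}, sort_keys=True).encode()
    sig = hmac.new(authority_key, body, hashlib.sha256).digest()
    return body + sig

def check(credential, policy):
    body, sig = credential[:-32], credential[-32:]
    if not hmac.compare_digest(sig, hmac.new(authority_key, body, hashlib.sha256).digest()):
        return False                       # not signed by the authority
    attrs = json.loads(body)["attrs"]
    return policy(attrs)                   # attribute-based access control

cred = issue_credential("identifier-abc123",
                        {"component": "production", "role": "kafka-producer"})
# Policy: only production kafka-producers may write to the topic.
allowed = check(cred, lambda a: a.get("component") == "production"
                                and a.get("role") == "kafka-producer")
```

Because the credential is just a signed bag of attributes, policies stay local and simple: they test attributes, not certificate chains.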

I can spend hours talking about the details of this, but the point is, the use case that TLS and Web PKI are focused on, which is all browsers in the world trusting all internet servers, is a different problem from all things in my enterprise application trusting each other. So, we were able to take the constraints of this other use case and design specifically for that, rather than taking on the complexity of Web PKI.

[0:24:13] Danny Allan: You raise an interesting point there, because it is true, SSL and X.509 certificates are based on the browser-based world, not the kind of service-to-service world you're describing. Do you think that the emergence of Wasm, or WebAssembly, will drive a different model than the traditional PKI infrastructure? Because what you're really doing at some level is trusting agents, and I use the word agents loosely there, out at the edge; you're trusting a decentralised, federated model. Do you think it will cause people to reimagine what they're doing with our current security model for browsers?

[0:24:49] Mrinal Wadhwa: Maybe. Potentially, it's already kind of happening. Passkeys are an example of people thinking about trust inside a browser differently. The realisation is, we really need cryptographic secrets, not a passphrase humans have to remember. By having passkeys, we can have these keys stored in a safe place, like the Secure Enclave inside my MacBook. So now, you've got this ability to have cryptographic secrets, but to make that flow work, a whole set of mutually trusted channels has to be established, right? If you dig into the infrastructure Apple and Google and others have to implement, you start to notice end-to-end encrypted channels in that path. So, passkeys are one example, and I can give a couple of others.

People are already reimagining the traditional trust model of the internet. Because if my server and my Wasm program are trusting each other based on a cryptographic key, knowing each other's public keys in some way, then the utility of the Web PKI infrastructure starts to diminish, and the trust in this key becomes elevated. Another example, a recent one that I like a lot: Apple published a paper called Private Cloud Compute a few months ago. Private Cloud Compute is their AI-in-the-cloud infrastructure.

When not enough compute is available in my iPhone, Apple will delegate some compute to a cloud infrastructure, but they will not terminate TLS along the path to the actual compute location, because they don't want my private phone calls to end up in a log inside their TLS infrastructure. So, they're like, "We'll have end-to-end encrypted channels all the way to some confidential compute environment. We'll decrypt the data just there, and we'll set up all sorts of controls around this confidential compute environment."

So, whether you take people using Ockam to do safe things, or these examples from Apple, passkeys, et cetera, I think people are already thinking about how to do trust in a different way compared to the traditional browser-based model that we were all focused on for a long time.

[0:27:23] Danny Allan: Well, it makes sense because we're becoming attuned to the more decentralised distributed model. I guess it does make sense that we are looking at new models. If I can pivot for a moment. In the old world, we used to do some of the security inspection at the perimeter. We would decrypt at the perimeter, and then we do a deep packet inspection for whatever DLP or malware coming in. Part of our security defence was having that ability to decrypt. Are we worse off or better off in an end-to-end encrypted environment? Do we lose the ability to do that type of inspection?

Very specific example. I worry about data loss prevention. Someone exfiltrating data, and so one of the things that you typically want to do is monitor traffic to make sure that data is not leaving the environment. Do we lose out – does the risk increase by having end-to-end encryption?

[0:28:19] Mrinal Wadhwa: No, not if we think about it properly. The traditional model you described is, all data flows through a central place. That central place gets to look at all the data, usually by breaking TLS; that's the typical approach. You terminate TLS, lie about the TLS connection to the sender, and then you re-establish TLS and lie to the other side. That's how it's usually implemented.

So, you effectively use a man-in-the-middle attack to do inspection. That's what all DLP solutions do. Secure by design, in my mind, is just an exercise in moving risk around. There is risk. The question is, how do you deal with it? Where do you locate it? How do you then manage it well, because you've located it well? The traditional answer locates all the risk at the central location, which can see everything and manipulate everything. Not just see. Manipulate, right?

[0:29:22] Danny Allan: Yes.

[0:29:22] Mrinal Wadhwa: All secure channel protocols, whether it's TLS or Ockam, provide data integrity guarantees in addition to data confidentiality guarantees. Usually, when I talk about encryption, people stay focused on the confidentiality aspect. But the really important guarantee is that the data, as it was sent from the source, is exactly as it is received by the destination. If you intercept TLS with a man in the middle, you're losing that guarantee of confidentiality and also integrity. Which means that interceptor is the honeypot, the thing that an attacker really, really wants to get to.

So, it's this very important choke point from a response standpoint, and highly vulnerable if someone gets there. Versus, what if we could move all the risk for a certain piece of data to the origin of that data? At this location, the data is originating; it is already unencrypted. That is the only place it's going to be unencrypted, and then at its destination. If we can do that, then we remove this central location. But now we've got the challenge of, how do we observe exfiltration problems?

Let's build observability tools that can read what type of data is leaving these edge locations, but have no ability to manipulate the integrity of the data as it leaves, right? In Ockam, what we do is send OpenTelemetry feeds out from all the ends of an end-to-end encrypted system, and you can consume those OpenTelemetry feeds wherever you're consuming your events. So, you're getting the reads to see if bad data is leaving. We can also give you the ability to stop data from leaving, with policy-based control. But what you don't get is the ability to be in the middle of the flow of the data and manipulate it. So, if someone compromises this observability infrastructure, they're not adding risk to the actual data. They could leak the type of data leaving. But if you design it well, they can't even leak the contents of the data leaving.
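A rough Python sketch of that edge observability idea; the class and destination names are made up. The outlet can count, report, and block flows by policy, but it has no code path that rewrites a payload, so end-to-end integrity is untouched.

```python
from collections import Counter

class EgressOutlet:
    """Sits at the end of an encrypted channel: it can observe and block flows,
    but never rewrites payloads, so end-to-end integrity is preserved."""
    def __init__(self, allowed_destinations):
        self.allowed = set(allowed_destinations)
        self.telemetry = Counter()           # exported as a telemetry feed

    def send(self, destination, payload, transmit):
        if destination not in self.allowed:
            self.telemetry[("blocked", destination)] += 1
            return False                     # policy-based control: stop egress
        self.telemetry[("sent_bytes", destination)] += len(payload)
        transmit(destination, payload)       # payload passes through untouched
        return True

wire = []
outlet = EgressOutlet(allowed_destinations={"analytics.internal"})
outlet.send("analytics.internal", b"\x8f\x11ciphertext",
            lambda d, p: wire.append((d, p)))
outlet.send("evil.example.com", b"\x8f\x11exfil-attempt",
            lambda d, p: wire.append((d, p)))
```

Compromising the outlet's telemetry reveals only metadata (destinations and volumes), never the ability to alter the data in flight.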

[0:31:50] Danny Allan: Yes, you're really ensuring the integrity. I wonder if the reason that centralised choke point was implemented is because, historically, security people have not had access to either the origin or the target. The easiest way for them to implement controls was in the middle, which takes us back to development. Because, actually, if you want to do source-to-target encryption, it means you have to threat model and design your controls into the source of the application, or into the target of that application, rather than trying to do it somewhere in the middle in a way that breaks the confidentiality and integrity, as you say.

It actually puts more of the burden on the developer. I mean, I'm thinking out loud here, but that's really, in some ways, what we're doing with DevOps. We're giving the developer more responsibility to implement those controls at the source.

[0:32:44] Mrinal Wadhwa: Yes, but it's very hard, right? It's hard to do cryptography well. It's easy to shoot yourself in the foot with it. It's hard to do key management well. It's easy to really, really mess that up. We also can't convince everyone to change code in their applications. This is hard. In Ockam, we approach this in two ways. I have to start with the premise: I do think it has to be as close as possible to an application. Ideally, inside the application.

So, the starting point was, we built a Rust library, so you can use our Rust library inside your application. Then, you're sending messages that are data structures in the context of your application domain over this encrypted channel, from within your application. Which means this application developer needs easy ways of saying: create a secure channel for me; create an identity for me; store the secrets of this identity in a KMS; rotate this identity's secret. All of these need to be available as single-line function calls, ideally. That's what the Ockam Rust library enables, instead of you having to think about how to do cryptography well.

I encounter this case all the time, so that's why I want to bring it up. Oftentimes, people will go, "End-to-end encryption. All I need is AES and a key, and I'll be fine." Well, it turns out that's where we started 50 years ago. You need a lot more to do encryption well. Think about forward secrecy; think about key compromise impersonation. There's a very long history around doing secure communication well. So, you need really well-designed primitives. Ockam gives you those primitives in the form of single-function calls.
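The forward-secrecy property Mrinal mentions comes from deriving each session's key from fresh ephemeral secrets, so compromising one session's key reveals nothing about past sessions. Here is a toy Python sketch of that idea, using a deliberately tiny Diffie-Hellman group; real systems use vetted primitives (X25519, the Noise framework, TLS 1.3), never parameters like these.

```python
import hashlib
import hmac
import secrets

# Demonstration parameters only: 2**127 - 1 is prime but far too small
# for real use; production systems use X25519 or standardized groups.
P = (1 << 127) - 1
G = 3

def ephemeral_keypair():
    """Fresh secret per session; discarding it after use is what
    gives forward secrecy."""
    x = secrets.randbelow(P - 2) + 2
    return x, pow(G, x, P)

def session_key(my_secret: int, their_public: int) -> bytes:
    """Derive a symmetric session key from the shared DH value (HKDF-style)."""
    shared = pow(their_public, my_secret, P).to_bytes(16, "big")
    return hmac.new(b"session-kdf", shared, hashlib.sha256).digest()
```

Both sides compute the same key from their own secret and the peer's public value, and every new session gets an unrelated key.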

We can't get everybody to change their applications, and it's a lot of work to go, "Oh, I have to plan a project that will change my application to do this." The other model, which is a typical model Ockam deploys in, is that we will run a sidecar next to both ends of this communication. This sidecar exposes a virtual point-to-point TCP connection.

In my earlier example of Snyk communicating with a service available inside Ockam's Kubernetes infrastructure, I would run an Ockam node as a sidecar next to my service in my pod. You would run an Ockam node as a sidecar next to your client inside your Kubernetes infrastructure, I'm presuming here. Then, over this, we will set up a point-to-point virtual TCP connection, which means that my server will appear on localhost inside your client application's pod, and you just call it. We then can automate all the work. So, we use eBPF to do kernel-based, high-performance encryption and network connections. We can automate the storage of keys into a KMS. All the complexity can then be removed from the end customer.

The user, all they have to do is change the listening address to localhost and the client address to localhost, and everything just works from there. That's the way we've taken to simplify all of this complexity and make it easy for developers to adopt.
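The sidecar model described here, where the remote service simply appears on localhost, is at its core a local TCP forwarder. Below is a minimal Python sketch of that localhost-forwarding shape; a real sidecar (as Mrinal describes) would encrypt traffic between the two sidecars and manage identities, while this sketch only relays bytes.

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def start_sidecar(upstream) -> int:
    """Listen on an ephemeral localhost port and forward each connection
    to the upstream address. Returns the local port clients should dial."""
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))
    listener.listen()
    port = listener.getsockname()[1]

    def serve():
        while True:
            client, _ = listener.accept()
            up = socket.create_connection(upstream)
            # In a real sidecar, these two pumps would encrypt/decrypt;
            # here they just relay in both directions.
            threading.Thread(target=pipe, args=(client, up), daemon=True).start()
            threading.Thread(target=pipe, args=(up, client), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return port
```

From the application's point of view, nothing changes except the address it dials: it connects to `127.0.0.1` and the sidecar handles the rest.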

[0:36:15] Danny Allan: I'm a big believer that the way you secure applications is not through refactoring and super smart people. It's actually the opposite. It's being able to add something super simple and making it easy for developers. To use another security example, I always say the reason we got rid of buffer overflows is that we introduced Java, which managed the memory for people. So, making these things super easy is actually the way to secure them. If you could leave the development community with one tip on end-to-end encryption, one thing that they should do, and it's not "use Ockam", though hopefully they all do, what is the one thing they should be considering to help secure the communication of their applications?

[0:36:57] Mrinal Wadhwa: Think about the risk to the communication of your data. In the last few years, there's been a lot of conversation about risk that comes from dependencies, and I think that's been a fantastic direction, because we're talking about: hey, if there's a new piece of code inserted into my code as a dependency, that means the 5, 10, 15 developers behind that thing could do something malicious, either intentionally or because they've been compromised, and that could affect my risk posture. So, we've been thinking about this.

But the thing that we think less about, or there's less conversation about, is that if I make a third-party API call, risk is really just a proxy for how many people are involved. How many people are behind that API call that could potentially be attacked, be vulnerable in some way, or be compromised in some way? If my data is being exposed over there, then what are the odds of that data being compromised, given that, say, it's a 10,000-person company with 500 of them having direct or indirect access to that piece of information? Thinking about risk and dependencies in this way is incredibly insightful.

I used to do a presentation where I'd be like – if you draw this out and you take people as a proxy for risk, the number of people in your dependency chain at build time is an order of magnitude, if not two, lower than the number of people in your dependency chain at runtime. Think about the risk at runtime to the flow of the data. Then, think about removing that risk in various ways. End-to-end encryption is one way to do it. There are other ways. Encrypt your data when you're storing it, et cetera, et cetera. There are all sorts of ways to approach it. Don't send your data to certain destinations; that's also a way to remove that risk. I guess my tip here is: think about the risk at runtime to data flows.

[0:39:09] Danny Allan: I like the implicit recommendation there, which is to do threat modelling. Think of it outside of just the technology itself; really do a threat model of the risk that is faced by the application. Well, that's fantastic. Thank you for joining us today, Mrinal, that was awesome. I love to geek out on these networking technology things. I'm never sure if the broader development community is interested in networking, but I have been and always will be. Simply because I think most security issues, at the end of the day, do come down to networking, networking security, and performance. Thank you for joining us today on The Secure Developer, and we'll catch you next time.

[0:39:47] Mrinal Wadhwa: Thank you. Thanks a lot, Danny.

[END OF INTERVIEW]

[0:39:51] Guy Podjarny: Thanks for tuning in to The Secure Developer, brought to you by Snyk. We hope this episode gave you new insights and strategies to help you champion security in your organisation. If you like these conversations, please leave us a review on iTunes, Spotify, or wherever you get your podcasts, and share the episode with fellow security leaders who might benefit from our discussions. We'd love to hear your recommendations for future guests, topics, or any feedback you might have to help us get better. Please contact us by connecting with us on LinkedIn under our Snyk account or by emailing us at thesecuredev@snyk.io. That's it for now. I hope you join us for the next one.
