Episode 111

Season 7, Episode 111

Alignment, Agility, And Security With Patrick O'Doherty

Guests:
Patrick O'Doherty

Security as a field is constantly evolving. As a result, it requires a high degree of awareness, including staying up to date with the latest developments in potential new threats. It was the challenge of working in security that drew Patrick O'Doherty to the field in the first place. Today on the show, we speak with Patrick about his time as a Senior Security Engineer at Intercom, his current role at Oso as an Engineer, and what he has discovered on his security journey. Patrick shares what he learned while being part of the security solutions team at Intercom and how they built common infrastructure and coding patterns. We also discuss the role of empathy in security, why it’s essential for your goals to be aligned with the people you’re trying to help, and why we should all work to be more aware of third-party threat exposures. Tune in today!


[INTRODUCTION]

[00:00:27] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

On today’s episode, Guy Podjarny, Founder of Snyk, talks to Patrick O’Doherty, engineer at Oso. Patrick is a senior security engineer with a decade of experience developing, securing and orchestrating software at scale. He has a background in security engineering, infrastructure security, full stack product engineering, cloud architecture and early-stage SaaS business growth and development. Patrick enjoys employing his product engineering background in combination with security expertise to solve real-world problems with high quality-of-life software. We hope you enjoy their conversation, and don't forget to leave us a review on iTunes if you enjoyed today’s episode.

[INTERVIEW]

[00:01:59] Guy Podjarny Hello, everyone. Welcome back to The Secure Developer. Today, we’re going to talk a little bit about security solutions, maybe how that differs from or is similar to security engineering, and how that maybe differs from engineering in the first place. In general, about all sorts of journeys within the security world. To dig into all of those, we have with us Patrick O’Doherty, who is an engineer at Oso today. We’ll hear about that a little bit more in a sec. He was previously a senior security engineer at Intercom, and we’ll hear some stories from that part as well. Patrick, thanks for coming on to the show.

[00:02:31] Patrick O'Doherty Thanks for having me. Really excited.

[00:02:33] Guy Podjarny Patrick, before we dig in a little bit, I’ll ask you to – can you just tell us a little bit about what is it that you do today, and maybe a little bit about the journey that led there. Notably, we’re going to dig into the Intercom part, but maybe just in passing.

[00:02:46] Patrick O'Doherty Yeah. I’m a software engineer today at Oso, but in a prior life, I was a senior security engineer at Intercom. I actually started at Intercom about a decade ago in product engineering. I arrived at security by way of interest, and I guess, just being excited about the problems that there were to solve internally at Intercom and taking those opportunities when they were afforded to me. But yeah, about a decade ago, I started in just what I call product engineering. It’s building web-based SaaS products for people. I joined Intercom – actually, my first role was working on what was an early growth team. Basically, focusing on the problem of helping people who are potential prospects understand all that Intercom has to offer, going from being somebody on the website who has a curiosity all the way through the signup, onboarding flows, educational flows, all the way through to being a happy paying customer.

Over the years, I took on a couple of other different roles, most notably after that working on Intercom’s billing infrastructure. Through that, I became a little bit more exposed to some of the security challenges of operating infrastructure for an organization that was growing. Then from there, I had a really well-timed opportunity to help the security team in Intercom as it was starting, when it was spun out from an infrastructure-focused team. I spent a couple of months in Dublin helping some new security hires onboard, and it was a great trade. I was a local subject matter expert on Intercom systems, but I wasn’t very well versed in security. My colleagues were very well versed in security but needed some help navigating Intercom and learning where all the skeletons were.

It was at that point that I realized, “Oh! This is actually what I want to be doing as a career.” Using the product engineering chops that I developed, but on a more focused set of problems, helping build what we called secure building blocks within Intercom that solved common problems for people, to allow all the other developers to build with confidence, without fear that they would introduce a significant vulnerability or misconfigure something, anything like that. So, yeah, it was a natural progression of falling further and further down the rabbit hole of computer security and being offered opportunities to be responsible for it, which I was all too happy to take.

[00:05:02] Guy Podjarny Yeah. No, it sounds like a really interesting path there, from growth to the building side to the security side; somewhat unusual transitions, maybe. I guess if you think about the skills that you had in the different stages there, even within the Intercom journey, do you think those are transferable? I mean, the things or knowledge you’ve built up, knowing how to do growth well, did they apply when you came along to work on security?

[00:05:31] Patrick O'Doherty I think they were definitely helpful with respect to considering how you were going to ship tools to an internal audience, how to onboard people, considerations of their daily workflow and how you’re going to be augmenting that. I really think that security engineering is no different to product engineering, insofar as you really have to understand the job that people are attempting to accomplish, their priorities, the things that are important to them. And then mix in the security imperatives. With most people, when they ask you to, for example, turn off a CSP component, your security mind will immediately jump to the red flags. But beneath the surface, there’s actually a feature or some sort of product outcome that they’re shooting for. If you can dig, and scratch, and build a better understanding of what your counterpart is trying to do on the other side of the table, and use your security knowledge to make that a little bit safer, you can arrive at a result that does the job, but also makes it safer for everybody.

I think that’s really the fruitful center of security and product engineering: when you can bring your knowledge of what your developer counterparts are trying to do, what’s going to be important to them, how they’re hoping to solve a problem, what the other considerations are going to be, and mix that with all of the security imperatives that need to be true, and arrive at a fruitful solution.

[00:06:50] Guy Podjarny Yeah. I love that. It’s like the true platform mindset when you think about security: security engineering as building a product for people. And you’re also kind of throwing in the empathy angle of it, of how you’re able to relate to people that are working today on something you might have been working on before. This is your journey and kind of your path. We’re going to get a little bit to the last phase here of moving to software development within Oso. Just for full disclosure for the listeners, I’m an investor in Oso, which is actually how I managed to get to meet Patrick. Most of our conversation here is about his work at Intercom. But just for full disclosure, it’s worth noting that.

Through your journey here, we talked about the move from growth to security and how those are similar. Those previous bits are where the experience helps, but how would you say security engineering is different? What do you think were the most surprising bits around how security engineering differs from product engineering?

[00:07:52] Patrick O'Doherty I think the most challenging aspect of security engineering is possibly the fact that the field is constantly evolving. With respect to product engineering, there can be an idea that maybe something is complete or done. Whereas in security, if there’s a new threat that is relevant to your domain, it’s on you to adapt your existing tools and make sure that they are resilient against that threat. It was actually one of the things that made it one of the more exciting career turns for me. You’re constantly having to integrate new views of the world, new things that are happening, into your threat model. I think that can be a challenge sometimes if you are not used to having to constantly develop different sets of requirements. In product, you’re reading from a product vision of what you want to build, and it’s more cast in stone and less fluid.

I think it can be challenging sometimes to sort of have the proper integration of what really needs to be true in the product world, but also being mindful of what’s safe, and possible and what the limits of technology that you’re using are. I think, yeah, being accurate about that can sometimes be a little bit difficult.

[00:09:03] Guy Podjarny Does that align well with the general desire for agility? Because, I mean, I think the vision might be a bit less fluid, hopefully, if you know what you’re building towards in product engineering. But oftentimes today, the practice of building the software, actually increasingly wants to be agile, wants to be able to zig and zag. I guess, do you find that to be similar to the need when new threats pop up or do you feel it’s apples and oranges?

[00:09:29] Patrick O'Doherty I certainly think that security engineering requires us to be more defensive in our thinking when we’re considering a solution. When we’re building something in the product world, we’ll often think of only the positive path, the intended virtuous path that a user might take through a particular piece of the service. I think security engineering calls on us to really consider each facet of that in a more exacting manner, like what happens if users abuse components. Are there ways for a concerted and resourceful actor to use your product in a malicious way? I think it can be difficult sometimes for product engineers, without previous experience of having something of theirs attacked, to envision the whole host of the full Internet coming to bear on their solution.

In security engineering, it’s our job to look at what product engineers are doing, draw a pretty hard line around the safe elements, and make sure that those things can be done safely. If you want to render content from an arbitrary source, there should be ways to do that safely without introducing a footgun that’s going to imperil your product later. I think that really describes the biggest difference: with product engineering, you’re very focused on the virtuous path, the positive feedback loop that you want people to have with your service, the feature that you’re building. With security engineering, you’re basically working backwards, asking 100 questions as to what happens when users don’t complete this flow, and they enter somebody else’s email, or they attempt to use a resource that they don’t own. I think that having a view as to what are the most important boundaries to establish for your developers and your company is really the most important element.

It’s having a knowledge of the strategic problems of importance to your organization, and it will be different for every organization. It will be based on the inputs and outputs. Are you taking arbitrary content from people? That’s probably dangerous. You should do something about that. You should put sandboxes around that. You should make sure that it’s safe for your product engineers to handle. It’s our job as security engineers to make sure that we’re giving people safe power tools.

[00:11:46] Guy Podjarny Yeah. I love that. You’ve kind of just embodied the complexity of being a security engineer, which is, on one hand, you need to be agile and adapt to threats as they pop up. You can’t get too enamored with your roadmap and your path, because new things will pop up and, like it or not, you need to adapt. At the same time, you don’t have the benefit of, “Hey! I’ll just do this piece over here and I’ll move on.” Oftentimes you need, probably never perfection, but more thoroughness in your activity than you might be allowed on the product side.

Patrick, through this journey, I know from our previous chat that the team you landed on in Intercom is called security solutions, which I found to be an interesting name for a security team. What was the charter of that team, and what type of work did you do or own?

[00:12:33] Patrick O'Doherty Yeah. So security solutions was an internal team of engineers who built common infrastructure and coding patterns for use across Intercom. Everything from designing and implementing the authorization layer that sits atop customer interactions with their conversation data, through to one recent project that I worked on before I left, introducing a safe rendering library. Intercom is a communications platform. It takes in large amounts of rich content from a multitude of sources, and it needs to be rendered securely in a workflow application where people are responding to customer inquiries. We would basically look at what are the most important assets that Intercom has, what are the most dangerous tasks that product engineers and other parts of the business are performing on a daily basis, and try to build solid tooling for that — really, to introduce common components and pieces of infrastructure, libraries and other resources that people could use to go about their day safely, without having to be mindful of, or have full expert knowledge of, the relevant security risks.

[00:13:46] Guy Podjarny I think that’s a super cool team. I guess it oftentimes falls under one of the mantles of product security. How did the security solutions team interact with the rest of the engineering organization, or with other people building maybe other shared components, like platform or infrastructure?

[00:14:03] Patrick O'Doherty I often described security solutions as sort of an internal consulting team for all of the rest of Intercom. We had a program of security champions, people who had volunteered from different product teams or elsewhere, who had a mind for security, and who would often raise novel issues with us, or things that their teams were experiencing or having issues with, and bring them to us. Oftentimes, it was either through internal consulting, or we had a number of security programs, metrics overall, that we would be responsible for throughout the organization. Interactions were either sourced from problems that people were having — and thankfully, as a team of builders, we had a strong relationship with the other developers. Within Intercom, we weren’t seen as a blocker. We were seen as a team that would be able to work with you to develop a solution and build something better, which I think is extremely important.

If you’re viewed as a naysayer or a drag on product development, you will often not have a full view of all of the problems that are relevant to your organization. I think it’s really important to maintain a healthy set of relationships: you can’t defend what you don’t know about, and you need people to tell you what to defend. It’s one of those things at the bottom of the pyramid of requirements. Otherwise, we were responsible for the overall measurement and ongoing monitoring of our security health. If there were issues that we could spot in the future from Bugcrowd reports, or other leading indicators, we would work with teams and reach out to them and say, “Hey! We think that this might be an area of difficulty. What do you think? We have a solution in mind. Here’s a prototype of it. We think that this would be relevant for X, Y and Z. What do you say?” So yeah, always with a friendly hand, I guess, is the better answer of how interactions would start: either from a problem sourced from felt pain by one of our security champions, or something we could see in the numbers, but always with a proposed solution.

[00:15:58] Guy Podjarny Yeah, no, I love that approach. Geeking out a little bit about work responsibilities: was the security solutions team itself also in charge of handling the incoming bug bounties and such, or was that another security team, and you were just looking for patterns and building things out?

[00:16:17] Patrick O'Doherty So we had a sibling team, threat detection, who were, I would say, more in charge of the operational, day-to-day aspects of running our security programs. An example piece of infrastructure that security solutions and threat detection collaborated on was the scanning of uploads and attachments from all sorts of media sources that would arrive into Intercom. We built this as a component, but managing the health of Intercom in response to malicious uploads, or email reputation, or other concerns relating to how malicious actors might take advantage of the platform was more of an operational concern run by threat detection. Big issues would have been collaborated on by both teams: if we saw an uptick in the metrics on one side, it would definitely be a shared problem and a collaboration for both sides. We wanted to build solutions that are easy to run, that reduce toil, that are respectful of people’s time and the job that they’re trying to do. So we were very, very incentivized to make the lives of our threat detection colleagues as easy as possible with the solutions that we were building.

[00:17:24] Guy Podjarny Yeah, yeah, absolutely. I love the helpful approach on both fronts, not just to the engineers, but also to the ones dealing with the risk itself. Let’s go down a level and actually get to some concrete examples. What are some examples of some projects that you were working on, solutions you were building, I guess, as part of the security solutions mandate?

[00:17:42] Patrick O'Doherty The two projects that I worked on most recently before I departed Intercom were a secure rendering solution, something that I alluded to earlier, and the second, relating to platform authorization: overhauling the authorization layer for people accessing their conversations, workflows and other aspects of their data that were available via API to product developers.

The first, the secure rendering solution, came out of a year-end evaluation of our Bugcrowd program, where we had noticed a slight uptick in the number of CSP failures. One of the things I was always very proud of at Intercom was that we maintained a very tight and exacting content security policy for the primary web application. That’s where Intercom users would log in every day to visit the inbox, to respond to customer inquiries, to create their email campaigns, whatever it may be, to view customer profiles. This application contains a huge amount of very sensitive data. It’s your business conversations, it’s your customer profiles. We had for a very long time maintained that the CSP should be as tight as possible, and that we treat any CSP escape, even if it was blocked by the browser, as a very high priority.
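The kind of tight, exacting policy Patrick describes could be sketched roughly like this. The directive values and helper names here are illustrative assumptions, not Intercom's actual policy; the idea is simply that starting from `default-src 'none'` and allowlisting only what the app needs makes any escape report meaningful:

```python
# Sketch of a strict Content-Security-Policy for a sensitive web app.
# Directive values are assumptions for illustration, not Intercom's policy.
CSP_DIRECTIVES = {
    "default-src": "'none'",      # deny everything by default
    "script-src": "'self'",       # first-party scripts only
    "style-src": "'self'",
    "img-src": "'self' https:",
    "connect-src": "'self'",
    "frame-ancestors": "'none'",  # no embedding in other sites
    "report-uri": "/csp-reports", # violations become signal, not noise
}

def csp_header(directives: dict) -> str:
    """Serialize directives into a Content-Security-Policy header value."""
    return "; ".join(f"{name} {value}" for name, value in directives.items())
```

With a policy this tight, every violation report (via `report-uri` or a bug bounty) points at either a genuine escape attempt or a new rendering path that needs review, which is exactly the signal the team was tracking.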

In the latter half of 2020, we noticed that there had been an uptick in the number of CSP escapes being reported via Bugcrowd. Having looked at this, we realized that there were a number of different rendering jobs being performed across the Intercom app with mixed views of content, and that we could do a better job of limiting those rendering tasks to only the secure versions. At the center of this is what Intercom calls blocks content. All of the rich media that makes its way into Intercom, emails, conversations, is serialized in a format called blocks, which contains a very, very small subset of HTML for basic text markup: bolding, italics. It also allows for the rich rendering or embedding of certain supported media, things like YouTube videos or other content that you would want to be able to put into a conversation and have rendered securely.

We realized that this blocks content, its subset of HTML, was the set that we wanted to allow across all of Intercom. So we decided we would turn it inside out: we would take what was a small HTML sanitization layer embedded in blocks and make it the universal rendering output across all the applications. In looking at the problem, we realized there were only a few types of rendering jobs happening in Intercom. There was rendering a full set of blocks, with all of the rich content that entailed, and then, what was really the crux of the problem, rendering this very limited set of HTML elsewhere. Things like adding a note to a user profile with some bolding or italics had some small gaps that would allow content you didn’t want to get in there, and then eventually be knocked back by the CSP. We just didn’t want to deal with that anymore.

We had a look and realized that we could do away with all of the insecure rendering practices that people had been using to do these things, and instead replace them with this HTML sanitization layer, which we’d already implemented for our Messenger. Having reduced it to those few things, we basically outlawed all the other uses of dangerous HTML paths and successfully managed to migrate away from them.
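The allowlist idea behind that sanitization layer can be sketched with the standard library alone. The tag set and class names below are illustrative, not Intercom's implementation; the point is the inversion Patrick describes: nothing survives unless it is explicitly on the list, attributes are dropped wholesale, and everything else is escaped to inert text:

```python
from html import escape
from html.parser import HTMLParser

# Hypothetical allowlist mirroring a tiny "blocks-like" HTML subset.
ALLOWED_TAGS = {"b", "i", "em", "strong", "p", "br"}
DROP_CONTENT_TAGS = {"script", "style"}  # drop these tags AND their contents

class AllowlistSanitizer(HTMLParser):
    """Re-emit only allowlisted tags (no attributes); escape all text."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip_depth = 0  # >0 while inside a script/style element

    def handle_starttag(self, tag, attrs):
        if tag in DROP_CONTENT_TAGS:
            self.skip_depth += 1
        elif tag in ALLOWED_TAGS and not self.skip_depth:
            self.out.append(f"<{tag}>")  # attributes discarded entirely

    def handle_endtag(self, tag):
        if tag in DROP_CONTENT_TAGS:
            self.skip_depth = max(0, self.skip_depth - 1)
        elif tag in ALLOWED_TAGS and not self.skip_depth:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip_depth:
            self.out.append(escape(data))

def sanitize(html_in: str) -> str:
    parser = AllowlistSanitizer()
    parser.feed(html_in)
    parser.close()
    return "".join(parser.out)
```

Making this one sanitizer the universal rendering output is what "turning it inside out" buys: any new rendering path built on top of it cannot emit markup outside the blessed subset.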

[00:21:21] Guy Podjarny Great window into the story and the different components within it. I’m interested in the developer education side of it. It sounds like you did all these analyses. I imagine you worked with the teams ahead of time to understand the needs, understand the components, and then, I guess, campaigned later on for them to change. Was that hard? Was there a risk in the process that you would do all this work, build all this stuff, and the teams wouldn’t actually adopt it? Or did you get buy-in ahead of time?

[00:21:52] Patrick O'Doherty I never appreciated that there would be a risk in getting it shipped, I have to say. But we had a lot of buy-in from people who wanted to see this problem solved. Each of the teams would have been previously responsible for following up on the Bugcrowd reports or the issues that came to them. There was an alignment of incentives here: they just wanted to be done with this as a problem. And in fact, some of these issues had plagued those teams; they didn’t necessarily have 100% confidence that their solutions mitigated all the dangerous possibilities of rendering.

I think, actually, the level of issues that we were experiencing was such that people were very, very happy to see a solution. We also went about it by automating and giving exacting input to each team, using Intercom’s code ownership pattern, where every substantial piece of code is attributed back to the team that owns it. We were able to programmatically use this data to generate GitHub issues that would tell a team exactly what they needed to do and where, with an example PR of how another team had successfully migrated. By demonstrating that there wasn’t a huge effort, that it was really a couple of simple substitutions and the problem would be solved, one and done, I think that was enough for people to see that it was worthwhile doing. Once we had a couple of teams take it on very quickly, everybody was able to watch the progress of this large GitHub milestone close up to 100%. It was really a team-led affair for sure, and there was a lot of mutual incentive aligning, I guess, to make it go so well.

[00:23:22] Guy Podjarny Interesting. I love that it’s still a service mentality, and it’s still a campaign. So you’re going into it with a certain amount of buy-in already from people, which makes you think this is a reasonable risk to take, or a worthy enough investment of a not insubstantial amount of engineering time, probably, to build these new components. And then you go off and promote. But I guess, back to that growth experience from earlier on: think about onboarding, think about ease of use and ease of implementation, because you still need to get people to buy into the journey, appreciating the pain that you’re addressing, for instance, that lack of confidence. So a lot of narratives that would apply to building any dev tool or the like, because that’s really what you’re doing.

[00:24:03] Patrick O'Doherty Yeah, absolutely. We put a fair amount of effort into documenting the background of the problem, the pain that was being felt, highlighting to people the significance of what happens when this goes wrong, and really why it’s important to have it be correct. I’m often mindful that solving a problem for a current team is kind of the best case, because they’re probably familiar with the pain point, so they have this contrasting view of before and after. Whereas I think it’s really hard solving a problem for a team of folks who have yet to be hired, the engineers who are going to arrive 12 to 18 months down the line. How do you ensure that they, through no fault of their own, don’t accidentally veer out of the blessed set of solutions? I think a heavy reliance on automation and documentation there was really what made it work for us.

We were able to say, “This is the chosen solution, and we will, through aggressive code linting and early feedback loops in your test suites, make it known to you when you’re doing something that might be dangerous, and we’ll suggest this solution that we have on hand as an alternative.”
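That early feedback loop can be as simple as a custom lint rule that flags the outlawed rendering paths and names the sanctioned replacement. Everything here is a hypothetical sketch: the `raw_html`/`html_safe` patterns stand in for whatever dangerous calls a team decides to ban, and `sanitize()` for their blessed helper:

```python
import re

# Hypothetical lint rule: flag banned raw-rendering calls and point the
# developer at the sanctioned alternative. Pattern names are illustrative.
DANGEROUS = re.compile(r"\braw_html\(|\.html_safe\b")

def lint(source: str, path: str = "<memory>") -> list[str]:
    """Return one warning per line that uses a banned rendering call."""
    warnings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if DANGEROUS.search(line):
            warnings.append(
                f"{path}:{lineno}: unsafe rendering call; "
                "use the sanctioned sanitize() helper instead"
            )
    return warnings
```

Wired into CI or the test suite, a check like this is what keeps engineers hired 18 months later on the blessed path without anyone having to remember the history.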

[00:25:04] Guy Podjarny Yeah, perfect.

[00:25:05] Patrick O'Doherty Yeah. Making it easy for people to become aware of when they were walking off the blessed path was also a big part of making it successful.

[00:25:13] Guy Podjarny Awesome. Maybe we squeeze one more in here. Tell us about the other project, the one in the different domain, the authorization work you mentioned. What was that one about?

[00:25:23] Patrick O'Doherty So, yeah, authorization, again, was something where we had started to see a small background radiation of Bugcrowd numbers creeping up. The initial authorization solution was actually one that I had built with some colleagues many years ago. It’s a success when you outgrow something that you built three years ago: as you add scale, the needs of the organization and the needs of your products become more complex. We had a relatively rudimentary access control layer, implemented at the controller level across Intercom, that basically provided role-based access control for all of the different data types and other things that you would want to protect.

This solution had been built in a time when it was viewed as sort of an opt-in mechanism. If you were a member of an Intercom workspace, you generally had access to a lot of things. In fact, by the time this work came around, that presumption had been inverted. We wanted to introduce a world where being a mere member of a workspace wasn’t guarantee enough to get further access. We wanted to be able to describe more granular access layers and have confidence that this authorization would be uniformly enforced. In addition, we had a product team reach out to us and say, “Hey! We want to start introducing attribute-based access control,” an entirely new form of how Intercom viewed permissions, which operated entirely orthogonally to the previous solution.

Where previously we had a controller-level concern, we now wanted to be able to ask authorization questions about any given object in a request and determine a user’s access to it, independent of the actual HTTP request and the controller that they were talking to.

This just absolutely seemed ripe for a security solutions implementation. Every piece of the product engineers’ work is guarded by some form of authorization. It was part of everyday practice and everyday work for them, and if we could make it better, there would be multiplicative benefits down the line. We had the benefit also of working in a monolithic codebase, where we could rely very heavily on convention, creating one chosen way of doing something and using a number of codebase lint rules or custom coding specs to make sure it was enforced. We ended up settling on a hybrid solution, I guess: we had actually become aware of Oso, and we ended up using their product to implement Intercom’s authorization layer.

[00:27:54] Guy Podjarny Bottom-up adoption in action there.

[00:27:57] Patrick O'Doherty I was very, very happy for their existence when I was building this. So we wrapped Oso’s policy engine as a concern for Intercom developers and gave teams a simple API that was required for use across all controllers, but which was capable of both the traditional role-based and the new attribute-based access control functionality the developers were asking for. The big, I guess, trick of this implementation was that we, as security solutions, didn’t want to be in the middle of product engineering work. It doesn’t matter to us what role gives access to what resource. What we care about is that the actual authorization policy, the logic being enforced, was uniform, that it was sound, and that there wasn’t any possibility for product engineers to accidentally create an unauthorized branch of the policy.

We ended up using Oso in combination with GitHub’s code owners practices to basically separate logic and data. Security solutions became responsible for the logic of the policy: the definition of users, the definition of workspaces, their relationship to each other, and how that is hydrated in the course of an HTTP request or any internal authorization query. But we left the definition of roles and what they gave access to entirely up to product engineers, as it was before. In that way, every new piece of code had to be added to this policy data layer, on which we never enforced code owners restrictions. But if people wanted to make an authorization logic change, that was something where they would, through code owners, get code review from security solutions. That part of the codebase, which changed at a much, much lower rate, was kept safe and sound, while the policy data layer was left available for product engineers to change as they saw fit, which was a very common occurrence, like building new controllers and new concerns. We wanted that to be something they could just add to on a daily basis without any risk of violating invariants.
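The logic/data split Patrick describes can be sketched in miniature. This is not Oso's API or Intercom's code; it is a hypothetical illustration where the `authorize()` logic (workspace check plus grant lookup) would sit behind code owners, while the `ROLE_GRANTS` registry is plain data that product teams append to freely:

```python
from dataclasses import dataclass, field

# Data layer: a plain registry product teams extend without security review.
# Role and resource names are illustrative.
ROLE_GRANTS = {
    "admin":   {("conversation", "read"), ("conversation", "delete")},
    "support": {("conversation", "read")},
}

@dataclass
class User:
    workspace_id: str
    roles: set = field(default_factory=set)

@dataclass
class Resource:
    workspace_id: str
    kind: str

def authorize(user: User, action: str, resource: Resource) -> bool:
    """Logic layer, owned centrally: membership is necessary but no
    longer sufficient; an explicit role grant is also required."""
    if user.workspace_id != resource.workspace_id:
        return False
    return any(
        (resource.kind, action) in ROLE_GRANTS.get(role, set())
        for role in user.roles
    )
```

Because every access decision funnels through one function, there is a single place to audit, and no way for a product change to fork the policy logic by accident.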

[00:30:13] Guy Podjarny Yeah. In this case, it’s very much a platform engineering type of work. You embedded this, unlike the previous example, where you had to go champion and get the different dev teams to opt-in, this was a central decision that team got whether they liked it or not, right. It was a bit of a broad rollout. Is that correct?

[00:30:30] Patrick O'Doherty Yeah. We had the benefit of, I guess, a solution already being in place. We used GitHub's Scientist library to demonstrate to people that the Oso-based solution we were running was 100% end-to-end equivalent in terms of the results it would produce, but that it would provide all these ergonomic benefits: they wouldn't have to manage the policy, they wouldn't have to manage any of the operational concerns. There would be a simple registry of roles, resources and other components that would be their day-to-day domain, safe and sound, while all of the other bits of machinery operated as expected.
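GitHub's Scientist library formalizes this "run both paths, compare the results, keep trusting the old one" pattern. A stripped-down sketch of the idea (not Scientist's actual API, which is a Ruby library with ports in other languages) might look like:

```python
import logging

logger = logging.getLogger("authz_experiment")

def experiment(control, candidate, *args):
    """Run the legacy path (control) and the new path (candidate) on the
    same inputs. The control result is always returned; mismatches are
    only logged, so the experiment can't change user-visible behavior."""
    control_result = control(*args)
    try:
        candidate_result = candidate(*args)
        if candidate_result != control_result:
            logger.warning(
                "authz mismatch for %r: control=%r candidate=%r",
                args, control_result, candidate_result,
            )
    except Exception:
        # A crash in the candidate must never break the request either.
        logger.exception("candidate raised for %r", args)
    return control_result
```

In an authorization rollout like the one described, a call site would look something like `allowed = experiment(legacy_is_allowed, oso_is_allowed, user, action, resource)`; once production traffic shows zero mismatches over time, the authoritative side can be flipped.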

So being able to show the equivalency of these results was a huge advantage. It allowed us to roll this out without any sort of doubt in the back of our mind that we were going to –

[00:31:19] Guy Podjarny Successfully get adopted, yeah.

[00:31:21] Patrick O'Doherty Yeah. And also that we were – it's authorization, it's actually the essence of your application, right? Who can do what? It's so core to any product that you're going to develop. We didn't want to be, I guess, cavalier about making large changes to such a crucial bit of code, code that was running atop every single request, and where mistakes would effectively mean a privacy incident or a data leak. The ability to safely experiment, to validate against our production traffic, to build confidence in the solution over time, and then flip the authoritative side of the equation was a huge win. It allowed us and the product engineers both to have complete confidence that everything was going to work as we expected.

[00:32:07] Guy Podjarny Yeah. It sounds awesome. It's interesting to hear about the two projects in contrast, each of them tackling something different. This one a bit more preemptive and a bit more platform-minded, versus the previous example, which was more about dealing with ongoing leakage or other issues coming along from a component.

Thanks, Patrick. This was super interesting, and it feels like a day in the life of a security engineer: from the empathy aspects, the journey into the field and how it differs from engineering, through to these last two concrete examples of solutions and how they interacted with engineering teams. Thanks for that walkthrough. Before I let you go, I want to ask you my one regular open-ended question towards the end. Here it is. If you had an unlimited amount of resources or funding to tackle a problem related to your world, what problem would you take on? And, if you know, which approach would you take to tackle it?

[00:33:04] Patrick O'Doherty When I think of all of the most challenging security problems I've had to deal with, they have related to the use of third-party vendors and integrations into your application or your service. I would challenge people to have a threat model for every integration, every piece of infrastructure, every component they add that is operated by a third party, and to consider risk mitigations for it. If you're operating a piece of third-party infrastructure on prem or in your cloud environment, you should put sufficient guardrails around it so that, if there is ever a need, you can contain it. It means being very exacting about the permissions and the authorization grants you give to third parties, auditing them on a timely basis, and really knowing the ins and outs, the inputs and outputs, of your organization.
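As a toy illustration of that kind of audit (vendor names, scopes, and data here are entirely hypothetical), you might keep an explicit inventory of the grants each third party is supposed to hold and diff it against what your IAM or OAuth audit export actually shows:

```python
# What we *intended* to grant each third-party vendor.
INTENDED = {
    "analytics-vendor": {"events:write"},
    "support-widget": {"conversations:read"},
}

# What the audit export says they actually hold.
OBSERVED = {
    "analytics-vendor": {"events:write", "users:read"},  # drifted!
    "support-widget": {"conversations:read"},
}

def excess_grants(intended, observed):
    """Return, per vendor, any scopes held beyond what was intended."""
    return {
        vendor: scopes - intended.get(vendor, set())
        for vendor, scopes in observed.items()
        if scopes - intended.get(vendor, set())
    }
```

Running this on a timely basis, as Patrick suggests, surfaces permission drift before it becomes risk exposure.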

I think, going back to something I said earlier, you can't secure what you don't know. Most organizations would be surprised at the level of access they have granted to third parties and the risk exposure that comes with it. We're all in this together, so it's appropriate to consider every vendor, every link in the chain, both for yourself and for your place in other people's supply chains.

[00:34:24] Guy Podjarny Yeah, absolutely. It's not an easy one, but definitely a worthy one to take on. Patrick, thanks again for all the insights and for coming on to the show.

[00:34:33] Patrick O'Doherty It’s been great to be here. Thank you very much for having me.

[00:34:35] Guy Podjarny And thanks, everyone, for tuning in and I hope you join us for the next one.

[END OF INTERVIEW]

[00:34:43] ANNOUNCER: Thanks for listening to The Secure Developer. That’s all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you’d like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don’t forget to leave us a review on iTunes if you enjoyed today’s episode.

Bye for now.

[END]
