
Season 5, Episode 64

Open Source Security And Technical Management With Ryan Ware

Guests:
Ryan Ware

On today’s episode, Guy Podjarny talks to Ryan Ware, a Security Architect and director of the Intel Product Assurance and Security tools team. He has been at Intel since 1999 and has focused on product security for almost his entire career. His current passion is ensuring that developers at Intel have the right security tools in their hands to be able to quickly and efficiently understand the security implications of the choices they make in their daily work. In this episode, Ryan and Guy discuss open source security and how Intel deals with vulnerabilities in open source projects, the collaboration between security and development teams at Intel, and how COVID-19 has affected Ryan’s job. Ryan shares his perspectives on balancing management and individual contributor roles, some tips for that transition, as well as his final advice for teams looking to level up their security foo. Tune in today!


[INTRODUCTION]

[0:01:26.0] Guy Podjarny: Hello everyone, welcome back to The Secure Developer. Today, we have a very broad perspective of open source to discuss here. We have someone I’m really happy to have on the show: Ryan Ware, who is a director and security researcher at Intel. Ryan, welcome to the show, thanks for coming on.

[0:01:41.5] Ryan Ware: Hey, thanks for having me.

[0:01:43.4] Guy Podjarny: Ryan, before we dig in, tell us a little bit about what it is you do at Intel, but also a little bit of the journey that got you into this role and into security in the first place?

[0:01:52.1] Ryan Ware: Sure, no problem. I’ve actually been at Intel for 20 years, but my security journey actually started even before that. I was very fortunate, growing up down in the Los Angeles area when I was young, to be near Caltech in 1983 and 84. I had accounts on their systems to actually be able to go log in and do stuff, and I had like username and password, and I was like hey wait, how does this stuff work?

I actually started learning from that point on about security. Currently, I am a director in Intel’s Product Assurance and Security organization, where I am responsible for what tools our developers use as part of their software development. And although I am in security, and have always been in security, I’ve always considered myself to be a developer as well.

It’s actually kind of a fluke that I ended up at Intel. I was doing my internship with Lawrence Livermore National Laboratory and they were going to bring me on, but two months before I graduated, they had a security breach at Los Alamos. My boss called me and said, “Hey, we have a hiring freeze for nine months,” and I said, “Okay, well, if I’m still around looking for a job, I’ll come.” Intel called me two days later and brought me up, and I actually got a job offer two days after that.

I was actually quite surprised that Intel was in Oregon. I honestly had no idea. Interestingly, it’s the largest concentration of Intel employees in the world here, and that’s actually been one of the biggest advantages I’ve had in my career, because every major product at Intel has some presence in Oregon.

I’ve been able to do all sorts of very interesting things in my career. I did start out in IT. After that, I did digital rights management with audio, I did vulnerability analysis of Intel products, I did fuzz testing of our bare metal processors, trying to find security issues with those. I built a digital healthcare appliance and was responsible for the security on that; we built that out of Linux. I worked on a graphics card, the very first attempt at an Intel discrete graphics card, called Larrabee, and I was responsible for the security of that. I was in open source security for seven years, moved around through BIOS, and ended up where I am now.

[0:04:07.5] Guy Podjarny: You’ve definitely been working on Intel security before it was cool, and it sounds like throughout the stack, across the different evolutions of the company and the scope of activity for Intel. How different was that job closer to the beginning? In terms of not just your role, which I’m sure has taken on more responsibility, but also that work on graphics cards or low-level systems versus today’s cloud world or those activities. Do you find it’s 80% the same, is it 20% the same, how do you –

[0:04:33.3] Ryan Ware: That’s an interesting question. There are definitely some things that are the same, but the environment that we’re doing everything in has dramatically changed over time. There is that aspect to it, and then there is also the aspect of the growth in security evolution at Intel. I mean, back when I started at Intel 20 years ago, not only did we not have a secure development lifecycle, for example, nobody had one. Microsoft hadn’t come up with the idea yet and started evangelizing it.

Sure, security was important, and we were trying to prevent things like Code Red and all of the various Windows malware and things like that, but it’s not like it is today, where I think the security researcher community is probably orders of magnitude larger than it was back then. There’s lots more focus by companies on trying to engage the communities to figure out how we actually reduce the number of security vulnerabilities in our products going out the door.

There was much less focus on that as an industry back when I started.

[0:05:40.7] Guy Podjarny: Yeah, for sure. I guess, again, before it was cool. Specifically, one of the long stretches that you had within your Intel career has been focusing on open source security, right? Open source technology and those elements. Can we drill in a little bit into that type of work? What type of work does that scope really entail? I imagine everybody knows Intel for the chips, but is that –

[0:06:01.6] Ryan Ware: Yeah, I mean, people don’t quite understand the breadth of software that Intel actually creates. Lots of people these days, when they’re talking about software security, are very focused on thinking of things like web applications, which totally makes sense, because that’s what a lot of the world runs on these days. But with the products that we create at Intel, we have to worry about a much broader swath of software.

We have software that goes from anything from pre-boot firmware, like the CSME, or Converged Security and Management Engine, to BIOS, to operating system kernels, to drivers for those, to application frameworks, to applications themselves, to web apps – we do have products that are web apps, and we also have our IT that is focused on web apps – and cloud applications. Throwing even more into that, we also have microcode for our processors that is, in a lot of ways, considered to be software.

There is a rather broad view here of the software that we create, and there’s open source in all of that. So, for example, CSME, the pre-boot firmware, is based on MINIX, which is a small open source operating system. OpenSSL is in all of our BIOS. There’s not a whole lot of open source in Microsoft’s kernels, but we have lots of products that include Linux. Many products. You look at the number of open source projects that are actually in Intel’s products, and it has dramatically grown over the years. I actually ran into something the other day at Intel – and this is not a web application, this isn’t coming from the Node world or Python, it’s actually a C application – that has over a thousand open source projects in it.

Trying to determine, “Are the open source components that are used in that particular product appropriate for Intel to be using in a product with Intel’s brand name?” is quite an endeavor. Trying to keep up to date with security vulnerabilities in all of those thousand projects is also quite an endeavor.
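
To make the scale of that concrete, here is a minimal sketch of checking a component inventory against the public OSV vulnerability database (https://osv.dev). The query endpoint is a real OSV API; the component list is hypothetical, and the sketch is illustrative rather than a description of Intel’s actual tooling.

```python
# Minimal sketch: check a component inventory against the public OSV
# vulnerability database. The components below are hypothetical examples;
# a real inventory would come from an SBOM.
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

components = [
    ("PyPI", "jinja2", "2.4.1"),
    ("PyPI", "requests", "2.19.0"),
]

def known_vulns(ecosystem: str, name: str, version: str) -> list:
    """Return OSV advisories recorded against one component version."""
    payload = json.dumps({
        "package": {"ecosystem": ecosystem, "name": name},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("vulns", [])

for ecosystem, name, version in components:
    vulns = known_vulns(ecosystem, name, version)
    if vulns:
        ids = ", ".join(v["id"] for v in vulns)
        print(f"{name} {version}: {len(vulns)} known advisories ({ids})")
    else:
        print(f"{name} {version}: no known advisories in OSV")
```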

[0:08:15.9] Guy Podjarny: Yeah, I can only imagine. When you think about open source governance, indeed, and open source risk, dealing with these different vulnerabilities – and, like you mentioned, a very wide variety – does the same open source program apply to all of them? Do you need seven variants of how you deal with open source vulnerabilities, based on whether it’s Linux components in an Intel chip versus something that is cloud-operated, or along any other dimensions of variation?

[0:08:42.4] Ryan Ware: In some ways, each open source project is almost unique in how you have to deal with it, because of the wide spectrum of support that you get with these. For example, you look at a project like the Linux kernel. The Linux kernel has hundreds of developers, they’ve had thousands of contributors over time, and their development model is very rigorous. If you’re going to get something into the Linux kernel, it’s got to go through a lot of quality checks and security checks on the way, through code review and other processes.

Then there are other projects out there that other open source projects depend upon, but that haven’t been touched in years. I’m not going to name a particular project because I don’t want to cast shame on them, but one of the products that we actually use depends upon a piece of software that has not been updated since 2009 and has seven known vulnerabilities in it.

Just to clarify, for the open source component that we’re talking about here, there were really no functional alternatives we could use. There were too many dependencies on it, so we were in a tough position. At that point, the only real viable solution we had was to fork the code and go fix the vulnerabilities ourselves. Unfortunately, that means extra technical debt, but that project now has its own forked version of that code internally that has all of the fixes.

We can’t take that particular component out for a variety of reasons. It’s, for better or worse, a fairly simple component that has a very straightforward usage model to it, but this type of component has also been shown to have security vulnerabilities that can affect broader things in another context. In these particular cases, we have to do things like fork the code base and treat it like it’s our own code base. There are a number of different things that we look at to see if a particular project is suitable for our products.

We have to look to see whether or not the project has a regular cadence to its release cycle. Does the project even really have maintainers at this point and, if so, how many, and what are they focused on? Is there actually documentation for the project? How long has the project existed – is it something that just showed up last week, or has it been around for decades?

One of the interesting things that we look at, and this is counterintuitive to some people, is: does the project actually have any CVEs against it? If it doesn’t, that’s actually not a good thing. Generally, any security researcher looking at a project is going to find issues. If a project has zero CVEs against it, then it has very likely never had a security researcher look at it, and that’s a distinct problem.

We also look at things like: does the project actually use static code analysis? There are different open source static code analysis solutions that various projects utilize. Surprisingly, some projects don’t even have a way to submit a security bug. That’s a problem: if you’re going to submit a bug out in the open about a security issue, you’re basically telling everybody, “Look, here’s the problem, go attack this.”

There are a lot of different things that we have to look at before bringing an open source component in house to use.
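
Several of the signals Ryan lists can be checked mechanically. Here is a minimal sketch using the public GitHub REST API: it pulls a project’s creation date, last-push date, archived status, and whether it publishes a SECURITY.md giving researchers a way to report issues. The endpoints are real GitHub API calls, but how a team weighs the answers is a policy choice; nothing here reflects Intel’s actual criteria.

```python
# Minimal sketch of a few project-health signals discussed above, pulled
# from the public GitHub REST API. Illustrative only; not Intel's criteria.
import datetime
import json
import urllib.error
import urllib.request

API = "https://api.github.com/repos"

def _get(url: str) -> dict:
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def health_report(owner: str, repo: str) -> dict:
    info = _get(f"{API}/{owner}/{repo}")
    last_push = datetime.datetime.fromisoformat(
        info["pushed_at"].replace("Z", "+00:00")
    )
    now = datetime.datetime.now(datetime.timezone.utc)

    # A published security policy (SECURITY.md) gives researchers a way to
    # report issues privately instead of filing them in the open.
    try:
        _get(f"{API}/{owner}/{repo}/contents/SECURITY.md")
        has_security_policy = True
    except urllib.error.HTTPError:
        has_security_policy = False

    return {
        "created_at": info["created_at"],                # brand new is a risk signal
        "days_since_last_push": (now - last_push).days,  # stale if this runs to years
        "archived": info["archived"],                    # archived means end of life
        "has_security_policy": has_security_policy,
    }

print(health_report("openssl", "openssl"))
```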

[0:11:34.8] Guy Podjarny: Yeah, I think those are all quite sensible, and I’m fully with you there on exploring the CVEs. If a project has CVEs [inaudible 0:11:40] that are handled well, categorized well, and responded to well, it’s a wonderful sign of a healthy project that cares about security. Versus too many – millions is not a good one.

[0:11:50.5] Ryan Ware: Millions is not a good one.

[0:11:52.0] Guy Podjarny: Yeah, maybe a little bit more of a midway point there.

[0:11:54.9] Ryan Ware: The other thing that I just thought of, too, is that there are projects out there on the internet – and I’ll pick on Apache Harmony right now – that are actually end of life, but their code is still out there. We actually end up crossing paths with teams that have found these EOL projects and gone, “Hey, I need some sort of Java implementation in my product,” and used Apache Harmony. And it’s like, no – that has zero support. There’s no way to get security updates for it; unless you basically want to fork it yourself and own the whole thing, it’s not the right solution for you.

[0:12:30.7] Guy Podjarny: Indeed. All of this raises a concern about open source – not just consumption, but your own open source projects, right? Correct me if I’m wrong, but I think you’re in a situation where you ship critical components across the world, used by many. Some of those are your own open source projects, Intel-owned open source projects, that in turn consume other open source projects.

When you’re in a place in which one of those dependent open source projects is vulnerable, it’s known to be part of your own projects, and your components are massively distributed, you can’t easily, I imagine, update those components. Has there been an evolution in thinking? You’ve done this for a stretch here – what’s the best practice for dealing with this type of scenario?

[0:13:12.6] Ryan Ware: That’s a good question. It’s a hard thing to deal with at times. One of the things that we see a lot these days that we didn’t see before – and we’re seeing it actually more and more in the Java world right now – is we add a dependency to one of our applications. So we’ve decided to bring in some module that we need, but that has dependencies, and those dependencies have dependencies, and you start going down the dependency tree.

And then something like six levels deep, it hasn’t been updated in five years. At that point, how easy is it to go update that particular component? You can try to take ownership over the whole stack and change whatever that dependency is, and modify it so that you don’t need that old component. The problem with doing that is you’re starting to fork and deviate your stack from the upstream significantly. You don’t want to do that either, because then you’re building up technical debt in your own code. So there’s a couple of different ways that we have found of dealing with that.

One way is, for that component deep down in the stack, we actually just patch the security vulnerability in there so that folks don’t have to go deal with that. Unfortunately, at that point the team is making their own fork, and that’s a bit of a problem with technical debt that we don’t want to add, but it’s better than shipping something with a vulnerability in it.

At the same time, if it’s something that’s higher up in the stack, more of a first or second level dependency, then the team actually has the opportunity to go ahead and look and say, “Are there alternative open source projects that we can actually use for this?” For example, moving from something like unzip, which hasn’t been updated in a while, to something like libzip.
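
As a toy illustration of the problem described here, this is a minimal sketch of walking a dependency tree and flagging components that have gone untouched for years. The graph, names, and dates are all made up; in practice this data would come from a package manager lockfile or an SBOM.

```python
# Minimal sketch: walk a dependency tree and flag transitive dependencies
# that have gone untouched for years. All names and dates are hypothetical;
# real data would come from a lockfile or an SBOM.
from datetime import date

# name -> (date of last upstream update, direct dependencies)
GRAPH = {
    "my-app":        (date(2020, 4, 1),  ["web-framework", "archive-lib"]),
    "web-framework": (date(2020, 3, 15), ["http-parser"]),
    "http-parser":   (date(2019, 11, 2), []),
    "archive-lib":   (date(2020, 1, 10), ["old-unzip"]),
    "old-unzip":     (date(2009, 6, 30), []),  # untouched for over a decade
}

STALE_AFTER_DAYS = 2 * 365  # arbitrary threshold for illustration

def flag_stale(name: str, today: date, depth: int = 0) -> None:
    """Depth-first walk that prints every stale node with its depth."""
    last_update, deps = GRAPH[name]
    if (today - last_update).days > STALE_AFTER_DAYS:
        print(f"{'  ' * depth}{name}: last updated {last_update} (depth {depth})")
    for dep in deps:
        flag_stale(dep, today, depth + 1)

flag_stale("my-app", today=date(2020, 5, 1))
```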

[0:15:04.1] Guy Podjarny: Yeah, I guess you have to manage and gain visibility – coming back to that open source governance – to do all of those, and then you have to choose the right remediation approach to enact?

[0:15:13.0] Ryan Ware: Yeah, and Intel’s very serious about the way it handles open source, in that we don’t want to do things that the communities are unhappy with. We’ve tried to actually bend over backwards to do the right thing.

[0:15:27.8] Guy Podjarny: Let’s talk a little bit about working between the security group and development, right? You’ve teed up your job as one that helps the organization and helps the application teams be secure. Can you describe a little bit the relationship and the collaboration between your team, or the security organization, and the developer organization within Intel?

What methodologies or processes are you using, and what works and what doesn’t work?

[0:15:49.2] Ryan Ware: Yeah, absolutely. It’s hard to grasp the size of the development community within Intel. I mean, as you said before, when people think of Intel, they only think of chips and hardware, and yeah, there’s definitely that, but we have many thousands of developers at Intel. When Renée James was Intel’s president and ran our software solutions group, I believe one of the things she said publicly was that if you took all of Intel’s developers out and made them their own company, it would be the third largest software company in the world.

I think that’s probably still somewhere close to accurate at this point. Trying to scale secure development practices to hundreds, if not thousands, of different groups within Intel is a challenge, shall we say. That said, we try to focus very clearly on the added benefit that we can bring to those teams when we engage them. So, for example, Intel of course does not want to ship security vulnerabilities in its products. We have our Security-First pledge, and we have made it very clear what our position is on that.

So, to be compliant, we want to make sure all of our teams are using the right tools and figuring out how to reduce the number of vulnerabilities, and we show them how they can do that in a way that brings very little friction to their development process and allows them to not have to worry about these security problems ending up in their code. They actually find it to be a beneficial way of working with us, because a lot of times when we engage them, they are very concerned that all we’re going to do is bring more overhead to their product development.

When we actually show them that we can make a lot of the things that they’re doing more efficient, through automation and looking into their DevOps processes, it really helps them understand that it frees them from having to worry about introducing security vulnerabilities and from finding the issues themselves.

[0:17:55.5] Guy Podjarny: I definitely love the collaborative element. When you say ‘them’, how does that typically work? Is the interaction at the top – the CISO talks to a CTO, or to the head of the business unit, and it comes down? Or are there partners on the appsec team, or the security team, that partner with the dev teams somewhere in between?

[0:18:13.7] Ryan Ware: We actually have an organization that is spread through all of the product groups, where there is a head security and privacy leader in each of our business units, and under them they have – the term’s a little silly, I think – security champions that are focused on working with all of the various business units to bring these practices into play.

My team is there to support all of them when they are engaging with the various business units, lending them expertise to be able to go and show the development teams what the right things to do are. So, fortunately, my team does not have to scale all on its own to every single product group within Intel. We have a large support system where these folks are actually very helpful for us.

[0:18:59.9] Guy Podjarny: I am really intrigued by the best practice of security champions groups – some love the name, some hate it, and I will put the name aside; in some places it’s ‘mavens’ and the like. So, a couple of questions on them: are the security champions full-time, or is that a property of a developer? Do they actually get time to work on it, or are they just the focal point for expertise?

[0:19:20.0] Ryan Ware: So it is a mix, actually, I would have to say. I do know some people that are full-time security champions, and we also have what are called product security experts, which are the more technical security experts. They’re all fabulous; they all do great work in very difficult times. That said, I would say the majority of them are people that are more embedded in their product teams, focused on product development themselves, and so are more of somebody who is one of their development peers, and fit in that way.

[0:19:56.8] Guy Podjarny: Got it. So if I play this back: there’s your group, thinking about security, practices, and tooling in a holistic fashion. You partner with the different business units, who in turn have some people that are full-time – the product security people – who are central to you but not central to the company. Then you have some people, the security champions, who are more truly part of the teams and just affiliated. They’re spending some of their time on it; they are the focal points and the tentacles into the rest of the organization. Is that right?

[0:20:27.9] Ryan Ware: Yeah.

[0:20:28.1] Guy Podjarny: Cool. I like the collaborative approach, and I like that you mentioned DevOps and working with them, but also the higher-level commitment – this notion of the pledge to security, which I know Intel is making a very heavy push on. We are in a bit of a different world right now, with COVID-19 and the quarantine; everybody’s world has turned upside down.

How has that collaboration changed, if at all? How did you feel that impact, maybe in general for Intel’s development on the software side, and how has your work from a security perspective been? Do you feel it has gotten harder? Have you taken any steps, or has it gotten easier?

[0:21:00.2] Ryan Ware: COVID has definitely altered things quite a bit. It’s been very interesting trying to be as productive as I normally am with my job, doing everything from home. To some extent, it has actually been helpful, because I get fewer distractions than I normally would being at work. At the same time, there are things that are very beneficial that have not been able to happen. There are lots of things that come out of good hallway conversations with folks you just happen to run into, talking about how you’re doing and what’s going on with you.

So that particular aspect has been a problem, because I do miss those interactions, and they have been very helpful in keeping me grounded with what various business units are doing. At the same time, business units have been very proactive, reaching out to us about problems that they are having lately because they are working from home. They aren’t actually on site, working co-located like they normally are.

They’ve actually been very sensitive about security problems and reaching out to us when they think they have issues, and so that’s actually been refreshing. Trying to do some of the things that we would do normally when there is a problem, like traveling to the site to go work with them directly face to face, that unfortunately is not happening. That is actually one of our most effective ways of helping teams and it is just not happening right now.

[0:22:27.4] Guy Podjarny: Yeah, just sort of embedding. I guess there is no physical embedding to be done, because they themselves are in different locations. It is great to hear the appreciation on both sides that the hallway conversations are not happening, and therefore maybe more proactive outreach. So how do you compensate, given that you can’t travel to help them when there is a security problem? Is there something smarter than just jumping on a video call with them?

[0:22:48.0] Ryan Ware: Actually, I think video calling is the best way of dealing with that right now. We have been doing a lot of that, and for the missing hallway conversations, I’ve actually been proactively setting up 15-minute coffee sessions with folks. Virtual coffee over video, and that’s actually helped to some extent too. I don’t think video conferencing for helping teams has been as effective as being there, but it has still been adequate and working fine.

Honestly, when a team is having problems, I don’t find there is a better solution than to say, “Here, let me get on a plane, let me go live your problem with you for a week, and see how we fix it.”

[0:23:27.4] Guy Podjarny: Yeah, makes sense, but I like the trick of 15-minute coffees. I have heard it from a few people, and definitely, sometimes once you structure the relationship, it actually has some advantages in ensuring that it happens.

[0:23:37.6] Ryan Ware: Absolutely it does.

[0:23:39.1] Guy Podjarny: You know what is going on. So, we talked a little bit about open source governance and open source security, and maybe how things are changing a little bit right now. Have you seen any changes on the open source side in terms of collaboration or working on security?

[0:23:52.2] Ryan Ware: I haven’t really seen any change in that aspect. The open source community seems to be chugging along like it always does and working great. I do find that the open source community is generally very receptive whenever they hear about security concerns. Definitely when I have had to report security concerns to places like the Linux kernel or others, they take them very seriously. I haven’t had to report one during our lovely COVID-19 isolation.

But I have no doubt that it would be taken just as seriously. All of the open source communities that I follow through my RSS feeds and mailing lists seem to be just as active right now as they have been since before all of this happened. So that’s actually been interesting. I think a lot of it is because a lot of the open source folks work from home already, but even the ones that don’t, they’re working from home now, and it all seems to be just as busy.

[0:24:48.6] Guy Podjarny: Yeah, I saw some interesting stats from GitHub recently showing it’s the same number of hours on average people are doing, but they go later into the day. You know, some are taking childcare breaks during the middle of the day, or others – but also, maybe you’re not going out for a beer. You might be swapping it with typing out some code on an open source project.

[0:25:08.6] Ryan Ware: No, you just stay in for the cocktail.

[0:25:11.7] Guy Podjarny: Indeed. Well, hopefully you choose one or the other – you stay in for the cocktail or you’re writing open source – otherwise the combination doesn’t necessarily bode well for the security aspect of the –

[0:25:21.8] Ryan Ware: Very likely.

[0:25:23.3] Guy Podjarny: Indeed. So I think I’d like to take this down a slightly different path, which is – you’ve done many roles at Intel, some more individual and some more management, and clearly you’re accumulating responsibility. It would be great to hear – I know a lot of people coming from a technical path struggle a little bit, especially when it comes to burning issues in security, or the deep expertise that security pulls in – how you cope with being part management, part individual contributor. How do you balance those two perspectives?

[0:25:54.6] Ryan Ware: It is an interesting challenge, and it is funny for me because, when I came to Intel, I never had any desire to manage at all. It was not something I wanted to go do. I have always been a technical person. I like to focus on my technical work. At one point in time, I was the security architect on the MeeGo Linux distribution, which was a collaboration with Nokia. After that collaboration ended, a number of Nokia security engineers came to Intel and worked for us.

I was given the opportunity to manage them and also help chart the direction for open source security for Intel. To do that, the requirement was that I would have to manage the team. I went to go talk to my mentor at the time, Richard, who is a wonderful person. He is a great engineer at Intel, worked on some crazy, crazy things that we should talk about over a beer sometime, but he, in his perfect smartass way, said, “Think of it as a growth experience.” 

So I went ahead and decided to go manage the team, and actually grew that team from 6 people to 23 in total, across Oregon, California, Guadalajara, Finland, Romania, and China. It was actually a very rewarding experience. Sometimes, having one foot in the technical world and one foot in the management world, I felt like a wishbone being pulled apart, but at the same time, being able to drive a broad initiative with a team that you are managing was actually quite rewarding. We did a lot of very beneficial things with that.

I stopped managing for about five years and then started managing again about a year ago. Now, being the director of this team, it has also been a very rewarding experience to be in this position, pushing a broad effort around security tools across all of Intel. That said, I would never ever do it unless I could actually keep a bit of technical work going.

Because if I don’t have some technical work, I just can’t do it. I used to joke with my team that if I can’t open up Emacs once a day, I will just get too cranky – but I don’t use Emacs anymore.

[0:28:16.7] Guy Podjarny: A different IDE then, I guess – you generalize it a little bit. I know that oftentimes, having done some of those transitions myself, when you move from management to IC, to individual contributor, or vice versa, there is an adjustment period. Are there any tips or approaches that helped you adapt, I guess, as you transitioned from one to the other?

[0:28:38.0] Ryan Ware: Sure – definitely on going from management to an IC on the same team, because I actually did that with my old team. It just got too big for me to be able to focus on any technical work, so I found somebody to manage the team and brought them in. Making sure not to second-guess the new manager in front of other people is probably a good thing, so there is that. One of the things about managing that I find difficult – and I have one particular team member who reminds me about this all the time, and she’s awesome – is actually delegating sometimes. Because there is a task, I know how to do it, I know how to do it well, I can do it quickly, but at the same time I have other things I’ve got to do too.

So one of the things I have to focus on personally when I manage is making sure I delegate the right things to the right people.

[0:29:28.1] Guy Podjarny: That one is indeed the hard one, when you are the one with that expertise, but it’s also critical. I guess one more question in this vein: as a security person, oftentimes the very core of the job – the entire department – is predicated on helping others take secure actions, right? Or make secure decisions.

I guess, how much would you say is similar between an individual contributor in a security role looking to help developers make secure decisions, versus a manager? Is that still massively different? Is it the same as for a developer versus a dev manager?

[0:30:02.9] Ryan Ware: So I think for me, it ends up being the same as a developer going to them, because I have a reputation within Intel for being technical. Generally, I get a very good reception from the technical folks on the team because of that. That said, I think in general, yes, if you are a non-technical manager going to a team and trying to influence them, you have to do that in a very different way.

Sure, there are different methods – showing them more broadly the results of negative outcomes for their products, and focusing on that. To be honest, when I do things, I don’t like to focus on the bad things that have happened. I just like to focus on improving what is going on with the team.

If we want to focus on bad things, I don’t know of a single developer who has not written a security bug in their code. I mean, we all have. It is just the way it is. So I do find that being a technical manager, and being respected as a technical manager, basically gets me the same response as if I were a developer coming in.

[0:31:03.9] Guy Podjarny: Yeah, and it sounds like it also helps to avoid fear-mongering and still get some results.

[0:31:08.6] Ryan Ware: Oh, absolutely. I mean, sure, the wall of shame at DEF CON has its place, but I don’t necessarily want to do that internally with a product team.

[0:31:15.8] Guy Podjarny: I’ve really been enjoying this conversation, but I think we are running out of time. Before I let you go, I like to ask every guest coming on the show: if you have one bit of advice for a team looking to level up their security foo, what would that be? Something they should start doing, something they should stop doing.

[0:31:32.2] Ryan Ware: I would say the thing that they need to do is make sure that they’re automating all of the security tools that make sense into their DevOps pipeline, because that is something I wish we had done earlier than we did. Being able to let a developer know about a problem in their code in an automated way, as early as possible, is the thing that you have to do – if you can even do it at the check-in point, so that they understand it right then.

Or even with a tool that flags something as a problem in their IDE immediately, as they are writing their code. Getting that automation in place is critical. It gets so much harder to solve issues the further you get down the development pipeline; trying to fix an issue just before a product goes out the door is so much harder.
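
As one concrete way to act on that advice at the check-in point, here is a minimal sketch of a git pre-commit hook that runs Bandit, an open source static analysis tool for Python, over the files staged for commit. It illustrates the pattern Ryan describes; it is not a description of Intel’s pipeline, and any security scanner could stand in for Bandit.

```python
#!/usr/bin/env python3
# Minimal sketch of "let the developer know at check-in": a git pre-commit
# hook that runs Bandit over staged Python files and blocks the commit on
# findings. Save as .git/hooks/pre-commit and make it executable.
import subprocess
import sys

# List the files staged for this commit (added/copied/modified only).
staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
    capture_output=True, text=True, check=True,
).stdout.split()

py_files = [path for path in staged if path.endswith(".py")]
if not py_files:
    sys.exit(0)  # nothing to scan, allow the commit

# Bandit exits non-zero when it finds issues, which fails the commit.
result = subprocess.run(["bandit", "-q", *py_files])
sys.exit(result.returncode)
```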

[0:32:21.1] Guy Podjarny: Yeah, indeed – many wasted cycles, many dollars, and eventually security issues that never really got addressed because it was too hard and we just had to push it through. Very sound advice; I definitely echo that. Ryan, this has been great. Thanks a lot for coming on the show.

[0:32:33.9] Ryan Ware: Hey, thank you for having me. It’s been a pleasure.

[0:32:36.3] Guy Podjarny: And thanks everybody for tuning in and I hope you join us for the next one. 

[END OF INTERVIEW]