
Season 7, Episode 116

Open Source Security, Vulnerabilities, And Supporting Women In Technology With Emily Fox

Guests:
Emily Fox

The Cloud Native Computing Foundation (CNCF) hosts critical components of the global technology infrastructure and has played a huge part in elevating the industry standard for security. It brings together top developers, end users, and vendors, and also runs the world’s largest open source developer conferences. Today on the show we’re thrilled to welcome Emily Fox, a Security Engineer who also serves as the co-chair of the CNCF Technical Oversight Committee (TOC) and is involved in a variety of open source communities. In our conversation with Emily, we unpack the intricacies of open source security and vulnerabilities, as well as what she’s learned during her time with the CNCF. We discuss what participants can expect from the Global Security Vulnerability Summit, how you can get involved, and the project that Emily is most excited about. Finally, Emily shares her passion for bringing more women into the technology sector and breaks down the crucial steps that will get us there. Tune in for a fascinating conversation on open source security, vulnerabilities, and more!


[0:00:17.5] ANNOUNCER: Hi. You're listening to The Secure Developer. It's part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source dependencies, containers and infrastructure as code. To learn more, visit snyk.io/tsd. That's S-N-Y-K.IO/TSD.

On today's episode, Guy Podjarny, Founder of Snyk, talks to Emily Fox, a Security Engineer and a member of the CNCF Technical Oversight Committee. Emily is a DevOps enthusiast, security unicorn, and an advocate for women in technology. She promotes the cross-pollination of development and security practices. She has worked in security for over 12 years to drive cultural change, making security unobstructed, natural and accessible to everyone. Her technical interests include containerisation, automation and promoting women in technology. She holds a BS in information systems and an MS in cybersecurity.

She is an active member in several open-source communities and is a co-chair for KubeCon and CloudNativeCon. We hope you enjoy the conversation and don't forget to leave us a review on iTunes if you enjoy today's episode.

[INTERVIEW]

[00:02:28] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Today, we're going to talk about Open Source and security and Open Source vulnerabilities, like really, how does the community deal with those, and a variety of other interesting topics. To explore all of this world, we have with us Emily Fox, who is a Security Engineer and has been one for a while, at some organisations we can't necessarily name here, but beyond that, was the co-chair of the Security Technical Advisory Group at the CNCF and now has a role that you can tell us more about in a sec. She's been a driving force behind the Global Security Vulnerability Summit that's coming up, so we'll talk more about that. Emily, thanks for coming onto the show here.

[00:03:08] Emily Fox: Thanks so much for having me. Yes, I'm a Security Engineer. I've been around the block for a while, across multiple organisations, but currently, I have the pleasure of serving within the Cloud Native Computing Foundation's Technical Oversight Committee. I'm the Security Liaison to the Security Technical Advisory Group there, where I am co-chair emeritus and still a technical lead. I've been getting more involved in the Open Source Security Foundation and several other communities, and talking with folks in the Cloud Security Alliance as well. Now I find myself partnering with the current co-chair of the Security TAG, Brandon Lum, on bringing together a Global Security Vulnerability Summit for the community.

[00:03:49] Guy Podjarny: Yeah. I've always wanted to be a co-chair emeritus. Once you have a Latin word attached to your historical title, you know you've done something right.

[00:03:59] Emily Fox: Yeah.

[00:04:00] Guy Podjarny: Tell us a little bit, Emily: what is the Security TAG?

[00:04:05] Emily Fox: The Security Technical Advisory Group within the CNCF is the group that is designed to support the Technical Oversight Committee in reviewing and driving the direction of security for Cloud Native projects. It's a little bit different from the existing technical advisory groups within the foundation, which focus on key technical domain areas such as networking and storage, and really get involved in the projects of those domains to allow them to reach a higher level of maturity, adoption and integration with the rest of the community.

Security, as we all know, is a critical part of all parts of technology, regardless of whether it's Cloud Native networking, storage, infrastructure, whatever. So they span multiple projects, and often what ends up happening is that, as projects go through graduation within the CNCF, the Security Technical Advisory Group is invited to perform a review for them, to help better prepare them for the level of maturity and sustainment that's expected for widespread adoption within the community.

They do a lot more than just that. They provide guidance and recommendations to the community. Recently at KubeCon CloudNativeCon Europe 2022 in Valencia, they announced a whole bunch of new projects that are open for review. They launched the Cloud Native Security Controls Catalogue, which has a NIST SP 800-53 rev 5 mapping to it, as well as a serverless security paper for review, a reference architecture for supply chain security, a whole litany of projects. Oh, and version two of the Cloud Native Security Whitepaper. They are doing so many things in that space.

[00:05:40] Guy Podjarny: Yeah. Sounds like a lot of great activity on it. How does the work split? Is it more being this security escalation and expertise support entity for the other projects? Or is the majority of activity actual outputs and deliverables and tools, like the ones you've mentioned, coming out of the Security TAG itself?

[00:06:00] Emily Fox: It's a little bit of both. A large part of the work to date has been in filling the void of information around securing Cloud Native ecosystems. That being said, though, we do have programmes within the Security TAG to provide that security expertise to Cloud Native projects that need it. The security reviews that we perform are one way we do that: they help set projects up to make sure that they're following secure development processes, that their reviews are set up correctly, that they have appropriate multi-tenancy in place, and that they're applying the correct mechanisms if they're looking for that true level of isolation.

In addition to that, there's a Security Pals programme that projects can engage with directly to get more specific security-relevant feedback, to make sure that any problems that could potentially show up are being resolved. So it's all over the place, and it's nice, because we have over 800 individuals that are members within the group, with a core of about 20 to 30 that are fairly active. We have a nice opportunity to do a lot of different things, but we're always looking for more contributors. That way we can get more things done faster.

[00:07:08] Guy Podjarny: Got it. When you think about the customers of this group, a lot of your examples go back to the maintainers, the projects themselves within the CNCF, and how to help them secure their work and their environments. How much do you think of the consumers of those projects, in turn, as the Security TAG's customers as well? Or is it indirect, and it's the eventual projects that should worry about that?

[00:07:33] Emily Fox: It's both. While we do a lot of engagement directly with projects, a lot of our work, in the form of our papers and our reference architectures, is designed for practitioners and adopters and users of those projects, or of multiple projects, that are looking to onboard into the Cloud Native ecosystem. Whenever we pursue one of our projects, especially when it's a paper-driven activity, we always want to make sure that our audience, and our intent behind addressing that audience, is very clear and upfront. We usually have an executive summary for more senior-level managers that are looking to direct their teams into this space. Then the rest of our papers are more around the practitioner that's taking advantage of those Cloud Native projects. But it is both: it's maintainers, it's contributors, it's adopters and end users, from a regular software engineer all the way up to a CISO, for instance.

[00:08:25] Guy Podjarny: Yeah. How is this unique to the CNCF? I mean, the work that you're doing with these different projects, how much of it, what percentage of it, is applicable to really any Open Source project out there that is building software, versus the CNCF projects themselves?

[00:08:42] Emily Fox: I will say, because Open Source as an ecosystem is so massive and has such a rich heritage of different processes around how we do software development, let’s say about 50%. I say that as we are transitioning into more microservices and containerisation within the landscape. So, while Cloud Native is its own unique ecosystem, for the architectural design decisions that are in place promoting distribution, immutability and ephemerality of our workloads, there are still a lot of core security primitives that apply within normal Open Source projects, and universally anybody can go to our documentation and our recommendations and apply them at some level to whatever their situation is.

[00:09:28] Guy Podjarny: Yeah. Yeah. I think that makes sense. A lot of it sounds like the security team of the CNCF, a forward-looking security team that isn't just supporting and empowering developers to secure their work, but also actually builds tools and platforms and tries to make doing that work easier and better documented. Is that a positive or a negative? How accurate a parallel is that?

[00:09:55] Emily Fox: It's fairly accurate. The Technical Oversight Committee and other members within the foundation rely a lot on the work that we turn out to be able to drive the next direction and where the next steps and path are headed. A lot of it is also generally just getting projects introduced to what being secure in today's cloud computing environments actually looks like. So we're often challenged with: there's this new and emerging piece of technology, what are the security implications of it? How do we introduce new principles and practices to better position them, for when the inevitable time of maturity comes, so that they're not owned immediately once their first end user adopts them corporately?

[00:10:40] Guy Podjarny: Got it. You mentioned the TOC, the Technical Oversight Committee. Tell us a little bit about that, and maybe how it interplays here with the Security TAG.

[00:10:49] Emily Fox: The Technical Oversight Committee is responsible for providing the technical direction of projects within the foundation. You can think of them as the stewards of what it is to be Cloud Native. So a lot of the work within the Technical Oversight Committee is determining where the ecosystem needs to move next, curating the projects that we have within that space to ensure that they are meeting the appropriate requirements and expectations of Cloud Native projects, such as having a healthy maintainership associated with them.

That they're capable of driving contributions, that they're responding to their adopters' needs, and that they're filling a place within the marketplace of the ecosystem, either as a gap or as a secondary or alternative product, because we firmly believe in no kingmaking within our projects. Healthy competition within the ecosystem is really what drives innovation, and the TOC is designed to enable these things to happen.

[00:11:52] Guy Podjarny: Got it. The concept of the TOC exists in other foundations as well, right? It is, I guess, the architecture team, if I were to draw an analogy.

[00:12:01] Emily Fox: That's correct. Each of the technical advisory groups within the foundation has a Technical Oversight Committee member that is a liaison, serving as the communicating body between the TOC and that TAG. That way, we make sure that the TAGs, which are an extension of the TOC for their specialised domain areas, keep us informed, because we can't know quite everything there is now in the ecosystem; if you've seen the landscape, it's massive. So we rely heavily on the TAGs to help guide us in some of those decisions and give us that information, so that we can be more informed in our recommendations to the community.

[00:12:35] Guy Podjarny: Got it. As the liaison to the Security TAG at the TOC, throwing all the TLAs around over here: what's your favourite project? What's currently getting you most excited in terms of the work that either side is doing?

[00:12:49] Emily Fox: This actually recently came out at KubeCon: the supply chain security paper from the Security TAG, which we released a few years ago; we've worked on a reference architecture off of that. Now, our next step within the Security TAG is to take that reference architecture and apply it in practice to Cloud Native projects, such that every Cloud Native project within the community ecosystem can take advantage of a secure software supply chain.

It's a huge undertaking. Recently I've been having excellent conversations with the Kubernetes SIG Release team and with the supply chain working group leads within the Security TAG about how we could potentially introduce the reference architecture to Kubernetes without having a blocking impact on their work, because they've been making great progress in this space, but also be able to reuse some of those lessons learnt as we go through and implement this reference architecture for them, or for a different project, to be able to wash, rinse and repeat for the rest of the community. Having the support of the foundation in the next steps that we need to head in is fantastic, especially as it overlaps with a lot of the initiatives and work that's going on within the Open Source Security Foundation.

[00:14:04] Guy Podjarny: Yeah, for sure. We'll get to the Open SSF in a sec. I really like that direction, because I feel the world of CNCF, and notably the world of Kubernetes, is one of the better instrumented journeys of code to production, and maybe one of the more consistent methodologies. There's still a lot of flexibility, but it offers the opportunity of defining maybe what good looks like, of defining something that's consistent, that can actually be embraced and applied reasonably fast, as far as ecosystem changes go, across a lot of the projects but also across a lot of the ecosystem, right, a lot of the customers, versus the much more DIY settings that exist outside of CNCF. First of all, do you agree? But also, is part of the intent to be a bit of a role model here, or should we really be focusing just on, hey, if you're in Kubernetes, let's get it right, and if it works out elsewhere as well, that's great, but if not...

[00:15:00] Emily Fox: I believe that we all follow some of the core principles of Open Source in that regard, in that if I'm doing something that's going to provide value to me, being capable of sharing that with the broader community to iterate and improve on it is going to be the best outcome for everyone. I feel strongly that the foundation is well positioned to do that. We have a lot of great community leaders who have enabled us to establish these frameworks with flexibility in these principles, to opt in and choose the correct infrastructure or specifications that work best for your individual environment's needs and concerns and use cases.

If they can be reused outside of our projects and ecosystems, or across multiple within it, then so much the better for everyone, because then we can cover more ground: what are those edge cases that need to be resolved, what are the challenges that everyone is encountering, and how do we solve them together?

[00:15:56] Guy Podjarny: Yeah. I mean, that's a great answer for it and a great view on how it relates to Open Source. Going on a bit of a side tangent, can you tell us a little bit about the core tenets of a software supply chain as defined by this reference architecture? How would you define the primary pieces of a secure software supply chain?

[00:16:15] Emily Fox: First off, I will highly recommend that everybody read the supply chain security whitepaper that was released by the group, because there is a ton of excellent information in there, and I know I am going to miss something even though I helped author portions of that paper. I will say the first and foremost is verification of trust, and that is actually the root of pretty much everything within the supply chain from a security perspective.

A lot of our current challenges in this space, from don't trust anything, make sure that it's signed, make sure that you're getting it from the appropriate location that you think it is, to establishing that provenance: it's the verification piece that is the most important, because you can have all of this great enriched information coming in about an artefact, the build environment it came from, all the dependencies that are within it, because you have a bill of materials associated with it. But the key value in all of this architecture, and within the reference architecture itself, is the ability to verify with high assurance all of the information you're receiving, because without that verification, you're just perpetuating an existing problem.
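To make that verification step concrete, here is a minimal sketch in Python, using the cryptography library, of checking a detached Ed25519 signature over an artefact before trusting any of its metadata. The file names and the source of the trusted key bytes are illustrative assumptions; real Cloud Native pipelines typically lean on tooling such as Sigstore's cosign rather than hand-rolled checks.

```python
# Minimal sketch: verify a detached Ed25519 signature over an artefact
# before trusting any metadata associated with it. File names and the
# trusted-key source are illustrative assumptions, not a real pipeline.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_artifact(artifact_path: str, signature_path: str, pubkey_bytes: bytes) -> bool:
    """Return True only if the signature over the artefact checks out."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)  # 32 raw key bytes
    with open(artifact_path, "rb") as f:
        artifact = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, artifact)  # raises InvalidSignature on mismatch
        return True
    except InvalidSignature:
        return False

# Usage sketch: refuse to ingest the artefact's SBOM or provenance unless
# verification succeeds -- "don't trust anything" by default.
# if not verify_artifact("app.tar.gz", "app.tar.gz.sig", trusted_key_bytes):
#     raise SystemExit("artifact failed verification; do not ingest")
```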

[00:17:22] Guy Podjarny: How much of that verification includes opinion? In supply chain security, there is the notion of just, okay, I think I got it from here; did I actually? Can I verify the provenance? Can I trust it? Can I evaluate downstream that it's indeed what I have? Then there are the questions about: is it any good? Do I trust the quality of this software? Do I trust the intentions of this container? Do you perceive those as being within scope of this reference architecture, or is that separate?

[00:17:48] Emily Fox: Oh, yes. You pretty much nailed it. That's usually the second part of this: once you have the verification in place, you also have a responsibility to define what is considered acceptable for you as an individual or an organisation, or if you're a maintainer of a project. So what are the field values that you're looking for? CISA and NTIA have a minimum set of fields for a software bill of materials that need to be produced, but you as an organisation, as an adopter or as a maintainer, might want more than that.

For any material or artefacts that are being produced, while you might be able to verify that everything is from the source it says it is and that the correct steps were followed, if you're missing some of the material that would make the outcome operational for you, then you've lost the quality of it and it's no good anyway. You've spent all this time and energy building all of these systems and capabilities only, in the end, to throw it out or make a partial decision as a result of it.
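As a rough illustration of that "define what is acceptable" step, here is a sketch that checks the components of a CycloneDX-style JSON SBOM against a required-fields policy. The first four fields loosely mirror the NTIA minimum elements, and the extra licenses entry stands in for an organisation demanding more than the baseline; the field names, file name and policy are assumptions to tune, not a standard.

```python
import json

# Per-component fields to require. The first four loosely mirror the NTIA
# minimum elements; "licenses" is an example of an organisation choosing
# to demand more than the baseline. All names are CycloneDX-style guesses.
REQUIRED_FIELDS = ["supplier", "name", "version", "purl", "licenses"]

def missing_fields(sbom_path: str) -> dict[str, list[str]]:
    """Map each component's name to the required fields it lacks."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    gaps: dict[str, list[str]] = {}
    for component in sbom.get("components", []):
        absent = [field for field in REQUIRED_FIELDS if not component.get(field)]
        if absent:
            gaps[component.get("name", "<unnamed>")] = absent
    return gaps

# Usage sketch: treat an SBOM with gaps as non-operational rather than
# silently accepting it, since missing material degrades the decision.
# gaps = missing_fields("sbom.cdx.json")
# if gaps:
#     raise SystemExit(f"SBOM rejected, missing fields: {gaps}")
```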

[00:18:43] Guy Podjarny: Do you feel there is a conflict between the desire of the consumers of the software, in this quality drive, to know everything that has happened to it, and maybe a bit of a drive for privacy in the Open Source community? I mean, how often does it conflict: how much do we know about a maintainer, and how much does an Open Source maintainer want to be known, versus how much does the consumer of their software want to know about them?

[00:19:10] Emily Fox: I don't know that I've personally come across any of those kinds of concerns from a maintainership perspective. I will say that adopters or end users of the metadata and artefact information coming out of supply chains to date know they need it, but the question around how they're potentially going to use it, and the information that's presented to them, is still up in the air for a lot of teams. That could be: is there only one maintainer on this project? How critical is that project within my entire infrastructure, versus a project that has 15 different maintainers but is 200 PRs deep and is never going to get out of whatever technical debt or blockers they have in place? From that perspective, there's still a lot of work to be done, but as far as who the maintainers are and where some of these dependencies and software packages are coming from, that hasn't been elevated, at least to my level, as a potential concern within the community.

That being said, when an individual makes the decision to Open Source their software, they are also making a form of a risk assessment about how much information about themselves they're making available, but they're doing that in the context of a decision: being the maintainer on the repo, creating a GitHub account or a GitLab account and associating it. That information is public and can be pulled in by other entities, dashboards and organisations for consumption for their own end goals and decisions. So there's always something to be mindful of out there.

[00:20:45] Guy Podjarny: Yeah. I like the lens of saying, look, when you're choosing to be in the public, it's almost choosing to be a celebrity, and you might become a successful celebrity in the process, but you're putting yourself out there in the public, and then you should be mindful of the consequences of that, even if not all of those consequences are known to you.

[00:21:05] Emily Fox: You also have a voice in how your information should be used as well. You're providing your handle and your signature on your commits for the work that you're doing, and the use of that information out of context is something that you have the right to question.

[00:21:22] Guy Podjarny: Yeah. Yeah, for sure. So, we've spoken about the Security TAG and the TOC and how they interact. Let me take us to where a lot of this software supply chain security work is happening in the broader sense, and that's indeed the Open SSF. How do you feel the Open SSF fits in, even within the picture of the CNCF? How do you add them to the mix?

[00:21:44] Emily Fox: This goes back to something I mentioned earlier: the CNCF is really focused on Cloud Native ecosystems, and while a lot of the work that we produce can be leveraged by other Open Source communities, the Open Source Security Foundation is really tasked with everything under the sun, which means that they have a much more complex problem space to provide solutions for. I mean, we still have Open Source projects under active maintenance where patches are being emailed back and forth, and that works for their process, or they're on SVN, or they're using Mercurial or something older instead of Git.

From a foundation perspective, the Security TAG and the CNCF can only provide so much guidance, because we have a reasonable expectation for modern Cloud Native computing, which assumes a lot of these core principles, Git, GitOps, those kinds of workflows, to be in place. But quite frankly, when you have an Open Source ecosystem as vast as ours is today, and as international as it is, you need an organisation that's capable of covering what that other 80% looks like, because it's going to be a lot more different and diverse.

[00:22:56] Guy Podjarny: Do you find the needs of the Open SSF, taking that broader remit, and the needs of the Cloud Native security stack ever conflict? Is it just about the remit? Or are there actual divergences in defining what good looks like, where maybe you'd like things to be a certain way, because within the CNCF they're reasonable, but they're deemed unreasonable elsewhere?

[00:23:20] Emily Fox: I would say that yes, there is a key difference there. Because of the expectations we have within the CNCF, we're going to be doing things in a very particular way, and we're going to have very specific recommendations and guidance as a result of that. Whereas with the Open SSF and a lot of the work that they're doing, at some level, take being able to enforce code review at the repo by the maintainers, let's say you require two of them and one of them has to not be the author: that's something that we can apply across all Open Source, generally speaking. That way, you've got a pair of eyes looking at the code, but the actual mechanics and details of how that occurs, and the enforcement of it, are going to be different and might have some security implications associated with them.
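The review rule Emily describes reduces to a small predicate. Here is a minimal sketch, assuming a forge-agnostic view where all we know is the author and the set of approvers; in practice this would be enforced through branch-protection settings rather than hand-rolled code, and the names are hypothetical.

```python
# Minimal sketch of the rule above: require a minimum number of approvals,
# at least one of which comes from someone other than the author. In real
# repos this lives in branch-protection settings; names are hypothetical.
def review_policy_ok(author: str, approvers: set[str], minimum: int = 2) -> bool:
    non_author = approvers - {author}
    return len(approvers) >= minimum and len(non_author) >= 1

# Usage sketch:
# review_policy_ok("alice", {"alice", "bob"})  -> True  (two approvals, one non-author)
# review_policy_ok("alice", {"alice"})         -> False (one approval, none independent)
# review_policy_ok("alice", {"bob", "carol"})  -> True
```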

There's also a measure of what is good enough, because we can't always achieve perfect. Reaching a certain level of security assurance in your software development processes is ultimately going to be, yet again, a risk decision for the project, because just like Open Source, you can use it as is, and we make no security guarantees or quality guarantees. If the maintainers choose to provide those levels of guarantees, they need to have a mechanism to provide that attestation that works for the language ecosystem they're part of. Those are all different.

[00:24:42] Guy Podjarny: Yeah, they're very different. I like, though, the idea of trying to set the standards to the modern standard in many ways. Oftentimes we talk about how you want to build your security programme in a way that is anchored in the future, that it should help you get to where you want to be, and you should build towards that. If you have legacy, if you have the history, then that's okay. But don't model your security practices on the lowest common denominator and, by doing so, make it worse for everybody. You want to aim at what good looks like. I guess the CNCF can help us set standards and reasonably easy-to-implement security controls, easy enough to adopt, and then we can figure out how we make things as close as possible to that in more legacy environments.

[00:25:28] Emily Fox: That's exactly right. I would caveat that instead of anchoring in the future, we should be sailing for it, because that requires a lot more work and tuning for the weather that you're going to encounter, or the vulnerabilities that are going to be thrown your way, so you can adjust your processes accordingly as the market and the pace of technical innovation change.

[00:25:49] Guy Podjarny: You totally outdid my metaphor over here; I've learnt something in the process. That's awesome. I like that. So indeed, let's talk a little bit about, and I appreciate the full journey here through the roles of the different security-related organisations in CNCF and in Open Source as a whole, one of the key initiatives that you've been driving: the Global Security Vulnerability Summit. Tell us a little bit about it.

[00:26:12] Emily Fox: Yeah. This came out of a discussion between the Security TAG and the Cloud Security Alliance. We realised that we had a lot of overlapping project areas, and the Security TAG maintains a catalogue of different supply chain attacks we've seen that have been reported against Open Source projects, and not just Cloud Native ones. We realised over time that ultimately vulnerabilities are the same, and we have whole classes of vulnerabilities that exist. We're not really making good progress in eliminating them, because ultimately, as long as you have humans touching computers, we will always have security problems. Even then, we'll have many –

[00:26:52] Guy Podjarny: We will never fix all the bugs. That’s the reality –

[00:26:53] Emily Fox: Exactly. But given the amount of technical change that's happened over the past 70-plus years within computing and the concept of computers, we haven't had that much change in vulnerabilities when you think about it, when we talk about what is exploitable and what the tactics and techniques associated with exploiting a present vulnerability are. We generally fall back to CVE as a common structure for how we express and convey a vulnerability. But CVEs don't cover everything. Some of them are configuration weaknesses; some don't fit in the normal construct of a vulnerability associated with software that is released to somebody.

We have all of these outstanding problems in this space and nobody's really disrupted it. Part of our conversation with the CSA, the Cloud Security Alliance, was: we need a better way of doing this. We need a better way of managing all of these vulnerabilities that are occurring within our projects and the Open Source community and within the Cloud Native community. We need a better way to enable security researchers to own the reporting that they have about it, and to allow others to collaborate and enrich that data set, because ultimately, we've got tens of thousands of CVEs and the list is growing every single day. We have all of this data, but how much of it is actually being used to solve the problems that we're experiencing?

I haven't seen anything beyond generating reports saying there's an uptick of security vulnerabilities and supply chain attacks. All of these things are going on, but it's been going on for the past decade, and we're not changing. So the Global Security Vulnerability Summit was designed to bring people together within this ecosystem who are frustrated with the status quo and want to drive a change. We don't know what that looks like. We have problems, we want to solve them, but how do we get in and drive this next level of innovation that we desperately need here?

[00:28:53] Guy Podjarny: It sounds like a lot of the innovation you're seeking is eventually to solve the problem of, how do we wrangle this information? A lot of it is about structure. Is it CVEs? Is it something else? How is it structured? Who owns the data? Where is the data uploaded? A lot of it is around almost the data model of all the vulnerabilities in the ecosystem, is that –

[00:29:17] Emily Fox: Yeah. Ultimately, when you're doing any system design, it comes back to data: data structure, data governance, modelling and production. No surprise here, it's the same situation with vulnerability management within the ecosystem. We've been in SaaS-based environments for a long period of time, but as we continue down that path, service owners are going to have some level of responsibility for vulnerabilities that they're introducing to their customers. Right now they negotiate that on their own. But how much of that do we actually know? What's being reported? What are our requirements as an adopter of that service? Do we need to restart the entire service on our own to gain the benefit of that automatic update? Do we even know that information? Is it being expressed to us in a way that is easily understood and digestible and actionable?

Then there's ensuring that, if I'm in an enterprise and I'm responsible for managing our entire fleet of vulnerabilities, and I have 15 different software products that are providing me vulnerability information, which one is the most trustworthy from a quality perspective? Which one has actual information to deliver to my developers about how you actually need to go and fix this? But also, are they all talking about the same vulnerability, and why is the information different between them? So it's getting around and pulling all of that together.
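To ground that data-model point, here is a hypothetical sketch of a normalised vulnerability record plus a reconciliation step across feeds. Every field name is an illustrative assumption, loosely inspired by OSV-style schemas rather than any agreed standard.

```python
from dataclasses import dataclass, field

@dataclass
class VulnerabilityRecord:
    """Hypothetical normalised record; fields are illustrative guesses
    loosely inspired by OSV-style schemas, not a proposed standard."""
    id: str                                            # e.g. a CVE or GHSA identifier
    aliases: list[str] = field(default_factory=list)   # the same flaw in other databases
    affected: list[str] = field(default_factory=list)  # "package@version-range" strings
    severity: str = "UNKNOWN"                          # this feed's rating
    fix_guidance: str = ""                             # actionable advice, if any
    source: str = ""                                   # which product or feed reported it

def reconcile(records: list[VulnerabilityRecord]) -> dict:
    """Given records believed to describe one flaw, surface the questions
    raised above: do the feeds agree, and which ones are actionable?"""
    return {
        "identifiers": sorted({i for r in records for i in (r.id, *r.aliases)}),
        "severity_by_source": {r.source: r.severity for r in records},
        "severities_agree": len({r.severity for r in records}) <= 1,
        "actionable_sources": [r.source for r in records if r.fix_guidance],
    }
```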

[00:30:37] Guy Podjarny: Yeah. I think that vision is very laudable, and I agree with the problem. I live in a world where we come across the challenges with CVEs regularly, as well as the challenges those things sometimes have, vulnerabilities that have no CVE. That's also not ideal in the path –

[00:30:53] Emily Fox: Yeah.

[00:30:55] Guy Podjarny: Not having a standard ID. I believe we can figure out the format; I don't know what it is. One of the challenges in this world, though, is that it's not always fact. There's a lot of opinion in what is and isn't deemed a vulnerability, anywhere from, hey, you had some input validation problem in one spot, and one person would classify that as five different vulnerabilities, because of everything you can do with it, and they'd recommend all of those, while another will file one. On the other side, a maintainer might not always accept what a security researcher thinks is a vulnerability. Then, of course, the classification of the severity, of how big a deal this is, is heavily, heavily subjective.

How do you, I mean, in the world of Open Source, maintaining and curating data assets hasn't always been the thing that Open Source excelled at, given the desire to entertain all these opinions. Do you feel like that is solvable? Is it outside of the scope, as in, let's figure out the format and the way to communicate about it, and let people curate? Is there something else?

[00:32:00] Emily Fox: I think it's a little bit of both. I think it is a solvable problem; again, as with everything in software development, you're going to have to iterate and refine over time. I think there is a lot of nuance associated with it, because there have been plenty of occasions where maintainers are informed of what is a perceived vulnerability, but it is in fact a feature of the product and is operating as intended.

Then we have those kinds of situations where a feature can introduce a vulnerability and provide an adversary an opportunity to exploit the project, because of how the feature is written. Then, as you very accurately described, there's this opinion that gets formed, and how do you express that? Because quite frankly, as somebody that could be leveraging an Open Source project, I don't care if it's your opinion that the feature is operating as intended while a security researcher says it's vulnerable; I need to know that something exists that I should probably be paying attention to. That's where I think we can all get on the same page and start solving this.

In some cases, that means information sprawl about what it is, and giving the security researcher and the maintainer an avenue to have that dialogue in the open, so that consumers of that data set can make their own informed opinions about how to proceed.

[00:33:16] Guy Podjarny: Yeah. Are there good role models you think we can pursue over here? Are there examples of classification of data that has different views on it, outside the security industry, that you feel inspired by?

[00:33:29] Emily Fox: Unfortunately, I can't think of any off the top of my head; that could just be because I haven't been exposed to a lot of them. I think there is some merit in data classification in and of itself. When you think about doing a privacy impact assessment or a health impact assessment of a service offering or a project that you're building, understanding the kinds of data that you're going to be dealing with and how they are intended to be used is a worthwhile exercise, especially when you're pursuing something as messy as vulnerability management and its data set. Because ultimately, that's what vulnerability management is: it's a software product that we need to be responsible for as an industry, and we need to make sure that we are talking to each other about what our needs are, so that we can capture those fields and those requirements accordingly.

[00:34:20] Guy Podjarny: Yeah. Yeah. I mean, it's super interesting, because to an extent I agree. I've often thought about what it is that we can learn from, and I don't have a great answer to that either. One of the challenges is that we're trying to do it in security, while we're actually not really trying to do it much anywhere else. Anybody that tries to classify bugs or problems comes back to a bit of an artisanal approach: each one of them is a unique snowflake. In security, we're maybe even more aspirational here, where we're trying to catalogue them and classify them because of the complexity of measuring security. So we're trying to get something to help us here.

It's interesting; maybe security is leading the charge over here, and we can classify other aspects of software bugs after this. So, what would be a great outcome out of this GSVS?

[00:35:09] Emily Fox: One of the passion projects that I've been helping out with is the Global Security Database, which is an effort to bring a lot of vulnerability identifiers together and provide that mechanism for maintainers and security researchers to collaborate on what a vulnerability actually looks like, accounting for things that are services in the cloud as well as actual software that's delivered.
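As a toy illustration of what bringing identifiers together can mean, here is a sketch that clusters advisory reports whose identifier sets overlap, so one flaw known under several ids collapses into a single group. This is a guess at the shape of the problem, not a description of how the Global Security Database actually works, and the identifiers in the usage note are only examples.

```python
# Toy sketch: transitively merge advisory reports that share identifiers,
# so one flaw known under several ids collapses into a single cluster.
# Illustrates the shape of the problem only; not how the Global Security
# Database is actually implemented.
def cluster_reports(reports: list[set[str]]) -> list[set[str]]:
    clusters: list[set[str]] = []
    for ids in reports:
        overlapping = [c for c in clusters if c & ids]
        merged = set(ids).union(*overlapping) if overlapping else set(ids)
        clusters = [c for c in clusters if not (c & ids)] + [merged]
    return clusters

# Usage sketch with example identifiers:
# cluster_reports([
#     {"CVE-2021-44228", "GHSA-jfh8-c2jp-5v3q"},  # same flaw under two ids
#     {"GHSA-jfh8-c2jp-5v3q", "DSA-5020-1"},      # a distro advisory links in
#     {"CVE-2022-0001"},                           # unrelated report
# ])
# -> [{"CVE-2021-44228", "GHSA-jfh8-c2jp-5v3q", "DSA-5020-1"}, {"CVE-2022-0001"}]
```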

I would love to see more individuals get involved in that particular project, but even if it's not that project, I'd like to see the vulnerability management community and the security research community come together and start having those dialogues and conversations. What I've seen too often is that security research and security operations sit on one side of the industry, away from secure development and software engineering, and this is an ongoing challenge that we have. How do we get these two groups to talk to each other? It's very much like going to a party where you've got one group on one side of the room and the other on the other side. I'm just waiting for somebody to hit play on that boombox, so a dance party breaks out.

Ultimately, that's what I want, because I want better collaboration. We're too disjointed in our end goals, and realistically, we're all just trying to solve the same problem: providing better quality and higher security for the software that everybody relies upon.

[00:36:31] Guy Podjarny: Yeah. Well, that's definitely an admirable goal, and I think it sounds like a promising step in that direction. If people want to get involved in the GSVS, providing opinions or supporting it, what can they do?

[00:36:44] Emily Fox: The Global Security Vulnerability Summit is June 23rd - 24th. It's an in-person and virtual event; it's hybrid. It's in Texas this year. You can do a Google search for Global Security Vulnerability Summit; that's one way of getting involved. You can also find me on the CNCF Slack and message me, and I'll send you information about it. We have a Security Technical Advisory Group issue open for organising and executing the event. But realistically, it's: sign up to attend, participate in the discussions. We have some great talks that are coming out of it, but I want to see more discussion, more birds-of-a-feather dialogue around a lot of this space.

[00:37:25] Guy Podjarny: Emily, wrapping up a little bit over here. We've spoken about all these different activities and the different foundations working on them. Do you feel we're set up for success? Do we have the right structure here? Are we missing anything big, beyond the vulnerability data format?

[00:37:39] Emily Fox: I'm going to say that I think everybody's minds and hearts are in the right place, but I think we're missing a lot of who that everybody is. There are a lot of really good ideas within the community on how we drive this, but I feel like a lot of them come from the voices that have been here for a long time. We can certainly do better in getting more individuals, across different technical domains, involved in more leadership positions and helping drive some of these great ideas actually through to completion. That, I think, is the biggest need for us to be successful in this area.

The other part of it is getting more individuals equipped to contribute to Open Source security. That's one of the ongoing challenges that we have in Open Source. It's one thing to be able to find a vulnerability and report it, but it's an entirely different thing to also provide a fix for it. That, in my mind, is the mark of a strong engineer: that you're capable of finding the problem, but also coming forward with solutions to select from and solve it at the start instead of waiting.

[00:38:45] Guy Podjarny: Yeah. That's spot on. Very well said. I guess we can advocate for more of that type of activity. Do you know if there's any work happening around pushing that, around recognition? We recognise there are some bragging rights involved with finding or publishing a lot of CVEs. Is there any, and maybe that's part of the GSVS work, how do we recognise people that actually fix vulnerabilities?

[00:39:03] Emily Fox That Open Source Security Foundations, Open Source Mobilisation Plan does have streams that cover a lot of this. There's been ongoing discussion about how do we incentivise and recognise individuals that are doing the chop wood carry water work of making these impactful changes on a project by project or community by community basis to solve some of these problems. There's still a lot of work to be done, so if folks are interested. I highly recommend joining the foundation's Slack channel and seeing how they can get involved there. There's items such as a security incident response team for foundations, which is just one way in which we can provide that human capital to be able to assist projects as well as security tooling being written to automate a lot of this.

[00:39:53] Guy Podjarny: Excellent. Emily, this has been a goldmine of great perspectives and opinions here.

[00:39:57] Emily Fox: Thank you.

[00:39:58] Guy Podjarny: Maybe trying to squeeze one more out of you, an open-ended question over here. If you had unlimited resources and budget to solve an industry problem, what would you tackle, and maybe how would you go about doing it?

[00:40:11] Emily Fox: This is something that is very dear and personal to me. I personally believe that we do not have enough women in technology. As you start adding filters onto that: women in technology, women in technical Open Source, women in security or cybersecurity, women in Open Source security, that list of eligible individuals gets smaller. When you add on other caveats, public speaking, for instance, or leadership positions, it gets smaller still. I'd like us to try to get more women into those roles.

I think there is a lot of value just in the perspectives that they provide, but also in the unfortunate glue work that, in some cases, they end up doing. Those things are what make the individuals that hold those leadership positions so valuable to the community, because they have the strategic vision and direction across multiple things, because they're working across four or five different foundations. They're making sure that things are delivered on time and that they're accomplished; that's really where a lot of that value is. So we have a long way to go, and we have some good initiatives. I think reaching back earlier into the education system, making sure that you have someone following in your footsteps behind you, and giving them room and space to talk and elevating them, is the most important thing that we can probably do.

[00:41:35] Guy Podjarny: Yeah. Well, absolutely. We won't go through all the ways of investing in it; there are a bunch of these projects. I think the mentoring, as an individual call to action, is super useful and practical, and if we all practised that, it would be good. Are there big moves that we can do beyond that, that you think would make a dent over here? Are there programmes we should be driving? Are there other differences in terms of –

[00:41:58] Emily Fox: Showing up and helping. We teach visual coding in elementary and primary schools; we could provide those same opportunities with a security twist and security principles associated with them. We teach children how to be safe online, from a cybersecurity perspective, in protecting their personal information. They get it. They're starting to understand and be more mindful of that. We can do the same thing with security, and a lot of the time we lose girls in middle school. We need to make sure that we have more programmes, more opportunities, more engagement, so that they can see leaders in this space that embody the figures they want to be when they grow up. But it's not just being able to see a person in that space; they want to see somebody that's like them, to show them that it is possible to pursue that course.

[00:42:47] Guy Podjarny: Yeah, absolutely. Emily, this has been a blast. Thanks so much for coming on to the show. It's been really, really great.

[00:42:54] Emily Fox: Yeah. Thank you so much for having me.

[00:42:56] Guy Podjarny: Thanks, everybody for tuning in. I hope you join us for the next one.

[END OF INTERVIEW]

[00:43:06] ANNOUNCER: Thanks for listening to The Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode.

Bye for now.

[END]