Season 8, Episode 140

The Need For Diverse Perspectives In AI Security With Dr. Christina Liaghati

Guests:

Dr. Christina Liaghati


Episode Summary

In this episode, Dr. Christina Liaghati discusses incorporating diverse perspectives, early security measures, and continuous risk evaluations in AI system development. She underscores the importance of collaboration and shares resources to help tackle AI-related risks.

Show Notes

In this enlightening episode of The Secure Developer, Dr. Christina Liaghati of MITRE offers valuable insights on the necessity of integrating security considerations right from the design phase in AI system development. She underscores the fact that cybersecurity issues can’t be fixed solely at the end of the development process; rather, understanding and mitigating vulnerabilities require continual iterative discovery and investigation throughout the system's lifecycle.

Dr. Liaghati emphasizes the need for incorporating diverse perspectives into the process, specifically highlighting the value of expertise from fields like psychology and human-centered design to grasp the socio-technical issues associated with AI use fully. She sounds a cautionary note about the inherent risks when AI is applied in critical sectors like healthcare and transportation, which calls for thorough discussions about these deployments.

Additionally, she introduces listeners to MITRE's ATLAS project, a community-focused initiative that seeks to holistically address the challenges posed by AI, drawing lessons from past experiences in cybersecurity. She points to ATLAS as a resource for learning about adversarial machine learning, particularly useful for those coming from a traditional cybersecurity environment or from the traditional AI side.

Importantly, she talks about the potential of AI technology as a tool to improve day-to-day activities, exemplified by email management. These discussions underscore the importance of knowledgeable and informed debates about integrating AI into various aspects of our society and industries. The episode serves as a useful guide for anyone venturing into the world of AI security, offering a balanced perspective on the potential challenges and opportunities involved.

"Simon Maple: Selling a product around something as well, you'll hear them talking about various areas of technology. And every now and then you'll say, "Well, the supply chain's a great example." And you think, "Actually, that's just one piece of supply chain. There's a ton more of supply chain." But actually, one of the reasons maybe an individual group is talking about that particular bit is because, well, actually there's a tool that fixes that bit. The focus is in and around there. Whereas I guess for MITRE, your boundary is the problem versus an area of tooling or work that you're doing to fix a part of that problem. Right?"

[INTRODUCTION]

[0:00:35] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, Dev and Sec collaboration, cloud security, and much more. The podcast is part of the DevSecCon Community, found on devseccon.com, where you can find incredible Dev and security resources and discuss them with other smart and kind community members.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

[INTERVIEW]

[0:01:21] Simon Maple: Hello, everyone. And welcome back to another episode of The Secure Developer. My name is Simon Maple. And during this episode, we're going to be continuing a thread, I guess, that's been going for the last month or two where you'll have noticed we've talked about AI a fair bit. And we're going to be talking a little bit about some of the concerns around AI and some of the areas that we need to think about mitigating.

Joining me today on this discussion around AI and other topics, we have Dr. Christina Liaghati, who is the AI Strategy Execution and Operations Manager at the AI and Autonomy Innovation Centre at MITRE. Welcome, Christina. How are you?

[0:01:59] Dr. Christina Liaghati: Thank you, Simon. It's great to be here.

[0:02:01] Simon Maple: Oh, it's an absolute pleasure to have you on. Yeah, looking forward to a number of topics that I think we'll cover in today's session. First of all, why don't we hear a little bit about yourself, Christina, in terms of maybe your journey through your career? And then perhaps a little bit about what you do at MITRE.

[0:02:16] Dr. Christina Liaghati: Yeah, sounds great. I actually kind of came up a little bit more on the physics and engineering side of the world, right? I got my master's in physics and my PhD in engineering. But I've really had the unique opportunity to work with and support sponsors all across the government over the past 10 years in everything from healthcare to national security. And really, over the past few years, most of those government sponsors have been talking with us about leveraging AI-enabled systems, right? And how to do that in an assured and appropriate way?

I've been fortunate enough to be part of the development and growth of the AI security topic across a lot of these government and industry collaborators. Actually, one such sponsor conversation several years back led to the collaboration with Microsoft and our partnering with about a dozen other organisations to launch the original version of ATLAS, which is our MITRE AI security framework, really focusing on the tactics and techniques that an adversary can use to take advantage of the AI-enabled elements of our systems.

We created ATLAS three years ago. And it's been a kind of living, maturing collaboration since then. ATLAS and AI security have been my primary and really technical focus ever since, which is why I actually lead ATLAS now. I'm very passionate about this space because I see how much of an opportunity it's going to be. Or it is now, right? And how much the space is continuing to grow in order to be able to help folks better secure their AI-enabled systems.

I mean, even just this past year, LLMs have taken the world by storm. Folks are really able to much more touch and feel, right, how these systems could impact and improve their lives, and that has dramatically changed even the threat landscape, right? Because it's enabled a lot more threat actors, or even just folks that are poking at the systems and seeing what they can break, to expose how vulnerable and brittle these systems are. I'm super excited to be able to talk to a little bit more of the community about AI security and what we can all start doing to secure our AI-enabled systems today.

[0:04:15] Simon Maple: Yeah. Sounds amazing. In terms of security generally, what kind of brought you to this area? Because I remember before we were talking actually about your path into tech, and originally it was more on the physics side than into the compsci. I was very jealous of that. I kind of came straight into the computer science area and I perhaps didn't choose some of the maybe more interesting topics at the time that I would have been kind of interested to go into. But what was your path like in kind of choosing security as an area that you wanted to go into?

[0:04:44] Dr. Christina Liaghati: Yeah. So much of it was driven by the conversations that we were having with sponsors, right? I'm one of those people that rolls up my sleeves and kind of dives into the problem that is the biggest pain point for folks. And whether that means that, you know, I'm taking on different roles that other folks don't necessarily want to do, or I'm diving into a problem area that we think is going to be bigger over time but that folks don't really understand right now. That's really what led me into AI security.

MITRE, about five or six years ago, was spending a lot more time and energy on understanding the AI security problem. The more that we dug into it and understood how brittle and vulnerable these systems are, the more we prioritised developing capabilities in this space. And just seeing that opportunity space, right? This is literally cybersecurity all over again, right?

And seeing that at our doorstep, seeing AI-enabled systems becoming more and more tangible for folks, and them not realising how vulnerable their systems are when they start to incorporate AI. We're in this tiny proactive window to be able to help folks figure out how to secure and assure their AI-enabled systems before we see some of the kind of major – I don't know, the worm attack on the internet, right? Some of the iconic internet breakdowns or cybersecurity attacks that just shocked the world into, "Oh, maybe we were designing our digital systems in a poor way." But now we're seeing a lot of that naivety coming out again as folks are incorporating AI. And that's really what I've been passionately digging into as we see that as a continually evolving problem for folks.

[0:06:15] Simon Maple: Yeah. Just the sheer number of applications and the accessibility of AI today through huge numbers of APIs that allow people to build on top of it. I totally agree with what you say, that the window is so small in terms of us actually being able to understand what are the right steps we need to take to not just use AI, but to use AI responsibly and securely.

Looking forward over the next year or couple of years, what do you envisage as maybe some of the potential exploits, potential problems that we're going to see with people jumping into AI today too fast without taking those precautions?

[0:06:50] Dr. Christina Liaghati: Right. And this is very much where ATLAS has been focused on helping to scope folks’ understanding of where they need to be prioritising their defences against some of these potential threats and vulnerabilities. AI-enabled systems are being attacked today. One of the case studies that underpins the tactics and techniques in the ATLAS matrix is actually a $77 million theft from a Chinese tax authority. And it was just two individuals taking advantage of how they were using a facial recognition system. It wasn't even the facial recognition system failing, right?

In a lot of cases right now, in the real-world attacks that we're seeing, the models are actually performing as designed. It's the naivety of how we're incorporating them into our existing cyber systems that's opening us up to new vulnerabilities. That seems to be the easiest, lowest-hanging-fruit attack pathway for adversaries to be taking advantage of.

For example, in that $77 million loss, which was over a two-and-a-half-year period, it was just two individuals that were repeatedly using the same pathway of buying some cheap cellphones that could present a modified video feed instead of the usual front-facing camera. When you log into your iPhone, you're of course using your facial recognition verification to get access to your phone. It was the same kind of setup, where they were verifying the faces of citizens to give them access to the tax authority system.

And these two individuals were able to use that cheap cellphone to present a modified video feed instead of that usual front-facing video camera, and created some really crude videos from basic headshots that they bought off of the black market. The video had the face slightly blinking its eyes and opening and closing its mouth, slightly twisting its head, right? Anything that a human would immediately see as like, "Whoa, that's weird. That's definitely not a human." Right? Nothing as fancy as the deepfakes that we see today. But that was enough to fool the facial recognition system, because it hadn't really been designed to deal with cases like that.

And then, because they were able to use that headshot that they'd bought off the black market, paired with some personally identifiable information that went along with those Chinese citizens, they were able to get traditional privileged access into that tax authority system. And that allowed them to repeatedly file fraudulent invoices and steal $77 million over that two-and-a-half-year period, right?

A lot of attacks like that are happening right now. We've seen a lot of cases even with LLMs. MathGPT, for example, is one online application that was just connecting a Python backend with the GPT API, right? And allowing the user to ask the system to answer math questions, right? But instead of GPT just running the calculation, say, through another platform, it was connecting directly with Python in order to ask Python to compute whatever question the user was asking.

And because of how they were connecting that, the user could actually ask the system to compute forever, right? One of the white hat hackers that was demonstrating this vulnerability brought the system down for a couple of days, right? It crashed their servers. It was more of a traditional denial-of-service attack, simply because they weren't, say, setting up guardrails to prevent a user from interacting with that API and asking it to do things like that. Through the same setup, they were able to access the application's GPT API key, right? If this had been a malicious hacker instead of a white hat hacker, they could have also run up a lot of fraudulent charges on the application's behalf.
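
[Editor's note: as a rough, hypothetical illustration of the pattern described above (handing model-generated code straight to an interpreter with no limits), the sketch below shows an unguarded handler next to a minimally guarded one that adds a restricted namespace and a timeout. The call_llm function is a placeholder for whatever model API an application might use; this is not MathGPT's actual code, and the guarded version is a sketch of the idea, not a complete sandbox.]

```python
# Hypothetical sketch of an LLM-to-Python bridge, first with no guardrails,
# then with a minimal timeout and restricted namespace. "call_llm" is a
# placeholder for whatever hosted model API an application might use.
import multiprocessing


def call_llm(question: str) -> str:
    # Placeholder: a real application would send the question to a hosted
    # model and get back a Python expression. A malicious user can steer
    # that expression (e.g. towards "while True: pass" or reading secrets).
    return "sum(range(10))"


def answer_unguarded(question: str):
    code = call_llm(question)
    # Danger: evaluates model-generated code directly in the server process.
    # An expression that never terminates ties up the server (denial of
    # service), and one that reaches into the environment can leak API keys.
    return eval(code)


def _evaluate(code, out):
    # Evaluate with a stripped-down namespace: no builtins, only a small
    # allow-list of math helpers. Still a sketch, not a complete sandbox.
    out.put(eval(code, {"__builtins__": {}}, {"sum": sum, "range": range}))


def answer_guarded(question: str, timeout_s: float = 2.0):
    code = call_llm(question)
    # Run the evaluation in a separate process and kill it if it exceeds
    # the time budget, so "compute forever" requests cannot hang the server.
    out = multiprocessing.Queue()
    worker = multiprocessing.Process(target=_evaluate, args=(code, out))
    worker.start()
    worker.join(timeout_s)
    if worker.is_alive():
        worker.terminate()
        worker.join()
        raise TimeoutError("model-generated code exceeded its time budget")
    return out.get()


if __name__ == "__main__":
    print(answer_unguarded("What is the sum of 0 through 9?"))  # 45
    print(answer_guarded("What is the sum of 0 through 9?"))    # 45
```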

The attacks are happening today. But I think the sophistication of the attacks is going to continue to increase as, say, our defences increase. Because right now these low-hanging-fruit, really easy attack pathways are working really well. They don't need to be super fancy with their attack pathways. But I think we're going to see the sophistication ramp up over the years as we get just a little bit better at defending our systems too, right? This is the same kind of back-and-forth of cyber defence and attacks.

[0:10:40] Simon Maple: Absolutely. And I think you're highlighting a couple of really interesting attacks there. It will happen more and more in the future, but it's happening today as well. And I think what's really interesting here is – I'd encourage folks online to look at the ATLAS matrix as well. We're not just talking about one area or a couple of areas where these types of attacks might happen. The ATLAS matrix has maybe 20 to 30 different areas where we need to consider areas of mitigation and areas of attack onto an AI system. And of course, there's many different ways in which you can use that AI system.

In terms of the battle that we're trying to fight today: one part of the business really wants to make use of AI and use it as a competitive advantage over others in the market, but we see the huge area here of potential drawbacks and issues that could arise. How do we as a business balance these two? And actually, from a security point of view, have a voice in the organisation to say, "Well, actually, let's do this responsibly, or do this with consideration about these types of issues that could arise and understand how we can mitigate those." Have you seen any examples where that's been done successfully or well?

[0:11:49] Dr. Christina Liaghati: Yeah. A lot of different organisations are of course trying to do this. And we've seen it done differently in most of the orgs that we're working with. I'd say that there's not any, say, perfect model yet. But the thing that we keep coming back to is, if you're going to solve a lot of these security problems, and even really the broader landscape of assurance, right? Which includes equitability, interpretability, robustness, resilience, right? The problems that come from incorporating AI into existing systems – or even future systems, but especially if those systems interact with humans – mean that there is so much more potential for failure and vulnerabilities than I think any of our current mindsets for designing digital systems are set up for, right? It definitely takes a bit of a mindset shift.

And some of the most effective ways that we've seen groups handle that is to bring in at least diverse perspectives at the beginning of a design stage, right? Because that's even some of the things that we keep running into in the AI security world. You can't just patch these bugs, right? In a lot of ways, these models are fundamentally vulnerable or they're vulnerable in the context that you're using them.

If you're going to keep using them in a certain application area, like with health data or in a certain environment where humans are interacting with it in a certain way, then you may need to go all the way back to the beginning of the design phase for the model and retrain it, right?

Some of these vulnerabilities and potential assurance concerns are really baked in along the full life cycle of development. I know a lot of cybersecurity folks will sometimes get frustrated when they're brought in at the end and told, "Hey, fix my system. We're about to deploy it." That's not going to work in a lot of these cases. You really need to come in at the very beginning of the design process and continually, iteratively discover and investigate where your risks might be throughout the full development, deployment, and then maintenance of the system across its life cycle, right? This definitely changes the mindset, and bringing in diverse teams has been very helpful for some groups to start to get after that.

But even that kind of standard model for how to deal with this is looking like it's very context-dependent on how folks are using AI, right? How often? How big of an element of the system is it going to be? What type of data are you going to be training it on? What type of context or environment? So much of what we focus on inside of MITRE when we're talking with organisations about these problems is that so much of it comes down to whether it's a consequential application of AI, right?

There's a lot of really cool applications of AI that maybe are slightly lower stakes in terms of risk to human life or society in general. But we're now applying AI not just in the government, but in, say, a lot of external industry, right? Consequential application areas where it's honestly going to be really concerning to see failures happen, right?

Imagine healthcare, or transportation, or any of these kinds of really critical infrastructure areas. Those are going to be very, very key for us to go through this full very thoughtful process to incorporate diverse perspectives to understand and mitigate the risks throughout the full deployment of these systems.

But to come back to your original point of folks being very excited about deploying these systems, it's not that the assurance experts or the security experts are saying, like, don't use these systems. That's not it at all. It's much more that we're saying, we want to make you feel good about deploying these systems, right? We see the power of these tools and we want to increase your trust in the systems.

I think if we're able as a community to go through these processes hand-in-hand, right? Like the adoption and deployment right alongside security and assurance. It's going to increase trust among the public, leadership, right? Everyone that's a stakeholder and decision-maker in using and deploying these AI-enabled systems. I think we're actually going to be able to increase the speed in some cases at which we can take advantage of the value of AI if we can also demonstrate that we're mitigating and thinking about some of these risks.

[0:15:42] Simon Maple: Yeah. Absolutely. And increase the long-term success of people actually using AI. And I feel like you've got some really good points there. I think one of the things that we do a lot today in DevSecOps-style movements and motions is trying to include security into existing patterns of work, existing workflows. And exactly as you mentioned there, thinking about things in the design stage. Doing secure-by-design thinking and discussions with AI.

There's a lot of existing places, and maybe even a lot of existing tools, that can help us with a lot of the potential problems that AI can bring. But there's also a lot of new problems that we're only really seeing because of an AI model or using AI in an application. What would you say are the most overlooked issues today? Would you say it's maybe more the newer types of attack vectors that we are only seeing because of AI? Or is it existing basic hygiene practices that are potentially causing the biggest problems with using AI in our applications?

[0:16:35] Dr. Christina Liaghati: The basic hygiene practices are really the low-hanging fruit for an attacker to take advantage of. I think that's what we're seeing exploited the most. But the adversarial machine learning community and honestly a lot of the AI security community has been, say, talking about adversarial patches for computer vision systems.

I'm sure years ago everybody was talking about like being able to put on a pair of glasses that had a really funny pattern on it or put a hoodie on that had a very funny pattern on it that would allow you to evade a computer vision system or something like that, right?

There are these very obvious, much-talked-about data poisoning or computer vision type attacks, that kind of thing, that have grabbed the community's attention over the years. But to be honest, those are at a certain level of sophistication, and an attacker doesn't really have to go to that amount of trouble in order to take advantage of the vulnerability of our AI-enabled systems yet, right?

So many of these basic hygiene issues are about how you're connecting it, right? Like that Python connectivity not setting up enough guardrails around it, right? Or with the facial recognition system in the tax authority example that we just talked through. They were trusting it to just catch all cases without going through and realising what it had been trained to do and what it hadn't, and some of these really easy ways, like using a headshot off the black market, could fool it. Those kinds of low-hanging-fruit things are, I think, much easier for adversaries to take advantage of right now.

But I think, over the next few years, especially as we get our act together on more of these fronts and secure some of those low-hanging-fruit aspects, it's also going to be pretty straightforward for adversaries to take advantage of the more in-depth or, honestly, hard-to-detect, hard-to-mitigate attack pathways. We've seen some of these more sophisticated attacks already. Like the ones that our AI security community has been talking about for several years now.

[0:18:18] Simon Maple: Yeah. Very interesting. Let's switch gears a little bit now. Obviously, from your side right now, leading the ATLAS project at MITRE, it's very interesting to be a kind of security head for a big community project, a project which isn't necessarily bound by a vendor tool or something like that, which of course you've also had experience with in the past as well. Tell us a little bit about the differences, the challenges, and what you're enjoying most about, I guess, either side of these two roles?

[0:18:46] Dr. Christina Liaghati: I absolutely love MITRE. I am the biggest commercial that you will ever hear coming out of it kind of thing. I mean, I've been here for seven years now. And I was at Raytheon before that. And while I loved my time at Raytheon, the biggest thing that I love about MITRE is being able to work in that safe space in the middle of the community, take that kind of objective perspective, and think about what is maybe best for the overall problem and, say, not even just different aspects of the mission or different aspects of, say, the industry areas that I might be developing tools in, right? It's a little bit different. Being able to take a step back and think about the problems as a whole. And that I have absolutely loved.

For maybe the listeners that aren't familiar with MITRE, MITRE is a pretty unique place. We're a not-for-profit that works across most of the government sectors supporting and advising our government sponsors, understanding their unique missions and needs, and really bringing our deep technical expertise and our network of over 10,000 MITRE engineers, and scientists, and really technically brilliant folks forward when they're needed to work on the most critical and impactful problems across the government.

Because we're also kind of a not-for-profit objective third-party, we're also able to work across industry, bringing industry and government together, like we do with ATLAS to solve problems that we know will impact the entire community of both the public and the private sectors.

For example, right now, we've got over a hundred organisations that are really mature and thinking about AI security in the ATLAS community. And even beyond that, like at the international stage, I'm chairing a NATO exploratory team that's focused on these exact problems as well, right? Leveraging the same things that we've been working on around ATLAS and a lot of the community with industry that we built out there. It's been really wonderful to take advantage of that MITRE perspective to work at the intersection of the entire community here.

And honestly, it's been a real breath of fresh air that so many folks in industry, and academia, and government have been willing to kind of set aside the competition boundaries, whatever else, right? To come together and work on these problems as a group.

Because, I mean, in a lot of ways, folks are recognising that there's a shared reputational risk. There's a shared supply chain risk that all of them are facing. And if they don't come together and work as a bit of a community to share data, talk about these real-world problems, and mitigate them as a community, it's going to be a lot less effective, with everyone trying to get after this and reinvent the wheel in every individual organisational silo that might need to be worried about this right now. I'm really loving the kind of community that we've got gathered around this, and I'm very grateful that I've been able to play at that intersection for a while now.

[0:21:17] Simon Maple: Yeah. And it's very interesting. Because very often, when you hear particular vendors or people who have a bit of money in the game that are selling a product around something as well, you'll hear them talking about various areas of technology. And every now and then you'll say, "Well, supply chain's a great example." And you think, actually, that's just one piece of supply chain. There's a ton more of supply chain. But actually, one of the reasons maybe an individual or group is talking about that particular bit is because, well, actually there's a tool that fixes that bit. The focus is in and around there. Whereas I guess, for MITRE, your boundary is the problem versus an area of tooling or work that you're doing to fix a part of that problem, right? How about the other side? Anything that you enjoyed previously, when you weren't doing more of the community-style, broader work, about working specifically in and around a tool or project?

[0:22:04] Dr. Christina Liaghati: Yeah. It is really fun, I think, to focus on the development of a particular system. I do kind of miss some of that. I mean, it's not like we don't develop capabilities under ATLAS, right? It's not like I don't still get very technically crunchy when I want to. But it is a little bit different, and honestly daunting, to zoom out from, say, the development of a single tool to thinking about the problem that faces the entire international community, right? That's a very, very different scale.

And I definitely appreciate the community of even tool developers, industry leaders that are trying to get after one specific aspect of the problem. Because we need that, right? I'm very determined to help as many of the industry groups that are trying to develop tools and capabilities here as we can. Because if we don't have them kind of working on that cutting edge of, say, developing something that's in a particular aspect of the problem, we're never going to solve the full puzzle, right? MITRE can't develop everything. And we don't do that, right? We don't compete with industry in developing tools and capabilities like that. We develop things that we open source or build out for the community to start getting after some of these problems. But I really appreciate the different industry groups that are really digging into developing some cutting-edge solutions to part of the bigger puzzle.

[0:23:13] Simon Maple: Yeah. No. Absolutely. And I think, actually, a lot of the work that MITRE does here really drives awareness and is very educational about the areas that people need to dive into more. And I guess when we talk about topics like this and we think about the existing culture of security, how do we make security more approachable? To others in security as well as beyond, in other departments. Is there a culture challenge that we still need to address?

[0:23:39] Dr. Christina Liaghati: Yeah. Absolutely. There's a major culture barrier here that I think we need to work through. Part of that is that the diversity of the AI community is not great. And then you overlap that with the diversity of the cybersecurity community and it gets even worse, right? There's not a lot of diverse perspectives even in that AI security community, right?

A lot of what I do just to try and increase the security of the community, kind of through that diverse-perspectives piece, is just to try and encourage folks to get more involved, right? Whether that's breaking concepts down into slightly more layperson terms, right? Just to be able to bring someone in from a totally different discipline area, help them understand the problem space, and then get them a little bit more involved in our risk mitigation and analysis processes, right?

I was talking a little bit earlier about how you need those diverse perspectives throughout the entire life cycle in order to identify and mitigate some of these risks, right? We're not going to train all computer scientists, all AI developers to understand, say, social bias problems, right? To really understand some of the socio-technical issues with, say, a particular data set, or using it in a particular way, or expecting users to interact with a system in particular ways, right?

You're not going to train AI developers to be the be-all-end-all solution solvers in this space. You need those diverse perspectives. But in order to get those diverse perspectives, you need to make it accessible to the groups that would really help you, right? Even that's like psychologists, right? Even just kind of the human-centred design teams to really help them understand what's happening on the AI side to bring them together to really solve the problem with the AI developers and the traditional computer scientists.

And honestly, a lot of that keeps coming back down to incorporating more women, incorporating more minorities, right? Different diverse perspectives, marginalised communities, into the design process and the conversation about how we're using and deploying these systems. It's honestly a long row to hoe, but it's something that I'm excited about doing personally. Because I've had so many conversations, even just at external conferences where I get up and really talk about the necessity of having a diverse group involved in this whole thing – even just diverse perspectives, right? I'm not even just talking about race or any of the other, say, things that we might traditionally think about when we talk about diversity. But diversity of, say, technical perspective is so critical in solving some of these problems.

That, and the fact that I'm obviously a passionate woman who is really driven and excited about solving these problems, has meant that there have been some really cool folks come up to me after talks or reach out to me virtually wanting to get a little bit more involved. And that's where we start to hand them, "All right. Here are some starting points." Right? Dig into the, say, adversarial machine learning 101 page on ATLAS if you're really interested in AI security. Or here are some additional starting points to get you a little bit more involved in this space.

Even inside of MITRE, I have a little bit of a squad, right? A team of women that have been really awesome at working with me to talk more about some of the work that we're doing at, say, external, more women-focused conferences in cybersecurity, like the Women in CyberSecurity conference, or WiCyS. At any of those, we've been a little bit more active, even just in the last few years, to show folks, "Hey, you can be a leader in this space." And there are folks that are successfully doing novel work here and honestly trying to drive the community in that productive direction. I'm pretty determined to get more folks excited about the topic and make it more accessible. But honestly, we still have a lot of work to do as well.

[0:27:21] Simon Maple: That's amazing. And just hearing you talk about that, it's hard not to be excited just listening to the way you're talking about it. You're passionate about it. And it's an amazing, amazing topic. And I think AI is a very interesting space as well for that. Because it's such a – we shouldn't really say an early space, because it's been going for so, so many years. But with the speed at which it's suddenly growing now, it's still such an early time for AI. And it's being so broadly used, right? Everything you can think of, there's an application for it. And so, yeah, it's a very, very interesting time to get involved.

I think you said there's a 101 page at MITRE. Where would you recommend people who would –

[0:27:52] Dr. Christina Liaghati: On the ATLAS web page. We have a 101 page for adversarial machine learning for folks that are just starting to try and understand it, right? Because a lot of folks come to us from a traditional cybersecurity background, or they're on the traditional AI side, but they don't really understand the overlap of the two. We have a 101 page on the atlas.mitre.org website.

In additional spaces beyond that, we'll usually point folks to some of the other – like, "All right. Here. Get started with this podcast." Or get started with the MIT AI 101 course. Or if they're really coming from a totally different discipline and they're just starting to understand AI, there are other places that we're trying to get folks a little bit more spun up on.

But to be honest, AI is kind of capturing the attention of a lot of not just the technical community, but the broader world as a whole. And that's been a little bit weird for folks to both start to get access to tools like ChatGPT, right? Like some of these things that have made it a lot more real for them to see how they can use it in their daily lives. But it's also meant that a lot of folks are just kind of digging in on their own just to understand some of the basics of AI. More folks are actually coming forward that might understand a little bit of the basics of AI, but they want to understand more of the risks side, which is honestly where we pull them into say either our ATLAS Slack space, which is where we've got a lot of the public community together like actually just having some more informal conversations about this. To get them a little bit more involved in understanding the risks and then seeing where they might want to get plugged in and mitigating that.

Because, to be honest, that kind of a – the risks of AI 101 thing doesn't really exist in a very clear and easy area yet because it's so context, and data, and mission-specific, right? It's much more about I think helping folks understand how to analyse their own risks in the space and what tools they can use to start to do that, which is honestly evolving pretty rapidly right now. Because nobody's got it figured out very well.

[0:29:43] Simon Maple: Yeah. Which, as you say, having that diverse kind of angle and viewpoint on these kinds of topics is vital and very important for us to have. Going back to something that you mentioned earlier, actually, about problems in AI: we can't expect every AI developer to know about every single type of problem and how to avoid these types of problems.

I guess this is actually a little bit similar if we draw a parallel to just typical cybersecurity, right? For any development cybersecurity problem, where an issue kind of comes into some code that they put in, do they know about the type of issue? Do they know how to mitigate it? From an AI perspective, what do we have today, do you feel, from, I guess, experts who can kind of help developers, or even processes, or tools, or things that can trigger us to say, "Oh, yes. Here's a specific AI problem that I need to eliminate before pushing this."

What advice would you give from a process point of view or a culture point of view about how AI security and AI development can better align?

[0:30:46] Dr. Christina Liaghati: Yeah. You mentioned the parallels with cybersecurity here. And that's honestly why we made ATLAS look like ATT&CK, right? We've deliberately tried to design some tools that were familiar to the existing security community to make it a little bit easier for them to understand and start to mitigate some of these threats and concerns. But to be honest, even the mitigation side of this is still very much under development.

We actually released a mitigations page on the ATLAS website about six months ago, just to give folks an initial starting point to understand how they could mitigate some of these threats. But even beyond that, even just where you get started to understand what your specific system is vulnerable to is an area very much still under development, and in a lot of ways a, say, more traditional red team has been the necessary solution to kind of understanding that internal problem, right? Have someone attack your own stuff, right? Like a friendly white hat hacker who can go through and see exactly how your system is vulnerable.

But we are building a few more tools for the community to start to get a bit more visibility into how their system might be vulnerable without deploying a full-up red team. For example, with Microsoft actually, one of our collaborative releases about six months ago, nine months ago now, was our Arsenal tool. It's at a fairly early development stage. But we wanted to get it out there into the community as fast as possible.

It's a threat emulation tool that we open-sourced as part of the MITRE Caldera platform, which a lot of researchers and security folks inside of existing cyber workflows are using right now for the threat emulation side. We wanted to incorporate some of the ATLAS tactics and techniques into that existing threat emulation platform for folks to start to understand the vulnerability of the AI components of their systems as they were using those threat emulation tools, right?

And that's built on top of the Microsoft Counterfit tool. And that's built on top of the IBM ART tool, the Adversarial Robustness Toolbox. There are a few things across the community that we're trying to make easier and easier. That was largely the point of building on top of each of those existing tools: to make it even more accessible for the security community to start to incorporate this into their existing, say, threat emulation or red teaming pipeline.
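
[Editor's note: for readers who want a concrete picture of the bottom layer of that stack, the sketch below shows a minimal, hypothetical use of IBM's open-source Adversarial Robustness Toolbox (ART) to craft evasion examples against a simple scikit-learn model and compare clean versus adversarial accuracy. It illustrates the library's general pattern rather than the Arsenal or Counterfit workflow itself, and exact class names or signatures may differ between ART versions.]

```python
# Minimal, hypothetical sketch of probing a model with IBM's open-source
# Adversarial Robustness Toolbox (ART). Install with:
#   pip install adversarial-robustness-toolbox scikit-learn
# Exact class names and signatures may vary between ART versions.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary model as a stand-in for whatever model a system uses.
X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values into [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Wrap the model so ART's attacks can query it, then craft evasion inputs
# with the Fast Gradient Method (small perturbations of each test image).
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X_test)

# Compare clean vs. adversarial accuracy to see how brittle the model is.
print(f"clean accuracy:       {model.score(X_test, y_test):.2f}")
print(f"adversarial accuracy: {model.score(X_adv, y_test):.2f}")
```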

But a lot of even just the startups in this space, right? Like a lot of groups are still bringing together red teaming capabilities and workforce groups, right? And that's the other thing, right? Microsoft and Google, a lot of the tech giants, now have really mature internal AI security teams to start to mitigate some of these threats and concerns. But in the world in general, the Walmarts, the Johnson & Johnsons, right? Some of the big groups that are starting to incorporate AI into a lot of what they're doing, in a lot of ways, rather than spin up their own internal red teaming capabilities, they're either looking to some of these industry groups that are developing capabilities that can be brought in to help them solve some of these problems, or they're starting to work with some of these, say, open-source tools, like the Arsenal plugin that I mentioned a couple of minutes ago, just to understand some of their current risk profile, right?

A lot of different groups are still at different stages, I'd say, in solving this overall problem. But we're excited as a community to continue developing solutions for more folks to get better prepared for the threats as they continue to evolve.

[0:34:02] Simon Maple: Absolutely. And plenty of things there for our listeners to dig into deeper as well. Thank you. Great advice. We've come up to our final question already. It looks like it's gone quickly, Christina. But the final question, as always: in your existing role, if you could wish for AI to take away one piece of your role today, what would that part of the role be?

[0:34:21] Dr. Christina Liaghati: Ooh. Well, I think I would have to say making email a little bit easier to stay on top of. That's honestly one of the things I'm very excited about, as I think personal assistants, AI-enabled personal assistants, in the next few years are going to change the business game quite a bit. I'm really looking forward to that. Because even just for my own mental health, I have given up on staying on top of my email 100%, right? I'm implementing a lot of tools to try and respond to folks as quickly as I can. But the number of folks that want to talk about AI right now has not made it any easier to stay on top of. I am excited about that side.

[0:34:59] Simon Maple: And it can absolutely respond to all those other AI agents that are sending emails as well.

[0:35:04] Dr. Christina Liaghati: Right. Right. Exactly. Yeah. Yeah. Exactly. The malware emails that we're going to get that are designed with an LLM are not going to sound like a Nigerian prince anymore. They're going to be much more personalised. That's going to dramatically change the game on the kind of filters we're going to need in our emails too. We'll see. This will be an interesting next few years.

[0:35:22] Simon Maple: Yeah. Yeah. Absolutely. I totally agree with that. Well, Christina, thank you so, so much. It's been an absolute pleasure chatting with you. And if people want to hear more about a lot of the things that you're working on, atlas.mitre.org is the place to go to read more about that. Thank you very much, Christina.

[0:35:35] Dr. Christina Liaghati: Awesome. Thank you, Simon. It was great to talk with you. And we'll look forward to getting more folks involved in the community.

[0:35:41] Simon Maple: Absolutely. And thanks to our listeners at The Secure Developer for tuning in today. And look forward to more great content over the rest of the year. Thanks all.

[OUTRO]

[0:35:53] ANNOUNCER: Thank you for listening to The Secure Developer. You will find other episodes and full transcripts on devseccon.com. We hope you enjoyed the episode. And don't forget to leave us a review on Apple iTunes or Spotify and share the episode with others who may enjoy it and gain value from it. If you would like to recommend a guest, or topic, or share some feedback, you can find us on Twitter @devseccon and LinkedIn at The Secure Developer. See you in the next episode.