Skip to main content
Episode 138

Season 8, Episode 138

SAIF - Effective Risk Management And AI Security Standards With Royal Hansen

Guests:

Royal Hansen

Listen on Apple PodcastsListen on Spotify Podcasts

As AI adoption continues to grow, it's important that effective risk management strategies and industry security standards evolve along with it. To discuss this, we are joined by Royal Hansen, the VP of Engineering for Privacy, Safety, and Security at Google, where he drives the overall information security strategy for the company’s technical infrastructure (and keeps billions of people safe online).

Royal cut his teeth as a software developer for Sapient before building a cyber-security practice in the financial services industry at @stake, American Express, Goldman Sachs, and Morgan Stanley. In this episode, he explains why adhering to a bold and responsible framework is critical as AI capabilities are integrated into products worldwide and provides an overview of Google’s Secure AI Framework (SAIF), designed to help mitigate risks specific to AI systems. Royal unpacks each of the six core elements of SAIF, emphasizes the importance of collaboration, shares how he uses AI in his personal life, and much more.

Today’s conversation outlines a practical approach to addressing top-of-mind AI security concerns for consumers and security and risk professionals alike, so be sure to tune in!

Share

"ROYAL HANSEN: To me, it harmonises everything we just talked about, which is like – so input validation is a control we've relied on in software security for a long time. Harmonise it with the model, that prompt injection. I think that's a perfect example of what we mean by that. Now, there's different – another example would be serving infrastructure, and the integrity of the serving infrastructure, just like you have an integrity of the deployment of your binaries."

[INTRODUCTION]

[0:00:32] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, Dev and Sec collaboration, cloud security, and much more. The podcast is part of the DevSecCon community found on devseccon.com, where you can find incredible Dev and security resources, and discuss them with other smart and kind community members. This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open-source containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

[EPISODE]

[0:01:18] GUY PODJARNY: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Today, we're going to continue our AI security journey with Royal Hansen, who is VP of Security and Privacy, he'll tell you more in a bit, at Google, and has been very deep in AI security for a little while now. Royal, thanks for coming onto the show.

[0:01:35] ROYAL HANSEN: Thanks for having me, Guy. It's great to be with you again, and then with the audience.

[0:01:38] GUY PODJARNY: Royal, tell us a little bit –  give us a bit of context. I probably butchered sadly the title and the scope of what you do. Tell us a bit about your role, maybe a little bit of context about your background for people to contextualise your advice as we go on.

[0:01:51] ROYAL HANSEN: Yes, it's great. As we've gotten to know each other, I think we share a fair amount of history actually. I started my career at @stake in security. This is over 20 years ago, so I was deeply in this software security, web app security in the early days.

[0:02:07] GUY PODJARNY: [Inaudible 0:02:07] that world on it.

[0:02:10] ROYAL HANSEN: Exactly, exactly. Then, I went to the banks. We did a lot of work for banks, so I worked in financial services, both in London and in New York. That was because really, the cybersecurity issue was a big deal for the digitisation of all of our financial services work. But five years ago, I came to Google and, really, in many ways for conversations just like this. The platforms on which people were developing software, they were deploying enterprise software increasingly and that consumerisation of IT we talk a lot about. People beginning to use consumer IT in the in the context of enterprise. It just felt like being in a place like Google, working on security was a great place to get leverage, to help more people.

I came here five years ago, I run – you've got it right, privacy, safety, and security here. The engineering function, centrally. I think one element of it that's useful for the conversation here is that line between security and safety, particularly in the context of AI is blurring. I think it's important that we all keep that in mind. It isn't just strictly speaking a technical arena that occasionally raises its head in the real world. Those lines have significantly blurred.

[0:03:22] GUY PODJARNY: Yes. Yes. Absolutely. I think the safety and security to an extent, there's also an element that had the – I had the CISO of Meta here, and we talked about integrity, and how – [inaudible 0:03:32] all these lines are so intermixed now. I guess as we get powerful and versatile systems, like these AI engines, AI brains that can do so many different things.

[0:03:44] ROYAL HANSEN: Exactly.

[0:03:44] GUY PODJARNY: Yes, it sounds a very compelling background, and very valuable for security to see the different lenses. I do like the comments on the consumerisation of IT. But also, years ago, I had Adrian Ludwig here on the show, and –

[0:03:56] ROYAL HANSEN: A friend from that same era.

[0:03:58] GUY PODJARNY: Exactly. He kind of became and he talked about how, at Google, he kind of developed an appreciation that enterprise-grade security is actually like far worse security than consumer-grade security. Which consumer grade means, it just needs to work. You can't expect people to go through training or to be taught. It needs to just work, which was a great kind of point of view that stayed with me through the years.

[0:04:19] ROYAL HANSEN: It's a really good point and we'll talk about it. But I think this line between the cloud business at Google, which has grown tremendously in the last five years, but yeah, you still do have these two worlds of the consumer product and the enterprise, and they take a different business model, like a different way. But the tech heavily commingles in ways that I think – if we do it well, everybody benefits. If we do it poorly, to your point, actually, it's uneven. I think people suffer from it actually.

[0:04:50] GUY PODJARNY: Yes, and a [inaudible 0:04:50] lowest part and they basically focus their energy on that. That's where I'm going to start. So, we're going to talk a lot about the secure AI framework that you've released, you're one of the authors on the blog and a paper on it, which I found to be super interesting. Today, still one of the few AI security frameworks out there on it. I'd love for us to go through it in detail.

Before we do that, I'd love to just get a few more loose-form views from you about AI security. So, let me start with something very, very open-ended, which is, AI is an exciting but super powerful technology that has kind of burst into our lives at a somewhat alarming pace. What worries you most when you think about AI and security? Which concern most alarms you?

[0:05:40] ROYAL HANSEN: Yes. I think my primary concern, and there are lots of dimensions of risk that we're managing, and we'll talk about that. Is that we will throw out as a society, the benefits of all of the investments, and this is how you and I know each other. In things like software security, or data security, or infrastructure security, or safety, as we rush to adopt and include AI, and all of these applications or infrastructure. The SAIF as a framework is designed to help us with that. But I think it points out that as an industry, as a society, we have this tendency to rush with new things.

We’ve got to be really thoughtful about the way we build off of existing strengths because otherwise, we'll open new vulnerabilities that have nothing to do with frankly, the new AI-enabled interface. But will be on a shaky stack, we’ll be on a new and uncertain stack. I think this – you've heard Google talk about it, but I think it applies here. Being responsible and bold at the same time is the trick. We don't want to wait until we’ve figured everything out because we'll miss all kinds of interesting opportunities. On the flip side, if we lose those 25 years of software security expertise, we're going backwards in a real way.

[0:07:03] GUY PODJARNY: To sort of focus on the new and shiny, instead of the – that's a really good point. I'd be remiss if I didn't explain a little bit that you and I are both active members of the Open Source Security Foundation, the OpenSSF, and have been very active on supply chain security, software supply chain security to be exact. There are worlds of reports, and we can have three podcast conversations on that alone.

[0:07:23] ROYAL HANSEN: But that's a perfect example. Just as a forecast a little bit of the SAIF thing is, as we've done software supply chain work and the security of it, now there will be new elements of the stack related to AI. Let’s make sure we take a step forward in the provenance and the integrity of the AI configuration files, the models, or the whole supply chain of the data in the context of something like SLSA or SBOM, the things that you would have talked about in this forum. Let's not make that a totally separate conversation. Let's make it one conversation. It's all software. It's all data.

[0:08:03] GUY PODJARNY: And a lot of the fundamental concepts are security concepts where the tech can be swappable, but you still need to know where a component or data came from. You still need to be able to attest to something downstream, whether a certain process or controls have run through upstream. A lot of those fundamentals are alive and well.

[0:08:21] ROYAL HANSEN: Exactly. Exactly. So, you'll see people try – and I think new people will be involved here to your point, who don't have that familiarity with the work on SLSA or SBOM work. We’ve got to bring them along in this process. So that is part of the SAIF framework, which is, let's bring this AI community along on some of the journeys we've been on in the security world.

[0:08:42] GUY PODJARNY: Yes, and I think that's a good one. I think one challenge with that, when you look into getting the fundamentals of security, into building an AI system, one concern that comes up is a lot of times the AI systems don't have fundamental development practices. There are a lot of data scientists, maybe even some data engineers, people in a variety of that continuum, and they're building systems, oftentimes developing in production, oftentimes not versioning, data is fairly poorly versioned in many of these use cases.

I guess, can you, from a security team's lens, organisationally, clearly, that's something that you can aim to improve and probably will help you in many more ways than security. But from a security lens, is that prerequisite? Do you have to fight to get those data practices or those deployment practices sorted out before you can really do what you need to security-wise? Or do you seek to lump into that reality or security flaw?

[0:09:35] ROYAL HANSEN: It's a great point, and we could use it – we could talk about this at each layer of either the software development or the model development lifecycle or of the stack, right? We can take either lens. One of the things that attracted me to Google five years ago, I knew from the outside, the team did an incredible job of integrating just take the basic things like cross-site scripting or input validation literally into the mono repo, so that we weren't testing every commit, we were simply scanning for the use of the correct input validation, script handling, output exfiltrate, output escaping, so that we knew people were using the right implementation.

That saved an enormous amount of effort in the software security world because it was about using the correct thing, not finding all the versions of the bad that tens of thousands of developers might submit. In this case, you're spot on. There's a whole new group of people who've been in research areas, working with data, and iterating on models at frequencies that we've not seen in software, that just almost by definition, this tuning work is happening much faster. On the front end, think of the interface to a chatbot is natural language, not content handled by script, handled by code. Literally a language –

[0:10:58] GUY PODJARNY: A lot more loose and a lot more –

[0:11:00] ROYAL HANSEN: – goes all the way to the model, in a way. And then the model is deployed as a configuration file, alongside your binary in the stack. For each one of those, I raised them just as different layers. To your point, the security team has been here working with the research and AI specialists to do exactly what we said, like, "Okay, this is new, it's different, the speed at which you're iterating." How do we sandbox that? Because we can't treat each iteration like a commit, and a build. It's just not going to work.

[0:11:34] GUY PODJARNY: Right. It's too big.

[0:11:34] ROYAL HANSEN: It's too big, it's too frequent. The tools don't – that's not the tools that they've been using. How do you sandbox it, so they can do that for a while, without the risk of exploding into the rest of the ecosystem, but still giving them the flexibility? Same thing with the config file that they deploy. How do you put the same kind of integrity controls on the config file that you have on the binary? Just treat it like a config file, so treating it differently. Same thing on the front end. when it's a prompt, how do you treat that like code, like script, like almost like JavaScript that's being unsent into the model?

The point is, the security team and the research team work together to build those – think of it like crypto authorisation input handling. We need our equivalent of the security primitives for the AI stack, so no one ever has to think about these things.

[0:12:30] GUY PODJARNY: I mean, that's great. There's a few prerequisites behind. First, you need to know who the researchers are. That was another point –

[0:12:35] ROYAL HANSEN: Fair enough.

[0:12:35] GUY PODJARNY: One sort of fellow CISO pointed out, which is a lot of security teams have been investing for years in a relationship with the development team. And they might have no relationship with the data scientists, data researchers. I guess they have to fix that. There's no two ways about it. You have to address that.

[0:12:51] ROYAL HANSEN: Well, I do think that's what – the framework is actually a good one, because some of the framework, the SAIF framework is just about bringing different communities together, the detection response team, the automation. Because I do think you're right, the relationships and the confidence in each other is as important as some of the technical details.

[0:13:11] GUY PODJARNY: Right. Great. So maybe – I think we're sort of – we're getting – it's going to be –

[0:13:15] ROYAL HANSEN: Into the framework.

[0:13:16] GUY PODJARNY: – through the framework on it. But maybe let me ask you one more question before we go to the framework. We're going to talk a lot about structure and things you need to do. What excites you the most about AI and security?

[0:13:27] ROYAL HANSEN: Yes. I think that AI is fundamentally a democratising technology. In that, it's going to enable a lot of things that every individual already knows a fair amount about, but can't quite complete. They're not the expert, whether it's the SOC analyst, whether it's the software developer, whether it's – but also just in the context of normal life, whether it's my teenager who's doing a variety of things with his phone. There's a whole bunch of just these little things that, if we were to go the last mile, the automation would be much – the efficiencies, the productivity gains will be enormous. I think that's going to happen across the security landscape, but in lots of ways that are somewhat mundane, but the multiplier effect on that will be enormous.

Think of all the people that are doing software, and all the people that are doing scheduling, all the people that are doing logistics. All of this – every one of them is going to get a percentage better, and that multiplier will be enormous. I think of AI through that lens, rather than through sort of the whiz-bang, sort of magic that it's going to solve. I think a bit of semi-experts out there, they're going to get that much better.

[0:14:37] GUY PODJARNY: Yes more the AI, you're more excited by the AI assistant than the autonomous –

[0:14:40] ROYAL HANSEN: Correct, correct.

[0:14:41] GUY PODJARNY: It's more about making every person 100x more productive, versus replacing certain roles. But, Royal, there is a role for – the security talent shortage has been sort of a truism for many, many years. It's hard to get people onto that. There are probably elements of the software world and security world that we do want to aspire to have some amount of SOC analysis just be entirely autonomously done. Which I guess is hard to really delineate between AI that has boosted an individual to do a lot of things and an AI that has offloaded so much of their work that some aspect of their job is no longer their job. Now, they're desperate.

[0:15:23] ROYAL HANSEN: It'll be different in every line of work, every function, every person, and that's kind of what I like about it. It is some – it's flexible and can be applied to the person and their job. And also requires the person to think about it a bit. What’s next in the list of tasks that we should get to?

[0:15:40] GUY PODJARNY: That excites you more than the concern around how much productivity boost the attacker is going to get? Do you feel the balance? Because the attackers are going to get that whatever spear phishing at scale, like a million automated exploits generations as soon as interoperability gets aired on it. I guess, do you feel industry-wise, community-wise, we're set up to have the defenders gain more of the value than the attackers?

[0:16:07] ROYAL HANSEN: Yes. I'd love your reaction to this, but I've increasingly thought that when – just think of the asymmetry of most of – forget AI, but the way we've talked about it. You got that one attacker who's right once and defeats the 3,000 people doing security at a big company.

[0:16:24] GUY PODJARNY: The fundamental of that equation of security.

[0:16:25] ROYAL HANSEN: Exactly. But in this case, that attacker is going to get incrementally better, but the 3,000 people are going to get equally incrementally better, each individual. The math here says defenders are going to get better more than an order of magnitude. I mean, when you think about number of people doing defence versus offence, it's at least orders – a couple of orders of magnitude is different. So just if you think productivity, efficiency, effectiveness, automation, there’s more on that one. Now, in each, there will be plenty of one-off instances where the attackers benefit, but that's true today. That's not changing that. The math to me says defenders get better, faster here.

[0:17:10] GUY PODJARNY: Yes, that's an interesting. It's definitely an optimistic kind of view to it, and I think it's the one we should focus on. How do we make that happen?

[0:17:16] ROYAL HANSEN: Fair. That's right. That's right. Maybe there's something about my job that makes me think that way, but that's how I think.

[0:17:20] GUY PODJARNY: I think I'm not disagreeing with it. I think it's interesting. Maybe one counterpoint to that is, if AI ends up inflating the attack surface, the same way that it inflates productivity. Because if that happens, then that game kind of goes away, because defenders have that order of magnitude more systems to protect. So we need to be cautious not to get ourselves into that reality.

[0:17:41] ROYAL HANSEN: I do think one of the big reasons that we've put so much energy into SAIF as a framework is that, to your point, everyone's going to be affected by this. Because anyone who's got automation on the web, an app, any automated, digitised service, whether they're doing AI or not, that interface could be enlisted as an AI-enhanced or enabled process. So the world's got to come along on this journey because you're right. Those other interfaces, just because they're not AI-enabled, doesn't mean they might not be subjected to AI-enabled activity. So we've got to do –

[0:18:19] GUY PODJARNY: Intentional or unintentional.

[0:18:22] ROYAL HANSEN: That's right.

[0:18:22] GUY PODJARNY: Well, I think we teased the framework enough here. We should probably get into describing it. So let's get to it. Now, maybe start off by just sort of telling us what is the SAIF framework about and start breaking it down. We'll come and go through the different sections in detail.

[0:18:35] ROYAL HANSEN: Yes, that'd be great. I mean, at the highest level, and we've teased it out a little bit. It's simply the application of our own experience when we began in 2011 with a team inside of security that was using that version of AI's capabilities to solve security problems. Over 13 years, that team has solved different problems or integrated with different foundational tech or thought about the attacker and the defender within that context. We've tried to distil in ways that are applicable not just to a large technology company, or a software development company, or a bank, but to anyone who wants to go down this journey, some categories of focus, they could come along on this journey. That's the first thing.

The second is, it's an effort for us together to refine each one of these. Because as I said, in the beginning, it's a lot of new stacks emerging, a lot of new players, a lot of new people. Researchers, data scientists who've never been in the security conversation. There's a journey to bring those folks along, so that we as both companies, consumers, and countries have confidence that there's a process and a structure for growing comfortable with each new innovation or each new implementation.

[0:19:54] GUY PODJARNY: Yes, that would [inaudible 0:19:55] replace the framework.

[0:19:56] ROYAL HANSEN: Exactly. It's not as much about here's the plate, here's the recipe. It's about getting people together and working on each one with that combination of experts who can help us do it right. So, you'll see a lot more detail over the coming quarters and years.

[0:20:12] GUY PODJARNY: Got it. That's the intent when it says that SAIF is a conceptual framework on paper. It's the saying, I guess, maybe that's expectation setting as well, as we talk here. We'll talk about how to get to the specifics. The framework is quite high-level as it is, how you should think about the problem versus these are the five steps. There are some examples, of course, but it is conceptual, it is kind of how to think about it. So, I guess let's sort of go through it. Maybe we start off by quickly enumerating, it has six components, right? Maybe we start off with that, and then we start diving in.

[0:20:43] ROYAL HANSEN: Yes. My sort of shortlist, I'll poke at this or I'll call out some element that caught your eye on it. But the first step is the foundational work, and I sort of alluded to that. But just like the integration of the foundational work that we've already done in software security, in technology, generally. The second is the integration or thinking of the detection response and your threat intelligence functions, to integrate both the possibility of the attacker using AI to our earlier conversation, as well as to then use it in defence, which is what leads to number three, which is to use AI in automation to keep pace really as what we just talked about, to make the defender or the defensive technology AI-enabled as well.

Fourth is to harmonise the controls, and so we talked about this in the context of now you've got Vertex AI, or you've got TensorFlow, or you've got other external frameworks. How do they integrate? How do the controls you use for input validation harmonise with what is now being used in that stack, or in that component of the stack? Then the feedback loops, and we talked a little bit about this. But I think, with the speed at which models are iterated on, and that AI-enabled software changes its behaviour, you have to have a feedback loop in your own processes that expects that, and doesn't kick off a two-week sprint for each thing you learn. It's got to be much faster than that.

Then the sixth is contextualisation, and we were just talking about this. Again, these are all showing up in business processes, and maybe in some cases, it's more fraud than it is security. But to take that step back, and make sure you're thinking of it in the context of whether it's safety, integrity, fraud, whatever it is the right framing.

[0:22:29] GUY PODJARNY: [Inaudible 0:22:29] So I think the six steps, we'll get into it. They do blend a little bit into one another. To me, they read as a sorted. So if you're only going to do one, do the first. If you're only going to do two, do the first two. Is that correct? They feel like a maturity – you can’t do number four without doing number one, but you can do number one without doing number four?

[0:22:49] ROYAL HANSEN: Correct. Particularly as you go through those transitions, and you heard me a little bit as we went. One ought to lead to the next as you do it well. That's right.

[0:22:59] GUY PODJARNY: Okay. I think that's important because they'd represent, while it is a framework, and it has the six elements, there's a facet of a maturity model in it as well. Although probably each of these is a never-ending task. You can always do more of each of the six. 

[0:23:12] ROYAL HANSEN: That's a great point. That's a good example of where, as a community is that somewhere we want to go. Take this, and maybe make it a little more granular under each, so that we could more formally use the maturity model concept with it. So again, good idea. We could – we're looking for ideas here too, and how to best use this.

[0:23:34] GUY PODJARNY: Yeah. I think maturity models have – everybody has a bit of a love-hate relationship with them. Because they imply a certain point in which you're mature, and you're done, and that's not correct. But as you get started, I find ordering is very important for many people who are just trying to figure out where to get started. So indeed, let's get started. Let's dig into the first one.

So you explained or touched on it several times here around sort of the security fundamentals on it. I guess, maybe carve that out, I understand that there's – but you need to not forget the basics in AI. But how if you're touching, this is your first step, you're trying to tackle, indeed, how do I identify even the foundations that I have? I guess, how do you approach embracing, like putting security foundations, the bare bones around AI?

[0:24:20] ROYAL HANSEN: Yes. Look, I think there's two levels to this. One I talked a bit about when I went through the lineage of data, and then the deployment with config files. But the very first step here is, to our software supply chain work, what is the software that created and deployed in combination with whatever other software is being deployed in these applications? To me, there's just a knowledge of with precision, the packages, the binaries, the code, however it's compiled, right? This is the thing, they're very different. A model is just a model, has to be deployed in the context of some interface. It oftentimes will then interact with other pieces of software.

To me, the starting point is just knowing that. As you said, sometimes these have grown up in silos outside of the traditional IT department, and so you don't know. I think the first step is just people becoming familiar with the typical training, and then deployment serving, let's say. They have these serving platforms for models. And then as you look about, well, for your own databases, logic tier, and presentation layer, how do they fit together? Some of that's just technology – it's not magic. These are still just software, data, and binaries that are deployed in a server with access to the internet.

[0:25:44] GUY PODJARNY: Yes. But it's simple, but easy to forget, which is, before you dive yourself into the world of prompt injection, try to figure out who is using what in the system, what components are in there? Are they using any vulnerable components, what about the master production? The sort of the one-on-one of software security, not even. 

[0:26:00] ROYAL HANSEN: That's right. How are these being developed? What's the IDE for your data scientists? I think – 

[0:26:07] GUY PODJARNY: Stock knowledge through the software, but also the stack in the software development process in the AI.

[0:26:13] ROYAL HANSEN: They're tools, right? They are brand new tools, and not all of them, to our conversation, have been integrated with the traditional pipelines of the software development teams.

[0:26:22] GUY PODJARNY: Got it. Okay, that makes sense. Maybe naturally because of the Snyk a little bit, but I do hear about this a lot. For a lot of people, it starts from just like software supply chain security, hygiene. Just tell me what is there. Does it have known vulnerabilities? We've had examples, EmaFlow being probably the most kind of noisemaking one with the most command execution vulnerability there. Then, go on to that to just inventory and to address. So inventory, you kind of – you know what you need to secure, which is a good start, and then you know what's inside of them? I guess, how do you evolve from there, and from the basic security practice, to the AI aspects to it? 

Notably, I guess, I worry about data. I feel like people generally, it's a good reminder, but it's relatively easy for them to say, "Okay. Remember that an AI system is a system still, and you need to secure the software around it." They have playbooks for that, they've done those works for many years. The data piece is a lot more new, a lot more unknown to a lot of these people. Do you perceive it? Is it still – are there security foundations to data? Or is that one of the later maturity phases?

[0:27:25] ROYAL HANSEN: Yes. Look, I think there's a practical reality in the beginning, which is just knowing the storage or databases that are used in the training. But by the time it's deployed, it's deployed in the context of the runtime of an application. The history of the training – so I think there really are two dimensions to this. The training of a model, which then gets distilled and packaged. And then by the time it's deployed, I think it can be modelled very clearly like an input-output, and it's a configuration file in a serving layer of an application, and it can be disintermediated. So this is one of the things that we do with filters. There are different ways in which you can check the input before it gets to that model. So that's just one example of like the –

[0:28:14] GUY PODJARNY: Or it gets to the training of the model.

[0:28:15] ROYAL HANSEN: Yes. I say that if I were worried about this as just a normal developer, seeing people begin to use it, I would focus on that runtime, and this is really the second point of the framework, which is detection, response, and threat intel. To your point, if you understand at a high level, or at least the components, what is possible? What are threat actors doing? What are my incident responders? Are there people monitoring the software? What should they be looking for? So it's kind of a threat modelling exercise.

[0:28:48] GUY PODJARNY: But before we go there, I get that, and I think it's both important and interesting. But before that, when we're still in the build phase, we're still in building a system, how do you consider and how did the framework consider maybe the training phase, the ingestion of data? Is it runtime? Is it a detection response type system over there if you've manipulated it." 

[0:29:05] ROYAL HANSEN: To me, it's a data – 

[0:29:06] GUY PODJARNY: Identifying data poisoning, data poisoning attempt. If someone did, how do you do forensics? I think that's the scary bit around the build piece. Fine. Once I have a file, if I trust the file, I can secure a file. But what do I need to think about ahead of it, earlier upstream?

[0:29:21] ROYAL HANSEN: Yes. I think you have to here distinguish between the foundational models, and then the distillation, and reinforcement learning curves on top. Because there's only a few companies on the planet that are going to be doing the foundational model training. So most people are going to pick up some version of it, whether through an API or through a distilled model itself. I think even in those cases, it becomes more of a runtime issue for most people than the true train – at Google, we spend a lot of energy on the large amounts of data that are going into these models and making sure you know the kinds of risks that you're talking about. But that happens behind the scenes, to your point, not in a production environment in that sense.

[0:30:13] GUY PODJARNY: Yes. I guess I agree, but there are areas in between. For instance, some people would run an open-source model and they would provide some additional training, or maybe the better way to think about it is the fine-tuning exercise.

[0:30:25] ROYAL HANSEN: Yes, that's right.

[0:30:27] GUY PODJARNY: Increasingly, if you are running a useful adaptation of the foundation model for your system, it probably involved some amount of fine-tuning whether you embedded that into context. I think that's maybe the one-on-one. But oftentimes, it's sort of more real tuning of – so I guess those pieces are – I mean, we're touching on the shared ownership model, which is, if you're just using, whatever, the Vertex API, or whatever OpenAI, it's kind of like – if you're just picking a player, and you're just calling it an API, and it just added some public context to it so that it would ingest your documents page, and give you some summaries, then your security concerns are quite light, because there's nothing the AI system that's not yours, and it doesn't know anything that anybody – not everybody should know.

But as you get into that fine-tuning, I guess, I perceived it in my less kind of educated recent practice version of this, that something that is almost like a connected system, or like a system you build and run before as a training element. And you secure that, and then the output of that feeds into. Is that a –

[0:31:27] ROYAL HANSEN: You're right. There is an entire discipline, which is similar to kind of the foundational model training. In that middle – to your point, and this is the point is, prefigures the deployment to publicly available, consumer-available, whatever, user available, serving of the model, which is – whether you'd want to talk about it as fine-tuning, or reinforcement learning, with humans, or with autogenerated. People are using models to generate content, to train models, or to distil in the distillation process. Look, this is an area back to our safety conversation, and sort of somewhere in the harmonise, I think we could have jumped to four in some ways to the harmonisers.

[0:32:12] GUY PODJARNY: That's what I was about think, is like maybe I am drilling into something, like further down the maturity, which is good. I’m learning [inaudible 0:32:18].

[0:32:20] ROYAL HANSEN: I think of that [inaudible 0:32:20]. My experience in why it's important to do the SAIF industry-wide, the experts here are the AI folk. It takes real understanding and hands-on experience to know how to do what you're just describing. So, I think this is where the security teams have to lock arms, and the paper on this is related to our Red Team. We've formalised our Red Team with three new researchers. They were not security people, they were just AI people, but hands-on with three of sort of the team, just a small team to come together and say, "Let's work out the practices for Red Teaming." We know software security really well, we can break apps. We know model, the kind of work you're just describing, and let's use that expertise together.

I don't think there's any substitute at this point because if, as you're saying, you have teams that are actually reinforcement learning, or distilling models. Like they're not just relying on an API. You really have to bring those two teams together, because they're different. People have spent decades becoming good at reinforcement learning and distilling models. People have been decades getting good at thinking about the kind of prompting, the equivalent of prompt injection sort of attacks, or what bad guys are going to do. 

I think if you're in that kind of business, you really have to dig in and understand what control can you get from the distilled model versus what control do you have to apply between the user interface and the served model? So to me, that's the big –

[0:33:55] GUY PODJARNY: I think that makes a lot of sense. I think it's a good clarification, and actually very good guidance to someone that might be going down the rabbit hole I just did. Thinking about securing the AI systems and say, "Okay. Phase one, there's software here." So, just clean it up, know what's there, do your software supply chain security practices, adapt them, understand, make sure that you cover the AI stacks, and that you're able to secure those. Those are really, to an extent, nothing terribly novel at this phase about either being AI or not being AI. It is a bunch of assets that you need to secure, and you need to run them through it. 

Then it leaves you – and then after that, instead of going down the okay, and make sure you go on this route to the end, so the refining of your model. Step two is actually the runtime piece. Maybe we switched to talking a lot about that. You've already touched on it a little bit, but how do you define kind of that step two?

[0:34:46] ROYAL HANSEN: Yes. For me, the primary lens in step two is the detection, and response, and threat intelligence. Because everybody's coming on this journey to really understand some of the nuances we just talked about. But in the short term, what you need to know is, what are bad guys doing either using AI against your interfaces back to the conversation we had earlier. Maybe completely unrelated to AI. It's just your web service, your API. Or in the instance, where you're combining some new models and software, like we talked about in phase one, what are the abuse vectors?

To me, there's just a familiarity with reality. In the same way we talked about with software, and the Red Teaming, or the adversarial testing, or the detection and response that sort of learning that. That's to me the front line, after you know what's deployed. Even before we get to kind of the sophistication of harmonising the controls.

[0:35:43] GUY PODJARNY: Yes. Phase one, I feel like, find a CISO and kind of getting those marching orders, I know how to execute. Phase two, some of the stuff you mentioned are –familiarise myself with AI attack vectors, doing a detecting thing, a runtime detection on prompt injection. I guess, is that practical for me, given the nascent nature of tooling in the ecosystem at the moment?

[0:36:07] ROYAL HANSEN: Yes. I think, one, there is more and more threat intelligence, and writing on this. I don't think everybody needs to go to same level of school in it, but I do think that means to the degree, or detection, or response to your SOC team, and you're ready to have a threat intelligence team, or you're consuming it from others. It needs to be a line of thinking and study, not just for phase two, but for phase three when you want to use it.

To me, that's why two becomes three, which is then, you want your teams to use it. I think the SOC is actually one of the best places that we're going to find productivity and enablement. So then, you see what we're doing in Google Cloud, one of the things with the Chronicle suite and others is to start getting the SOC super-powered with more of these capabilities. They're a perfect candidate for an interface where they can ask questions. Think about how hunting happens right now or response. Oftentimes, a lot of different screens and a lot of different places.

To me, the opportunity is, is that SOC analysts or the team running that, or the detection response functions. As they become familiar with it, they also – one of the best ways to become familiar with is to use it, to begin to think about how you could use the simplest of automations to understand it better. Because you're right, there's no magic bullet, it's a journey that these teams are going to go on. But again, most people are not immediately deploying a distilled, and reinforced, sort of refined model into their –

[0:37:47] GUY PODJARNY: That touches the sensitive data, that is accessible to the attackers. But I guess, that's the other signal to take out of this, as if you reach that too, and you say, "I don't know how to do this." Then maybe that should give you pause a little bit about which systems you're willing to deploy to production –

[0:38:02] ROYAL HANSEN: That's fair. That's a fair statement.

[0:38:05] GUY PODJARNY: But we touched on two and three here. So sort of phase two, it's says, look, know what's going on in runtime. Some complexity in the ecosystem today around the tooling able to detect it, we'll come back a little bit to prompt injection, because I think there's some interesting nuance to it. But then step three is more about using AI for security, and the framework and yourself, you're mentioning there's reciprocity between those two elements. If nothing else, just by familiarity with the subject matter.

[0:38:30] ROYAL HANSEN: Yes. That's why I feel like we had a leg up in a way that wasn't necessarily by design. But because we had a team that had been using it for almost 15 years.

[0:38:41] GUY PODJARNY: Right. They knew what to look for.

[0:38:42] ROYAL HANSEN: They knew how to start engaging, even if the SOC teams hadn't spent all their time thinking about it. Once this emerged, we had a body of expertise that could be – or their team that had had hands-on experience. But even still, it's our SOC that's using the kind of simplest of chatbot interfaces in an appropriate way here to automate their own jobs, to help them be faster. For the very reason that I'm describing, which is, so they're better and smarter on this stuff, in addition to the productivity gains they get.

[0:39:12] GUY PODJARNY: So, I think a bit of a side question about prompt injection. So I had David Haber from Lakera on here, the creative [inaudible 0:39:19] kind of talking about prompt injection. I took two interesting things out of it. One is sort of the thinking of the LLM system or the foundational model as kind of like the brain. So, you can train it to your heart's content and you can do the right thing more often, but you can never really be fully sure that it wasn't the wrong thing or say the wrong thing. Therefore, you have to anticipate a wrapper around it.

Then the second related, I guess that was a little bit less from this conversation, is that prompt injection feels a lot less like a problem that you can fully solve than SQL injection or even elements of remote command execution. Where they feel as a finite set of correct answers that you can reduce it to, versus this attempt at social engineering, the LLM of sorts that is maybe more akin to phishing, and it's something that you should anticipate that will never go away, that you can build a system that is sufficiently resilient, that it would make the runtime system kind of useless. Is that correct? Am I wrong about either of those?

[0:40:24] ROYAL HANSEN: I think that's a logical way of thinking about it. I know, I admit to thinking roughly in the same way. But I'm not yet, because all of us are going on this journey. I did a very basic AI, sort of nearest neighbour type clustering, senior thesis in college. But I'm still coming along on some of the nuanced journeys, because I've been a security person, I've been a privacy person. I'm just like with – think of JavaScript, or think of buffer overflows, and think of the – some of that gets managed in hardware, we're still working on memory tagging in hardware. Or we're working in JavaScript on the front end, where you do provide the convenience of checking the input in the browser, even if you don't rely on it. I think it's better to think of it as a composite set, and you use the best tool for the – in the layer. 

I'm still excited about some of the reinforcement learning work that we can do. We're still very early on in training the brain like you're describing. I agree with you. It doesn't mean we'll ever rely on it solely. You want that pre-submission, and ultimately post-submission, because the other part of it is, almost independently of whether you fooled anyone or not, what action is it going to take? That's where the risk occurs, whether that's to automation or individuals.

I think you work at all layers, and I think a really fruitful area of research is combining security, safety, and AI expertise in the distillation and reinforcement learning of these models to get them as good as we can in the context you described. Because the filtering on the front end or back end has its own limitations. So I think you've got to keep working on all of them. We're just more familiar as security people of the software, the sort of Regex type checking, versus the reinforcement learning, which is a new domain. But I think you're going to – if I were 23 years old –

[0:42:32] GUY PODJARNY: We're used to systems that behave the same way, [inaudible 0:42:24]. We're not used to ever-evolving, ever-changing systems. Somehow, I wouldn’t be able to assess whether they're right or wrong.

[0:42:32] ROYAL HANSEN: If I were 22 years old, I could listen to this and say, I want to start thinking about my career. I would go headlong into this, sort of like how we were going to make these multimodal models as safe and smart as we can using the tools themselves? I think that's fruitful. A lot of research, a lot of interesting work still to do. And really, a big part of why we're doing the safe work is to bring the whole community along in that discussion.

[0:42:56] GUY PODJARNY: Yes, indeed. I think let's maybe go through – we're sort of running a little bit short on time, but it's okay, because I think the first steps are more than meaty ones, and they lead into it. All in all, I think if you're sort of at phase three, you should be pretty pleased with yourself at that point. You've established the foundations on it, If you have some runtime protection, you're using AI to secure your AI interaction, and hopefully more than that, the plane interaction interactions as well. Now, we get to the harmonised phase. We've already alluded to – 

[0:43:23] ROYAL HANSEN: We're kind of just talking about that.

[0:43:23] GUY PODJARNY: We can talk about that quickly, and then we talk about the next one.

[0:43:27] ROYAL HANSEN: To me, harmonise is everything we just talked about, which is like – so input validation is a control we've relied on in software security for a long time. Harmonise it with the model, the prompt injection. I think that's a perfect example of what we mean by that. Now, there's different – another example would be serving infrastructure and the integrity of the serving infrastructure. Just like you have an integrity of the deployment of your binaries. You want the same to apply to the model waits in the serving infrastructure, so you have integrity of that to deployment, as well as in runtime that it can't be tampered with or stolen.

To me, each of these software security domains, now, you look and say, "Okay. What have I picked up, or what's new, what's the implementation of input validation or integrity of deployment, or output escaping in the context of this new app?” And you harmonise.

[0:44:22] GUY PODJARNY: That's like the level up of the previous, like when you're getting going, just put something in place, and it's probably going to be orthogonal to your – like it should be the same practices, you should use your learning. Sometimes you use the same tools, but you should use the right tools for these build time, runtime, detect and response times for AI.

Step four is really around an appreciation that, at the end of the day, you're not looking to protect AI, you're looking to protect your business. That would include AI components, and add components, and you should strive for uniform handling of these things. So people don't need to rationalise about two different approaches.

[0:44:57] ROYAL HANSEN: That's right. Don't treat AI and the software as isolated controls. You got to think about the flow of data.

[0:45:05] GUY PODJARNY: [Inaudible 0:45:05] from the attackers' lens. Okay, cool. So we talking about harmonise. We're at step five here now, and we're sort of getting into expertise domain. I would – if you were to think of it as maturity, which is not exactly correct, which is adapting controls to adjust mitigations to create faster feedback loops for AI deployment. So what does that mean?

[0:45:23] ROYAL HANSEN: This is super exciting, right? But we've always talked, even years ago that I think at @stake, we talked about self-defending applications. But you have to think about these systems or potentially think about them – it depends on the system – as reacting almost in real-time to the environment, to the attack, to the – and this isn't just security. This is quality. So, it also includes things like the training and the way the software is developed and deployed, you’ve got to do these things quickly. I think macro level, we stopped thinking of this as just to build a pipeline and thinking of it as a little more living system. We're almost constantly tuning or responding, sometimes using the system itself, because you've trained it to respond to certain prompts in one way or input.

Sometimes, because the teams behind it – I mean, this goes back to detection response and others – are literally an extension of the system because they're receiving alerts, they're receiving escalations, they're receiving exceptions. To me, there's lots more work to do. But the Red Team is a good example. You're running Red Team exercises yourself against this, in ways that it's not a one-time thing, it's almost a constant –

[0:46:45] GUY PODJARNY: [Inaudible 0:46:43] defence. It's an exciting idea if we get there, basically. But successfully hacking the system and then telling the system, "Hey, I’ve hacked you." The system would rearrange itself to prevent that from happening again.

[0:46:56] ROYAL HANSEN: The real example for us is in a lot of the safety work. You have humans that are evaluating the spectrum of harm, it includes malware and phishing and things like that. But instead of those analysts then tuning their classifiers through a process, ideally, their feedback literally tunes the model in real time. There's nothing that prevents that from – that's what the system does.

[0:47:26] GUY PODJARNY: Yes. [Inaudible 0:47:26] versions of that that have been happening with like IP reputation, or there's elements of accumulating signal that are happening in the world of security and happening for a while. This is just a lot more evolved than that. In most cases, this would be something that you would look to purchase. I wouldn't expect every, whatever, online business to evolve their own defences. They would use tools that do this type of learning.

[0:47:52] ROYAL HANSEN: Yes, just like you would have in a call centre or a support centre. You're going to keep buying a better interface for those analysts on the front line. The same thing will happen for the SOC, I agree. But again, it did chip in a little bit on the familiarity of those teams with the technology to do it well.

[0:48:10] GUY PODJARNY: Yes, because it is quite aligned, than exercise for you, the inconsistency. There's a whole topic here, we're not going to open up around how society deals with statistical systems. If we only let go, they can do so much for us. But we're used to systems being aspirationally deterministic. Let's talk about sort of the last – the black belt phase there. Contextualised AI systems risk and surrounding business processes, which again, I don't know if I'm using the maturity analogy on it, but it does feel advanced.

[0:48:39] ROYAL HANSEN: Yes. I think, again, this is the opportunity that AI is presenting as we automate more and more of, we digitise more and more of society. Through all of the things we've talked about, remember that we're not in it just for intellectual satisfaction that the system is correct. We're in it to protect consumers' deposits. We're in it to protect consumers' health. We're in to protect travellers. So you have to think much more holistically, and you need to involve the trust and safety teams, the fraud risk teams. Everybody needs to be thinking of it as a bigger system. 

Again, back to SAIF as a framework, it's to bring society along. I actually think for all this sort of interesting technical work that we've just talked about, the important part of responsible here is to bring society along on this journey. This is not – the majority of the users of these chatbots or interfaces are not software people. But all of a sudden, they've kind of become software people, right? Because they're interacting in a way that's much more automated with these systems. To me, we got to bring the larger community along on this journey.

[0:49:44] GUY PODJARNY: Yes, the rest of the business. It's interesting, it's not the same as citizen developers that are building applications, but it is more active participation in them, modifying the system. I guess, interestingly, to an extent, this is the job of security, to make the decision around whether the risk is worth the reward and different actions that run across the system. Should I block this user from making this purchase, losing money in the process? Because I think it's fraud. That's probably the most clear reputation of it. But we've evolved that, "Okay. Good." 

Thanks, Royal, for sort of walking through it. I think the framework is very powerful. Just from my two cents, it does feel very much like a maturity model for it, because it feels, you can't really jump to five or six, you have to have laid the foundations in them. Although, of course, all of them evolve all along. I think we've got about five more minutes here. I'd like to maybe try to squeeze in two questions here.

As we evolve, we go past the framework, and we get into the practicality of it. People will do it, let's say, they listened here, they read the paper, they love it, they come along. A lot of these phases are quite theoretical, and people might need a bit more hand-holding. You alluded to the sort of the Red Team document on it. But what should we expect from the SAIF framework in terms of maybe more handheld guidance for today?

[0:51:01] ROYAL HANSEN: Yes. On each one of these, we are working on the next layer of technical and procedural, or even sort of just pointers to help expand that. So they should look for that. But we're also engaging a number of companies, partners. You've seen us with the public sector, you know, different countries, thinking about how to make commitments, do these things responsibly together. So, it won't be just from Google, it'll be from others. If we do this, well, they'll be contributing to that same framework.

[0:51:35] GUY PODJARNY: Yes. Do you think this might be a foundation for regulation, for instance, like as you designed it, you thought of it as something that might serve that purpose as well?

[0:51:35] ROYAL HANSEN: I think that it's a contribution to the discussion. How regulation unfolds in each country is a sort of work in progress for both the industry and the country. We hope that it's useful in that context.

[0:51:57] GUY PODJARNY: Yes, and I guess the longevity or the design of its to have longevity aligns with that purpose of it.

[0:52:03] ROYAL HANSEN: That's right. You don't want to – let's not freeze ourselves in any one moment of time here. Because there's still a lot of responsibility going with the boldness at this point.

[0:52:13] GUY PODJARNY: It's super helpful. I appreciate you, and we will release the framework for it. I do think people need help with kind of rationalising where it is. I'm sure as we learn more on it, there's going to be some evolutions of it. Hopefully, they're not dramatic. But indeed, bringing in more than a decade of experience from the end of the world is very valuable. Before we close off here, maybe one last question to ask you. If you could use AI to automate away one aspect of your job, what part would that be?

[0:52:40] ROYAL HANSEN: I'm actually very excited about the personal assistant elements, just the little things in my life, whether personal or other that just take a little bit too long, a little bit too many emails, or a little bit too many web forms. I am very excited just to take some time back for myself in those little things. I'm quite excited about that, actually.

[0:53:05] GUY PODJARNY: That's pretty good. I've actually heard someone toying with building a phishing disparaging type system where it would answer a phishing email, and it would go back and forth, and back and forth, and back and forth when he spots a phishing email. You click a button and it would respond, which I thought was very entertaining, a more effective version of that. Not for the phishing, for the –

[0:53:23] ROYAL HANSEN: That's right.

[0:53:26] GUY PODJARNY: That sounds like a good one. I would love one of those as well. Royal, thanks, again for tuning in here, and look forward to seeing SAIF evolve.

[0:53:32] ROYAL HANSEN: Great. We're looking forward to it. Maybe we'll come back with a deeper dive on one of those as we have some more material with the industry. That'd be great. 

[0:53:39] GUY PODJARNY: Absolutely. That would be great. Thanks, everybody, for tuning in, and I hope you join us for the next one.

[0:53:43] ROYAL HANSEN: Thanks, Guy.

[END OF EPISODE]

[0:53:48] ANNOUNCER: Thank you for listening to The Secure Developer. You will find other episodes and full transcripts on devseccon.com. We hope you enjoyed the episode, and don't forget to leave us a review on Apple iTunes or Spotify and share the episode with others who may enjoy it and gain value from it. If you would like to recommend a guest, or a topic, or share some feedback, you can find us on Twitter @DevSecCon, and LinkedIn at The Secure Developer. See you in the next episode.

Up next

(Rewind) The Changing Landscape Of Security With Dev Akhawe

Episode 139

(Rewind) The Changing Landscape Of Security With Dev Akhawe

View episode
The Need For Diverse Perspectives In AI Security With Dr. Christina Liaghati

Episode 140

The Need For Diverse Perspectives In AI Security With Dr. Christina Liaghati

View episode
The Evolution Of Data, AI, And Security In Tech With Tomasz Tunguz

Episode 141

The Evolution Of Data, AI, And Security In Tech With Tomasz Tunguz

View episode
AI, Cybersecurity, And Data Governance With Henrik Smith

Episode 143

AI, Cybersecurity, And Data Governance With Henrik Smith

View episode