Episode 143

Season 8, Episode 143

AI, Cybersecurity, And Data Governance With Henrik Smith

Henrik Smith
Listen on Apple Podcasts | Listen on Spotify

Episode Summary

Guy explores AI security challenges with Salesforce's VP of Security, Henrik Smith. They discuss the fine line between authentic and manipulated AI content, stressing the need for strong operational processes and collaborative, proactive security measures to safeguard data and support secure innovation.

Show Notes

In this episode, host Guy Podjarny sits down with Henrik Smith, VP of Security at Salesforce, to delve into the intricacies of AI and its impact on security. As the lines between real and artificially generated data become increasingly blurred, they explore the current trends shaping the AI landscape, particularly in voice impersonation and automated decision-making.

During the conversation, Smith articulates the pitfalls organizations face as AI grows easier to access and misuse, potentially bypassing security checks in the rush to leverage new capabilities. He urges listeners to consider the importance of established processes and the responsible use of AI, especially regarding sensitive data and upholding data governance policies.

The episode also dives into security as a facilitator rather than an inhibitor within the development process. Smith shares his experiences and strategies for fostering cross-departmental collaboration at Salesforce, underscoring the value of shifting left and fixing issues at their source. He highlights how security can and should act as an enabling service within organizations, striving to resolve systemic risks and promoting a culture of secure innovation.

Whether you're an experienced security professional or a tech enthusiast intrigued by AI, this episode promises valuable insights into managing AI's security challenges and harnessing its potential responsibly.


Henrik Smith: “We see now with voice impersonation, it's getting easier and easier. It's going viral now with all the people saying, “Hey, I'm going to make it sound like this radio star, this rock star, this musician.” But it could also be, how am I going to make it sound like your CEO or your CFO, and adapt the script automatically? I can make a video where someone is actually talking to you, setting up a video conference with a recorded message to you.

I mean, that's been around for a long time, but it's just way more effective and way more accurate. It's harder and harder to tell the difference between what is fabricated data and what is real data. If you get a call from your CEO saying, “I'm in a customer meeting right now for a $2 billion deal. I can't access my laptop,” there are very few people that will say no to the CEO when they're asking, “My password is not working. What is it?””


[0:00:55] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, dev and sec collaboration, cloud security, and much more. The podcast is part of the DevSecCon community, found on devseccon.com, where you can find incredible dev and security resources, and discuss them with other smart and kind community members.

This podcast is sponsored by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open source, containers, and infrastructure as code. To learn more, visit snyk.io/tsd.


[0:01:42] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Today, we'll continue some of our AI security conversation, from a lens that is both internal but also that of a provider and a leader in the AI space, which is a little bit of a Salesforce perspective, and just generally a knowledgeable security professional, having seen a bunch of these tech trends before. For that, we have with us Henrik Smith, who is VP of Security, as well as risk operations and enterprise security, at Salesforce. So, a few titles and scopes there. Henrik, thanks for joining us here on the podcast.

[0:02:16] Henrik Smith: Hey, thanks, Guy. Thanks for having me.

[0:02:18] Guy Podjarny: So Henrik, as we get going, indeed, in that scope of responsibilities, also you have an interesting journey. Give us a few minutes on your maybe background, and what is it that you do for people's context?

[0:02:29] Henrik Smith: Yes, absolutely. On background, it's been a while; I think I’m coming into my 28th year in this space right now. I started back in the day when it was totally fine for a 16-year-old to start fiddling with Unix systems in an enterprise organisation. So yes, I've been around for a while, and been in a lot of different areas of the industry as well, on, I would say, both sides or all sides of the table over the years. I spent five years with AWS security before I joined Salesforce about four and a half years ago now, in the CRM integration team, which has since been called a number of different things, I will say.

But yes, I lead our risk operations centre at Salesforce together with our baseline organisations, our business information steward organisation, and I also cover the information security officer role for our enterprise organisation. So, it's a couple of different hats, and I like to say that I poke at a lot of things and have opinions.

[0:03:25] Guy Podjarny: But it's a central responsibility, one that has some responsibility for tooling or something globally, but then also works with [inaudible 0:03:33] from the different parts of the org, which you are, I guess, guiding and supporting as opposed to actually operating their domains?

[0:03:41] Henrik Smith: Correct. So, the base organisation sits under me, and they cover the security challenges and security prioritisation for all our core services. We call them clouds; you'll see them outside as service cloud and marketing cloud and the various names they sit under. Then we also have the risk operations centre, which is focused on systemic risk remediation for a lot of the large-scale initiatives that we run, and which partners, of course, with all the other areas like security assurance, our TPM organisation, and the various risk organisations we have, anything from internal risk to nation-state, whatever it could be. So, we collaborate a lot across the whole company.

One thing that we're focusing primarily on is systemic risk that has been around for a long time, or things that continuously come up that impact the whole company and that we might have to take a different approach to. Then, primarily, to shift left as much as we can on how we remediate things. I think the time when people just threw the problem statements or the complexity over to the individual engineer and developer is gone. We have to move away from that. We have to move to fixing things centrally as much as we can, and focus on fixing the source. For too many years, people have been focusing too much on solving the outcomes instead of solving the actual problems causing them.

That's one thing that we're trying to do across the whole company: seeing how we can help people fix things versus leaving them to do it. Then, how can we actually move security to be a full-service organisation? Security has always been a service organisation; I think people have just mistaken that for being an enforcing body of no.

[0:05:10] Guy Podjarny: Indeed. Well, I guess over the years there was just a bit of a misalignment between the people within the security organisations who saw themselves as a service organisation and the people outside. But hopefully, as an industry, we're working to fix that.

[0:05:27] Henrik Smith: Exactly.

[0:05:28] Guy Podjarny: So, let's dig a little bit into tech change, especially in the context of AI, and in your role in which you have a very broad [inaudible 0:05:36]. Maybe before we dig specifically into AI: few technology paradigms are quite as dramatic as what we're seeing right now with AI. Still, in that role, you've seen, through your journey, various tech innovations come in and disrupt the status quo a little bit. From a security lens, how do you start thinking about these types of problems? What's your broader approach? And then we'll get a bit more specific with AI, and with what you see working or not working from within Salesforce.

[0:06:12] Henrik Smith: Yes, absolutely. At Salesforce, we have a lot of different teams working on AI from different aspects, anything from audits, reviews, and security reviews, to how we incorporate AI into our own services and how we build our own AI models to help our customers. But I think the key is what you said: it's about the technology shifts we've seen over the past 30 years. It's hard not to draw a parallel in that everyone now wants AI in their services, both on the defensive security side and inside the services they sell. I mean, it's hard not to draw parallels to a couple of years ago, when it was machine learning. Before that, it was the next service. And before that, it was the next service.

So, I think it is a technology shift. It is a new technology that allows us to do a lot of things we couldn't do before. However, I think people need to not panic too much on this specific shift, on how to do things, how to solve assurance, how to solve data governance, et cetera, and just use the processes and the principles that you already have, and not try to reinvent the wheel every time.

Follow the same process as you had before: where do you have data? What are your data governance policies? Where is data going to be stored moving forward? How are you going to store it? Who's going to be able to access it? I think most of the processes and principles that you already had still apply. That hasn't changed. It's still data sitting somewhere, processed by someone, with egress and ingress somehow. It's the same as you've done before.

So, while I think that from a technology perspective it is a massive shift, and it allows us to build services that we've never seen before, from a security perspective it adds a number of new challenges that we haven't faced as much before. But that doesn't mean that we have to do everything from scratch. What we're doing today and what we did yesterday is still what we can do tomorrow to help secure our data and secure our customers. Again, don't try to reinvent the wheel every time something happens in the industry.

[0:08:14] Guy Podjarny: Yes. I think that makes a lot of sense, and it's also aligned with a lot of the nascent threat models and such. When you see them pop up, really 90% of them is: maintain a good hygiene process and follow that along, plus those places in which you need to adapt to AI or to LLMs. But you don't need to go back to first principles and rebuild the whole thing to be AI-minded. The security first principles and the robust-software first principles work.

[0:08:43] Henrik Smith: Exactly. I think in general, and we've talked about this, a lot of what we do in security is preventing people from making mistakes. That's always been the case: helping people not make less effective, less helpful decisions in daily life, said lightly. But I think one of the things with GenAI is that it allows people to make less informed choices more easily than ever before.

It's easier to make severe mistakes than ever before, just because it is so enticing to use. Take a simple thing like data governance. Where do you store your data? It is so easy to just start spinning up a prompt somewhere, for a service that is just available as a demo, and start tapping away, and you're seeing all the power that you can get from GenAI services.

It's not with ill intent. It's more that, from the results you're getting back, people are like, “Hey, let me just try this as well, and let me just try this as well.” Then suddenly, you've put data somewhere you had no idea it was going, just because you wanted to try it. That's always been the case; it's just so tempting now with the results you can get. It's like, “Hey, I can actually get my whole sales call pre-scripted, with all the different possibilities of where it's going to go.” Or, “I can get my threat model created for me, as is.” You can do a full STRIDE just by uploading your internal documents and your internal system architectures, especially now with image recognition in a lot of the AI models. And it's more: are you doing the right things? Have you done the research? Where's the data stored? Which service are you using? Do you have a contract with them?

I mean, most services now actually do operate on ephemeral data, so they're not going to store your prompts, they're not going to store your data. However, have you verified that? Do you know where it's going to go? So, again, it's preventing people from making mistakes, and I think that's one of the biggest challenges for data governance and internal security: making sure that people don't make mistakes. This goes for the service-building piece of it as well.

[0:10:42] Guy Podjarny: How do you tackle that? You've got a super elaborate organisation. You've got different people with different authorities. You correctly say that it's easier than ever and more tempting than ever to pass data along without maybe sufficient thought or consideration. It seems like you're saying don't entirely freak out, but what's right here? Should you block all use of, whatever, ChatGPT in the organisation until such time that everybody's, what, sufficiently well-educated? And if you don't do that, what's the right way to deal with it without going to either extreme?

[0:11:24] Henrik Smith: Yes, I know. I think it's a really good question, and speaking of less informed decisions, I think going in and blocking all use of GenAI and blocking all traffic to GenAI services, no matter where it is, is a very good career-limiting move, if your approach is that you're going to block everything. This is again where you have to fall back and understand that security is a service, and if you don't provide the best service, the people around you will find the best service. If you can provide a safe and more effective way of handling the data, for example by setting up various types of LLM filters, or by using a commercial service, or by building your own service, awesome.

But if you have that kind of service, be proactive and build and offer a safe, or safer, way for people to do it. If your approach is to block, you will always be circumvented. People will always walk around you. So, honestly, the best things people can do are, A, educate yourself and see where this stuff is going; and B, have a vendor to work with if you don't have the capability to spin up your own models. Have a vendor, partner with them, and make sure that you have the contract in place. Make sure we guide people to the right location, instead of trying to block everything. That has never worked and will never work. It doesn't matter which service it is. We've seen that with really any technology shift. Blocking things will never work. People will just go around on the side, and the more they do, the harder it is to track.
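[Editor's note: Smith's point about offering a safe, filtered path instead of blocking can be sketched as a small redaction gateway that sanitises prompts before they leave the company. This is a hypothetical illustration, not Salesforce's or any vendor's actual tooling; the patterns and replacement tokens are assumptions, and a real deployment would use a dedicated DLP scanner.]

```python
import re

# A hypothetical redaction gateway: strip obviously sensitive values from a
# prompt before it is forwarded to an external GenAI service. The patterns
# and replacement tokens below are illustrative assumptions, not a complete
# or production-grade rule set.
REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),           # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),      # card-like digit runs
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[AWS_ACCESS_KEY]"),     # AWS access key IDs
    (re.compile(r"(?i)\b(password|passwd|secret)\s*[:=]\s*\S+"), r"\1: [REDACTED]"),
]

def redact(prompt: str) -> str:
    """Apply every redaction rule to the prompt and return the sanitised text."""
    for pattern, replacement in REDACTION_RULES:
        prompt = pattern.sub(replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarise this deal: contact jane.doe@example.com, password: hunter2"
    print(redact(raw))
```

[The point of the sketch is only that routing traffic through a safer path, with logging of what was redacted, is more tractable than trying to block the traffic outright.]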

[0:12:44] Guy Podjarny: Yes. At the end of the day, the business kind of drives the result. That's the one that pays the bills, including the security bills. So, it's not –

[0:12:50] Henrik Smith: Hundred percent. This is a business enabler, for sure. And if you stand in the way, again, it's not the best career move to stand in the way of one of your biggest potential revenue drivers.

[0:13:00] Guy Podjarny: Indeed. Okay, cool. So, summarising a bit of that guidance: don't freak out, as in, “I need to totally reinvent the wheel and shut everything down.” But also, don't neglect it, because it's easy for people to make mistakes. So, move quickly; find the right partner, the right vendor, the right tools. They don't need to be perfect. They just need to help you move along. Then put the mitigating efforts in place and get going.

I guess, maybe the next question is, who is “you” in this context? Especially in Salesforce, but this is also true for any large organisation: there are so many people, and everybody's looking into AI. How do you think about ownership of AI? You've got the security lens of it. Who should be owning this? How do you organise? Because it did land on us somewhat faster than past technologies.

[0:13:53] Henrik Smith: Absolutely. And I think that's where, honestly, a lot of organisations that I've talked to, in general, [inaudible 0:13:59], or whatever it could be. I think that's where a lot of people struggle: who, or what organisation, owns the various aspects of it? Because you will always have multiple owners in this case, I think. I don't think you can have one single owner, because you will have people that are trying to build it from a research perspective, people that are trying to build it from an external service perspective, and people that are trying to integrate it with various items. And you will, of course, have the security piece. Even in security, you will have multiple owners.

In a lot of technology areas, whether it is database security or network security or whatever, it's way clearer how to find a dedicated owner within the organisation. I think GenAI is one of those where collaboration is going to be the key. You're not going to find a single owner for it. You're going to have people that are working primarily on data governance. You're going to have very clear ownership when it comes to the security assurance and review side of it: what services, what vendors do we want to use, whoever does your vendor audits for you. And you will have to have an internal policy, of course, on how you can use the service.

So, this is one of those where you have to rethink a little bit who owns it, and instead have a very clear [inaudible 0:15:13] of who owns what parts of the various aspects, and then collaboration. Collaboration is going to be the key on this one: make sure that you have all the pieces inside of it. Take a simple thing like your learning data, for the people that have started to create a new model internally, for example. What data are they using? Is that data sanctioned to be used? Where is that data going to go afterwards? Even if you own the data, you're still using it for learning models. Are you going to ingest code into that learning model? Is that code clean? Are you now set up so your learning data contains unscanned code? Does it contain secrets? Does it contain vulnerabilities, for example?

This is where you can damage both yourself and others. Because if you're setting up or building a service that is based on faulty data, or on insecure services with insecure code, whatever it could be, now you're not just putting your own service at risk, but you're also putting other people at risk. Are you going to sell that model out, for example?

We've seen cases before where people talk a lot about the data coming out not being sanctioned or verified; it's made up, or whatever it could be. I mean, there have been legal cases recently, or a while back, where people used it where they shouldn't, and that gets into the legality of it. But I think that's where it's going to be a little more challenging: ownership is not one person. Ownership is collaboration in this case, and you need to [inaudible 0:16:34].
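[Editor's note: Smith's questions about training data, whether it is sanctioned and whether it contains secrets or unscanned code, can be turned into a simple pre-ingestion gate. The sketch below is a minimal illustration under stated assumptions: the regex patterns, pattern names, and helper functions are hypothetical stand-ins for a dedicated secret scanner and code vulnerability scanner.]

```python
import re
from pathlib import Path

# Hypothetical secret patterns for a pre-ingestion gate on training data.
# Real pipelines would use dedicated secret and vulnerability scanners;
# these regexes only illustrate the idea of gating before ingestion.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_password": re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
}

def findings_in(text: str) -> list[str]:
    """Names of the secret patterns that match one file's contents."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

def gate_corpus(root: Path) -> dict[str, list[str]]:
    """Scan every file under `root`; a non-empty report means the corpus is not clean."""
    report: dict[str, list[str]] = {}
    for path in root.rglob("*"):
        if path.is_file():
            hits = findings_in(path.read_text(errors="ignore"))
            if hits:
                report[str(path)] = hits
    return report
```

[Running the gate before the data scientists ingest a corpus makes the accountability question Smith raises concrete: a non-empty report is a decision point, not a surprise found later in a shipped model.]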

[0:16:35] Guy Podjarny: Yes, and I think I understand. I think it's also quite practical, since everybody's going to jump on it, so you can’t go off and centralise all these different activities, definitely not in a company that is large enough. I'm curious, though; the model example speaks to me, and I think it was [inaudible 0:16:49] security and everything that pointed out that, at the end of the day, the code agents are trained on code that humans wrote, and guess what, it has bugs and vulnerabilities in it.

So, in the best-case scenario, it produces code that also has vulnerabilities; I think it's a bit of a flawed assumption to think that it produces perfect code. But regardless, it's a good point that you need to approach training security in an even more concerted way than elsewhere. If we hone in a bit on the ownership over there, in developer security, you started off talking about how we need to fix some things at the source. Generally, we want to shift things to the platform to an extent, just make some problems go away centrally, and then, for others, we want to shift them to the time of code creation.

What's the equivalent in AI? Is this an AI security team? Is this an app security team that gains AI knowledge? And whom do they collaborate with, the data scientists? If we're, for instance, thinking about the data going in for training the model, or roughly around that domain, who do you think the participants are?

[0:17:56] Henrik Smith: Yes. I mean, my personal opinion: setting up an AI security team, I think that's honestly a little bit of a mistake, to be honest. Because I think that you're adhering too much to technology changes, and it just begs the question: what about the next time this happens? What about the next technology shift? Are you going to create a new organisation every time there's a new technology out there?

I think that's where a lot of people go, just because GenAI is so pervasive, almost invasive, in how the organisation operates right now. Everyone wants AI in their business plans. Everyone wants AI in their services. It doesn't matter if it's a vendor or in-house; there's not a single leader or executive right now that's not asking, “Where are we using AI in our services? Where are we using AI in our business plans? And what are we going to do with AI?”

But I don't think that it's time to start spinning up dedicated organisations or dedicated teams for managing AI, in that sense. I think this is one of those things that has to flow through all the teams: understand how my piece is impacted by this. I do think, however, you need a very clear owner, as I said earlier, a RACI of who owns the pieces. I think it's more prevalent to have AI-dedicated teams on the service side, but not on the security side. One of the reasons is that once you say you're going to have a dedicated team for it, you're going to lose out on the collective knowledge that people have from the various areas, because you will not have the people from the database security side, you will not have the people from the network security side, and they are all a critical piece of it.

So, focused teams are good, but don't think that you should have technology-focused teams when it comes to organisational security. Instead, focus more on who owns the critical decisions that need to be made. Who needs to focus on getting information out there? I talked about security assurance, for example. Security assurance is a very critical piece around this when it comes to vendor reviews and internal app reviews. But even there, they should own their pieces of it. I don't think that they should own the overall strategy of how security should be using GenAI and what decisions are made; they own their piece of it. But everyone is different. If you have a strategic organisation that owns those initiatives, awesome. But spinning up a dedicated GenAI team, I don't think that's the right path. I think this should be an interest of all the teams.

[0:20:12] Guy Podjarny: Fair point. I see the logic in it, which is: the different risks, the different threats. AI gets weaved in through our stack, so we want to make sure that we bring to the fore the relevant expertise, and that we factor AI into every one of these security lenses. Maybe asking you about data specifically: it does feel, and you've mentioned this several times already in this conversation, that a lot of the concerns when it comes to AI are forms of data manipulation. The data in, the data out, the data governance, the ease of someone leaking it out. Does it restructure data security?

[0:20:47] Henrik Smith: I think it absolutely does, because it's the same there. You're basically using a lot of internal data, depending on what type of service you're building, of course, but you are using a lot of internal data to build services for external use in many cases. So, I do think that it shifts a little bit how you think about data governance, both within the company and externally. Are you going to use customer data for this? Are you going to use internal data? What is the data classification of where you can put it? Because while you're not putting that data specifically out there, you're putting a derivative of that data out there.

So, I think your data governance team will have a bit of a mind shift to do. They have to rethink a little where the data is stored and how it's used. They need to really sit in and understand how the models work, how LLMs work, when it comes to training data versus egress data. So, I do think that it shifts. I don't think the responsibilities move; I think they just need to shift how they think of where data governance and data classification sit.

As I said earlier, if you're building a training model on data that is faulty, whose accountability is that? If that starts to generate faulty services outwards, who is accountable for that piece of it? Is it the people that should have detected, “Hey, this code had faults in it, and it is now being taught out to all your customers”? Or is it something that the people building the model, the data scientists, should have thought about?

So, it's going to come down to that, and it's not to just point fingers and say you did wrong. It's more: what do we need to do with the data before we start using it?

[0:22:26] Guy Podjarny: Right. I think a lot of it comes back to the same ethos, I guess, as the shift left on the developer side, which is: you have to relate it back to the source, the place in which the decision was made, make good decisions there, and keep a track record of them, and then maybe have the data security team facilitate making those decisions correctly. So, whether it is at ingest time or at training time, or whatever it is, visibility and engagement, I guess, in those phases.

One other security leader that I spoke to in one of the big banks pointed out that he's been investing heavily over the years in the relationship that he has with his development team, his development organisation. But he's never really, up until now, had a super strong motivation to deeply engage with the data science team. It was fine. It was one of the many things that don't quite make it to the top. I'm curious, do you feel the same at Salesforce? Has there been a change in the relationship you and your team have with the data science teams?

[0:23:26] Henrik Smith: I think both yes and no. I think Salesforce has been a little bit different, just because if we look at our organisation, when it comes to our base organisation, for example, it does transcend all the different businesses. So, we have coverage and an integral part between security and the engineering side and the business side that has always been there for the last four, four and a half years, since we started this organisation. So, I think the collaboration was there. I think it just shifts a little who the big names are within the organisations, and who has the most pull. There's not a company today where the data research team is not, I wouldn't say very happy about, getting a lot of focus. I don't think [inaudible 0:24:06] as much focus as they are now.

The focus has just moved a little, from engineering in general to data warehouse services. Go back a couple of years, and every team within a company that was working on data lakes were the big names. They were the ones getting the resources and the focus. It's the same here. With the technology shifts, we're just shifting around which teams are being highlighted and focused on right now. Now, it's the data research team.

This has been happening across a couple of different areas for a while. I mean, if you look, it's going to be like seven years ago when the [inaudible 0:24:41] started building out data [inaudible 0:24:43], for example, to help build a lot of the services. That's when it shifted more and more to a research-based security posture, seeing how we can use the traditional fields of research, of automated reasoning, of pure math, not just for crypto algorithms but also to evaluate security. How do we get a proper security posture of services and technology using more traditional resources, especially within the research community?

So, this is just another shift in that way, where the data scientists today, who are working on AI, are getting more and more focus. They are getting more and more resources to help do their job. The biggest challenge, I think, is that we have to maintain the balance: while we're putting all the resources into building new AI models, while our research team or the [inaudible 0:25:33] are the kings and queens, are we catching up with all the other resources? Security, for example: are we doing the right thing on the resourcing there? I don't think that any organisation should think, “We need to hire 50 new people,” or go to this company and hire AI security specialists.

I don't think that's the right path, either. It's more about, how can your service and your role within security be impacted by GenAI? How can you use GenAI to your advantage, and how can other people use GenAI against you, against your role? If you're in network security, how will people exploit you using GenAI services? But also, how can you use it to your advantage?

[0:26:17] Guy Podjarny: Yes. So, that's an interesting topic. We'll switch to the tooling here in a sec. But maybe just one more question on the defender side, or I guess the other one is on the defender side as well. Amidst all this, we have a bunch of these things, and I think you have a very thoughtful view: this is the way to approach it, this is how you need to embed it into the different organisations and into the process, this is where to put the emphasis. In a slightly less systematic way: what keeps you up at night when it comes to AI security? What's the main concern that tops the list, maybe because of the faster-than-usual adoption of this powerful technology?

[0:26:49] Henrik Smith: I think one of my main concerns with all of that is, to be honest, how easy it has become for people to make mistakes. It is so easy to grab whatever internal information you have access to and use it in learning models that face outwards, that are going to be in new services.

Again, it's the sudden mentality of, "Hey, I'm just doing a test. It's just for testing. I'm going to fix it later. I'm going to close it up later. I'm just going to try something out." And moving from "I'm just going to try something out" to, "Hey, here's the new service that we're selling," is really fast and easy. That's always been the case. I think, today, if you say that you have a new GenAI-based service, some people couldn't care less. But I think a lot of people will open doors for you faster than they might have done before to get that out as a service.

So, I think there's just a higher risk of people bypassing basic hygiene and the general checks and balances they should have gone through, just because there is such hype right now around getting services out there. That's my biggest fear, that people will bypass the controls we have all put in place to verify that this is a valid service, that it does not have flaws. I think that's my biggest concern. It's people bypassing the rules or regulations they should follow. Or it's simply because people haven't caught up and added the policies and the controls they should have.

I don't think we can put all the fault on the developers, and we shouldn't. It's not their fault. If they can do something, they most likely will do something, and this is where you have to build the service for it and ask, how can we make it easier for them to make the right choices?

[0:28:33] Guy Podjarny: Right. And I think a lot of that is important to the solution aspect.

[0:28:38] Henrik Smith: Yes. There's always that aspect of like, what's going to be used in attacks, et cetera. But I think the biggest fear is people make mistakes.

[0:28:46] Guy Podjarny: Yes. I think it's a really good point, and I like how you mentioned both. There's the mistake you mentioned at the beginning, which is, "Hey, I'm just tempted to throw some sensitive data into it." And then there's the mistake you're elaborating on here, which is the business, the competitive drive to benefit from this powerful technology so quickly that you would run to publish it or use it faster than is maybe prudent. I think you were starting to say that what you can do as a security team to address it is to stand up a service and identify those cases, or at least be in the line of fire, with some ability to do something about it.

[0:29:22] Henrik Smith: Yes. In a lot of cases, it's just having the checklist and knowing that this passed through all the processes it should have. We've spent a lot of time on that as well, making sure that we know, and can trace, that new changes going out have gone through the protocols: that you're tracking reviews, doing the proper assurance reviews, doing code validation, whatever it could be. But I think that's where people need to verify that their processes are sound. And if they don't have a process for it, well, they can always use GenAI to build a process.

[0:29:53] Guy Podjarny: What type of questions would you ask if you were – well, actually, what type of questions do you ask when you, as a customer, purchase an offering from a vendor that is somehow AI-driven? If you're starting to use a co-pilot, or an SDR email-writing tool, or whatever it is, above and beyond the regular security questionnaire that you might give such vendors, are there any additional questions that you ask if they are AI-focused?

[0:30:22] Henrik Smith: I think this is where, again, the old protocols and the old questionnaires still work well, just because it is still data governance. Where do you put the data? How will my data be used? We had this conversation back in the day with ML as well, with any kind of training service. Where is my data going to be used? Who's going to be able to access it? Will other customers be able to use the learning data that I generate? Will you share it with someone else? I think that's where a lot of the questions come in.

The other piece of it is just validating that people are following best practices and the industry standard for a lot of things. Because there are so many new vendors popping up, and that's always been the case. Startups have always been popping up on a regular basis. But I think it's the sheer volume of opportunistic companies coming up. And it's more like, are they doing the right thing in their situation? They might pass the review. They might say that they have a single-tenant solution, that they're not going to share any data, that you control all the data. But there are so many new companies. Are you doing the right thing from a corporate security perspective? Do you have your SOC 2? Have you done audits? Wherever you operate, whatever part of the world you're operating in, have you done the right things to secure your data?

Because it's one thing to see what they are doing intentionally with the data. They can say that they're doing all the right things. But if it is a two-person shop, and suddenly you start to put all your internal data there for learning models, and you start to create your own models, just because they had a good deal with whatever cloud vendor for cheap compute, or they got a lot of seed money, whatever it is. Do they have the right security controls in place? Have you done an audit? Have you done a pen test on them? I think that's a bigger risk, just because it's so fast to grow a very small company now, and this goes back to the whole piece in the beginning. Where are you putting your data?

[0:32:10] Guy Podjarny: Absolutely. It's not [inaudible 0:32:13] just having the good processes; it's having to demonstrate to me that you're trustworthy, and all of these different aspects of security. It's not about, "I figured out some sort of magic cryptography voodoo that makes this go." It's both.

[0:32:28] Henrik Smith: Exactly.

[0:32:28] Guy Podjarny: It’s about, show me that the steps are thorough and robust and have been thought of and addressed.

[0:32:34] Henrik Smith: Well, especially since the whole concept of LLMs and GenAI is consuming a massive amount of learning data. That's the whole premise of it. Consume a lot of data to learn what should be out there, and learn how to predict and generate new content, which means that you have to feed it a ton of data. It's the whole basis of the service. You feed it a ton of data that you had internally before, and now you're putting it somewhere else, so you need to ensure that you trust that company. How trustworthy are they?

So, I think that's honestly one of the bigger challenges when I do vendor reviews now. Are they doing the right thing?

[0:33:08] Guy Podjarny: Yes. Makes a lot of sense. So, let's maybe shift to [inaudible 0:33:12]. I don't know if it's more positive, but it's maybe more of an opportunity. You started touching on this a little bit: can we use GenAI as defenders? This is maybe a little bit more within the security industry, from a tooling perspective. Where do you think we can use GenAI to be most valuable to us? And maybe, in contrast, where can attackers use GenAI to benefit them against us?

[0:33:41] Henrik Smith: Yes, absolutely. I've had this conversation with a couple of leaders in the industry as well, around the uses and the benefits of GenAI compared to previous technologies. In general, the way I look at it is, is this a service that's going to benefit the attacking side? Is this going to be an offensive-side or a defensive-side tool? I think that in the case of GenAI, obviously, it's one of those where it is more a tool for the offensive side, just because it's based on creation, on creativity. It's based on helping people learn or do things that they did not have the skill set to do before, because it is an enabler. I think that's why it's more an offensive tool than a defensive tool.

If you look at ML, for example, in general, ML is very much more of a defensive tool, because it allows you to do deep learning on your data, and allows you to harvest a lot of data very quickly and effectively to find various patterns, to find various flaws, whatever it could be. GenAI can as well, of course, but it requires a lot of training on your data. So, it's not that you can just shove a bunch of logs in there and think that you're going to get all the answers in the world.

But from an offensive perspective, it allows you to be very creative. I think from a defensive perspective, one of the biggest areas where you'll benefit from it is, honestly, creating a lot of the boring stuff that people don't want to do in general. Creating the processes, the workflows, the runbooks, et cetera. I was even lying in bed yesterday thinking, "Hey, let me create a runbook for chat-based defensive security for a [inaudible 0:35:12] organisation." And I just started fiddling around a little bit on my phone to see how well it could create a runbook for a first-line [inaudible 0:35:23] incident.

It's actually not that bad. You get a really comprehensive set of runbooks and processes, if you don't have them already. So, I think creating the stuff that a lot of people don't have, and a lot of people have taken for granted, in terms of runbooks, workbooks, the policies and postures you need to have, sanitising and creating those, it's really good at. As for saying, "I'm going to use GenAI for my active, inline defence mechanism," it will still be used that way, but I think less than we have used previous technologies. And I think people will make a bit of a mistake there, where you have to say, "No, I think you're talking about ML now, versus GenAI."

[0:36:00] Guy Podjarny: Kind of confusing the two. Is this about the depth and predictability? Is your distinction mostly that when you say ML, you're referring to something that's more attuned to the specific use case? So, I know that this is, whatever, normal behaviour versus abnormal behaviour, within the expected realm or not. While GenAI has not been trained on it, maybe you fine-tune it, maybe you change it, but generally speaking, it just understands the language quite literally and derives from it, so it's less trustworthy. Is that the concern you have with using it in defence?

[0:36:35] Henrik Smith: I don't think it's a concern. I think it's more about the applicability of the use case. In general, for detection and detective controls, I think it's more machine learning: parsing through data to find anomalies. Where do I have anomalies in my overall logs on a continuous basis? So, just harvesting through a large chunk of data that you don't need to learn on. You just need to find anomalies, and find deviations from what the norm is. However, most people don't know what the norm is when it comes to, say, network traffic.
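The kind of anomaly detection Henrik is describing can be sketched very simply. This is an illustrative example only, not anything Salesforce uses: learn a baseline from "normal" log metrics, then flag observations that deviate strongly from it. The three-standard-deviation threshold is an arbitrary assumption; real systems use far richer models.

```python
# Minimal statistical anomaly detection over log metrics:
# learn the "norm" from a baseline window, then flag deviations.
from statistics import mean, stdev

def find_anomalies(baseline, observed, threshold=3.0):
    """Return indices in `observed` that lie more than `threshold`
    standard deviations from the baseline mean."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        # Perfectly constant baseline: any different value is anomalous.
        return [i for i, x in enumerate(observed) if x != mu]
    return [i for i, x in enumerate(observed)
            if abs(x - mu) / sigma > threshold]

# Baseline of "normal" requests per minute, then a traffic burst.
baseline = [100, 102, 98, 101, 99, 103, 97, 100]
observed = [101, 99, 540, 100]
print(find_anomalies(baseline, observed))  # → [2]
```

The point Henrik makes holds even in this toy version: the model never needs to "understand" the logs, it only needs a notion of normal to compare against.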

But I think that's where ML is the more effective use case. For a defensive strategy of how to build the services, that's where GenAI can come in. It's more proactive versus reactive security: how do I build the ML script? How do I build the detection script? How do I build, for example, the services or code to do whatever kind of detection, or to sanitise? Or even just inputting code, say, a CloudFormation template or an architecture, and saying, "I'm using it to detect where I have potential flaws." From an assurance perspective, it's a really, really good use case.
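A sketch of that proactive use: feeding an infrastructure template to a GenAI model and asking it to point out potential flaws before deployment. Here, `ask_llm` is a hypothetical stand-in for whatever chat-completion client your provider offers; only the prompting pattern is the point, and the template is a deliberately flawed toy example.

```python
# Build a review prompt that asks a GenAI model to flag potential
# security flaws in a CloudFormation template before it ships.
def build_review_prompt(template_body: str) -> str:
    return (
        "You are a cloud security reviewer. Identify potential security "
        "flaws in the following CloudFormation template (for example, "
        "overly permissive ingress rules, unencrypted storage, wildcard "
        "IAM policies). List each finding with the resource name.\n\n"
        + template_body
    )

# Toy template with an obvious flaw: a security group open to the world.
template = """\
Resources:
  WebSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Web tier
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          IpProtocol: "-1"
"""

prompt = build_review_prompt(template)
# findings = ask_llm(prompt)  # hypothetical LLM client call
```

This complements, rather than replaces, deterministic linters: a model review can catch architectural issues a rule set misses, but its findings still need human verification.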

But from a proactive versus reactive security standpoint, I think that's where it's really going to shine. Whereas, for offensive security, it is definitely more on the reactive piece: how can I continuously adapt my attacks? How can I continuously create new vectors? Take, for example, social engineering and phishing exercises. That's where it's really going to shine. I mean, we see now with voice impersonation, it's getting easier and easier, going viral now with all the people saying, "Hey, I'm going to make it sound like this radio star, this rock star, this musician." But it could also be, how am I going to make it sound like your CEO or your CFO, and adapt the script automatically? I can make a video where someone is actually talking to you, setting up a video conference with a recorded message to you.

I mean, that's been around for a long time, but it's just way more effective and way more accurate. It's harder and harder to tell the difference between fabricated data and real data. If you get a call from your CEO saying, "I'm in a customer meeting right now for a $2 billion deal. I can't access my laptop," there are very few people who will say no to the CEO when they're asking, "My password is not working. What is it?"

So, I think there are so many opportunities to use it in an offensive way, and I think defence is going to have more sober use cases. And I think it's going to be more on the proactive side than the reactive side.

[0:39:08] Guy Podjarny: Yes. If anything, to add to that, I definitely relate to the concern on the offensive side, especially when you're tricking humans, when you're attacking humans. Although I've seen other use cases mentioned, and I don't know how much they're operationalised at scale yet, that also talk about continuing down a path: okay, I've made it in, or I've gotten this response. What should I try next? To an extent, I think costs help keep that a little bit at bay, because if you're entirely statistical, you rely on the systems being cheap. And at the moment, GenAI is still relatively expensive, but that helps there.

I do wonder, though. I spoke to one company that has increasingly evolved from security awareness to thinking of themselves as a security companion. I wonder whether that's something where you can almost have someone always looking over your shoulder, trying to identify if this makes sense or if it doesn't make sense. The more they can do that responsibly, the more they can help you avoid mistakes.

[0:40:11] Henrik Smith: I think what we're talking about basically goes into the whole proactive versus reactive security piece of it. And you're basically saying that every C-level or every security professional now has their own co-pilot. It's a decision co-pilot. Does this make sense? Even running your questions, your decisions, through a GenAI model: what is the best decision based on this? I think you could call it executive prompt injection. I think that's where it will help, actually. I do think that it will lead to a lot more sound decisions. But I'm also a little bit concerned that we're losing a lot of the intuition of individuals. In general, your gut feeling still has to be there, and I think that also comes with a lot of experience, getting that gut feeling. But having a gut-feeling co-pilot might not be a bad idea.

[0:41:05] Guy Podjarny: Yes, I think it's interesting. It might just require a slight change in how we operate. So, it's interesting to see how easy it is for people to accept such a thing, and not see it as a big brother, as a [inaudible 0:41:23], a corporate big brother, but rather as something that is helping them.

[0:41:27] Henrik Smith: Yes. But I do think that a lot of companies would benefit from using it more effectively to create more effective policies, standards, and structures, including runbooks, and to help create training and awareness. That's one of the things that's going to help people the most: what is the training and awareness around these things? What is your basic hygiene? And helping review and look at those items. I think that's where it's really going to shine from a defensive perspective.

There was a recent conversation I saw. I can't remember which podcast it was now. But they were talking about running social engineering tests against their support staff. And the ones who made the most mistakes were the ones who had been there for four or five years, just because they were more comfortable with what they were doing. But also because most companies only do training when you start somewhere. You don't get continuous training for it, and you become comfortable.

So, I do think there's something in using GenAI more for productivity and for creating better, more sound policies, or even taking your InfoSec policy, running it through GenAI, and saying, "Simplify this so a human can understand it." Most people do not know what the policies and standards are.

[0:42:34] Guy Podjarny: I mean, I think there is a lot in all those policies. So, probably something that helps you ingest those would be helpful.

[0:42:39] Henrik Smith: Yes. Make it human-readable.

[0:42:41] Guy Podjarny: Exactly. Time kind of flies when we're having this conversation. I'm sure we could dig into any of these things much further, but I think we're about at time. Before we go, Henrik, I'll ask you my typical closing question here, which does change from time to time. Right now, it is: if you could outsource one aspect of your job to AI, what would that be?

[0:43:03] Henrik Smith: That's a very good question. I think, honestly, one of the things that I am curious about using more is, like you said, the co-pilot piece of it. I spend a lot of time doing reviews of material, reviews of anything from architecture to plans, business plans, to challenges, whatever it could be. I would love for people to use GenAI more to create the proactive material, to run it through, to make it human-readable and simplified. Remove a lot of the buzzword lingo that is often used in these, and make it more compact and easy to consume. I would love to do it on my own stuff as well, both from a review perspective and a writing perspective. Simplify it.

So, I think I'd use it more and more, especially if I have an internal one, so I know where the data is stored and can actually use it for sensitive material. But help me do more reviews. Does this make sense? Instead of me reading through 75 pages of something, give me the cliff notes.

[0:44:03] Guy Podjarny: Yes. I think that sounds super useful. It does bring up some comics floating around the web of someone writing three sentences, then using AI to create a long and detailed email, and then someone on the other side taking that long, detailed email and asking AI to summarise it. So, I do wonder how many counter-tools will exist. But it does sound very useful.

[0:44:24] Henrik Smith: It's a little dystopian in one way, but at the same time, I think we can all become better writers and more concise about the point we're trying to make.

[0:44:33] Guy Podjarny: Yes, exactly. Henrik, thanks a lot for the great insights and for coming on to the podcast.

[0:44:38] Henrik Smith: No, thank you, Guy. Thankfully, we're going to see a lot more of this moving forward, with the recent changes that have happened. The industry is still just getting started on it, especially from a security perspective. I think we have a lot more services coming that will definitely help people create a better security posture in their own companies. So, it's going to be interesting.

[0:44:57] Guy Podjarny: It will be. It will be. Indeed. Thanks, everybody, for tuning in and I hope you join us for the next one.

[0:45:02] Henrik Smith: Thank you, all.


[0:45:07] ANNOUNCER: Thanks for listening to The Secure Developer. You will find other episodes and full transcriptions on devsecon.com. We hope you enjoyed the episode, and don’t forget to leave us a review on Apple iTunes or Spotify, and share the episode with others who may enjoy it and gain value from it. 

If you would like to recommend a guest or topic, or share some feedback, you can find us on Twitter, @DevSecCon, and LinkedIn at The Secure Developer. See you in the next episode.

Snyk is a developer security platform. Integrating directly into development tools, workflows, and automation pipelines, Snyk makes it easy for teams to find, prioritize, and fix security vulnerabilities in code, dependencies, containers, and infrastructure as code. Supported by industry-leading application and security intelligence, Snyk puts security expertise in any developer’s toolkit.

