
Season 8, Episode 136

The Intersection Of Integrity And Security With Guy Rosen

Guests:
Guy Rosen

On episode 136 of The Secure Developer, we had a fascinating conversation with Guy Rosen, who is the current CISO at Meta. In our chat, we are able to mine Guy's vast experience, expertise, and perspective on what being CISO at a huge tech company in today's climate requires, focusing on how security and integrity concerns come together and play out. In his role at Meta, Guy oversees both of these areas, and listeners will get to hear how he distinguishes the two worlds, and also where they overlap and intersect. We spend some time talking about human and technological resources for these fields, how Guy thinks about skills and hiring, and of course the impact of AI on the field right now. We also hear from our guest about issues such as privacy, account takeover, and the complexity of the policies that govern online abuse. So join us to catch it all in this great conversation!


 EPISODE 136

“Guy Rosen: What we would usually ask someone is, “Hey, we can send a reset password link to your contact point.” Then what we find in some of these cases is it's hard to give support, because people say, “Oh, but I don't even have that number anymore. That's an old email I don't have access to anymore.” It really reduces the breadth of how someone can actually prove their identity. That's where we may get into things like selfie captcha, where we ask someone to actually take a photo. We want to try to compare and ensure that this is actually the person who is represented by the profile.”

[INTRODUCTION]

[0:00:40] ANNOUNCER: You are listening to The Secure Developer, where we speak to leaders and experts about DevSecOps, Dev and Sec collaboration, cloud security, and much more. The podcast is part of the DevSecCon community, found on devseccon.com, where you can find incredible dev and security resources and discuss them with other smart and kind community members.

This podcast is sponsored by Snyk. Snyk's developer security platform helps developers build secure applications without slowing down, fixing vulnerabilities in code, open-source containers, and infrastructure as code. To learn more, visit snyk.io/tsd.

[INTERVIEW]

[0:01:27] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, we have a very special guest, Guy Rosen, who we'll describe in a second, and who I go way, way back with, probably over 20 years here. It's a pleasure to have him here on the show. While we both took various zigs and zags in our careers, I think we both started in security, left security, and got back into it, bringing some different perspectives to it. It's really, really fun to have you on the show here, Guy, and to hear a little bit of the learnings from your journey. I'll get to the bio, but first, let me say, thanks for coming on to the show, Guy.

[0:02:00] Guy Rosen: Thank you for having me, Guy.

[0:02:03] Guy Podjarny: Let's start by introducing you, Guy, to the audience. Guy Rosen is Meta's Chief Information Security Officer, or CISO. As its CISO, Guy oversees, on one hand, the safety side, combating on-platform abuse and all of that jazz, and on the other, security, protecting the security of the system itself, the product, the infrastructure, the company information. He's been doing it for a while and has a couple of decades of cybersecurity, software development, and product management experience coming into this role.

He joined Meta in 2013 when they acquired Onavo, which he co-founded and led as CEO. Since 2017, he has been leading an overhaul of Meta's approach to defining abuse on the platform and has built an industry-leading integrity team that pioneered all sorts of approaches to complex challenges, such as harmful content, election readiness, and adversarial threats. It's quite a job, all in all, Guy, no?

[0:02:55] Guy Rosen: Yeah. You might say there's enough there to keep me up at night. That's the work I like to do. All of us in security love these things.

[0:03:02] Guy Podjarny: Indeed. There's a million topics I can ask you about. I figured we'll focus on some things that you have a very unique perspective on as Meta's CISO. Maybe the first one is indeed from that bio, this notion of the combination of security and safety. It's not entirely rare to see those two in the same role, but it's also not obvious that they come together. How do you think about this combination of keeping the platform secure and maybe the users safe on it? How do they combine into one role?

[0:03:31] Guy Rosen: Yeah. I think it's a really good question. Even at Meta, they weren't together until roughly last year when I took on the more traditional information security, or cybersecurity team. For context, I've been doing what we today call integrity, which is the trust and safety work, for just over six years, since 2017. It's definitely an area that's very adjacent to a lot of what the industry does around cybersecurity, but indeed, with a platform at scale, a user-generated content platform where you have these potential issues and you need to define your policies, it's a different and almost new profession that has really grown in the past six, seven, eight years or so.

The way we think about it, internally we call this integrity for a bunch of historical reasons. It's fighting abuse on the platform. That means everything starting from, what are the policies you have? What content, or behaviours, or actors do you want, or not want, on the platform, and then how do you enforce them? How do you think about that through a lens of technology and also people? Because a lot of it has to be human-driven. You need to have a lot of the context and nuance that, at the end of the day, humans are able to provide. AI has been an amazing tool to help scale that. But ultimately, a lot of how we think about it is really about scaling humans.

You also need to think about the impact of mistakes you make. A big part of our job, and I see a lot of interesting analogues in the security space, too, is we're not just here to enforce the policies, we're also here to make sure that we are leaving the right things up. It's not just about taking content down. Because ultimately, we built a platform that people come to in order to express themselves, and it's too easy to just take a lot of things down because maybe you could block it all, maybe it might violate something. But you need to have a really high bar and make sure that even things that many people are going to find offensive, or objectionable, or disagree with, you want to give room for that, because that's ultimately the mission we have here.

If you think about that from an engineering lens, it translates into things like precision and recall, where there is no perfect precision and perfect recall. There are going to be trade-offs. There is not going to be a perfect outcome here. Understanding and talking about what those trade-offs are is a big part of how we think about this work. That's the integrity side of it.
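To make the precision and recall trade-off above concrete, here is a minimal Python sketch. It is illustrative only, with made-up scores and labels rather than anything from Meta's systems, and simply shows how raising or lowering a single classifier threshold trades precision against recall:

    # Illustrative only: how one threshold trades precision against recall.
    # Scores are hypothetical classifier outputs; labels mark truly violating content.
    scores = [0.95, 0.90, 0.82, 0.75, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
    labels = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]

    def precision_recall(threshold):
        flagged = [(s, y) for s, y in zip(scores, labels) if s >= threshold]
        true_pos = sum(y for _, y in flagged)
        precision = true_pos / len(flagged) if flagged else 1.0
        recall = true_pos / sum(labels)
        return precision, recall

    for t in (0.9, 0.7, 0.5, 0.3):
        p, r = precision_recall(t)
        print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")

A strict threshold flags less content but misses more violations; a lenient one catches more violations at the cost of wrongly flagging legitimate content, which is exactly the trade-off described above.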

Then last year, I took on the security team, which indeed is going to be more familiar, I'm sure, to many listeners: risks to confidentiality, integrity, availability. We think about this at Meta across all of our products, across our infrastructure, across our corporate systems, and the more traditional things, thinking about vulnerabilities, about incident response, about risk and compliance type work.

There's definitely a Venn diagram between these two. You see it in a few areas, areas such as account compromise. You see it in areas such as, we have a lot of work on coordinated inauthentic behaviour. This is the pioneering work we've been doing in the last, I'd say, six years or so across so-called information operations. Understanding how sophisticated adversaries are trying to use these platforms to spread misinformation, or spread their narratives, wherever they might be, in an inauthentic way. This is really an area which sits at the heart of the intersection between some of these areas, because you need to think about it from an adversarial, security-type mindset, but also from a content-type mindset of, what are the policies? How do you define them? What happens when authentic actors spread inauthentic messages? How do you mix and sort through a lot of those? Those are the challenges that you have to face.

[0:07:27] Guy Podjarny: If I echo this back, and it's a fascinating combo here. One thread in common is the adversary, the fact that there is a villain, there is someone trying to abuse the system. It sounds like another is also this separation between a pattern of attack and legitimate, normal use, how people typically use the system, whether it's for security, or for safety or integrity violations. Either way, you need to establish the norm, so that you can identify deviation from the norm and then spot the deviations from it. Is that too analogous? Did I get that correctly, or?

[0:07:59] Guy Rosen: One interesting thing is I think in the security space, there's much more of a clear, bad actor adversary. In the integrity space, sometimes there is. Could be a spammer. Could be a nation-state adversary, a hacker of some kind. But integrity also needs to think about regular people, who unintentionally run into policies, maybe writing something in the heat of the moment that ends up crossing our policies, sharing something they didn't realise. There are different archetypes of how and why people do that. Many of them are well-meaning and didn't realise and just need it to be explained, “Hey, this actually violates our policies. We'll take it down.”

Understanding how you build user experiences at scale, which help to educate, help to provide that notice to people and also give them the ability to tell us, “Hey, you guys are wrong,” so that we can go and fix it. That's really an important part of it, and, I would say, a less traditional part of the traditional cybersecurity role.

There are the areas where there is an adversary, where there is someone, for example, trying to spread spam, and they might do it through fake accounts they're creating. They might do it by trying to compromise accounts, and some of these guys can be very sophisticated. Those are the kinds of areas where we're going to find overlap between the areas and indeed, areas where these teams, I find, work together a lot in order to try to fight some of these phenomena that are out there and are obviously targeting many platforms, but something that we think about a lot.

[0:09:35] Guy Podjarny: Technologically, or organisationally in terms of people, are these two things that roll up to you but are fairly independent? Or is there a lot of cross-pollination of skills, or technology, that spans the more traditional security and the safety and integrity aspects of it?

[0:09:53] Guy Rosen: There are separate teams that roll up to me. There is a bunch of overlap, a lot of close collaboration, a lot of ways we both want to think about how we show up externally. Because at the end of the day, an adversary is an adversary. You don't want to ship your org chart to an adversary and have them find the vulnerabilities that fell through the cracks between multiple teams.

The same goes internally when we work with different product teams at the company, and they want to make sure that they're bringing the best products to market and they're thinking about safety and security as they build their products. We also don't want to show up as multiple separate teams. Those are the things we think about to make sure that we're thinking through these in a more comprehensive and holistic manner when we work with anyone that's not in our bubble.

[0:10:37] Guy Podjarny: I think that makes sense. You mentioned, I guess, account takeover is a specific example of that. If I can ask for a bit more, what does that picture look like, as an example of places where the two maybe touch one another a bit more?

[0:10:48] Guy Rosen: Yeah. If you think about account takeover, I mean, that's an area that definitely sits in the overlap between the teams. On one hand, you have the integrity team, which thinks mostly about a lot of the at-scale user experiences for people who are trying to log in and perhaps detecting that there's something fishy, or something wrong with where they're logging in from. We use a lot of AI to obviously try to understand, are there any patterns that are suspicious, and put people into checkpoints?

Again, it's an example of where we need to build those products with billions of people in mind, because the person who's going to get suddenly a screen that tells them, “Hey, we need you to do one more thing to verify it's you.” It's not a sophisticated user, like you or me. These are people who maybe only just started using the Internet several months ago and suddenly, we're putting a bunch of friction in front of them. That becomes a really important part of how, especially the integrity team thinks about doing this job. The adversaries that were up were –

[0:11:46] Guy Podjarny: Sensitive there on the integrity side, with the confluence of, indeed, not being able to just black-and-white mark a user as legit or not.

[0:11:53] Guy Rosen: Exactly. The thing we have to get very good at is measuring true positives, false positives, understanding what the trade-offs are, again. You can turn things all the way up, but then you're going to put a lot of friction in front of a lot of people that are legitimately trying to use your product and have done nothing wrong. You can also take it the other way, and then obviously, you're going to let a bunch of bad people in. The integrity team really has the expertise, in a very data-driven way, in finding how we calibrate that and trying to find the optimum. There's no one right answer. There's no one right optimum. But at least having the methodology to go about that.
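As a rough, hypothetical illustration of that kind of data-driven calibration, with invented numbers rather than Meta's methodology, one way to frame it is picking the lowest checkpoint threshold whose false-positive rate on legitimate logins stays under an agreed friction budget:

    # Illustrative calibration: choose the most aggressive checkpoint threshold
    # whose false-positive rate on legitimate logins stays under a budget.
    # risk_scores are hypothetical model outputs; is_bad marks known-bad logins.
    risk_scores = [0.05, 0.10, 0.15, 0.35, 0.40, 0.55, 0.70, 0.80, 0.90, 0.97]
    is_bad      = [0,    0,    0,    0,    1,    0,    1,    1,    1,    1]

    FP_BUDGET = 0.20  # at most 20% of legitimate logins may see extra friction

    def rates(threshold):
        flagged = [b for s, b in zip(risk_scores, is_bad) if s >= threshold]
        caught = sum(flagged)
        false_pos = len(flagged) - caught
        return false_pos / is_bad.count(0), caught / is_bad.count(1)  # (FPR, catch rate)

    chosen = None
    for t in sorted(set(risk_scores)):
        fpr, catch = rates(t)
        if fpr <= FP_BUDGET:
            chosen = (t, fpr, catch)
            break  # lowest compliant threshold catches the most among compliant ones

    print("chosen threshold, FPR, catch rate:", chosen)

The budget itself is the product decision; the sketch only shows that once you can measure false positives and catch rate, the calibration becomes an explicit, repeatable trade-off rather than a guess.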

The security team will come in when, for example, in the account takeover space, we will find the adversaries will try to find vulnerabilities in the product. They will try to find little pathways that perhaps manage to bypass some of these checks, because there was a different login screen built for whatever reason. Or another area where we find a bunch of these things come together is through any support we provide.

We have an effort to improve how we're providing support to people and businesses that use our products. Of course, the flip side of providing support is that it will get abused. These adversaries are very determined. We really find that any outlet we give people to appeal a decision, tell us we're wrong, or try to get in contact with customer support representatives, in any of the adversarial areas like accounts or spam, will almost by default be overwhelmed with adversaries who are trying to get around those systems.

It could be just, let's appeal everything. It could be, they could actually try to have a very well-manufactured attempt at social engineering for our customer support reps and try to say, “Hey, I swear, this was my account and this thing happened.” You get these stories and they're often real. Like, I have friends who ran into an issue and couldn't get back into their account. I feel for them and it's a bad experience and we're trying to improve these. We always have to be on guard, because sometimes the scammers are going to tell you the exact same story and it can be very convincing.

[0:14:10] Guy Podjarny: Right. You end up falling into a trust element with an unknown entity on the other side.

[0:14:16] Guy Rosen: Exactly. Exactly.

[0:14:19] Guy Podjarny: I guess, and it's a little bit hard to share what you're doing, but through the years, maybe even if they're a little bit outdated at this point, what are some useful techniques? Thinking, if I'm the CISO of a newer, smaller social network, or a similar user-generated content type platform, and I worry about abuse, I have this responsibility, what are some good tips and techniques that you would share?

[0:14:46] Guy Rosen: I mean, one of the areas we've looked at a lot is contact points. That means the email, or phone number you use when you sign up. What happens is people often don't pay enough attention to these, especially on a platform that's been around for many years, maybe they signed up with an old college email, or some webmail that today no one really uses, or a phone number that's changed. That means two things. One, in some of those cases, those contact points could be vulnerable, either because the person isn't paying attention. Maybe the email address was even recycled by the provider.

We've seen cases where the adversary will go to the provider and register a fresh new account on one of these webmail providers, and it's available because the person really hasn't logged in for years. Boom, they now control a recovery address for one of these accounts. Phone numbers, you have a similar thing with recycling that happens if a number hasn't been used for a while. The other really hard part of this is, if an account is compromised, what we would usually ask someone is, “Hey, we can send a reset password link to your contact point.” Then what we find in some of these cases is it's hard to give support, because people say, “Oh, but I don't even have that number anymore. That's an old email I don't have access to anymore.”

It really reduces the breadth of how someone can actually prove their identity. That's where we may get into things like selfie captcha, where we ask someone to actually take a photo, and we want to try to compare and ensure that this is actually the person who is represented by the profile. You can imagine in a lot of scenarios that becomes really hard, particularly when you have pages, or you have Instagram accounts for pets, for example. It's going to be really hard to demonstrate like, yes, this is –

[0:16:38] Guy Podjarny: This is an actual photo.

[0:16:39] Guy Rosen: - Fita the dog. It's like, I don't know if we can do the right amount of recognition there. These are the things that are really challenging. One thing we do a lot and people will see this when they log into Facebook or Instagram, we'll ask, we'll have a thing at the top of feed where it says, “Is this still your phone number? Is this still your email?” The reason we do that is because we want to know, if it's not, we really want that off the account. If it is, we want to know, “Hey, if there's an issue, we can go and we can contact you.”
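As a loose illustration of that contact-point hygiene, here is a small, hypothetical Python sketch, not Meta's actual logic, of periodically prompting users whose recovery email or phone number hasn't been confirmed recently:

    from datetime import datetime, timedelta

    # Hypothetical sketch: surface an "is this still yours?" prompt for recovery
    # contact points that haven't been confirmed within the last six months.
    CONFIRM_EVERY = timedelta(days=180)

    def contact_points_to_confirm(account, now):
        return [
            cp for cp in account["contact_points"]
            if now - cp["last_confirmed"] > CONFIRM_EVERY
        ]

    account = {
        "contact_points": [
            {"value": "old-college@example.edu", "last_confirmed": datetime(2019, 5, 1)},
            {"value": "+1-555-0100", "last_confirmed": datetime(2024, 11, 2)},
        ]
    }

    for cp in contact_points_to_confirm(account, now=datetime(2025, 1, 15)):
        print(f"Show prompt: is {cp['value']} still yours?")

The point, as above, is simply that keeping contact points fresh is cheap to ask for up front and is often the difference between an easy recovery and an unprovable identity later.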

[0:17:11] Guy Podjarny: Yeah, that's a great tip. Basically, if I'm building these, make sure that you keep that contact detail fresh, because at the end of the day, that would be your first line of defence is –

[0:17:19] Guy Rosen: It’s really important.

[0:17:21] Guy Podjarny: - potentially attempting to fake it. The easiest thing is, do you have access to that original secret? If that's outdated, that's not as good. This is, we oftentimes in this podcast talk about developers and about responsibility with on the security side on how a lot of the functionality, a lot of the – at the end of the day, security impacting decisions happen at the development time. It's a developer making the decision. How do you engage them? How does that translate to integrity to safety? Is it the same? Is there, in Meta, a developer somewhere across Meta building functionality? How much impact do they have on the work of the integrity team? If relevant, how do you educate, or inform them about doing right or wrong decision?

[0:18:01] Guy Rosen: It's definitely an important thing, because the single most important thing we need, especially when a new product rolls out, is for it to be wired up to our integrity systems. That means people can report content. We can review content. We can run our detection on it. If you think about Meta, which has a lot of these different services, you think about even just within Facebook, you have groups and pages and marketplace. There's a lot of sub-products, as it were, that are being built, and teams are iterating on them. What we have is both technology and process to ensure that integrity is part of it.

It actually starts with people. The team I oversee is a central team, the largest team that thinks about this. But we actually have this federated model where we have these satellite integrity teams that are embedded into the different product groups. There's one in Facebook and in Instagram and in WhatsApp and so forth. Those teams essentially are the integrators and they sit and they think about – they're at the table when those teams are building their new products. They're able to make sure that we're thinking adversarially. “Hey, how's this going to be used? What's a spammer going to do here? What would certain categories of bad actors do with a service?”

They also bring all of the different technology in and make sure that they're wiring everything up to the central team. Now, even the folks that are actually building the product itself, so in Instagram, not the Instagram integrity team, they're also thinking about this. We both give them technology, and I see my team's goal as making it easy for them to do the right thing. It's quite like what we think about a lot across different security areas.

[0:19:48] Guy Podjarny: Right. Because, I guess, the way you think about it, this is both for security and for integrity right there.

[0:19:52] Guy Rosen: Exactly. You're building a new place where someone can, let's say, upload a photo on whatever service. You want it to be really easy for that photo to go through detection, to give that photo a report button, to ensure that our content reviewers can see that photo if it's reported, or if it's detected, and make a decision. You need the back-end widgets, as it were, that enable review. We provide all of those components, so that they don't need to work hard to do that. That makes it a matter of quite straightforward integration.
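As a toy illustration of what "wiring a new surface up" could mean, and not a depiction of Meta's real internals, here is a hypothetical central toolkit that gives any new photo-upload surface detection, a report path, and a review queue in one integration:

    # Toy sketch of a central integrity toolkit a product team could drop into a
    # new upload surface. Names, thresholds, and structure are hypothetical.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class ReviewQueue:
        items: List[dict] = field(default_factory=list)

        def enqueue(self, content_id: str, reason: str) -> None:
            self.items.append({"content_id": content_id, "reason": reason})

    @dataclass
    class IntegrityToolkit:
        classify: Callable[[bytes], float]   # central detection model, returns a risk score
        review_queue: ReviewQueue
        auto_action_threshold: float = 0.95
        review_threshold: float = 0.70

        def on_upload(self, content_id: str, photo_bytes: bytes) -> str:
            score = self.classify(photo_bytes)
            if score >= self.auto_action_threshold:
                return "removed"                      # confident violation: act automatically
            if score >= self.review_threshold:
                self.review_queue.enqueue(content_id, "detection")
                return "pending_review"
            return "published"

        def on_report(self, content_id: str) -> None:
            # Every surface gets a report button that feeds the same review queue.
            self.review_queue.enqueue(content_id, "user_report")

    # A product team plugs in the central classifier and ships the new surface.
    toolkit = IntegrityToolkit(classify=lambda b: 0.1, review_queue=ReviewQueue())
    print(toolkit.on_upload("photo_1", b"photo bytes"))   # low score: published
    toolkit.on_report("photo_1")                          # user report joins the queue

A real team would swap in the central classifiers and review tooling; the point is only that the integration surface can be made small enough that doing the right thing is the easy default.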

Finally, we have a process, which is we have an integrity review process, and products that launch actually go through a process where we evaluate them. We make sure that we've defined policies, and if it's a new product that needs unique policies, that we have them, that we're enforcing those policies, that we have the right kind of measurement, which is really important for this space, so that we know what's going on, and that we have the right enforcement.

Essentially, this process helps us know that we're building the right product. The goal, again, as in security, is to be involved early on in the process, so that you can help the design and make sure that people are thinking about this in a prudent way. There are examples, like when we built the Facebook dating product now several years ago: when people are contacting each other, they can't send photos. I mean, we could detect photos, pornography, and various abuse, but we actually chose, as a product matter, to just disable it, just keep it to text, to help avoid that whole thing. That's an example of integrity thinking being baked in pretty high up in the product design.

[0:21:32] Guy Podjarny: Like a product by design concept.

[0:21:35] Guy Rosen: Exactly.

[0:21:35] Guy Podjarny: By the way, as a small organisational curiosity here, the description you talked about, in the business unit, it sounds like the BISOs, like [inaudible 0:21:40] security officers. Where do integrity and security meet? Is it only at the top in your organisation, or in the different business units, do security and integrity also roll up into a single individual?

[0:21:52] Guy Rosen: It varies. Different teams have slightly different structures in how they operate. But generally, it's in a similar place, because those are people who have that mindset, they have the training. Again, it's a similar pattern of how they work with the product team. They're typically packaged fairly close together, I would say.

[0:22:10] Guy Podjarny: It depends. Yeah. Let me transition a little bit, although you mentioned AI, and talk a little bit about that. Maybe just before we shift into AI security, or talk about new technology security, you mentioned AI is a good tool to combat issues, or abuse, on the integrity side. At the same time, I know there's all this conversation around the value of AI and content filtering, versus the privacy-first lens that is also perceived as a safety, or an integrity, or security practice. How do you think about that balance of seeing more content, so you can filter more content, versus seeing less content, so you're not in the data?

[0:22:50] Guy Rosen: It's a really important one that, I mean, not just we, honestly, the whole world struggles with, right? I mean, we are moving towards encryption in Messenger. WhatsApp is encrypted, which means there are fewer ways to scan things. On the one hand, you could think, look, from an integrity perspective, yeah, I might just want to scan everything. Really, if you think about this, first of all, privacy is a first principle for these products. People need to be able to communicate in a way that no one is there –

[0:23:23] Guy Podjarny: No one there who could see it. Yeah.

[0:23:24] Guy Rosen: Exactly. We talk about WhatsApp as – it's a digital living room. You want to be able to have a conversation. It's completely private. What you want is to then think about what are, within a private space, the right ways to give those capabilities? The things we think about are, for example, user reporting becomes important, because if someone's on the receiving end of some abuse, you want to make sure that they're able to submit it. Of course, they've seen that abuse. They're able to share that with us, so that we can investigate.

You also think about behavioural things, and we see this on, for example, BAM on WhatsApp. We're able to see, through behavioural patterns, even without seeing anything in the content, that something here, the volumes or the patterns, is not where it needs to be.

[0:24:08] Guy Podjarny: An individual doing that, or, like, a group that gets created with 100 people invited in one go.

[0:24:13] Guy Rosen: A lot of the levers also become those upstream things we can do in the product, to ensure that we understand how we give people information, how we give people more levers. You can leave a group and say, “Hey, don't add me again.” Or, you see a message on WhatsApp that's been forwarded multiple times, and you see a little double arrow that says, “Frequently forwarded.” Those are little cues that help, in an integrated product experience, to enhance some of the integrity features there; it has to be about empowering people.

Ultimately, we know that privacy is a safety thing, because ultimately, there are people at risk who get their phones taken, or their messages intercepted. Making sure that we are really honest to what end-to-end encryption promises is just so important.
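To make the behavioural-signal idea from a moment ago concrete, here is a deliberately simplified, hypothetical Python sketch, not WhatsApp's actual detection, that flags likely bulk senders from metadata alone (volume and fan-out), without ever reading message content:

    from collections import Counter

    # Hypothetical sketch: flag accounts on an encrypted service using only
    # behavioural metadata (sending volume and recipient fan-out), never content.
    def looks_like_bulk_sender(events, max_messages=200, max_recipients=50):
        """events: list of (sender_id, recipient_id) tuples for one hour."""
        per_sender = Counter(sender for sender, _ in events)
        recipients = {}
        for sender, recipient in events:
            recipients.setdefault(sender, set()).add(recipient)

        flagged = set()
        for sender, count in per_sender.items():
            if count > max_messages or len(recipients[sender]) > max_recipients:
                flagged.add(sender)
        return flagged

    events = [("spammer", f"user{i}") for i in range(300)] + [("friend", "me")] * 5
    print(looks_like_bulk_sender(events))   # -> {'spammer'}

The thresholds and signals here are invented; the design point is simply that volume and pattern signals can exist entirely outside the encrypted message content.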

[0:25:02] Guy Podjarny: I think it's interesting, to echo that lens of empowerment. On one hand, before, you were commenting on how your job is to make it easy for developers to do the right thing, and it's also true for users. I guess, to extend it, the more power you give anyone, in this case a user, because you've opted for a privacy-first platform, or a developer who is clearly entrusted with writing significant functionality that affects these billions of people, the more you have to make it easier for them to do the right thing.

Let's shift gears a little bit. This is probably a topic we could talk about for hours. I want to tap into another knowledge domain that you have a bit of a unique view into: this realm of securing cutting-edge technologies. I'm melding together here two things which are not really related, I think, which are AI and the metaverse, just thinking about both as things that you'd need to get a grasp on ahead of much of the world. Maybe let's start with AI, just because I think more people are grappling with that. When you think about AI security, first of all, how do you even approach this problem? Unlike many others, there aren't really as many existing best practices that you can follow and implement. When you think about AI security, or really metaverse security in this context, how do you approach knowing what to do? How do you organise and structure it? How do you tackle that?

[0:26:22] Guy Rosen: The first thing is it actually leans on that same structure and same process I talked about, where in our metaverse team, for example, we have a metaverse integrity team. It thinks about it basically end to end. What are the policies that we want to have? What are the systems we need to build? Where can we lean on the central systems? Where do we need to build unique things for what is a unique, different, synchronous experience? Also, what are the privacy expectations, and making sure we're honouring those.

It's really important to have these discussions early, so that we think about these things and do the adversary thinking to figure out where this is going to be abused. What are the different kinds of things? Where do we want to draw lines? Where are people maybe going to have offensive things that we don't like, but it's their right to do so, and where is it, look, this is where we draw a line and we'd like some ability to understand and enforce policies? Where do you want to empower users? Where do you want to empower creators, who are building worlds, for example, and say, look, they get to set the rules, because it's their space? What are the areas where we need to take responsibility centrally and say, “Look, these are certain red lines that you can't cross no matter what,” even if a creator thinks this is appropriate behaviour?

You have things that are all across those dimensions. A lot of it comes back to that same empowerment point you made earlier, because if you think about a synchronous experience, like in Horizon Worlds, which is our metaverse application, being in that world, the approach of content filtering that you have on a medium such as Facebook or Instagram, with posts or stories, might not be applicable, because people are speaking live, much as we are now. Is it appropriate for that filtering to happen in real time? It's probably not the right mechanism; it's much more about empowering.

We will do things like, having a personal boundary. If someone feels harassed, they basically can do something you can't do in the real world, which is draw a circle around the [inaudible 0:28:27].

[0:28:27] Guy Podjarny: Erect the barrier. Yeah.

[0:28:29] Guy Rosen: Exactly. No one is coming in here. This is my space. You can turn on a feature that, for example, garbles everyone's audio except your friends, so that, like, I just don't want to hear all those other people that are out there. I only want to be able to hear and interact with friends. These are examples of integrity features that are actually product features, that help to empower people, give them choice, give them the ability to control their experience. It's probably the more appropriate way for us to be thinking about some of these things, because again, the traditional concept of content moderation might not apply in something –

[0:29:08] Guy Podjarny: Doesn’t apply. Yeah.

[0:29:10] Guy Rosen: That synchronous conversation.
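As a purely hypothetical sketch of the "only hear friends" control described above, and not how Horizon Worlds is actually built, the client could simply choose which incoming audio streams to play:

    # Hypothetical client-side filter: only play audio from friends when the
    # "friends-only audio" preference is on; everyone else is garbled or dropped.
    def audible_streams(incoming_streams, friends, friends_only_audio=True):
        """incoming_streams: dict of speaker_id -> audio stream handle."""
        if not friends_only_audio:
            return incoming_streams
        return {
            speaker: stream
            for speaker, stream in incoming_streams.items()
            if speaker in friends
        }

    streams = {"friend_a": "<stream 1>", "stranger_b": "<stream 2>"}
    print(audible_streams(streams, friends={"friend_a"}))   # only friend_a is heard

The design choice it illustrates is the one described in the conversation: in a synchronous space, the control sits with the person having the experience rather than with after-the-fact moderation.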

[0:29:11] Guy Podjarny: Do you think about the hiring profile when you think about individuals for it? Does that make that team look different in terms of makeup and skills, versus maybe the content moderation teams, or things that are a little bit more of a paved path? Although, granted, maybe at the scale of Meta, there are elements that are unique there as well, but still, they're a little bit less abstract than the metaverse world, for instance?

[0:29:35] Guy Rosen: We have a bunch of internal mobility. People have moved around between these teams. You'll typically find someone who was maybe on one of the central teams, or helped build Instagram's integrity, now thinking about things on the metaverse or some other new area. But we'll also hire new people who have worked in the space. I generally have found, and this is true across integrity, especially as an area which is relatively new in the industry, that really strong generalists are able to take an adversarial approach and think about the bad use cases, people who are very analytical, very structured thinkers, and very principled, because that really matters to make sure that they can articulate, what is the policy? Where do you draw the lines? Where do you not want to actually enforce?

I've found a lot of that talent can come from people who haven't necessarily worked on these areas specifically, but were able to come in and do amazing things on both the central integrity team and on the broader set of integrity teams as they tackle some of these issues.

[0:30:39] Guy Podjarny: That makes a lot of sense. I'm slightly curious whether security profiles fit that bill. They're generalists and they're doing it, but is there so much compliance baggage, or is there too much, almost like negativity sometimes? Do you find people coming from the infosec side to be a good foundation to come into integrity?

[0:30:58] Guy Rosen: Some of them, definitely. There's definitely a mix. Originally, these teams, or at least parts of them, were actually together back before my time working on the area. They gravitated apart due to a bunch of organisational reasons and ended up coming back together. There are a fair amount of people who actually worked on the crossover between both.

I think on every team, you want a diverse set of people who have had different kinds of experiences, worked at different companies, also see different things in life and understand what different people are trying to do with technology, what issues they're dealing with and their population, or their communities and how that manifests on the platform so they can bring that perspective. You're definitely going to have people who are pulling in different directions. The goal of a team is to try to create a cohesive plan and try to understand all of those different perspectives and try to bring a plan and create the product and the right solutions and the right policies and the right enforcement to be able to account for so many of those different issues.

[0:32:06] Guy Podjarny: If we continue from there, you have a team, they're divided. The team that works on the metaverse, or maybe even a subset within that, Horizon, is focused on that problem domain. They get to focus and pave the path on it. Meta is also a platform for a lot of other developers of metaverse applications, of VR applications. Basically, I think very few people know what to even answer when they ask, what are the potential security threats that you should consider when you are building a metaverse application?

I guess, maybe one is to actually literally ask you that question: what are the top one, two, three things that people should think about? Second is, when you institutionalise this and you talk about these platform developers, do you build some guidance? How do you use your learnings to educate the ecosystem?

[0:32:58] Guy Rosen: Yeah. The goal is to provide guidance, SDKs, libraries, etc., for developers, so that they can ultimately create the right experiences, but we can help them through a bunch of it and give them the right umbrella of expectations. If you think about it, it's not that different from building an app on Android or iPhone. The platform has a certain set of expectations, that you're going to do some content moderation. We have some baseline expectations. They don't get into the nitty-gritty, because it's not reasonable for a platform to be very prescriptive down to the individual policy line. But we want to define the broad strokes. We have the broad values, or principles, of what we expect, the conduct on a platform where we have developers that are coming in, that are building their own apps for a Quest headset, or what have you.

We want to give them SDKs to help them do the work. Because ultimately, developers are sitting down and trying to build the best game in the world, or an amazing interactive experience. We want to make sure that we're empowering them and helping them integrate these capabilities. Also, we don't want to be the ones that are setting the discrete rules in each and every app, because that is unsustainable, and it's unreasonable for developers who are trying to create their own space and their own flavours; each app has its own feel and different things that they want to make sure are happening inside the app. We provide that guidance and the SDKs to essentially help them accomplish that.

[0:34:39] Guy Podjarny: Yeah. It's interesting how a lot of these answers end up revolving around this same path, which is you've got a central team. They create the integrity and the security capabilities. But at the end of the day, you're trying to create and package, sometimes in guidance, sometimes in software, a set of practices, a set of behaviours that have been scrutinised to be secure, to be ethical, to be safe. Then you try to propagate those across your developers, you propagate those to users. Users is clearly a little bit more narrow a path, and then to software, to developers of applications on top of your platform.

I guess, it sounds like the paved road approach. The more you stay on the paved road, the fewer of these concerns you need to have as a developer in or out of Meta. The more you're bushwhacking your way off to the side, the more you need to take on those considerations yourself.

[0:35:33] Guy Rosen: I think that's right. I mean, this is the general pattern one would expect from many a security team, and the same applies to integrity: you want to set the rules, or aspirations, or conditions, but then also help people meet them. That's, I think, really important, for those to happen together, so that you're actually empowering, whether it's engineers at the company, whether it's developers outside the company, whether it's creators who are trying to create a world or create spaces, making sure that you're helping them do these things, but giving some framework for them to think about these things.

You want to make sure, ultimately, that you're letting them experience this amazing new thing that you've created and create great things that we haven't thought of. I mean, the goal is to make sure that people are innovating. I think it's going to be hard for people to innovate if your rules are too onerous. It's not about the rules. The rules are: while you do that, here's how you can be a responsible actor. Here are the things we're worried about. Here are the expectations the world has as you build these. Hey, here are ways to help you do that and fulfil those obligations in the most effective way.

[0:36:38] Guy Podjarny: Yeah. I guess, combining a little bit of those two worlds, we've talked about integrity and security, and we've talked about this more metaverse, more murky space. What do you do around tech for the new space? What do you do around detecting that something went wrong in the new reality, that someone's tampering with your AI algorithm, that someone's indeed performing some social engineering scheme within the metaverse? Do you have to build all of those yourself? Is there some nascent startup scene there? I guess, the question is whether there's enough of a market for that at the moment. What do you think about detection and about building those tools? Do you build them all in-house?

[0:37:19] Guy Rosen: I think, probably, almost all of this stuff is in-house. Just the sheer scale and the requirements we have for so many of our platforms make it very hard to integrate with outside technology. To me, one of the things that is at the heart of so much of this work is measurement. In integrity, this is such a big part of what we do: we define the policies and then we measure ourselves against them. How much bad content are people actually seeing? How much is getting through? Then that helps us understand where we need to make progress on enforcement, whether there are changes in the ecosystem, whether an adversary is trying something and perhaps breaking through some defences and we need to go back and make sure that we're updating what we do to react to that.

[0:38:05] Guy Podjarny: This measurement focuses on user reports, or what's the primary measurement? I mean, you have to measure against something.

[0:38:12] Guy Rosen: On integrity, for example, one of the most important metrics we have is a metric we call prevalence. Think about this in the context of something like Facebook, or Instagram: prevalence is how much content that violates our policies actually shows up on someone's screen at the end of the day. What we do is we take a sample of content that's viewed, and then we send that for labelling by our content reviewers, and then we're able to see, oh, there is something, and we publish these metrics now on a quarterly basis, so it's something like 0.0-something per cent of content that is viewed that violates, say, hate speech or pornography policies, for example.

By looking at those trends, we're able to see, is the enforcement working? Because why would someone see it? It means either we didn't catch it, or we caught it, but too late. It took time, and perhaps it took a few minutes, or a few hours, for a reviewer to get to it, because it was something that AI couldn't make the decision on. Therefore, in the meantime, people may have seen that post. We use that measurement to help us understand exactly the direction and where there are changes in the ecosystem.

We will see adversarial evolution. We see things like, in areas of pornography, for example, which has some overlap with spam, we will see spammers iterating on getting as close as they can to the line of where our pornography detection kicks in, while still posting something that's suggestive, to try to get that content through. Understanding how they're doing that, how they're manipulating it, whether they're trying different services to see if we have different defences that maybe run at a different pace across different services. Those are all the kinds of things where adversaries are constantly trying to test and push against our systems, and so we have to be very, very alert and make sure that we're reacting to those kinds of things.
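To make the prevalence metric described above concrete, here is a small, purely illustrative Python sketch of the idea: sample views rather than posts, have reviewers label the sample, and report the share of views that landed on violating content. The data and numbers are invented, not Meta's methodology:

    import random

    # Purely illustrative prevalence estimate: sample *views* of content, label the
    # sampled items, and report what fraction of views landed on violating content.
    random.seed(0)

    # Hypothetical corpus: (content_id, number_of_views, violates_policy)
    corpus = [("ok_post", 10_000, False)] * 99 + [("bad_post", 10_000, True)]

    # Build a view-weighted population and take a uniform sample of views.
    views = [violates for _, n_views, violates in corpus for _ in range(n_views // 100)]
    sample = random.sample(views, k=2_000)

    # In reality, human reviewers would label the sample; here the label is given.
    prevalence = sum(sample) / len(sample)
    print(f"Estimated prevalence: {prevalence:.2%} of views")   # roughly 1%

Sampling views rather than posts is what makes this a measure of what people actually experienced, which is why it can tell you whether enforcement is working rather than just how much the classifiers flagged.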

[0:40:15] Guy Podjarny: Indeed. I think we're running out of time here. There's so much still to unpack. Maybe if I can ask for one last nugget of wisdom: if you're talking to a security leader, a security and integrity leader, at a nascent startup, you've already given a bunch of tips, but what's one more bit of advice you'd give them, something to make sure to be mindful of as they embark on the journey?

[0:40:37] Guy Rosen: Wow. Yeah, that is a – it's probably a long [inaudible 0:40:40], but I’ll try anyway. I think one of the key things is that it's a multidisciplinary endeavour. It's not just an engineering problem. We have three teams, three disciplines that work on the integrity work in particular, which are the engineering teams, the policy teams and the operations teams. If you think about it, each of these, we always have to be working in concert, because the policies you define end up having to be translated by the operations team to guidance for content reviewers, by the engineering team to AI systems, or actually even the tools that those content reviewers use.

Nothing can happen in a vacuum, because what you don't want is, you don't want to write a policy that you can't enforce. These teams are really deeply embedded. One of the things that I think was most important that we did is that not only do I, having historically focused mostly on the engineering side, spend time with my counterparts from those disciplines, but we also propagate that down. An individual engineering manager, or product manager, working on, let's say, hate speech, will partner very closely with the policy manager for hate speech and with the operations manager for hate speech. That triumvirate will work together and ensure that we're constantly working in lockstep as we evolve the policy, as we build the systems, as we train the reviewers, because you have to have a tight feedback loop across those. I think that's probably the first thing I would do: make sure you're really thinking about these things as that interdisciplinary effort.

[0:42:22] Guy Podjarny: Yeah, that's a great bit of advice and totally not obvious, probably especially for someone coming from the tech side of the fence. That is superb advice. Guy, thanks again for coming on to the show, for sharing all of this, and in general, for all the great work you do to try and keep the social world safe. It's certainly a big part of our entire world here. Thanks again.

[0:42:41] Guy Rosen: Thank you. Thank you for hosting me.

[0:42:44] Guy Podjarny: Thanks, everybody, for tuning back in. I hope you join us for the next one.

[END OF INTERVIEW]

[0:42:52] ANNOUNCER: Thank you for listening to The Secure Developer. You will find other episodes and full transcripts on devseccon.com. We hope you enjoyed the episode. Don't forget to leave us a review on Apple iTunes, or Spotify and share the episode with others who may enjoy it and gain value from it. If you would like to recommend a guest, or topic, or share some feedback, you can find us on Twitter @DevSecCon and LinkedIn at The Secure Developer. See you in the next episode.

[END]