
Season 8, Episode 131

Exploring Data Security In Social Media With Roland Cloutier

Guests:

Roland Cloutier


In episode 131 of The Secure Developer, you'll hear from former TikTok CISO Roland Cloutier about the realities of securing user-generated content at scale, and his belief that we need to take a strictly data-centric approach, rather than a humanistic one, to solve many of these privacy-related issues. Tuning in, you'll gain some insight into what it takes to oversee a social media company's cybersecurity, data protection, and crisis management, and find out why Roland believes that an innate understanding of company culture is key to building a large, fast-growing security team in an increasingly virtual world. We also touch on the challenges of user identity management, the need for user-driven authentication methods, increased state-level security regulation in the data space, and more, so don't miss today's fascinating conversation with cybersecurity expert and industry veteran, Roland Cloutier!


EPISODE 131

“Roland Cloutier: You have millions or billions of users on any given day providing their content. It's impossible to have a humanistic approach to be able to monitor for data. You have to do that in the context of the data itself and what type of data it is. It can be as simple as I have algorithms that look for certain words in text-based things. Or I have to be able to, at scale, review a video before it gets posted to look for dangerous items. Those are three really basic things. Protect the platform, protect the agreement I have made with my user and enforce that, then that third piece, that trust and safety, ensures that the information doesn't violate any of that.”

[INTRODUCTION]

[0:00:46] ANNOUNCER: Hi. You’re listening to The Secure Developer. It’s part of the DevSecCon community, a platform for developers, operators and security people to share their views and practices on DevSecOps, dev and sec collaboration, cloud security and more. Check out devseccon.com to join the community and find other great resources.

[INTERVIEW]

[0:01:08] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Thanks for tuning back in. Roland, thanks for coming on to the show again.

[0:01:15] Roland Cloutier: Guy, thanks for having me back. I was surprised. Just kidding.

[0:01:19] Guy Podjarny: I guess it depends. People need to go back and listen to the past episode and make a judgement call. As I remember it, it was actually pretty brilliant. Actually, going back to that, it was quite a while ago; you were still CISO at ADP the last time we spoke. Maybe just to kick us off, catch us up a bit on what you've been up to since.

[0:01:37] Roland Cloutier: Wow, it seems like eons ago. I took a new job in between then and now and am just wrapping that up. I was the Chief Security Officer globally for TikTok and ByteDance. I had the opportunity to help build specific programs for the defence of the platform, the growth of the company and the assurance of the go-to-market and the people that use the platform around the globe. It's been an exciting two, three years.

[0:02:03] Guy Podjarny: Yeah, yeah. Sounds like it. I think the pandemic makes the whole time horizon feel especially elastic; you don't really know how much time has passed. It also must have been an energising time, shall we say, in social media, and specifically in TikTok land.

[0:02:19] Roland Cloutier: It was certainly interesting. It was a heck of a ride. It was a hell of a mission, though, being asked to go in and help build what would eventually become national security programs to ensure the interest of the different jurisdictions of the platform itself and the consumers using it, and to build a new team. I mean, in that time, Guy, I hired 300 new people, virtually, predominantly, and built three new critical incident response centres, fusion centres. It's just a lot of work in such a short amount of time. It's an amazing feat that I was fortunate to be able to do.

[0:02:53] Guy Podjarny: Yeah. It definitely has been a crazy growth time. Well, 300 people in a fairly short stretch of time. Maybe digging into that: when you were on the show last, we talked a lot about how at ADP you were investing in reskilling people and bringing the old guard, if you will, of security approaches into more of the modern, service-oriented approach to security.

I'd be curious about the learnings. I loved the learnings back then about best practices and doing that at scale. Since then, you've hired those 300 people, and I'm sure you hired a lot of people at ADP as well. Any key learnings about how to build a team at that pace, and virtually, in this type of new setting, that you think are repeatable?

[0:03:36] Roland Cloutier: Well, I've learned a lot. That is for sure. I would start with probably the most important thing: especially when you're in a new company, and especially when it's a diverse environment, like a multinational, you've really got to take time to understand the culture. I've always thought in the past that that is a strong portion of who I am as a leader. I think as you take different jobs and they're more and more challenging, you learn you have a long way to go.

Culture is such a dynamic word, right? There's the culture of the company. There's the culture of the people from all of the different parts of the world that they come from. There's a culture of age. Look at TikTok; the median age was literally half my age in the company. Talk about making me feel old! But you have to learn that culture, and the people that you're going to be leading. I mean, there's just a lot to it. The thing I can tell others is that you can never take enough time to learn the culture, and to layer in your decisions based on those learnings. That's number one.

[0:04:36] Guy Podjarny: Before we go into number two, what does learning the culture mean to you? What types of activities would help you get it? How do you learn a culture?

[0:04:45] Roland Cloutier: It depends on the type of company and the industry you're in, but spending time with people. I get it. We had to do this in a time period where everybody was working from home. You're finding people, onboarding people, integrating them into a company, even myself, and people are simply working from their homes. There is still a culture embedded in that.

Where's the history of the company? Why do they do what they do? What are the important methodologies that they follow to keep that culture? How did the company literally start? What is the expectation of where the company is going? What are key things that the company holds true and dear? That's just a handful of things. Taking the time to do that. Meeting with people is truly just underestimated by a lot of folks. Taking the time to sit down and have a conversation.

[0:05:35] Guy Podjarny: How do you glean it, though? I very much relate to meeting people. I think it's pretty easy to book a time, especially if you're virtual, have a conversation with someone, and be very factual in the information exchange. I mean, are there some telltale aspects, or when you think about a culture, are there attributes of it you're especially attuned to? How do you conclude, or try to notice the right hints, when you're trying to take in a culture in these types of meetings?

[0:06:03] Roland Cloutier: I think there is more to it than just go meet with everyone. That is really hard to do. I mean, you have a day job. You might be running a security program, or a portion thereof. I think the how is important. The first how, to your point: book a meeting, but make it open and transparent. Here are some ideas. I like doing the CISO breakfast. Virtually, it's a little different, but it still works, right? You book a time that's reasonable for people within their time zones. You sit down, have a piece of paper and a pen, and you have zero line items that you're going to cover. The discussion is, "Guys, I'm here for you. I have a piece of paper in case I have to take a note and do some follow-up because you give me an assignment. Other than that, I want to hear what you want to hear."

I love those; they're some of my favourite meetings. I learn so much. I learn about the people. It gives you the opportunity to have a dialogue and exchange in a friendly format. By the way, none of your directs and no managers in the room. Just you and the team. I think that's super important.

Jason Whitty, who's the CSO of a really large financial company, was telling me in a meeting a few weeks ago about something he does on a weekly basis. He holds, I want to say, some odd number, like 27 minutes or something like that; it fits in a certain amount of time, but he does it every week. It doesn't matter where he is around the world. He does it every week. He always covers one or two topics, and then leaves it open for the team to have the discussion.

He was telling me that he has learned so much about the culture, and it has brought him up to speed quickly on that environment. That works. I mean, these are active listening skills. These are really simple. These aren't things for business. These are things just for people. The more you can use active listening skills to understand who people are, where they come from, and what their concerns are, the quicker you get caught up on the culture of the company, as well as the culture of the people.

Another way to go at the company is spend time outside your box. I used to go and sit with the sales folks. I used to love to spend time in call centres. I would go into different parts of the organisation and hear what they're doing. Again, you're going to pick up different aspects of the company that way.

[0:08:18] Guy Podjarny: Yeah. I love that. That's the embedding. You're describing almost three levels. You can have a professional conversation and just notice whatever aspects of the culture. You can have a conversation that is intentionally designed, in portions of it, to leave space and actively listen. I'm bad at that. I don't do well with silence; I'm good at listening, but if there's an awkward silence, I have the urge to fill it. Still, I relate to the importance of it, even if I suck at it. Then the third aspect is almost like observing them in real life: being in their situation, versus hearing what they want to tell you about, and choosing to see the reality. Is that about right?

[0:09:03] Roland Cloutier: It's absolutely right. I can't stress enough that experience matters in the company. You might be coming in to solve a problem, or you might be coming in to do something new, but there are a lot of people who came before you and helped build what that culture is. Find those people. Find the people who are most passionate about it, at leadership levels, or just anywhere in the company. Latch on to that. Be an active listener, but find the people that understand the culture, too.

[0:09:34] Guy Podjarny: Maybe this is stating the obvious, but why? What's the cautionary tale here? If you didn't take the time to learn the culture, what could happen?

[0:09:44] Roland Cloutier: Wow, there's a pile of stuff. First of all, you won't get your job done. You won't accomplish the goals and the mission that you've set out to achieve. There's a simple reality that humans will support those things they understand and ingratiate themselves with those things that understand them. When you fall short of that, or you miss the mark on the culture of an organisation, you get a lot of pushback. Even if they're smart and intelligent decisions, maybe it's on a product. Maybe it's on a program. Maybe it's on a new technology. It doesn't matter. The idea that you don't understand us, that you don't understand the we and that you're just a you; it's like white blood cells attacking bad things in your body. You have a really, really, really hard time. That's number one.

Number two is, if you're in a leadership role, the question is: how long can you last as a leader who doesn't share the same values or strategic direction that the organisation has? Because that's all built into the culture. Culture is a component of multiple things. If you don't understand it and you're not leading in the same direction as others, you may be leading away instead of leading from the core. You have to think about that.

[0:10:58] Guy Podjarny: Yeah. That makes a lot of sense. It's interesting because with Snyk, clearly a much smaller organisation than both the ones we mentioned, as the company grew and new execs and new leaders joined, it's so evident whether people are or aren't aligned to the culture, and the contrast when they're not can be so sharp.

[0:11:20] Roland Cloutier: Time for me to ask you a question. Because you've done this a couple of times and you've had to grow leaders into the organisation. As the CEO of a company that's responsible for helping build, develop, and drive that culture, when you see new leaders coming in who aren't grasping it, what do you do?

[0:11:41] Guy Podjarny: Yeah. I mean, I think there's a fairly short fuse, especially at the leadership level. There's a period of time. It starts with raising awareness and raising attention. The best-case scenario is people thought that they needed to come in and deliver results, and they just came in and tried to run their playbook. It's to say: stop, wait, give it time, listen to people, don't try to rock the boat right away, absorb it. Ideally, in a good number of cases, it's just a quick shake-up and they sort it out.

If that doesn't work, or doesn't work enough, then you get a bit more concrete about pointing out the things and the behaviours that you want changed. Frankly, when you get to that point, you're already at fairly poor odds of success. If it doesn't get resolved right away, it's unlikely to end well. That person is probably not going to stay in the org for long.

We've been pretty good at not having that happen too much, because we've been super attentive to it. Yeah, it doesn't last long. Otherwise, I mean, you said it very well: it feels like it's leading away from where you want to get to.

I think culture is super important, and we have some great tips here. Probably the easiest objection is that it's super time-consuming, and coming into a role, you do need to deliver results. Any thoughts on what happens if you're very tight on time and you care about this, but you feel like you're having a hard time making time for it?

[0:13:05] Roland Cloutier: You're not going to get culture overnight, and you're certainly being brought in for a reason, to go do a job, so you have to do that job. I think the best word I have for it is iterate. Meaning, pick a focus area to learn, then try to integrate it into your work style as you're delivering. If the CEO comes to you, or someone else comes to you and says, "Hey, here's what I'm hearing," iterate on it. Don't take it as an affront. Don't become defensive. Bring it in and understand that it's just an opportunity to better yourself and better your approach.

Here's what works for me. I write it down. If I'm having a one-on-one with someone and they say, "Hey, Roland. Can I bring something up? You may not be aware of it, or it might just be a style thing, but here." Okay, let me ask you a couple of questions. How did that come across? Here's what I meant. Do you have some suggestions for me? Has this happened in the company before? Just take notes, right? Then iterate on yourself. Say, "Okay, I'm going to do these three concrete things to make sure that doesn't happen again." It's not all-or-nothing and it's not an overnight thing.

Continue to implement it. Add new things to your weekly schedule, to your meeting schedule, to how you lead your teams, to how you manage. Just add components of it as a normal course of working, just like breathing, right? I come from a military background, and I can tell when someone came out of the military, especially from high-tempo operational teams; when they're in a tough situation, you can see them breathe. I swear, I can tell the military guy from across the room. You see them go into deep, long-pause breathing to centre themselves and calm down. It's the same thing, right? Take that time to take a step back, understand it, iterate it into your work and move on. It'll become muscle memory for you. Okay, that's how we operate; next. That's how we operate; next.

[0:15:00] Guy Podjarny: Yeah. Iteration sometimes feels like the answer to almost everything you do, and culture is no different. Those are great tips: you have to take it a bite at a time, but you have to make sure you're proceeding in the right direction. We could probably go down this path for quite a while, and building a team is super important, especially for CISO positions, in which oftentimes you think you can tell people what to do when in practice, you can't. You need them to come along for the journey.

I'd love to dive into something that's a bit more subject-matter-related when it comes to security, something you've had a lot of exposure to: the whole world of user-generated content. I know that in the world of security, it oftentimes feels that what you're really tackling is the entry points; where can someone introduce a potential attack, a potential problem, into your system. When you look at the world of user-generated content from the outside, it feels like a bit of a nightmare from a security lens, because content is coming at you, sometimes elaborate content with real security avenues, if you will, from all possible directions. You're brokering these things, so the control feels a bit low.

You've been immersed in this space for a while. When you think about securing user-generated content and doing that at scale, how do you approach this type of problem? How do you tackle it?

[0:16:20] Roland Cloutier: Yeah. Where do I start? Taking a step back, obviously, having been in social media companies, there's the world of security risk and privacy enforcement, data defence, access controls, the typical things. Then there's this world of trust and safety, right? I have user content. I have user use of the technology and platform; how does that work, and how do we protect people from themselves and others? It's an emerging world, for sure.

I think, first and foremost, you have to have really, really good transparency into what you're doing. User-generated content is a very big thing. It may be as simple as someone writing something on Twitter, or as deep as creating a video and posting it to a media platform. It's that wide. I think, first of all, the focus is having an understanding of what you're protecting, what you're required to protect, and what you're accountable for, both from a legal perspective and a regulatory perspective, as well as from set expectations. Number one is: what is my responsibility, and what's my responsibility to my customers and the people using my technology?

The second thing from my point of view that's really important is the difference between securing that user information, meaning what comes into my platform, the general security, making sure my platform is appropriate for the data and protecting the interests of the information they put in there, whether public or private, versus the aspects of monitoring that information for policy. It's hard to put those in two buckets, but they're very, very different. One is: how am I securing your information, Guy? You created a cool video. I put it up there. You set it private, for your friends. How do I ensure that it's private? That's a very high level of looking at it.
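
To make the "how do I ensure that it's private?" question concrete, here is a minimal, hypothetical sketch of a read-time authorisation gate for a friends-only video. The field names, visibility values, and friends_of lookup are illustrative assumptions, not any platform's actual model.

```python
# A hedged sketch of enforcing a creator's privacy setting at read time.
# "owner_id", "visibility", and the friends_of lookup are assumptions.

def can_view(video: dict, viewer_id: str, friends_of) -> bool:
    """friends_of: callable mapping a user id to that user's friend set."""
    owner = video["owner_id"]
    visibility = video["visibility"]  # "public" | "friends" | "private"
    if viewer_id == owner:
        return True                   # creators always see their own content
    if visibility == "public":
        return True
    if visibility == "friends":
        return viewer_id in friends_of(owner)
    return False                      # fail closed on "private" or anything unknown
```

The design point is that every fetch path goes through one gate that fails closed; privacy then only degrades if a code path bypasses the gate, which is exactly what the control validation discussed later in the episode is meant to catch.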

Then there's the aspect of my accountability to regulatory issues: what can be said on my platform or not? What are we held liable for, and what are the conditions under which we allow people to put user-generated content on?

[0:18:36] Guy Podjarny: This would be things like, whatever, child pornography, or things that might cross the law, or things that are just against policy? I don't want hate speech on my platform, things like that.

[0:18:45] Roland Cloutier: Right. We don't allow hate speech. We don't allow untruthful information, whatever your platform or your technology requires. Then the third piece of that is: how do we monitor for violations of the agreement you have as a user and I have as a provider of that platform? That gets bifurcated a lot in organisations. There is technology that we have to use to do it, because think about it, you have millions or billions of users on any given day providing their content. It's impossible to have a humanistic approach to be able to monitor for data. You have to do that in the context of the data itself and what type of data it is.

It can be as simple as: I have algorithms that look for certain words in text-based things. Or: I have to be able to, at scale, review a video before it gets posted to look for dangerous items. Those are three really basic things; protect the platform, protect the agreement I have made with my user and enforce it, and then that third piece, trust and safety, ensures that the information doesn't violate any of that.
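
As an illustration of the text-based side of what Roland describes, here is a minimal sketch of a screening function that combines a blocklist with a route-to-review path. The terms and thresholds are invented for the example; production systems layer ML classifiers, human review queues, and appeals on top of anything this simple.

```python
import re

# Hypothetical policy list; real blocklists are maintained by policy teams.
BLOCKED_TERMS = {"scamword", "threatword"}

def screen_text(post: str) -> str:
    """Return 'allow', 'block', or 'review' for a text post."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    if words & BLOCKED_TERMS:
        return "block"    # clear policy violation, stop before publishing
    if len(post) > 5000 or post.count("http") > 3:
        return "review"   # suspicious shape; route to ML or human review
    return "allow"
```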

[0:19:52] Guy Podjarny: Yeah. I'd love to drill in, though, on both the security aspect and adherence to policy, the protection that goes beyond technical security. On the security side, when you think about complex formats like video or others, is it a real-world concern that, for instance, one user uploads a video that might actually trigger malicious activity on other users' devices? An attacker using the platform as a distribution vehicle. I know that in the world of simple textual forums, that would be persistent cross-site scripting and things like that, in which a user might use the platform to propagate literal code that might reach users. I'm curious, is that just theoretical, or is it real in practice, beyond those maybe more banal examples, as real as they are?

[0:20:44] Roland Cloutier: Yeah. I think in practice, what you're talking about there is probably not code, but probably off-platform direction. Most of the platforms that I've seen out there, any of your major platforms, focus on things like what's being delivered if it's a user-to-user engagement; do they even allow that? Are links allowed? Are those links verified and validated through URL defence mechanisms? Those are the standard things that could happen.

Guy, why don't we just take a step back? Because I think one of the biggest things we forget about is identity, right? Who is on the platform? What are they doing on the platform? What's their purpose for being there? You can have the best platform in the world, but if you're not doing some of the basic things from a protection standpoint with regards to identity or access, you could have two-thirds of your user count actually be non-human, right? You can have bots and different types of synthetic identities on your platform. I think where a lot of platforms have landed, the ones I have learned from, listened to their technologists, and seen implemented in different places, is a focus on understanding who's coming. Is that a real person? What are they doing? How do you ensure the consistency of that assurance from the time of signup, through authentication, to platform use? That is not as easy as it sounds.

My hat's off to the men and women who have a full career focused on understanding identity, use of identity, and historical identity across a platform. Those are three things that sound pretty simple, but they are a massive, massive effort in these larger platforms where user-generated content is at hand.
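
As a concrete illustration of signup-to-use identity assurance, here is a hedged sketch of a risk score for non-human accounts. The signals, weights, and thresholds are assumptions for the example, not any platform's real logic.

```python
from dataclasses import dataclass

@dataclass
class SignupSignals:
    email_domain_age_days: int  # freshly registered domains are riskier
    accounts_from_device: int   # many accounts from one device suggests a farm
    captcha_solve_ms: int       # implausibly fast solves suggest automation

def bot_risk_score(s: SignupSignals) -> float:
    """0.0 = likely human, 1.0 = likely bot; weights are illustrative."""
    score = 0.0
    if s.email_domain_age_days < 30:
        score += 0.3
    if s.accounts_from_device > 5:
        score += 0.4
    if s.captcha_solve_ms < 1500:
        score += 0.3
    return min(score, 1.0)

# A platform might route scores above, say, 0.6 to step-up verification
# rather than hard-blocking, since every individual signal has false positives.
```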

[0:22:28] Guy Podjarny: Do you find that to be a solvable problem, having seen a bit more of the detail? When you think about bots, historically everybody's been suffering from this; Facebook, and Twitter very famously recently, talking about fake users. They game the algorithm. They mimic history. Attackers just need to be slightly more patient, so that they replicate human-like behaviour; we're not that sophisticated as humans. We post, whatever, pictures of our dogs and our children on the Facebook channel, and some links to a few articles. I guess, how optimistic are you about our ability to detect that these are bots and not humans, against an attacker that isn't just a flash activity, but someone who has nurtured a bunch of users on the platform?

[0:23:12] Roland Cloutier: What I'll say is that we'll continue to attack the components of the problem. Organisations will get 75% to, call it, 85% of the problem at hand.

[0:23:22] Guy Podjarny: As in, they will manage to identify 75% to 85% of the users that are actually not users; that are bots, that are automated?

[0:23:28] Roland Cloutier: Right. If they want to. I'll leave it there. If organisations want to. I think the technology is available. Platform-specific technologies are being developed by great teams in some of these really huge technology platforms. That's today's problem. You have computer farms, if you will, in different countries around the world that house mobile devices and hardware devices, and that are just creating identities and trying to be subversive against the defensive technologies in place. It's a massive-scale operation.

Where this is going, though, is synthetic ID: I actually look like a human, feel like a human, and smell like a human to technology. I think we're going to have to continue to make iterative investments in this area to really be able to identify those issues at scale. I saw it in the financial space when we were protecting billions of dollars in money movement, in the fraud space, when card platforms and FinTech technology started coming up. Now, of course, we see it with social. I think this next generation will just have to focus on true synthetic identity, and how we stop that at that level of scale.

[0:24:47] Guy Podjarny: Yeah, that's super interesting. In fraud, for instance, today, increasingly there are all these algorithms that raise a suspicion, and once a suspicion has been raised, I think the best practice today is to prompt me on my phone and use my face ID, or my fingerprint, to say, "Yeah, this is me." In which case, it's hard to be more confident than that, that it is indeed you making the purchase. You're seeing social media potentially going down a similar route. Is that right? Did I get it correctly?

[0:25:20] Roland Cloutier: Yeah. It really depends. Understanding your identity and having mechanisms to validate that it is you, not just to an email, or not just to a phone number, but having the capabilities to do that. Most of these companies have some multi-factor authentication. I was a content creator myself on that side of the platform; I use MFA. No different than my bank. I've learned to live with it. Will that get better? It already has. Hey, open this app and say it's you. Okay. You're not searching for your Google lot anymore. They're doing different things.

[0:25:55] Guy Podjarny: One challenge that comes to mind, and I get the technology aspect of it, but one thing that probably raises some concern, is privacy. Generally, if I'm making a purchase and I want to validate myself to my credit card company, I'm not really concerned about the credit card company knowing who I am. At this point, hopefully they already do; in fact, I'm trying to prove that. But on social media, privacy is a real topic, right? I might want to publish things on the platform, but not have the platform have my fingerprint; be able to identify that it's actually me at that deeper level.

[0:26:34] Roland Cloutier: I think when you allow the technology at the device level to be managed by the users themselves, they can set that standard. For instance, I probably shouldn't tell the world, but I use an iPhone, as does half the world, right? When I authenticate with a fingerprint, the application I'm authenticating to does not get my fingerprint. Apple has it on the phone. Apple turns that into a digital sequence that then says either it is true or not true. I'm not giving up my identity. I'm simply establishing a link to something that I have, to prove that it is me that opened that account and did that.
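
The pattern Roland describes, where the device keeps the biometric and the service only learns true or false, is roughly the shape of FIDO2/WebAuthn. Here is a heavily abbreviated sketch using the third-party cryptography package; enrolment attestation and secure key storage are omitted, and both sides are collapsed into one script for illustration.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment: after a successful local fingerprint check, the device
# generates a keypair; only the PUBLIC key ever leaves the phone.
device_key = Ed25519PrivateKey.generate()
server_stored_pubkey = device_key.public_key()

# Login: the server sends a random challenge; the OS releases the
# signing operation only after the fingerprint matches locally.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# Server side: verify the signature. No biometric data is transmitted;
# the server learns only "the enrolled device/user signed this".
try:
    server_stored_pubkey.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```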

I don't have to get into whether we should use our real names or pseudonyms; that is up to the individual person, from a privacy perspective, what they want to do. But to guarantee and protect the account of an end user, who not only wants to be private but doesn't want someone else using or taking over their account, simplified things like that, that put it in their hands, let them decide how they will or will not authenticate, and give the user those decision points, I think, enable the level of privacy control that a consumer is looking for today.

[0:27:41] Guy Podjarny: That makes a lot of sense. You're getting the biometric validation without actually getting the individual's data. I wanted to ask about another aspect that you mentioned before, which is how you protect your users from maybe nefarious content. The analogy that comes to mind is thinking about the users of your platform as an extension of your enterprise users. In the enterprise, you're trying to keep your employees from clicking the wrong link, or being lied to, or being tricked into going to some secluded location, or any of these harmful behaviours. Do you feel like those approaches and techniques from enterprise security extend to users of a social media platform when you have this type of user-generated content? I guess it's not exactly like email, but they're consuming it on your platform?

[0:28:30] Roland Cloutier: I don't think there's a one-to-one correlation, Guy. I think you really have to focus on the data here. What are you doing? Whether it's securing the data, or doing policy validation and safety checks against the data for appropriate use, or security of the data, meaning I've marked it as private and it's not allowed to go here, whatever it may be, you have controls in place specific to the platform you have.

What is the same is that end-to-end process validation, right? I have a process. It's a technology process: the users making the content bring data in. That data's going through a process. I have controls to protect that process. Does the process work? We've always said this about controls, control validation, and control assurance; it's the same thing, right? It's just data being used in a different way.

I often tell people to focus on the end-to-end process to make sure that it's secure, to make sure that you have the controls and monitoring capabilities. If it's a preventative thing, you have preventative capabilities in there, and you can learn from the output of those controls. In that way, and that way only, it's probably the same thing.

[0:29:41] Guy Podjarny: Yeah. Just the methodology of it, but a different set of concerns. That actually leads me naturally to the constant conundrum in the world of security, which is measurement. How do you know? You have this platform, you're protecting a gazillion users from harmful content, above and beyond the measurement of your own internal security program, keeping the user's video private when it's tagged as private.

What are the types of KPIs or measurements that tell you something isn't just a feel-good security control, but has an actual impact on making your platform more secure and your users safer?

[0:30:19] Roland Cloutier: This can go in so many different ways. It gets back to control efficacy. I know that sounds like a cheap way out, so I'll explain it a little bit, right? What is the efficacy of the control that you've put in place? Whether it's for user-generated content, or it's a defensive process you've put in for the use of a specific technology, making a video or sharing pictures, whatever it may be: understanding the instances of when those controls worked, when they didn't work, how often they're used or not used; getting into the actual details of the data.

We actually talked about this three years ago. We said the most progressive part of our industry is going to be training next-generation analysts and data analysts to be able to make sense of the information coming from our tools and our capabilities. This is one of those things: breaking down how you prove the efficacy of a specific control. You mentioned video, so I'll make up a silly example, not for any specific platform: a gun. Maybe a top-five platform says, we don't want guns shown in videos on the platform. I'm not making any judgments about that. I'm just using it as an example.

You have a gun-detection technology that your trust and safety organisation implements to monitor for that in videos. Being able to actually go back and see how many times that control was used in pre-authorisation to publish a video, and how many times things made it past it; false positives, true positives. Those things are still important. You have to dive deep into that data and provide a mechanism by which you're getting frequent information on it. I mean, instead of having 30-day reports on controls, have daily reports on controls.
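
A minimal sketch of what such a daily efficacy report could compute, assuming review outcomes get labelled true positive, false positive, or false negative through appeals or manual review; the event shape is an assumption for the example.

```python
from collections import Counter

def efficacy_report(events):
    """events: iterable of (date, outcome), outcome in
    {'true_positive', 'false_positive', 'false_negative'}."""
    by_day = {}
    for date, outcome in events:
        by_day.setdefault(date, Counter())[outcome] += 1
    for date, c in sorted(by_day.items()):
        tp, fp, fn = c["true_positive"], c["false_positive"], c["false_negative"]
        precision = tp / (tp + fp) if tp + fp else 0.0  # of fires, how many were right
        recall = tp / (tp + fn) if tp + fn else 0.0     # of violations, how many caught
        print(f"{date}: fired={tp + fp} precision={precision:.2f} recall={recall:.2f}")
```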

By the way, this tells you that you may have a different issue. For example, maybe an organisation is trying to misuse your platform in some way. They're not trying to hack it. They're not trying to break it. They're trying to misuse the platform for their criminal –

[0:32:27] Guy Podjarny: Commercials. When they're not allowed to do commercials or some –

[0:32:30] Roland Cloutier: Yeah. Commercials, provocative speech, whatever it may be. All of a sudden, if you're instrumenting these controls and you see major jumps in them on a daily basis, this gives you actionability: you can go re-evaluate whether you have the right controls in place, or whether you need to make changes to your existing ones, as well as whether you're getting additional manual complaints inbound that you missed, that the rest of your community is having a discussion about.
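
The "major jumps" signal can be as simple as comparing each day's trigger count to a trailing baseline. A minimal sketch, with the window and three-sigma threshold chosen arbitrarily for the example:

```python
from statistics import mean, stdev

def spike_days(daily_counts, window=14, sigmas=3.0):
    """daily_counts: trigger counts per day, oldest first.
    Yields the indexes of days well above the trailing baseline."""
    for i in range(window, len(daily_counts)):
        base = daily_counts[i - window:i]
        mu, sd = mean(base), stdev(base)
        if daily_counts[i] > mu + sigmas * max(sd, 1.0):  # guard near-zero variance
            yield i
```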

Being able to have that data analytics, and having people who understand how to create those capabilities, is super, super important. I was talking with the former head of the European CERT just this morning; we were talking about having a deep understanding of the totality of the data coming from your infrastructure, your operations, your trust and safety, and your cyber defensive devices, all at the same time, to be able to make these intelligent decisions, using it as a decision-support platform in a very automated way.

It's no different here, right? This is large-scale operations: very specific controls for very specific things. Being able to capture that, relay it in a humanistic form that someone can understand, and then use it for triaging both efficacy and problems on the platform.

[0:33:49] Guy Podjarny: Yeah. I think that makes a lot of sense. In some respects, it might even make it easier, or at least more programmatic, to measure that than to measure the unknown risk of someone potentially hacking you, where the feedback data points you have in terms of breaches or semi-breaches are hopefully far fewer than the number of violating videos coming in. Connecting this to the comment on the 75% to 85% efficacy, it really boils down to the hosting organisation's interest and dedication to tackling these things, because these things can be pricey, whether it's detection technology or a bunch of analysts trying to find a gun in the videos that were missed. First, they need to care; then they need to be willing to put in the dollars to make that happen.

[0:34:39] Roland Cloutier: Yeah. I think you're now starting to get into sensitive regulatory grounds, right? What is the world going to do, having focused so long on data privacy and data transmittal defence and things of that nature? What's next for regulators in this area? What do we expect them to ask companies to defend against, to manage, and to report on? With the difference in how online platforms are being used, I think we'll continue to see development in regulatory pressures in that area.

[0:35:12] Guy Podjarny: Yeah. The regulations being applied there, like a lot of the hot topics, are probably around content moderation, around inciting people to different activities. There's that category of regulation. It's interesting, actually, to continue down this route and talk about regulation for security itself. One of the topics we very casually touched on in our last conversation, and which has also come up in a few other conversations on the show, was this notion of potentially increased regulation in the security space itself.

Actually, just as a bit of an anecdote, it's interesting because when I think about supply chain security, which is a topic that may also attract regulation, the best corollary I can find is fake news. It's this bunch of user-generated content, the open source libraries out there in the world, generated by who knows whom; I don't know that there are a lot of bots that write code, but there are definitely people who aren't necessarily using their own true identity. Then that gets disseminated and multiplied in many places, and now people are trying to figure out its veracity, where it came from, and how trustworthy that source is.

I actually feel like those end up being somewhat analogous, fake news and supply chain security. Feel free to critique or condone that analogy. I guess, how do you think about regulation above and beyond media regulation, or content moderation regulations and laws? Do you foresee increased demands, in terms of how well you're protecting the data, from the security regulations themselves coming from governments and markets?

[0:36:52] Roland Cloutier: Absolutely. We're seeing it now. We're seeing test beds at different layers and levels of it. I think you and I have talked about this before, but my three focus areas in this coming year are around data defence and access assurance and what that means to corporations and private entities. Transparency is number two: how do I have transparency in my environment, not just the CMDB and the technology and the code, but the data all along the way? Then the third is around how technology is impacting that, through bias defence and other things.

The reason those are my top three is that regulation is coming out in all three of those areas. There is an expectation of your capability to defend your environment. We saw some large fines last year for companies that governments, regulators, and/or courts thought should have had a better understanding of their environment. We're seeing the testing of new laws in Europe on understanding who has access to consumer and citizen data from any given country, and the enforcement of that. We're seeing it in South America, in South Korea, and in different places, in the form of different jurisdictional regulatory controls.

I'm saying this because the people responsible for enforcing that are the security people, right? Not just "the privacy teams." In some more mature organisations, you have privacy engineers within the CPO's office. Predominantly, in most organisations that I'm aware of, a CPO helps understand, decide, and define what the rules and regulations mean to an organisation. Your security teams have to enforce that. I definitely, definitely see more coming. We're seeing more in the US, and we'll continue to see more state-level regulations come into place in the United States. We're seeing the maturity grow certainly across the globe.

I don't see it slowing down. I see it focusing on more prescriptive requirements, especially where data is concerned and especially with understanding the controls and capabilities in your environment.

[0:38:52] Guy Podjarny: Got it. It sounds like you're primarily highlighting a path that runs from privacy to security: you have a consumer or customer commitment to keep their data safe, which has privacy implications and business implications, but the actual burden of making that happen, of sticking to the regulations, falls on the maybe bigger and better-equipped security team within the organisation.

[0:39:19] Roland Cloutier: Being able to prove it. Getting back to analysts, getting back to data, right?

[0:39:23] Guy Podjarny: Show that it's actually auditable.

[0:39:24] Roland Cloutier: It's auditable, it works, and it's in place. Not signed attestations; that's not going to cut it in the near future. You're going to have to actually be able to prove it, down to the number of records. These are real things that nation-states are asking of different people in critical infrastructure and in other industries right now, but it will become a mainstay.

[0:39:43] Guy Podjarny: Yeah. Is that different, or is it just another aspect of knowing your security posture for your activities? I mean, if you're securing, whatever, a power plant, which granted is separately regulated, then your actions are also relevant. It's not just the user data, right? If someone took control of a social media platform and started posting fake tweets on behalf of individuals, you're not protecting their data, but you are securing the platform, and there might be implications. Do you see those as two different swim lanes, where one is really about protecting my data, being transparent about how you're protecting it, and knowing that it works, and the other is protecting the system so it's not used for harm? Or are they really –

[0:40:23] Roland Cloutier: I do. I think governments look at it differently, and I think as a manager or a leader in this industry, you have to look at it differently. Both have processes. They both have controls. They both have those things. But since we've been building infrastructure defence and cyber defensive programs over the last 23 years, understanding controls, defence in depth, stacks, technology, all of that stuff is a little bit easier than: where is my data? How does it replicate across microservices? What jurisdiction does it sit in for any specific user from any given nation-state? Who has access to it? From what country? From where? And being able to report on that. It's a separate process.

I think people have to take a data-centric approach in parallel with the infrastructure and technology defence approach, and be prepared, based on the industry they're in and the level of expected regulatory compliance in the jurisdictions they're operating in, to create both the preventative and defence programs, as well as the reporting programs, for each. That's four programs you're talking about; two for each parallel stream.
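
One way to picture the data-centric side is a queryable inventory with one record per dataset: where it lives, whose citizens it describes, who can reach it, and when each control was last validated. A hedged sketch, with illustrative field names and values that are assumptions for the example:

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    name: str
    jurisdictions: list       # where copies and replicas physically live
    subject_countries: list   # whose citizens' data it contains
    access_roles: list        # (role, country) pairs with access
    controls: dict = field(default_factory=dict)  # control -> last validated date

ASSETS = [
    DataAsset("user_profiles", ["eu-west"], ["DE", "FR"],
              [("support-agent", "US"), ("sre", "IE")],
              {"encryption-at-rest": "2024-01-02"}),
]

def who_can_access(citizen_country: str):
    """Regulator-style query: which roles, from which countries, can
    reach data about this country's citizens?"""
    for asset in ASSETS:
        if citizen_country in asset.subject_countries:
            yield asset.name, asset.access_roles
```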

[0:41:39] Guy Podjarny: Yeah. I guess, if you were to take a crystal ball and think a little bit about the timeline for this, you're seeing active work there. Should security leaders thinking a year or two out start anticipating they're going to need it, or is it further out than that?

[0:41:53] Roland Cloutier: I think you'll start seeing specific enforcement on the data side in three to five years, especially in specific industries. You're probably only two to three years out on the infrastructure side. You see all the great work at –

[0:42:04] Guy Podjarny: Supply chain security, a lot of it is already actually coming into play. You're seeing it in the executive order, but the expectation is that it will become regulation as well.

[0:42:12] Roland Cloutier: I think that will become regulation. I think public companies will see SEC adoption of some of these things. Certainly, our brethren in the financial sector have been feeling that for a while. They've been having to do third, fourth, and fifth-party attestations and so on. We'll see that in healthcare, with all the attacks on it. Of course, utilities and critical infrastructure, which are now being targeted in active conflicts around the globe; I think you'll see it there as well. Two to three, maybe two to five years on active enforcement on the infrastructure and tech compute side, and on the data side, you're probably five years out before that becomes a full reality.

[0:42:49] Guy Podjarny: Before it's actually enforceable. Well, we'd have to build a bunch of those different controls first. I guess, when you say government regulation and security, there are definitely a good number of people who will freak out. One of the primary concerns that comes to mind is the government's ability to really get security, which is a complicated, already hard-to-measure space, fast-moving, and oftentimes quite technical. Contrast that with, say, the generally accepted accounting principles, where finances are more mathematical, methodical, and tractable; although people get pretty creative there as well.

I guess, how do you think about it? Are you hopeful about some increased government regulation of security? Do you think it's possible to do that well, or would it just be a bare minimum and, in effect, a tax? How do you think about that?

[0:43:48] Roland Cloutier: That's why I'm not in public policy. There's a part of me that says it's going to be extremely hard to do. At what level do you do it? Jimmy's Pizza uses an online app to let customers order pizza. Are you going to audit them? The IRS does, but is the government going to do it at that level? I mean, there's a reasonability. You can apply reasonability to that.

Then again, look at the finance cyber defence and risk management programs five years ago. If they had known they would be getting the questions they're getting now from their regulators, and there are three to five regulators, depending on who they are. That will –

[0:44:24] Guy Podjarny: Probably be more detailed than they thought they would be.

[0:44:27] Roland Cloutier: Yeah. Certainly, they're asking very detailed questions, proof around audit evidence, operational data evidence. They're getting to that level. I've been on the receiving end, in other parts of the world with specific laws, of asks for very specific data that proves the efficacy of the controls and the durability of your environment, which is smart. I think you'll see that what they're asking for is a little bit different, and that it will prove efficacy. It'll get more detailed.

The questions are at what scale and at what level they demand it, for what types of companies, and whether the targets will still be those most impactful to the consumers or the citizens of their countries. They're getting more technical. They're certainly doing critical infrastructure. I see that bleeding into a lot of different areas.

[0:45:09] Guy Podjarny: I guess a simple enough path would be to take existing industry standards and just apply them as government requirements, right? Instead of saying, "I won't buy from you unless you are SOC 2 certified," a market can say, "You can't list here unless you are certified," or a government can say, "Everybody within this industry that handles this volume of business has to be SOC 2 certified." In which case, you take an existing industry standard and put it to work as the solution?

[0:45:37] Roland Cloutier: That's one way to do it, but I don't know if I agree with it. We've always had this discussion about, again, the efficacy of a program versus a compliance mindset, right? You've seen organisations that passed their ISO audits with flying colours, no observations, and they were breached the same week. I want to be hopeful that whatever is implemented is done in such a way that takes into account that this is not a checkbox compliance issue; that they're actually doing validation. Otherwise, why wouldn't they just accept a separate certification from an industry entity that's already been accepted by the rest of the business world?

[0:46:20] Guy Podjarny: That aside, it's interesting to think about the growth of security demands. It seems like, at the very least, that will continue in one form or another, and security scrutiny will grow. It doesn't sound like it would diminish, in your view, by any stretch of the imagination.

[0:46:34] Roland Cloutier: You mean, the security industry itself?

[0:46:36] Guy Podjarny: The level of security scrutiny that businesses get exposed to.

[0:46:40] Roland Cloutier: Yeah. That'll just continue to increase.

[0:46:44] Guy Podjarny: We're coming up a little tight on time. I want to make sure I ask you my open-ended question of the season, if you will. It relates maybe a little bit to the topics we've been discussing: if you had unlimited budget and resources, as much as you wanted, and you were going after a problem in the security space, what problem would you go after, and, if you have any thoughts, what approach would you take? It may or may not be related to the conversation we just had.

[0:47:12] Roland Cloutier: I've been spending a lot of time in this space recently; there are a couple of areas. The problem is around data defence and access assurance. I spent the last few years of my life really diving deep into how you defend data, how you ensure the sanctity of the information commitments you've made through controls, and how you validate that. It is a hard, hard problem. It's a hard problem for big companies with big money. I can't imagine being the CSO of a small shop, a multimillion-dollar, or maybe even a billion-dollar company, that's short on headcount but has that wave of regulatory concern coming at it.

My focus would be on that data plane. How do you have true intention transparency, data lineage management within your environment, and the ability to query data localisation and data access and validate data controls, all at the same time? I know it sounds like I've got ruby slippers on and I'm going to click the heels three times.

[0:48:15] Guy Podjarny: That's good. That's what we're asking for.

[0:48:16] Roland Cloutier: I think that is where we have to go. A lot of people use four, five, six, seven technologies to get that done. I think we as an industry need to take a step back and ask, how do we do that better? How can we do it quickly? Because it's going to be here faster than we think. That is one major issue. Another major issue is the world of microservices.

[0:48:39] Guy Podjarny: Though might scatter.

[0:48:41] Roland Cloutier: It's the new shadow IT, right? You put all these controls on your edge and on your containers, and you're able to watch APIs. Then you also have this entire platform of microservices that manage themselves, independently and together, through code, and have their own infrastructure. How do you defend that? How do you defend what runs between those microservices? What's exchanged? How do tokens work in that environment? That is a big problem. Most people, when you take them down to that level, are really, really shocked when they start looking at the microservices within those environments. Before that gets out of hand, I think that's another area I would spend a lot of time and money on.
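
For what runs between microservices, one common baseline is that every internal call carries a short-lived signed token that the callee verifies before doing any work. A minimal sketch using stdlib HMAC; real meshes typically use mTLS or platform-issued workload identities (SPIFFE-style), which this does not implement, and the key below is an illustrative placeholder.

```python
import hashlib
import hmac
import time

SHARED_KEY = b"example-key-from-a-secrets-manager"  # placeholder, never hard-code

def mint_token(caller: str, ttl_seconds: int = 60) -> str:
    """Issue caller|expiry|signature for a service-to-service call."""
    expiry = str(int(time.time()) + ttl_seconds)
    msg = f"{caller}|{expiry}".encode()
    sig = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return f"{caller}|{expiry}|{sig}"

def verify_token(token: str) -> bool:
    """Callee side: check signature and expiry before doing any work."""
    caller, expiry, sig = token.split("|")
    msg = f"{caller}|{expiry}".encode()
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and time.time() < int(expiry)
```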

[0:49:20] Guy Podjarny: To control it. To put a positive spin on that, if someone does get control over that, maybe we secure the units a little bit better and actually level up the security of the whole. Right now, it's a bit of a mess. We might get there.

Huge thanks for coming on to the show and sharing further learnings. I'm sure we could get you on here ten times and keep getting great gems. Thanks for coming and sharing these.

[0:49:45] Roland Cloutier: Yeah. Well, thanks for having me back. I appreciate it. It's always a great conversation.

[0:49:49] Guy Podjarny: Thanks everybody for tuning in and I hope you join us for the next one.

[END OF INTERVIEW]

[0:49:57] ANNOUNCER: Thanks for listening to the Secure Developer. That's all we have time for today. To find additional episodes and full transcriptions, visit thesecuredeveloper.com. If you'd like to be a guest on the show, or get involved in the community, find us on Twitter at @DevSecCon. Don't forget to leave us a review on iTunes if you enjoyed today's episode. Bye for now.

[END]
