
Season 3, Episode 16

Security Training With Masha Sedova

Guests:

Masha Sedova


In episode 16 of The Secure Developer, Guy is joined by Masha Sedova, co-founder of Elevate Security, to discuss how training for employees (even developers) can help companies stay one step ahead of the pack when it comes to preventing a breach.



Masha Sedova: “I think a lot of people know much more about security than they give themselves credit for. You think that it's something that happens to other people. A security breach is something you see on the news, but it's not something that will happen to you. It's not necessarily about outrunning the tiger; it's about outrunning your slowest friend. I find training developers actually to be much harder than regular employees, because there's a certain amount of arrogance associated with 'I already know this' or 'I'm smarter than this.' It's really hard to get past that shield of 'you're wasting my time.'”

[INTRODUCTION]

[0:00:36] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow. The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter, @thesecuredev.

[EPISODE]

[0:01:07] Guy Podjarny: So, welcome back, everybody. Thanks for tuning back in to The Secure Developer. Today, we have with us, Masha Sedova from Elevate Security. Welcome, Masha.

[0:01:15] Masha Sedova: Thank you so much for having me.

[0:01:16] Guy Podjarny: Can I ask you to just introduce yourself a little bit and Elevate Security?

[0:01:19] Masha Sedova: Yes, absolutely. So, I've been in the security space for 16, 17, 18 years. I've lost count at this point. But it's been a passion of mine for as long as I can remember. I started in a very traditional space, mostly computer forensics and cybercrime analysis. But over the course of my career, I became obsessed with this idea of what it would look like if we could get people to want to do security, instead of have to, and why don't people want to do security?

So, I started merging my passion for computer security with the study of behavioural science, which is the study of how people do things, why they do them, and how habits change. I started applying that to my work, initially while I was running a team called Security Engagement at Salesforce. In that role, I was responsible for the security behaviour of general employees, of developers as it applied to writing secure code and fixing bugs in code, and of customers, getting them to adopt secure features in the Salesforce platform.

I got to do that for a phenomenal five years, and I built an amazing team. From there, I wanted to take my learnings, apply them to other companies, and share the best practices I had found. So, that took me to starting my own company, which I co-founded with a fellow security engineer named Robert Fly. We started Elevate Security at the beginning of last year, and it's been an amazing ride so far. What we are focusing on is creating a platform that can measure, motivate, and educate employees to be the strongest link in security, however that looks in different organisations. So, really getting employees to understand their role and be able to be advocates for security in their organisations.

[0:03:14] Guy Podjarny: That's cool. That definitely is a worthy goal. Not an easy one, I guess. How did you even get into security in the first place, like into that forensics part?

[0:03:24] Masha Sedova: Yes. I love the idea of there being a group of good guys, and there being a group of bad guys, and being able to defend the good guys on a regular basis against an evolving threat. No two days are ever the same. So, it was an incredible application of a standard computer science degree into one that had much more of a dynamic attacker/defender landscape. Then, when you start including the human factor in there, I was all the way in.

[0:03:54] Guy Podjarny: I think security definitely has a certain mystique to it from the outside, with the talk of good guys and bad guys and all the security trappings. When you get into the weeds of it, there's just that much more complexity. It's not quite what you see in the movies.

[0:04:09] Masha Sedova: Yes. I actually think that the mystique of security doesn't do the field a lot of good. A lot of people know much more about security than they give themselves credit for, and they write it off and say, “I don't understand this whole complex security thing.” In fact, we have a real ability to recognise threats and to notice when we're being psychologically manipulated into giving up information that isn't public. A lot of people have intuitions around that, and experience in their lives, to figure out what's good and what's not. With some basic tools and support to understand where attacks can come from and how you can defend yourself, a lot more people can be much more effective at security, and they really should give themselves much more credit than they do right now.

[0:04:54] Guy Podjarny: Do you think one of the core reasons people make insecure decisions, or security mistakes, is indeed that kind of self-assessment of their own skills? Maybe I'll just ask it broadly, if you had to simplify down something quite complex: why do you think people don't apply security well, don't make secure decisions?

[0:05:21] Masha Sedova: Yes. I think there are quite a few reasons, but the one I see most often is people thinking that security doesn't apply to them. What that looks like is, let's say I choose not to put on my seatbelt, because I think I'm a better driver than everyone else and I'll never get into a car accident, right? When in fact, you may not be in control of all those circumstances, and it's always better to put on your seatbelt for safety's sake. Same thing with security. You think it's something that happens to other people, so a security breach is something you see on the news, not something that will happen to you.

Most people get desensitised to the things they have access to that are of value. Even if you handle really critical information on a regular basis, it can become normalised, and you may not realise that the information you have access to, whether it's emails, or code, or personal information of customers, could fetch a really nice price on the black market, and that there are people in the world who are quite motivated to get access to that data. For many people, the key is understanding that they are in fact a target, and that they do have access to things that are of interest. If they don't take some basic precautions and have a baseline vigilance about their work, they will absolutely be the easiest target, and they'll make the next headline. The trick with security is that it's not necessarily about outrunning the tiger; it's about outrunning your slowest friend, which is a little bit ruthless. But hackers are inherently lazy. They want to go for the easiest target. So, just don't be the easiest target and you're pretty good. Unless, of course, it's a government-sponsored attacker, in which case, that's a different conversation.

[0:07:06] Guy Podjarny: There are some entities that rule doesn't apply to, yes. Let's talk about that for a bit as well, the notion that you just need to be better than the next one, the next company. Funny enough, we do that in the real world, right? We put up stickers for alarm systems that don't exist, just to fake it, so the burglar says, “Okay, you know what, maybe I'll go to the next person who clearly doesn't even have an alarm sticker.” Maybe no alarm system either. How can we remind people, though? When we think about security, there's no natural feedback loop, right? It doesn't hurt until it hurts really badly. How do we keep it in people's psyches? How do we get them not to forget, and to realise it might happen to them?

[0:07:49] Masha Sedova: Yes. You're getting at two really interesting questions. How do we perceive risks and threats? We perceive a risk when it's immediate, it's imminent, it feels immoral. There's a study on this. But the idea is, it's in front of us, or I know it's about to happen, or I've just recently seen it happen, so I can recall it in my mind. So, when we watch the news, we think the occurrences on the news happen much more often than they actually do. But the nature of news is that those events don't actually happen that often, which is why they've made the front page.

[0:08:18] Guy Podjarny: Otherwise, it wouldn't be newsworthy.

[0:08:20] Masha Sedova: Exactly. It wouldn't be news. So, for us to realise that security incidents are as ubiquitous as they really are, I think it's the responsibility of security teams, or the people who have access to this information, to start disseminating it to the people who are affected. What that looks like in practice is, if you're, let's say, an enterprise with a security team, it's important to show the incidents you've had to respond to and share them with, let's say, the developers who are writing your applications, and say, “We had to defend this app against an onslaught of hackers, and it either worked really well, because you developed it securely, so please keep doing that, because this was the impact of your work.” Or it led to an incident and a breach, because it wasn't written securely and wasn't tested appropriately.

So, now we actually have this case on our hands. Most developers I talk to don't actually believe security is an issue that happens at their company. They don't think it's real. If you could spend the time to prove to them that it is real, they would absolutely prioritise the work, and they would understand that they're fixing something worthwhile. But it's not a priority, because they don't actually think it's a real problem. So, that feedback loop you were mentioning earlier is absolutely something we have to share a lot more, which I would say is not a practice security teams are great at. The relationship with PR teams also makes it really difficult for security information to get shared, because in my experience, PR teams are often really nervous about incident information leaking and showing the company in a bad light, when in fact it could be an amazing lesson for people to evolve from. But often that information is locked down. It's, “You will never speak of this shameful incident ever again,” when in fact it's the most valuable teacher we have.

[0:10:21] Guy Podjarny: I love the analogies from everything you said to what happened in the world of DevOps. DevOps is all around measurement and dashboards as well. The mantra is: if it moves, measure it; if it doesn't move, measure it in case it moves. I like the idea of attaching that to security. I know that Etsy, at the time, created dashboards that showed the simplest types of attacks, SQL injection attacks and the like, and just charted the numbers of them, putting them up on the main dashboards alongside uptime and the other core metrics, to show and remind people that yes, we are getting attacked, many hundreds if not thousands of times a day, and those are just the simple attacks, to say nothing of the sophisticated attacks that evade detection.

Similarly, it used to be in the DevOps world that you didn't talk about failures. You didn't talk about outages. It was really around the Velocity conference, and the pioneering DevOps conferences, that people would come out and unashamedly talk about an outage, how they messed up, and then what they did to correct it. Security is still very, very scary, I think. O'Reilly is now running O'Reilly Security, and they try to be more inclusive in that respect.

There are a bunch of other events, DevSecCon and others, that want to do that, that want people to share, and it's still so hard, so hard to get somebody to come in and say, “Hey, we had a failed breach”, or maybe a successful breach with failed data exfiltration, where they managed to get in but did not manage to get any data, or even a successful one, or a near miss, something where we almost allowed them in. It's just so scary to share these things.

[0:12:00] Masha Sedova: Yes. Or when it gets shared, the story is so neutered of any valuable insight, right? It's almost as if it's not worth sharing at all. So, I hear you, yes. In my experience, I've seen that organisations that do have that insight and that feedback loop are much stronger for it. Because if you can't admit that you have failures, the only thing that guarantees is that you're going to have a failure.

[0:12:22] Guy Podjarny: Indeed. Sweeping it under the rug doesn't help. With that, actually, let's talk a little bit about your role at Salesforce at the time, because I think that was part of your responsibility there. Can you tell us a bit about what that team and that role were?

[0:12:35] Masha Sedova: So, as I was mentioning earlier, it had three components. The first one was general employee awareness. This was something that applied to all of the employees at Salesforce, which was about 25,000 at the time. My responsibility there was making sure that employees were mindful of not clicking on phishing links, reporting suspicious activity, reducing malware infection rates, and being careful of social engineering attacks on the phone.

From there, my role evolved into focusing on how we create the most effective secure code on the platform, which involved finding and fixing bugs faster. The last one was customer support. What I mean by that is, we had features in our product that customers could turn on, things like two-factor authentication or IP range restrictions. But customers were still given the choice of turning them on, even though they protected all of their data. From a brand perspective, obviously, if something happened to one of our customers, it reflected badly on the company, right? So, it was in our best interest to make sure that our customers were as secure as possible, even if it was on them to turn on those features.

[0:13:46] Guy Podjarny: Yes. You can probably even make the argument that it's indeed an aspect of the product: the usability of your security features, and in turn whether people actually use the right secure settings, is all part of your goal.

So, all three sound really interesting to talk about, but just given our podcast, let's hone in a little bit on that second on the secure practices. Can you tell us a bit about some examples of good programs, or tricks, or whatever, that helped out?

[0:14:14] Masha Sedova: Let me frame one of the challenges we wanted to solve, which is a challenge that exists across many organisations. That was the ratio of security professionals to developers in the organisation. The statistics I'm going to use are not Salesforce-specific, but they'll be roughly accurate for the industry. Let's say there is one security product-review person for every 100 developers, which might even be generous. Sometimes it's one to 200; I've seen even higher numbers than that. But that one poor security person was responsible for reviewing the code before it got shipped and signing off to say, “Yes, this passes our security tests. It's good to go. I know that hackers won't get into it.”

You can imagine the issues that come out of this. It doesn't scale. Backlogs get introduced. Bugs found way too late in the release stage are sent back to the developers for fixing, at which point product releases are delayed, which puts everything else into firefighting mode. So, I really wanted to focus on how we solve this core issue. It was a process issue as much as an education issue, as much as a security issue. It would be fantastic if all developers everywhere knew everything there was to know about cross-site scripting and SQL injection, and were on top of their security game. There are some companies that absolutely invest in that training. But it takes a lot of upkeep and maintenance, making sure there's plenty of hands-on training, specific to the code base as it continues to evolve, and that the knowledge is shared.

So, in the absence of that as a scalable solution, though I actually think that if you are small, you should do that too, for as long as you can, because you give developers the gift of secure coding knowledge they can take wherever they go. The second thing worth considering is that security, for better or worse, should not be tacked on at the end as an assessment, a box to check before it goes out the door. It should be part of QA, right? The same way that you have quality, and you have dependability, and you test your edge cases, you should also test your security edge cases. In which case, you may only need a couple of people on a team who know what they're looking for in that code. It could be your peer testers, it could be your QA engineer, but make sure at least one person on the team understands what they're checking for and can escalate before it gets too late, before things are already done and dusted.

What I focused on was identifying one security champion in every one of our scrum teams. Most of them volunteered. Some of them were voluntold. They dove into a security boot camp, so they understood, for their product, what were the common bugs we saw, what were the best solutions, and what were the critical cases that absolutely needed escalation to the security team. Those were things around architecture and design: if you're designing anything related to crypto or authentication, please include the security team. There's definitely a certain line where being immersed in security full-time is really required.

Then, there's a point where you don't need the security team, and you can empower those security champions with the knowledge to say, “I have tested all of our code for cross-site scripting, SQL injection, and CSRF. These are basic, known bugs with basic tests we can run against the code, and we know it's good to go.” So, you essentially have someone doing traffic control for security: “Yes, this needs escalation. No, it doesn't.” That made sure the issues that needed to be escalated were seen by the security team, and the issues that didn't need escalation weren't, and were handled as soon as they came up, so they didn't delay the process. What we found is that of the teams with these champions, 100% shipped their code on time, or if they had delays, none of them were security-related. So, pushing it further upstream, not having security be a blocker, was absolutely critical.

[0:18:29] Guy Podjarny: That sounds super useful. I guess, to an extent, it comes down to disseminating knowledge versus having tools do it. Starting point number one is that you're not going to make all developers security experts. It's just not going to happen, so acknowledge that. You can train them, and you should still level up their expertise, but they're not going to become the same professionals as the people who dedicate their careers to it.

But on the flip side, there is tooling and some automation for it. Champions, fair to say, walk a line in between, scaling the training to a subset of individuals who have better aptitude or interest, or are just willing to devote more time to it. How do you see that sequence? You've got the broader training. You've got the champions. What's the utopia of tooling in that ecosystem? If you fast-forward to a perfectly handled DevSecOps-type setup in a few years' time, how do you see that interaction of training and knowledge versus tools? Or I don't know if it's versus.

[0:19:42] Masha Sedova: Yes. So, in any capacity, whether it's DevOps, or educating on phishing, or malware, or password use, the human brain is the most fickle thing to deal with. Getting people to care, to do it on a regular basis, when they are busy, when they are hungry, when they are hungover or tired. There are just so many variables. It's such a complex and dynamic system that if you can solve a problem with tools, you absolutely should. Training really should be one of your last resorts, because it is hard to do. Retention of basic audio-visual content is at best 20%.

So, even if you have amazing content, depending on how you present it, 20%. If it's discussion-based, if you teach people by doing, it's 80% to 90%. But then you have this huge amount of effort where you have to engage people in discussions, and on an ongoing basis, right? It's not like you learn one thing and you know it forever.

There's a lot that's fallible about training, and I think we overuse it way too often. Where possible, tooling absolutely should exist. One of my favourite examples is password management. We tell people to have a secure, unique password for every one of their sites, and for years, decades, we didn't give anybody any tools. All we said was, don't write it down on a sticky note. And yet, it was totally impossible for any of us to remember them all.

So now, I don't even tell people about secure password practices. I say, just please download a password manager and use it. That's it. That's the only thing I educate on. That tool has solved things that education was never able to do. Same thing back to this conversation about secure development and secure DevOps: where possible, tools should absolutely be implemented. The trick, and I think the reason why we don't see it more ubiquitously, is that many coding environments and code bases are very unique to each organisation. To get it totally right, to make sure there are no false alarms, takes a lot of onsite customisation, and it's hard for a vendor to come in and build a perfect set of tools for your environment that they can then go and resell to every other company as well.

The companies that I've seen do it best are the mammoth companies who build the tools themselves in-house, custom, and then deploy them custom. That is not a solution most organisations can afford. There are some vendors that get pretty close, but it's never custom.

[0:22:29] Guy Podjarny: It's a good perspective. If you can automate it, that's probably your most scalable, most efficient element. But there's just so much to it, including that first part. We had Molly Crowther from Pivotal here on the show, and she talked about how Pivotal has cognitive psychologists as part of their team, which is already kind of mind-blowing on its own, and how they deal with the fact that, in general, they try to teach developers to be very empathetic to users, very trusting, that whole setup. The concept of DevOps and Agile, which they practice in the extreme, is also very trusting by nature, very accepting of mistakes. Contrasting that with the mindset of not trusting the user, of what happens if an evil user comes along and does something, is really quite hard, and they're still battling it. They're doing a good job there. But when you introduce those kinds of QA tests, when you actually run those tests, how do you push aside for a second the trusting nature we've been trained into, and put a bit of an evil hat on?

I guess, when you've encountered that type of test and you want to get people engaged, or thinking about it, do you have some best techniques for what gets people engaged and how they jump in?

[0:23:46] Masha Sedova: The best example, and it's a little self-serving, but the best example based on my experience is actually the very first product we built at Elevate, based on my work at Salesforce. What we did is introduce several of the hackers out on the landscape: the hacktivist, the cybercriminal, and the government-sponsored hacker. Then, in a group exercise, we walked every individual through understanding what they have access to that is of most interest to one of these hackers. Then we asked them to put on the persona of exactly that hacker and attack themselves.

You know yourself better than any attacker ever would. So then you walk through a path of: how would I be psychologically manipulated? Do I fall best for reward, or curiosity, or fear? How could that be used against me? Why would it even be used against me? So, you come to understand the people who are motivated to come after you.

If you can do it in a way that is light-hearted, that has an element of fun, it cuts through some of the seriousness of, “Oh, my god, I'm about to get hacked”, or, “Wow, someone could actually break into my code or steal this data”, which is real and quite a reality. I actually think that bit of shock and awe, seeing yourself or your code from the perspective of an outside, nefarious hacker, is one of the best ways to take away from a training, or a lecture, whatever the education methodology is, a baseline sense of paranoia: “This is real, and I need to be vigilant, and I need to know when to turn that vigilance on and off. But I now have this level of vigilance that I can apply to certain situations, because I now know what I'm looking for.”

Because if you just tell me, “Oh, watch out for SQL injections”, and you don't tell me why, or who, or what it is they're trying to get from me, the minute the attack comes from a different direction, I won't see it coming, because I'm only looking at the front door, and the hacker is going to come in through the back window. But if you can teach me about the ecosystem, why I should even care in the first place, what methodologies and tools an attacker might use, and give me the resources to come to certain conclusions on my own, I am much more capable of defending myself and my organisation. I'm also capable of writing more resilient code, creating more resilient processes, and having a much more sustainable, secure ecosystem.

[0:26:20] Guy Podjarny: Yeah, that's a really, really good tip. I actually apply this in talks that I give around security, where I engage the audience in hacking a vulnerable application. What you describe is very broad, and I can see how it can be applied to phishing or to many other things, but it can also apply to just trying to break in. I've got a vulnerable application, and the audience might shout out: there's a cross-site scripting flaw here, what might you try? I have this very unsubstantiated belief, there's no hard data behind it, that this probably helps with retention as well. I guess you did say 80% to 90% retention if you get people engaged in doing, but you have to invest in building that up.

[0:27:00] Masha Sedova: Yes, exactly.

[0:27:02] Guy Podjarny: That definitely sounds like a really good tip.

[0:27:04] Masha Sedova: Sounds like you're already on it.

[0:27:06] Guy Podjarny: We've talked about developers here, by nature of this podcast and the audience that listens to it. But you deal with training the broader employee population as well, right, with how people act securely. How do you see the two differ, if at all? How would you say it's different to train developers versus everybody else in security?

[0:27:29] Masha Sedova: I find training developers actually to be much harder than regular employees, because, and sorry in advance to all the developers listening to this, there's a certain amount of arrogance associated with “I already know this”, or “I'm smarter than this”. It's really hard to get past that shield of “you're wasting my time”. In fact, it's not entirely a developer's fault, because I would have to say most security trainings have been a waste of time for many employees and aren't respectful of people's time or intelligence. So, I can't blame them entirely.

But there's a certain arrogance that comes from dealing with very smart, especially engineering, types. Sometimes that arrogance isn't well-placed. I'll give you an example. I've run many hundreds of phishing tests over my career, and I have found that the top group that clicks on phishing links is marketing. Then, within about one degree of difference in opening and clicking phishing links, come developers.

Marketing, I feel, is forgiven, because their entire job is to click on incoming emails and resumes and links. I get it. That's fine. But developers feel like, especially if it's a well-crafted attack, “I would totally see this. It wouldn't affect me. Plus, my machine is so locked down.” Without understanding the nuance of some really sophisticated pieces of malware that may be directly targeted at them. So, again, it won't happen to me, and if it does, I would catch it in a second. It is that arrogance that absolutely gets a lot of developers into trouble, at least in my own tests.

So, I think things like phishing best practices, educating people on what not to click on, particularly the very specific malware targeted at developers, are worth educating anybody on, but obviously make it role-specific. I also think there is immense value in teaching developers, specifically, how to write better code. So, unfortunately, developers might need two doses of security. But it's not limited to them, right? If you're in customer support, you need to be mindful –

[0:29:42] Guy Podjarny: Sharing customer data.

[0:29:42] Masha Sedova: Exactly, right. So, every role has additional layers of how it applies specifically to the data that they have.

[0:29:48] Guy Podjarny: Yes, indeed. I can definitely see the developer aspect. Maybe the crowd that is worst in terms of overconfidence in their skills, as we say, is security. Security people might be the next tier up there, although hopefully their security consciousness is a little bit higher too.

[0:30:08] Masha Sedova: Yes. Not going to root for that one.

[0:30:09] Guy Podjarny: Not always. I would also add that developers probably have a bigger sphere of influence. If you compromise a developer, you can compromise their code, which in turn compromises their users, and that's a big deal. We've had some big attacks like that, with XcodeGhost and with a bunch of Chrome extensions, where malware is becoming a bigger and bigger issue. So, I can definitely see how that attitude keeps the knowledge out, and subsequently the damage can be multi-fold. It's definitely worth doing the broader security education, not just secure coding, to keep developers' machines, and developers themselves, secure.

[0:30:48] Masha Sedova: Yes. One of my favourite avenues of attack that I've seen comes from red teams in particular. So, people who are hired to break in, the good guys who get paid to hack in. Their favourite way of attacking developers is going through a lot of engineering docs. There's a lot of things documented, like, "This is how we check in code. This is our testing process." They'll compromise one developer's laptop, hop on a group chat channel, and scroll through history, where they'll find, "Oh, yes, the credentials to this site are here." So, you're sharing internally, thinking you're internal and safe, and you have your whole architecture, your process for how you check in code, your onboarding docs, essentially, for your new engineers. You download one of those, you go through some of the chat logs, you get all the appropriate credentials, assuming you're not handling them correctly, which tends to be the case most of the time. You put those two things together, and you don't need the most sophisticated hacker in the world to be able to pivot from that point on.
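The chat-history credential hunting Masha describes is exactly what automated secret scanners try to catch before an attacker does. Below is a minimal, hypothetical sketch of the idea in Python; the patterns and names here are illustrative only, and real tools such as gitleaks or truffleHog ship far larger rule sets:

```python
import re

# Illustrative patterns only; production scanners use hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_password": re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (line_number, rule_name) pairs for lines that look like leaked secrets."""
    findings = []
    for line_no, line in enumerate(text.splitlines(), 1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((line_no, name))
    return findings
```

Running something like `scan_text` over an exported chat log or onboarding doc flags lines such as `password = hunter2` so the credentials can be rotated before anyone pivots on them.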

[0:31:47] Guy Podjarny: Yes, indeed. I think this conversation in general makes a really good point. When we talk about secure development or securing developers, I would say 99% of the conversation comes down to secure coding. But an area that is very lightly discussed is developers securing themselves, securing their environments and not falling for these attacks. Because developers are early adopters. I mean, we install stuff from the web day in, day out. The number of development pipelines that download something from the web and just pipe it blindly into sudo bash, into some shell, is quite troubling.

So, some of those best practices around securing our own systems and environments are really, really key. I'm happy to have the chat. I think there's probably room for whole sets of best practices around developers securing their environments.
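One concrete alternative to piping an installer straight into sudo bash is to download it to disk, verify its published checksum, inspect it, and only then run it. A minimal sketch in Python, assuming the project publishes a SHA-256 digest alongside the download:

```python
import hashlib

def verify_checksum(path, expected_sha256):
    """Compare a file's SHA-256 digest against the publisher's expected value."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large installers don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256.lower()
```

A script would call this after downloading and refuse to execute the installer when it returns False, which at least ensures you run the bytes the publisher intended rather than whatever a compromised mirror served up.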

[0:32:44] Masha Sedova: Yes. Although I would still say that my favourite tips apply to developers as much as anybody else. The first is to manage your passwords securely, back to the password management thing, and make sure they're all different. The second: please turn on two-factor authentication on every piece of software you have. It matters far less if your password gets stolen in that case. I appreciate that it can introduce additional complexity when you're trying to test, but if two-factor is built in from an API perspective, just find ways of including it, because that is one of the best ways to defend against hackers. So, enable 2FA in as many applications as you can, especially the most critical ones. And then the third one is more of a mindset thing. Know that your internal environment is not any safer than the Internet. So, be careful what you post, and if you do post things, make sure they're locked down, so the only people who see them are the people who need to see them and nobody else.
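For context on why 2FA helps even when a password leaks: app-based two-factor codes are typically TOTP values (RFC 6238), derived from a separate shared secret and the current time, so a stolen password alone is not enough to log in. A stdlib-only Python sketch of the standard algorithm:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, timestep=30, digits=6, t=None):
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count of 30-second intervals since the Unix epoch.
    counter = int((time.time() if t is None else t) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 test secret (`"12345678901234567890"`, base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`) at time 59, this yields "287082", the published vector truncated to six digits. Because the code rotates every 30 seconds and never travels with the password, a phished password on its own is useless.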

[0:33:44] Guy Podjarny: Yes, definitely sound advice. I think you've teed up the question I often like to close off with, about a favourite practice or a pet peeve. Would you say that's also your pet peeve? Or if there's one thing that annoys you, that you want people to address, a security pet peeve, what would it be?

[0:34:12] Masha Sedova: My pet peeve has two audiences to it. The first one is security teams who always say, “Users are the weakest link.” I personally think that's totally inaccurate. I think we as security professionals have done a terrible job of empowering other people, our employees, developers, general users, to know how to defend themselves in the organisation, and it's on the security team to do a better job.

But then the flip side of that is developers, or any employee in any company, saying, "Well, security is not my job. There's a security team that is paid to go into work every day for that." That is where a lot of things go wrong as well. If you remember the ratio we talked about earlier, something like one security person to a hundred employees, a security person can't possibly cover everything that's happening. They can't know all the ins and outs of what you look at and do on a regular basis, and they don't know what is and isn't normal in the weeds of your daily work. So, it's everybody's responsibility to say, "This is weird. There's something going on with my phone, my machine, my browser, and it's worth escalating it and finding help." Because the longer you wait, the longer you allow an attacker to move around your network. So, knowing that everyone is a vital part of this equation is really critical.

[0:35:33] Guy Podjarny: Yes, definitely. It's everybody's problem. Well, thanks a lot for all the great insights here. I think we could continue talking for a while, but we're already about at time. If people want to find you on the Internet, or check out Elevate Security's offerings, where can they find you?

[0:35:52] Masha Sedova: Yes. So, our website is elevatesecurity.com, and we have a blog there where we occasionally post insights about the space. I'm also on LinkedIn, Masha Sedova, and on Twitter @modMasha.

[0:36:03] Guy Podjarny: Cool. Well, thanks a lot, Masha, for coming on.

[0:36:06] Masha Sedova: Thank you so much. It's been great.

[0:36:08] Guy Podjarny: Thanks everybody for tuning in and join us for the next one.

[OUTRO]

[0:36:12] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show, or want us to cover a specific topic, find us on Twitter, @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over a hundred videos about building developer tooling companies, given by top experts in the field.
