Episode 33

Season 4, Episode 33

Engineering Teams With Leif Dreizler And Eric Ellett

Guests:
Leif Dreizler
Eric Ellett

"Leif Dreizler: There's been a lot of improvements to languages and frameworks over the years. I really think that is where the industry is getting some of the biggest security lift. It's just making it really easy for people to do the right thing and making it harder to do the wrong thing. The most important thing that we think about when creating and delivering training is making the training relevant. If you're asking for developers' time, you should be using their time wisely."

[INTRODUCTION]

[0:00:31] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow.

It is a part of The Secure Developer community, check out thesecuredeveloper.com for great talks and content about developer security, and to ask questions, and share your knowledge. The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com.

[INTERVIEW]

[0:01:05] Guy Podjarny: Hello, everyone. Welcome back to The Secure Developer. Today, I have a security team here from Segment, which I'm really excited to talk to and hear about developer-focused security practices at Segment. I have Leif Dreizler and Eric Ellett. Thanks for coming on the show, Leif and Eric.

[0:01:21] Eric Ellett: Yes, thanks for having us.

[0:01:22] Guy Podjarny: Before I dig into a variety of topics over here, can I ask you to just give the listeners a little bit of background about how you found your way into security and into this role?

[0:01:34] Leif Dreizler: Sure. It all started for me studying computer science in college. I started working at a security consulting company while I was still in school, and worked there for a couple of years after graduating as well. Then, from there, I actually was a sales engineer at Bugcrowd for a couple of years. Then, after leaving Bugcrowd, I joined Segment on their application security team about a year and a half ago.

[0:01:59] Eric Ellett: Yes. For me, I was a software developer, as a contractor doing DARPA projects over in DC. Then, I followed my dreams to try to start an SDN security company here in the Bay Area, which was way too early. So, I ended up going to Credit Karma as an application security engineer, and then ended up coming over to Segment after Leif stole me from Credit Karma to work with him over here.

[0:02:25] Guy Podjarny: Very cool. I guess you both came into AppSec a couple of companies ago, or two, three companies ago, though not always in an AppSec role specifically, it sounds like. Leif, at least for you, it was from being an SE, a sales engineer dealing with security, more on the bug bounty side, to now switching to AppSec itself.

[0:02:43] Leif Dreizler: Yes. I would definitely put Bugcrowd in the application security space. Then, most of the security consulting I was doing was application security assessment. I was doing third party pen testing. I've been in some portion of the AppSec space since the beginning of my career.

[0:03:01] Guy Podjarny: Yes. I mean, it's interesting. It's like, you actually hear about a lot of people that move from the dark side, if you will, from more of the red team, the sort of the pen testing side, into the defender side. It's a version of that, because you have a solution that sort of helps people do that type of pen testing even if it's not a classic pen test. It's a Bugcrowd platform. But fundamentally, you go from being the attacker to being the defender still.

[0:03:21] Leif Dreizler: Yes.

[0:03:22] Guy Podjarny: Tell me a little bit about Segment security. Where do you sit in the org? What's the team structure? How does it work?

[0:03:29] Eric Ellett: Yes. Segment security as a whole is led by our CSO, Coleen Coolidge. We have several sub-teams underneath that. We have our GRC team; we have our corporate security team, which is focusing on security outside of production, AWS, and GCP. Then, we have CERT: when security incidents inevitably happen, how do we handle them with grace? Then, we have application security, product security, and cloud security. That's actually under an umbrella of security engineering, which is the team I lead. Technically, I'm on application security, but my team is pretty flexible in the sense that we work on the problems that make the most sense at a given time. So, it's not unreasonable for an application security engineer specifically to work on cloud security projects or product security, given the demand in a given domain.

[0:04:23] Guy Podjarny: It's a security engineering group. That's the sort of the title of it. Is that title more sort of inspired, if you will, by the people inside of it, or that it's working with engineers?

[0:04:34] Eric Ellett: Yes. I think we just put a title on there because we didn't want to silo the teams too much. I think it's a little too early for us to do that. Granted, we have five people on our team, so having hyper specialisation per team is just too early. We also believe pretty heavily that, to be good at application security or cloud security, it requires a foundational understanding of the other domains. It's great to just be under the security engineering umbrella and, again, like I was mentioning earlier, to be able to transition and put engineers where it makes the most sense, given the problems that we're facing at any given time or quarter.

[0:05:14] Guy Podjarny: Got it.

[0:05:14] Leif Dreizler: I think part of it also is to capture the idea that we do expect the people on our team to be software engineers at a certain level. Software engineering and security engineering aren't that different. Obviously, they're specialisations, but we're still writing code, we're still deploying stuff, we're still working directly with software engineers. I think the security engineering title does a good job both of encapsulating that work and, as Eric said, of creating an umbrella for product security, cloud security, and application security all under one org group.

[0:05:48] Guy Podjarny: Cool. Does the security engineering group also own and operate these tools, basically like an engineering group that owns internal tools or a platform service? How does that work?

[0:05:59] Eric Ellett: It really depends on the type of tooling. We're trying to figure that out right now with things such as the WAF. It makes sense for us to be there for the tuning, to put the rules in, and to ensure that we're actually being effective there. But the operational side, I think our SRE team would feel more comfortable owning that. It really depends per tool where things would fall, ultimately, based off of who's the best person to handle a given aspect of a given tool. Operationally, for the WAF, it's definitely SRE. Whereas the tuning, and ensuring that the rules actually make sense, is more on the AppSec side.

[0:06:40] Guy Podjarny: Yes, it makes sense. I guess these are the tricky questions in how those elements work: it's between the experts' centre and the people dealing with operating the tools day to day.

[0:06:49] Eric Ellett: Those types of things always vary. I think just having those candid conversations with the other directors, or the other engineering leaders in the org, to figure out who is best suited to manage a different aspect of the tool is typically what we do.

[0:07:03] Guy Podjarny: Yes, sounds right. That's actually a good tee-up, indeed, on how your group, be it security engineering or, maybe specifically, application security, works with the engineering team. It sounds like it's not part of the same org. What's the high-level ratio, would you say, of security engineers to engineers on the regular R&D side?

[0:07:26] Leif Dreizler: I think it's roughly about five of us to around a hundred engineers. Then, security as a whole, which also includes IT, is about 15.

[0:07:37] Guy Podjarny: Okay, that's actually a pretty good ratio. I mean, as compared to a lot of companies we talk to. That's sort of a high level of investment for those components. How is the affiliation? Is it still like one team application security work with the entirety of the engineering organisation? Or, is there some lower level partnering of working with this group or the other of the engineering org?

[0:07:58] Leif Dreizler: Similar to what Eric said earlier, it really just depends quarter to quarter, and what the most high-priority projects are for both engineering and security. We get called in to consult on pretty much every project, but we may not have a super hands-on part in it. Then, on other projects, it may be a formal partnership, where we're both contributing equal amounts of work. An example of that is, we added MFA for our customers last quarter. I did almost all the backend work, and then an engineer, Kat, from our enterprise core team, did most of the frontend work. That was a shared responsibility of delivering that feature. That would be a very high level of involvement from us, but it really varies project to project, quarter to quarter.

[0:08:45] Guy Podjarny: Yeah, that's awesome. You kind of choose a specific initiative, and then you figure out, how is the collaboration going to work? Then, is that mapped into sprints? I guess, it comes back to software development chunks of work?

[0:08:57] Eric Ellett: Yes. We don't do sprints holistically as an organisation. Each team kind of does their own thing. We do have quarterly OKRs. Before the quarter starts, typically, I'll reach out to all the engineering leads and give our spiel of, "Hey, if you have these types of projects, let me know. If it's security heavy, maybe we should do what we call a partnership, which is about 80% FTE time from one of my engineers," to help implement or just provide support where possible for the duration of the project. We also have the more consultation-style partnerships, which are typical in most organisations, where we'll do a design review, threat model, follow-up from that threat model, and maybe a pen test down the road if necessary.

That's typically how we break down quarter to quarter. We actually do another thing, which is an embed program. One of the things that Leif talks about in his AppSec California talk is that we want to get our engineers working with other teams, at least for a quarter. We've been experimenting with when this should happen in a given security engineer's lifetime at Segment. With one of our recent cloud sec engineers, within the first month, he was actually sitting with our tooling team for a quarter, just doing tooling work, understanding how that tooling team operates, and understanding how security processes actually impact the tooling team. The goal of that quarter is for them to work on a capstone project together, to ship something that will hopefully benefit both tooling and, ideally, security.

Then, for other engineers, like Leif with the MFA, that was actually part of an embed as well. The enterprise core team, we reached out to them at the time and said, "Hey, we really want to get MFA in the app. We know that you've been responsible for the other parts of auth historically. Let's do an embed and have Leif come sit with your team for a quarter and work with you," ideally with a resource on their end to build this out. Again, it really helps. I think we've coined the term "walk a mile in the developer's code", to really understand what the development process is like, and what our processes and our controls look like from the other side, and then bring that back to the team to figure out how we can improve those processes, and get a better understanding of and empathy for the developer.

[0:11:26] Leif Dreizler: Yes. It also really helps to just understand whatever you're trying to protect. If there's ever a question by somebody else on our team, or CERT, or somewhere else within engineering, and they have questions about what our authentication flow is, I can walk anybody through that whole process, because I made so many changes and modifications as we were rewriting an older service, as well as adding MFA at the same time. So, having that knowledge within the security team, and not having to rely on an external team to answer questions about something that's as important as authentication, I think is really, really valuable.

[0:12:02] Guy Podjarny: Yes. I mean, I really enjoy the parallels to changes that happened in the DevOps world. It feels like this was very much a part of it, the walk a mile in the other team's shoes. But it sounds like it also goes hand in hand with your comment, Eric, at the beginning, around the team being engineers. Because you can only do this if the team is indeed capable of acting as engineers inside that engineering team. If you bring somebody that doesn't have that sense for what code is, and how you do that software development process, then I guess you're not as able to do it. Do you do a little bit of the other way around? Have you considered taking an embed from the development team and having them walk a mile in the security engineers' shoes?

[0:12:46] Eric Ellett: You're blowing our cover here, because one thing we were going to do was get people comfortable with the security embed, then kind of flip the script, which is like, "Hey, how about you come on the security team?" Again, yes, exactly what you said: walk a mile in our shoes, and maybe get some empathy from a security perspective, and understand what we have to deal with on a cross-functional basis. Also, given that they are developer powerhouses, they can probably build some really fancy and pretty tools for us in the meantime as well.

[0:13:17] Leif Dreizler: I know a lot of people hate the term DevSecOps, and that's okay. You can hate the term. But I think this is what the goal of DevSecOps should be. It's similar to DevOps, where you have operations people learning how to code. Now, everything at Segment is infrastructure as code, and you have developers that are running their own services built on top of the building blocks provided to them by the foundational teams at Segment. I think it's just about becoming a more well-rounded engineer. Whether you're a foundation person, or a developer, or a security person, you need to know at least a little bit about all those other aspects, because it's just part of delivering quality software in 2019: you need to know enough about the whole stack of your application. Part of that is knowing how to keep it secure.

[0:14:09] Guy Podjarny: Yes, makes perfect sense.

[0:14:11] Eric Ellett: I think it's [inaudible 0:14:11], like developers have just adopted reliability in general. How do we get them to adopt security as part of how they adopted reliability over the past few years with the services that they've been deploying?

[0:14:25] Guy Podjarny: Let's get philosophical here a little bit. Because we talked about the org, and about embeds, and about exchanging people and getting exposure, maybe a bit about the skill set. How do you operate not from an embed or from security capabilities built into the product, but the security controls around it? What are some of the principles or guidelines that you use when you go to consider a new security control or a new security program, and try to get developers to be engaged with it?

[0:14:52] Eric Ellett: I think the first thing is, would this tool actually be used by the developer? I think answering that question as quickly as possible is the quickest way to de-risk any control or any vendor that you're going to use. If it's something that has an awful UI, or is awful from a usability perspective, developers aren't going to use it. Ideally, you always want to see if they have an API, where you can maybe build on or extend the tooling.

I think, really, the first thing is just getting the developers to be part of that eval process for any vendor that you're looking at, or any control that you're going to implement, because they're ultimately going to be your users at the end of the day. Just like product: we don't develop products without having user input. We shouldn't be developing security features without input from our users, who are the developers, during that process.

[0:15:46] Guy Podjarny: That makes sense to me; you're bringing them in. I love the customer centricity in this case, the customer being your developers. Leif, I heard you refer to this notion, or made some quote, around making it easy: make it easy for someone to write secure code, and you'll get secure code. How does that manifest in the day-to-day?

[0:16:07] Leif Dreizler: I think that there's been a lot of improvements to languages and frameworks over the years. I really think that is where the industry is getting some of the biggest security lift: just making it really easy for people to do the right thing, and making it harder to do the wrong thing than the right thing. I think that isn't a notion that's unique to developers, by any means. That's just humans in general. Just make it as easy as possible for people to do the right thing. I think a really great example of this is, in React, it's really, really hard to introduce cross-site scripting. In the security training that we give developers, if there's somebody who's working on the frontend, if there's one thing that they remember, it's: don't use dangerouslySetInnerHTML.

Our security org very rarely says never do something; dangerouslySetInnerHTML is one of those things. Even before any of us on the security team got here, just because our developers had made the choice to use React (because it was easy to use, and cool, and Facebook built it, and a bunch of other more valid reasons from a development standpoint), we've only had a few instances of cross-site scripting. I think it's less than five in our app, ever. Compare that to somewhere else that isn't using React, and is having to remember to escape user input in every single place in the app. That is so much harder than just using React and not using dangerouslySetInnerHTML. I think that's a really good example of an instance where it's just been very easy to write good code.
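The contrast Leif draws can be sketched in a few lines. Without a framework that escapes by default, every render site has to remember to call an escaping helper like the hypothetical one below (an illustration, not Segment's code); React's JSX does the equivalent automatically, which is why dangerouslySetInnerHTML is the one common way to opt out.

```typescript
// Hypothetical escaping helper: what every template call site must remember
// to use when the framework does not escape output by default.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;") // must run first, before other entities are added
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const userInput = '<img src=x onerror="alert(1)">';

// One forgotten call is one XSS bug:
const safeHtml = `<span>${escapeHtml(userInput)}</span>`; // inert text
const unsafeHtml = `<span>${userInput}</span>`;           // executable markup
```

React inverts this default: interpolated values are escaped unless a developer explicitly reaches for dangerouslySetInnerHTML, so the dangerous path is the one that takes extra effort.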

[0:17:45] Guy Podjarny: Yes. I guess we're all humans and we're lazy. At the end of the day, the path of least resistance is the one that's going to prevail. So, might as well make that the secure path.

[0:17:56] Eric Ellett: Yes. We also adopt that ourselves. Like dangerouslySetInnerHTML, I love the fact that dangerous is in the name. We use Terraform for most of our infrastructure here at Segment, including S3 buckets. Our public bucket module is "dangerously public S3 bucket". So when people are using that, they're like, "Okay. Maybe I should think about this a little bit more." We try to adopt that where possible, because I think it definitely sends a signal as well.

[0:18:24] Leif Dreizler: Yes. It definitely makes somebody think twice about, "Is this actually what I want to do?" Sometimes it is. Sometimes it does make sense to have a bucket be public, when we just have static assets or something like that that's getting loaded on our public web page. But anybody who even remotely follows InfoSec news knows that many times the bucket should not have been public.

[0:18:45] Eric Ellett: Yes. I think any signal that you can send to a developer who's probably done like the 15th code review that day, or looking at something, and they see something like dangerous, like, "Oh, maybe this is something I should actually pay attention to a little bit more" versus, "Hey, this is just an S3 resource that they're using."

[0:19:01] Leif Dreizler: It also makes it a lot easier for a developer who's more junior. We don't expect every developer to be a security expert, we expect them to try, and we expect them to ask more senior people in their team or people from our team for help. But having flags like this, like even somebody who's – this might be their first software engineering job, they can see that something says dangerous, and I think that'll trigger something for a brand-new dev, like, "Hey, maybe I don't want to be using something dangerous."
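The naming convention Eric and Leif describe, putting "dangerously" in the identifier so the risk shows up in every code review, can be sketched generically. The function and field names below are hypothetical illustrations, not Segment's actual Terraform module:

```typescript
// Hypothetical sketch of the "dangerous in the name" convention: the safe
// variant has a plain name; the risky one announces itself at every call site.
interface BucketConfig {
  name: string;
  publicRead: boolean;
}

function s3Bucket(name: string): BucketConfig {
  // Safe default: buckets are private unless someone opts out loudly.
  return { name, publicRead: false };
}

function dangerouslyPublicS3Bucket(name: string): BucketConfig {
  // The identifier itself is the warning a reviewer sees in the diff.
  return { name, publicRead: true };
}

const logs = s3Bucket("internal-logs");
const assets = dangerouslyPublicS3Bucket("public-static-assets");
```

The value is purely human-factors: a reviewer skimming their 15th pull request of the day still notices dangerouslyPublicS3Bucket, the same way dangerouslySetInnerHTML stands out in a React diff.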

[0:19:30] Guy Podjarny: Yes, indeed. Switching maybe into education. We talk about making security experts out of these developers, but at the end of the day, there's only so much we can become experts in; developers are heavily overwhelmed today with information. I know you've invested a fair bit in training and educating those developers on the important parts of security. Tell us a bit about this. How do you go about building some of these security perspectives in your dev teams?

[0:20:00] Leif Dreizler: Sure. The most important thing that we think about when creating and delivering training is making the training relevant. If you're asking for developers' time, you should be using their time wisely. When developers start at Segment, they go through a two-part training. The first part is thinking like an attacker. The second part is secure code review. In the thinking like an attacker training, all of the examples that we talked about are things that we've had submitted to our bug bounty program, or things that we've gotten in pen test reports, or things that we've found internally. Every single example that we show them is something that was a vulnerability that was previously in Segment.

[0:20:42] Guy Podjarny: That's awesome.

[0:20:42] Leif Dreizler: It's a lot easier to get a developer to care about a vulnerability when you can say, "Hey, this feature that you're probably familiar with, this was a previous vulnerability, this was the impact. This is what the fix looked like." Versus talking about a cross-site request forgery example from a bank where you're transferring money. The developer might think, "I don't work at a bank, why do I care about this?" It just makes it a lot more tangible if all of your examples come from stuff that is similar to where they're working.

Then, the follow up to that is, we teach them how to use Burp Suite with the OWASP Juice Shop project, which is our favourite vulnerable web app. Because it's written in Node and Angular, and it's a single-page app. The tech stack isn't exactly the same as Segment, but it's pretty close. It's definitely close enough that when we show them the architecture diagram, developers understand like, "Hey, this is pretty close to what Segment looks like."

Then, part two, the secure code review training: one of our coworkers, David, built a couple of small, intentionally vulnerable blocks of code that actually run and create a Hawaiian shirt store. Then, we ask the developers to review the code and identify vulnerabilities based off of the training that we've given them that day. Again, all of that is React, Node, or Go, which is what we use to build Segment. Whether they're a backend engineer or a frontend engineer, there should be some part of that code review that is in a language that they're familiar with, based off their time at Segment.

[0:22:24] Eric Ellett: In the thinking like an attacker training, we also want it to be competitive, in a sense. After we go through the theory and examples of each type of vulnerability, we get them set up with Burp Suite and hitting Juice Shop. We actually have them go after the specific vulnerabilities that we went over. On the whiteboard, we have those vulnerabilities, at least the names of them, enumerated, and we treat them as flags. When people capture them, they have to show it to the person that's giving the training, who'll write their name up on the board. It really gets people in a competitive mode. We've had people stay after the training was over, for, what, a half hour or 45 minutes, still trying to exploit the last flag, because they were just so engaged in the training. That's definitely a huge positive signal on our end that people are really taking something away from it.

[0:23:18] Leif Dreizler: Yes. It's also just a great way to meet new developers. We try and give this training within somebody's first month or so of them getting into Segment. For a lot of these developers, this might be their first interaction with us as a security team. It might be their first interaction with the security team ever, maybe this is their first job, or maybe their previous company didn't have a security team. We think it's really important that they have a positive experience with us from day one, because a lot of how we're effective as a security team is relying on developers, letting us know when they need help. So, they're the ones that are letting us know that, "Hey, I need the design review" or "Hey, maybe I've identified something, and I'm not sure if this is a vulnerability or not, but I just wanted to let somebody know." None of that stuff works, and you can't rely on that if you don't have the kind of relationship where developers are encouraged to come and talk to the security team.

[0:24:14] Guy Podjarny: It sounds amazing. I absolutely love the idea of using code that you relate to. The use of historical vulnerabilities is especially resourceful and innovative, and probably required a good chunk of effort, because you have to sift through them and explain them in a way that is manageable. But I guess, on the flip side, you get something that is much more attached, much more relatable to their surroundings. I also love how it's all coming together. I think a lot of the overarching theme here is around good, healthy people interaction. Whether it's the embed, whether it's the skill set, whether it's that good initial relationship with them. Very much about positive security. Unfortunately, not yet super prevalent in our industry of security.

[0:25:06] Leif Dreizler: I do think that is changing. Historically, that probably hasn't been super common, but I think there's kind of a new wave of security teams that understand this is really the only way to get stuff done. Especially in an instance where you're at a company with a microservices architecture, and every developer is pushing code tens of times per day, you can't review every single pull request. We're not pair programming with every single developer. We just have to give them the training and resources, and teach them security judgment, which is a term that I stole from a Netflix presentation a couple of years ago at AppSec California, and that I just absolutely love. It's the idea that, even if you're not a security expert, you should at least know when something looks off, or maybe something doesn't seem like the right way to be doing whatever you're trying to accomplish, and just let us know; we're always available to come and help out.

[0:26:05] Guy Podjarny: That's awesome. I'm fortunate enough to run this podcast and to have seen some people, like yourselves, that are at the forefront here and talk about it. I very much hope that this is a trend, versus an echo chamber. But I think over time, it's a must, as you point out. Software is accelerating too much, and getting too complex, for anybody from the outside to secure it. I guess, on that note, we talked about education, we talked about engagement with the team. On this notion of positive, I know you've done some things to celebrate successes. Can you give us a couple of examples? When somebody does a good thing around security, do they get some stickers? I remember some mention of a crown.

[0:26:45] Eric Ellett: Yes. The stickers actually come out as part of our training. When you complete the training, we have this hacker man sticker, from the online meme that we use quite a bit in the training itself, so that people can show, "Hey, I did the training." Another thing that we've started, and that I'm actually presenting on at OWASP next month at Uber, I think, is the leaderboard. It's effectively this gamification platform that we built that celebrates those small wins that people have. When people come to you and say, "Hey, I think I noticed this issue" or "I noticed maybe some PII in this log," how do we recognise those small wins, right?

This leaderboard is basically this UI. I really got enticed by Halo 2, and the notion of matchmaking back when I was in high school or middle school, and how people can be ranked. Basically, what happens is, when you do these small things, it will recognise you, and you gain experience points. Everyone starts off at level one, and depending on the type of thing that you've done, you'll get 15 experience points or 25 experience points. When you get 100, you go to level two. It posts all the great things that people do every Friday in our security Slack channel. So, not just the people that were part of that interaction, like the security team and the developer, but even the VP, or the CTO, or people that are higher up can see, "Hey, this individual has done all of these great things this past week or this past month." You'll see that recognition happening in the security channel overall.

[0:28:26] Guy Podjarny: That's awesome. What types of actions do they get points for?

[0:28:30] Eric Ellett: We have a vulnerability management program here, like most people, and we rate our vulnerabilities. P1 is the worst type of vulnerability. If you go out and proactively find a P1, we'll give you 100 points. If you fix one, we'll give you 50, because the fixing could just be because we assigned you a vulnerability to fix; people that are out there proactively finding these things, we give 100. That's an automatic level. We also have a kind of catch-all: went above and beyond for security. This isn't even just for engineers; our salespeople are on this board. That's typically because they've asked someone to badge in that was maybe trying to tailgate.

Another thing that we've done with this is open it up to other people; it's not just security giving these points. We're not always around to watch people tailgate. We've had people that are not security engineers, or on the security team, submit these points through the Slack command that we have. We're just really trying to build a culture of people recognising each other for doing awesome security things.
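Under the numbers Eric mentions (small wins worth 15 or 25 points, a fix 50, finding a P1 100, and a new level every 100 XP), the levelling logic is simple. The point table and names below are a sketch of the described behaviour, not the actual leaderboard code:

```typescript
// Hypothetical point table matching the values described in the episode.
const POINTS = {
  smallWin: 15,      // e.g. asking a tailgater to badge in
  reportedIssue: 25, // e.g. flagging PII in a log
  fixedP1: 50,
  foundP1: 100,      // proactively finding the worst class of vulnerability
} as const;

const XP_PER_LEVEL = 100;

// Everyone starts at level one; every 100 XP earned is another level.
function level(totalXp: number): number {
  return 1 + Math.floor(totalXp / XP_PER_LEVEL);
}

// Finding a P1 is an automatic level-up on its own:
const xp = POINTS.foundP1 + POINTS.smallWin; // 115 XP
const playerLevel = level(xp);               // level 2
```

A weekly job could then post each person's new events and level to the security Slack channel, which is the recognition loop Eric describes.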

[0:29:36] Guy Podjarny: That's awesome. I still think stickers are good as well, even if they're just from the training. I think you also showcase that it's important, but I very much love the leaderboard and those results. We're getting to the tail end of the podcast. Can you just rattle off a little bit of some of the tools of choice that you have today in your stack for people to consider?

[0:29:58] Leif Dreizler: Sure. As listeners of the podcast might guess, we are Snyk customers. I think the way that we introduced Snyk does a good job of encapsulating a bunch of the stuff that we've talked about. As Eric said earlier, any tool that we buy, we want developers to be able to use, because we want them to be able to take control of the security of their services. When we were doing our evaluation of Snyk, we partnered up with our growth team to integrate it with some of their repositories, and get feedback on the tool: accuracy, usability, et cetera. Once they said, "Hey, this looks pretty good," we added it to a couple of other repositories.

Then, as part of our introduction to the rest of the engineering team, we actually had engineers at our engineering all hands write down on a piece of paper how many total JavaScript libraries they thought we might be including across all of our repositories. The person who was closest got a crown. So yes, Snyk is definitely one of the tools that we rely on, and that's how we introduced it.

We also use Detectify for our DAST scanning. I think with the DAST scanning market, it can be challenging to find a tool that can actually log into a single-page app, like a React application, or if the DOM is just doing –

[0:31:26] Guy Podjarny: Yes. Crazy things, yes.

[0:31:28] Eric Ellett: Yes. For SAST, we're actually looking at [inaudible 0:31:30] right now. We have been using Salus, an awesome tool, or really a concept, that Coinbase created, which is a way to deploy a container to each CI pipeline. If you're using Circle, you can create a new job that spins up a separate container, the Salus container, and from there, from a central location, inject various linters or other tools that you want to run to do static analysis. But now, we're looking at something like [inaudible 0:32:02] as well to help supplement that. They also have a pretty good developer story. I love the fact that they have a query language for their SAST product, so people who aren't just security folks can go and use it to find other types of problems that aren't security related. So, we're looking at them.
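[Editor's note: For readers unfamiliar with the setup Eric describes, a minimal CircleCI job that runs the Salus container against a checked-out repository might look something like the sketch below. It is based on Salus's public documentation; the job name and workflow name are illustrative, not Segment's actual configuration.]

```yaml
version: 2.1
jobs:
  salus_scan:
    machine: true  # needs a VM executor so we can run Docker directly
    steps:
      - checkout
      # Mount the checked-out repo into the Salus container and scan it.
      # Which scanners run (and their settings) can be managed centrally
      # via a salus.yaml configuration file.
      - run: docker run --rm -t -v "$(pwd)":/home/repo coinbase/salus
workflows:
  security:
    jobs:
      - salus_scan
```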

[0:32:22] Leif Dreizler: As I alluded to earlier, we use Bugcrowd for our bug bounty program. That's kind of a combination of tools and services. But I think running a bug bounty program is pretty important, just to show researchers, "Hey, if you do find something, we're not going to sue you, we're going to pay you. So, please be responsible and tell us about anything that you might find." We're also evaluating Assetnote, another tool in the asset discovery space. It's something that will go out and look for Internet-facing assets, and try a variety of tools and techniques from the bug bounty world. Some of the co-founders of Assetnote were really successful bug bounty hunters. It'll scan your external resources and see, okay, can we do subdomain takeovers, or things like that. That's a tool that we're pretty excited to use in the future.

[0:33:13] Guy Podjarny: Very cool. Well, thanks for sharing. I mean, I think those are very useful to sort of hear the vetted set of tools around. I guess, before I let you go here, I like to ask all my guests one last question, which is, if you have one pet peeve or key advice that you would like to give a team that is looking to level up their security calibre, what would that be?

[0:33:35] Leif Dreizler: I don't really know if it's a pet peeve, and I think it should be relatively obvious from the rest of the podcast, but be friends with people. People are way more likely to do the things that you need them to do if they like you. So much of security revolves around getting other teams to do work, because they have domain expertise that you don't, and you need their help to improve the security posture of your company. So, do everything you can to build really great relationships inside of your organisation.

[0:34:05] Eric Ellett: Yes. I think the one thing is definitely to invest in building quality where you have the most face time with your engineers. Training, for example, has paid us back in spades, I think, given the amount of value we've gotten out of it. Yes, we could have gone down the automated route, or a video route, but the amount of time we have spent making that training awesome has definitely outweighed the amount of time we would have spent dealing with the vulnerabilities or issues that would have come up if we hadn't spent that time.

[0:34:39] Guy Podjarny: Cool. This is also a good time to mention, that if you want to join this very forward-looking team here, you can check out some of the job openings that the Segment team has on segment.com/jobs. Especially in the San Francisco area, but it seems like across the US as well. Thanks a lot, Leif and Eric for coming on the show. This was excellent.

[0:34:55] Eric Ellett: Yes. Thanks for having us.

[0:34:57] Guy Podjarny: Thanks for everybody tuning in to the show, and hope you join us for the next one.

[OUTRO]

[0:35:03] Announcer: That's all we have time for today. If you'd like to come on as a guest on this show, or get involved in this community, find us at thesecuredeveloper.com, or on Twitter, @thesecuredev. Visit heavybit.com to find additional episodes, full transcriptions, and other great podcasts. See you next time.
