Season 2, Episode 8

What’s In A Security Policy With Geva Solomonovich

In this episode of The Secure Developer, Geva Solomonovich, COO at Snyk and founder of Snowy Peak Security, joins Guy to discuss security policies, and why you shouldn't wait to implement your own.

Geva shares the 3 categories of security policies he developed with his clients and emphasizes that it’s not enough to create a set of documents or processes. You need to establish a security mindset and integrate it into everything you do. Don’t miss this episode for practical tips on reducing your company’s risk surface.

“Geva Solomonovich: There comes a point in their lifetime where they meet their first big customer who says, ‘Show me your security policy. Show me that you're serious,’ and most of the companies just don't have it.”

“Guy Podjarny: Nothing like a big customer requirement to sort of motivate your interaction.”

“Geva Solomonovich: In a sense, it doesn't increase the value of your product when it's secure. But if you have a bad security experience, it can devastate your company. Starting points could be vastly different. But security by itself, it's a mindset. It's something you need to integrate into everything you do, how you think, how you're building stuff. Most startups are generally in existential mode. Just be a little bit more secure than your competitors.”

[INTRO]

[00:00:36] Guy Podjarny: Hi. I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow. The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev.

[INTERVIEW]

[00:01:07] Guy Podjarny: Hello, everyone, and welcome back to The Secure Developer. Today with us, we have Geva Solomonovich. Hi, Geva.

[00:01:13] Geva Solomonovich: Hi, Guy.

[00:01:14] Guy Podjarny: We're going to be talking about security policies, which sounds all formal and bureaucratic. But in fact, there's a lot of good substance in them. They're not just a necessary evil but something that can be quite useful. Geva has a fair bit of experience there, and we'll explore that world in a bit.

I guess before we get started, Geva, do you want to tell us a little bit about your background, how you got into security?

[00:01:35] Geva Solomonovich: Sure, absolutely. For the last 10 years or so, I've been involved in something broader, maybe risk management. I've been in an anti-fraud company. I worked at PayPal. Then I worked at another payments company. In all of those, a big component was risk management, which is very similar to security in the mindset that you're always thinking about what can happen, what I can do to protect myself, and how to better manage it. Towards this year, I also started doing security consulting, helping companies with their cloud infrastructure, with their policies, with their development policies and practices. It's been great so far.

[00:02:21] Guy Podjarny: Yes. It's a good point about risk reduction and all these practices around fraud and security. I guess sometimes, when we talk about security, we get stuck on all the technical implementations of it, when you talk about, whatever, running some test in your continuous integration process or protecting from SQL injection. But when you take it up a level, security and all these security controls are about risk reduction. They're about reducing the probability of something bad happening, whether you've done it via reducing fraud and catching a bad payment or via some input validation in your code that prevents an attack. Kind of an interesting path but, yes, totally logical.

Let's kind of focus a little bit on that sort of latter section you talked about. You do consulting today about security policies. How does that come about? Who do you interact with when you are building a security policy? What's the motivation for starting that conversation?

[00:03:14] Geva Solomonovich: Most of the clients or customers I've worked with so far are startups, and there comes a point in their life cycle where they meet their first big customer who says, “Okay. Before we integrate here and I give you some APIs or I share some data with you, show me your architecture documents. Show me your security policy. Show me that you're serious.” Most of the companies just don't have it. Then they're at a point where they're scrambling to have something that they can show to this customer, so they get the business.

At that point, they raise their head, and they're looking for somebody to help them solve that immediate pain point. That's oftentimes the point where I get introduced into the company, and that's a segue for us to work together and then explore other things. But oftentimes, it does start with a need for a security policy to satisfy some big customer who wants to see seriousness from the company.

[00:04:14] Guy Podjarny: Yes. Nothing like a big customer requirement to sort of motivate your interaction. Okay. So we're in this situation. I have a startup. I have this first big customer. They ask for a security policy. I come talk to you. What happens, right? What's the first conversation then?

[00:04:30] Geva Solomonovich: Well, the first conversation is about getting rid of this pain point that they have. I mean, most startups are generally in existential mode, so they're working mostly on their product and getting more customers. Security, everybody knows they need to do it, but it's kind of an afterthought. In a sense, security doesn't increase the value of your product, but if you have a bad security experience, it can devastate your company. Most companies aren't as proactive as they could be in thinking about security. Again, there comes a point where somebody is demanding it from them, and that's when they're scrambling, and they're ready to open up and invest and put those resources in to do what needs to be done.

[00:05:22] Guy Podjarny: Yes. I guess security is invisible, right? You don't see the problem, or the fact that you have high risk, unless it was exploited. At which point, you're feeling the incredible pain, but there's no obvious feedback loop. So we're there. They understood. They want to take the pain away. What happens?

[00:05:36] Geva Solomonovich: So then, when we're talking about a policy, I usually tell them there are three categories of policies you can be thinking of. One I call a presentable policy, which is a policy that details in very nice English that you're a serious company, that you've thought about security in all these different aspects. I call it presentable because the goal of the policy is to give it to a vendor, a customer, a partner, somebody you're trying not so much to impress as to show your level of seriousness. This is a great policy to have. It doesn't serve other needs you might have in the organization, though. A big one is, for example, educating your employees. It's not a good tool to tell the employees what they need to do and to teach them. That's a different type of security policy. I call it an internal security policy, which is more like a handbook, a set of best practices, a set of this is the stuff we do, these are the kinds of stuff we don't do, this is the rationale. The internal policy is geared more toward being pragmatic advice that employees will read and understand and act on.

Then there's a third one. Let's call it the elaborate security policy, which would be tens of pages, maybe approaching 100 pages, and will really cover every different aspect of security in the company, including all the procedures, all the processes, all the tickets, all the event handling. That, of course, is not a good tool for educating employees, because nobody wants to read a 100-page security document. It's not a good tool to send to customers either, but it is something you'd need if you're engaging in an audit-level relationship with somebody serious, like you're working with a bank or with a government or something of that nature. They don't want to see something that's at a presentable level but something that's really, really deep, something they can verify by going into your organization. You say you have a so-and-so many hours SLA to handle vulnerabilities? Well, they want to see that you actually can do it.

[00:07:43] Guy Podjarny: What's the relationship between the security policy that you wrote down and the security policy that you apply? I mean, let's take the first example. You talked about the presentable security policy. You go through it. You create a security policy. We'll talk a little bit about contents in a bit. You say you're doing stuff there, right? You say you are using, whatever, two-factor authentication on things. Or you're encrypting some private data. How does that interact with actually doing it? I'm trying to understand the delta between the presentable and the employee-oriented. Is it often that you build the two together, with the second one, the employee one, being meant to actually do it, to actually apply the security practices?

[00:08:26] Geva Solomonovich: Well, I see it more as just the level of detail. The employee one is a real practical one, maybe even down to the code level. We don't use this library. We use that library. Which, of course, is not interesting to any customer, whether you tell your employees to use this library or that library. The presentable one is more of a, let's say, high-level to mid-level description of your thought process. Try not to put too many very detailed things in there, so as not to get anybody in trouble.

Again, the focus is to show that we are thinking of all these different assets. We have thought about how we access our environments, how we treat our physical devices, the laptops, the locks in the office. The employees: we do background checks on the new people that we hire, training. We have an onboarding and off-boarding policy, although we might not detail it in a whole lot of words in the policy, but we do say we have one. How we manage data: we have retention, we have backup, we have encryption, we have separation between different customers, so your data is not going to get mixed up with somebody else's data. How we look at network security: we have a firewall, we have routing rules, we test them, we do this and we do that, monitoring. Okay, we don't go into detail about how many employees are sitting and monitoring, who's listening on the queue, and whether anybody is waking up in the middle of the night. We just say, “Okay, we're monitoring all this stuff using all these tools, et cetera.”

[00:10:05] Guy Podjarny: Right. So this is both an indication of the areas of concern that you've identified, the areas where you understand this is a security consideration and that you're looking at it. But the presentable security policy, the one that you want to display, should give you some wiggle room. First of all, you want to be able to apply it in a reasonable sense, not commit to some specific components, so that you wouldn't need to change it all that frequently because it can just evolve.

Also, probably a big component of it would be evolution, right? I can start off by saying I'm restricting access to my servers and do it by tracking the specific individuals. You're a small company; you know exactly who has access. Then as your company becomes bigger, there might be some other form of management that you're doing, with more audit levels or more key management, because it became necessary. But the same policy, the same statements in the policy, kind of apply to it.

[00:11:00] Geva Solomonovich: Still apply, yes. It's a beginning for a conversation. If one of your customers wants to drill down, then they have a starting point to drill down from. So far, the feedback has been pretty good. Everybody's been happy. Helping people get more business is always a pleasure.

[00:11:17] Guy Podjarny: Yes. I find that whenever I interact with one, and even with our own security policy, when we built one, we worked with Geva on doing that. Even the areas that you know, even the actions that you know you want to take, when you sit down and you need to write them down in a policy, it forces more structured thinking about it and forces you to understand: okay, I knew that this is a risk, but am I actually taking action here? Am I modeling it or addressing it in a way that I find satisfactory? It felt very much like a useful exercise to do, regardless of your level of expertise, even if it's a type of activity where you already know what to do. Then, of course, for people that are not in the security landscape, many of these might be just eye-opening, right?

[00:12:03] Geva Solomonovich: Definitely, that's the case. There are different levels of education and different levels of attention to security. Companies at different levels have invested in it or have different types of knowledge. When I come in and help, starting points could be vastly different. But with everybody, the first thing I say is that I can't come in and spend 10 hours with you and then say you're secure, because security by itself, it's a mindset. It's something you need to integrate into everything you do, how you think, how you're building stuff. We start a conversation, and with every change we need to make, we also talk about why it's good, or what could be wrong, and how you need to think about it going forward, so you can apply the same kind of logic to other stuff you're working on.

In general, you're always going to have risk. There are always going to be security vulnerabilities, but you need to think about it as a risk surface that you have for your company, for your portfolio, for your cloud, for your infrastructure, and ask, okay, how do I reduce that risk surface all the time? Keep it as small as you can. If you don't want any problems, you can just be out of business, and you'll have no problems. But you want to be in business, so you want to have an infrastructure; just try to keep the risk surface of that infrastructure as low as possible.

[00:13:25] Guy Podjarny: Yes, I like that concept of security as a mindset. I guess the task that you get pulled in to do, the pain point that people are trying to address, is to pass an audit in a sense, right? Or to be able to provide the policy, as opposed to truly applying the spirit of the law here. But in the process, you're forced to consider it and to apply it. Of course, you need to be straightforward in those policies. Lo and behold, that security policy might just make you more secure. I like the use of the security policy as a means to help establish a mindset of security, not just the policy itself.

[00:14:05] Geva Solomonovich: An additional part of that mindset, which I try to tell the companies I work with, is you need to protect yourself from what you can see or what you know, but also from stuff that you can't see. It doesn't have to be a concrete vulnerability for you to put a protection there. I've been managing companies for many years, and some of the conversations I would have with my engineers is they would ask, “Well, why do we need to put this protection here? How would a hacker get into our database?” That's the wrong mindset to have. The right mindset to have is, well, let's assume a hacker is in the database. What can we do to minimise his access to our data? What can we do to get notified more quickly that we're being hacked? You don't have to be able to explain how a hacker could get into your database to put more protections on your database or your network or whatever it is.

One more piece of advice I give companies: security is all about layers. Every layer you have, you need to put in some security. I call it don't be an M&M. Don't have a hard shell on the outside but a soft stomach on the inside. Just because you think your network security might be good is not an excuse not to protect and patch your servers, not to protect your database. You never know how hackers are going to get inside.

[00:15:29] Guy Podjarny: Yes. You should use the UK version of that, which is don't be a Smartie.

[00:15:33] Geva Solomonovich: Don't be a Smartie, all right. I like it.

[00:15:35] Guy Podjarny: It has more variation to it, so yes. But I agree. It's all about defence in depth, about the layers and protecting there. A lot of this has been on the concept of the policy, and you gave a bunch of examples of it. But maybe let's touch on the top highlights, right? Say you're a B2B startup. Let's even say the big customer has not approached you yet, and you haven't created the policy quite yet. What are the high-level bullets, the areas that you should worry about when thinking about your security practices, that would then be translated into a policy?

[00:16:09] Geva Solomonovich: That's a good question. I tell the companies I work with, “You need to think about security, about the design of your security, kind of like you tell your engineers that they need to write good code with a good design, so it can, A, scale up and, B, have fewer bugs over time, so it will be easier to detect problems.” The same thing you need to do with your infrastructure. As you're building your cloud, don't just go and hack servers together, throw them in there, let them live by each other, and give all your employees access to everything, where everybody has super root access and everybody's SSH-ing into the machines. Give it a little bit of thought.

Generally, when you work clean, when you work neat, you separate your environments: you have a virtual cloud for your production, a virtual cloud for your development, for your staging environment. You give the right access to the different roles and groups, you use good naming conventions, and you start working organised. Then you can immediately see if you have any problems. It's very easy to scale. You can add in more employees later, and it'll be easier for them to contribute. First and foremost, just work organised. It's not even anything related to security, but working organised gives you the infrastructure to be secure.
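
To make that concrete, here is a minimal sketch of what separate, consistently named environments can look like, assuming an AWS setup managed with boto3; the "acme" naming scheme, tags, and CIDR blocks are illustrative, not something from the episode.

```python
# A minimal sketch, assuming AWS and boto3 (pip install boto3).
import boto3

ec2 = boto3.client("ec2")

ENVIRONMENTS = ["production", "staging", "development"]

for i, env in enumerate(ENVIRONMENTS):
    # One isolated virtual cloud (VPC) per environment, each with its own
    # address space, so production never mingles with dev or staging.
    vpc = ec2.create_vpc(CidrBlock=f"10.{i}.0.0/16")
    ec2.create_tags(
        Resources=[vpc["Vpc"]["VpcId"]],
        Tags=[
            {"Key": "Name", "Value": f"acme-{env}-vpc"},  # naming convention
            {"Key": "Environment", "Value": env},  # lets IAM scope access by role
        ],
    )
```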

Then there are additional things to think about. There's a big trade-off between the ease of administering your system and having more security, right? It's very easy to have everything open: give all the employees super root access, let everybody SSH into the machines, pull off the logs, change the code with vi on the machine. Of course, it's easy to administer that way, but it's not very secure. It's harder to, say, integrate a solution that will pull the logs off the machines and present them to everybody, so they can see the logs when they need them without touching the servers.

Or multifactor authentication: it's annoying to get an SMS every time you want to log in here or there. You need to be somewhere on that curve. But definitely pick the solutions that get you a lot of security value without a lot of administration overhead.

[00:18:32] Guy Podjarny: Yes. I think that one aligns well with thoughts about reliability. Reliability, at the end of the day, is also about risk reduction. It's about reducing the risk of something accidentally falling down and crashing. I think access control and good visibility, just knowing that there's a smaller set of potential paths to a destination, is also good for, again, the health of the system. I guess your first point was as well: if you're not messy, if you're organised, if you know what's where and who can access what, then the likelihood of something blowing up, whether because of an attack or because of a mistake, is lower. Sorry, I cut you off. We talked about those two things.

[00:19:14] Geva Solomonovich: That's one. Another one to consider is work on a minimum permissible policy instead of maximum permissible policy. Kind of white list the access that you need instead of give access to everybody and try to close it. At the end of the day, you find your servers don't need that many IPs open, and they don't need that many ports open. So why give access to all the Internet? Not 100% of your employees need access to your server, so don't give everybody access. Give just a minimum amount of people access. If you think that way over time, again, going back to the analogy of the risk surface, then your risk surface remains as small as it can. You have a lot of employees. They're, of course, your biggest asset in the company. But from one perspective, they're also kind of a liability because they all have these laptops that they carry in their bags and they put in their computer when they're driving to hang out at night. Guess what? The their password is the same one they've been using for the last 10 years, and it's their girlfriend's name, and maybe the date they got engaged or they first kissed.

At the end of the day, those laptops hold your code and most likely hold passwords to important infrastructure, maybe to your cloud, maybe to the company email. The less ability you give people to accidentally cause damage, the more you'll reap the rewards over time.
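
As a concrete illustration of that whitelisting principle, here's a hedged sketch, again assuming AWS and boto3; the security group ID and office IP range are placeholders, not real values.

```python
# Least-privilege ingress: only one known range can reach SSH.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group for your servers
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        # Whitelist the office VPN range instead of opening 0.0.0.0/0
        # and trying to close it down later.
        "IpRanges": [{"CidrIp": "203.0.113.0/24",
                      "Description": "SSH from office VPN only"}],
    }],
)
```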

[00:20:44] Guy Podjarny: Yes. It is, I think, a good emphasis to go back to that concept of a mindset, right? You wish for the best but plan for the worst, or try not to be quite as optimistic when you're granting permissions and access. How much does physical access come into play in today's modern world, when you see people discussing security policies, whether as the creator of one or as those demanding it? What type of recommendations come in there?

[00:21:15] Geva Solomonovich: In this age where everything is in the cloud, we tend to ignore the physical aspect. We feel our office is safe, and why would anybody come here, and all that kind of stuff. The truth is a lot of the hacks these days are happening from the inside. I'm not talking about malicious employees but about stuff that happens to employees by accident. I heard this great story about someone paying 100 bucks for a cleaning lady to drop a few USB devices on the floor with a little sticker that says, “Salaries 2015.” Well, guess what, it's almost guaranteed somebody's going to pick one of those up and stick it into their laptop. That USB is going to install a Trojan on the computer. By chance, that's your sysadmin, and he's typing in all his passwords, and there you go. It starts from there.

That's kind of a little social engineering trick, and you think to yourself it's not scalable. But actually, all the bad guys work this stuff at scale. There's always this big smart guy at the top of the pyramid who comes up with all the schemes, and it trickles down all the way to mules who actually do the physical work at the office, at somebody else's office, even stealing money from ATMs, unrelated to security. You think it's unscalable, but it does scale. It does scale on the bad guys' side.

[00:22:36] Guy Podjarny: Yes. I guess physical access doesn't necessarily need to be somebody breaking into your office at night or stealing your laptop. I guess the solution to that is pretty easy. Just get everybody the new Apple laptops that don't have any USB ports, and you're sorted.

[00:22:49] Geva Solomonovich: That's definitely –

[00:22:51] Guy Podjarny: One way to approach it. Cool. Well, I think there are probably a lot of those components, and we're not going to be able to go through every single one of the different policies. What do you see in practice, when you go into companies, that people are doing poorly? Odds are a listener to this podcast is involved in some form of B2B startup. What are the most common mistakes that you see in the existing actual setup that the policy flushes out?

[00:23:23] Geva Solomonovich: I mean, some of the big mistakes are – well, I don't want to call them mistakes. They might have a vision of –

[00:23:31] Guy Podjarny: All right.

[00:23:31] Geva Solomonovich: Might there have a vision of how their network is structured. But at the end, when you come and look, well, see the database is sitting on the Internet. It's not protected on a secondary tier. Guess what, it's listening on the default Postgres or MSQL port. That's a recipe for disaster. There’s a difference between what they’re thinking they have or what they want to have or their architecture in mind and what's in place. That's definitely one thing. That's just a matter of attention. Most of the companies don't have a dedicated system administrator, don't have a dedicated a network engineer or something like that. That's kind of one category of things you would see.

The other one, I call it separation of concerns. When you start and you have a few employees, and most of them are your buddies, everybody gets access to everything. A single person generally has permissions to build and ruin your company at the same time. It's much better if you can divvy up the responsibility so that no single person has the ability to single-handedly cause a lot of damage. Let's give a concrete example. Your engineers are writing code. Okay. Of course, you don't fear any of your employees writing malicious code. But let's say they can write code with vulnerabilities. On the other hand, do they also have the ability to push that code to production? If a single person has all that access from the beginning to the end, he has more potential to cause even accidental damage.

If you separate those concerns, there's one engineer and one release manager. The release manager is another step that your code, that your product, needs to go through. Then you have one more gate, and that prevents a lot of those mistaken vulnerabilities. It definitely prevents a lot of malicious ones too. But let's assume nobody has malicious employees.

[00:25:31] Guy Podjarny: Yes. It's an interesting comment, because that indeed adds a layer of protection, but it does so by basically adding a gate, which is oftentimes something we try to avoid as part of continuous development. I'm curious what your mindset is on this around the use of some tools. You might be able to tell some Slack bot or something to deploy. You technically have the permission to do it because you go to Slack and you inform it, or you provide a command. But, one, it's well-logged and documented. Second, an attacker would have to compromise another system to get that done.

[00:26:04] Geva Solomonovich: Those are definitely the way to go, because you spend the effort once. You're very conscious about it. You make sure it has all the controls, whatever you need. It does only one activity, and it does it well. It gets your code to the right place on the system, doesn't deviate left or right, and has very little chance of screwing something up. That's definitely the way to go. From my perspective, that ties to one more thing that you see more often than not: the separation between your different environments. You have the production environment, the development environment, the staging environment. You really don't want to make the production keys readily available to everybody.

If you're not thinking about it from day one, then more often than not, the production keys are checked into the same GitHub repository as all the other keys. Any employee, including the 100th employee that joins the company, is going to download that repo and have all those keys. That's just not the way to go. I recommend to everybody: put the production keys somewhere else. More importantly, tie them to an automated system that will pull them from wherever they're sitting and get them onto the production server without anybody having to touch them day-to-day.
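
A sketch of that pattern, assuming AWS Secrets Manager via boto3; the secret name is hypothetical. The point is that the deploy automation, not a developer's laptop, fetches the key at release time.

```python
import boto3

def fetch_production_secret(name: str = "prod/payment-api-key") -> str:
    """Pull a production key at deploy time instead of checking it into Git."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=name)
    return response["SecretString"]  # never committed to the repo or copied by hand
```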

[00:27:21] Guy Podjarny: Yes. I think all of that is very sound advice. In general, I think the notion of thinking about the developer as an attack vector is something that's probably overlooked, right? People consider the risks that arise when an attacker attacks their systems but not when they attack their developers. There was an interesting play with XcodeGhost, the malware in the iOS world, where a malicious Xcode was distributed, and developers of mobile apps were somehow compromised; they unknowingly used XcodeGhost. But really, they weren't the target. They were just the distribution vehicle. The real target was reached by the attackers injecting malware into the applications that those developers submitted to the App Store and, therefore, compromising the many more users that installed those on their phones.

In that case, it's literally a distribution vehicle to many devices and users. But in the case of a B2B startup, or even a B2C startup, if the developer gets compromised, the software gets compromised. That might be your path to many users' data. That might be your path to many visiting users. I enjoyed the conversation on a lot of these components and the highlight of the practices in them. I think that might even be a good finishing note: a security policy gets triggered by a big customer asking for it, or by an audit requirement, but it really is a good opportunity to have all of these conversations and to think about them if you haven't already. It's not all just necessary evil and a piece of paperwork that you need to create.

[00:28:54] Geva Solomonovich: No, definitely not. Whatever gets a conversation going inside the organization, whatever external driver, external push you need, just as long as you have it, that's what counts.

[00:29:05] Guy Podjarny: Cool. I guess before I let you go, Geva, just one last question that I ask many of my guests: what's your one pet peeve or recommendation for a developer or a dev shop, an organization trying to up-level their security? What's the one thing that you would recommend they do?

[00:29:27] Geva Solomonovich: Well, let me give one concrete piece of advice that everybody can go after and do today, or check if they're doing today. Take into consideration: you have a company, you have your product, and you're specializing in it. But there are all these great infrastructure companies that do have security specialists, network architects, database architects. They're masters of building all this stuff. They're definitely way ahead of your organization. If there's one actionable recommendation I can give everybody, it's make sure all of your servers are behind a load balancer, a CDN, or some reverse proxy that's managed by your cloud infrastructure. If you go to your DNS table and you see an IP of one of your servers there, that's just not the way to go. Shell out the few bucks, put it behind a load balancer, and let Amazon, Google, or Microsoft, wherever your cloud is, be the first layer of defence between your company and the Internet.
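
One way to check you're following that advice is to verify that no public DNS name resolves straight to an origin server. A rough sketch using only Python's standard library; the hostnames and origin IPs are placeholders.

```python
import socket

ORIGIN_IPS = {"198.51.100.7", "198.51.100.8"}  # your actual server IPs

for hostname in ("www.example.com", "api.example.com"):
    # Collect every address the public name resolves to.
    resolved = {info[4][0] for info in socket.getaddrinfo(hostname, 443)}
    exposed = resolved & ORIGIN_IPS
    if exposed:
        print(f"{hostname} points straight at origin IP(s) {exposed}; "
              "put it behind a load balancer or CDN")
```
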
[00:30:29] Guy Podjarny: Yes. That sounds like really good advice. Introduce a little bit of the pros between you and the world, so you're not the low-hanging-fruit attack target.

[00:30:38] Geva Solomonovich: Yes. In that sense, there's always the story about the two guys who are walking in the forest, and they see a bear. The bear starts running after them. Well, you don't need to be the one that's running faster than the bear. You just need to be the one that's running faster than your friend. Just be a little bit more secure than your competitors. If that can be a milestone for you, it will definitely put you in a better place.

[00:31:04] Guy Podjarny: Just be more secure than the next guy. This was a super enlightening conversation, Geva. Thanks for coming over. If people have further questions for you, or they want to connect with you or contact you, how can they reach you?

[00:31:17] Geva Solomonovich: Well, best just to email me. My email is geva@snowypeaksecurity. That's G-E-V-A@snowypeaksecurity. Or use the contact form on my website, www.snowypeaksecurity.com.

[00:31:30] Guy Podjarny: Cool. Well, thanks a lot, and I hope you enjoyed this episode, and tune in for the next one.

[00:31:35] Geva Solomonovich: Thank you.

[END OF INTERVIEW]

[00:31:36] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over 100 videos about building developer tooling companies given by top experts in the field.
