Season 4, Episode 35

Secure Coding In C And C++ With Robert C. Seacord

Guests:
Robert Seacord

"Robert C. Seacord: I'm not sure if there's a lot less code being written in C. The less specified a language is, the more room there is to optimise it. There's this desire to want it all. But when push comes to shove, performance has been winning out over security in terms of the decisions that are being made. There's a lot of reasons to try to code securely to begin with. Fixing defects later in the development cycle, it's more likely that you'll fix it incorrectly or introduce additional defects while you're repairing an existing problem.

[INTRODUCTION]

[0:00:33] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow.

It is part of The Secure Developer community; check out thesecuredeveloper.com for great talks and content about developer security, and to ask questions and share your knowledge. The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com.

[INTERVIEW]

[0:01:07] Guy Podjarny: Hello, everybody. Welcome back to The Secure Developer. Thanks for joining in for another episode. Today, we have a great security trainer with us, Robert Seacord. Welcome to the show, Robert.

[0:01:16] Robert C. Seacord: Thanks for having me.

[0:01:18] Guy Podjarny: Robert, before we dig in, we're going to go a little bit more sort of bare metal here or maybe like a little bit more sort of C, C++ programming security and the likes later in the show. Can you give us a little bit of context about yourself, who you are, how you got into security, what do you do these days?

[0:01:32] Robert C. Seacord: Sure. These days, I'm a Technical Director at NCC Group. I split my time between doing secure coding training, developing secure coding training, research, and customer work, doing a lot of security code analysis for various customers, reviewing source code and the like. How I got into security: I started as a developer for IBM back in '84, and I had a startup company in '91. I worked for a company called SecureWare down in Atlanta, Georgia, and did not do any security work for them whatsoever. I continued on in my career and went back to the SEI in 1996.

Then, in 2003, I just sort of changed tracks completely and went from working in component-based software engineering over to the CERT team, originally on the vulnerability handling team. You can find, I guess, one or two vulnerabilities that I actually handled. Then, I didn't get a lot of direction while I was there, so I wandered off and started writing some books. I wound up writing Secure Coding in C and C++, and really liked the security field, because it gives me the opportunity to get very lost in the weeds of things, and not just have to deliver functionality on a schedule and move on to the next project.

[0:02:53] Guy Podjarny: Yes, and security is quality. Would you identify first and foremost as a developer, or more as a security person?

[0:02:59] Robert C. Seacord: These days, I'm right at the intersection of those two things. That's kind of my sweet spot. Because as a security person, I'm not the best. As far as people who are experts in the C language go, I'm not the best. I mean, in most rooms I walk into, I am. But when you walk into the C standards meeting, I'm the dumb guy.

[0:03:21] Guy Podjarny: But you're broad in your specialties. You have to internalise a lot of things; you can't just focus in on one thing. But let's dig into the meat of your training curriculum these days. You've written a lot and spoken a lot about secure C coding and the like. At the risk of condensing a world of knowledge into a few highlights, what would you say is the primary emphasis you give a development team today, when you come in and try to give them the core principles of secure development? And how much, if at all, do you feel that has changed over time?

[0:03:56] Robert C. Seacord: Well, I think the devil tends to be in the details. Rather than superficially treating a variety of topics, I tend to dive deep. Most notably, on the second day of my secure coding in C and C++ training, I talk about integers. Six hours seems like a long time to talk about integers, but it turns out they're very misunderstood. The reality of C and C++ programming is that buffer overflows are sort of the biggest issue, both writing outside the bounds of objects and reading outside the bounds of objects. The way you do that is, you add a pointer to an integer and then start dereferencing memory at that address. If you don't know what value is stored in that integer, you really don't know what that eventual pointer is referencing, so you can't have any assurance or confidence that it's not an out-of-bounds read or write.
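A minimal sketch of that pattern, with a hypothetical length field standing in for the unvalidated integer (the function names here are illustrative, not from any real codebase):

```c
#include <stddef.h>

/* Pointer plus unvalidated integer, then a dereference. If `len` comes
 * from untrusted input (say, a length field read off the network) and
 * exceeds the size of `dst`, the loop becomes an out-of-bounds write. */
void copy_unchecked(char *dst, const char *src, size_t len) {
    for (size_t i = 0; i < len; i++) {
        dst[i] = src[i]; /* no assurance this stays inside dst */
    }
}

/* Validating the integer before using it as an offset is what restores
 * confidence that every access stays inside the object. */
int copy_checked(char *dst, size_t dst_size, const char *src, size_t len) {
    if (len > dst_size) {
        return -1; /* reject rather than write past the end */
    }
    for (size_t i = 0; i < len; i++) {
        dst[i] = src[i];
    }
    return 0;
}
```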

[0:04:58] Guy Podjarny: I guess this statement would have been true 20 or 30 years ago as well; it's the same mistake. The systems have slightly changed, though. If nothing else, a lot of systems that used to be written in C and C++ are now written in other languages. They might be written in Java, or C#, or other higher-level languages where you don't deal with those pointers. Do you feel that, or am I just being a little biased, and everybody lives in their own world?

[0:05:27] Robert C. Seacord: I'm not sure if there's a lot less code being written in C. There was a point at which Java was being explored for the desktop. Eventually, I think it was abandoned as not having adequate performance for desktop applications. Now, it feels like it's largely relegated to running server-side software. Plus, there's just a world of embedded software; cars, all sorts of transportation, are all written in C. I've followed the TIOBE index over the years, which tracks language popularity. It used to be that Java, C, and C++ were all there with 20-plus percent of the market. But what's happened is that usage has balkanised, so there are more and more different languages, each with a smaller share of the market. At the top are still Java and C, and C++ has actually dropped off a bit. Last time I looked, I think it was in fourth position.

[0:06:31] Guy Podjarny: Yes. I think there's actually some reason for that, because from a programming perspective, C++ was a path towards more structure, maybe a little less of the low level that you control. But that role is now being taken up by, indeed, other languages.

[0:06:47] Robert C. Seacord: Yes, I kind of agree with that. I think, to a certain extent, some C++ people went to Java, the people who wanted the abstractions, and the people who were keen on performance and a small footprint and all that stuff moved more to C, and sort of vacated the C++ space a bit.

[0:07:05] Guy Podjarny: This is an interesting observation, and an important thing to remember. Like I said, the volume of developers as a whole has also increased, so the number of C and C++ developers is probably continuing to grow as well, and indeed in all those brave new embedded worlds. In the context of security, how much are you dealing with agile development? Do you feel like in those contexts the world of C development is a little less agile, less driven by these biweekly shipments? Or is it getting the same types of pressures?

[0:07:39] Robert C. Seacord: Yes. I don't see C being driven so much by agile development as maybe website development and projects like that. Agile projects I've been involved with tend to have a lot of problems with security. It doesn't typically seem to fit into the model of quick release cycles. There's always this short-term push to get functionality out and deployed. Secure coding, a lot of the time, is the antithesis of that: the focus is on gaining assurance in the code and the functionality you're about to deploy. Some people, I've seen, even have trouble expressing security in terms of a backlog that they can address as part of their release cycle.

There are probably things people are doing to make it more appropriate for security. But to a certain extent, I feel it's not really built into the model. But then again, who knows? I think when you go out and look at real companies and what they're doing in terms of security processes, it's always alarmingly much worse than you can imagine. I mean, you'll see companies who don't have configuration management in place. Real, real basic things like that.

[0:09:00] Guy Podjarny: Yes, I understand. I think these are just different worlds. In a website development environment, you might be pushed towards faster iteration; languages might be higher level, maybe a bit more agile. We're discussing a slightly different world here, which is, like you mentioned, embedded systems, connected cars, where quality and assurance are much more important. You can't iterate quite as much; you can't ship a new car every two weeks. Even the update mechanisms are a little more controlled there. But these systems are also written in languages where more damage can be caused, so you have maybe a slightly higher responsibility to invest in that assurance, I guess.

[0:09:39] Robert C. Seacord: Yes. I mean, when I was at CERT, we kept track of vulnerabilities, but we didn't deal with all of them. We focused on the more critical ones that would affect things like critical infrastructure. As a result of that, two-thirds of the vulnerabilities we found in the CERT database were related to C and C++ code. Again, that's because we focused on critical infrastructure. We didn't focus on mom-and-pop websites, in which case it would have been all PHP and cross-site scripting vulnerabilities.

[0:10:12] Guy Podjarny: That makes sense. Again, these are different, super-high-gravity surroundings, maybe ones where that balance between agility and safety can be struck a little differently. You do a lot of this assessment and reviewing now. Maybe share some bits of what works well for you. If I can start: what are your favourite reviewing tools when you do this sort of analysis of a C codebase?

[0:10:34] Robert C. Seacord: Yes. I guess the most surprising thing to me is that people tend not to use the tools in front of them, starting with the compiler. We'll talk to organisations that are talking about buying Coverity, or buying Fortify, or some other high-end analysis tool. But they haven't set the warning level on their compiler, or they're disabling warnings, so they're not seeing critical problems.

My favourite warning is the signed-to-unsigned conversion warning that developers like to turn off. It turns out that's a really bad idea. Many of those warnings are identifying real problems and potential vulnerabilities in the code. I would say, just start by using your compilers better. Clang and GCC now have a bunch of dynamic analysis tools integrated with the compilers. There's AddressSanitizer, MemorySanitizer, UndefinedBehaviorSanitizer, and ThreadSanitizer for analysing parallel execution. All those tools, along with the static analysis capabilities of the compilers, are very effective.
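To illustrate the kind of bug that warning catches, here's a made-up example; the flags shown are standard GCC/Clang options:

```c
#include <stdio.h>
#include <string.h>

/* Compile with warnings and sanitizers enabled, for example:
 *   gcc -Wall -Wextra -Wconversion -Wsign-conversion \
 *       -fsanitize=address,undefined demo.c
 */
int main(void) {
    char buf[16];
    int n = -1; /* e.g. an error return value that was never checked */

    /* memcpy takes a size_t: the negative int silently converts to a
     * huge unsigned value, and the copy runs far out of bounds.
     * -Wsign-conversion flags the conversion at compile time, and
     * AddressSanitizer catches the resulting overflow at run time. */
    memcpy(buf, "some input data", n);

    printf("%s\n", buf);
    return 0;
}
```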

[0:11:49] Guy Podjarny: How do you know when to be happy with your results? You look at these, they're complicated beasts, and you run the review. How do you know you've explored the unknown sufficiently to feel like it's ready to go?

[0:12:06] Robert C. Seacord: Yes. It's almost about defect detection rates: how many defects are you finding per day, or per hour of analysis? Once that rate declines to a certain point, you get past the point where you're getting a good return from it. Usually, at that point, it's a good idea to change strategies, because a lot of times, once you go to a different tool, a different approach, suddenly you start to find new classes of defects that you weren't finding with the old approach. Typically, you do what you can until you run out of time.

[0:12:48] Guy Podjarny: Yes. Until reality hits you.

[0:12:49] Robert C. Seacord: Yep. But usually, an indicator that you're getting there is when you're not finding things as quickly, when you get to a point of diminishing returns.

[0:13:00] Guy Podjarny: I assume in some of these systems you have some form of continuous build, some form of automated builds. How do you set standards to know you haven't slipped, almost the regression-test aspect of security? Do you feel like there are good tools around that? Does it come back again to the compiler warnings and disallowing them?

[0:13:21] Robert C. Seacord: For regression tests?

[0:13:22] Guy Podjarny: Basically, to know, like you've done: you sat down, somebody hired the top talent of Robert Seacord, you've gone through, you've done an analysis to help them get to a point of higher comfort. Now, they don't want to slip, they don't want to regress. I guess regression tests in terms of, I'll say, [inaudible 0:13:39] quality, to set lines and thresholds.

[0:13:43] Robert C. Seacord: Right. For example, NCC Group will do a security analysis of a system, including analysing the source code. We'll write a report, we'll identify the defects, and we'll explain what the problem is, what possible mitigations are, and so forth. That gets moved around. But a lot of times, it makes sense to follow that up with some onsite training, where we'll come in and talk to the developers, give them the training course, and maybe supplement that with some examples from their own system, actual mistakes that they made, and try to up their game as a whole. Because I think what you don't want to do is always rely on the pen testers to find the problems in the code. It's not necessarily the best approach. You really want to not code the errors to begin with, because the most effective time to code correctly and securely is while you're writing the code, not any time you come back to it.

A lot of times, you're looking at someone else's code, so you have to learn that person's code. Sometimes, you're looking at your own code, but often you have to relearn your own code, because enough time has passed that you're not really familiar with it. Fixing defects later in the development cycle, it's more likely that you'll fix them incorrectly or introduce additional defects while you're repairing an existing problem. Again, there are a lot of reasons to try to code securely to begin with.

[0:15:16] Guy Podjarny: I'm going to switch gears a little here, and maybe talk about people; indeed, talk about these dev teams from a team-composition perspective. We talked tech. Let's talk a little bit about the teams that you teach. When you come in to do a training, or when you interact with teams to share the results, do you feel like there's been change over the last while? You've been doing this for a good many years. Are approaches different? Is there higher awareness or appreciation for security? Is it about the same? Do you get pushback from dev teams, like, "It's not my job"? How do you see the state of the industry among the customer base you work with?

[0:15:56] Robert C. Seacord: Yes. Well, I would say there are some changes. We don't really get the same type of arguments that we got into years ago, where we'd call a vendor and say, "You have a vulnerability in your code," and they would say, "Prove it." Nowadays, they're more willing to accept that at face value. From a teaching perspective, I mentioned this in the intro, but I started out as a developer, so I've always maintained the developer focus.

I've had security people come and try to train me, unsuccessfully, because they would say really stupid things, just impractical things that we would never do, could never do. Security people tend to be very dogmatic in their approach without having any kind of firm basis for it. When I do teach, I'll tell students that security is a quality that you have to achieve. People sometimes ask me why people [inaudible 0:16:54] C as a programming language. One of my answers is that security is typically fourth or fifth on your list of reasons for picking a language, right? The first reason would be, "Hey, we've got existing software we developed for this platform." It's in C, or C++, or what have you. There's an advantage to keeping your code base in the same language.

I've had conversations recently about Frankenstein systems, where they started in C, and then someone switched to Java, and then C#, and then Rust, and then Go. Then, you have 12, or 15, or 20 different languages. Those systems become very brittle and very difficult to maintain. That's the first reason. The second reason might be that that's where your expertise is. If you have a group of expert C developers and you tell them to build the next system in Java, I can guarantee that system will be less secure than the system those developers would have built in the C language. Then, you get to things like performance, and eventually, security might be fourth or fifth on the list of reasons you would pick a given language.

[0:18:02] Guy Podjarny: I think you're right, and I think, fundamentally, the choice of language is attuned to what you're trying to do. If you built an embedded system in Java, maybe there are some explicit cases where that makes sense. But more often than not, it's just not as performant, or it's too heavy in resource consumption to fly, and security has to cope; you have to build securely despite the choice of language, whether that choice is a helpful or a negative thing for you.

[0:18:25] Robert C. Seacord: Yes. I mean, a lot of security advice tends to be, again, overly dogmatic. Just saying something as simple as "always check bounds" is overly prescriptive, because you can look at many loops in C and C++ code and just prove that they don't have an out-of-bounds read or write. Why waste cycles securing something where there's no possibility of a defect or error, when you can use those cycles elsewhere to provide real security? Performance and security always tend to be a trade-off to some extent.
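A small, hypothetical illustration of that trade-off (not from any real codebase): in the first loop the bounds are provable from the code itself, so a run-time check buys nothing; in the second, the index depends on external input, and the check is doing real security work.

```c
#include <stddef.h>

#define N 64

/* Bounds provable by inspection: i ranges over [0, N) and arr has N
 * elements, so no run-time check is needed, and none is worth paying for. */
void zero_all(int arr[N]) {
    for (size_t i = 0; i < N; i++) {
        arr[i] = 0;
    }
}

/* Bounds depend on an external index: here the check is not dogma,
 * it is what stands between you and an out-of-bounds write. */
int set_one(int arr[N], size_t idx, int value) {
    if (idx >= N) {
        return -1;
    }
    arr[idx] = value;
    return 0;
}
```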

[0:19:02] Guy Podjarny: I think we're describing kind of an interesting profile here, which is a little different from a bunch of the example systems we often have on the show. We're talking about embedded systems, or low-level systems, where performance is material, of very high importance. Maybe oftentimes a physical thing that gets shipped, or something that doesn't get delivered or deployed quite as often. Therefore, between the sensitivity and maybe just the form factor, they end up doing security in a slightly more thorough fashion. How do you feel about the engagement? People contract NCC Group, or they work with you. Do you feel like before, they would bring you in at the end of the process, and now they bring you in more midway? Has there been a change around when that type of security process is done? I understand that the deployment is less agile, but do you find it still is waterfall? Do you come in after six months of code have been written, or is it more collaborative?

[0:20:04] Robert C. Seacord: Yes, it's a bit hard to say. We certainly get engaged at all points in the lifecycle. A good time to engage us, I think, is early, with the training. A lot of times, at the beginning of a project, you have a small group of architects and designers who are coming up with the initial architecture and design, and you have a lot of developers who aren't fully engaged at that point in the process, because they're more novice programmers and not fully engaged in the design process. That's a good time to deliver some secure coding training, to get those folks up to speed while they're not necessarily fully engaged yet in the development process.

We do get asked to do architecture and design reviews; those are always worthwhile endeavours. But there are still a lot of companies that bring us in for pen testing, expect the pen testing to go great, are shocked and dismayed by the results of the pen test, and then decide to bring us in. I would say, even more commonly, some big exploit gets discovered or published. Then, the alarm bells go off, and the organisation decides it has to address security more proactively.

[0:21:21] Guy Podjarny: I think that's true at all levels, all levels of seniority and all levels of the stack. Security becomes visible when something big happens; it acts as this big hit to that feedback loop and mobilises people to action. I think this was a really interesting conversation, because I feel, on one hand, a lot of the principles you're describing are the same as they would be in any language. Like this notion that people at the earlier stage of their career need more of the secure training and education element, while people who are further along might be looking to you more for subsequent verification, but not quite as they go. Even a lot of the commentary about teaching holds; the specific examples change, whether it's out-of-bounds reads and writes versus cross-site scripting, or sanitising inputs and outputs.

Fundamentally, the idea of teaching a principle versus teaching a specific, I think, holds as well. Maybe the biggest change here is in the trade-offs and the specifics, as well as the pace at which this world works: the risk tolerance around the likelihood of a problem, and maybe the tolerance for slightly slower paces.

[0:22:38] Robert C. Seacord: Yes. When I think about the training, because I've been delivering secure coding in C and C++ training since probably 2005, so some time now, the problems don't change very much. They sort of remain there in the languages, and particularly in C, there's a very strong reluctance to change the language. Kind of the first rule in the standards committee is: don't break existing code. They're okay breaking compiler implementations, but they don't want to break existing source code that's out there. Because the saying is, the world runs on C, and we don't want to break the world.

You'll find that there are code bases out there where there really aren't maintainers left for that code. They'll update a compiler and rebuild with the latest version, and if something goes wrong, they're kind of out of luck; they don't know how to repair that code anymore. I think the things that probably change the most are the solutions. There's different and better tooling that comes along, different and better processes. Sometimes, newer libraries are introduced, which are potentially more usable and more secure.

[0:23:53] Guy Podjarny: Yes. I think that's a good thing to hear. The ecosystem, the surroundings, evolve, while this cornerstone of software development that is C remains a little bit unchanged, I guess you could say for historical reasons, but also because it's pretty darn powerful. It allows you to do a lot of things, including shoot yourself in the foot. It lets you do a lot of good things as well.

[0:24:19] Robert C. Seacord: Yes. In the time I've been involved in C standardisation, I would say that it's really still driven by performance more than security. We have these undefined behaviours in the language, and the less specified a language is, the more room there is to optimise it. The simple view of that is: if you have to go from point A to point C, but you have to stop at point B on the way, and you try to optimise your route, that constraint of stopping at point B is going to limit your ability to optimise the route. But if you can eliminate the necessity of stopping at point B, you can come up with much faster routes to your final destination.

One of the things that's been going on in the evolution of C is that compiler writers are taking advantage of these undefined behaviours to do greater and greater optimisations. There's this weird pushback from the C community. Representatives of the C community will physically show up at standards meetings, and they'll say, "We've had it up to here with these optimisations. We've written this code, the code has always worked, and we've always known what this code meant. But now, you're doing these optimisations, and our code is broken. Cut it out."

The compiler writers will say, "Well, okay, you can do without these optimisations." Then, the C developers will say, "Well, no. We want those optimisations." Then, the compiler writers will throw their arms up in, potentially, mock disbelief. There's this desire to want it all, and it's not necessarily feasible. But when push comes to shove, performance has been winning out over security in terms of the decisions that are being made.
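A classic instance of the dynamic being described, sketched as a hypothetical example: signed integer overflow is undefined behaviour in C, so a compiler is allowed to assume it never happens and may delete a check that relies on wrapping.

```c
#include <limits.h>
#include <stdio.h>

/* Signed overflow is undefined behaviour, so the compiler may assume
 * x + 1 never wraps. Under that assumption, x + 1 < x is always false,
 * and an optimising compiler is entitled to reduce this function to
 * simply returning 0. This is exactly the "my code always worked, and
 * now it's broken" complaint described above. */
int wraps_on_increment(int x) {
    return x + 1 < x;
}

/* The well-defined way to ask the same question. */
int wraps_on_increment_safe(int x) {
    return x == INT_MAX;
}

int main(void) {
    printf("%d\n", wraps_on_increment_safe(INT_MAX)); /* prints 1 */
    return 0;
}
```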

[0:26:18] Guy Podjarny: Yes. It's an aspect of the functionality. You can be secure all the way to bankruptcy. At the end of the day, business value is what dominates, and security is invisible; it's something that you have to work to make visible. Robert, thanks for the good guidance here, and for sharing your experiences as we went through the show. Before I let you go, I like to ask every guest on the show: if you have a team of C developers looking to level up their security knowledge, what's the one small piece of advice, or the pet peeve you get annoyed with people repeatedly getting wrong, that you would give that team to get better at security?

[0:27:01] Robert C. Seacord: I suspect that every time I get asked this question, I give a different answer, based on what's most on my mind at the time. But this time, I think I'll say: write some C code. Imagine what sort of assembly code is going to be generated, then take a look to see what assembly code actually gets generated. Then, when your expectation doesn't match the reality, read the standard again, and repeat until you can predict what the code you're writing actually does. Because people are increasingly surprised by the semantics of the language and what the compilers are doing these days.
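One hypothetical way to practise that exercise: write a small function, predict the assembly, then generate it with the compiler (for example, gcc -O2 -S, or an online disassembly viewer) and compare.

```c
/* predict.c - guess the assembly before looking.
 * Generate it with, for example:  gcc -O2 -S predict.c
 * Many people expect a loop in the output; at -O2, GCC and Clang
 * typically replace the whole loop with closed-form arithmetic,
 * which is exactly the kind of surprise this exercise is meant
 * to surface. */
unsigned sum_to_n(unsigned n) {
    unsigned total = 0;
    for (unsigned i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}
```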

[0:27:39] Guy Podjarny: Cool. Yes, that's good advice. Well, once again, thanks for coming on the show, Robert.

[0:27:43] Robert C. Seacord: Thanks again for having me.

[0:27:45] Guy Podjarny: Thanks to everybody for tuning in, and I hope you join us for the next one.

[OUTRO]

[0:27:50] Announcer: That's all we have time for today. If you'd like to come on as a guest on this show, or get involved in this community, find us at thesecuredeveloper.com, or on Twitter, @thesecuredev. Visit heavybit.com to find additional episodes, full transcriptions, and other great podcasts. See you next time.
