Episode 4

Season 1, Episode 4

Getting Down To The Metal With Eric Lawrence

Guests:
Eric Lawrence

In episode #4 of The Secure Developer, Guy is joined by Eric Lawrence of the Google Chrome security team. Eric and Guy begin with a discussion on what it takes to be a great security engineer – namely curiosity and a willingness to learn. Later they discuss the growing importance of the modern web browser, and how security previously only found in operating systems is now moving into browsers themselves. Finally they discuss the current state of HTTPS, including the carrots and the sticks that browser designers like Eric have at their disposal.



"GP: I don't really subscribe to that proposition. I feel like the developer mindset of understanding the components, maybe the engineer mindset, actually very much applies to security as well.

"EL: As more and more of your data moves to the web, that's where the protection needs to be. People don't really think that attacks are going to happen to them."

"GP: I think a lot of it is about secure defaults and just sort of explicit. Make it very explicit."

"EL: But you don't need local code execution necessarily anymore to attack the cloud. The reality of this situation is bad guys will make a buck any way they can. You want to make sure that the application, the website, the things that your designers have worked really hard to deliver is the one that the client actually sees."

[00:00:42] GP: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. And you're listening to The Secure Developer, a podcast about security for developers. Covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show or if you would like to suggest a topic for us to discuss, find us on Twitter @thesecuredev.

[INTERVIEW]

[00:01:14] GP: Hello, everybody. Welcome back to The Secure Developer where we talk about everything security for developers. Including tools, best practices, and general ideas about how you can help build more secure applications.

With me today, I have the pleasure of having Eric Lawrence from Google, who's doing developer relations that focus specifically on security. Thanks for coming on the show, Eric.

[00:01:33] EL: Oh, great to be here.

GP: I've got a whole bunch of questions for you. But before we dig into that, can you maybe just give a bit of a background about your job today? How did you get into security? What's the path that led you to your spot today in the security developer relations world at Google?

[00:01:51] EL: Great. Sure. Actually, technically, I'm a software engineer on the Google Chrome Security Team. But I got hired for the purpose of evangelizing HTTPS to the masses and finding places where the Chrome team could do more to help developers move seamlessly to HTTPS.

And so, my life right now is pretty much all about getting sites onto secure protocols and helping them out in that task wherever we can. And whether that's by creating documentation, guidance, making changes in Chrome to smooth rollouts, or talking to the developers who are encountering problems and figuring out what those problems are and finding a way to help them out.

How I got there is a bit of an odd path. I know a lot of people start out in security from the beginning with the notion of, "Hey, I want to do software security," and so on. I'm something of an old guy, however. I was an intern at Microsoft in 1999 and I worked on the product that turned into SharePoint. At the time, it was called Office Web Server. And so, I've been doing web things for quite a long time. But I had no particular security bent.

Just before I started full-time at Microsoft in 2001, I was on the phone with my new team, who hadn't worked with me previously. I was going to work on a new feature team within Office called Assistance and Worldwide Services. And my conversation with those guys was interrupted by my computer, which played a clip from Star Trek. Something about incoming fire, shields holding. And the person on the phone asked what that was, and I laughed.

And what it had been was – at the time, this was back in the heyday of IIS worms. And so, there was a worm going around campus attacking Windows 2000 servers. And I had written a little ISAPI filter. It was a trivial little thing that looked at the incoming queries coming into the web server. And if it saw the signature of a known attack, it would block it and play this audio clip.

And the guy that I was talking to on the phone said, "Oh, that's great. You've got a background in security. You'll be our security PM." And I laughed really hard and assumed he was joking. And then I showed up on the first day and asked what I was working on, and they said, "Well, you're going to own our clip art website." The clip art website at the time had a million visitors a day. And so, that was kind of cool. And they said, "And, of course, you'll also be our security PM." And I laughed and tried to explain, "No. No. I don't know anything about security." And the response was effectively, "Well, no one else here does either." This was just the beginning of the Trustworthy Computing era at Microsoft. "But we have this book from Michael Howard, Writing Secure Code. You'll read that and you'll become an expert on that topic."

And so, I thought this was a little bit preposterous. But I'm the new guy. I'm not really going to protest too much that I can't do a job that they've hired me for. And so, I read the book. And it turned out that one attribute that I have was really super useful for security. And that's kind of curiosity about how things work and a willingness to go all the way down to the metal.

And so, one of the things that I did sort of over the following year was really get into security things. And you don't think of clip art as a good place to get into security. But as it turns out, the clip art file formats were super interesting. Because everybody trusts clip art. It's just clip art. Right?

And as a consequence, all of those file formats were configured to automatically open on Windows machines with no prompt, whatsoever. And there was no gating as to where they came from. And so, you could be surfing the web. And the site could serve you a clip art package, which would immediately be opened by native code on your machine with no prompts, whatsoever.

And so, I spent a summer taking over a machine with clip art and I thought, "Hey, I actually have something of an aptitude for this." And so, that's kind of how I got into security, and it carried me forward from there. After a couple of years of working on clip art, I joined the IE team. Not because I wanted to work on security, but because there were some problems in the way IE6 was behaving in handling our clip art website. And Microsoft source code wasn't open across teams.

And so, I figured, "Well, I can join the IE team. Find out all the bugs that are causing this problem for our website and fix them." And so, I did that. But as a consequence, I was also on the trusted networking team. And so, effectively, I started working for security teams in IE starting in 2004.

After about – I guess it was four or five years, I started leading the security team for IE. And so, I did that for a couple years. On the side, I was working on a side project called Fiddler. Fiddler is a web debugging proxy. It runs on Windows, and on Linux now, and kind of sort of on Mac. But not really. And Fiddler, being an HTTP proxy, is in a very privileged position to do things that are very interesting for security and security testers.

And so, I got kind of deeper into security through Fiddler. And in 2012, I left Microsoft when Fiddler was acquired by Telerik. And I moved to Texas, of all places, and started coding Fiddler for Telerik. And I did that for three years. When an opportunity came up to join the Chrome Security Team, I really was impressed by sort of the quality the people that Chrome had working on security and was excited at the opportunity to kind of do security and web browsers again for a while. I joined the Chrome Security Team and started in January of 2016.

[00:06:58] GP: Cool. That's quite a journey. And, actually, a lot of interesting things about it. I mean, one, the thing you noted about having some aptitude for security being related to the curiosity to go all the way deep down. I feel like that's often times also a good property of a good developer. Of somebody that wants to understand this. Oftentimes, debugging, or troubleshooting an ops problem, or something like that. It's very much this problem-solving case where you need to sort of get down and understand sort of the bits and bytes of what's moving and what's happening. And I like that. I always feel like there's a lot of separation sometimes in the world of security. You talk about the breakers and the builders and how a breaker's mind, an attacker's mind works so differently. And that it's a different person, different persona.

And I don't really subscribe to that proposition. I feel like the developer mindset of understanding the components, maybe the engineer mindset actually very much applies to security as well. It's just you anticipate bugs versus maybe trying to build around them. I like that.

And I'm definitely a fan of Fiddler. I've used it a fair bit. Until I moved to a Mac, which unfortunately kind of killed that opportunity. As long as I was working on Windows, Fiddler was very much a favourite tool. I appreciate that. I'm a happy user of the tool.

[00:08:14] EL: Thanks.

[00:08:16] GP: I guess fast forwarding from that path right and from all this world, you sort of land in the Chrome Security Team. And I think, today, this world of browser security, of front-end security, this sort of term that's not quite well-defined yet is an interesting one. It seems to have evolved.

I think a lot of security controls have been added to browsers today to go beyond just the notion of securing the browser itself onto providing controls to make the application, the web application that is running through the browser secure. I guess, how do you see those right now? Do you have some favorites of the recent additions? Is there some trend that at least from the Chrome perspective you're trying to achieve?

[00:08:59] EL: Right. I mean, I think the key – I've worked in browsers for quite some time now. It's 12 years, which seems utterly ridiculous to me. Because it all feels so new. And that's partly because things are changing so much. In 2004, when I joined IE, all the interest was in effectively, "Hey, we need to get root on the machine." You get an arbitrary code execution through some memory corruption and you completely own the machine.

And there was a ton of investment in things like sandboxing the browser and making it hard to get reliable memory corruption and code execution. Things like DEP/NX and ASLR. Things to complicate the allocation of memory such that it was less predictable. And so, those things were really kind of the focus of what the IE team was working on in the 2004 to 2007-ish time frame.

But, eventually, sandboxes got reasonably good, particularly the Chrome sandbox. And memory protections and other features got better as well. And so, the challenge sort of moved a little bit. Because in the old days, the reason that we wanted to protect the machine so much was that's where the user kept all their information.

And so, if you got remote code execution on the machine, you needed it in order to steal the information that the user had on the machine. But the world has moved quite a bit since then. And for many people, all the data that's of value to them is in the cloud somewhere.

And so, this is particularly true on devices like Chromebooks where there's very little local storage and almost everything is in the cloud. But even for people who are using Windows PCs or Macs, very often all the data that matters to them is in the cloud.

And so, you've got this situation where certainly bad guys are still very interested in getting arbitrary code execution on the local machine. Because if they can own your local machine, they can attack your cloud. But you don't need local code execution necessarily anymore to attack the cloud. And this is the notion of where sort of web app security comes into play. This idea that, as more and more of your data moves to the web, that's where the protection needs to be.

And so, something like a universal cross-site scripting bug that allows a site to steal data from any other website you happen to be logged into is now in many cases as dangerous as a remote code execution bug in your browser itself. And so, we've gotten a lot more interest now in providing ways for web applications to defend themselves against attackers.

Coupled with that, we also have an increase in capabilities of the web platform. And so, back in the day, if you had arbitrary code execution in the user's browser that was confined to the browser within the sandbox, there wasn't necessarily a whole lot you could do because we didn't have APIs necessarily for things like recording the user's camera, or turning on the user's microphone, getting their location, and so forth. And now all those capabilities are moving to the web platform. And that's because the web platform needs these capabilities if it wants to be a compelling platform for developers who have the option of doing things like building mobile applications in Java for Android, or in Objective-C, or Swift for iOS.

And so, the web platform is inherently getting more powerful. And, thus, a compromise that allows you to execute code is certainly much more interesting than it has been in the past. And so, we've got a lot of browser-side features. Some of the first ones were very trivial things, like HTTP-only cookies, and cross-site scripting filters, and things like that.

And then, over time, things evolved a bit with attacks like clickjacking – or UI redress attacks, as they like to call them in the security research community – where the user is sort of enticed to perform an action in the browser without recognizing that it's happening.

And so, browsers have started to layer on defences against specific attacks like that. As well as starting to create primitives that have analogs in the old world and now are applicable to the web. And so, you've got things like Subresource Integrity hashes. You can say, "Hey, go download this shared JavaScript library from this third-party CDN site. But only run it and allow it to execute code if the hash is the thing that I'm expecting it to be." That way, if someone happens to compromise some CDN's repository of jQuery, they don't get to own the entire internet. Because sites that are using Subresource Integrity will not run those scripts, because they're not the scripts that the developer expected. And so, we're starting to see things like that.
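As a minimal sketch of the idea (the CDN URL and local file path below are illustrative, and computing the hash in Node is just one way to do it), a Subresource Integrity hash can be generated and embedded like this:

```ts
// Minimal sketch: computing a Subresource Integrity (SRI) hash for a script
// served from a third-party CDN. File path and CDN URL are illustrative.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const body = readFileSync("vendor/jquery.min.js"); // a local copy of the exact CDN file
const digest = createHash("sha384").update(body).digest("base64");

// The tag to embed in your page: the browser refuses to execute the script
// if the downloaded bytes don't hash to the declared value.
console.log(
  `<script src="https://cdn.example.com/jquery.min.js" ` +
    `integrity="sha384-${digest}" crossorigin="anonymous"></script>`
);
```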

We're also starting to see some gradual introductions of things to restrict legacy features that were not designed in the way that we would have designed them today. In particular, you've got HTTP cookies. Cookies are super useful. And they're used by virtually every site. But cookies have some properties that make them very bad from a security point of view. Things like there's not a good distinction necessarily between cookies that came from a secure origin versus cookies that came from an insecure origin.

And so, many sites are both HTTP and HTTPS. And if you can get a cookie set on HTTP, that cookie will get sent to the HTTPS server. And so, cookies didn't have a protection against that. There's a protection going the other way. You can mark your cookie secure so that a securely set cookie is not sent to HTTP. But it doesn't prevent the reverse.

And so, now there's actually a feature called cookie prefixes, which is working its way through the design process, where a cookie can have a magic name: if it's prefixed with __Host- or __Secure-, it's not set unless it was set from a secure origin. And so, a server receiving such a cookie can tell, "Hey, this was set securely. So, I know it came from me."
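As a minimal sketch of what issuing such a cookie looks like (the cookie name and value are illustrative), note that browsers only accept a __Host- cookie if it is marked Secure, arrives from a secure origin, uses Path=/, and carries no Domain attribute:

```ts
// Minimal sketch: a Node HTTP server issuing a __Host- prefixed session
// cookie. The prefix makes the browser enforce that the cookie was set
// securely, closing the HTTP-overwrites-HTTPS gap described above.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader(
    "Set-Cookie",
    "__Host-session=abc123; Secure; HttpOnly; Path=/; SameSite=Lax"
  );
  res.end("cookie set");
}).listen(8080);
```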

But a lot of the security that we're designing today is predicated on the notion that the user is getting the application from the server securely. Because as you add these powerful capabilities, it becomes very dangerous if there's anyone on the network who can interfere with the delivery of that application. In the same way that when you download programs to run natively on your PC, you're trusting that Windows or macOS is actually checking the signature of that program to make sure that it's the one you asked for and wasn't corrupted by a virus or tampered with by a third party.

Well, HTTPS is really what we have for the web. And so, a lot of the capabilities that have been added to the web platform are starting to require HTTPS. If you want to use geolocation, you need to deliver your website over an HTTPS connection or the geolocation calls will fail.
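As an illustration of that gate (a sketch of page script, not any particular site's code), the same call that prompts the user on HTTPS simply fails on an insecure origin:

```ts
// Sketch of page script calling the Geolocation API. Over plain HTTP, modern
// browsers invoke the error callback instead of prompting the user.
navigator.geolocation.getCurrentPosition(
  (pos) =>
    console.log("position:", pos.coords.latitude, pos.coords.longitude),
  (err) =>
    console.warn("failed (e.g. insecure origin or user denied):", err.message)
);
```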

The thing that Google as a whole is probably most excited about from the web platform perspective is what we're calling progressive web applications. And so, these are web applications that start to blur the line between the web applications of old and the native applications that have become so popular for mobile.

And so, the most powerful feature of progressive web applications is called service worker. Service worker is kind of, you could argue in some respects, like a mini little Fiddler that runs inside of your process or inside your web application and it's able to service web requests. And so, you can create applications that work perfectly offline using service workers.

But service worker, because it has the ability to intercept subsequent network requests and respond with whatever it likes, the service worker itself obviously needs to be delivered securely. And so, in order to help achieve the vision of having a web platform that's fully powerful and fully competitive with native applications, we need sites to start moving to HTTPS in order to unlock features like geolocation, service worker, and the like.
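A minimal sketch of what that opt-in looks like from page script (the /sw.js path is illustrative); registration is only permitted on secure origins, which is part of why HTTPS unlocks the feature:

```ts
// Sketch: registering a service worker from page script. Once registered,
// the worker at /sw.js can intercept this origin's network requests, so
// browsers only allow registration on secure origins.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js") // illustrative path to the worker script
    .then((reg) => console.log("registered with scope:", reg.scope))
    .catch((err) => console.error("registration failed:", err));
}
```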

And beyond that, we're also trying to just raise awareness of the dangers of non-secure applications in general. People don't really think that attacks are going to happen to them. They think, "Hey, the attacks that you're hearing about, the revelations from the Edward Snowdens of the world and the like, are only against people that are doing something super interesting."

But the reality of the situation is bad guys will make a buck any way they can. And if they can do an attack that injects JavaScript into web pages that triggers some funky behaviour in your browser that tricks you into running malware or clicking on their ads, they'll do that. And we're starting to see cases. There was a really great paper in the spring by some researchers who found that there are some interesting attacks that can be performed at the TCP/IP level, where they can control some of the content that's going over HTTP links. They're not necessarily a man in the middle. They're more like a man on the side. And they can serve effectively malicious JavaScript for pages they don't own. And so, these sorts of attacks are obviously devastating.

[00:17:38] GP: Yeah. And I think basically that whole continuum and evolution that you've described is interesting. I'd like to pick it apart because there's a whole bunch of interesting entities here. Right? I guess, at the beginning, you were talking about how the complexity of the applications and the value of them has evolved, making them a richer target. Right? More opportunities to get in and more value if you do.

I like that. I like the analogy. It's true that you can argue a little bit about what the situation is in mobile. But on the desktop, we increasingly use only a very select number of native applications. It's all web applications now. And I like the cyclical nature. I don't know if you'd put it this way, but I sort of agree with the analogy: some of the security controls that we had on the operating system need to move to this browser operating system, along with all the vetting and verification.

I definitely feel like the same evolution happened on the operating systems themselves. It started off from securing the operating system and it continued on to help protect applications on the operating system from being hacked or sort of being a mechanism to spread viruses. And now the web world is undergoing a similar transition where the browser is the operating system. It's not just in charge of securing itself. But also, securing the applications on it.

I guess the good news is that the browser is actually in a much more controlling position. And to an extent, I love service worker. I love these new capabilities that are coming into the browser. I think I am pretty firmly on the web side of the web-versus-native question in terms of where I think the evolution and the path is going. And through that, I believe service worker and some related technologies are amazing ways to bring native-like experiences and more into the world of the web. But I am afraid of it from a security perspective, to a pretty large degree.

I haven't opened up HTTPS quite yet. You sort of touched on it. And I think that one is worthy of a lot of conversation by itself. But in general, it feels like there's some contention between the new functionality that we're adding and how it's so powerful. And the sort of security risks I guess that we're exposing in the process.

I'm curious, when you see the work going on inside the Chrome Security Team, as well as when you talk about giving advice to web developers building and trying to use those technologies, do you have some insights around how to judge those trade-offs? I mean, how to choose whether you want to tap into some cool new feature and new capability? And how to understand the security risks? And does it happen that you have great new functionality ideas in the browser that you end up disqualifying for security reasons?

[00:20:28] EL: Yeah. Certainly, I mean, it's always been a hard tradeoff. And I like to think and I hope that the tradeoff is harder for designers of web browsers than it is for site operators building sites. Because unless you're trying super experimental stuff behind flags, in general, if a browser has an API available, we want developers to be able to use it securely and not shoot themselves in the foot with it.

There are two aspects to that. When I worked on IE in 2004 and so on, we were extremely hesitant to add platform APIs to the browser because of the fear that, "Hey, we can't make this secure. And we can't ship fast enough to fix it if there is a problem." And so, we were extremely conservative around adding APIs. And, certainly, there were times when platform feature teams said, "Hey, we want to add X to the browser." And as a security team, we pushed back pretty hard because we were very concerned about the security implications.

The problem is that's not really very sustainable. You can't sort of neuter the platform and hope that the platform is still going to thrive and live on. And so, you have to be very thoughtful about the principles under which you're operating, the tradeoffs and balances, and find ways as much as possible that you can deliver new features in a secure manner.

And sometimes the relative change that you need in a feature in order to make it less interesting to abuse is fairly small. Whether it's a confirmation with the user, which sometimes runs into user fatigue. Or feeding things through updated services where you can keep track of legitimate uses of APIs and things like that.

We've got this feature now where we're starting to allow origins to opt into trials of new features so that we can get some real-world experience with them before we unlock them for the web as a whole. And so, I think that's kind of the trick.

We definitely are giving users – or developers, as it is, the developer users of our APIs – more of an opportunity to shoot themselves in the foot. And so, one of the best examples of this is probably what happened with cross-origin requests.

And so, same origin policy in browsers is really all about isolating sites from each other and not allowing interactions. And the problem, of course, is sites want to interact with each other. You want to have your APIs calling APIs from other sites. And so, same origin policy becomes a real problem.

And, as we looked at this in the 2006, 2007 era in IE, we had concerns about the model that had been proposed for cross-origin, which was the dominant model in Flash. You've got this cross-domain XML file. And in the cross-domain XML file, you declare effectively which resources should be accessible across origins.

And there were some notable mistakes in configuration on the part of important sites. I think it was Flickr that briefly had a cross-origin policy that said, effectively, any site is allowed to get any data it wants from us. And so, if a user browsed to a malicious site, that site could go out, crawl Flickr, and take all of that user's photos. And so, on the IE team, we were very hesitant to do anything like that – to have a system whereby a user could shoot themselves in the foot in a wildcarded way.

We created this object called XDomainRequest. And the notion behind XDomainRequest was that it was sort of a nerfed mechanism for getting data cross-domain. And the security principle that we used in building it was that, effectively, it was syntactic sugar. We didn't want to have anything that could go to the server that the browser couldn't have sent some other way. And so, you were limited to GET and POST. You couldn't use other methods. You were limited in the content types that you could send to the server, and things like that. And the notion was, "Hey, if a site is vulnerable through XDomainRequest, they were probably vulnerable in some other way."

For those with an understanding of web browser history, they'll know that XDomainRequest has been relegated to the dustbin of history, because no other browser adopted it. Instead, a proposal called CORS, Cross-Origin Resource Sharing, took off.

CORS was effectively the Flash model, whereby a website declares via policy who is allowed to access it. The policy is delivered slightly differently. And the reasoning was that XDomainRequest was not powerful enough for some of the scenarios that web developers wanted. And web developers said, "No. No. We'll be real careful. We'll do it right." And once other browsers adopted that model, it was over for XDomainRequest. And IE has now adopted CORS as well.

But it's certainly true that you can still shoot yourself in the foot with CORS. And so, a security researcher from, I think, Cloudflare recently wrote a paper finding sites that had effectively configured Access-Control-Allow-Origin to reflect back whoever was talking to them, while also allowing credentials.

And so, effectively, they configured their policy such that they had the same problem that Flickr had. Anybody could ask for any data they wanted and the server would return it to them. And so, there was some sort of ordering site that he used in his demo where, if the user was logged into the site, a malicious site could reach out and grab the user's profile page – the user's home address, phone number, credit card digits, and so forth – because the site had been misconfigured.
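To make the misconfiguration concrete, here is a sketch of the dangerous pattern in an Express-style handler (Express, the route, and the response data are illustrative assumptions; the header combination is what matters):

```ts
// Sketch of the CORS misconfiguration described above. Reflecting the
// caller's Origin while also allowing credentials lets ANY site read this
// response as the logged-in user.
import express from "express";

const app = express();

app.get("/api/profile", (req, res) => {
  // DANGEROUS: echoes back whichever origin is calling...
  res.setHeader("Access-Control-Allow-Origin", req.headers.origin ?? "");
  // ...and allows the request to carry the user's cookies.
  res.setHeader("Access-Control-Allow-Credentials", "true");
  res.json({ name: "Jane Doe", address: "123 Main St" }); // illustrative data
});

// Safer: compare req.headers.origin against an explicit allowlist instead
// of reflecting it blindly.
app.listen(8080);
```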

And so, certainly, there are opportunities that have been unlocked as we add platform APIs where developers who aren't careful can shoot themselves in the foot. But for the most part, browser designers are working pretty hard to make that as difficult as possible. And we definitely want to add powerful capabilities to the platform. We don't want to have sort of the Fisher-Price platform with no sharp edges. Because it turns out, sharp edges are useful for things. But we want people to know when they're using the sharp edges.

And so, one of the things that's been sort of fun is there's a feature – I like to call it sort of DEP/NX for the web – called Content Security Policy, where you can throw away permissions and say, "Hey, I don't want script execution coming from inline blocks on my page." And that feature has a couple of directives that negate the security benefit in one way or another. And those tokens are prefixed with unsafe-.

And so, if a site wants to use a content security policy but they want to do something that's not secure, they are literally typing unsafe into their policy. And, hopefully, a developer is going to say, "I don't really understand why that's unsafe." And then they'll go look further to understand what's going on.
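As a minimal sketch of that opt-in (the policy below is illustrative), a server sends the header like this; note that weakening the protection means literally typing unsafe- into your own policy:

```ts
// Sketch: sending a Content-Security-Policy header from a Node server. The
// policy below blocks inline <script> blocks from running at all.
import { createServer } from "node:http";

createServer((req, res) => {
  res.setHeader(
    "Content-Security-Policy",
    "default-src 'self'; script-src 'self'"
    // To re-enable inline script, the developer would have to write:
    // "default-src 'self'; script-src 'self' 'unsafe-inline'"
  );
  res.setHeader("Content-Type", "text/html");
  res.end("<p>hello</p>");
}).listen(8080);
```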

[00:26:59] GP: Yeah. I think a lot of it is about secure defaults and just sort of explicit. Make it very explicit. I gave a talk with Rachel Ilan Simpson, who's a designer on the Chrome team in the Munich office. And we gave a talk titled Security Ergonomics. The talk is all about how users make insecure decisions.

And one of the keys, as we analyzed a bunch of use cases and scenarios, was that a lot of it came down to: what do you expect the user to do with the information that they have? What type of knowledge or insights do you expect them to have ahead of time? I think it's entirely legit to say I expect the user to have sufficient understanding that they know that if they write unsafe in the name, then the thing that they're adding is probably not a safe thing.

And on the flip side, understanding the security implication of writing even an asterisk in some CORS field is just not quite as easy. You need to think more broadly. You need to have a better understanding of the implications.

We could probably talk for hours and hours around sort of all these security controls that browsers have. I'd like to sort of switch over to HTTPS. But before I do that, what's a good place that you'd recommend for developers who want to take advantage of all these different – you threw out a whole bunch of security controls. From the cookie prefix, to the content security policy, to a bunch of other sort of security headers today. Do you have a recommended location? A recommended web destination or something like that where these are well-explained and inventoried?

[00:28:31] EL: I think there's a couple of – there's no shortage of information on the internet, for sure. The Chrome team actually puts up a fairly substantial amount of documentation for web developers not just related to security. But, also, for performance and things like that.

The one thing I would encourage, in some cases, certainly for developers who expect they're going to make kind of a fundamental change in their security, is to have a look at the spec itself. It feels kind of super elite and nerdy to actually go read the specification. But sometimes you'll find things in there that are really interesting and that explain the motivations behind the design – motivations that sometimes get lost when you read the BuzzFeed article of five things to put on your site today. And so, the specs can be good.

In terms of practical guidance, there are definitely books that have been written about web security. There are some great ones about the fundamentals. Things like The Tangled Web, which explains the security model of the web and some of the places where it has shortcomings. And many of the features that have been designed since that book was written are designed to combat some of the problems that are mentioned there.

There are also some scanners, and they're interesting. You've got things like securityheaders.io and Mozilla's Observatory, which came out toward the end of the summer. You give them a URL and they will go out and look and see what's in use on that URL in terms of security features. They'll say, "Hey, you're not using Strict Transport Security. You're not using X-Frame-Options," and so forth.

And so, my hope for those tools is that they continue to evolve beyond scanners and also help the user understand the importance of the security features. Saying, "Hey, you're not using X-Frame-Options. We're going to deduct points from your score," is not necessarily as useful as explaining it: X-Frame-Options is about UI redress attacks. Users might be clicking in your site thinking they're clicking somewhere else. And so, providing that mapping is something I think these sites will continue to evolve toward, to help users understand the importance of the feature that's been cited as missing.
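As a small sketch of the anti-framing headers those scanners check for (the helper function is an illustration, not any particular framework's API):

```ts
// Sketch: anti-clickjacking (UI redress) headers on a Node response.
import type { ServerResponse } from "node:http";

export function addFramingDefences(res: ServerResponse): void {
  // Legacy header: refuse to be embedded in frames on other origins.
  res.setHeader("X-Frame-Options", "SAMEORIGIN");
  // Modern CSP equivalent, more expressive about who may frame the page.
  res.setHeader("Content-Security-Policy", "frame-ancestors 'self'");
}
```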

[00:30:49] GP: Cool. Yeah. I think those websites are super useful. And I guess we can throw a link to them in the transcript of the podcast so people can check them out. I agree about understanding the concept behind them. It's a little bit of a trade-off, right? To an extent, you want to make them secure. You would rather they be secure blindly than insecure blindly. But most of all, you would rather they be secure with eyes wide open.

[00:31:13] EL: Yeah. Absolutely. And, certainly, one of the challenges we have, in particular as I work with sites on HTTPS migrations, is that it's really important that sites don't break. And this has been true from the beginning of time. Desktop applications had the same problems. As we introduced features like Data Execution Prevention and No-eXecute in Windows, and ASLR and things like that, we couldn't just blindly turn them on for everybody. Because applications would break, and vendors would say, "Well, just don't install the new version of Windows." Or, "Don't install XP SP3" – which is a disaster for security everywhere else – "because it breaks our application."

And so, in the web platform, we have sort of an obligation in some respects to be insecure by default just to make sure that historical sites that haven't been updated continue to work. And so, we need sites to opt into these directives in order to get the protection. But we also need to have them test their applications.

With HTTPS in particular – and perhaps as a segue – we've got two features, HTTP Strict Transport Security and HTTP Public Key Pinning, that are very powerful in helping sites protect themselves against important attacks. But they're also footguns. Effectively, you can create a denial of service for yourself where you send one of these directives with a long lifetime and say, "Hey, for the next year, don't allow anyone to connect to my site except over HTTPS. And all the security scanners tell me includeSubDomains is a good thing. I don't really know what that means, but I'm going to do that too." And then they turn on that policy and, oops, they forgot that they also have admin.example.com. And admin.example.com, for whatever reason, isn't using HTTPS. And now I can't get to the login page for admin anymore. My site's broken. And it's broken for a year, or until I manually tell all my users to go clear out their configuration, and things like that.

And so, we've been doing more to try and help developers avoid the footguns. Things like HSTS and HPKP are designed around the notion that you start out with a very short lifetime. When Google turned on Strict Transport Security for google.com, the main site, I think we first used something like a 5-minute or 10-minute lifetime and watched to see what would break and what would continue to work.

And those experiments actually very often turn up sites that you'd long forgotten about. And so, one of the pieces of trivia was, I think we broke the Santa Tracker the first time we turned on Strict Transport Security. Because Santa Tracker was not a secure application. And, oops, now kids can't look at Santa. And so, that got fixed relatively quickly. But, certainly, for things that are more critical to your business, you want to make sure those things aren't breaking. And so, definitely, there are opportunities to use security features incorrectly and hurt your site. We definitely don't want that.
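A sketch of that staged rollout (the stage values and helper are illustrative; the max-age numbers mirror the approach described above):

```ts
// Sketch of a cautious HSTS rollout: start with a short max-age on the bare
// host, widen it only once every subdomain is known to serve HTTPS. The
// header is only honored when received over a TLS connection.
import type { ServerResponse } from "node:http";

export function addHsts(res: ServerResponse, stage: 1 | 2 | 3): void {
  const policies = {
    1: "max-age=300", // five minutes: watch for breakage
    2: "max-age=86400; includeSubDomains", // one day, subdomains included
    3: "max-age=31536000; includeSubDomains", // commit for a full year
  } as const;
  res.setHeader("Strict-Transport-Security", policies[stage]);
}
```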

[00:34:15] GP: Yeah. Of course. You want to somehow walk that line. I guess many of the security features actually have a bit of an advantage in this extremely backward-compatible web world. Some of the security mechanisms have the advantage of not requiring full adoption. You can use headers like HSTS despite the fact that they're not supported by old browsers. That's fine: it's a security mechanism that reduces your exposure on those browsers that do support it. But at the same time, you have to be wary of the ones that are still quite broad in implication.

I guess you opened up what's probably going to be our last topic here: HTTPS. A topic that's near and dear to both of our hearts. I published a study about HTTPS adoption earlier this year. And as a whole, it was positive. It showed that HTTPS adoption grew as much in one year as it had in the 19 years prior.

And at the same time, if you look at the absolute numbers, you still see that a very, very small percentage of the web – definitely under 20%, and depending on the stat you're looking at, probably under 12% or 13% – is using HTTPS. And so, the trend is positive. But it seems to be advancing quite slowly.

I think the move to HTTPS has a lot of sort of pros and cons to it. I was wondering, in your view, you deal with this maybe more than most people on the planet. What would you say are sort of the key motivators to moving to it? And what would you say are the easiest sort of first steps? How do you advise a company that is sold and wants to move to HTTPS to sort of make it so?

[00:35:58] EL: Right. In an ideal situation, the key motivators are the reason for HTTPS to begin with. Confidentiality of the information that's being sent and received, so that people on the network don't know what your users are looking at. Don't know what they're reading. Integrity is a very important one of those. You want to make sure that the application, the website, the things that your designers have worked really hard to deliver is the one that the client actually sees.

And one of the things I've started doing is calling HTTPS hi-fi for the web. You want a high-fidelity experience where the user is actually seeing what your designers wanted. We've seen cases where you're on a captive network and ads are getting injected into pages by the network itself. There are some screenshots where you're on apple.com and there's an ad for something from Best Buy.

Well, obviously, Apple didn't put an ad for Best Buy on their website. But it has been injected by an ad network that is man-in-the-middling traffic and rewriting non-secure traffic with these ads. And so, you want these protections for your users.

Now the challenge we have is that some sites are not necessarily focused on their users' experience to the degree that we in the browser world would like. And so, carrots effectively have been added. Powerful features are starting to be added behind checks for HTTPS.

Geolocation now is a sensitive operation. We don't want websites to be requesting this data if they haven't been delivered securely. And so, we require HTTPS. But there's also now a performance element to it. In particular, two of my favourite features in the browser, HTTP/2 and Brotli compression, both require HTTPS.

And the reasoning for that is not really the strict, "Hey, we need security for this information." It's about, "Hey, we need integrity for this information." And the reasoning is that there are a lot of gateways and proxies out there that will do really weird things to your network traffic. Those things were written in an era when HTTP/1.0 and 1.1 were dominant. And if they touch new traffic – HTTP/2 traffic or Brotli traffic – they end up corrupting it.

And Google saw this and other browsers saw this first with WebSockets. WebSockets are TCP/IP sockets that are run over sort of an HTTP handshake. And then you have a bidirectional stream. And it was found that many gateways would actually look into the traffic on the socket. And if it looked like HTTP traffic, they would start doing really strange things like caching bytes from the middle of the WebSocket. And so, they created this sort of weird masking system for WebSockets where the data is obfuscated so that the gateway doesn't see it and manipulate it. But, really, a better approach is to just use HTTPS to begin with.

Similarly, for HTTP/2, the faster next-generation transport for the web, those streams are just not going to work properly if a gateway is messing with them at all. And so, while, technically, the specification does not require TLS in order to use HTTP/2, all the browser implementations do require a secure transport so that the traffic is not manipulated.

And we saw those problems of manipulation as well with compression. Google, at this point probably many years ago – I think it was eight years ago – came out with a new compression algorithm for the web called Shared Dictionary Compression over HTTP, or SDCH. And they found that there were gateways and proxies that, if they saw SDCH, would say, "Oh, they spell Gzip really funny. I'm going to try to Gzip-decompress this and then remove the Content-Encoding header." And so, clients were getting corrupted content that they couldn't decompress.

And so, that's why Brotli, which is a new compression scheme that gives significantly better results than Gzip and Deflate, requires HTTPS in the browser. The browser will not advertise its support for Brotli over a non-secure connection. And so, we're making sites faster by having high fidelity as well. And that's an important thing. And so, those are both pretty motivational.
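As a sketch of that negotiation from the server's side (the response body is illustrative; the browser's side is simply that it omits br from Accept-Encoding over plain HTTP):

```ts
// Sketch: Brotli content negotiation on a Node server. Over plain HTTP,
// browsers omit "br" from Accept-Encoding, so the Brotli branch is never
// taken; over HTTPS it is advertised and used.
import { createServer } from "node:http";
import { brotliCompressSync } from "node:zlib";

createServer((req, res) => {
  const acceptsBrotli = String(req.headers["accept-encoding"] ?? "").includes("br");
  if (acceptsBrotli) {
    res.setHeader("Content-Encoding", "br");
    res.end(brotliCompressSync("hello, hi-fi web"));
  } else {
    res.end("hello, hi-fi web");
  }
}).listen(8080);
```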

And one of the other things that we're hoping is that sites are starting to understand, "Oh, my users are actually concerned about privacy." And so, you've got sites like BuzzFeed. BuzzFeed is not your bank. They're not your retirement company. But BuzzFeed has readers. And readers want to be able to read articles about topics that may be sensitive, particularly in their region.

And so, there's places in the world where reading certain articles on BuzzFeed could get you in trouble with your employer, your government. And they want users to be able to read whatever they want. And so, BuzzFeed deploys HTTPS to help ensure that their users are not getting spied upon. And so, those are all sort of the incentives that are being exposed.

But on the other side of the equation, we're also trying to reduce the roadblocks. And a huge one of those is the advent of Let's Encrypt. Let's Encrypt is a free certificate authority that delivers certificates in an automated fashion to servers. And this has unblocked a huge number of sites from moving to HTTPS for their customers.

And so, sites like WordPress. On WordPress, you can get an account and it's a checkbox to turn on HTTPS for your site. Same thing for DreamHost. And it's not just those mass hosters with hundreds of thousands of domains. Smaller hosters are also going out and saying, "Hey, we can turn on HTTPS. It's not going to cost us anything. The marginal complexity is low, particularly if you're using one of the platforms that has plugins that automatically handle Let's Encrypt and turn it into a checkbox feature." And we're excited about the number of Let's Encrypt certificates. I think I saw recently that something like 5 million certificates have been issued. And so, that's going to make a huge difference.

Now, to your overall point, we want everybody to be on HTTPS. And the web is enormous. And it's going to be a long time until we get there. But we definitely want sites to be thinking about making that move. We want to help them make that move. And one of the exciting factors is not the raw number of sites, but the percentage of the time the user spends in their browser for which they are on a secure transport.

And with major sites like Twitter, Facebook, and Google using HTTPS for everything, this is starting to mean that most users are going to spend most of their time in the browser on secure connections. And it's going to be a long time until we clean up the long tail – the archived GeoCities website from 1996 that's out there and not on HTTPS.

But, certainly, if most users, most of the time, are in a secure position, I think the world as a whole is going to be better off. And we may be in a position to make some of the attacks that have plagued people over time essentially not lucrative, not financially rewarding for the people who would perform them. If all the user's sensitive operations are happening over an HTTPS connection, passive surveillance of that user becomes far less interesting, because I can't tell anything meaningful about them. I may know, "Oh, he seems to be interested in Battlestar Galactica fan fiction written in the 90s." But I'm not going to know how much he's got in his bank account, or what profile he's set up on a dating site, and things like that.

And so, it's a long road. I definitely took the HTTPS evangelism job feeling a certain level of job security. If we were at 100% HTTPS by the end of the year, I'd be a little nervous. But that would also be a great place to be.

[00:43:55] GP: I'm pretty sure Google will find some interesting things for you to work on in the very unlikely scenario that that will happen. Yeah. And I think that it's also sort of a nice loop-closing here from sort of one of the points you made earlier about the fact that everybody is constantly under attack. And it's true that if you have some small blog or if you're not someone who is very prominent, you might not be a dedicated target. But you are an opportunistic target to the tools that are around.

And HTTPS is one of the best tools, as a consumer and as a website operator, to just reduce the attack surface there. Reduce the opportunity for someone to either steal your data or compromise the information that is delivered to your machine.

Hopefully, we see really good adoption. I'm super excited about it. I also have a post enumerating some of these things – the carrots and sticks. I think somewhere in there is also the fact that Google, from a business perspective, announced a few years back that it is factoring HTTPS into its search ranking algorithm. Very few things move the needle in terms of web adoption of a technology or a component of a website like SEO, the Google ranking of it. I think there's definitely a lot of goodness there. And, hopefully, we see a lot of really good growth and increase – see that curve go up and to the right in terms of the percentage of websites. And you're right, maybe the more important stat is the percentage of time that users spend on HTTPS versus HTTP.

[00:45:29] EL: Yeah. You need both. I mean, we definitely want people to be on secure sites as much as possible. But we also need – eventually, we need them all. Because any non-secure navigations you're performing, if there's an active bad guy in the network, he can intercept those and do really bad things to you. And so, we certainly want them both.

In terms of the up and to the right: beyond the blog post that you wrote, I think it was over the summer, around HTTPS adoption, Google maintains an ongoing, regularly updated HTTPS transparency report where we show the percentage of our traffic that's served over HTTPS. And that's shown a great trend, particularly with major sites like YouTube adding a huge amount of traffic to it.

But we also look at the top 100 sites – it's worldwide; there are some worldwide sites in there – and track their adoption of HTTPS. And since we launched that report, I think in the spring, the numbers have been really quite good in terms of improvement. And we're seeing more and more of that over time. And, certainly, we're doing our part to help evangelize that with things like our progressive web applications outreach work, trying to get sites to build on progressive web applications, service worker, and the like.

[00:46:43] GP: Cool. Yeah. We'll definitely sort of post that link up on the website as well. I'm definitely sort of interested. I think I somehow missed that one. We definitely want to see it.

Well, I think we're over time here already for what we intended. This was a lot of great information and great insights. Thanks a lot, Eric, for joining us. Keep up the push for HTTPS and security. And I'm sure our circles will cross many times on that journey.

[OUTRO]

[00:47:11] GP: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over 100 videos about building developer tooling companies, given by top experts in the field.

