
Season 3, Episode 17

Security Research With Adrian Colyer

Guests:

Adrian Colyer


In episode 17 of The Secure Developer, Guy meets up with Adrian Colyer, Venture Partner at Accel and author of The Morning Paper, a daily recap of academic articles in computer science. The pair investigates how researchers are discovering new side-channel attacks and vulnerabilities that look, at first glance, like they’re out of a science fiction or spy novel.



“Adrian Colyer: The median age of a library deployed on these sites in their study, behind the most current version, is something like 1,177 days. We’re talking, like, literally, many years out of date. This whole sort of package manager, coupled with continuous integration, continuous deployment, is a wonderful, wonderful attack target. If you can get something in that delivery stream somewhere, then we’re all set up to automatically put it in production for you. It would take about 244 days, I think, if you actually wanted to read the terms of service for the major services that you use. You’re never as anonymous as you think.”

[INTRODUCTION]

[0:00:36] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk, and you're listening to The Secure Developer, a podcast about security for developers covering security tools and practices you can and should adopt into your development workflow. The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter, @thesecuredev.

[EPISODE]

[0:01:10.1] Guy Podjarny: Hello everybody, welcome back to The Secure Developer. Today, we have Adrian Colyer with us. Adrian, welcome to the show.

[0:01:16.7] Adrian Colyer: Thank you, a pleasure to be here.

[0:01:18.0] Guy Podjarny: So, it’s good to have you, Adrian, here and today, we’ll do a slightly different spin on what we do. Typically, we talk about kind of this intersection of security and development or developers, and today, we’re actually going to talk about sort of this intersection of secure development and security as a whole with science, with sort of paper, with the research, with the proper research, and that’s because Adrian writes The Morning Paper.
So, Adrian, can you tell us a bit about, you know, just give us some background about yourself and about The Morning Paper and what it is?

[0:01:44.1] Adrian Colyer: Sure, of course, yes. So, my background was technical. I did a number of CTO roles for many years, most notably with a company called SpringSource that did something called the Spring Framework, in and around the enterprise Java space.

[0:01:54.6] Guy Podjarny: Little known.

[0:01:55.7] Adrian Colyer: The little-known framework, and I sort of carried that journey on through VMware and the formation of a company called Pivotal. And then, a couple of years into that journey, which is now three, four years ago, I left, really, to come back mostly to Europe, which is where I've always lived but not always worked, at least not full-time, to see what was happening in the startup scene, and I launched myself, temporarily, with a venture capital firm called Accel here, based out of London.

And you know, it’s four years on, I’m still there now. So that’s a little bit about my sort of background and then, yeah, The Morning Paper is, I guess a habit I slipped into by accident. About four years ago, I was sitting on the train on my commute on the way into London with my fellow passengers looking at sort of The Times and The Telegraph and other newspapers that we have in the UK and I happen to be reading an academic paper that morning.

And I thought, "This is kind of fun, everybody's reading their morning paper." And I tweeted the title of my paper with a little hashtag, #themorningpaper, and I'm not quite sure exactly how it happened, but I've done it every day since. So, for – it will be four years this August. Every weekday, bar, you know, sort of the Christmas breaks, et cetera, I have read a computer science research paper, typically in the morning, and then sort of written up my thoughts and posted it as a summary towards the end of the day and so –

[0:03:13.0] Guy Podjarny: Wow. Yeah, that's – definitely, we're going to dig into that a little bit more, but that's an unusual and quite impressive habit that you've developed there. I think today, you and I prepped a little bit on a bunch of topics, mostly you, kind of some of the really interesting studies or papers that happened, or that you posted and wrote about, that dealt with security or secure development.

So, let’s dig through some of those and then we can, at some point, we’ll come back a little bit to this, like, morning routine of yours. So, you know, you write about a lot of papers, right? And they cover many topics that touch security, you know, clearly, a hot topic these days, including in the tech startup world. The first kind of area maybe to tackle is just things around security that are more sort of in the day-to-day like you know, we read, we think oftentimes of these research papers as these, you know, like, far –

[0:03:59.9] Adrian Colyer: Yeah, exactly. Exactly, yeah.

[0:04:00.8] Guy Podjarny: You know, theoretical issues. Are there any of the papers that you’ve read that are interesting that are sort of applicable that people can absorb and use something in their day-to-day development jobs or security jobs?

[0:04:10.7] Adrian Colyer: Yeah, there really are. It's a really common misconception that there's a big gulf between work that might be done in academia and sort of practical day-to-day stuff. I guess, you know, my bias is to select papers that have some more immediate relevance to practitioners anyway, but you know, there really are a bunch of papers that kind of open your eyes to what's going on and what's possible.

Things that make it easier to think about and maybe even to take practical steps on. And so, yeah, thinking about this week, we picked a paper to begin with called "Thou Shalt Not Depend on Me," which came from the NDSS Conference in 2017 and is by Lauinger et al., and it's just one that everybody can relate to. It's really straightforward: you hear the story and you go, kind of, like, "Oh, of course, yes, it's not surprising." And yet, it's kind of sad at the same time.

And so, what the researchers did is they studied top websites, I think it was about 133,000 or so, and they split it between the top Alexa websites, the top 75,000, and also a random sampling from the .com, you know, sort of long tail. So, you've got a good mix of sites in there, pretty representative, and they simply look at kind of what was being included on those sites, in particular, JavaScript libraries.
And then, within that world of JavaScript libraries, as best they can, which is also sort of an interesting part of the paper because this is quite difficult, they try and figure out, "Well, these are the most popular JavaScript libraries," and they settle on, like, the top 72. Then, they try and find out what they can about known vulnerabilities, exploits, et cetera, in those libraries, which turns out to be not that easy.

[0:05:38.2] Guy Podjarny: Indeed.

[0:05:38.7] Adrian Colyer: And they get good data on kind of 11 or so of the 72. Then they do an interesting analysis that just says, "Well, how many of these sites, now, because, you know, we've got this big corpus, how many of them have at least one vulnerable library?" And so, they start to analyze, like, how good are we at keeping up to date with libraries that we're including in our projects, particularly on the browser side?

You know, what's going on there, and sort of trying to understand people's patterns and practices. And I guess the – I'd say the shocking but perhaps not surprising thing is, a huge percentage of even very popular sites have known, you know, vulnerable libraries included in their site. It doesn't mean they're all directly exploitable, of course, but they've all got some vulnerability in there. It's sort of 21% of the Alexa top 100 sites, to give you an idea, and as you grow out sort of to the full 75,000, we hit about 38%, which is pretty stunning.

[0:06:30.8] Guy Podjarny: Yeah.

[0:06:31.8] Adrian Colyer: And what's interesting when they dig into this is, you know, things we all tend to be aware of, but then there are a few extras, like, sort of, often it's not the libraries you've directly included, though it can be.

[0:06:41.8] Guy Podjarny: Yeah.

[0:06:42.3] Adrian Colyer: We can talk about that in a moment, but it's libraries that get indirectly included by something that you have pulled in, and out of that, the very worst culprits turned out to be all of the kind of ad trackers and analytics kind of libraries, et cetera. They have a really bad habit of pulling outdated things into your stack, and then you're kind of in a bad place.

[0:07:02.1] Guy Podjarny: Super interesting. I know and remember that piece, and you know, it made some headlines, it got to sort of Hacker News.

[0:07:08.0] Adrian Colyer: That’s right, yes.

[0:07:09.0] Guy Podjarny: The mainstream attention because you know, beyond the catchiness of the data bit, you know, it’s not always that you can sort of summarize the key finding of a research paper.

[0:07:18.3] Adrian Colyer: Yes, it’s very immediately understandable, isn’t it?

[0:07:21.2] Guy Podjarny: Yeah. So, it's interesting. I mean, I think there's the insight itself, which is really interesting, and the secondary insight that oftentimes they didn't even directly pull it in. But I find it interesting, like, this delta. We oftentimes see, and we do some of these ourselves, vendor-driven analyses, big bulk data analysis that comes down and does it. How do you differentiate, or how do you see the difference between, like, a research paper that comes out with this type of statement versus, you know, some non-research or vendor entity?

[0:07:49.9] Adrian Colyer: Yeah, that's an interesting question. I mean, clearly, you know, a vendor could equally have done this particular piece of research, just like the academic team did. I think there's always this sort of air of plausibility that comes with the academic thing, or, put it this way, if you want to flip it the other way, if a vendor does it, it's obvious they've got a vested interest.

[0:08:09.1] Guy Podjarny: Yeah.

[0:08:09.2] Adrian Colyer: So, nobody is that surprised when the result comes out, and there's always an overtone of, "Buy my tool." Whereas, when it's a pure piece of academic research, you can just look at the data and go, "Well, okay, at least it should be independently verifiable, it should be peer-reviewed," et cetera. You know, I can hopefully go and actually look at the data sets they used in many cases and, you know, verify this for myself.

And now, you've got the question of, "Okay, what do I do about it?" And that's where, of course, wouldn't it be wonderful if somebody had a tool that would tell you if you had those dependencies, you know? Surprise, surprise, here we are, but you know, it is actually – and that's the other interesting side, I think, of this work for me, that question about what do you do about it. I mean, clearly, it says you really do need something that's keeping you on top of these. You can imagine what's going on in these larger companies: somebody builds the site, it's done, it's deployed, why would you go back and touch this and tamper with it, et cetera? So, they also drill into the stats around that and, again, it's not surprising, but it's quite a revealing picture.

So, the median age of a library that's deployed on these sites in their study, behind the most current version, is something like 1,200-odd days, 1,177 days. We're talking, like, literally, many years out of date as the median situation. Which also means, you know, painfully, especially given the rate of breakage and new versioning in and around the JavaScript ecosystem, that for people to get current, it is often not just changing the patch level.

[0:09:37.6] Guy Podjarny: Yeah.

[0:09:38.4] Adrian Colyer: Yeah, assuming semantic versioning, et cetera, has been adhered to, but you're going minor, or you might be going major. So, you know, we actually have, collectively, a ton of work to do to bring this together, you know? So that, you know, if there was even an understanding at the outset that, like, you know, getting it clean first and having some process to keep it clean –

[0:09:54.8] Guy Podjarny: Yeah.

[0:09:55.4] Adrian Colyer: Is really the only way to do this.
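
To make this concrete for readers: here is a minimal Python sketch of the kind of check being described, comparing the library versions a page includes against a table of known-vulnerable version ranges and release dates. The advisory data below is invented for illustration only; in practice it would come from a real vulnerability database or a dependency-scanning tool.

```python
from datetime import date

# Hypothetical advisory data -- in practice this comes from a
# vulnerability database, not a hand-written dict.
KNOWN_VULNERABLE = {
    "jquery": ((0, 0, 0), (1, 12, 0)),    # versions below 1.12.0 assumed vulnerable
    "angularjs": ((0, 0, 0), (1, 5, 0)),
}
LATEST_RELEASE = {
    "jquery": ((3, 3, 1), date(2018, 1, 20)),
    "angularjs": ((1, 6, 9), date(2018, 2, 2)),
}

def parse_version(text):
    """Turn '1.11.3' into a comparable tuple (1, 11, 3)."""
    return tuple(int(part) for part in text.split("."))

def audit(included):
    """Flag included (library, version) pairs that are vulnerable or stale."""
    today = date(2018, 6, 1)  # pretend 'today' for the example
    for name, version_text in included:
        version = parse_version(version_text)
        low, high = KNOWN_VULNERABLE.get(name, ((0, 0, 0), (0, 0, 0)))
        vulnerable = low <= version < high
        latest, released = LATEST_RELEASE[name]
        days_since_newer = (today - released).days if version < latest else 0
        print(f"{name} {version_text}: vulnerable={vulnerable}, "
              f"days since a newer release={days_since_newer}")

# A page that includes an old jQuery and a current AngularJS.
audit([("jquery", "1.11.3"), ("angularjs", "1.6.9")])
```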

[0:09:58.0] Guy Podjarny: Do you find that like, when you read these papers, do they generally bias to have kind of actionable recommendations? I mean, they, in theory, should be you know, used to prove a thesis, right?

[0:10:09.3] Adrian Colyer: So, so I guess, you know, that varies tremendously. So, there’s a little bit of selection bias in the papers I choose to cover. You know, sort of I tend to go for ones that I think are relatable and sort of somebody can apply but certainly, not all papers are actually you know, are there with the purpose of telling you pragmatic actionable steps, you know? The first and primary purpose for any kind of academic paper is to be published.

[0:10:33.2] Guy Podjarny: Indeed. Yeah.

[0:10:34.1] Adrian Colyer: And sort of, anything beyond that is normally a bonus.

[0:10:36.4] Guy Podjarny: Yeah, it’s just extra. So, this is kind of one example. So, you know, good, very concrete example, what other example comes to mind around, you know, kind of practical –

[0:10:45.3] Adrian Colyer: Yes, so I guess there's a lovely counterbalance to this. So, if you think, we just talked about a paper and a piece of work that really says it's really important that you keep your dependencies up to date, and you need some process around that, and if you don't, you're probably going to have way more vulnerabilities than you think. The flip side is, you know, how do we do that? Well, often, we're using package managers, et cetera, and it kind of, you know, makes me smile that we put an enormous amount of trust in running 'apt-get update' or, you know, whatever, 'bundle install,' or whatever it is that –

[0:11:14.8] Guy Podjarny: The automatic action.

[0:11:16.4] Adrian Colyer: Often with privileges, it's the thing that we always do first: it pulls software, and we don't really know quite what's in it, off of the Internet and installs it on our precious machine, and you know, that's the –

[0:11:26.6] Guy Podjarny: Why wouldn’t you trust something you just downloaded off the Internet, you know?

[0:11:28.7] Adrian Colyer: Yeah, exactly.

[0:11:29.6] Guy Podjarny: It’s the obvious thing.

[0:11:30.4] Adrian Colyer: That’s the one thing we’re all trained to trust and yet, obviously, when you kind of flip it around and think about it, this whole sort of package manager coupled with continuous integration, continuous deployment is a wonderful, wonderful attack target. Anyway, if you can get something kind of in that delivery stream somewhere, then we’re all set up to automatically put it in production for you, which is a –

[0:11:55.1] Guy Podjarny: That’s a thought.

[0:11:55.8] Adrian Colyer: Yeah, a beautiful and scary pipeline. And so, yeah, the second paper that is interesting is one called "Diplomat," and its longer title is about using delegations to protect community repositories. So, it's from 2016, and it looks at this problem of, really, how do you know you can trust what you're getting when you npm install, when you gem install, when you, you know, whatever these package managers are, Docker Hub, PyPI, whatever it is.

And they analyze a bunch of these systems and look at what's going on, and in particular, this work focuses on the signing of the packages in various ways and looks at: how do we sign today? What are the various strategies that they use? For example, you know, is there one single master key that's used by the repo, or maybe developers have their own keys, maybe there's some kind of delegation mechanism, how does it work? Obviously, there are ways, you know, in which a compromised key can do a ton of damage.

[0:12:47.9] Guy Podjarny: Indeed, yes.

[0:12:48.6] Adrian Colyer: You know, as in fact has happened in some, you know, packages in the past and so, again, this is a very pragmatically grounded paper. I guess if you’re not writing a package manager, it’s of less immediate use but it’s something that you really need to be aware of when you think about what you're pulling into your systems all the time and so, they devise a kind of key delegation system.

They tested it out with PyPI and, I think, with something behind Docker and a few others, and it really says, "Look, we need to think about how keys are managed for signing this stuff." It needs to be pragmatic so it actually works. We need to accept things like: we wish developers always signed, but, well, sometimes they don't, and so actually we, as the managers of the repository, are going to have to sign, et cetera.

And really, it's very straightforward: it's a delegation hierarchy of trust, and they've got two basic mechanisms. You can basically have a prioritized list of signers, which is exactly what you think: first, go to A, and if A doesn't have it, you can fall back to B. And then they have a way of specifying that a particular role terminates the chain, so, you know, if you get as far as C, you should never look any further, so that you can stop cascading.
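
As a rough illustration of those two mechanisms only, here is a minimal Python sketch of resolving which signing key to trust for a package, using a simplified model loosely inspired by the paper. The role names, packages, and keys are invented for the example; this is not the actual TUF/Diplomat metadata format.

```python
# Each delegation entry: (role_name, set_of_packages_it_signs, terminating?)
# Roles are consulted in priority order; a terminating role that matches
# the package stops the search even if it has no usable key.
DELEGATIONS = [
    ("claimed-projects", {"requests", "flask"}, True),
    ("rarely-updated", {"legacy-lib"}, True),
    ("new-projects", set(), False),  # catch-all bucket, online keys
]

ROLE_KEYS = {
    "claimed-projects": "offline-key-A",
    "rarely-updated": "offline-key-B",
    "new-projects": "online-key-C",
}

def trusted_key_for(package):
    """Walk the prioritized delegation list and return the role and key to trust."""
    for role, packages, terminating in DELEGATIONS:
        matches = package in packages or not packages  # empty set = match all
        if matches:
            key = ROLE_KEYS.get(role)
            if key is not None:
                return role, key
            if terminating:
                # Terminating role with no usable key: stop, do not fall back.
                return role, None
    return None, None

print(trusted_key_for("flask"))       # ('claimed-projects', 'offline-key-A')
print(trusted_key_for("brand-new"))   # ('new-projects', 'online-key-C')
```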

And then really, they just say, "Given that, how could we pragmatically use these tools to do it a little better around the repos that we've got?" And then there's their maximum security model, they have a legacy one as well, but let's talk about the maximum security model, and it basically has three buckets, right? So, you've got what are called claimed projects. Let's think of these as the healthy ones with active maintainers, and we know who they are, et cetera.

You can set up a proper key delegation to the people that own that collection of projects, often as a group. Think about Spring, for example: there are lots of things under the Spring umbrella that are a collectively owned group of projects. So, you can say, "These are claimed, known projects. We can use offline keys that are owned by that particular team to do this." And the developers themselves will do it because they're active, they're engaged.

The second kind of bucket of packages is what they call the "rarely updated." You know, I think of them like the forgotten ones: they're useful, they're still out there, they're downloaded, but they're kind of stable and mature at least, let's put that benevolent light on it.

[0:15:00.3] Guy Podjarny: Yeah, that’s the positive note on them, yeah.

[0:15:01.2] Adrian Colyer: And so for these ones, they say, "Well, actually, in that bucket, we use offline keys again." The admins of the repository will do it, and this is okay, because they're infrequently updated, so we can manage the tractability around that.

[0:15:14.7] Guy Podjarny: Right, make it more cumbersome to deploy but that doesn’t happen that often.

[0:15:17.5] Adrian Colyer: Exactly, and then you've got your sort of problematic bucket, which is the new, upcoming projects bubbling up. What do you do about that? And for those, keeping the keys online is the solution, because they're coming through all the time, and offline signing is a pain, but it's mitigated by saying, "Look, when a new one comes on, it's signed with online keys, but every two weeks we kind of rotate those down into the claimed projects bucket." So, you've always got a limited window.

I mean, it's actually a fairly simple-to-understand scheme, but they analyze, for example, what would happen with PyPI users, looking at the Python packages that come down, if you had this system in place. They assume, and it's quite interesting, a threat model where an attacker basically takes over the repository, has all the keys, and exists undetected for about a month. I mean, I would imagine, upfront, that this is like game over, we're all hosed. Actually, they manage to protect most users because they have that system in place. About 99% of PyPI users would still be kind of good even under that kind of threat, so –

[0:16:15.6] Guy Podjarny: Yeah, pretty massive, kind of.

[0:16:16.2] Adrian Colyer: Again, a few pragmatic things that can make a big difference. You know, maybe it's worth just very briefly talking about a related work called CHAINIAC, because it goes one step further here. They look at things like cothorities, collective authorities, which is having multiple signatures all coming together, and, being fully fashionable, they have a blockchain. Actually, they have several blockchains underpinning it.

So that we've actually got a proper use case. I think this is a genuinely valid use case, you know, an immutable public record of the releases that have come out and the corresponding signatures. You know, that makes a ton of sense.
[0:16:47.2] Guy Podjarny: And they don’t say blockchain in there?

[0:16:50.0] Adrian Colyer: They do use the word blockchain but –

[0:16:51.5] Guy Podjarny: But it’s not, but it’s not the concept.

[0:16:53.1] Adrian Colyer: But it's not published with flashing lights, "blockchain" plastered all over it, you know? So yeah, there is a lot of work in this area around that. I mean, I think the whole "software supply chain" is a phrase that, you know, likes to be used here. You know, we've got a lot of work to do to secure it all the way up and down the line.

[0:17:09.4] Guy Podjarny: Yeah, indeed. No, this is also – I think, for me, it almost, on a natural level, feels like the most appropriate type of analysis for a more academic mindset to do. Just given that it is, you know, first of all, kind of fundamental, you need a very comprehensive analysis to understand all the scenarios. The word "pragmatic" that you used there is not often kind of seen in there, so that's nice that that was added.

But also, you know, it's entirely neutral, or, again, minus the desire for your work to be used, there's no financial interest, at least, in play. And subsequently, you know, there's probably, like, core math elements here almost, right? Sort of architectural structures and just an understanding of what is –

[0:17:48.3] Adrian Colyer: In many cases, yes, to do the analysis of various kinds. Yeah, I mean, like I like to say, when you get a good one, these research papers are really treasure troves, because, you know, somebody, or maybe a team, has spent many, many months normally doing all this work and packaging it up, and then condensed all the learnings for you into kind of a short, you know, relatively bite-sized piece. So, yes, if you get one on a topic of interest, they can be terrific.

[0:18:12.6] Guy Podjarny: What’s your experience been around seeing these papers manifest in real-world products or offerings or open-sourced projects that get used in earnest?

[0:18:24.8] Adrian Colyer: That's a really interesting question. I'm often surprised. You know, I have a bias towards picking sort of more practitioner-oriented papers, but even so, you'd expect that the bulk of the ideas are ahead of, or a little bit sort of left field for, the mainstream kind of commercial industry. But I do have sort of anecdotal data from sending out papers to, you know, a few thousand people now every day, it's nearly 20,000 people, to my surprise, that at least get the email.

[0:18:52.9] Guy Podjarny: It’s really interesting.

[0:18:53.5] Adrian Colyer: Maybe they’ll open it, who knows? But that’s terrific and you know, reasonably often, two things happen. So, one is I’ll get feedback like, “Well, like, you know, I never knew I needed this research but it arrived just at the right moment and it really helps with something I’m doing.” So, there is like an element of serendipity where sort of something arrives that helps somebody that I couldn’t have predicted, they couldn’t predict it in advance.

[0:19:13.5] Guy Podjarny: Yeah.

[0:19:14.1] Adrian Colyer: The other thing that is really interesting to me is again, although, I couldn’t plan it and I never know what element is going to be but obviously, in my role at Accel, I get to meet a lot of companies, you know, hear a lot of business plans, a lot of exciting tech stuff that’s going on.

[0:19:28.1] Guy Podjarny: Yup.

[0:19:28.5] Adrian Colyer: And it's amazing how often what I've learned and picked up just from trawling through some of the research is highly relevant to those conversations. So, you know, again, if you said to me, "Just select the papers that are going to be relevant," I couldn't do it, obviously, but there is quite a high correlation, and I think, in general, the gap between academia and industry, that kind of transition time, has been shrinking.

As it has everywhere else. But, you know, the one I'm really always thinking about is, like, the Berkeley AMPLab and all the projects that came out of that, you know, Spark, et cetera, and how quickly they went from a pretty well-structured research agenda to open source to companies in a matter of no time.

[0:20:08.1] Guy Podjarny: Yup, yeah, indeed. You know, definitely different than the past and I think a part of it is the world learning to embrace the tech or the research and a part of it is you know, maybe, at least, a stream within the world of the academy that is biased for sort of more practically applicable research but either way, you know, we’d benefit from it.

[0:20:24.1] Adrian Colyer: Yeah, yeah, I mean, the entrepreneurial spirit has definitely infused academia in a way that it hadn’t, you know, like, 10, 15 years ago and so I think, I’m sure that helps.

[0:20:32.9] Guy Podjarny: So, let’s sort of shift gears a bit from the practical to the novel, right? So, like, those are concrete practical things that we did today that we should change but sometimes, you know, research is all about kind of breaking limits, right? Finding new avenues of thought. So, in the world of security, you know, what examples come to mind that have done something novel around kind of this security activity?

[0:20:51.8] Adrian Colyer: So again, this is kind of one of my favourite things, actually, in security papers: the sheer ingenuity of the researchers in the ways that they find to break things, which leaves you, kind of, simultaneously with this "Well, that is so cool" feeling and "Oh my God, that's terrifying." Both things are true at the same time. And one of the papers that I came across that sort of caught the imagination of a few folks in the arena was called "When CSI Meets Public WiFi," which, for an academic research paper, is actually a very catchy title.

[0:21:23.0] Guy Podjarny: It’s a really good title, yeah.

[0:21:23.9] Adrian Colyer: So, they did terrific there and the CSI here actually stands for channel state information, and sort of the headline of the paper is sort of like, you’re using your mobile phone, you’re interacting with you know, some service that requires you to enter a PIN in order to, for example, validate a payment, something like that. Simply by the way that your hand moves across the surface of your phone when you’re tapping in the pin, the researchers are able to infer what your pin is with surprisingly high accuracy.

I think it’s like 60, 70% success rate and they’re recovering six-digit PINs in one test for the AliPay service and they give you like a stack ranking of the PIN could be this and you know, the number one result is what the PIN actually is and if you looked at their top three, top five scores, obviously, they’re doing pretty well. So, this is really quite outstanding and it’s kind of like, “Okay, how the hell does this work?” And it’s really – it’s like super genius.

So, the setting is a classic coffee shop kind of setting and you’re sat at the table, working, and the attacker needs to be relatively close, sort of, another table within the coffee shop, a few meters, something like that, and it begins with the old chestnut of, “Let’s set up a rogue kind of access point.” So, we’re here. We should all know about those rogue access points.

[0:22:34.7] Guy Podjarny: Indeed, we still fall for them but yeah.

[0:22:35.9] Adrian Colyer: We still fall for them. So, it's a rogue access point, so now I can see your traffic flow. It turns out that you can figure out when somebody's about to go to one of these payment services, and you've got to time the attack just right, simply by looking at the IP address. They often use different IP addresses for the payment part of the services because –

[0:22:54.0] Guy Podjarny: They need to know.

[0:22:55.4] Adrian Colyer: So, they just look for traffic going to that IP address –

[0:22:56.6] Guy Podjarny: For security, actually like to – they use the other services –

[0:22:57.9] Adrian Colyer: Yeah-yeah, exactly.

[0:22:59.4] Guy Podjarny: For security purposes.

[0:23:00.5] Adrian Colyer: Doesn’t this happen so many times? Yes. So, it’s a different address, they’re relatively stable, like a couple of weeks or so. So, like I’m an attacker, I use the site, I go to that service, I use some kind of, you know, traffic sniffer, see the IP address, great, got it. When my victim in the coffee shop now goes to that IP address, what I start doing is sending a high rate of ICMP requests.

ICMP echo requests, and they'll bounce back little replies, about 800 a second, which sounds like a lot, but actually, the bandwidth requirement is such that nobody's going to notice this. This is completely surreptitious. And what happens is, in the network interface card, many, many of them will freely make available to you what's called this channel state information. So, you know, your WiFi can go over a number of different channels, and the strength of each channel is based on sort of the constructive and destructive interference, all sorts of things that go on. So, you can imagine: your hands are around the phone and moving across it.

[0:23:53.7] Guy Podjarny: Yeah.

[0:23:53.7] Adrian Colyer: That’s enough to interfere with these different channels, which is detectable in this CSI information inside the network card and it turns out, you can run a classifier on that and in the paper, there are little pictures of the waveforms and they are very identifiable and you can figure out which digit it is. Now, you do need to know how that particular user moves their hand.

So, you might think, “Oh, this is a weakness of the strategy.” But, as they point out, you know, they need a little bit of creativity to figure that out.

[0:24:20.2] Guy Podjarny: Yeah.

[0:24:20.6] Adrian Colyer: For example, one could throw up a captcha for using this particular WiFi service, or something along those lines, that sort of happens to have digits in the captcha image, and there you go, you've got it. You know, there are very creative ways of doing this.

[0:24:34.0] Guy Podjarny: You also don't need a 67% success rate. I mean, if you normalize the patterns and you were successful, you know, 10% of the time, that's really good stats for an attack.

[0:24:43.7] Adrian Colyer: Exactly. So, you know, it’s a – there are loads of attacks here. I mean, if you look at this, there are so many genius ways of recovering passwords and PINs, you know, microphones on the keyboard, accelerometers, you know, they’ve done it with smart watches, they’ve done it with you know, webcams, all sorts of things but this one is kind of special and it is zero access to the device, you know, sort of a nice remote, hands off, and it’s just like, it shocks us all. “Oh wow, that’s even possible? I just never thought of that.”
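
To make the classification step concrete, here is a toy Python sketch of the template-matching idea on purely synthetic data: invented per-digit "CSI amplitude" templates for one user, and a nearest-template classifier for a new trace. Real CSI capture, preprocessing, and the paper's actual classifier are far more involved; everything below is made up for illustration.

```python
import math
import random

random.seed(0)

# Invented per-digit templates: a short window of "CSI amplitude" values that
# this particular user's hand movement tends to produce for each digit.
TEMPLATES = {digit: [math.sin(0.5 * digit * t) for t in range(20)]
             for digit in range(10)}

def distance(a, b):
    """Euclidean distance between two equal-length traces."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(trace):
    """Return the digit whose template is closest to the observed trace."""
    return min(TEMPLATES, key=lambda d: distance(trace, TEMPLATES[d]))

# Simulate observing digit 7 with a little measurement noise.
observed = [v + random.gauss(0, 0.05) for v in TEMPLATES[7]]
print(classify(observed))  # most likely prints 7
```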

[0:25:09.4] Guy Podjarny: It’s all this, this whole class of side-channel attacks, right?

[0:25:12.5] Adrian Colyer: Exactly, yeah, yeah. That is some –

[0:25:13.4] Guy Podjarny: Sort of this ability to use a generally, you know, a seemingly benign channel that would have nothing to do with it and you can still reconstruct it.

[0:25:21.3] Adrian Colyer: Yeah, and this is a little bit off-piste but very interesting, because you reminded me of it. There is a piece of work called CLKSCREW, along those lines, that also really kind of caught my imagination, which looks at DVFS, the kind of power management for the chip to save your battery. You can sort of downscale it a little bit, you can reduce the power.

You can reduce the frequency, et cetera, or you can, of course, increase it back again, and it turns out there aren't good enough safeguards around that. So, you can push the thing into overclocking that causes occasional bit flips, and then there's this whole other wonderful story that, again, doesn't seem intuitive, but it's just incredible: if you can flip one single bit, you're basically kind of hosed, it turns out. The way I kind of intuitively came to understand that is to think about, at the core, something like the difficulty of factoring the product of large primes, or something like that; that fundamental sits right behind all of this. Imagine if you could just change one bit at the appropriate time: all of a sudden, you've got a much more factorable number, for example. And so, you know, that is just an example, it's not exactly how it all works, but there are many little things like that where you time the bit flip such that all the guarantees you thought you had, you know, break and –

[0:26:31.4] Guy Podjarny: Yeah, it disappeared on it.

[0:26:32.4] Adrian Colyer: Some of these side-channel things are just incredible.

[0:26:34.2] Guy Podjarny: Yeah. Okay, so that’s fascinating, you know, alarming but fascinating.

[0:26:37.8] Adrian Colyer: Yes, indeed. Yeah.

[0:26:38.4] Guy Podjarny: Give us another example of like an interesting –

[0:26:40.6] Adrian Colyer: Ah, yes. So, another one I kind of picked when we were thinking about this, it's also phone-based, it turns out, but "Leave Your Phone at the Door" is a really kind of interesting one. So, this is actually about, I guess, industrial espionage or those kinds of scenarios. So, imagine we're in an industrial manufacturing plant and, you know, I've got some CNC milling or some 3D printing or something like that going on.

And there's a lot of IP actually in the way I construct or manufacture these objects. And so, if you're already physically in the plant and you're near enough to the machine, and I can either figure out a way to plant some kind of malware on your phone, or, if that's too much hassle, I just call you up and you're happy to talk to me in the vicinity of the machine, then I can use the phone's microphone, or, if I've actually got malware on the phone, I can use the magnetometer as well.

And you know, these machines give off characteristic noises depending on the angle of the head, so depending on sort of what they're printing, and sort of when they move, for example, vertically up and down, you can imagine the carriage, this sort of noise I can hear, and those are also, it turns out, inferable. I mean, it can be done reliably enough even in the background of a phone call.

And so, again, the paper has some pretty amazing examples of, you know, a particular shape being printed out and then, reconstructing after the fact, the shape that they think you printed, and again, they are relatively simple shapes, but the fact that it is possible at all is actually, you know, the astonishing thing, you know? And hence, again, the kind of fun title of the paper, like, really?

I guess, you know, one of the takeaway lessons here is we all know these phones are amazing kind of spying devices. It’s packed with sensors and all sorts of things and people have the most creative ways of getting information out of them and so really, you know that’s just another example of what can be done.

[0:28:31.8] Guy Podjarny: Any information you provide can and will be used against you, right?

[0:28:34.7] Adrian Colyer: Yeah, exactly, yes.

[0:28:36.1] Guy Podjarny: In those elements.

[0:28:36.6] Adrian Colyer: Yep.

[0:28:37.1] Guy Podjarny: Well, okay. Yeah, and there's a – you know, I know we had a challenge here because there are just so many creative ones. I guess that's really where the minds kind of go wild a little bit, when researchers can just sort of explore different paths.

[0:28:48.7] Adrian Colyer: Yes, yes.

[0:28:49.7] Guy Podjarny: So, this is you know, we’ve only mentioned like a handful of these but these are, you know, like you collect this, you know, ridiculous number of papers, right? That you said you read, which I’m assuming also you read more of those, and then you write and you summarize. I mean, how do you do it, you know? How much time does it take? You know, what are your sources?

[0:29:09.1] Adrian Colyer: Yes, some of these are frequently asked questions. So, I guess, really, you know, how long does it take is probably the most frequent question. It probably takes me between two and three hours on average to kind of read the paper, think about it, write it up, and then, especially if I take into account the time to actually turn it into a blog post and an email newsletter and a couple of tweets to go out, you know, like the whole packaging and pushing it out.

So, it is about – it probably is closer to three hours a post. I try not to add up the total time too often but it is somewhere on that order. In general, I read the paper in the morning if I have a commute, I’ll use the commute time to do it but always otherwise if I’m just at home, I’ll sort of read it in the morning. I sort of like to let it run around in the back of my mind. I’ll mark it up quite heavily as I read it and then later on, I’ll just come –

It's just one take, because if you do one every day, you haven't got time to be too precious about it. So, I think that discipline actually helps. You've just got to do it, you've just got to start writing, and I'll kind of outline the piece, my key thoughts. So, I'm trying to figure out what's the story I really want to tell around this and then get to it. So, that's kind of the basics of the process. You know, paper selection is something, as I said, I guess I've honed over the years.

But people will say, "Where do you find the interesting papers?" There are a number of ways of doing it. When you're just getting started, there are actually quite a few lists of papers out on the Internet, you know, so you'll get through those fairly quickly, but as seed material in a topic area, that's great. Then you might look at recommended reading lists for university courses, et cetera, which will help you find some of the classic, sort of test-of-time papers that give you a solid background.

And then the other thing that I guess has become the bedrock of my personal routine is you get to know both research groups and conferences that regularly publish work you like.

[0:30:58.7] Guy Podjarny: Yeah.

[0:30:59.2] Adrian Colyer: That's how I do it now. I actually have a calendar with the main conferences I follow marked on it. I know when they are in the year, I know right now is the time to go and look at their proceedings, and I'll work through and do a first pass through the abstracts, these ones might be interesting, then I'll do a quick read, and then I'll have the final selection. And that is kind of like the cornerstone now of my year.

You know, I probably have about 20-odd conferences that I regularly follow, plus, you know, sort of the pressure, I suppose, in a sense, of having to come up with one every day. I am always on the lookout for an interesting paper, and so anywhere I see one, you know, Twitter, use cases, whatever, I stash them all away and then I work through that backlog, so –

[0:31:36.6] Guy Podjarny: Explore that a bit – at this point, do you get a lot of papers sent to you? Do you get a lot of recommendations?

[0:31:42.7] Adrian Colyer: I get some, and it's always very gratefully received. If anyone is listening and wants to send me a paper, that's always, you know, very welcome. I do get a few, sometimes from researchers saying, "Hey, we just published this work. I think you might find it interesting." Sometimes from researchers pointing out someone else's work, which is always lovely, saying, "Hey, I saw this thing. I think it is really good." Sometimes from practitioners. So, they do come in, but it's still a minority sourcing avenue for me, yeah.

[0:32:09.7] Guy Podjarny: Still a small amount. Got it. Do you write them up in advance? Like, do you bulk write? Like, do you write like seven of them so you get a day off?

[0:32:17.1] Adrian Colyer: Yes. Yeah, I do, absolutely. So, this is one of the things that keeps me sane. My weeks are very hectic and, you know, I could be off here, there, and everywhere. So, I don't live within a 24-hour pressure window to have the next day's post. I am normally one week in advance. By the end of the weekend, I like to have all the posts from Monday to Friday kind of all scheduled and good to go.

[0:32:37.0] Guy Podjarny: Yeah.

[0:32:37.3] Adrian Colyer: People who follow regularly will notice they all come out, and the tweets, at exactly the same time every day. That's because they're all scheduled in advance. In fact, this week I am reading a collection of papers from a really terrific workshop. I am excited to be able to share this, it's called Recoding Black Mirror. If anyone has followed the Black Mirror show, it is looking at –

[0:32:56.1] Guy Podjarny: Talk about spooky.

[0:32:57.2] Adrian Colyer: Many of these various scenarios of, you know, how technology could go wrong, or the ethics around it. And one of the papers I was reading was about sort of what rights people have to data once, you know, a person is deceased. You know, how should we think about data rights? So, that just throws up really interesting questions around the theme.

You know, yes, actually, so it happens to be a very long answer, but, on average, you'd still get two and a half posts. So yeah, fingers crossed this won't happen, but you're good for at least two and a half days' worth if I'm gone, kind of on average, yeah.

[0:33:25.4] Guy Podjarny: Yeah, it takes some investment, yeah. Fascinating. So yeah, and I think we'll probably share a bunch of these links, also a bunch of the stories that we don't have time to talk about today.

[0:33:33.9] Adrian Colyer: Yes, of course, yeah.

[0:33:34.6] Guy Podjarny: In the notes of this podcast because there’s so many of them. Let’s sort of shift back I guess into the content. So, you know I guess another category we chatted about was sort of you know, not just exploring the new attack techniques but rather the other way around, like the security of the new technologies, you know, and what do they imply. Are there a couple of interesting examples from that world?

[0:33:55.0] Adrian Colyer: Yes. So, there's one that we kind of bounced back and forth, and again, it is another "oh, of course" once you hear it, but actually I was a bit naïve in not thinking about this beforehand. And so the paper is called "Game of Missuggestions," and again, it's a bunch of researchers that analyze what autocomplete suggestions you get when you start typing in your web browser. You know, when you go to sort of Google and do a Google search, et cetera. And I think, as I said, I never really thought about that as an attack vector, but it turns out not only is this an attack vector, there is actually an entire service industry that will carry out this attack for you for a fee, ranging from about $300 to $2,500, depending on kind of the sort of keywords you're interested in, et cetera.

And the goal is, like, you know, suppose, the example in the paper, the main one, is "I'm interested in finding online backup software." A classic thing: I may want to provide you with my online backup software, because that's probably going to have access to your files, and that sounds interesting. And so, you know, you start typing "online backup" and, in your autocomplete suggestions, you might get, like, a "shady vendor online backup free download" kind of autocomplete suggestion.

And to me, at least, I'd imputed some degree of trust to what I was seeing: this is clearly a popular search, it must be a well-known thing. That turns out to be completely unwarranted due to these services, which, even against Google, are very effective. And so, they analyzed this whole ecosystem, which turns out to be making about half a million dollars a week for some of the more popular kinds of manipulation services, or, I think they're called, online reputation management services.

[0:35:37.1] Guy Podjarny: So benign.

[0:35:38.2] Adrian Colyer: That's the phrase they might like to use, and really what they do is very straightforward. You know, you say, "These are the search terms I am interested in, this is the little phrase I want to pop up." And they'll go and use armies of sort of crowdsourced workers and other things to just drive a high volume of search requests using those keywords and, when it appears somewhere in the results, to then click on the appropriate link, and to do this over and over and over again.

And it turns out, you know, in the experiments, that after some period of elapsed time, which can be up to a month but isn't always that long, you can seriously game this and get your preferred phrase right up near the top of the results, and it can stay there then for one to three months as well. So, that's actually a pretty effective mechanism.

[0:36:18.6] Guy Podjarny: Yeah, and that's where people pay kind of good money for advertisements that are placed on that –

[0:36:22.8] Adrian Colyer: Exactly, yeah. And better than that, actually, the way they uncover how this goes on is also kind of cool. This is, I suppose, a short digression: you know, many people have heard of this thing called Word2vec that lets you kind of take a word and turn it into sort of a vector representation that somehow embodies its meaning. What they looked at is, if the autocomplete suggestions are kind of genuine, they probably ought to be fairly similar in that space to the true search results that you would get if you actually completed the search.

And so, they do this kind of Word2vec thing and then they look at the distance in the vector space, and they find that, indeed, the ones that have been manipulated are at, I'm just going to say, about a 0.7 distance, and the ones that are genuine are around 0.5, something like that. There is a real gap anyway, and these Word2vec techniques kind of uncover it by looking at the similarity.
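
A minimal sketch of that similarity check, using made-up embedding vectors in place of real Word2vec output; in practice you would embed whole suggestion and result phrases with a trained model, and the 0.6 threshold below is an arbitrary illustration.

```python
import math

# Invented 3-dimensional "embeddings" standing in for Word2vec vectors.
EMBEDDINGS = {
    "online backup": [0.9, 0.1, 0.2],
    "cloud storage reviews": [0.8, 0.2, 0.3],              # genuine-looking suggestion
    "shadyvendor backup free download": [0.1, 0.9, 0.7],   # manipulated suggestion
}

def cosine_distance(a, b):
    """1 - cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / norm

query = EMBEDDINGS["online backup"]
for suggestion in ("cloud storage reviews", "shadyvendor backup free download"):
    d = cosine_distance(query, EMBEDDINGS[suggestion])
    flag = "suspicious" if d > 0.6 else "looks consistent"
    print(f"{suggestion}: distance {d:.2f} -> {flag}")
```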

[0:37:12.4] Guy Podjarny: Yeah, and you could probably apply that, you know, Google might employ these to identify those manipulations, and then the attackers would find a different way to sort of regroup.

[0:37:19.5] Adrian Colyer: It's a never-ending arms race, yeah.

[0:37:21.1] Guy Podjarny: It’s amazing that they can –

[0:37:22.2] Adrian Colyer: Every sort of surface you expose, somebody will find a creative way to try and manipulate it to their end.

[0:37:29.8] Guy Podjarny: And that the attacks can scale to that magnitude, where you'd think of Google as just this sort of volume and size that can no longer be affected by kind of a single entity.

[0:37:39.5] Adrian Colyer: That is the incredible thing, but, you know, I guess if you are targeting sort of niche enough areas, you can game the system, it turns out, yeah.
[0:37:46.3] Guy Podjarny: That's amazing. It's interesting, I guess, in many ways, right? There's the identification of it, the pattern itself, scary, and I'll trust those recommendations less, but there's the attack on Google at that scale, and also the attack on machine learning data, right? This is kind of a new methodology that we're also embracing, and it shows how, if you can kind of manipulate or poison the data, you can poison the results.

[0:38:09.5] Adrian Colyer: Yeah, that's a whole other area that we haven't got time to dig into; there's great work on that. You know, I'd just say, if you can influence the training data or the training time of a model, et cetera, you know, they're learning machines and they will learn what you tell them and –

[0:38:23.2] Guy Podjarny: You can introduce them to something wrong.

[0:38:24.3] Adrian Colyer: You can absolutely bias what comes out of those systems if you're feeling malevolent.

[0:38:29.8] Guy Podjarny: So, I think we have time for one more. Let’s dig into one more kind of interesting you know, security of a new frontier.

[0:38:35.7] Adrian Colyer: Okay, so my absolute favourite that I have recently read, and it's like science fiction to me, this particular one, is called "Securing Wireless Neurostimulators." That's the paper title, and if you're not familiar with a neurostimulator, it's a medical device, an IMD, an implantable medical device, that you often sort of have parts of in your chest or somewhere around there, and it's directly wired to your brain. So, already this sounds like, "Ooh, this is kind of interesting."

[0:39:02.6] Guy Podjarny: Yeah, that’s over there.

[0:39:03.4] Adrian Colyer: And you know, if you have, like I said, something like Parkinson's, this can be very therapeutic, delivering the right kind of, you know, voltage to the right parts of the brain at the right time. Of course, it's implanted, so it needs to be remotely programmable. It is kind of like every "everything wrong with IoT/embedded security" tale. If you get into the paper, it's sort of security by obscurity: the protocol is not documented, but it can be reverse-engineered, and once you've reverse-engineered it, it turns out you can drive all these various attacks and things.

So, you think the obvious: "Oh yes, okay, once I can sort of send commands to the device..." I mean, you can cause a person to not be able to move, to not be able to speak, maybe you can do brain damage, maybe you could probably kill someone, you know? I hope these devices have safeguards but, given all the other things I know, you know, who knows?

[0:39:52.0] Guy Podjarny: Yeah, I agree, and those might be overridable if you know that code.

[0:39:55.3] Adrian Colyer: They might be overridable, so there's that. But there were two things about this paper that made me go wow. So, one of them was, you know, well, what if you aren't actually trying to change the signal, you're just trying to use it to get signals from the person's brain? And this is something I hadn't twigged, but there is this brainwave called the P300 wave, which, as the name suggests, comes about 300 milliseconds after you've visually seen something.

And you know, you can't really spoof this thing, and they have shown that, when you're recording this wave, it's possible to see if you recognize something, like a picture of your password or your PIN, or a face of someone you claim never to have met but that you do actually know, et cetera. And so, I mean, this is sort of literally hacking your brain in a sense, you know, getting secrets out in a way that's like –

[0:40:43.1] Guy Podjarny: Reading your results, yeah.

[0:40:44.1] Adrian Colyer: And the newer generations of neurostimulators will expose this P300 wave information. So, this is like, "That's actually possible as a hack?" You know, this is amazing and scary, like all of these things are. And then the second really cool thing they do is say, oh, come on, we need to do better, we can't have this kind of security. But actually, how would we generate a secure key for communicating between the implanted device and the programmer? You know, many people have tried different schemes, and the challenge is nearly always, you know, not only finding a sufficient source of randomness but also then how you transmit it from inside the device to the programmer in a way that is secure, that can't be eavesdropped on, et cetera.

And the really cool thing that the researchers do is they find something called the LFP, the local field potential, which is like a physiological signal that can be read from your brain, to do with the fluids around the brain and the electrical fields in them and some other things I can't fully explain and don't fully understand. But suffice to say, it's kind of pretty unique and pretty random, so they use this as a genuine source of randomness. So, literally, your brain is kind of making the random encryption key, which is really cool.

[0:41:50.2] Guy Podjarny: Wow.

[0:41:50.6] Adrian Colyer: And then, you know, the other bit is fairly straightforward: they require an actual explicit physical touch from the programmer to the skin to have this sent from the wire that goes from the brain down to the device and through the skin. And so, they've got a few protections around it, but you know, those two mechanisms, the P300 wave and this LFP thing, are both like, this is really kind of science fiction.
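
As a conceptual sketch of the key-generation idea only, not the paper's actual protocol: sample a noisy physiological signal, quantize it, and hash it into a symmetric key that both sides, the implant and the skin-contact programmer reading the same signal, can derive. The sampling and quantization below are invented placeholders.

```python
import hashlib
import random

def sample_lfp(n_samples=256, seed=None):
    """Stand-in for reading the local field potential; returns noisy floats."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n_samples)]

def derive_key(signal):
    """Quantize the signal to bits and hash it into a 256-bit key."""
    bits = "".join("1" if s > 0 else "0" for s in signal)
    return hashlib.sha256(bits.encode()).hexdigest()

# Both sides measure the same signal during the skin-contact window,
# so they derive the same key without ever transmitting it over the air.
shared_signal = sample_lfp(seed=42)
implant_key = derive_key(shared_signal)
programmer_key = derive_key(shared_signal)
print(implant_key == programmer_key)  # True
```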

[0:42:11.0] Guy Podjarny: Yes, it's very cutting edge. Yeah, the world of medical IoT is a scary one when you say security next to it, but its potential is amazing, and you know, the whole "with great power comes great responsibility" phrase really, really kind of hits a nerve here. These are fascinating, and you know, we're really again just sort of scratching the surface. You know, we didn't get to talk too much about privacy. We were going to talk about it; do you want to just sort of mention in passing a couple of interesting papers there?

[0:42:36.0] Adrian Colyer: Yeah, let's do it. Let's do it very quickly in passing. So, one interesting paper, again, a sort of practical paper that I recently came across, is called "PrivacyGuide," and this is the idea that, you know, as we all know, and hopefully it's getting better with the GDPR, buried inside the terms of service are all sorts of stuff, and they're incredibly lengthy.

Again, researchers have analyzed it: it would take us about 244 days, I think, if you actually wanted to read the terms of service for the major services that you use, totally impractical. So, they use, you know, machine learning and NLP and various other techniques, and they'll read it for you and actually turn it into what I think of as a bit like a nutrition label. So, they've got kind of 11 categories that they've broken it down into, and you get a simple kind of traffic-light scoring in each category, and they'll tell you basically whether the terms look good or whether you have to go and investigate. So, it's a really quick aid.
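
To illustrate the nutrition-label idea only: the real system uses trained classifiers rather than keyword matching, and its 11 categories differ from the made-up ones below. A toy Python sketch:

```python
# Toy keyword rules standing in for PrivacyGuide's trained classifiers.
CATEGORY_RULES = {
    "third-party sharing": (["share with third parties", "sell your data"], "red"),
    "data retention": (["retain indefinitely"], "amber"),
    "user deletion rights": (["you may request deletion"], "green"),
}

def label_policy(policy_text):
    """Assign a traffic-light colour per category based on keyword hits."""
    text = policy_text.lower()
    report = {}
    for category, (keywords, colour_if_hit) in CATEGORY_RULES.items():
        hit = any(keyword in text for keyword in keywords)
        report[category] = colour_if_hit if hit else "grey (not mentioned)"
    return report

policy = ("We may share with third parties for marketing purposes. "
          "You may request deletion of your account data at any time.")
for category, colour in label_policy(policy).items():
    print(f"{category}: {colour}")
```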

[0:43:27.5] Guy Podjarny: Yeah, it sounds very useful.

[0:43:28.9] Adrian Colyer: Yeah, and building on that, my absolute favourite work is on privacy notices that actually work, because the research also shows you can tell somebody, "This is what this has access to," you can be as explicit as you like, and they'll still click yes. They just push through and –

[0:43:39.4] Guy Podjarny: Yeah, they’ll just push through because they’re trying to do something else, right?

[0:43:42.5] Adrian Colyer: So, it turns out there is one dialogue that works, and it's from a paper they give a brilliant title: "The Curious Case of the PDF Converter that Likes Mozart." So, it is a PDF converter app that, as you can imagine, is looking at your music and all sorts of stuff it shouldn't really need to do. And what they found was, the thing that gets people's attention is, instead of just saying, "It will have access to your photos, it will have access to this," which they do still say,

And then they say, "And this is what it can reveal about you." And they take some examples of information that could be inferred, like, "These are the faces of the top five people you communicate most with," or, you know, "We can figure out that you like X and you frequently shop here." And giving these insights gleanable from the data actually works, and people go, like, "Whoa, hang on, I don't want that." You get this impact of what it actually means to reveal that information, because I just don't think people have a –

[0:44:33.5] Guy Podjarny: No.

[0:44:33.8] Adrian Coyler: A good intuition about how much you can be learned from fairly scarce data. Let me finish with one last kind of amazing-to-me tale, which ties into privacy and GDPR and sort of this discussion we’re having around the anonymization and pseudo-non-anonymity, God, that’s a hard word to say. It’s kind of like the – it’s anonymous but it isn’t really and sort of to be declared fully anonymous. I think the word kind of sounded like it must be impossible to reverse engineer, which if you kind of read around a bit like that’s also an arms raise and I guess the paper that brought home to me how clever people are at reidentifying from data is called, “Booting Trajectory Out of Ash,” something very close to that.

[0:45:18.1] Guy Podjarny: Is it Trajectory Recovery from Ash?

[0:45:19.5] Adrian Colyer: Trajectory Recovery from Ash, yeah, there you go. Very briefly, and maybe I'll leave it as a mental exercise to think about how it's done. So, here's the setup: you have aggregate data from cell towers, it's a time series, and so what you have is a cell tower name or number or identifier, whatever it is, and a count of the number of devices, you know, communicating with or actually attached to that cell tower at that point in time.

And you've just got that data, for all your cell towers, and the device counts. You would think that would be safe to release for analysis and research, etcetera, and in this paper the researchers show, kind of step by step, that when you work through it, you go, "Oh no, no, no, no." You know, they can uniquely identify all the individual users' trajectories from this aggregate data. Once they've got back to the trajectories, it is actually trivial to find the person. I mean, if you look at the accumulation of paths that are connected, the two most common addresses are probably home and work, and, you know, you're nearly there. Give me your three most frequently used locations and I probably know who you are, with a few extra scraps of information. And they do this step by step, just from the aggregate data. It's super clever.

And it's really ingenious, and the essence of the idea is this: imagine some cell tower locations, let's make it three in a straight line, so I could walk from A to B to C, and there are, say, five people at each of them at time one. At time two, there are six in the left-most, let's say, four in the centre, and five on the right. Your most likely guess is that somebody walked from the central tower to the one on the left, if I've kept my arithmetic right.
And you can do this at scale: essentially you set it up as a big optimization problem, like, "What is the least-cost set of movements that could cause this?" with some other heuristics about time of day and where people like to move, and the fact that if you're going at a certain velocity you're likely to keep going. And it works: you can solve this set of constraints and out fall highly reliable trajectories for people. So, you're never as anonymous as you think is really the lesson.
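As a toy illustration of that least-cost-movement idea, here is a minimal Python sketch for the three-towers-in-a-line example, using the Hungarian algorithm to pick the cheapest set of per-device movements that explains the change in counts. This is an illustration of the intuition only, not the paper's actual algorithm, which chains this kind of step across many timesteps and layers on the extra heuristics Adrian mentions.

```python
# Toy sketch: infer the most likely movements between two timesteps from
# aggregate cell-tower counts by solving a minimum-cost assignment problem.
# Illustrative only; not the method from "Trajectory Recovery From Ash".
import numpy as np
from scipy.optimize import linear_sum_assignment

tower_pos = {"A": 0.0, "B": 1.0, "C": 2.0}   # three towers on a line
counts_t1 = {"A": 5, "B": 5, "C": 5}         # aggregate device counts, time 1
counts_t2 = {"A": 6, "B": 4, "C": 5}         # aggregate device counts, time 2

# Expand the anonymous counts into unit "devices" (time 1) and "slots" (time 2).
devices = [t for t, n in counts_t1.items() for _ in range(n)]
slots = [t for t, n in counts_t2.items() for _ in range(n)]

# Cost of a device moving from tower i to tower j = distance between the towers.
cost = np.array([[abs(tower_pos[i] - tower_pos[j]) for j in slots]
                 for i in devices])

# Hungarian algorithm: the globally least-cost assignment of devices to slots.
rows, cols = linear_sum_assignment(cost)

moves = {}
for r, c in zip(rows, cols):
    if devices[r] != slots[c]:
        key = (devices[r], slots[c])
        moves[key] = moves.get(key, 0) + 1

print(moves)  # least-cost explanation: exactly one device moved B -> A
```

Repeating this matching for every consecutive pair of timesteps and stitching the assignments together yields per-device trajectories, and the home-and-work heuristic described above then ties a trajectory to a person.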

[0:47:22.0] Guy Podjarny: Yeah, you're never anonymous, and, you know, nothing is ever really safe with all these side channels. Wow. These are scary items. I guess I'll remind everyone that oftentimes there's a recommendation element here, right? One aspect of these papers is to highlight the concern, but another, more important thing, and you probably have that bias in selecting them, is that practically every one comes with some concrete suggestions, some advice.

Sometimes it's advice to you as a consumer of these things, and sometimes it's advice to the future creators of these types of systems, which is us, right? That's the tech industry, the development industry, the developers: the people who can build things right, learn from these papers, and build them correctly. So, Adrian, before you entirely disappear on us here, I have a question that I like to ask every guest, which is: if you had one piece of security-related advice or one pet peeve that you'd like to share, what would that be?

[0:48:17.9] Adrian Colyer: It's a great question. I guess normally I'd have had the half hour or so of this conversation to think about it, but it has basically been in the back of my mind. Maybe the thing I would say is that the more you read and understand, the more it reinforces the impossibility of first coming up with a design and then bolting on security after the fact. I think security really is an integral part of the design and everything else that follows, and that's the way you need to be thinking about it.

And I believe that the right things will fall out of that approach. So, perhaps my pet peeve is bolt-on security, security by obliviousness, the kind of, "Yeah, we didn't really think about it, but we'll probably get away with it." Which, in today's highly networked world, with some of the things that are becoming connected, as we talked about, is just not responsible anymore.

[0:49:11.1] Guy Podjarny: Yeah, cool. That's a really good tip, good advice. This was fascinating. We've gone longer than we intended just because we could go on and on. We'll post the whole slew of links here, but tell us quickly: if somebody wants to subscribe to The Morning Paper, how do they do it? How do they find you?

[0:49:28.3] Adrian Colyer: Ah yeah, great question. Thank you. So, I guess the simplest way would be to Google The Morning Paper and maybe my name, Adrian Colyer, which is spelt C-o-l-y-e-r, or it's at blog.acolyer.org.

[0:49:39.9] Guy Podjarny: Cool. Yeah, and they can register there, and, you know, I highly recommend it. I have been reading it for quite a while, and yeah, we'll post a bunch of these links so you can, you know, figure out the math for recovering trajectories from ash. You can actually read the paper, or just Adrian's summary of it. Adrian, this has been a pleasure, thanks a lot for coming on.

[0:49:54.7] Adrian Colyer: That was a ton of fun, thank you so much.

[0:49:56.2] Guy Podjarny: And thanks everybody for joining us and tune in for the next episode.

[END OF INTERVIEW]

[0:50:01.1] Guy Podjarny: That's all we have time for today. If you'd like to come on as a guest on this show or want us to cover a specific topic, find us on Twitter @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over a hundred videos about building developer tooling companies, given by top experts in the field.
