Season 3, Episode 19

Measuring Security With Allison Miller

Guests:
Allison Miller

Allison Miller: "I have inserted the fit of piques, said, well, ML is just a fancy name for statistics, and caught a lot of heat for that. Yes, I know the difference between the different algorithms and supervised versus unsupervised. But at the end of the day, a lot of folks end up realising what works best, and that's the trick. But it doesn't matter which tool you use, no one gets bonus points for using a cooler algorithm. If it moves, measure it, and if it doesn't move, measure it in case it moves."

[INTRODUCTION]

[0:00:36] Guy Podjarny: Hi, I'm Guy Podjarny, CEO and Co-Founder of Snyk. You're listening to The Secure Developer, a podcast about security for developers, covering security tools and practices you can and should adopt into your development workflow.

The Secure Developer is brought to you by Heavybit, a program dedicated to helping startups take their developer products to market. For more information, visit heavybit.com. If you're interested in being a guest on this show, or if you would like to suggest a topic for us to discuss, find us on Twitter, @thesecuredev.

[INTERVIEW]

[0:01:07] Guy Podjarny: Hello, everybody. Thanks for tuning back in. Today, we have an awesome guest with us that I've long wanted to sort of bring on the show, Ally Miller. Thanks for joining us, Ally.

[0:01:16] Allison Miller: Thank you for having me. Very excited to be here.

[0:01:19] Guy Podjarny: Ally, you've got a sort of a long and interesting history in the world of security. Can I kind of ask you to just give us some context? How did you get into security? What's the short story of your life here in the security world?

[0:01:31] Allison Miller: Absolutely. I think I have tried a few times to figure out where I got bitten by the initial bug. Every time I think about it, I start going back further and further to childhood, where that sort of paranoia or interest in protecting things came from. But, I largely got interested in it in college, I was studying finance and economics. I was really interested in e-commerce, which was just getting started to grow. How technology was going to be applied to the needs of business became interesting to me.

For some reason, I mean, no one around me knows where it came from, but something about – well, that's going to go wrong, or that's going to get exploited. I became really interested in the security implications, the economic implications of electronic money, and of commerce, and transactions, and payments. I was very interested in that in college. There was nothing really to research. These were in the days of e-cash. This was long before cryptocurrency became a thing. That's where my interest kind of gelled.

But then, I didn't really know how to do that for a living, it was more of an academic or prurient interest than a particular idea of what I could do for work. I ended up just going into IT. I used the information systems decision science part of my degree, and kind of left the business economics, and criminology aspects of what I studied to the side for a little while. But within six months, I got a reputation as the girl who asks all the weird questions about cryptography and security.

When the company that I worked for decided, or realised, rather, that they needed to have a more specific security strategy and were going to build a security department, I became the first hire into that department and was able to help build that practice from the ground up, in partnership with the CSO and a couple of other folks.

Then, I kind of took the show on the road and decided to see what other opportunities I might be able to find. Interestingly enough, I ended up at Visa, which was one of the places that I had thought was interesting from the outside and from an academic point of view, because that's where payments were happening. They looked at my resume, and you very kindly described it as interesting and lengthy, but when they saw it, they just thought it was weird, and so they wanted to chat with me. That's how I moved out to Silicon Valley, not to work for a Silicon Valley startup, or perhaps to work for one of the original Silicon Valley startups, which was Visa. Then, ended up going to other sort of .com –

[0:04:40] Guy Podjarny: Yes, environments. You went on from there, so this sort of got you into the risk category. I guess we'll continue a little bit along this journey, but what was sort of captured in those days in risk? What type of activity falls under that mantle?

[0:04:58] Allison Miller: My career has been kind of a jungle gym, swinging from one side to the other. I went to Visa to work on technology risk; they were interested in using new technologies to make payments work. Things like making payments work online, chip cards, mobile technologies. My job was to help them figure out how to employ the new technologies safely. But I very quickly became interested not just in what the technology implications were, but in how the design of the financial products themselves created or resisted risk or exploitation. So, I moved from technology risk to product risk.

Well, I don't know if you noticed, but compared to, say, a startup, Visa is a little bit slow. At some point, I had actually got through and risk assessed, if you will, all of their new and emerging products, and realised I didn't have anything new to work on, and that perhaps I actually wanted to go where there was a lot of risk being dealt with directly. That's what led me to make the jump over to PayPal.

At PayPal, the game of risk was very different. It wasn't about Allison going in to pick apart the design of this product and figure out where all the weak spots are. It was all math and statistical modelling, anti-fraud technologies as practised by large banks. They had been developing those sorts of modelling techniques and capabilities for years and years. It was actually quite structured, and required that I learned on the job fairly quickly how to use tools like SAS, SPSS, and R, and those types of things.

That was a very, very different approach from what I had initially thought about related to understanding how risk works, which was from that sort of technical point of view, where you're deconstructing a design and figuring out where there are flaws, all the way over to the sort of pure math approach. I did that at PayPal for a little while, really enjoyed it, and learned a lot. I worked in that sort of anti-fraud risk function for many years at PayPal. Then, I went back to the technology: I was helping design how mobile authentication mechanisms were going to be incorporated into the login flow. For example, using 2FA, bringing together all of these different factors of authentication, and adding in the type of identity validation that we needed to do for new account signups, and other technology that we were bringing to bear to protect the accounts, or to reduce the fraud risk.

In that sort of approach to technology, I was working a lot more with engineers; I was working as a product manager in some contexts. I was also still doing a lot of analysis and working with the modelling team, but I had shifted away from the math and back more squarely into the technology. Using data has infused or informed almost every job that I've had since then. In some form or another, I've worked a lot on detection technologies, and the underlying math of it is often similar.

Then, I have continued working with engineers in every role that I've had subsequently. Because for the risks that I've been dealing with, battling, if you will, the controls get embedded in the fabric of whatever the platform is. At PayPal, it was fairly straightforward. You are either trying to prevent a fraudulent transaction, or you're trying to prevent someone from logging into someone else's account. Those were the primary risks that I was looking at. But every platform has its sort of version of that. Communication platforms deal with spam, gaming platforms deal with cheating or griefing. Then, in advertising, you can have bad actors trying to get ads into the system, or using other people's accounts to put ads into the system. Account security has also been a common thread.

I guess, the broad strokes in my career, I went from the economics to the technology, to the technology of economics, to the economics of technology, back to the technology, and there's still economics in it, plus data.

[0:09:45] Guy Podjarny: Yes. Well, that's quite a combo there.

[0:09:49] Allison Miller: Yes. It's just amazing how everything is interconnected. People, when they look at my resume, their reaction is still, "Wow, you've had a really weird journey, but it all makes sense in retrospect."

[0:10:04] Guy Podjarny: Well, I think the acknowledgment of the combo of data and engineering, and having data-driven decisions, is something that actually permeates more and more with the world of machine learning and AI. I think the notion of using data for risk assessment was an early player there. It doesn't matter if you call it ML or AI; it's driving technology actions to act on the data at the end of the day.

[0:10:30] Allison Miller: I think you're quite right. I have asserted, in fits of pique, 'Well, ML is just a fancy name for statistics,' and caught a lot of heat for that. Yes, I know the difference between the different algorithms, and supervised versus unsupervised. But at the end of the day, a lot of folks end up realising that what matters is what works best, and that's the trick. The trick is, you're trying to optimise the performance of this decisioning system. Sometimes the ML helps you, sometimes a neural net or AI might help you. But in a lot of cases, you just kind of end up with a rules engine with a bunch of heuristics, no matter how wonderful the Bayesian learning network is.

Sometimes you're just going to end up with a logistic regression model. It's okay, folks, it's okay. Use the tool that best helps you get to the optimisation level that is your horizon. Push the envelope; it doesn't matter which tool you use. No one gets bonus points for using a cooler algorithm.
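As a rough sketch of the kind of model Allison mentions here, this is a plain logistic regression fraud scorer. The feature names and weights are invented for illustration, not taken from any real system; a production model would learn them from labelled transaction history.

```python
import math

# Illustrative features and hand-set weights; a real model would learn
# these from labelled transaction history.
WEIGHTS = {"amount_zscore": 0.8, "new_device": 1.5, "country_mismatch": 2.0}
BIAS = -3.0

def fraud_probability(features):
    """Plain logistic regression: sigmoid of a weighted sum of features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = fraud_probability({"amount_zscore": 2.5, "new_device": 1, "country_mismatch": 1})
print(f"{p:.2f}")  # ~0.92; the decision threshold would be tuned to the cost of errors
```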

[0:11:39] Guy Podjarny: Yes, indeed. You just need to sound impressive. I think that's pretty much the way it is; that's when the term machine learning kicks in. You've done a lot of these different risk assessments. Are there low-hanging fruit? Think of somebody building a solution, and granted, this is probably a very broad question, a solution that has some large volume of transactions. Are there early suspects for reducing risk? Have you seen some trend where, if you only did this one thing, you'd have some initial first hit at eliminating noise, or the most blatant abuse? Something that is the equivalent, for risk reduction or bad transaction reduction, of input validation for web attacks?

[0:12:22] Allison Miller: Well, I guess it really does depend on the system. But a few rules of thumb, or something that I would recommend to folks who are starting out: you have a platform, you deal in something; therefore, someone may figure out how to exploit whatever your version of a transaction is. Instrument the heck out of everything, in the sense that what these decisioning technologies work off of is telemetry, or what would be considered telemetry. To give you an idea of what to instrument: in my head, I'm looking at these horrifyingly long tables. That's what I'm imagining. I'm imagining that when an event happens, you have a timestamp, and you have who attempted it, what they were attempting, and what the result was, et cetera, et cetera.

That's how I think of it. I know not everybody's using – I mean, thank goodness, not everybody's using relational databases for everything. But in my mind, it looks like logs, log files. You want a record of what happened, so that when you start to build these decisioning technologies, you have data that you can process and search for what you then learn are unusual behaviours.
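A minimal sketch of the event record she describes, assuming a JSON-lines log file; the field names are illustrative, not from any particular system.

```python
import json
import time
import uuid

def log_event(actor_id, action, result, log_path="events.jsonl", **context):
    """Append one decision-ready telemetry record per event.

    Fields mirror the record described above: a timestamp, who attempted
    it, what they were attempting, and what the result was.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),      # when it happened
        "actor_id": actor_id,          # who attempted it
        "action": action,              # what they were attempting
        "result": result,              # what the result was
        **context,                     # anything else: IP, device, amount...
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a payment attempt.
log_event("user-123", "payment", "declined", amount=15.00, currency="GBP")
```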

In transactional systems, if we're talking about something like payments, or even to an extent communications, there are a couple of places where risk tends to cluster or bundle, I guess. One is when you have newness. For example, a new account: you don't know much about it, it could be real, it could be a bot. What it does initially, or what you let brand-new accounts or actors do on your system, there's an interesting place there. Then, when you have accounts, or features, or processes that you've had for a very long time, and suddenly they're doing something new, that's also a place to look for risks. But that's very abstract.

[0:14:30] Guy Podjarny: Yes. No, but actually, it's super practical. I feel like the first recommendation you gave is very DevOps in nature: if it moves, measure it, and if it doesn't move, measure it in case it moves. Accumulate the data so you can later establish right from wrong. Actually, I find newness to be fairly concrete, not very abstract; very clear-cut. When a new entity gets created, or an action is done for the first time, that's when you scrutinise.
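A hedged sketch of that "scrutinise newness" idea as a couple of heuristics over the event records above; the thresholds and the in-memory store are hypothetical stand-ins for a real feature store.

```python
from collections import defaultdict

# Track what each actor has been seen doing before (hypothetical in-memory
# store; a real system would back this with a database or feature store).
seen_actions = defaultdict(set)

def newness_flags(actor_id, action, account_age_days):
    """Return the 'newness' signals discussed above: brand-new actors,
    and long-established actors suddenly doing something new."""
    flags = []
    if account_age_days < 1:
        flags.append("brand_new_account")            # could be real, could be a bot
    if action not in seen_actions[actor_id]:
        flags.append("first_time_action")            # scrutinise first occurrences
        if account_age_days > 365:
            flags.append("old_account_new_behaviour")
    seen_actions[actor_id].add(action)
    return flags

print(newness_flags("user-123", "wire_transfer", account_age_days=400))
# ['first_time_action', 'old_account_new_behaviour']
```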

[0:15:00] Allison Miller: An example that I like to use is from when I worked with Skype fairly early on, before it was owned by Microsoft. Folks forget, it was owned by eBay briefly, for a couple of years. When Skype first started up and was offering voice-over-IP, what many folks might not have thought about is the fact that VoIP minutes were extremely monetizable and very attractive to fraudsters. Fraudsters being folks who are using other folks' credit cards to buy things.

Skype did something very interesting, because providing voice-over-IP is not expensive. It's pretty cheap, which is why they were doing it, but it wasn't free, interestingly, because there are telecommunications providers that have to pay each other for connecting calls and things like that. Anyway, what Skype did is, if you were a new paying customer, meaning you wanted to make calls out of Skype to phones or accept phone calls in, that was the paid portion of the service. I think you were allowed to pay in £15 worth of calls for the first 90 days that you were a subscriber. It was very specific, and there was a reason for it. Which was, if you had stolen someone's credit card and you had maxed out your £15, then they expected that they would receive a chargeback from the legitimate cardholder's issuer within those 90 days. Because, at least at that time, the average chargeback return time was somewhere around 45 days.

They figured they would get 80% to 90% of the chargebacks in within that window. If you were a normal, innocent customer, like most folks are, maybe you wanted more, and you would complain, "Why can't I have £30? Fifteen pounds is not enough for me to make all the phone calls I want to make." They were very annoyed. But the annoyance factor of the good customers was something they had to risk, because the attraction of the system to fraudsters was such that they just put in that really strict, sort of draconian measure. Because new customers, new credit cards to their system, those were the riskiest ones.
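A sketch of that control, with the thresholds taken from the anecdote (a £15 cap inside a 90-day window, sized so most chargebacks arrive before the cap lifts); the function and field names are invented for illustration.

```python
from datetime import datetime, timedelta

NEW_SUBSCRIBER_WINDOW = timedelta(days=90)   # expected chargeback arrival window
NEW_SUBSCRIBER_CAP_GBP = 15.00               # maximum call spend inside the window

def allow_purchase(signup_date, spend_so_far_gbp, amount_gbp, now=None):
    """Apply the new-subscriber velocity cap from the Skype anecdote.

    Limits exposure on a stolen card: by the time a new account could
    exceed the cap, the legitimate cardholder's chargeback (typically
    ~45 days, per the episode) has usually arrived.
    """
    now = now or datetime.utcnow()
    if now - signup_date >= NEW_SUBSCRIBER_WINDOW:
        return True  # established subscriber: no cap from this rule
    return spend_so_far_gbp + amount_gbp <= NEW_SUBSCRIBER_CAP_GBP

signup = datetime.utcnow() - timedelta(days=10)
print(allow_purchase(signup, spend_so_far_gbp=12.00, amount_gbp=5.00))  # False
```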

[0:17:31] Guy Podjarny: Yes. This is sort of the risk side, and we got into it, and you've done all that digging. The current leg of your journey is a little bit more on the sort of security engineering side, right?

[0:17:43] Allison Miller: Yes, that's right.

[0:17:45] Guy Podjarny: Tell us a little bit like, is that indeed a transition? How is that a transition? What's this new world and what made you try it out?

[0:17:53] Allison Miller: Right. Yes, and a lot of folks, when they think of me, they think of fraud. That's an interesting distinction.

[0:18:03] Guy Podjarny: It's not a bad association, as long as you're the one dismantling the fraud and not the fraudster yourself.

[0:18:08] Allison Miller: I think, to sort of put it into context, I want to mention what I had been doing just prior to this new role that I'm in, which is, that I had been working as a product manager, I guess. Yes, that's how I describe it. Doing strategy for some of the engineering teams working on security at Google. Specifically, well, I've been working with a few teams on things I cannot necessarily talk about, but I've been doing a lot of work with the Safe Browsing team.

The Safe Browsing team – what people see, the public version, is, "Oh, Safe Browsing. They're the ones who make Chrome show a red warning page if there's a phishing link or a malware link that I just clicked on." That's the Chrome experience that gets created. But interestingly, all of the major products at Google use the results of the Safe Browsing work. Some of them have customer-facing experiences, and some of them just have back-end things that they have done to make their product safer. Search uses it, Chrome uses it, Android uses it, Ads uses it, Gmail uses it. The Big Five, as I sort of thought of them, and most of the other products too.

What Safe Browsing does on the back end to create this list of URLs that are hosting harmful content is, they crawl the whole Internet in pieces, of course. There's no way to sort of snapshot the whole thing and process it overnight. They crawl, they sample, and then they evaluate what they see. For a phishing page, maybe they're evaluating the content on a page. But for malware, they actually evaluate the behaviour of the software. They have these enormous pipelines set up to actually understand the behaviour of the software itself, which means, they're downloading it and running it.

If it moves, measure it, and if it doesn't move, measure it in case it moves. The thing about that, that just blows my mind, man, is that all of the behavioural analytics techniques that I learned in a transactional environment like payments, all of those techniques can also be brought to bear to understand or to make evaluations about the behaviour of software.

[0:20:41] Guy Podjarny: Yes, letting that sink in a little bit. All of those can help measure the behaviour of software. Software or sort of the humans using that software?

[0:20:51] Allison Miller: Malware classification is a classification event that's being done based on the behaviours of software, because you ran it. You ran the software and were able to extract data out of the resulting behaviours. You essentially created a transaction by making the software run, or by taking some data associated with what else was on the page. You can essentially classify the behaviour – I'm doing air quotes for anyone who can't see me – by what you observe. It just kind of blew my mind, I guess. The idea that content analysis is associated with spam, fine. Behavioural analytics on transactions, those are events, something's in motion already, fun.

But the idea that it could then go back to the behaviour of software, and feed right back into the security use cases that I had kind of left a decade ago, it just blew my mind. I felt like I was home, to a certain extent, because it's one of the things that I was thinking to myself when I was in payments at PayPal, or any of the other gigs where I've been doing anti-fraud. I wished I could bring this expertise back to information security, because so much of information security feels like guessing. It's hard to quantify how much attack you have diverted. It is hard to articulate why to make the investments you're going to make in protections beyond compliance. Compliance is sort of the backstop, bottom-line answer for a lot of shops.
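A hedged illustration of software behaviour treated as a transaction: sandbox a sample, flatten the observed behaviours into features, and score them. The report format, feature names, and weights here are all hypothetical; real pipelines use instrumented sandboxes and trained models.

```python
# A sketch of "software behaviour as a transaction": run a sample in a
# sandbox, turn the observed behaviours into a feature vector, classify.

def extract_features(report):
    """Flatten a (hypothetical) sandbox report into numeric features."""
    return [
        len(report.get("files_written", [])),
        len(report.get("network_connections", [])),
        int(report.get("modifies_registry", False)),
        int(report.get("spawns_shell", False)),
    ]

def classify(features, weights, bias=-2.0):
    """Toy linear scorer standing in for a trained model."""
    score = bias + sum(w * x for w, x in zip(weights, features))
    return "malicious" if score > 0 else "benign"

report = {"files_written": ["a.dll"], "network_connections": ["1.2.3.4:80"],
          "modifies_registry": True, "spawns_shell": True}
print(classify(extract_features(report), weights=[0.2, 0.5, 1.0, 1.5]))  # malicious
```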

[0:22:40] Guy Podjarny: Yes, protect yourself from audits.

[0:22:43] Allison Miller: I was always hoping I could bring something back in related to that quantified understanding of exposure, and of the performance of what you'd built. When I worked with Safe Browsing, what kind of blew my mind is that the data-driven approach I had been operationalising could also be helpful there. The quantification, awesome, yes, I still want to pursue that. But so cool: I developed this expertise in this kind of approach, in this technology, and it wasn't just an anti-abuse, anti-fraud thing. It could also be useful in kind of the core guts of what information security is. Which is: either the software is broken, it's vulnerable and there are exposures, or the software is bad and it's coming for you.

In any case, I certainly got a good taste of the bad software and the malware. Man, I never thought I would be working on anything that someone might call antivirus. In my life, I never thought that I would be working on that. But with malware, and phishing, and a lot of the things that folks stumble onto on the web, to me, I was back in kind of core InfoSec. I still wasn't spending a ton of my time working on what a lot of folks think of when they think of core InfoSec, which can be boiled down to AppSec: understanding the vulnerabilities in the software and fixing them. But here I was back again. It was kind of a nice homecoming.

Then, the role that I'm just moving into is one where I am working as a technologist in an information security context. We are engineering and building the protections that are being incorporated to protect the organisation and all of our technology. I'm back in the thick of it, and it's a full circle, because I started in IT security. Way back, I started in IT security, then technology risk, then product risk, then anti-fraud, then anti-spam, anti-abuse, blah, blah, blah. Back to security with Safe Browsing, and now I'm back in IT security in an enterprise context for real. But I brought with me all of the data-driven goodness, all of the platform engineering, building things in, and measuring, and that sort of learning-system approach. I'm really excited to see how it's going to play out in an enterprise context.

[0:25:25] Guy Podjarny: In some sense, that's the inverse of artificial intelligence. I mean, if software is trying to behave a little bit more human-like, and it's created by humans, there are probably some attributes there; there are models, and statistics, and data-driven angles that we can use for it. But also, we're now using human techniques to analyse software: the techniques you would use to analyse human misbehaviour, applied to software misbehaviour.

[0:25:53] Allison Miller: I see what you're saying.

[0:25:55] Guy Podjarny: It was like the AI version of –

[0:25:59] Allison Miller: I think I know what you're saying. There's a lot of IT security where, as humans, it's manual to a certain extent, trying to deal with the implications of the software. Versus where I was at, where I was using the software to deal with the bad or malicious humans. You're absolutely right, in that I am full speed ahead trying to figure out how to apply computational power to identifying bad behaviours and solving the problems of security. Anything that can be automated, I want the software to do the analytics. Then, we reserve the hard stuff for the human compute, the analysis. That definitely is infusing how I'm approaching this, for sure.
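One way to picture "software does the analytics, humans do the analysis" is an automated triage step that resolves the clear-cut scores and escalates only the ambiguous middle to an analyst queue; the thresholds below are illustrative, not from any real deployment.

```python
def triage(event_score, auto_block=0.95, auto_clear=0.05):
    """Route events by model score so human time goes to the hard cases.

    Thresholds are illustrative; real values come from measured
    precision/recall on labelled data.
    """
    if event_score >= auto_block:
        return "blocked_automatically"
    if event_score <= auto_clear:
        return "cleared_automatically"
    return "escalated_to_analyst"   # reserve the hard stuff for human compute

for score in (0.99, 0.50, 0.01):
    print(score, triage(score))
```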

[0:26:45] Guy Podjarny: We're talking a lot about going from risk analysis to defence, to an extent. You've also been involved in a lot of education, in the ISC(2) organisation, and you helped create O'Reilly Security. I wanted to touch briefly on this notion of defenders, or defending techniques, in the security world. Do you want to share a few thoughts around how you see the world's evolution in sharing these techniques, your involvement with O'Reilly Security, or similar conferences?

[0:27:13] Allison Miller: Yes. Thank you for mentioning that I was involved with ISC(2). I think that's a good organisation. It's doing a lot of good work to arm our practitioners, if you will, with a baseline set of skills, and to try and connect those folks so that a rising tide can lift all boats. The other piece that you mentioned, O'Reilly – to me, that's also sort of a homecoming. Because back in college, when I was studying things for which there was no major, no corpus of research, I was just kind of hunting around the books with the animal covers that were still in the bookstore, and I probably collected most, if not all, of the yellow series, which were the ones that were security-related. I have the one with the safe on the cover, and the one with the bobbies, the British police, on the cover. I thought of them as the Keystone Cops, but I realised that is not what they were.

The security series from O'Reilly was one of those things that was so exciting to read. When I thought about technology and reference materials, I would always think of O'Reilly. They were really foundational to my self-education: learning Perl, and understanding Unix, and all of those things. A lot of that was self-taught; there were no courses in school to learn those things. Those were things that I was just interested in because I wanted to understand how the underlying technology that I was using worked, and I appreciated it so much.

In the past few years, what I had sort of realised is that O'Reilly was also in the conference space, and some of the conferences that a lot of my friends were excited about and talking about were Strata and Velocity. These were both really interesting events for me, because I was working with data. There was what was happening at Strata, and the data scientists who were talking and presenting there; in fact, I helped with some of the materials that a CTO I worked with ended up using there.

Then, Velocity, with high-performance computing, which was instrumenting everything, optimisations, scaling everything to the high heavens. To me, it seemed as though, if O'Reilly was ever going to get interested in doing a security event, this would be the perfect time for them to do that. Because where data and DevOps were dancing, if you will, security needed some of that. I was very excited to attend or hear about those events.

When O'Reilly did decide they were going to dip their toe in, I thought, how wonderful. I hoped that they would consider how the things they were exploring with these communities, and how those communities work, could drive or infuse a security event. Well, I made a few comments to that effect, and they liked them so much that they brought me on to help lead the security events. Courtney Nash was the person I worked with originally on the concept. The idea was that this was about building, about engineering defences, and about providing defenders with the tools and capabilities they needed. That was the spirit in which we pursued this. I was so happy with how the events turned out. I think that we were able to bring folks in.

One of the things that we talked about is that the folks who are tasked with defending systems and organisations today don't necessarily self-identify as IT security folks. You have folks who are looking at problems with privacy. They're looking at problems with compliance. They are the ones who are building the software, not just the ones who are auditing the software. All of these folks have to be empowered with the right information and the right incentives to build better systems: more defensible systems, and systems with fewer inherent vulnerabilities.

The events themselves went really well. I think that we were able to infuse the conference with the sort of excitement of creativity and collaboration; the idea of sharing how to do things better. It felt to me that a lot of the shows and conferences in this space had been more focused on the narrative of the breaker, the idea that you have to know how an attacker thinks in order to defend against them. I don't disagree with that. But I also think defenders have things they need to do in addition to living with a breaker-over-their-shoulder kind of mentality.

The idea that resilient systems, and optimised systems, and scaled systems – there's kind of an art to that, and an emergent science around that, in and of itself. In addition to the fact that there are attackers on the way, always. It was really refreshing to be able to have conversations with folks who are thinking about doing things differently. Always with that build mindset, as opposed to the constant idea of the defender as the ultimate reactionary. I'm really excited to have been a part of that. I think it's changed the industry a little bit. I see a lot more events trying to hone in on "this is by defenders, for defenders", or making sure that there are SDLC tracks, or more outreach, if you will, to developers. I think it's fantastic. I'm so excited to have been a part of it.

[0:33:19] Guy Podjarny: I fully agree. I like that the journey you're describing with O'Reilly, and maybe the whole ecosystem, is actually somewhat similar to your own journey: going from understanding software, to appreciating risk, to using data to combat that risk, and then bringing all of that back into technology solutions and into the technology partners that do it. I'm a fan, both of the approach and of O'Reilly Security, which I tried to help out a bit as well, and I hope that that sort of vibe of events, oriented around defenders, using data, and using technology to help us build things that are indeed more resilient to attacks, not just to downtime, spreads within those communities.

I appreciate the effort there. I appreciate you sharing the journey. I guess I'll leave you with just one quick question on a pet peeve. I like to ask every guest: if you had one quick word of advice, or a pet peeve on security, something you wish people would do or stop doing, what would that be?

[0:34:21] Allison Miller: For everyday average folks: I like password managers. I'll just say that. I know they're not perfect, but I actually do really like password managers, for multiple reasons. But I think you're really asking that question on behalf of the technologists. I'm not sure if there's a nice DevOps equivalent, a saying that already exists. But I will say that your design is not done if you've only considered the happy path. What I mean is that people are, on average, good, so design assuming good intentions. But always make sure that you look out for the outlier abuse cases and failure cases, instrument them, and make sure that there's a path for them as well.
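A small sketch of what "a path for the unhappy cases" can look like in code: the failure and abuse branches are explicit and instrumented, not just the happy path. The scenario and metric names are invented for illustration.

```python
from collections import Counter

metrics = Counter()  # stand-in for a real metrics client

def redeem_coupon(user_redeemed, code, coupons):
    """Happy path plus explicit, instrumented unhappy paths."""
    coupon = coupons.get(code)
    if coupon is None:
        metrics["coupon.invalid_code"] += 1       # failure case: bad input
        return "That code isn't valid."
    if code in user_redeemed:
        metrics["coupon.repeat_redemption"] += 1  # abuse case: replay
        return "This coupon was already used."
    user_redeemed.add(code)
    metrics["coupon.redeemed"] += 1               # happy path
    return f"Applied {coupon['discount']}% off."

print(redeem_coupon(set(), "SAVE10", {"SAVE10": {"discount": 10}}))
```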

[0:35:13] Guy Podjarny: Very, very valid advice. Ally, thanks a lot for joining us today.

[0:35:18] Allison Miller: Thanks.

[0:35:19] Guy Podjarny: Thanks, everybody, for tuning in. Join us for the next one.

[OUTRO]

[0:35:23] Announcer: That's all we have time for today. If you'd like to come on as a guest on this show, or want us to cover a specific topic, find us on Twitter, @thesecuredev. To learn more about Heavybit, browse to heavybit.com. You can find this podcast and many other great ones, as well as over 100 videos about building developer tooling companies, given by top experts in the field.
