Episode Summary
Machine learning has been around for decades, but as it evolves rapidly, the need for robust security grows ever more urgent. Today on The Secure Developer, co-founder and CEO of Mindgard, Dr. Peter Garraghan, joins us to discuss his take on the future of AI. Tuning in, you’ll hear all about Peter’s background and career, his thoughts on deep neural networks, where we stand in the evolution of machine learning, and much more! We delve into why he chooses to focus on security in deep neural networks before he shares how he approaches security testing. We also discuss attacks on large language models and why security is the responsibility of everyone within an AI organization. Finally, our guest shares what excites him and what scares him about the future of AI.
Show Notes
In this episode of The Secure Developer, host Danny Allan welcomes Dr. Peter Garraghan, CEO and CTO of Mindgard, a company specializing in AI red teaming. He is also a chair professor of computer science at Lancaster University, where his research focuses on the security of AI systems.
Dr. Garraghan discusses the unique challenges of securing AI systems, which he began researching over a decade ago, even before the popularization of the transformer architecture. He explains that traditional security tools often fail against deep neural networks because the networks themselves are inherently random and opaque, with no code to unravel for semantic meaning. He notes that AI, like any other software, carries risks: technical, economic, and societal.
The conversation delves into the evolution of AI, from early concepts of artificial neural networks to the transformer architecture that underpins large language models (LLMs) today. Dr. Garraghan likens the current state of AI adoption to a "great sieve theory," where many use cases are explored, but only a few highly valuable ones will remain and become ubiquitous. He points to useful applications such as coding assistance, document summarization, and translation.
The discussion also explores how attacks on AI are analogous to traditional cybersecurity attacks, with prompt injection being similar to SQL injection. He emphasizes that a key difference is that AI can be socially engineered to reveal information, which is a new attack vector. The episode concludes with a look at the future of AI security, including the emergence of AI security engineers and the importance of everyone in an organization being responsible for security. Dr. Garraghan shares his biggest fear, the anthropomorphization of AI, as well as his greatest source of optimism: the emergence of exciting and useful new applications.
Links