
Season 9, Episode 154

Revolutionizing Coding - The Future Of AI-Driven Development With Jeff Wang

Hosts:
Danny Allan

Guests:
Jeff Wang

Episode Summary

Are you ready to revolutionize your coding experience with cutting-edge AI tools? In this episode of The Secure Developer, host Danny Allan is joined by Jeff Wang, Head of Business at Codeium, to take a deep dive into the transformative power of generative AI in software development. Discover how coding assistants have evolved from simple auto-complete functions to sophisticated AI-driven tools, the significant impact these advancements have had on productivity and innovation, and how Codeium is addressing some of the security challenges they pose. Tuning in, you’ll learn how you can stay ahead in the rapidly changing tech landscape and supercharge your development process.

Show Notes

In this insightful episode of The Secure Developer, host Danny Allan sits down with Jeff Wang from Codeium to explore the rapidly evolving world of AI-powered coding assistants. As organizations increasingly look to harness the power of Generative AI in software development, Jeff provides a comprehensive overview of how these tools transform the coding landscape.

The conversation starts with a journey through the history of coding assistants, from early autocomplete features to today's sophisticated AI-driven tools. Jeff explains how Large Language Models (LLMs) have revolutionized code generation, offering unprecedented levels of accuracy and efficiency. He delves into the various features of modern coding assistants, including chat functions for code understanding and debugging, highlighting how these tools cater to both junior and senior developers.

Security concerns are a key focus of the discussion, with Jeff addressing how Codeium tackles data privacy and protection. He outlines strategies such as air-gapped deployments and local data processing to ensure that sensitive code remains secure. The episode also touches on the challenges of measuring the impact of these tools, with Jeff sharing insights on how companies are quantifying success through metrics like code generation percentage and developer productivity.

Looking to the future, Jeff and Danny explore the potential trajectories of AI in software development. They discuss the possibility of more complex, multi-step AI processes and the integration of AI across the entire software development lifecycle. The conversation concludes with thought-provoking insights on how AI coding assistants are improving productivity and enabling developers and organizations to "dream bigger" and tackle more ambitious projects.

This episode offers listeners a deep dive into the cutting-edge world of AI-assisted coding, providing valuable insights for developers, technology leaders, and anyone interested in the future of software development. Tune in to understand how these tools reshape the industry and why they're becoming essential to modern development practices.


Jeff Wang: “A lot of people have different styles of coding. For example, if you're a senior developer, you probably just sit down and start writing code. But if you're a junior developer, it seems that junior developers use the chat function more often. That's either to understand code, to debug code, or even to just generate new code from chat. Like say, ‘Hey, I'm trying to write this function. Can you just give me the code for it?’ These coding assistants spit out the code. It's actually quite amazing.”

[INTRODUCTION]

[0:00:26] Guy Podjarny: You are listening to The Secure Developer, where we speak to industry leaders and experts about the past, present, and future of DevSecOps and AI security. We aim to help you bring developers and security together to build secure applications while moving fast and having fun.

This podcast is brought to you by Snyk. Snyk’s developer security platform helps developers build secure applications without slowing down. Snyk makes it easy to find and fix vulnerabilities in code, open-source dependencies, containers, and infrastructure as code, all while providing actionable security insights and administration capabilities. To learn more, visit snyk.io/tsd.

[EPISODE]

[0:01:05] Danny Allan: Welcome, everyone, to another episode of The Secure Developer. We're so glad that you're with us today. We have a really exciting topic today, and that is coding assistants, something that I hear coming up all the time. So, I thought it would be excellent to bring someone from one of our partners, Codeium, Jeff Wang, with us today. Jeff, maybe you can introduce yourself to the folks in the audience.

[0:01:27] Jeff Wang: Hey, everyone. My name is Jeff. Nice meeting you all. I lead business at Codeium, and it's been a crazy ride for the last year, as we'll probably discuss. But thanks for having me, Danny.

[0:01:37] Danny Allan: Well, it's great to have you, Jeff. I have to say, last week, for example, I was at five different customer sites. At all five customers, every single one of them was talking about coding assistants. So, clearly, there is something happening in the industry around Generative AI and coding assistants. But maybe you could just start by giving the audience a history lesson on coding assistants. What are they? How did they come to be? How did we get to where we are today?

[0:02:04] Jeff Wang: Yes. I think coding assistants are just another tool to help developers either code faster or even reference things that they weren't even aware of. If you want to talk about the history, I think it goes all the way back to maybe even like IntelliSense, which is maybe like 25, 30 years ago when it first popped up in IDEs. That was like giving you an auto-complete for maybe other parameters or variable names or things you've written elsewhere in the code.

I think where things really shifted was back in like 2017, 2018, when IntelliCode and Tabnine and other kinds of machine-learning-type auto-completes came out. Actually, you probably noticed around that time your text messages and your emails started to have some auto-complete, but it was pretty bad, right? I think 2021 is where there was a major game changer, which was GitHub Copilot. GitHub Copilot was using LLMs and, you know, the transformer model to do auto-complete. As you know, like with GPT and ChatGPT, auto-complete with LLMs is fantastic, right? Since then, a new coding assistant has been sprouting up every few months. We have Codeium, of course, in 2022, and then I think Amazon and Google in 2023. Everybody is starting to try to come up with their own version of a coding assistant. Now, obviously, it's not just auto-complete, but a whole bunch of other features as well.

[0:03:22] Danny Allan: What are those other features? When I think of it, I also think of auto-complete. I start typing a character and it says, “Hey, he's going to write this particular function,” and it completes it for me. But what are the other key capabilities that they're introducing now within the tools?

[0:03:35] Jeff Wang: Yes. A lot of people have different styles of coding. For example, if you're a senior developer, you probably just sit down and start writing code. But if you're a junior developer, it seems that junior developers use a chat function more often. That's either to understand code, to debug code, or even to just generate new code from chat. Like say, “Hey, I'm trying to write this function. Can you just give me the code for it?” These coding assistants spit out the code. It's actually quite amazing.

There are obviously a bunch of other ways to do this. You can, for example, highlight code and ask the coding assistant to modify it. Then, you could stream in changes as well. But basically, the underlying technology is the same. They're all using LLMs behind the scenes, predicting the next token that is relevant to your code. I think most people don't know this, but coding is a great use case for LLMs, because in English or other natural languages, there's a lot of variation in how the tokens can relate to each other. But code is code, right? It must run. It is very structured, and there's a lot of public data to train on as well. So, coding has been just a fantastic use case for LLMs.
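To make that next-token idea concrete, here is a minimal sketch using the Hugging Face transformers library. The model name and the prefix are illustrative assumptions for a generic completion loop, not how Codeium itself is implemented:

```python
# Minimal sketch of code completion as next-token prediction.
# The model name is illustrative; any causal code model would work similarly.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")

# The prefix is whatever the developer has typed so far.
prefix = "def is_prime(n: int) -> bool:\n    "
inputs = tokenizer(prefix, return_tensors="pt")

# Greedily append the most likely next tokens: the "predict the next
# relevant token" loop described above.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```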

[0:04:43] Danny Allan: So, the LLMs that are being used, they're being fed code. Does the model you're using matter in determining what is suggested back to the developer?

[0:04:54] Jeff Wang: Yes. Absolutely. I mean, there are many ways to make the quality better from LLMs. One of them, of course, is just putting more code into the model and increasing the model size, right? There are ways to fine-tune the model to make it very specific to auto-complete outputs or even chat outputs, right? Or even fine-tune it to make it very code-biased, right? Then, there are ways to construct prompts that are more useful, like having more context in the prompts. The better the retrieval and the better the snippets you can put into the prompt, the much better the results. All three of these vectors are ways that these coding assistants are trying to get better. I think what we've found to be the most useful right now is actually the context. Getting the prompt exactly right gets you much more accurate results. But obviously, retrieval is a tricky part in that strategy.
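As a rough illustration of that context vector, a retrieval step might rank indexed snippets against what the developer is typing and prepend the best matches to the prompt. This is a hypothetical sketch; `embed` and the snippet index are stand-ins, not Codeium's actual API:

```python
# Hypothetical sketch of context-aware prompt construction: rank indexed
# snippets by similarity to the code being written, prepend the best ones.
from typing import Callable, List, Tuple

Vector = List[float]

def cosine(a: Vector, b: Vector) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def build_prompt(prefix: str,
                 index: List[Tuple[str, Vector]],   # (snippet, embedding)
                 embed: Callable[[str], Vector],    # any embedding model
                 top_k: int = 3) -> str:
    query = embed(prefix)
    ranked = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    context = "\n\n".join(snippet for snippet, _ in ranked[:top_k])
    # Better retrieval means better snippets in the prompt, and better results.
    return f"# Relevant snippets from this repository:\n{context}\n\n{prefix}"
```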

[0:05:48] Danny Allan: How do you handle the CISO? I come out of the security industry. This is The Secure Developer. How do you handle the chief information security officer that says, “Hey, I don't want my code contributing to these LLMs,” or maybe the reverse of that? “I want it to be using my secure libraries that are internal. I don't want it giving me something generic.” Are there ways to address the security concerns that come along with these LLMs?

[0:06:12] Jeff Wang: Yes. There are actually several things we do, at least on the Codeium side. These are not easy things, either. One is to have a deployment option that is completely air-gapped. This obviously has a lot of trade-offs. One of them is that the customer needs to find the GPUs to host the LLM on. The other is there's a lot more deployment work on the customer side, and sometimes Codeium needs to step in. It's not as trivial as flipping on an account on SaaS, right?

But the benefit of this is that none of their data is being sent anywhere. All the inference, all the fine-tuning, all the context is just getting assembled on the server itself. They don't have to worry about any of their data going to some other source without knowing what's being done with it.

The other thing, though, is we have some tricks as to where the flow of data is going. For example, when we set up context, we map your entire code base into an index and then find what's relevant. We actually do that locally on the user's machine. So, on the cloud version of Codeium, we don't even store any of the user's code, but we still have very powerful retrieval of the relevant code snippets anywhere in the code base. In fact, even on an on-prem instance, they could have multiple repos that they want to retrieve data from, and they can do that without any worry that the code is somehow getting sent somewhere, either getting trained on in another data set, or even just stored on some hacker's machine, right?
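A minimal sketch of that data flow, assuming a simple line-based chunking scheme (the chunking and the request shape are invented for illustration): the repository is indexed on the developer's machine, and only the assembled prompt ever leaves it.

```python
# Hypothetical sketch of local indexing: the code base is chunked and kept
# on the developer's machine; only the final prompt is sent for inference.
import pathlib

def index_repo_locally(root: str, chunk_lines: int = 30) -> list[str]:
    chunks = []
    for path in pathlib.Path(root).rglob("*.py"):
        lines = path.read_text(errors="ignore").splitlines()
        for i in range(0, len(lines), chunk_lines):
            chunks.append("\n".join(lines[i:i + chunk_lines]))
    return chunks  # stays local; the raw repository is never uploaded

# Combined with build_prompt above, a request to the inference server would
# carry only the selected context, not the code base itself:
# request_body = {"prompt": build_prompt(prefix, local_index, embed)}
```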

I think these strategies have had trade-offs, of course, like a lot more work on our side. Deployments have obviously not been as trivial as SaaS self-serve, but it has paid off. I think, in terms of security, we've been very serious about how we can fit into any organization.

[0:07:54] Danny Allan: Well, that is amazing. Clearly, security is a big concern, but ensuring that the data is air-gapped, that it's tuned for that specific customer, and that it's not being shared is obviously a huge value. I can see why that has resulted in the growth of Codeium and what you're doing over there. Do you expect the models will evolve over time? Or do you think the models are what they are based on where we are today?

[0:08:17] Jeff Wang: Definitely. There are many strategies that take effort, I would say. One of them, of course, is increasing the model size. Of course, you need a lot more training compute. You also need more training time. Also, you need more data. So, there's a lot of sacrifice, of course. You make a bigger model, you get better results. But then people don't realize there's also the latency aspect. If you train a bigger model and people are using it for auto-complete, they sometimes don't want to wait more than maybe 100 milliseconds, or a couple hundred milliseconds at the most. We do have competitors in the space that have rolled out models that take over one second. What happens is the developer is typing code, and they have to pause and wait to see if a result pops up. That is not something a developer wants. In fact, we find that most people just uninstall those kinds of coding assistants.

That means that either you have to go with a very small model, or you have very, very high-tech stuff going on on the infrastructure side, which is what we've had to do. I can name at least a dozen different projects we had that were just saving like 10 milliseconds here, or 20 milliseconds there, while increasing the model size.

So, it's a constant battle between increasing model size and then doing a bunch of tricks on the infrastructure side to keep the latency great. Of course, you can add a lot of other variables in here. You can say, “Oh, can we fine-tune it for even more specific purposes? Can we increase the context length, for example?” But that makes it even harder. It increases the training window significantly, and it can increase the latency as well.

Everything we do, there's always a trade-off. But then, of course, we have to make sure that our customers are happy. One of the benefits of Codeium is we have hundreds and hundreds of thousands of users on the cloud assistant, so we can just try a new model out. If it's taking even a few milliseconds more, we see that they're not accepting as many tokens. At least we can measure these things, and then roll the model out to the enterprise when we're comfortable.
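A simplified sketch of that rollout check, assuming per-completion telemetry with latency and accepted-token counts (the event schema here is invented for illustration):

```python
# Hypothetical sketch of the rollout check: compare a candidate model's
# completion latency and token-acceptance rate against the current baseline.
def summarize(events: list[dict]) -> dict:
    # Each event describes one completion shown to a user, e.g.
    # {"latency_ms": 87, "tokens_shown": 12, "tokens_accepted": 9}
    latencies = sorted(e["latency_ms"] for e in events)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    shown = sum(e["tokens_shown"] for e in events)
    accepted = sum(e["tokens_accepted"] for e in events)
    return {"p95_latency_ms": p95, "accept_rate": accepted / max(1, shown)}

def safe_to_roll_out(baseline: dict, candidate: dict) -> bool:
    # Promote only if the new model is no slower and users accept
    # at least as many of its suggested tokens.
    return (candidate["p95_latency_ms"] <= baseline["p95_latency_ms"]
            and candidate["accept_rate"] >= baseline["accept_rate"])
```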

[0:10:14] Danny Allan: Makes total sense. I can see why the speed matters, the latency more than anything else. It's funny, when I first approached this, my thought was, “Hey, you can have these AI coding assistants generate code and then Snyk can check it on the security side.”

But actually, what I've come to believe is that that will never happen, and that you'll have to curate the models before the code is ever suggested, because of that latency aspect, or figure out other ways on the security side. So, that's definitely interesting.

If you look at the explosion of languages, do the models that you use translate to new languages as organizations adopt them, moving from JavaScript to Java, or probably Java to JavaScript, or Python, or whatever they're doing? How does that work?

[0:10:54] Jeff Wang: You know what's really interesting? If you train a model only on one language, let's say you have a model trained only on Java, and then you create another model trained only on Python, and then you create a third model that's trained on both Java and Python, the model that's trained on both languages will outperform the models trained on only one language.

Isn't that crazy? Somehow, with more data, it has more reasoning and more data showing how these things interconnect. What is interesting about that is, if a new language pops up, at least if it shares a lot of the same traits as other languages, the model should actually still be very good. Obviously, there are some very obscure languages out there that are either domain-specific, or there's just not a lot of data, or they don't behave as other languages do. Of course, those will not do as well. But for the most part, if there is an updated language or a new version of a language, as long as there's some way to tell that that's the version you're going for, it should actually perform very well.

But I think some of these things that LLMs do, probably most people would not guess, like having a mixture of data being more powerful than having a single language in a model, right? So, I guess my answer is, yes, obviously, when a new language appears, the model is outdated. Most of these LLMs are not retrained even every year, right? We try to do it every quarter. But even then, I think the most important thing is, once the model has enough data and it knows the objective, it can actually still give you very good results.

[0:12:24] Danny Allan: Yes. One of the things that is interesting is we've been doing some testing internally here, not with Generative AI on the code-generation side, but on the fix side, because we use Generative AI for suggesting fixes to customers. One of the things that we realized is that putting in more code with newer models sometimes came up with worse results than smaller models that were more tightly tuned.

I just point that out because there are these interesting things occurring, and I would never have guessed, for example, that multiple languages give you more accuracy. But I can see why that would be the case. Do you think we're ever going to get to the point where Generative AI will help with translation of languages? That it will be able to say, for example, “This is a better way of doing it. Don't do this in Perl, do this in,” I'm really dating myself there, “don't do this in Perl, do this in Python”? That it will help in translating or suggesting languages?

[0:13:19] Jeff Wang: So, you're typing code and then it just stops you and says, “Hey, before you continue, you should do it in another language.” I think we're not actually there yet, just because the behavior of coding assistants is very passive, right? Most people have an objective. They start typing code. The AI is usually helping them on the way. They're very goal-focused. “I have this function I'm about to write, and maybe I'll use the chat to help me.”

I think what you're referring to is kind of the reverse, which is, looking at what you're doing holistically, what are some tips that I can give you as an AI, right? I don't think there is a feature like that today, but it's something that is passive yet actively trying to change the strategy of what you're doing. I think that should exist, because I think the capabilities are there.

So, for example, in your Perl and Python example, maybe behind the scenes it's crunching some numbers. “Hey, the objective he's working on is going to take a thousand lines of code in one language, but only 300 in another. So, maybe I'll just suggest that other language to the user.” But what I'm saying is, that's a good idea, and maybe we should do that.

[0:14:25] Danny Allan: Well, there are certainly lots of AI-type scenarios that can be deployed here. It's actually an interesting thing to think about, whether the coding assistants stay coding assistants or whether they grow in scope. Do you have thoughts on that? Are these standalone tools that are being built, or do you see them as broader platforms that engineering is going to use in the future?

[0:14:47] Jeff Wang: I think one thing we've noticed is what happens if you 10x the productivity on one step of the life cycle. Let's say we're talking about coding only right now, and we 10x every developer on the coding step. Then you realize, “Oh, wait, now in the design phase or the product requirements phase, there's not enough stuff going in,” which is a great problem to have, I suppose, because it's the opposite today. But that means, in the future, you need more stuff going in from the actual features, and maybe tech debt or other things that need to go into the pipe.

On the flip side, you're also 10x-ing the amount of stuff that needs to go through code review, unit tests, and security checks, right? What we are very cognizant about is, okay, now that we've 10x-ed the developer for coding, there should be a product in the code review step. There should be a product in the design step. Because we want to 10x the entire workflow. We don't want to create bottlenecks. It's great to make one existing bottleneck smaller, of course. But if we end up creating bottlenecks in the rest of the software development lifecycle, we want to make sure we have a product or a solution there that can help address that as well. I think that's the first answer.

The second, though, is, yes, these capabilities are only going to be more diverse. One of them could be, for example, multimodal. Hey, I can actually read a diagram and generate code. Or even vice versa. I can create documentation from looking at the code or I can look at a website and just give you the code or a mock-up.

The other is the multistep LLM. You've heard of agentic LLMs, or LLMs with better reasoning, where it generates a PR from a ticket. Obviously, that is the dream state. Obviously, a lot of these companies have stuff in their Twitter demos showing that they can do it. But realistically, we're very far out from that.

We've tried experimenting a lot with this. I think our measure of success is, if we are able to deploy it to tens of thousands of developers in a single org and they don't complain, that would be the right state. But I think we're not there yet. And I think a lot of the people posting these Twitter demos of multistep agentic LLMs don't realize that you cannot go enterprise with that kind of solution in its state today. So, yes, I went multiple directions with that answer, but hopefully there's something in there.

[0:17:01] Danny Allan: No, that definitely helps. If you're talking to these big customers, like I was saying last week, five different customers who are looking at this, what is their measurement of success typically? Is it time to roll out? Is it productivity? Is it quantity of code? What are they measuring in terms of success of these AI tools?

[0:17:21] Jeff Wang: We've talked to so many customers now, and the interesting thing is there are a lot of different measures of success. I wouldn't have expected that. For me, I would have expected, “Oh, wow. You have this many features you're trying to build and you could build them faster.” To some extent, that is one of the main things we hear: if I have a $100 million R&D budget and I can do $120 million worth of work now, then I have saved, or I've made, $20 million on this investment.
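The arithmetic behind that framing, worked through:

```python
# Worked version of the budget example above: a 20% productivity lift on a
# $100M R&D budget is $20M of extra output for the same spend.
budget = 100_000_000
productivity_multiplier = 1.2
extra_output = budget * productivity_multiplier - budget
print(f"${extra_output:,.0f}")  # $20,000,000
```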

But there are a lot of other things. For example, onboarding engineers. If I hire 100 engineers, it takes me six months to onboard them. We are hearing from some customers that it now takes like three to six weeks to onboard with tools like Codeium. There are things like tech debt. We are hearing some companies have like an $80 million budget to handle tech debt, and they have to hire the people that know some of the code that is in the stack. These tools can actually address that and make it faster. So, you could argue there's like a $40 million return there, or something like that.

We've heard companies come up with some other use cases recently, too. There's a whole bunch of things in the unit testing space, the debugging space. I think what we're trying to settle on is the percent of code written. So, if a company has X amount of lines written by Codeium, then I think we are successful if we can get even a third of the code, or even half of the code, written by AI. That is a good measure of success for us. For the company, it can be decreased PR cycle times. It can be much faster ticket closures. It can be decreased time spent on legacy code or tech debt. There's a lot of things we hear depending on the type of team that we talk to.

[0:18:59] Danny Allan: This is actually something that we've been looking at at Snyk. How do you measure the code that is actually created by the coding assistant? Is it by tokens generated, or is there some way of actually looking at the code and knowing, hey, this was generated by the LLM, by the tool? How do you deal with modifications? How do you measure that?

[0:19:20] Jeff Wang: Yes. Behind the scenes, we keep track of every token, or every character, that's typed. If there's a character that's modified, then we deduct that from ours. So, if they type their own tokens or they delete some of our suggestions, that doesn't count. We are passing so many snippets and so many different things to construct the prompt that we can keep track of everything. Then, ultimately, what they save, or what they actually push to the code base, on average ends up being like 44% to 45% of all the code that's written, which is really insane if you think about it. That's the average.

We're being conservative on that number, too, because we don't count copy and paste. We don't even count the chat insertions. The chat insertions could be a lot of comments, or they could even be net new code. Those are huge blocks of code getting pushed into the text, and we don't count that. So, I think, conservatively, we're probably writing around half of all code after getting deployed.
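A minimal sketch of that attribution bookkeeping, with invented numbers that land in the ballpark Jeff cites:

```python
# Hypothetical sketch of the attribution bookkeeping described above: count
# characters the assistant suggested, subtract what the developer edits or
# deletes, and report the share of committed code the tool actually wrote.
def percent_code_written(suggested_chars: int, reverted_chars: int,
                         total_committed_chars: int) -> float:
    kept = max(0, suggested_chars - reverted_chars)
    return 100.0 * kept / max(1, total_committed_chars)

# e.g. 5,200 suggested chars, 400 edited away, in a 10,600-char commit:
print(percent_code_written(5_200, 400, 10_600))  # ~45.3, near the 44-45% figure
```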

[0:20:16] Danny Allan: That's amazing. It's the new Stack Overflow. Actually, that's consistent, Jeff. I was speaking with a customer at RSA, and they said they were seeing about a 45% to 48% productivity increase in their developers. Now, I know developers aren't coding 100% of the time, whatever it is, 30% or so. What is the percentage of time that a developer actually spends coding? Do you know that? I don't know it offhand.

[0:20:38] Jeff Wang: It actually varies. It varies with the type of developer, which is interesting. Senior developers tend to either write more code or are helping other developers understand code or helping them code. What we've noticed is that, when we roll out tools like this, junior developers can use the chat more often to help understand the code. It frees up those 10x engineers so they don't have to spend their time helping other developers on smaller tasks. It actually helps them utilize the 10x factor of those engineers. But yes, it does vary. There are some data scientists that use the tool, and they probably don't use it as much as a senior developer. There are some DevOps engineers that spend more time just looking through code to see where a bug or an issue could be. So, it does vary. It's kind of hard to pin down.

But we're very conscious about this, too. If we write half your code, we don't know where the extra time goes, either. We don't know if the organization wants to spend more time on security, or testing, or design. It could be any of those things. It could be that the engineer just goes and watches Netflix for the rest of the day. We don't know, but at least we opened up the possibility. What we also hope for is that we've wiped away all the mundane tasks, the boilerplate. We don't want the engineer to think, "Oh, no. Here's the part of the day I hate, which is writing unit tests." We want the engineer to be like, "All right, I'm so glad that these coding assistants got all the stuff that I don't like out of the way. Now I can focus on all the stuff that I really like to do."

I think quality of life and happiness definitely should go up with these tools. In fact, I think it'll be hard to hire some of the best engineers in the world without having these tools. It'll be like going to a company where they only have typewriters. It's like, "Okay, I don't want to go to that company." Or maybe it's like an Uber app, but they don't give you the GPS. I'm going to hire a bunch of Uber drivers, but I won't give you GPS. It's going to be like that.

[0:22:36] Danny Allan: Yes, we're already seeing it now. I can think of one Snyk customer, actually, where it was a carrot. The developers were all asking for it, and the company said, "Look, we'll give you this if you do this." So, you're starting to see that now, where developers are not only asking for these tools but demanding them. How do you handle the diversity of developers? Because I've yet to go into an organization where everyone is using Visual Studio or VSCode or IntelliJ. Developers like to use their own things. How do you manage that?

[0:23:10] Jeff Wang: This was one of the key strategies we had a while back, which was, we should be covering as much surface area as possible. Meaning, we should be on as many IDEs as possible, and we should support as many languages as possible. In fact, even for the SCM, we should be making sure that we work with all SCMs equally. If you look at, for example, GitHub Copilot, a lot of their best features are only on the GitHub SaaS product. For us, we don't want to discriminate against any of these SCMs. We want to make sure if you're on Bitbucket, or GitLab, or GitHub, you get the same experience.

I think, to your point, yes, developers don't like to switch off their IDE. We have customers that are still using Sublime Text and Vim. We know that the VSCode extension is so powerful. This is where I think the strategy has paid off: when you deploy to organizations with tens of thousands of developers, there are going to be a lot of developers that are just not going to switch what they're working in. You want to meet developers where they're comfortable.

There are some other coding assistants that have gone the opposite strategy, which is, "Hey, we're going to create the IDE for you. We're going to have a web IDE that you must use." Honestly, good luck switching tens of thousands of developers all to the same IDE. It's just not going to happen. So, that strategy has paid off. Meeting developers where they are, making sure we support as many languages as possible, and making sure that we support all the SCMs equally has really made deployment and adoption easier for these organizations.

[0:24:41] Danny Allan: It's a brilliant strategy. I know for myself, I used to use UltraEdit, but I've switched to VSCode, and it's a very personal decision on my part. I know the tool that I like. I know the way I want it configured. I don't want someone coming in and telling me, "Hey, you have to use Eclipse because we do Java here," or whatever it happens to be. So, I think it's a smart strategy. We have something similar at Snyk. Where do you see the industry going? Obviously, Gen AI is all over the place and you're generating code. Where does this go?

[0:25:10] Jeff Wang: Yes. One thing we mentioned earlier is multistep LLMs and having a human out of the loop. Basically, an issue comes in, and then maybe it fixes itself. Or maybe there is a product that people want to build, and at the end, it just outputs the PR that you approve. Then, all of a sudden, the web page is up, and it has all the features that you just asked for. I think that's where the end state is going to be. But for us, we know that it's never going to happen without a human in the loop and a lot of iteration.

I think people have that dream scenario, and they post those Twitter demos about it, and then it seems like everybody has it all of a sudden. But we have to live in reality, and we have to live in a world where people will actually purchase this product and get value from it. It's going to seem like we might, not be behind, but like we're moving more incrementally, I guess. But those incremental steps actually add up to the end state. It's not obvious until you see maybe three of these incremental steps happen. So, when we deploy to customers, we want to make sure we are showing these roadmap ideas, and then showing the incremental steps.

But it is getting there. It's getting to the point where you're unlocking a lot more capabilities from the tool to get to the end state faster. It's getting to the point where we're going to be on other parts of the life cycle. We're going to have these ideas, where, for example, in code review, you can scan the code and say, "Hey, here are your coding standards, in your natural language, in your playbook, and we found these different things that we're going to flag." But we still want a human in the loop to go fix them.

I think what is going to be interesting is that the gap between open-source and closed-source models is going to keep closing. For example, by the time this goes out, Llama 405B is going to come out. We think that model is going to be very close to GPT-4 today. Obviously, OpenAI can release GPT-5 anytime and maybe blow it out, increase the gap again. But my point is, even these on-prem deployments are going to be almost as capable as anything you can find on the cloud. I think that's going to unlock a lot of things internally, too.

For example, retrieval. If we can use these bigger models and more compute to really solve the needle-in-the-haystack problem, to solve getting all the relevant snippets into better prompts, then even the quality of the prompts is going to get way, way better. So, I'm going off on multiple tangents here, but the summary is that there'll be bigger models that are higher quality, there'll be multiple steps taken care of, and there'll be multimodal, meaning not just text, but other types of inputs. Honestly, what we care about is for developers to just dream bigger. Whatever a developer is capable of today, where a junior developer can only do X, Y, and Z? No, no, no. With these tools, they should be able to do far more. Then, even as an organization, we hope that they can dream bigger, too.

We want an organization to say, now that we've implemented Generative AI, what we thought we could do next quarter actually has no ceiling. We can do so much more because we've implemented these solutions. I hope that message is clear when people listen to this, or whenever they use these Generative AI tools: if implemented properly, it can allow you to really just dream bigger.

[0:28:32] Danny Allan: Yes. So, is it safe to say, Jeff, that you don't believe the number of developers is going down? What this is really doing is just enabling more productivity, more innovation, more technology for the organization?

[0:28:43] Jeff Wang: It's going to be two things, actually. So yes, in history, there's never been a moment where technology increases efficiency and there end up being fewer employees, or fewer workers. I think what's going to happen is that different types of employees can now do coding tasks. So, you'll have business analysts that are semi-technical that can actually do fairly technical things that a developer used to do. Then, you'll have much more output on things that people actually want to do. So, instead of doing a lot of unit testing, or a lot of debugging, or only working on tech debt, I think everybody will be working on net new features. I think that's what everybody wants to do, and that's what the company wants to do.

I think the companies that are not using Generative AI are going to be kind of nerfed. They're going to be at maybe half-speed compared to everybody else. I think we're going to end up in a competitive landscape where it's like, "Oh, if this company is not using Generative AI, why do we even bother investing in them?" Right? Their capabilities are going to be so much worse than all these other companies that are using Generative AI. It's not obvious yet. I think it's maybe another year before that starts showing up in earnings reports and even competitive analysis, but it should be on everybody's radar today.

[0:29:55] Danny Allan: Yes. I'm absolutely thinking of this now, and I would argue, certainly in the technology space and in the financial services space, that it is happening, that people are looking at this, because it has impact both internally. To your point, developers are happier, more productive, they can watch Netflix in the afternoon when they finish their coding. I'm being sarcastic, but they're more productive and happier, and there's better retention of employees internally. But also, on the outbound side of it, they're creating new features better and faster than all the other companies. So, it's a competitive advantage in my mind to be leveraging these types of capabilities.

[0:30:32] Jeff Wang: Definitely. I think it's not obvious today. I think everybody wants to believe it's obvious. Some Generative AI solutions are just not that consistent in ROI. Let's say you deploy ChatGPT to your entire organization. You're probably going to get different results for every organization, because it depends on how good they are at training their employees to use it best. But with coding, it's a little different, because it's a passive AI. You deploy it on everybody's machines, and it's always trying to help them. Obviously, for some people, that might get annoying at first. But it is always trying to help them. That's why we see pretty consistent value in ROI: it's always autocompleting what you're trying to write. If you hit an error, Codeium will say, "This is what we think the error is."

That is the difference. These organizations will eventually have a playbook of the Generative AI tools that are providing value. Then, if you don't have these, good luck, because everybody else is going to get them.

[0:31:26] Danny Allan: Yes. I have no doubt that it's going to impact every single employee and every single company. I'd say it's the biggest change that has happened in the last 20 years, and probably more, for the industry as a whole. I usually end, Jeff, by asking the participants in the podcast a question, which is, "If you could take AI and have it help one area of your business," and this is more for the people who are always looking to start companies. But if you could use AI to help you be more productive in one area of your job, or career, or whatever, what would it be? Where do you think AI would have the most value to you as an individual?

[0:32:02] Jeff Wang: Well, for me, it's very different. I have a very wide variety of things I have to do throughout the day. I think an AI that can prioritize those things and really give me the best path forward is probably the most useful for me, specifically. So, for example, if I'm handling, let's say, five different projects, each one of them has a lot of things I have to figure out. What's the most valuable? What's the best ROI for me at that moment? If I had an AI to say, "Hey, Jeff, this project you're working on is going to affect the company's bottom line the most and requires the least amount of effort, so you should tackle this first," that would be very useful to me. I don't think there is such a thing that is looking at stuff outside of coding, or even outside of your calendar or email, and really analyzing that. I think that would be very good for me personally.

For a developer, specifically, though, I think it's reasoning getting better in these models. Meaning, it knows exactly what your intent is, it knows exactly the relationship between all of the files and the platforms that you're using, it knows how they link together, and it's able to give you a good plan for how to solve those things. If I was a developer, that's what I would hope gets better: the reasoning. So yes, a couple of answers there. For me personally, better prioritization, looking at things holistically, and then telling me what has the best ROI with the lowest amount of effort.

[0:33:21] Danny Allan: Well, you and I think alike, because I'm the exact same way. Prioritization is always an issue, and you have so many things going on. It's a lot of vowels, but I guess it's an AIEA, an artificially intelligent executive assistant. I don't know. Anyway, we only have 168 hours in the week, so where you actually spend those hours is a pretty critical question.

Anyway, I love what you're doing over there at Codeium. Thank you for joining us today to talk about coding assistants. Like I say, we have a number of customers within our portfolio, and every single one of them is looking at this. I do think it's going to change the entire industry. So, it's been fantastic to have you join us.

[0:33:56] Jeff Wang: Yes, thanks for having me.

[0:33:58] Danny Allan: All right. Thanks, Jeff, and thank you everyone for joining us today. We look forward to having you join us for the next episode of The Secure Developer. Thank you.

[END OF INTERVIEW]

[0:34:09] Guy Podjarny: Thanks for tuning in to The Secure Developer, brought to you by Snyk. We hope this episode gave you new insights and strategies to help you champion security in your organization. If you like these conversations, please leave us a review on iTunes, Spotify, or wherever you get your podcasts, and share the episode with fellow security leaders who might benefit from our discussions. We'd love to hear your recommendations for future guests, topics, or any feedback you might have to help us get better. Please contact us by connecting with us on LinkedIn under our Snyk account or by emailing us at thesecuredev@snyk.io. That's it for now. I hope you join us for the next one.
