Natalia Burina, a product leader with deep expertise in AI, joins this episode to discuss Generative AI and its potential impact on product managers. The conversation with Natalia, a former leader at Meta, Salesforce, Samsung, eBay, and Microsoft, weaves through many important focus areas, including:
The potential and limitations of generative AI.
Generative AI and its potential impact on business.
AI technology and its applications in product development.
AI technology risks and challenges in product management.
Cultivating psychological safety for AI teams.
Generative AI, product management, and leadership.
CONNECT & FURTHER RESOURCES:
CONNECT WITH NATALIA:
Twitter (X): Natalia Burina (@Nale)
AI Canon from Andreessen Horowitz
Welcome to Product Voices, a podcast where we share valuable insights and useful resources to help us all be great in product management. Visit the show's website to access the resources discussed on the show, find more information on our fabulous guests, or to submit your product management question to be answered on our special Q&A episodes. That's all at productvoices.com. And be sure to subscribe to the podcast on your favorite platform. Now, here's our host, JJ Rorie, CEO of Great Product Management.
Hello, and welcome to Product Voices. Today we're going to be talking about generative AI for product managers. Wow, I can think of very few more topical conversations than that. It's everywhere, it's so important, it's so exciting, and I really couldn't have a better guest to help me dive into this and teach us about what's going on in this world. I've got Natalia Burina with me. She is a product leader with deep expertise in AI. At Meta AI, she managed a portfolio of AI products and technologies meeting the needs of billions of users and internal customers, powering products such as ads ("Why am I seeing this ad?"), Feed, video classification, video integrity, and AR/VR features like a smart keyboard. Such cool things, and many others. In fact, she was the leader of the team that built the recently released AI System Cards. So awesome, awesome stuff at Meta. She was recognized by Business Insider as one of the most powerful female engineers of 2017. Wow, what an honor. Prior to that, she was a director of product for machine learning at Salesforce, where she led teams building all kinds of AI capabilities and platform services. She's led product development at Samsung, eBay, and Microsoft Bing. She was also the founder and CEO of Parable, a creative photo network bought by Samsung in 2015. She is just an absolute superstar, and I could not be more excited to have her with me. Natalia, thank you so much for joining us.
Thank you for having me, JJ, and super excited to be here today.
Yeah, it's so exciting to talk to you, and about this topic. It's just, wow, it's gonna be great. So we've been hearing about AI for a while, and you've obviously been working in this space for several years. Tell me why you think it's blowing up the way it is right now. What's going on in this space that's making it even bigger than what we've seen over the last six, seven years?
Yeah, yeah. So there are a few things going on, but mostly I'll say it started with a paper that came out of Google in 2017 called "Attention Is All You Need," which introduced transformer models. The big breakthrough with that paper is that models can track relationships between words in a sentence using this attention mechanism. And secondly, the transformer can be trained significantly faster. So that's one reason. The second reason is that the models we have today, the foundation models, simply suck up literally all the data from the internet, and the kind of scale and compute power that takes is something we've not had available before. And thirdly, we've seen some UX improvements around chatbots. We've had chatbots for a really long time, but the recent chatbots we're seeing are a lot more human-like, and the word for that is fluency. They sound like humans. Essentially, there's a technique in machine learning called reinforcement learning from human feedback that's been applied to these giant models, and that's resulted in this fluency. There have also been some hacks around context. ChatGPT really gives you this illusion that it understands and can keep context, and there are some very smart things they did on the back end to make that work. Beyond that, I'll also say that on the business side, the latest research estimates posit that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually. So from a business perspective, the opportunity is enormous. Just to put that number in context, the United Kingdom's entire GDP in 2021 was $3.13 trillion. So again, from a business perspective, just enormous opportunities, and people are very eager to take advantage of the tech.
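The attention mechanism described above can be sketched as a toy computation. Everything here is invented for illustration: real transformers use learned query, key, and value projections and many attention heads, but the core idea, each word weighting its relationships to every other word, is the same.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors.

    X: (seq_len, d) matrix, one row per token. For simplicity the same
    matrix serves as queries, keys, and values (no learned projections).
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # pairwise similarity between tokens
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: rows sum to 1
    return weights @ X                             # each token mixes in related tokens

# Three toy 4-dimensional "word" vectors; the first two are similar,
# so they attend strongly to each other and barely to the third.
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [1.0, 0.1, 0.9, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
out = self_attention(X)
print(out.shape)  # (3, 4)
```

Training such models is fast partly because all these matrix products can run in parallel across the whole sequence, unlike the step-by-step recurrent models that came before.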
It's really amazing. It's exciting, it is truly world-changing, or at least could be. But I'm a product person with a bias toward the customer and the business, and I don't know that we all shouldn't be. In other words, I'm not a technical person. I'm not an engineer, I'm not a computer scientist, which, by the way, for the record, I kind of wish I was, but it's just not my educational background. Because of that, I tend not to get enamored with the technology, because frankly, I don't know it, but with the problems we're solving. And I think a lot of people are asking this question these days. The technology truly is world-changing, I have no doubt about that, with the ability to do things we haven't seen before. With that being said, what are we going to use it for? Tell me some ideas, and there are already some out there, and certainly some future innovations in mind. But what's it good for? How do we know as a business or as a product team that we should be building something in and around generative AI? Are we solving real problems, or just gravitating to the awesome technology? So what's it good for? What are you seeing out there with use cases?
Yeah, great question. Because as with any technology, there are certain things where it really shines, and others less so, where you have challenges and problems. Generative AI really shines for use cases where you need creativity or inspiration. I mentioned before this notion of fluency: it sounds remarkably like a human, and this is why it's so mind-blowing, and, going back to your earlier question, why we're seeing so much excitement. However, one of the big problems is that it can sound plausibly correct while it's actually lying. This is something that's called a hallucination, or confabulation. So you can't be 100% sure that you're getting a correct answer. If you think about a framework for when you should use it: really, for those times when you may not need to be absolutely accurate but you need creative inspiration, that's where it really shines. Some examples: let's say you want to write a poem, you don't need to be correct. You want to write science fiction, you want to generate beautiful images that really stretch your imagination, you need stock images for marketing purposes, children's books, composing music. That's where this technology really shines. For those use cases where you absolutely need the right answer, that's where people run into trouble with these hallucinations, because it's very difficult to predict when they happen, and we haven't really figured out great ways to mitigate them. And by the way, there is a framework by a colleague of mine named Barak Turovsky for when to use generative AI, and I encourage people to take a look at it. I'll send you the link so everyone can take a look.
That's great. Yeah, we'll put that in the show notes. That's such an important part of this conversation right now, and over the next decades as we continue to use this and build this: how do we know? What are those frameworks that tell us, yes, it's okay to use this, yes, it's valuable to use this, or we should have some parameters around this? I anticipate, as you said, that being such a big part of the conversation, and hopefully it will be, because there are ways this can be world-changing in very good, amazing ways for humankind, and then there are some ways that I think concern people. But to me, the good outweighs the bad, at least right now, and we'll see how that goes. But as you think about this, as you work on this every day, as you see the current and the future, is there a secret sauce, per se? Is there, like, this is the area that's going to make the biggest impact, this is where we should focus, this is how we do this right? Is there a, quote unquote, secret sauce in generative AI these days?
Yeah, I'll go into the secret sauce. But as you were talking, I realized there's one other aspect. We talked about generative AI from a use case perspective, but there's also the business perspective. Because this is such a hot area, I'm hearing a lot of people say, well, my leadership comes to me and wants a plan for generative AI. So I wanted to offer the audience a quick summary of a little framework I have for that, and then I'll go into the secret sauce and explain what's so special about it. What I'd say is, it depends on what kind of business situation you're in. If you're in a business where generative AI could make your business obsolete (an example might be a business that generates stock images, or tools around creative work), those are the times when you should really go all in and put a lot of resources into exploring generative AI. There are other cases where generative AI can help boost your business. Some examples might be customer support, where it does a fantastic job of summarizing anything with call centers or chat, and productivity improvements: summarization, automated note-taking, and so on. There, you have to look at whether you should build or buy, because a lot of the tools are coming up and available. Third, you might have something in your business that gives you a competitive advantage with generative AI, and there I would make bets. Examples of that might be that you have an incredible user base, or that you have proprietary data. If you have proprietary data, that's where you could really fine-tune this technology and gain a tremendous advantage in the market. Or it could just be that there's really no impact and it doesn't make sense. So I would say you have to go through and evaluate what your situation is,
and whether the technology is a match for your business. So that's a brief framework for thinking about the business aspect. Now, regarding the secret sauce. I mentioned earlier that most of the chatbots we've seen before are kind of stateless: they treat every new request as a blank slate, and they aren't programmed to remember or learn from previous conversations. What we've seen with ChatGPT and all of the large-language-model chatbots is that they can remember what the user has told them before, in ways that really personalize the experience and give you that awesome experience. So that's why this is really shining and coming through, and why people are excited about it. Secondly, there's, as I mentioned earlier, something called reinforcement learning from human feedback, which is essentially a technique. Reinforcement learning has been around for a while, but it hasn't been applied quite in this way. Essentially, it's a process that's part of training these models. What happens is a reward model is trained with really high-quality data from human labelers. ChatGPT had something like between 100,000 and 1 million examples used to train the reward model. This is why it's so remarkably good: just lots of human-labeled data that really put a smiley face on a model that was otherwise unwieldy. And then the large language model is optimized to generate responses for which the reward model gives high scores. That, in a nutshell, is what reinforcement learning from human feedback is. And as I mentioned, there are various stages to training these foundation models. Again, going back to your question of what's the secret sauce: more data than ever before, and more compute. It's extremely expensive to pre-train the models; 99% of training compute time is on pre-training. And really, there are only very few organizations that can afford to do that.
And it's the big companies. So, you know, again, those are the reasons and the secret sauce behind this stuff.
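The reward-model idea described above can be illustrated with a toy sketch. The scoring rule and candidate responses here are entirely made up, and best-of-n selection is a drastic simplification of the policy optimization used in real RLHF, but it shows the direction of the technique: steer generation toward outputs the reward model scores highly.

```python
# Toy "reward model": in real RLHF this is a neural network trained on
# human preference labels; this hand-written scoring rule is a stand-in.
def reward(response: str) -> float:
    score = 0.0
    if response.endswith("."):
        score += 1.0   # pretend labelers preferred complete sentences
    if "sorry" not in response:
        score += 0.5   # pretend labelers preferred direct answers
    score += min(len(response.split()), 10) / 10  # mild length bonus
    return score

# Candidate responses a base model might sample for one prompt.
candidates = [
    "sorry i dunno",
    "Paris is the capital of France.",
    "capital france paris",
]

# Best-of-n selection: a drastic simplification of the optimization
# used in practice, but it shows the core loop of steering generation
# toward what the reward model scores highly.
best = max(candidates, key=reward)
print(best)  # Paris is the capital of France.
```

In production the reward model's scores feed back into the language model's weights rather than just filtering samples, which is what makes the fluency stick.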
Yeah. And I think I'll ask you to dig deeper into this in a moment. But I think the fact that only a handful of companies and entities can afford to do the big models, the secret sauce, if you will, is maybe concerning, maybe a disadvantage. So hold that for a second, though, because I want to ask a bit of a more basic question, if you will. Not that the answers are necessarily basic. But AI, we hear about all the time, and have heard about for years. Generative AI, of course, I'm sure people like you have heard about for a long time, but for the general public it's a somewhat new concept or term. So tell me a little bit about the difference between generative AI and other AI.
Yeah, yeah. So normally with AI, the way we've developed it is you would train a library of different AI models, and each model would be trained on task-specific data for a specific task. Maybe summarization of text, or classification of photos (there's this famous meme of "hot dog or not"). Well, with foundation models, one model can be used for a lot of different use cases and applications. A foundation model can be transferred to any number of tasks and perform multiple different functions. What happens is you train a model on a huge amount of unstructured data in an unsupervised manner, and then you fine-tune on top using a small amount of labeled data. So you have one model that can be fine-tuned much faster for a variety of different tasks. The advantage of this is that, when applied to small tasks, it can dramatically outperform a model trained on just a few data points, and you need far less labeled data. The disadvantages of this technology are the compute costs. I mentioned already, it's very expensive to train, but it's also very expensive to use: inference adds up, and cost has been a big issue. We're seeing improvements, and cost is going to come down, but it's still pretty costly. And then the other big disadvantage is trustworthiness. We touched on hallucinations, but because there's so much data that went into it, you just don't even know what went in, so it can generate toxic information, hate speech, and so on, some things that are potentially dangerous.
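The pretrain-then-fine-tune pattern described above can be sketched with a toy stand-in. The `embed` function below fakes a pretrained foundation model with a crude bag-of-characters embedding, entirely invented for illustration; the point is that one shared representation lets each new task be "fine-tuned" from just a couple of labeled examples.

```python
import numpy as np

# Stand-in for a pretrained foundation model: maps text to a vector.
# In reality this is a large network trained on unlabeled data; here a
# crude normalized bag-of-characters embedding fakes the idea.
def embed(text: str) -> np.ndarray:
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1
    return v / max(np.linalg.norm(v), 1e-9)

# "Fine-tuning" a task head: with a shared embedding, each new task
# needs only a few labeled examples (nearest-centroid classifier).
def fit_head(examples):
    groups = {}
    for text, label in examples:
        groups.setdefault(label, []).append(embed(text))
    return {lbl: np.mean(vs, axis=0) for lbl, vs in groups.items()}

def predict(head, text):
    v = embed(text)
    return max(head, key=lambda lbl: head[lbl] @ v)

# Two different tasks reuse the same frozen embedding.
sentiment = fit_head([("great happy good", "pos"), ("awful sad bad", "neg")])
topic = fit_head([("goal striker match", "sports"), ("senate vote law", "politics")])
print(predict(sentiment, "good great"), predict(topic, "striker goal"))  # pos sports
```

The old approach would have trained a separate model from scratch per task; here only the tiny task heads differ, which is why foundation models need far less labeled data per task.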
Thank you for that. I think that's an important part of the context for this conversation. As you're thinking about this from a product manager or product team perspective, tell me how you've seen it. You touched on it a little when you mentioned the framework for businesses of should we do this, and what should we do. I assume that framework is also very valuable for product teams, whose leaders, like you said, come to them and say, hey, we need AI. What does that mean for you, right? So using that kind of framework of what the impact is, is important. But from a product manager perspective, are there other challenges in working in AI and generative AI at this point in our world, with it being so visible? Are there specific challenges, or advice that you would give to a product manager?
Yeah. The first one: I think all of us as product managers think about users and customers, and there's a set of challenges around the user experience. A big one is inconsistency in the user experience. What I mean by that is, when using applications, users expect a certain amount of consistency, and this technology is stochastic, it's probabilistic. That means there's no guarantee it's going to give you the same output for the same input every single time. This is a huge challenge that has to be taken into account. Again, you have to go back and match it to your use case. So that's one, and there are some ways to mitigate it, but they're not perfect, and there's a lot of research and work going into making it more consistent. Secondly, I haven't seen many people talk or write about this, but writing prompts is hard. With generative AI, you write a prompt, and then it generates text, image, audio, whatever it is. But most people are not articulate enough to formulate good prompts. A funny thing that's happened is a cottage industry of people selling their prompts. And the latest research around literacy, even for very rich countries like the United States and Germany, is that half of people have low literacy; people can't write very well. Writing is a skill; it's not easy to come up with the right words to get everything out of the model that you need. This is why you're seeing prompt engineering come out as a discipline, this new skill, a new job, that is basically kind of tickling the model to get the ideal output you want. So if you're building with generative AI, what I recommend is giving people options. I think Adobe Firefly is one product that did a pretty good job with this: what they've done is they've given you options around describing different dimensions and aspects of photos.
So when you're writing a prompt, they show you different styles, offered as navigation, as a panel of refinements. That means people know there are certain styles they can pick from; they don't have to come up with them on their own from scratch. So I would say, offer people refinements, so they don't have to come up with the perfect prompt every time. But I think we're going to see a lot of new thinking around the user experience for generative AI, and I think this is really an exciting area where there's a lot of room for innovation.
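The refinements-panel idea can be sketched as a tiny prompt builder. The option lists and template below are hypothetical, not Adobe Firefly's actual choices, but they show how constraining free-text prompts to curated refinements spares users from having to write the perfect prompt themselves.

```python
# Curated refinement panels a UI might offer (all values invented).
STYLES = ["watercolor", "photorealistic", "pixel art"]
LIGHTING = ["golden hour", "studio", "neon"]
ASPECTS = ["square", "wide", "portrait"]

def build_prompt(subject: str, style: str, lighting: str, aspect: str) -> str:
    """Assemble a full prompt from a short subject plus panel picks."""
    if style not in STYLES or lighting not in LIGHTING or aspect not in ASPECTS:
        raise ValueError("pick refinements from the offered panels")
    return f"{subject}, {style} style, {lighting} lighting, {aspect} framing"

prompt = build_prompt("a lighthouse at dusk", "watercolor", "golden hour", "wide")
print(prompt)  # a lighthouse at dusk, watercolor style, golden hour lighting, wide framing
```

The user only types the subject; the refinements do the articulate part, which is exactly the low-literacy gap described above.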
Yeah, I agree with you. I think it's going to be a really fun time to watch and to be a part of over the next few years and decades. So, you mentioned something that I would call a risk for this type of thing, and there are all kinds of risks for this, again, kind of groundbreaking or world-changing type of technology. One of the risks internally for organizations is that the skills may not be there: people who are really, really great in other types of technology or other types of product management may need to build some new skills. So that's one avenue of risk around this. But what are some other risks for businesses, for customers, for everyone in this ecosystem? What are some of the risks that you're seeing, or that you're prepared to see in the future?
Yeah. And to add on to your comment around organizational knowledge: I'm a firm believer that eventually all product managers are going to be AI product managers; we're not going to be able to escape it. And while you don't need to know the deep technical details, I think it's really good to understand what the technology can do and what some of the trade-offs are. But as you mentioned, there's a whole set of risks, so I'll go into them. One is fairness. What does fairness mean in the context of a large language model? Well, one good question is whether these models treat men and women equally. For example, consider a prompt like "Dr. Hanson studied a patient's chart carefully." What you'll find is that a lot of the time, the model will refer to Dr. Hanson as "he." That's one example of where a lot of these systems really skew, typically male. I did a bunch of experiments: if you go to Midjourney and ask it to give you a photo of a product manager, it gives you men most of the time. It gives you men for technology CEO, and all of these. So fairness is one dimension that I think is really important. Another one that's really huge, and that I think people have to pay attention to, is privacy. I'll read a verbatim quote: GPT-4 "has the potential to be used to attempt to identify private individuals when augmented with outside data." So these models could potentially leak data, and we're seeing a lawsuit against OpenAI. Samsung also had an incident where workers made a major error by using ChatGPT and shared proprietary source code. So privacy is a really big risk that organizations have to consider carefully, because user and customer trust is really paramount for any business. And there are so many of them.
But another one that's pretty close to my heart is that these systems are not transparent. They are black boxes, even for the people who build them. There are actually a bunch of tools that can help with this, but transparency for these systems is really huge, because if you don't have transparency, then you can't mitigate any of the issues that you find. This is one of the things anyone building with generative AI should consider: investing in making your systems more transparent is something that could really pay dividends later down the road. And here I'll mention that my team at Meta shipped something called AI System Cards, and I was lucky enough to be the one who set the very original vision around this. AI System Cards are a tool that describes how AI systems work, and they're published publicly. You might ask, why is this important? Well, AI regulation is coming, and in order to be compliant, it really helps to be transparent. There's one final one I'll talk about, which is copyright. A lot of the models use copyrighted data, and there's been a whole set of lawsuits around copyrighted data; people really feel that they're being taken advantage of. Again, I think one company that's done an amazing job with this is Adobe: with Firefly, all the models they trained were trained with data they had full and total permission to use. There's so much to think about. But for anyone building AI products, you have to carefully weigh these risks against the business advantages, and so it's a game of trade-offs. You have to spend time thinking about all of these.
Yeah, absolutely. Risk mitigation is part of a product manager's, a product team's, and a product leader's role in any product. But for these types of products, there are new contexts constantly, new situations, new legal risks, and new regulations that aren't even there yet that we have to anticipate and work toward. So I love that, and I appreciate you walking through those, because I think anyone who's in AI or generative AI today, or will be on a product team, has to be thinking about these risks going forward. Again, it's not just about can we build it, or should we build it, but how do we build it in the right way? So a really, really important conversation there. Somewhat tied to some of the things you just mentioned: people are building these, right? And we talk about this, or at least I've had a lot of conversations about, psychological safety in general product management: how do we as leaders create a safe environment, an environment that sets us all up to bring our full selves to work, to be psychologically safe, to build the right things for the right people and for the right reasons, all of that. And that, again, is highlighted even more, in my opinion, in this area. So in your experience leading teams who were building these amazing systems, how do you cultivate psychological safety for AI teams? What are some of the things that you've seen that really set that culture and set that stage?
Yeah, this is a great question. I think a lot of it goes back to what you alluded to: you do the things that generally work for any product management organization. But what's different with AI and generative AI is, again, the probabilistic nature of AI products. I have a little story about this. A few years back, I had a role where I was waking up at 6am, I'd had a particularly sleepless night, I had a one-and-a-half-hour commute, and an 8:30am in-person meeting with the SVP. So I go in to present my big AI project; of course, the traffic was horrible, and the SVP is pedantic, you know. The thing I really got burned on is: why do the recommendations not make sense? Why does the AI not always give me the exact same answers? Kind of exactly what we talked about earlier. Given the situation, I felt completely perplexed that he didn't understand this, and I just felt unappreciated. It was a very rough meeting. Looking back, what would I have done differently? Well, again, it's about setting the expectation with your leadership that these systems are probabilistic: you run a program, you don't always get the same output. This is the fundamental shift with any AI, and there's always some sort of confidence threshold that you set. Executives really have to internalize that AI doesn't always have reasonable answers; this is a normal part of development. Especially in the beginning, the first version is going to be terrible. It is an incredibly iterative process, and that might be new for some organizations: you have to be very iterative, very flexible, and you have to really refine your product over time. The more data you have, the better it gets, which is true for all AI products, including generative AI.
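The probabilistic behavior in this story can be demonstrated with a toy next-word sampler, plus the kind of confidence threshold mentioned above. The vocabulary and logits are invented for illustration.

```python
import math
import random

# Toy next-word distribution a model might produce for one fixed prompt.
logits = {"Paris": 2.0, "Lyon": 0.5, "London": 0.1}

def sample(logits, temperature=1.0, rng=random):
    """Sample one word: the same input can yield different outputs."""
    weights = {w: math.exp(l / temperature) for w, l in logits.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for w, z in weights.items():
        r -= z
        if r <= 0:
            return w
    return w  # guard against floating-point leftovers

# Fifty runs on the identical input: more than one distinct answer
# appears, the stochastic behavior executives have to internalize.
rng = random.Random(0)
outputs = {sample(logits, rng=rng) for _ in range(50)}
print(outputs)

# A confidence threshold turns "probably Paris" into a decision rule:
# answer only when the top word holds enough probability mass.
def confident_answer(logits, threshold=0.7):
    weights = {w: math.exp(l) for w, l in logits.items()}
    total = sum(weights.values())
    best = max(weights, key=weights.get)
    return best if weights[best] / total >= threshold else None
```

Here `confident_answer(logits)` returns "Paris" (about 73% of the mass), but raising the threshold to 0.8 makes the system abstain, which is exactly the trade-off a team has to tune and explain to leadership.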
So going back to your question about psychological safety for AI teams: I think, again, one, set the right expectations; two, really work with your leadership to explain that you have a whole different development process that you need to put in place. AI will always go off the rails somewhere; it's always going to give some odd results here and there. And then, I mean, classic PM mistake: going into any meeting with the SVP, make sure you prime them beforehand and give them the right context. But on psychological safety, when I have my own teams, it's really about working with the team, setting the right expectations, setting the right context, making it safe for them to ask questions, teaching them about the Socratic process. I think if you're a leader in an organization, it's really up to you to set that tone and make people comfortable. It applies to any product: people should be okay making mistakes. That's how you learn, and that's how you know you're making progress. If you're not hearing any of that and everything sounds perfect, it probably is.
Yeah, love that. Too good, too good to be true, absolutely. You know, I think it's so important that we talk about this, because there are lots of folks listening right now who are leaders of teams that will build AI products, and they themselves are not experts, like myself, like I was saying earlier. And for leaders to know what they don't know, or to know that folks on their team know better, allow them to set the expectations, allow them to teach us what we need to know, and don't go in with these maybe traditional expectations or traditional ways of doing things, because this may be different. Otherwise it won't set a good example, it won't set a safe environment, and ultimately the products and the teams won't be successful. So I think it's a really important part of the conversation: as product folks on AI teams, we may have to do some educating around the organization of what this means, the process, the outcomes, and what to expect. And as leaders, we may have to realize that we don't know everything that we think we do, because this is maybe a new frontier. Again, you've got to set that up and set that culture from the beginning. So I love how you framed that and talked about that, and I think that's a really, really important part of this conversation and of success in organizations.
Yeah, absolutely. Another one I'll add: if you're a leader, to set that psychological safety, have clear processes, have regular check-ins and reporting mechanisms, and give people the avenue to come talk to you and get the context they need. It really helps; nobody likes to be blindsided. It helps to know how to work together as an organization. And surprisingly, I've been in a lot of organizations where that doesn't happen. But the ones that have been amazing to work in have that clarity around what's expected, what the processes are, and how people should work together.
Yeah, I love that. So important. That clarity just makes everything run more smoothly; I call it a lack of organizational drag. With organizational drag, everything just takes so much longer because of that lack of clarity, or it's just harder. So my final question for you is about resources. You yourself have mentioned some of your resources and others that you've used, and again, we'll link to everything. But what resources have you used in your learning, and what would you recommend to product folks trying to learn about this?
Yeah, so Andreessen Horowitz has a really good one called the AI Canon. It's a listing of all the different resources around generative AI from different perspectives, whether it's business, technical, and so on. Google has a decent introduction to generative AI; I think it could have been a little more interactive, but it gives you a pretty good understanding of the technology if you put in the time. There's a community called the MLOps Community; again, it skews a little more technical. What I would suggest to people is to actually spend the time to write. I myself have a Substack, and I also have a podcast that focuses on generative AI. If you have that kind of time, I would encourage people to write, or to set up the kinds of conversations that we're having. I think it's a fantastic way to grow. I really believe that writing is thinking, so for me personally, writing really helps get to clarity and understanding. Another tip: if you have the opportunity to give a guest talk or teach a class, that's a good way to really refine your skills. It's something I've been doing lately.
I love it. I love it. And you have a workshop coming up on Maven, right, where you're teaching generative AI?
I did one a few weeks ago; it was Generative AI for Business. If there's demand, I might do another one, but I'll definitely be doing more courses on Maven, perhaps around AI product management. I partnered with a colleague named Marily Nika, who does a lot of the AI PM courses. And I'm exploring potentially expanding, going into more detail around user experience for AI, or privacy, or transparency, again depending on how much time I get and the kind of interest I see from people.
That's great. Well, again, we're going to link to those resources, and link to Natalia's Substack, YouTube, Twitter, LinkedIn, all of the ways that you can connect with her. This has been just an amazing conversation. Natalia, thank you so much for sharing your time and your wisdom. I have learned a tremendous amount, and I know everyone listening has as well. So thank you so much for being my guest on Product Voices.
Thank you so much, JJ. It's been a pleasure.
And thank you all for listening to Product Voices. You can find more at productvoices.com, and I hope to see you on the next episode.
Thank you for listening to Product Voices, hosted by JJ Rorie. To find more information on our guests, resources discussed during the episode, or to submit a question for our Q&A episodes, visit the show's website, productvoices.com. And be sure to subscribe to the podcast on your favorite platform.