JJ Rorie

Emerging Technologies - A Panel Discussion

Episode 081

In this episode of Product Voices, we venture into a world transformed by emerging technologies, with esteemed guests Eva Agapaki, Tatyana Mamut, Katharina Koerner, and Yuying Chen-Wynn. These tech pioneers offer their insights into emerging technologies that will impact our world for years to come.

We grapple with the immense promises of AI as it has evolved from predictive to generative, and from domain-specific to generalizable - while acknowledging the monumental challenges and responsibilities tied to this potent tool. We discuss the future of product management in the AI era, and the panel offers advice for product managers on the skills that will be needed to succeed.

This episode is brought to you in partnership with Product Advisory Collective, a firm that accelerates product with curated Product-Leader Fit™ and outcomes-driven advisory and consulting services.



Connect with the panel:

Eva Agapaki, PhD: Head of Product with AI/ML expertise at startups and enterprise corporations, Founder of Hatch Labs

AI Product newsletter:

Tatyana Mamut, PhD: Product/Tech Innovator and Board Member

Katharina Koerner, PhD: Advisor, Researcher, Educator focused on AI Governance, Privacy Tech, Responsible AI

Yuying Chen-Wynn: Head of AI at PEAK6

Product Advisory Collective (PAC)

Show Notes:


Introduction to Product Voices Podcast

[0:03] Welcome to Product Voices, a podcast where we share valuable insights and useful resources to help us all be great in product management. Visit the show's website to access the resources discussed on the show, find more information on our fabulous guests, or to submit your product management question to be answered on our special Q&A episodes. That's all at And be sure to subscribe to the podcast on your favorite platform. Now, here's our host, JJ Rorie, CEO of Great Product Management.

Introduction to Product Voices and partnership with Product Advisory Collective

[0:37] Hello, and welcome to Product Voices. I'm excited. We have a very special episode today with an amazing panel of guests. This episode is also in partnership with an amazing organization called Product Advisory Collective, or PAC. The PAC accelerates product with curated product-leader fit and outcomes-driven advisory and consulting services. It's really an amazing organization that's helping organizations all over the world. So check out more information about PAC at Really excited to partner with PAC on this. I think it's a great synergy with Product Voices, and I'm truly honored to be in the presence of my esteemed guests. So let me do a few quick introductions and then we'll get into the conversation for the day. My esteemed guests for today: Eva Agapaki, head of product with a lot of AI and ML experience at startups and enterprise corporations; she is the founder of Hatch Labs. [1:40] Tatyana Mamut, product/tech innovator and board member, amazing product leader and AI expert. Katharina Koerner, advisor, researcher, and educator focused on AI governance, privacy tech, and responsible AI. And Yuying Chen-Wynn, head of AI at PEAK6. Thank you all for joining me. I'm so excited about this conversation.

So, first question, I think I will give to you, Eva. Truth be told, we know AI is the elephant in the room when we're talking about emerging tech, and we're going to get into lots and lots of questions and conversation around AI. But are there other emerging technologies out there in the world that we should be considering, that we should be learning about, that we should be pulling into our product efforts?

Thank you for the question, JJ. So, first of all, when it comes to AI, these days it's all about generative AI. [2:43] And that's where the wow effect comes from, I would say, especially in software development and everything we know. We all know about ChatGPT and how, in just five days, it reached, like, five million users.
But still, I see a lot of potential when it comes to developing hardware, and there are two areas where I see this emerging tech coming. First, edge computing, especially for devices like our mobile devices or AR/VR applications, where we want low latency, unlocking lots of other applications like autonomous vehicles and digital twins, and also energy efficiency, for example battery usage for lots of IoT devices. And the second area of hardware that I see as emerging tech in the next couple of years is clean energy and efficient battery usage, smart grids, electric vehicles, and lots of collaborations between national labs and startups.

[4:01] That's great. I love those two, and I think they're both really making an impact, right? And would you agree that, while progress is being made in both of those, they're still somewhat nascent in the impact they're making, or the number of entities working on them and the effect that they're having at this point?

Yes, exactly. In terms of software, there's so much that has been developed, but hardware-wise...

Advancements in Data Security and Industry-Specific Breakthroughs

[4:32] There's still a long road to go. So I think we will see a lot of advances, especially when it comes to data security, where we will really want to have the data on device and not in the cloud. There are lots of industries, like healthcare, manufacturing, and oil and gas, that really need this breakthrough to happen. So I see a lot of potential there.

Yeah, and you know, it's so interesting, because everything is an ecosystem, right? And so we make progress in some areas, but we need infrastructure to catch up to it, right? Tatyana, you have something to add there?

Well, I would just add that there are some, you know, kind of further-out technologies as well that people are talking about. I mean, I live in San Francisco; I now rarely use Uber or Lyft, it's mostly Cruise and Waymo. So, robotics. And if you think about generative AI, right, and kind of AutoGPT, married with cars or other pretty autonomous embodied agents, that could be a big shift in our society.

[5:42] I think a lot of people are also talking about quantum computing. I heard Stephen Wolfram recently say it's never going to be a thing, but maybe it will be a thing. So all of our understanding of privacy and encryption is going to need to change if quantum computing happens. And of course, there are tons of discoveries now being made in synthetic biology and DNA research. So on the biotech side, there is so much to track that I don't even try to keep up, because I'm focused on software and AI. But there are amazing breakthroughs in human longevity, in synthetic biology, in organs, both transplantation and, you know, the creation of organs. And again, all of these things interwoven with AI could have massive implications for our lives.

Massive Technological Transformations Happening Simultaneously

[6:35] And of course, there's also this whole thing about, you know, fusion power, which may or may not happen as well. But again, all of these massive transformations in technology seem to be happening at the same exact time. [6:52] And wow, it is an incredible time to be alive and to be in the tech space right now.

Yeah. Absolutely. We are both blessed and cursed with being leaders at this time.

Yeah, yeah, absolutely. So I want to jump on that and just kind of piggyback. Now we're just going to jump right into AI. Thank you both for bringing those up. I think it's so important, and it's such a great point that there's just an interconnectedness amongst all of this, or most of this anyway. So let's just jump into AI.

[7:26] The truth is, AI has been the topic of 2023, right? I don't think anybody would really argue with that, at least in our circles. But it's been around for a while, right? It's been around, what, a decade or more. But now it's getting more and more visibility. It's hotter and hotter. So tell me why you think that is. Why is this the year of AI? Is it because of some of the things and use cases that are coming together? Or, you know, what is it about now that is making AI so hot and so visible? Tatyana, I'll just throw that to you.

[8:02] I hear people ask this all the time and either claim, like, AI is new or AI isn't new. The reality is, we need some categories to think about the current phase of AI. I was chief product officer at Nextdoor back in 2019; we used AI for feed ranking, so it was around then. I was at Salesforce in 2015 when Salesforce launched Einstein, the first AI product for predictive analytics on opportunities. So in both B2B and B2C experiences, AI has been around for about a decade.
However, if you think about a two-by-two matrix, let's say on the x-axis you have predictive AI and then you have generative AI. Where we were before is predictive AI. So we would throw a lot of data into the model, and the machine would learn, right, and uncover patterns in that data. And then it could predict, for example, in the social media case, what is the next best piece of content to serve you? Or in the Salesforce B2B case, what is the next best email to send that would be recommended by the system? [9:10] So now, why 2023 is a huge shift in AI and why we're talking about it now

Shift from Predictive to Predictive and Generative AI

[9:16] is because we've moved from predictive to predictive and generative. Now, not only can the system tell you what is the next best recommended step to take in your sales opportunity in Salesforce, it can actually write the email. It can actually generate that email. Or in the case of social media, and this is where we start to get really scary, and I'd love to hear Katharina's perspective on this, now you can actually generate a post that is going to be most engaging for you based on hyper-personalization.

[9:48] That's one shift. On the y-axis, we have domain-specific and then generalizable. Domain-specific, again, you would put in all the data points for a particular domain. If you were creating an AI model for salespeople for what's the next best step for your opportunity, you didn't give the model everything. You just gave it the corpus of data that was relevant to that domain. These LLMs are fed with everything on the internet. So what are the topics that the internet knows about? Just about everything, except for the very good things. But we can leave that for another discussion.

[10:29] So now you have this generalizable thing. So you can ask it not just "write me the next best sales email"; you can say, "in the style of William Shakespeare, write a sonnet that is the next best sales email for this particular customer."

[10:45] And that is a breakthrough. So thinking about those two axes is how I help people explain why now is different.

Hi, this is Katharina. I would like to chime in. I totally agree with everything that has already been said.
But I think there are also developments that now come to fruition in terms of regulation, which of course makes everything like responsible by design, or security and privacy by design, even more prevalent. And it's even more clear that we have to take care of those things, with the EU AI Act coming up and, of course, with all of those US privacy laws popping up in the states, which also cover AI applications or systems insofar as they process personal data. So I think there's also so much more regulation already on the table and coming up, and that's also contributing to AI really becoming this omnipresent topic.

This is Eva. I agree with everything that's been said. And I also want to add that AI is now so commoditized, meaning that it's very accessible for everyone to use, and so many founders and startups can actually access an LLM on their personal laptop and start developing applications very easily. That has really been a great shift and breakthrough that has enabled these technologies.

Concerns and Biases in AI, Addressing the Issues

[12:13] Yeah, those are all really great insights, and they help paint the story. I think folks that aren't, you know, knee-deep in these emerging techs every day may, number one, think it's brand new, right? Just, AI is, oh, it's brand new this year. [12:31] Or, you know, not really understand that history and that progression. So that's great.

So, Yuying, I think I'll ask you this question. It's kind of an offshoot of that. So amazing things are happening, and amazing things will continue to happen, but there are some concerns, right? And I think all of us, you know, have sights on those, but what are your thoughts? And I'll ask anyone else to chime in afterwards. What are your thoughts on the concerns that organizations, societies, and people should have when it comes to these emerging technologies, specifically AI? And is there anything that you or the organizations you work with are doing to address these concerns?

So a couple of the major ones come from essentially how the technology was created. Particularly the generative AI that we mentioned earlier is mainly trained on internet data. So a lot of us are probably familiar with what floats around social media. So one of the things in terms of kind of the inherent dangers, because that data set wasn't curated: bias, misinformation, underrepresentation. [13:49] It actually is an algorithmic problem if we don't correct it and we just go with essentially statistically voting for whichever one survives, which is the issue that you see with a lot of generative AI today. You see it with text, and as you go into images and video, you see it quite a bit, right? So that's one of the core ones to correct for. It is also where hallucinations come from, right? You know, we always make this joke in our house: it's on the internet, it must be true, right?
But there are definitely people who believe that, and AI has this idea of super intelligence behind it, so I think people are even more likely to believe it. Well, the AI said it, so it's even more credible. And so that's one of the key ones we have to really watch out for in the consumer space. Now, from an enterprise organization space, I'm in a highly regulated industry with financial services, fintech. Like, we track transactions.

The Challenges of Algorithmic Trading and Accuracy

[14:55] There, 80% accuracy is not good enough. 98% accuracy is not good enough. We're talking about money here on the other side. And if you're doing algorithmic trading, you can put a company out of business with a mistake. Right? So we have a lot of general employee interest in using it. But in terms of company-wide deployment, we're nowhere close, because of that hallucination problem, accuracy problem, and then being able to track for compliance and audit, audit trails. We're nowhere close. So, lots of fun exploration. More emails: low risk. But in terms of core business in the regulated space, we're not there yet. Hopefully, there are companies really doing the work around the data transparency, the privacy piece, the compliance, being able to track data lineage, all these phrases that are day-to-day life for me now. I didn't use to be joined at the hip with the chief compliance officer. But from an organization perspective, whether we're building or buying, we start with: where do you get your data from? What LLM are you getting? Can I switch it out? Can I tune it? Can I change it? Can I track it? Can I follow the trail?

[16:20] So we're very selective up front, but those are major, major problems to solve before this actually becomes widely adopted in the enterprise space, especially in regulated industries.

Yeah, that's actually a really great point about harnessing it before you roll it out to the employee base, because it can make such an impact in a negative way, right? Katharina, I know you work deeply in responsible AI. So what are you seeing out there that's really important for folks to keep an eye on, and are there any organizations that are doing a good job of ensuring that this grows in the right way?

Yeah, thank you for the question. And thanks, it was also super interesting to listen to what was just said.
So we see so many responsible AI principles as self-regulatory initiatives by companies. Of course, all those responsible, ethical, trustworthy AI principles, such as privacy, security, accountability, explainability, non-bias, human in the loop, etc. And I think there's still this responsible AI operationalization gap. So as we have referred to security by design or privacy by design, we now have the task to [17:40] design products, services, and infrastructure responsibly, in the sense of those responsible AI principles. And there is, of course, also a big opportunity for startups or new services here: not only horizontal AI governance platforms, but also specific services such as, for example, lending with explainability in mind. So I think there's also this vertical opportunity of responsible-by-design services and products. [18:16] And I think that the operationalization of responsible AI principles is something where we're maybe on top of the game in specific tech bubbles, but we really have to make this broader.

Challenges in Open Source AI Governance and Ethical Considerations

[18:31] Accessible, that is: the knowledge and the techniques and when to do what, because it really has to be applied throughout the whole AI life cycle. That's pretty challenging, and it probably needs to be broken down for, you know, so to say, average data science teams, so that they can really access this information, this knowledge, and these methods as well. And we have so many changes here, also in regard to the open source ecosystem and AI in general, now with increasingly many models being released as "open source" without fitting under the open source software definition. There's also so much change, and open source has many available tools for responsible AI. At the same time, how do you do responsible governance in open source projects? So that's also something I wanted to mention, just to raise awareness for it.

That's actually another good point.

Can I chime in here? This is near and dear to my heart.

Yeah, let's get it.

So there are ethical challenges that are not specifically AI challenges, but they are magnified by AI. And then there are actual, new challenges that are posed by AI. The non-AI-specific challenges are things like bias and misinformation. [19:53] Those things, I can tell you, existed in social media well before AI and well before generative AI. As the chief product officer at Nextdoor in 2020, during COVID and Black Lives Matter and the election, I can tell you those have been massive challenges. [20:12] You know, for hundreds of years, not even just for the last decade. So our regulatory frameworks are meant to solve that, thinking that AI is basically just an amplification of the things that we've already known. And I agree, like, XAI, explainable AI, and those types of things, and the NIST framework for AI governance, right, are really important for leaders to know about.
However, there has emerged a new class and a new category of AI challenge that I believe very few people are talking about, and it is best exemplified by the Alexa penny challenge story. Do you guys know this story?

No, tell us the story. Yeah, let's hear it.

First of all, if you are listening to the podcast, I suggest you pause and Google it for yourself, because it's far more effective to see it, actually, in the wild. So a brilliant Amazon Alexa team, the best technologists in the world, an amazing PM who I absolutely adore and think is one of the smartest PMs I've ever met, created generative answers for Alexa. So now not only would it search the internet and give you an answer, it could actually generate an answer for you. And this amazing team of the best people in the world did a ton of RLHF on this. They red-teamed it. They made sure that it was safe to put into people's homes. [21:40] And in the first month, a 10-year-old girl said, "Alexa, give me something fun to do while my mother's cooking dinner." And Alexa said, take a penny and put it, essentially, into a live socket.

Unique Ethical Challenges Posed by AI

[21:53] This is something. This is an AI-specific problem. This is an AI-specific ethical challenge, and the reason why is that no human would ever do that, no human would ever think of that scenario, and only generative AI would come up with that problem. And we cannot map it; like, the NIST framework does not work for this, because humans cannot map the full universe of possible issues and concerns, because we have literally never seen them before. And those are the challenges that I am particularly focused on. I do think everything else is very important. It was important before AI. It's even more important now. But I think these specifically new ethical challenges that are posed by AI are the things that we AI experts really need to be focused on and constantly pointing people to, because they are things that, as humans, we just don't think about. We've never seen them before.

So, that's so amazing and such a great point. Eva?

[22:57] Yeah, this is really important. I want to add on to that, about responsible AI and who is actually responsible for the outputs that are generated by all these systems. And there is a new technology that is still evolving and will be tested for sure in the years to come, which identifies whether a text or image or video was actually generated by a human or an AI system. That's called watermarking. There are tools, like SynthID from Google and others, that are just emerging. And I'm sure there will be regulations around it and around how to watermark AI output, so we can prevent these use cases in the future.

[23:45] Yeah, that's a great addition, really important. It's really fascinating. It's just a fascinating time, as we've said.

Advice for Product Managers in the Age of AI

[23:53] So, kind of a broader question, but more specific to the audience for this, product managers and product leaders. What specific advice would you give to product folks to ensure that they're gaining an understanding of AI and its nuances? What competencies and skills are going to be needed? And obviously there are varying skills and levels of acumen needed, depending on what you're doing or how involved your product is in AI, et cetera. But, you know, kind of generally, what advice would you all give?

[24:32] Yuying, I'll start with you. What advice would you give to product teams on how to upskill in this world?

So it is a fast-moving and fast-changing space. The thing I always say is: just try it and play with it. The capabilities are evolving constantly. The only way you're going to stay up to date is if you keep experimenting and keep imagining what you can do with all the newest capabilities. The other thing I would say is that there's actually a paradigm shift in terms of user interaction that generative AI and the conversational interface are introducing. As you're designing, don't think about how you used to design software. The new models aren't out yet, but you've got to look at it differently. This is now intent-based design.

[25:25] And I think everybody is in exploration mode, but that's the key: to explore, experiment, and try. That's really the only way forward.

[25:37] Yeah. Tatyana?

[25:40] I would actually say that, while I am not trained as an engineer, I do find with AI that my training in statistics and econometrics has served me very well in understanding what's happening in LLMs. So if you do not have a deep statistical background, I'd encourage you to build one. I would actually say, and this is my personal experience, that understanding error correction models really helps me understand how the outputs of AI kind of go off the rails, and why they go off the rails.
[26:11] And so really having a deep statistical background helps you understand what is going on under the hood, because, as Yuying said, it is a lot of statistics that's happening, right? It's not a database search. When you ask an LLM something, it is not a database search. It does not return the thing that's in the database to you. It is basically making a statistical, error-correcting analysis of how probable it is that its response is close to actual reality, based on the information that it has. And for me, I think that's one of the areas that product people need to lean into. But good news: you don't need to learn how to code, ever, because English is the primary coding language of today and the future, thanks to LLMs.

Katharina, did you have something to add?

[27:10] Yeah, so I have never worked as a product manager, but I think that AI, and I'm coming from the responsible AI ecosystem, is similar to privacy and privacy engineering: you're really in this role where you have to connect so many topics and connect so many different silos or experts or departments or whatever. So if your company does not yet have responsible AI guidelines, which is really super common, though many companies already have some, there's [27:43] always the opportunity for you to spearhead initiatives like this and see who from other roles has interest, and then really take one AI/ML use case and discuss: what do those responsible AI principles mean for us, in our company, in this specific project, for example? That can be something where you can learn a lot. And then, of course, there are so many state-of-AI reports, such as, for example, I think a classic is the one by Air Street Capital; it just came out last week, and it covers industry, research, politics, safety, state of the art, and also predictions for the next 12 months.
So I think with those reports, and together with all your colleagues, this discussion and discourse is always the basis for learning.

Eva?

I also want to add, in terms of skills, what new PMs working on AI products should have. I really think that, unless they're coming from an engineering background or have a deep background in machine learning, [28:58] they need to find some way of partnering with researchers and engineers to understand, at least at a high level, the nuances of what is happening with these models. Maybe not understanding deeply how LLMs and transformers are built, but at least having a high-level overview of the architecture, so that decisions are easier for higher-level executives and teams to make. Because, and that's the difference from conventional, traditional ML or product management projects, these products really come from research and an experimentation mindset, and understanding research papers, for example, in all these complex fields of AI and gen AI is really important. So: having an open mind, and really partnering, or learning and educating themselves, as we all see all the new advances coming.

That's great. So I have two more questions to pose to the panel. This one is, I think, going to be on the minds of a lot of listeners; I get asked this question a lot. [30:17] And you'll see some posts out there on social about how AI is going to be the new product manager, how product managers aren't going to be needed anymore because AI can just replace us. So here's my question to the group: how is AI going to change the product management process, the life cycle, how we go about our jobs? I have to think it will. I have to think there are ways it can improve our work without replacing people. There may be some replacements, or some tasks to be replaced.
But what do you think the product management world is going to look like in terms of how AI is going to change it over the next three to five years? Like, what is AI going to do to the actual building of products?

So I can start with that one. [31:08] In terms of the paperwork piece, where we have to write a lot, that effort is going to reduce tremendously for the product managers who learn to use AI to generate user stories, to generate Jira tickets. I've got some of those going already, right? So, the efficiency in general; use some of the tools, and they're all coming. But for me, what's more exciting, because I've been doing product for more than 20 years, is that sometimes the hardest thing is communicating your vision to the dev or to the next person. There are no-code and low-code environments now. Like, I used to just do quick mock-ups to try to communicate my idea; now I can do full-on prototypes or working code and deploy the app. I think for the product manager that's really driven, the solopreneur running an actual full-on product app business, where it's just you and you run everything around it, I think that is very close. In this world, the great product manager wins, because with the AI's help, I have the vision, I can prototype, I can launch it, I can sell it, I can collect payments. [32:27] That's where I think we can go with it.

And I'm just going to underline that point, because I know several founders who are non-technical founders.

The Unique Insight of Non-Technical Founders

[32:38] And they had an incredible human insight about what people will need, which AI is not going to be able to do, by the way, because it can look at all the data on things that already exist, but it can't tell you what should exist that doesn't yet exist.

[32:56] And so these, you know, these founders, non-technical, they had an incredible insight. And they did exactly what Yuying just talked about: they created their app, they deployed their app on their own with generative AI tools, you know, designed it, deployed it, it looks great. There are some back-end issues, I'll be honest, that need to be a little re-architected, but they got their beta version out, and they're able to get funding at a post-product valuation because they have a product. And they might have some people using the product, right? And that piece, that human insight into what people will actually want and value, that part of product management is still very far away from AI being able to figure out. [33:52] That human element, that insight piece. The rest of it, you know... and product teams will look different, right? And product managers will continue to be really valuable for our insights.

One of the things that I think is going to help a lot of product managers is the ability to communicate, Yuying, like you said, either through mock-ups or through some visuals, but also even wordsmithing, right? I mean, I know we have this concern about content being written for us, or, you know, the things that we put out there not being ours and being AI-generated. But I know a lot of really brilliant, great people who, for whatever reason, you know, maybe English is their second language, or lots of other reasons, right, don't communicate.
They're not the most boisterous voice in the room. But if they could get that writing or that communicating down a little bit better, it might help, and AI can help in some of those ways. So to me, there are lots of exciting things happening. [35:04] And we're always going to have those "we're going to be replaced by robots soon" folks in the room. But I think there are a lot of exciting things happening.

So I have one last question for each of you, and I'm just going to go around the room, the virtual room, if you will. What most excites you about AI and what it can bring to the future? Katharina, I'll start with you.

Yeah, I think AI agents. I mean, AI agents plus robotics, like Tatyana mentioned at the beginning. I think there is this vision that you really combine those two, because before, robotics was very limited in terms of what it could do. And now, if you combine it with generative AI, there's the vision that it just surrounds us. Like, wherever we go, it's always with us, our AI agent. And actually, I'm already missing it. I anticipate I would get used to that very quickly.

[36:07] I love it. That's great. Thank you. Tatyana?

So while there will continue to be jobs, many of the jobs that we currently have are so mentally focused, you know, knowledge work, and there will be far fewer people doing the same knowledge jobs that are being done today. And I believe this is a challenge to humanity: to rediscover our talents and our skills beyond knowledge work, beyond the things that we're doing with writing documents and writing code and, you know, all those types of things. So I think that there is a huge leap in evolution that is going to happen for humanity as a result of AI.
We are going to rediscover skills that we have lost over the last few hundred years, certainly since the Industrial Revolution started, where we have leaned into just: what can we make, what can we do, what can we write? [37:12] And we'll rediscover things like our physical experiences, our spiritual experiences, dive deeper into emotion and the technology of emotion and culture and all the things that make us human that we have forgotten. That's what excites me the most.

That's amazing. Eva, what about you?

[37:38] So what's really exciting to me is not the AI technology itself, but how it will be applied to solve real problems and prove its value for lots of different industries and businesses. And that's what really drives me, what really motivates me: seeing innovation on the business side. Because I've seen a lot of breakthroughs in research, lots of tools that go beyond our imagination, but what matters is how we can deploy these tools to solve real problems, like climate issues, environmental problems, or healthcare issues. This is where my motivation and excitement comes from. It's going to make a huge impact.

Great. Yuying.

So I'm actually most excited about the unlocking of creativity. So writing stories, creating art and music and movies, AI has kind of leveled things up such that even if you never mastered the skill of drawing, and I can't draw to save my life, [38:59] now with these tools you can create images from your imagination using just words. You're starting to be able to create movies and songs. And I know I'm going to go try to write a song, even though I'm not musically trained at all. If I can do it, that unlocks so many creators who weren't creators in the space before.
And I think we're going to see completely new writing, novels, stories, art. You know, I think you'll see kids creating a lot more of the artwork, which before would have required years of mastering the skills. So I'm super excited for that piece, because I feel like we need an art and creativity renaissance, and technology might be the push that gets a much larger population into creating.

It's really exciting to hear all of your excitement about AI and these technologies, because the truth is, these emerging, really world-changing types of technologies don't come along that often, but when they do, they can make a huge impact. As we discussed, they can also have a lot of ramifications that we need to watch out for. And so I loved this conversation. I appreciate each of you being here with me on Product Voices. And thank you to PAC for partnering with us on this. Again, Katharina Koerner, Tatyana Mamut, Eva Agapaki, Yuying Chen-Wynn, thank you so very much for sharing your wisdom and for being with us on this amazing topic. And thank you all for joining us on Product Voices. Hope to see you on the next episode.

Thank you for listening to Product Voices, hosted by JJ Rorie. To find more information on our guests, resources discussed during the episode, or to submit a question for our Q&A episodes, visit the show's website, and be sure to subscribe to the podcast on your favorite platform.
