
The Leadership Growth Podcast
Timely, relevant leadership topics to help you grow your ability to lead effectively.
New episodes every other Tuesday. Launching January 30, 2024
How Leaders Leverage AI for Productivity and Development
AI is “not special,” says Dr. Allen Badeau. “It’s just a bigger part of your technology strategy.”
In today’s episode, Daniel, Peter, and Dr. Badeau discuss the latest developments in the artificial intelligence (AI) landscape and how leaders can best leverage this rapidly evolving technology for productivity and development.
Dr. Badeau is an AI evangelist with over 20 years of experience building AI systems. He is the Co-Founder of Harmonic AI and the host of NowMedia's weekly broadcast "AI Today." Dr. Badeau was also instrumental in developing Stewart Leadership's newest offering, Stewy, an on-demand AI coach trained on over 40 years of Stewart Leadership insights, models, and expertise.
Tune in to learn:
- Why AI is like a “highly intelligent adolescent”
- How finding the right AI is like shoe shopping
- How leaders can balance the risks and advantages of current AI models
Join us for a realistic (and reassuring) look at the current AI landscape.
Questions, comments, or topic ideas? Drop us an e-mail at podcast@stewartleadership.com.
In this episode:
1:53 – Introduction: Dr. Allen Badeau
2:45 – Topic: How Leaders Leverage AI for Productivity and Development
8:20 – How Business Leaders Should Approach AI
20:08 – Stewy: Your Customizable, On-Demand AI Leadership Coach
36:38 – Lightning Round
Resources:
Stewart Leadership Insights and Resources:
Stewy: Your AI Leadership Coach
10 Ways to Grow Your Career with Stewy, Your New AI Coach
10 Cool Things Leaders Can Do With an AI Coach
4 Ways to Develop a Strategy of Adaptation
7 Ways to Prepare Leaders for Disruption
Planning for Disruption: Five Ways to Future-Proof Your Organization
5 Advantages of Becoming a Digitally Literate Change Leader
5 Misconceptions About Digital Transformation–and Why They Matter
If you liked this episode, please share it with a friend or colleague, or, better yet, leave a review to help other listeners find our show, and remember to subscribe so you never miss an episode.
For more great content or to learn about how Stewart Leadership can help you grow your ability to lead effectively, please visit stewartleadership.com and follow us on LinkedIn, Instagram, and YouTube.
[upbeat music] Coming up on the Leadership Growth Podcast:

"Don't do AI for the sake of doing AI, because that's not going to be successful. You have to have your technology roadmap laid out so that you understand what the impacts are going to be, what your goals are, what your customer experience is going to be, how you're going to drive all of that forward, and how you're going to save money. All of those things have to be tightly coupled with whatever AI you're bringing in. Otherwise, it's just going to be the Wild West. That's what a lot of companies are finding out: they try a whole bunch of different things, they don't align any of it to their strategy, and then they're the ones who come out and say, 'See, I told you AI doesn't work. We got nothing out of it.' When you dig a little deeper, you can explain why they didn't get anything out of it. It has to be part of the larger business strategy you have moving forward. As I said last week, when you're looking at AI, it's part of a broader technology strategy that you have to have. It's not special; it's just a bigger part of your technology. So if you've got 5G, 6G, and more sensors doing something else, it all has to align and come together. If it doesn't, it's not going to work."

Hey everyone, and welcome to another episode of the Leadership Growth Podcast. I'm your host, Daniel Stewart, along with my brother Peter Stewart, who is also a host here, and we are honored to have a fantastic guest, Dr. Allen Badeau. Allen, welcome to the Leadership Growth Podcast.

Daniel, Peter, great to see you again.

Great to have you.—Awesome. Thanks for joining. Let me share a brief background on Allen with our listeners. Dr. Allen Badeau has over 20 years of experience building AI systems and is Co-Founder of Harmonic AI, Founder of Allen Badeau LLC, and host of "AI Today with Dr. Badeau," a weekly broadcast with over one million viewers on the NowMedia TV network. That's right, folks: we've got a TV show host here. Allen, again, we are honored to have you here to talk about such an important topic: what the heck is AI, and how can leaders best leverage it? So let's start with a question. It's spring 2025 as we record this, and things are ebbing, flowing, changing, and reshaping every day in the world of AI. Give us a lay of the land of current AI activities, features, and thinking, if you don't mind.

Sure, and that's a really great question, Daniel, because over the last few months AI has gotten a lot of negative publicity. Just about everything going on in the economy has been blamed on AI. There are a lot of nuances there, but the reality is that it wasn't necessarily AI driving all of the layoffs; it was shareholders and similar forces that had nothing to do with AI. But you have to blame somebody, right? And AI is the easiest target. What we're seeing now, though, is that private equity money is drying up and borrowing is expensive, so you're not seeing a lot of AI startups. But we're also seeing a shift in the technology.
About every two weeks there's a new large language model released, something new that shakes the foundation of what we understand AI can do, and that is driving a lot of the activity taking place. So it's not an easy position for business owners and executives to be in right now, because if they tie themselves to one large language model developer, the next day they could already be behind. A lot of folks are hesitant to make the investment, but they know they have to make it, so there are a lot of tough decisions to be made.

That's helpful context, Allen, and let's broaden that aperture a little wider. As Daniel read in your bio, you've been building AI for 20-plus years, and I think a lot of folks are asking, "Wait a minute, it's been around that long?" So as you look at the evolution of AI over the last few decades: what were some of the beginnings of AI, what have been the major leaps of the past several years, and where do you anticipate it heading in the next couple of years? Give us a brief timeline history of AI.

Sure. When I first started, it was around developing genetic algorithms, which are really just optimization techniques: you're trying to figure out what the optimum coefficient of an equation might be, or something like that. That drove some early developments, and then we went into the neural net age, which took off on its own. But the underlying issue we always had was that we didn't have enough compute power. These things take a lot of data to train; the next model I'm hearing about is going to cost anywhere from 150 to 250 million dollars to train. The machines we had back then couldn't crunch the numbers in anything less than months for some of these runs, and those were the smaller models of the day. I remember I had a DEC Alpha computer that was state of the art, and it had less RAM than my cell phone does. Now, with GPUs and TPUs, and some early work around quantum annealing and those kinds of things, Moore's Law and the leaps and bounds in compute power have finally allowed us to do the computing we need and get results back in our lifetime.
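Those early genetic algorithms are simple enough to sketch in a few lines of Python. The toy problem below (recovering the coefficients a and b of a line from noisy samples) and every parameter in it are illustrative choices, not anything from the episode:

```python
import random

# Toy data generated from y = 3x + 7 with a little noise.
random.seed(0)
data = [(x, 3 * x + 7 + random.gauss(0, 0.1)) for x in range(20)]

def error(candidate):
    """Sum of squared errors of y = a*x + b against the data."""
    a, b = candidate
    return sum((a * x + b - y) ** 2 for x, y in data)

def evolve(generations=100, pop_size=40, mutation=0.3):
    # Start from random coefficient pairs.
    pop = [(random.uniform(-10, 10), random.uniform(-10, 10))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)            # fittest (lowest error) first
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = random.sample(survivors, 2)
            # Crossover: average the parents, then mutate slightly.
            child = tuple((g1 + g2) / 2 + random.gauss(0, mutation)
                          for g1, g2 in zip(p1, p2))
            children.append(child)
        pop = survivors + children
    return min(pop, key=error)

print(evolve())  # approaches (3, 7)
```

Selection, crossover, and mutation are all there, just in miniature; production genetic algorithms differ mainly in scale and in how candidates are encoded.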
Wow. So many aspects. You're making me reflect on an experience I had just last week. I was speaking at a leadership summit for a large bank, with about 40 leaders there, and during a Q&A portion of the meeting, one of the leaders asked the CEO, "What is our AI policy? What is our AI approach?" His response was interesting, and quick: "We don't have one. We don't need one. What we're going to do is simply leverage the AI tools that come with each of the system providers we use." So, in some ways, the bank is looking to the vendors of all the processes and systems it utilizes to use AI to make those systems and processes more efficient. It was an interesting way of approaching it, and I share it as just one example. Allen, as you hear that and think about other business leaders, how should leaders be approaching and thinking about AI these days?

I tell all my clients that they have to integrate it into their larger business strategy. Don't do AI for the sake of doing AI, because that's not going to be successful. You have to have your technology roadmap laid out so that you understand what the impacts are going to be, what your goals are, what your customer experience is going to be, how you're going to drive all of that forward, and how you're going to save money. All of those things have to be tightly coupled with whatever AI you're bringing in. Otherwise, it's just going to be the Wild West. That's what a lot of companies are finding out: they try a whole bunch of different things, they don't align any of it to their strategy, and then they're the ones who come out and say, "See, I told you AI doesn't work. We got nothing out of it." When you dig a little deeper, you can explain why they didn't get anything out of it. It has to be part of that larger business strategy moving forward. I talked about this last week, too: as you look at AI, it's part of a broader technology strategy that you have to have. It's not special; it's just a bigger part of your technology. So if you've got 5G, 6G, and more sensors doing something else, it all has to align and come together. If it doesn't, it's not going to work.

—Yeah.—Yeah. A quick follow-up, and then I want to get your thoughts on this too, Peter. What should AI not do? [laughing] Right now a lot of business leaders are feeling both optimistic and overwhelmed, thinking, "Let's put AI in all sorts of places and things." What are the places business leaders should not send AI to help and support, if you don't mind?

Well, it can help in so many different places. But the easiest way to think about it is this: AI should not do anything by itself, left alone, for long periods of time. [laughing] You have to have somebody watching what it's doing, checking in on it, making sure it's behaving, because these models are still very temperamental. And you have to put really good boundaries around it and tell it exactly what it should not be doing: what systems it should not be looking at, which customers it should never talk to, what software it should never be allowed to write on its own. You have to really think about things like that. There's also a lot of debate going on right now about how these models come up with their thought processes, what's called chain of thought. Some people say, "No, don't tell it not to do this," and others say yes. There's no good answer yet. You have to play with it, you have to break it, and you have to see what works best for your environment. That's the easiest way to keep it in the fence.
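What "good boundaries" look like in code varies by stack, but the simplest version is a deny-list policy checked before any model-initiated action runs. Everything here, the field names, the action shape, and the rules themselves, is an illustrative assumption rather than any real product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Guardrails:
    # Illustrative policy: systems, audiences, and capabilities the
    # model must never touch. Populate from your own risk review.
    forbidden_systems: set = field(default_factory=lambda: {"payroll", "prod-db"})
    forbidden_audiences: set = field(default_factory=lambda: {"minors", "regulators"})
    may_write_code: bool = False

def check(action: dict, policy: Guardrails) -> None:
    """Raise before executing any model-proposed action that breaks policy."""
    if action.get("system") in policy.forbidden_systems:
        raise PermissionError(f"blocked: model may not touch {action['system']}")
    if action.get("audience") in policy.forbidden_audiences:
        raise PermissionError(f"blocked: model may not contact {action['audience']}")
    if action.get("kind") == "write_code" and not policy.may_write_code:
        raise PermissionError("blocked: model may not write code unattended")

# A supervising loop would call check() on every proposed action and log
# the outcome, so a human can review what the model tried to do.
check({"system": "crm", "audience": "customers", "kind": "draft_email"}, Guardrails())
```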
So it sounds like a key part is leveraging AI as it fits into a broader strategic focus, not just doing it because everybody's doing it. And then there has to be a level of supervision around it. You can't leave it to its own devices; this is a highly intelligent adolescent [laughing] that you don't want running the world.

—That's right. That's right. And the other thing people forget: if you have bad processes, AI is not going to fix them. All it does is accelerate your bad processes. You get to make bad decisions faster, which means you go out of business faster. So you have to treat it as a broader strategy: as you implement AI, you implement change management, you implement new processes, and all of it has to be coordinated. If you do just one piece, you're missing a true opportunity to take advantage of it.

—Mm hmm. That's such a great point, because too often, when any of us see something new, we overestimate its ability to fix things on its own. Part of that is because the technology is so unknown and we're not quite sure how to fit it into things, but the other part is the hope and wish that it will be easier, that we won't have to do the hard work. We will still need to do the hard work of process development, having challenging conversations, scenario planning, and staying on top of this. And some of the trade-offs that then need to be discussed come down to accuracy versus speed. What does that mean, and how do we navigate those two? I suspect many of us expect AI to be instantly accurate, but both qualities are not always present at once. How do we balance them?

That is a great question, and it's an issue we have dealt with for many years. Now, with these large language models being so easy to use, all you have to do is type in a question. Most people are using ChatGPT like Google, and I'll forgive them, but it's so easy now to get answers to really tough questions because you don't have to dig into them. We forget that, like Google and the internet, the AI is not always right. Here's a perfect example anyone can try. Take the free version of ChatGPT and say, "Write me a report about the weather over the last 30 days and do some analysis around it." The AI will go out, get you data, generate graphs, and give you a report. But I bet if you dig in and ask, "Where did you get the data?" it will give you a source that does not exist, because it made up the data and wrote you a report based on fake data. It pretty much lied to you. [laughing] Why did it do that? Because it wants to give you an answer.
It's not going to tell you, "I don't know," or "I can't do this," or "the data doesn't exist." The free versions are free because there are no checks and balances around them. When you get to more advanced models, like the paid version of ChatGPT and some of the newer ones coming out, there's reasoning behind the answer. There's chain of thought; there are checks and balances. With those models you don't get an answer instantaneously. It can take up to five minutes in some cases, because the model is churning and checking and doing what it needs to do. But the models still hallucinate. They still lie. You still have to check your data, and you still have to check it against some sort of ground truth you've been able to pull together. Those are the fundamental principles we forget now that it's so easy to get an answer. It used to be "Google said this"; now it's "ChatGPT told me this." My wife sees a perfect example of that: somebody came in and said, "I have a spot on my arm, and ChatGPT said it was not cancerous." She asked them, "Did ChatGPT look at it?" It was cancerous. Those are the kinds of things folks are forgetting: it's not always right. And if you leave it by itself, you're really in trouble, because then you could have a few days' worth of wrongness.
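The "ask it where the data came from" test can be automated crudely: after a model answers, ask it to list its sources, then check whether each cited URL even resolves. `ask` below is a stand-in for whatever model call you use; everything else is the Python standard library. Note that a URL that resolves still proves nothing about the data's accuracy, while one that doesn't is a strong hint the citation was invented:

```python
import re
import urllib.request

def cited_urls(ask) -> list:
    """Ask the model for its sources and pull out anything that looks like a URL."""
    answer = ask("List the exact URLs of every data source you used, one per line.")
    return re.findall(r"https?://\S+", answer)

def audit_sources(ask) -> dict:
    """Return {url: reachable} for every source the model claims it used."""
    results = {}
    for url in cited_urls(ask):
        try:
            req = urllib.request.Request(url, method="HEAD")
            urllib.request.urlopen(req, timeout=5)
            results[url] = True
        except Exception:
            results[url] = False   # dead or fabricated citation
    return results

# Stub model for demonstration; swap in a real API call.
fake_ask = lambda prompt: "https://weather.example.invalid/past30days.csv"
print(audit_sources(fake_ask))  # the .invalid domain never resolves: {..: False}
```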
Well, this is eye-opening as you think about AI and its relationship to the output it provides. Going back to that analogy: this is not only a highly intelligent adolescent left unsupervised in a room, it's one that wants to please and provide an answer [laughing], and it won't tell you when it's making it all up. Now take that perspective alongside what you shared earlier, Allen, that new large language models are coming out on a regular basis. How do you balance early adoption with waiting to see what's proven, while not wanting to be left behind? How do you balance all of those risks of timing and adoption for an organization?

FOMO, right? Fear of missing out. You're always worried that your competitor across the street has something you don't have: they roll it out and you're in deep trouble. —Mm hmm. From a business perspective, if you stick to your roadmap and your own timing, then when you roll it out, it's going to be the right time for you. If that means extra testing, extra benchmarking, extra whatever, depending on what it's going to do, then that's what you have to do. You have to stick with it, not worry about what your competitor is doing, and trust that what you have will be better than what they have. Otherwise you run into the scenario where you release something one hour after ChatGPT's latest release, and a week later you've angered just about every customer you have because you decided to put it in your call center. That's when I get phone calls. So if you have good business practices and a good management team, the technology is the technology; it will fit into that broader strategy. But if you're just grasping for anything you can to get an advantage, AI is one of those technologies where, if you're lucky, it helps you, and if you're unlucky, it hurts you.

Building on that, Allen: many of these large language models are very general and broad. You can ask them all sorts of questions, and it's important to remind yourself that what you get back is a first draft. You need to verify it and work with it, not just take it, put it on the shelf, and call it good. And they're very general-knowledge-based. There's a trend right now toward developing smaller and/or more specific language models. In full transparency for our listeners: Allen and his company Harmonic AI have been great partners with us as we've developed our own specific AI cognitive persona called Stewy, a specialized AI for leadership development. —Mm hmm. So talk to us, Allen, about this trend toward specialized AIs. What's the advantage? How do you know when they're better than the general, broad LLMs? How do you distinguish them?

Good question. If you think about how humans solve problems, we don't always try to solve the entire problem at once. We bite off a chunk we know we can solve, come up with an answer, and apply it to something else. These large models are good at answering a whole bunch of different questions, but they're not great at anything. That's why some of them can do math and some can't, why some are not as smart as others, why some can pass the MCAT and some can't. It depends on the data. If your data isn't great but you have a lot of it, you'll probably answer general questions pretty well. But when you start trying to get specifics out of these models, that's when you can get yourself in big trouble. That's when they give you an answer so beautiful and so convincing that you'd put your life on it and say, "I guarantee that's right." Probably wrong, because specifics are what they have a very difficult time handling. The other issue is that we're not all going to carry a giant desktop computer around with us. We want mobile access, laptop access, and getting those huge models to run on edge devices is very difficult, almost impossible right now. Getting specific models lets us really fine-tune them and make them very accurate, almost trustworthy. I won't say 100 percent, because from my perspective nothing will ever be 100 percent trustworthy, but at least they can be right when they respond. So with Stewy, we compare the results we get from all our fine-tuning against the larger models to check our accuracy. We're lucky enough to have some pretty easy benchmarks to throw at it, like "What's Gem 42?" Nine times out of ten (really more like 9.9 out of 10), Stewy rocks it. The big general language models out there? Zero out of ten. The answer sounds good, "Oh, Gem 42, with blah, blah, blah," and it's worded beautifully, but it isn't right. That's the trick. —Mm hmm.
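The comparison Allen describes, scoring a fine-tuned model against a general one on questions with known answers, boils down to a small harness like this. The model callables, the sample questions, and the exact-match grading rule are all placeholder assumptions; real evaluations usually need fuzzier grading than string equality:

```python
def accuracy(model, benchmark) -> float:
    """Fraction of benchmark questions the model answers correctly (exact match)."""
    hits = sum(model(q).strip().lower() == a.strip().lower() for q, a in benchmark)
    return hits / len(benchmark)

# Known-answer questions drawn from your own proprietary material.
benchmark = [
    ("What is Gem 42?", "placeholder answer from the source material"),
    ("Name the four stages of the coaching model.", "placeholder answer"),
]

# Stand-ins for a fine-tuned model and a general-purpose one.
def fine_tuned(question: str) -> str:
    return "placeholder answer from the source material" if "Gem 42" in question else "?"

def general_llm(question: str) -> str:
    return "a fluent, confident, and wrong answer"

print(f"fine-tuned: {accuracy(fine_tuned, benchmark):.0%}")   # 50% on this toy set
print(f"general:    {accuracy(general_llm, benchmark):.0%}")  # 0%
```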
As you're describing this, Allen, an analogy comes to mind; see how it fits with what you're describing. Say I need shoes, so I go shoe shopping. If I just need a shoe, any wide variety, nothing specific, I can go to Walmart or Target. They have a whole bunch of shoes, and I'll probably find something that works. But if I need an athletic shoe, they'll have a smaller selection, so maybe I go to a Dick's Sporting Goods or a Foot Locker, which will have more. And if I need a running shoe, now I go to a specialty store, where the people really know running shoes: they'll look at my feet, measure them, and see what fits best. It sounds like you're walking us down that path. Some of these large language models provide a little bit of everything, but if you want something niche, specific, curated, and customized, that's the running-store experience.

Yeah, that's a great analogy, Peter, because if you think about where these models get their data, it's all over the place, and we're talking huge amounts of it. For these models to work and be efficient, they have to use optimization algorithms that won't stay correct as you get more specific in the details. Take Stewy, for instance. We could take all the data we use for Stewy, the fifty million data points we generated (closer to 200 million in the next models we release), put it into a large language model by itself, and we might see one or two percent improvement in accuracy. If that's all we were doing, that's not very good. Instead, we've refined and honed that data to create our own smaller language model that is really focused on being an executive coach. It has your team's personality traits. It has all the training that's out there, the podcasts, the videos; I even found an early video of your dad giving presentations. It's all in there, and that allows it to be an excellent executive coach focused on improving team performance. But you don't want it to build a power plant. You can't take a persona we've trained as an executive coach and use it for other things; it doesn't go well. It's not going to be good at all, actually. And you don't want Stewy running the service desk, because that's not what he's trained on. —Yeah. It also means we're sacrificing some speed. The free version of ChatGPT is almost immediate, right? But the $200 version, with the reasoning and the checks and those sorts of things, takes longer: three, four, five minutes. That's what Stewy is based on, because we want accuracy. We have to have accuracy.
Otherwise, there would be no reason for you to have any Gems, because we'd be making them up all the time. [laughing] We could call it the Gem of the Week: Stewy came up with his own Gem, because that's what happens when you use ChatGPT.

Yep. Yep. These are great points. So let's make it specific for leaders right now, as they're hearing this and thinking, "What can I do to make better decisions each day? What can I do to develop myself, my team, and my career?" It sounds like that's the kind of running shoe we need a very specific AI for. Now, you also said something very interesting: there's emotion, there's personality, that you've built in. Talk to us about how that's missing in other large language models, and what the advantage is of having an AI with that built in, like Stewy. What's the contrast?

Believe it or not, large language models are only a very small piece of the AI field. There are so many other fields and branches of AI that aren't even talked about now. Large language models are one small branch out of 256, or 265, whatever the count is this week, of different fields of AI, and—

—Wow.

—there are other things that do better.

Allen, before you head into that, I'm going to interrupt you: what are a couple of examples of other AIs that don't fall under LLMs? I know you've piqued the curiosity of listeners.

Sure. Robotics is a good example. Expert systems are another: that's where you have if-this-then-that rules, laid out like a chart. Neural nets are another one, and you could put neuromorphics into the neural net bucket. There are so many different fields and branches out there; heck, genetic algorithms are another one, for goodness' sake. I can't believe I didn't mention that one up front. That's the thing.
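The "if this, then that" expert systems Allen mentions can be captured in a toy forward-chaining engine: hand-written rules fire against a set of known facts until nothing new can be concluded. The rules and facts here are made up for illustration:

```python
# Each rule: if all the premises are known facts, conclude the consequence.
rules = [
    ({"engagement_low", "turnover_rising"}, "team_at_risk"),
    ({"team_at_risk"}, "schedule_one_on_ones"),
    ({"deadline_missed", "scope_growing"}, "replan_project"),
]

def forward_chain(facts: set) -> set:
    """Apply rules repeatedly until no new conclusions appear (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"engagement_low", "turnover_rising"}))
# {'engagement_low', 'turnover_rising', 'team_at_risk', 'schedule_one_on_ones'}
```

Unlike a language model, every conclusion here traces back to an explicit rule, which is why this branch of AI is often favored when decisions must be auditable.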
And some of these models are very good at helping with decision-making. Large language models are not, because you don't know if you can trust the data, and you don't know if you can trust the decision. You can use them as another data point. But even then, "the computer said this, let's go with it" is often the executive team's mindset: they bought the tool, so they trust the tool. You'd better hope the tool has other types of AI involved in it; otherwise I wouldn't rely on it. That's where Stewy comes into play. Stewy uses more than 13 different types of AI, so that as Stewy processes information and looks at the responses and the questions, he can draw on a sense of empathy, on his personality traits, and on an understanding of what the best decisions are based on the traits we have provided him. That's Stewy's DNA. We've trained it so that Stewy's DNA is not a prompt. We're not telling Stewy to pretend to act like something. Stewy fundamentally believes his mission is to be an executive coach, and that's what Stewy operates by. It's not "ChatGPT, act like Stewy and be an executive coach," —Mmm.— because that's not going to go well. Stewy is trained fundamentally to act like that, and that's the power of what we've been able to do with Stewy: coupling the proprietary data and information you've built over 45 years with the tools you use and all the AI we're using. Put all of that together and you get Stewy, who is accurate and who doesn't misbehave. I've tried to get Stewy to misbehave many times, and I still haven't been able to get him even to swear at me, which is really surprising. Those are the things we're trying to provide executive teams so they can have some reliability.

Yeah, this is tremendous. Building in these emotions, so that Stewy has a sense of empathy, is such a unique element. So here's my other question before we get to the Lightning Round and the one thing leaders should remember. Let's talk for a moment about prompt engineering, because this is one of the great challenges folks face. They might have a fantastic large language model and think, "There's so much to access, but what the heck do I ask it? How do I phrase it? How specific do I get? Is it going to be confidential? Is it going to take what I say, learn from it, and share it with others?" There are lots of questions around this. What are some solutions that help manage the accuracy, effectiveness, and privacy of prompt engineering itself?

AI is hard. It scares people, and they don't know what step to take first, so they're nervous about using any tool. When you use ChatGPT, the free version especially, everything you put in there is getting ingested; it's used for training, and it's in there. If you use tools like DeepSeek, it's even worse, because they own your proprietary data after you put it in. Cancel your account, and guess what? They keep the data; they still own it. That makes people very nervous to take that first step when they type something into the prompt. So usually they'll put in a very general question: "How can I improve my executive leadership skills?" That's pretty standard; I imagine most people would put something like that in. And the responses will be all over the place, very general, and people will say, "Oh, I knew this thing doesn't work; it just gives me general answers." Now, if you said the exact same thing to you two, you would ask: which part do you want to improve? Where do you want to go? How long do you want to be there? Is this a next-week thing, or when are you starting? You'd have 50 questions. These large language models, because they want to answer you, often won't ask those 50 questions. They'll just give you a response. So it becomes your responsibility to become a better prompt engineer.
So one of the things we did with Stewy is create the Prompt Playground, where you can ask a general question and Stewy will give you a better response: "Your prompt should be this, this, and this. Here is an optimized prompt, here are the reasons I optimized it this way, and here are the benefits you'll see." Stewy is trying to give you insight that improves your ability to prompt. As you get better and more specific, your answers are going to be a lot better. That's just how it is, except that with some general large language models, the more specific you get, the worse they get. [laughing] They lie more. So you've got to watch out for that.
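One way to approximate that kind of prompt coaching, without claiming this is how Stewy actually works, is a meta-prompt wrapper: hand the user's rough prompt to a model along with instructions to return a sharpened version plus the reasons for each change. `complete` is a stand-in for whatever model call you use, and the template wording is an assumption:

```python
META_PROMPT = """You are a prompt coach. Rewrite the user's prompt so it is
specific about goal, context, audience, constraints, and desired format.
Return:
1. The optimized prompt.
2. A bullet list explaining each change and why it helps.

User's prompt: {rough}"""

def coach_prompt(complete, rough: str) -> str:
    """Ask the model to improve a rough prompt and explain its edits."""
    return complete(META_PROMPT.format(rough=rough))

# Stub model call for demonstration; swap in a real API client.
echo = lambda p: "(model response would appear here)\n--- prompt received ---\n" + p
print(coach_prompt(echo, "How can I improve my executive leadership skills?"))
```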
Yeah. So Stewy is not that highly intelligent adolescent, eager to please, unsupervised in the corner. Stewy is a mentor, a coach: highly trained, seasoned, just waiting to help with your development.

That's right. —Yeah. And Stewy wants to help. But you two know this, having dealt with me in a couple of different classes: it's a very personal experience. If Stewy did not have the ability to recognize that, and if I let him get in my head, it would probably be a nightmare scenario for him. Peter, I remember telling you that the first day we met. [laughing] Stewy has to be able to recognize those kinds of things, to make small talk, and to have relationships with folks that are personal and protected. That's the other thing: we built DOD security standards into Stewy, so everyone has their own Stewy, and it doesn't go anywhere else.

With that, here's the last question for you, Allen, a Lightning Round question: what is the one thing you would recommend leaders do to best take advantage of AI in their day-to-day job?

Well, I know everybody won't get Stewy, so I get that; that would be my first recommendation, but I'll take a step back. They've got to find some tool. They really do. They have to figure out something that works for them and their environment, because if they're not using AI somehow, they will probably not be in that leadership role much longer. Everybody is worried about AI replacing jobs, but it's not AI that's going to replace jobs; it's people who know how to use AI who are going to replace jobs. If you don't know how to use it, you will probably be replaced by a person who does. That's the big driver behind a lot of next-generation staffing and HR, from the executive leadership team all the way down; nobody is going to be immune. For instance, imagine a Stewy that's an HR vice president. You don't need somebody with 25 years of experience in HR anymore; now you need a junior or mid-level person who knows how to use that HR Stewy. Those are some of the things executive teams are going through right now, and it's not easy to get the skill sets right. You can't just take your IT person and plug them in to solve all your AI problems; it doesn't work that way. It has to be cross-functional. There are a lot of different forces making executives rethink how they staff, how they use AI, and how they're going to deploy it.

Yeah. Allen, thank you so much for the great insights. It's been a fantastic conversation. Thank you.

It's been fun, guys. Thanks. —Always a pleasure.

And to all of our listeners, thank you for joining us. Please like and subscribe to receive notifications for future episodes, and if you have questions or comments, please email us at podcast@stewartleadership.com. We'd love to hear your thoughts and topics we could focus on in future episodes. As always, join us again to hear tools and tips to help you improve your leadership capability. I'm Daniel Stewart, and my brother Peter and I wish you the best in your leadership journey. Take care, everyone.

[upbeat music] If you liked this episode, please share it with a friend or colleague, or, better yet, leave a review to help other listeners find our show. And remember to subscribe so you never miss an episode. For more great content, or to learn more about how Stewart Leadership can help you grow your ability to lead effectively, please visit stewartleadership.com.