
The Leadership Growth Podcast
Using AI to Build Innovation
What if AI were the key to innovation inside your company?
Today’s guest suggests that AI puts innovation in the hands of people who aren’t necessarily scientists or programmers.
Travis Hoppe served as Assistant Director of AI Research and Development at the White House Office of Science and Technology Policy. He co-authored The Pile, a pioneering open source dataset used for training large language models that served as a catalyst for promoting open science within the field of AI, and he holds a PhD in physics.
In this conversation, Daniel and Travis discuss everything AI: from the basics of machine learning and algorithms, to implications for leaders, to the most promising applications of the technology.
“Now, people can experiment with some really good ideas,” Travis says. About 20 percent of your organization, he estimates, really just wants to build things. “Oftentimes you just need to bring them together and you need to give them the freedom to do so.”
Tune in to learn:
- Why guardrails in AI innovation are so important
- Why leaders have a unique opportunity to be pioneers right now
- Why you don’t need to fear “the singularity”
Join us for a fascinating conversation about the present, and future, of AI.
In this episode:
1:35 – Introduction: Travis Hoppe
2:53 – What is AI?
9:25 – Algorithms: A Brief Review
13:05 – How Should Leaders Think About AI?
18:40 – AI Guidance for Teams and Businesses
28:00 – AI in Practice
32:40 – Lightning Round
Travis Hoppe profiles:
Memorandum M-24-10 (listed under “Memoranda 2024”)
Stewart Leadership Insights and Resources:
- 4 Ways to Encourage a Healthy Failure Culture
- The Power of Imagination in Planning
- 7 Ways to Prepare Leaders for Disruption
- 5 Advantages of Becoming a Digitally Literate Change Leader
- 5 Ways to Help Manage Your Team’s Change Exhaustion
- AI-Powered Talent Retention
- Women and AI
If you liked this episode, please share it with a friend or colleague, or, better yet, leave a review to help other listeners find our show, and remember to subscribe so you never miss an episode.
For more great content or to learn about how Stewart Leadership can help you grow your ability to lead effectively, please visit stewartleadership.com and follow us on LinkedIn, Instagram, and YouTube.
Coming up on the Leadership Growth Podcast:

Travis: There's always this 20% in every organization that I've been a part of that really just wants to build and wants to put stuff together. And oftentimes, you just need to bring them together and you need to give them the freedom to do so. I've always been really enamored with the idea of the 20% time that Google used to have, or still has, I'm not sure, because it really does enable these ideas to grow. And I would say that they're most effective when they don't always have to be driven by the mission. So sometimes you get leadership and they're like, oh, we should have this thing. And they're like, okay, we're going to have very specific things, and then you should work on these projects, and that's your 20% time. That completely ruins the idea of having this chance to be creative and a chance to grow. Because the goal of enabling the technology is to ultimately be able to use it in your normal day-to-day tasks or to build new products, both externally and internally. But also, there's a sneaky side effect of building up your workforce, getting them comfortable with it, and having them feel empowered about it.

Daniel: Hey, everyone. Welcome to the Leadership Growth Podcast. I'm your host, Daniel Stewart. My brother, Peter, who normally co-hosts with me, is out ill, so I hope he recovers and feels better soon. However, I have a tremendously special guest. We are really pleased to have Travis Hoppe join us. Welcome, Travis.

Travis: Thank you for having me.

Daniel: And so, folks, today is AI. It is all about AI. Let me read some of Travis's background, just so folks have a sense of who he is, because we are going to dive into all things AI: what we as leaders need to pay attention to, and how to best leverage it for our teams and organizations. Travis Hoppe served as Assistant Director of AI Research and Development at the White House Office of Science and Technology Policy in the Biden administration; I believe he just recently finished that role. Prior to his time in the White House, his team enabled the CDC to be the first federal agency to unilaterally deploy a generative AI model, ChatGPT, to all staff. Travis co-authored The Pile, a pioneering open source dataset used for training large language models that served as a catalyst for promoting open science within the field of AI, and he holds a PhD in physics. Travis, again, welcome.

Travis: Thank you.

Daniel: So let's start with a broad question here. Word association. When I say AI, what comes to your mind, with everything that's going on, all the press, all of the headlines? What is it to you, in your experience?

Travis: It's a really hard question to answer, because if you were to ask me five years ago, AI would be mostly research and development. There was some AI for process development; this was more your traditional machine learning. But with the rise of generative AI, and with the rise of these really powerful transformer models that are covering everything now, we're seeing AI not just be an academic curiosity or a problem-solving device for very technical problems. What we're seeing is a technology that you can't just throw in the hands of your data scientists anymore. It's something that touches on legal. It's something that touches on labor issues. It's something where you have to think about UI and UX. It's something that has international policy considerations.
These really weren't there maybe 10 years ago, and they were on the back burner five years ago. But I would say if you're a leader in any business within the U.S., AI isn't just a technology anymore. It's something that you have to think about across all the different components of your business. So that's kind of a non-answer to get us started, but it's genuinely complicated, because I think we could spend these next 25 minutes discussing all of these different aspects of AI.

Daniel: Well, give us a lay of the land for the moment. Many of us know the term AI. Many of us know the term generative AI. What else is there other than generative AI? Because I have a feeling that many average folks, shall I say, many non-techies, merge AI with generative AI and aren't quite aware of all the other uses. Give us that broader picture.

Travis: This is a really fun question, because the federal government also struggled with defining AI, and the legislative branch struggled with what AI means when we want to regulate it or legislate it. If you go back to some of the documents that have been put out over the last couple of years, AI is a really, really broad term. It encompasses the things that we would traditionally call machine learning. And this goes all the way back down to, you know, we had debates about whether logistic regression is a form of AI. Because at its heart, what AI is, is a set of data and a model, which is just a bunch of weights, a bunch of numbers that you're trying to optimize so that the output matches a certain target. So it could be: can you predict whether a customer is going to click on this new button that we have? You have your A/B testing for something, you have a bunch of user data, and you want to make a prediction. That is considered AI as well; it sits in the field we normally call machine learning.

And then you have the stuff that we call generative AI. I think what makes generative AI different as a fundamental technology is the way it's trained. Typically, we would play a game like: is this a hot dog, is this not a hot dog? And you try to have the machine learn hot dog or not hot dog. What generative AI did differently is in its training regime. Instead of trying to predict one specific thing, which you need labeled training data for, and it was very expensive to get that training data, you just tried to have it predict something it saw a lot of. If you give it a bunch of pictures, you say, give me more pictures like this. If you give it a bunch of text, you say, give me more text like this. These were the very first generative models. You couldn't really steer them very much, but we combined this with this crazy thing called reinforcement learning, and then you could start steering them to have more chat-like conversations. So all of those machine learning tasks we had before, those are still encompassed by AI. What's different about generative AI is that you build these foundation models that can take large, large corpora of text or images or videos, and then from there, you can build specialized models on top of them.

Daniel: So it sounds like there's generative AI, and then there are all these other types of AI that ultimately are trying to predict what might come. Is that a fair way of putting it?
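To make the "give me more text like this" training idea concrete, here is a minimal sketch: a toy bigram model in pure Python that learns which word follows which in a sample text, then samples new text with the same statistics. It is only an illustration of the objective; real generative models learn the same kind of next-token statistics with transformers over billions of weights.

```python
# Toy "give me more text like this" model: learn which word tends to
# follow which in the training text, then sample new text with the
# same statistics.
import random
from collections import defaultdict

text = (
    "the model predicts the next word the model sees the data "
    "the data trains the model and the model generates more text"
)
words = text.split()

# Count bigram transitions: which words follow each word, and how often.
following = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        # random.choice over the duplicated list samples in proportion
        # to how often each transition appeared in training.
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```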
Travis: I think a really good way to think of this, especially if you're at the top of an organization, is that all AI does, fundamentally, is model something it's seen before. And you can make that more adaptable, right? Maybe you're looking at crop yields and you're trying to model or predict what's going to happen, or impute missing data because your sensors were broken. That is a form of AI. If you're trying to model natural language, which is roughly what generative AI does, that is also a model. It's a much more complicated model, and we didn't think we could do that before. But at the end of the day, what AI is doing is modeling the data that we've seen before.

That's why, when you're thinking about the capabilities of AI, it's really useful to think about interpolation. If you have a bunch of data points and you want to predict something within those data points (maybe it's something that hasn't quite occurred, but there have been other instances like it), your predictive modeling, your machine learning, your AI is going to do a really good job, if you've trained it well. When you try to extrapolate outside of that, it's not going to do as good a job. And so there's a question I think every data scientist gets: they get called into the office, and it's, oh, cool, you built this thing, now can you make it do X, where X is an impossible task. And oftentimes we go through and explain the same thing again: no, you give us a bunch of training data, or you give us a bunch of things to model after, and we can probably make a machine do the same thing. I like the analogy that AI today is about where a really good intern is. There are people trying to push the boundaries of that in mathematics and other really esoteric, highly specialized fields. But broadly, the AI that's accessible to everybody today can do the sort of task you could show somebody and talk them through: run these sorts of numbers, do these sorts of statistics.

Daniel: So a question I've always wanted to ask: algorithms. What's the difference? Or chatbots, but more so algorithms. Isn't that a similar type of early development in AI? Because clearly AI goes farther than that, but in some ways it sounds similar.

Travis: So we're getting really technical and in the weeds here, and let me do a disclaimer before I answer your question. When we thought about this, OMB released a memorandum, M-24-10, on how the government is supposed to regulate itself with respect to AI.

Daniel: And OMB, just to let folks know: Office of Management and Budget?

Travis: Yes, Office of Management and Budget. Thank you for filling in my acronyms.

Daniel: Sure, sure. Of course.

Travis: Recovering federal worker. M-24-10, for those of you who are paying attention, if you want to look it up at home: this memorandum described what we as federal agencies should do when we're using AI. And the focus wasn't so much on the definition, although it provides a definition (and I'll come back to your question about algorithms); it focused on the harms that a certain model could cause. So if you're NOAA and you're looking for sea lions, which is something NOAA does; it's one of their AI projects in the inventory, you can look it up.
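That interpolation-versus-extrapolation point can be seen in a few lines of Python. This is a toy sketch, not one of the episode's examples: fit a flexible model on data from one range, then ask it about points inside and far outside that range.

```python
# Interpolation vs. extrapolation: fit a flexible model on x in [0, 10],
# then ask it about points inside and far outside that range.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = x + rng.normal(0, 0.5, size=x.size)  # underlying signal: y = x

model = np.poly1d(np.polyfit(x, y, deg=5))  # deliberately flexible fit

for xq in (5.0, 50.0):
    print(f"x={xq:5.1f}  predicted={model(xq):12.1f}  actual={xq:6.1f}")
# Inside the training range (x=5) the prediction lands close to 5.
# Far outside it (x=50) the high-degree terms typically dominate and
# the prediction drifts far from the true value.
```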
If that goes wrong, maybe you don't track all the sea lions, and then maybe that's bad for the scientists doing it. But if you're, I don't know, the Centers for Medicare and Medicaid Services, or the Department of Homeland Security, and you're dealing with AI, or any machine or system that operates on something trust-, safety-, or rights-impacting, there's a whole bunch of things you should consider. We can get into the definitions and what's important and what's not, and I will tell you, they're not universally agreed upon. But there are two really, really useful ways to think about AI. One is the harms it's going to cause, and so what sort of preventative measures you should take. The other is: what are the capabilities of that model?

But to go back and answer your question, and get off my soapbox for a second: you can have algorithms that aren't machine learning or AI. Say you have a decision tree. Back in the day, you might have a checklist: you're a government worker, somebody comes in, and if they look like this, they go in this box; if they look like that, they go in that box. And then if they're in this box, they get sorted again: if they're this age, they go in this box; if they're that age, they go in that box. That is absolutely an algorithm. It's not considered machine learning or AI, because you're not training a model to optimize the weights for it. If instead you had a machine figure out the best way to put people into boxes (what fraction of people, where the boxes should be, what the boxes should actually be, how to break them down), that's a decision tree whose weights you can actually learn. Then it becomes machine learning, because you're teaching the machine to learn the patterns of the data. But before that, we just had an algorithm that says it goes in here or it doesn't go in here. And that's the distinction: all machine learning, all AI, has algorithms at its heart, but not all algorithms are necessarily machine-learned.

Daniel: Yeah. So machine learning is really the distinguishing characteristic: the system learns for itself, to make things more accurate, better in some way. Okay, so let's take a step back. Say I'm a business owner. I have a small business, maybe a medium-sized business, or even a large business. I've read lots of things about AI. I've heard lots of things. There is this anxiety and urgency at the same time to do something with AI so that my business can be competitive, so it can be faster, better, more efficient in all these ways. What are some of the ways I might want to think about applying AI in my business, especially with what you've learned, Travis, using AI and shaping policy around AI with the federal government? What have you learned that business leaders might be able to learn from?

Travis: I'll first state that AI is just another piece of technology. It is a little bit different, and I'll touch on that. But everything we learned about enabling technology within your organization, all of that still applies. I think we forget some of this. We were thinking about, oh, how do we do cybersecurity and AI? And 95 percent of it is the same thing. It's just cybersecurity with a different type of file, right? And there are some unique things about cybersecurity and AI that we should consider.
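Here is a minimal sketch of the checklist-versus-learned-tree distinction from a moment ago, assuming scikit-learn is installed; the ages and boxes are invented. The first function is a hand-written rule (an algorithm, but nothing learned), while the second hands example data to a decision tree learner and lets it find the thresholds.

```python
# Hand-written checklist vs. learned decision tree.
from sklearn.tree import DecisionTreeClassifier

def sort_by_checklist(age: int) -> str:
    # A fixed rule somebody wrote down: an algorithm, but not machine
    # learning, because nothing here was learned from data.
    return "box_a" if age < 30 else "box_b"

# The ML version: give examples and let the machine find the cutoffs.
ages = [[18], [22], [25], [29], [31], [33], [40], [52]]
boxes = ["box_a", "box_a", "box_a", "box_a", "box_b", "box_b", "box_b", "box_b"]

tree = DecisionTreeClassifier(max_depth=2).fit(ages, boxes)
print(sort_by_checklist(28), tree.predict([[28]])[0])
# Same if-this-then-that structure either way, but the tree's threshold
# was optimized from the data rather than written by hand.
```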
But by and large, for most businesses, this is just another technology. And if you've enabled technology within your organization before, and you were successful at it, then you should do those things. What I found as one of the leaders in data science throughout the federal government, what I found to be really, really useful if you have a medium-to-larger organization, is to start communities of practice, right? There's always this 20 percent in every organization that I've been a part of that really just wants to build and wants to put stuff together. And oftentimes you just need to bring them together and you need to give them the freedom to do so. I've always been really enamored with the idea of the 20 percent time that Google used to have, or still has, I'm not sure, because it really does enable these ideas to grow. And I would say that they're most effective when they don't always have to be driven by the mission. So sometimes you get leadership and they're like, oh, we should have this thing. And they're like, okay, we're going to have very specific things, and then you should work on these projects, and that's your 20 percent time. That completely ruins the idea of having this chance to be creative and a chance to grow. Because the goal of enabling the technology is to ultimately be able to use it in your normal day-to-day tasks, or to build new products, both externally and internally. But also, there's a sneaky side effect of building up your workforce, getting them comfortable with it, and having them feel empowered about it.

What's different about this particular technology, around generative AI, is the number of people who have access to do something cool with it. Before, if you needed to do anything with machine learning, you had to have not only a background in programming, but also a background in machine learning and actually building these models. So it was a very small subset of people, right? And these people may not have the larger context of the rest of your business; they might be very specialized in this one thing. Now, people can experiment with some really good ideas. They can go on and be like, hey, this is a cool thing, I threw this into ChatGPT, wouldn't it be awesome if we could scale this all the way out to the rest of my business? They can't write that scaling-out part, but they can do the initial testing, right? I have been able to take people from legal, people from other groups that don't consider themselves techies, and say: here, I need you to take 15 or 20 or 30 of these and then write down your metrics for determining if this is a good thing. You know, think like a data scientist. You don't have to do this at scale, but you do need some qualitative data. And you get them thinking like this. They get excited. They start contributing. And once these ideas are successful, one, they've learned how the technology works, and they won't ask the craziest questions; they'll understand, oh, this is how it's supposed to be. Then you can get your data scientists in, then you can get your IT in, and say, okay, this is maybe something that we want to build as an actual pilot, and this is something that we want to scale up.
But you have to have this really fertile ground for failures, for pilots, for people to experiment with the technology and feel comfortable when they realize, oh, wow, that was a really dumb idea, but within the safety of being able to do that. And if we're going to do this with generative AI within the federal government, the way we've enabled that is to obviously not use anything that's private data, nothing that is unreleased. Everything has to be public. And people might get frustrated; they might think, well, that's not the thing I'm actually working on. It turns out there are a lot of proxies for testing your idea, right? There are a lot of things on the Internet that are public that you can test your same idea with. And if you're going to test it with internal data, make sure that data isn't sensitive. There are a lot of technicalities about who owns your data, and in what places, and that's something you should really talk to your lawyers and your IT professionals about. But if you're just dealing with public data and just testing out how these ideas work, then you can do a lot more with it. So have different levels of playgrounds.

Daniel: This is so good, Travis. This community-of-practice notion. I'm just envisioning four or five, ten people, something like this, even folks who raise their hand and say, hey, I love this stuff, I want to figure it out. And then some sort of leader giving them permission to explore. And you raised a good caution in terms of not getting too specific with constraints, but still offering something to guide. Give us a sense, Travis, what would be your recommendation, based upon some of the work that you've done? How much guidance do you give to a team to then play with this stuff? And the other part of this: how many tools do they need? Is it really a Perplexity or a Claude, and that's all you need? Or what's the next step? Because AI in some ways democratizes this ability for so many people to start playing in spaces that normally we wouldn't be playing in. So two things: how much constraint, or guidance, do you give? And what really are the tools that are needed?

Travis: On the guidance question: one, you do want to put some guardrails in place, right? And I talked a little bit about this with using non-public data. You should sit down and make sure you talk to your employees and say, hey, go do your crazy ideas, but they have to be somewhat related to what we're working on, right? You can't just do your own personal pet projects that are completely unrelated. You can't do anything that would be illegal; sometimes you do need to remind people about these sorts of things. You shouldn't do anything that would get the company in trouble through reputational harm. It could be legal, but how would people feel if they found out we were doing this, right? And you shouldn't do anything that's rights- or safety-impacting in your pilots. Have that conversation. And if people push back, remind them that guardrails help people go faster, because you understand what the lanes are, and now you can go a little bit faster knowing that those guardrails are in place. In terms of the topics, I've found that just asking staff works: what are your problems? What are the things that you're working on? They have a million things that they think AI would help with. That's enough, right?
Ask them how they think AI would help, whether or not they think they can do it. Have them make those lists. Those will always be on topic. Maybe they won't be the things that leadership wants to see, but they're the things that staff are actually having a problem with. This is their day-to-day business operations, or their really cool pie-in-the-sky idea: wouldn't it be cool if we could link this data set with this data set? What if we pulled the data from the federal government and linked it to our internal data? That would be a cool thing. Can we do that? Both of those kinds of ideas will come from staff, and all you have to do is ask them and then give them the place to do so. And you had another question, but I didn't get to that one.

Daniel: Yeah. So one was kind of the constraints, or how to frame it. And the other is: what additional equipment is needed beyond simply a generative AI model itself? Or is that sufficient to then be able to move to the next level, the next step?

Travis: So we're talking about these early-stage pilots where people are just experimenting. That's really good, and sometimes that's all people need with generative AI: having access to Claude, or Mistral, or ChatGPT, or any of these other models. That's enough for what they want to do. They just want a chatbot. However, some things need to scale, right? Like, hey, I want to apply this to every query that comes in. Or, I want to actually build a RAG, one of these retrieval-augmented generation systems that sit on top of a database; I read an article about RAGs, it made sense, and I want to do it. You need IT staff for that. You need data scientists. You need people to scale this up. But that gets back to product development. And so if you're thinking you need to build this product, you should have an organization that knows how to build products, right? Again, this is not a new thing. When a business says, oh, we need something with big data, right? This is a thing we heard a while ago. What do you mean we need to do big data? Or, we need to do something on the blockchain. You hear these buzzwords. What I really like about smaller pilots, and what I really like about this technology, is you can immediately see how much it would impact the individual. They could work with this data and say, oh, this is super useful. And then as a business, you can say, oh, this is worth investing a couple weeks of IT time to spin something up. Or, this is worth buying from another company, because it's worth it, right? Because there are always costs beyond just the API calls; there are costs of integrating with another business. So I guess my answer is that you do need the support, because then it becomes a product, right? And we should know how to build products if we're running businesses.

Daniel: Makes sense. And part of this, as I talk to lots of business leaders about AI: there is this urgency, as I mentioned earlier, to do something with it. However, it's coupled with "I don't know what we should do with it," because in some ways there are so many options. It's like going to the grocery store and seeing a hundred variations of cereal. It's almost too much. I just kind of need the 15, or the 10, or the 5, and that's the one. Oh, thank you. That helps me.
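For readers curious about the RAG pattern Travis mentions, here is a minimal sketch: embed documents, retrieve the ones closest to a query, and hand them to a language model as context. The bag-of-words embedding below is a toy stand-in so the sketch runs on its own; a production system would call a real embedding model and a real LLM API at the marked spots.

```python
# Minimal sketch of retrieval-augmented generation (RAG): embed
# documents, retrieve the closest ones to a query, then hand them
# to a language model as context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words counts. A real system would use a
    # learned embedding model; this only makes the sketch runnable.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "Expense reports are due on the fifth business day of each month.",
    "The cafeteria is open from 8am to 3pm on weekdays.",
    "Travel reimbursements require a receipt for any cost over $25.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

query = "When do I need receipts for travel costs?"
context = "\n".join(retrieve(query))
# A real RAG system would now send this prompt to an LLM API.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```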
However, narrowing down like that is also limiting. So as we're dealing with something that has so many possibilities, what are some ways that you've seen, even within the federal government, that we can begin to narrow down, so it balances the overwhelm with still getting something done and moving forward?

Travis: I mean, that's a hard question. Again: talking to your staff, and having regular places for them to come up with these ideas. I love a good hackathon. I know we don't call them that in the federal government; we call them codeathons or something like that. But I love those sorts of things, when you can actually integrate people who are not data scientists into them. They're really, really good for getting people together, some teams together, to say, oh, I want to build this. And you have one data scientist in there, and they throw something together as a proof of concept. That is a really good way to surface some of the ideas that might be successful. The other is to look at what other people are doing successfully, right? If you hear from other businesses, oh, I use this program because it was useful. A lot of the time, businesses operating at scale aren't using Anthropic's model or OpenAI's model directly. Some other vendor has written something on top of it, and that product is really useful to them. So look at what some of these products are. There are too many, because everybody has an AI business right now, but looking at what other people are using and finding effective is a good way to do it.

I will caveat all of this: the window for technology change is very rapid right now, and it's not slowing down. Within the next two years, I think this pace of growth is going to continue. Beyond that, I don't even want to extrapolate and figure out how fast we're going. But the acceleration has not changed. The pace of innovation is still going, it's still accelerating, and it's still really fast. I don't think we need to worry about this AGI thing; that's not where I'm coming from. But in terms of what this technology is enabling, the capabilities of these models are going to be surprising us for the next couple of years, at least.

Daniel: And AGI, tell us about this.

Travis: AGI is artificial general intelligence. This is the idea that a model would be smart enough to think about itself, to have a sense of identity, a sense of humanity to it. And then, if you use that model to think about how it could itself be smarter, it would create a smarter bot, and that bot could create a smarter bot faster than it could create the first one. And ultimately you have this cascading thing toward what I think is a science fiction story about the singularity. I think those things are good to think about and talk about. I do think some of those conversations suck all the air out of the room from genuine, real problems that we have with rights- or safety-impacting AI. So yes, we can talk about the singularity, it's fine. But we should also talk about the real-world harms that are still going on with AI right now.

Daniel: Yeah. And share with us some examples, perhaps, of all of the cool things that generative AI in particular might be able to create and do.
What sorts of things have you seen, even with AI more broadly, even within the federal government with your experience? What cool things have been accomplished that maybe couldn't have been done before, or that you're anticipating soon that we can do differently?

Travis: So we talked about chatbots and generative AI, which in and of itself is massively transformative. But putting that aside, we've seen massive, massive improvements in all image technology. Before, let's say you were trying to detect whether birds were in a picture, and to determine what kind of bird each one is. What you would do is pay somebody thousands upon thousands of dollars over many months to draw little boxes around all the birds in those pictures, and then maybe somebody to label the pixels, which ones are birds, so you could build a training set, so you could train a model on detecting birds. Nowadays, we've seen some amazing advancements in image segmentation where you can say, here are three birds; figure out where the rest of them are in a million other pictures. Just because the model has a concept of what images are and how they relate to the physical world, it can do that per-pixel segmentation from just a few examples. This few-shot learning in image segmentation is crazy.

And if you apply that to something like satellite technology, where you have all these geospatial images: there are a lot of companies that want to use geospatial images to look for something in particular. We can talk about a CDC project, built maybe five years ago, that was looking for cooling towers in satellite imagery. It was taking Google Earth images and looking for cooling towers, because cooling towers are sometimes correlated with Legionnaires' disease; it's where it often comes from. And so this epidemiologist thought, wouldn't it be cool if we could see whether they correlate with each other? Then we'd have an idea of where hotspots might come from, because we kind of knew this from the literature. But could you pull these together? And so this brilliant data scientist painstakingly labeled thousands and thousands of little cooling towers so that she could find more of them. Nowadays, we've revisited it, and you could do 95% of her work (not 100, 95%) with these image segmentation models in a tenth of the time. That's not a ChatGPT thing. That is using AI deeply, in transformative ways, for something that would have taken a long time, and something that's new.

The other thing along those lines is named entity extraction. So many times in business you have some sort of text and you want to say: this is the subject, this is the company, this is how much we paid them, this is the congressperson who said this about this company, and be able to extract those kinds of meaningful ideas. We're really good at that with AI right now. That used to require specialized models; it used to be a huge pain. If you want great accuracy, yes, you can still pay thousands of dollars for that. But if you just use off-the-shelf foundation models, you get 95% of the way there.
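One common way to do the off-the-shelf entity extraction Travis describes is to prompt a foundation model for structured output and parse it. The call_llm function below is a hypothetical stand-in with a hard-coded reply so the sketch runs on its own; any real LLM client could slot in there.

```python
# Named entity extraction with a general-purpose LLM instead of a
# specialized NER model: ask for structured output and parse it.
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call (OpenAI, Anthropic,
    # a local model, etc.). Hard-coded here so the sketch runs.
    return ('{"person": "Rep. Smith", "organization": "Acme Corp", '
            '"amount": "$2.4 million"}')

text = "Rep. Smith criticized Acme Corp over its $2.4 million contract."
prompt = (
    "Extract the person, organization, and dollar amount from the text "
    f"below. Reply with JSON only.\n\nText: {text}"
)
entities = json.loads(call_llm(prompt))
print(entities["person"], "|", entities["organization"], "|", entities["amount"])
```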
And so I think some of that is really big and transformative. What's happened is that a lot of tasks that were possible before with a lot of work are now possible with very, very little work. It's raised the floor, and that's changing things a lot. The last thing I want to mention, which I don't think we have time to go into, is the massive improvement in robotics. I'm not sure if this is a thing that listeners of your podcast will get into, but for all the reasons training has gotten better for everything else, we're applying it to robotics, where a robot actually has to interact in a physical world with real physics. Whether you're a robot that's sorting things in a factory, or a high-speed CNC machine, or a drone swarm trying to coordinate a certain type of action, or a drone tracking somebody on a motorbike: all of these things are AI-enabled, and they're just going to get better and better.

Daniel: Travis, this is a tremendous list. And so here's my last question to you. Of all of the things we've talked about, all the things you've been experiencing and observing, what's the one thing that a business leader needs to be paying attention to, to take advantage of AI for his or her business?

Travis: That's a good question. Lots of good questions. I think if there's one takeaway that business leaders can have right now, it's that if you want to act like a policy entrepreneur, you have an opportunity to seize the moment right now. You have an opportunity to enable your organization to think for themselves with this sort of technology. And that has to be done from a bottom-up approach, not top-down. I've seen this in three or four organizations, in the federal government and within some state and city governments, that are organizing within their own divisions. When that's enabled, you see this growth happen that people can then build products and other cool things on top of. So you have a moment right now to seize upon. You just have to enable your people.

Daniel: Oh, I love it. Travis Hoppe, this has been an absolute pleasure. Thank you so much for joining us on the Leadership Growth Podcast.

Travis: It was a pleasure. Thank you.

Daniel: To all of our listeners, thank you for listening. Please subscribe, and like and comment if you'd like as well. And we look forward to having you at a future Leadership Growth Podcast. All the best, everyone. Bye.

If you liked this episode, please share it with a friend or colleague. Or better yet, leave a review to help other listeners find our show. And remember to subscribe so you never miss an episode. For more great content, or to learn more about how Stewart Leadership can help you grow your ability to lead effectively, please visit stewartleadership.com.