
Humans + AI
Ross Dawson
Show overview
Humans + AI has been publishing since 2021 and has built a catalogue of 196 episodes over the five years since, roughly 85 hours of audio in total. Releases follow a weekly cadence, and the show is now in its 3rd season.
Episodes typically run 35 to 60 minutes — most land between 33 and 38 minutes — and run-time is fairly consistent across the catalogue. It is catalogued as an English-language (en-US) Business show.
The show is actively publishing: the most recent episode landed yesterday, and 16 episodes have been released so far this year. The busiest year was 2024, with 50 episodes published. Published by Ross Dawson.
From the publisher
Exploring and unlocking the potential of AI for individuals, organizations, and humanity
Latest Episodes
View all 196 episodes

Kathleen deLaski on reimagining higher education, generational mobility, building AI skills, and human originality (AC Ep43)
David Vivancos on the end of knowledge, cognitive flourishing, resilient societies, and artificial democracy (AC Ep42)
Jon Husband on wirearchy, web weaving, the relational economy, and drift diving (AC Ep41)
Michael Gebert on designing freedom, human self-determination, cognitive sovereignty, and systems of agency (AC Ep40)

S3 Ep 39: Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning (AC Ep39)
“The technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people.” –Marshall Kirkpatrick

About Marshall Kirkpatrick
Marshall Kirkpatrick is founder of sustainability consultancy Earth Catalyst and AI thinking tool What’s Up With That. His many previous roles include founder of influence network analysis tool Little Bird, which was acquired by Sprinklr, where he was last Vice President Market Research.

Website: whatsupwiththat.app
LinkedIn Profile: Marshall Kirkpatrick

What you will learn
- How generative AI transforms cognitive tools and lowers barriers to advanced thinking
- Techniques to combine human and AI-powered sensemaking for richer insights
- Practical strategies for filtering and extracting value from infinite information
- The importance and application of diverse mental models in modern decision-making
- Methods to balance manual cognitive work with AI assistance for optimal outcomes
- The role of adaptive interfaces in enhancing individual cognitive capacity
- Metacognitive approaches to networks and how AI can foster organizational awareness
- Ethical and societal implications of democratizing access to AI-powered cognitive enhancements

Episode Resources

Transcript

Ross Dawson: Marshall, it is awesome to have you back on the show.

Marshall Kirkpatrick: Oh, thank you, Ross. It’s such a pleasure to be reconnecting with you here. Thanks for having me on.

Ross Dawson: So you were on very, very early in the podcast, when it was Thriving on Overload and it was interviews around the book, and some of the wonderful things you were doing got incorporated into Thriving on Overload. So I think today, in this world of generative AI, which has transformed everything, including the way in which we think, the Thriving on Overload themes are still super, super relevant, and in a way, we need to be talking about them more.
That theme at the time was finite cognition, infinite information. How do we work well with it? I don’t know if our cognition has become more finite, but the information has become more infinite, and there’s just more and more. But also, it cuts two ways, as in, what is the source of all the information? AI is also a tool. So anyway, let’s segue from some of your cognitive thinking tools, technology-enabled cognitive thinking tools and so on, which we looked at. So how do you—where are we? 2026, what do you think about human cognition in our current universe?

Marshall Kirkpatrick: Well, especially when you frame it up in Thriving on Overload terms. I mean, those were four, five long years ago that we last spoke, and the book that came out of it was just fantastic. I think it has some timeless qualities, and I think that the technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people. That’s what I hope. I think that, yeah, between individuals and organizations, there’s so much that, historically, someone like you or me or the people closest in our networks were willing and able to do and excited to do, that many other people said, “That sounds like a lot of work.” The bar is lower now, because a lot of just the raw cognitive processing can be outsourced into a technology that serves as a lever.

Ross Dawson: Well, I mean, that idea of levers for these cognitive tools is interesting. I guess, the very crude way of saying it is, we’ve got inputs into our human brain, and then we are processing information. I’m just thinking out loud a bit here, but it’s like, okay, we have tools to be able to filter, to present, to find what is most relevant, to present it to us in the ways which are most useful—very obvious, like summarization, visualization.
Then as we are processing it ourselves, we have dialog, or we can have interlocutors who we can engage with and be able to refine and help our thinking. Does that sort of make sense, or how would you flesh that out?

Marshall Kirkpatrick: Yeah, I mean, when you put it that way, it makes me think about Harold Jarche and his Seek, Sense, Share model, right? I think that AI, especially when connected to things like search and syndication and other traditional technologies, can impact all three of those stages. It can hypercharge our search. I think the archetypal example of that, on some level, feels like the combinatorial drug research being done, where just an otherwise cognitively uncontainable quantity of combinatorial possibilities between molecules can be sought out and experimented with for a desirable reaction. And then that sensing, or the pattern recognition that AI is so good at, is something that we do as humans—some of us better than others—and it’s a lifelong muscle to build and what have you. But the AI is really, really good at it, and so it’s a ladder to cli

S3 Ep 38: Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension (AC Ep38)
“Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.” –Nina Begus

About Nina Begus
Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and First Encounters with AI.

Website: ninabegus.com
LinkedIn Profile: Nina Begus
Book: Artificial Humanities

What you will learn
- How ancient myths and archetypes influence our understanding and design of AI
- Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems
- The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries
- How metaphors shape our interactions with AI products and the user experiences companies choose to enable
- The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates
- Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology
- What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses
- Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI

Episode Resources

Transcript

Ross Dawson: Nina, it is wonderful to have you on the show.

Nina Begus: Thank you for having me.

Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities?
Nina Begus: Well, this was really a new framework that I’ve developed while I was working on the relationship between AI and fiction, and I started working on this about 15 years ago when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed. I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental in this way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it.

Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts. Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created. So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them.
Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives that I’ve seen at the same time, both in technological products that were coming to the market and in the way technologists were thinking about it, and also in fictional products and films and novels in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space. The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, North Africa, and so on. So I think this kind of story is actually telling us a lot about how humans

S3 Ep 37: Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution (AC Ep37)
“The center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment.” –Henrik von Scheel

About Henrik von Scheel
Henrik von Scheel is Co-Founder of advisory firm Strategic Intelligence, Chairman of the Climate Asset Trust, Vice Chairman of the Regulatory Intelligence Committee, and Professor of Strategy at the Arthur Lok Jack School of Business, among other roles. He is best known as originator of Industry 4.0, with many awards and extensive global recognition of his work.

Website: von-scheel.com
LinkedIn Profile: Henrik von Scheel

What you will learn
- Why human-centered AI is crucial for widespread societal prosperity
- The impact of AI hype cycles, media narratives, and the realities of technology adoption
- How equitable wealth distribution and capital allocation in AI can shape economic outcomes
- Risks around data ownership, privacy, and the importance of controlling your own data in the AI era
- Divergent approaches to AI regulation in the US, EU, and China, and the implications for global AI leadership
- The importance of trust calibration and intentional human-AI collaboration in practical applications
- How education and lifelong learning can be reshaped by AI to support individualized growth and mistake-enabled reasoning
- Opportunities for AI to amplify individual talents, address educational gaps, and enable more specialized and innovative skills

Episode Resources

Transcript

Ross Dawson: Henrik, it is wonderful to have you on the show.

Henrik von Scheel: Thank you very much for having me, Ross.

Ross Dawson: So I think we’re pretty aligned in believing that we need to approach AI from a human-centered perspective and how it can bring us prosperity. So I’d just love to start with, how do you think about how we should be thinking about AI?

Henrik von Scheel: Well, I think, like every technology that comes into play, it brings a lot of changes to us.
But I think the center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment. So technology is something that we apply, but it’s the strategy on how we adapt with it that makes a difference. It’s never the technology itself. So I’m excited. It’s one of the most exciting periods for the industry and for us as people.

Ross Dawson: There’s a phrase which I’ve heard you say more than once around AI should make us smarter, healthier, and wealthier. So if that’s the case, how do we frame it? How do we start to get on that journey?

Henrik von Scheel: So I think what people experience today in AI is that they experience a lot of media hype—large language models, ChatGPT, and all of this—and they consume it from the media. So there’s a big hype around it, and I believe that AI is about to crash fundamentally, but crashing in technology is not bad, right? There are a lot of promises and then an inability to deliver, and then it crashes. What you hear in the media today is very much driven by a story of them raising funds because it’s so expensive, and so they are promising the world of everything and nothing, and the reality looks a little bit better. The world that they are presenting is that you will be replaced, and you will be happy, and you’ll be served by everything else. And somehow it will work out. We don’t know how, but it will work out. And that’s not a future that is really a real future. The future must include that everybody gets smarter, wealthier, and healthier. And when I say everybody, I mean not only the guys that have money, that they become more rich, or the middle class. It’s like everybody in society should get smarter from AI. That means part of the things that they need to learn or how human evolution works should be better, and it should make us healthier people and wealthier people.
So it should not only be that we sacrifice our convenience with our freedom, with our privacy, with our environment, or any other things that we put on the table to get convenience back. That exchange we have done a couple of times, and it’s not working really well for humans, and it’s not a good trade for us, right?

Ross Dawson: Yeah, I love that. And since it’s quite simple, you know, you can say it, it’s clear, it sounds good, and it is a really clear direction. But you’re actually pointing in a couple of ways there to capital allocation. So obviously, if you’re looking at the AI economic story, this is around this diversion of capital from other places to AI model development, data centers, deployment, and so on. But also, when you’re saying wealth here, this is around the distribution of wealth—where we’re allocating capital to AI development, but also from the way

S3 Ep 36: Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation (AC Ep36)
“Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through.” –Joanna Michalska

About Dr Joanna Michalska
Dr Joanna Michalska is Founder of Ethica Group Ltd., Co-Founder of The Strategic Centre, and an advisor to boards on AI risk, ethics, and governance. She holds a PhD in Strategic Enterprise Risk Management and has twenty years’ experience leading enterprise risk, strategy and transformation at J.P. Morgan and HSBC.

Website: ethicagroup.ai
LinkedIn Profile: Dr Joanna Michalska

What you will learn
- How boards and executives can rethink governance and accountability in the age of AI
- The importance of embedding governance into organizational ecosystems for agile, responsible AI adoption
- How to map and assign human accountability for both automated and hybrid AI-human decisions
- The decision architecture needed for scalable oversight, intervention, and escalation pathways
- Practical examples of effective AI oversight in areas like fraud detection and exception handling
- Steps for complying with new regulations like the EU AI Act, including inventorying AI systems and risk tiering
- Why human qualities like emotional intelligence, psychological safety, and honest communication are critical in AI-driven organizations
- How leaders can foster organizational resilience and help teams adapt by building AI literacy, retraining, and supporting personal growth

Episode Resources

Transcript

Ross Dawson: Joanna, it’s a delight to have you on the show.

Joanna Michalska: Well, thank you for having me, Ross.

Ross Dawson: So, AI is wonderful, but it also brings us into a whole lot of new territory where we have to be careful in various ways. I’d love to just hear, first of all, the big framing around how boards and executive teams need to be thinking about governance and accountability as AI is incorporated more and more into work and organizations.
Joanna Michalska: I think we’re all very excited about the capability that exists today to help us enhance our performance and the way we think about strategic execution for our organizations. It has multidimensional consequences for how we adapt it. What’s very important right now is, as executives and boards think about accelerating their ambitions and growth plans, there needs to be awareness of two components. First, how do we as leaders, as humans, need to adapt to that new environment? There are new conditions, or perhaps existing conditions that really need to be enhanced. They’re very important to exist in order to be able to adapt and to scale. Second, do we actually have the right systems in place to enable that scale? I think it’s important to recognize that, yes, governance has always existed, but the way it existed was more as external supporting scaffolding, rather than being built into an organizational ecosystem. We also need to have the right leadership in place to ensure that decisions are made in the right way and the organization is designed in a much more robust, agile way. These two conditions are critical for not only increasing adoption, but also doing so in a safe and responsible way, especially as we expand our ambitions for the future. It’s exciting, but there’s also a lot of caution and a lot of questions being asked by executives at this time.

Ross Dawson: Yes, and I guess the more we can address those concerns upfront, the more it enables us to do. I have this idea of minimum viable governance—at least having some governance in place so we don’t go too badly astray. But I always think of governance for transformation as: how do you set governance not as a brake to slow you, but in fact to accelerate you, because you have confidence in how you’re going about it?

Joanna Michalska: Absolutely!
I think the mindset shift is very important, because governance, to your point, has always been seen as a compliance-driven thing that we must do because regulators require us to, and we need to demonstrate we have these policies and procedures in place and the right people in the right positions. Now, what the new environment is requiring of us—as executives, even board members—is a different set of responsibilities that really cannot be assumed as pre-existing. In this accelerated environment—let’s call it that, rather than just “AI,” because it’s so overused and can mean so many different things—where the automation rate is fast and overtaking everything, governance needs to change. It can’t be an afterthought or something we designed at one point in the past and now just try to fit into what’s happening. It really needs to become a well-designed, living organism. It needs to organically evolve. It needs to have the right people with the right accountability that is well understood. Accountability that was designed in the past nee

S3 Ep 35: Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35)
“You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther

About Cornelia C. Walther
Cornelia C. Walther is Senior Fellow at Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is author of many books, with her latest book, Artificial Intelligence for Inspired Action (AI4IA), due out shortly. She was previously a humanitarian leader working for over 20 years at the United Nations driving social change globally.

Website: pozebeingchange
LinkedIn Profile: Cornelia C. Walther
University Profile: knowledge.wharton

What you will learn
- How the ‘hybrid tipping zone’ between humans and AI shapes society’s future
- The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI
- The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration
- Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence
- What defines ‘prosocial AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet
- The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’
- Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI
- Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of prosocial AI initiatives

Episode Resources

Transcript

Ross Dawson: Cornelia, it is fantastic to have you on the show.

Cornelia Walther: Thank you for having me, Ross.

Ross: So your work is very wonderfully humans plus AI, in being able to look at humans and humanity and how we can amplify the best as possible.
One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is?

Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium or long-term evidence about how the consequences will play out. Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damages. Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map?
Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But being human, I would argue, it’s a very dangerous luxury to have.

Ross: I just want to dig down quite a lot in there, but I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome?

Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way

S3 Ep 34: Ross Dawson on Humans + AI Agentic Systems (AC Ep34)
“Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on.” –Ross Dawson

About Ross Dawson
Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload.

Website: Collaborating with AI Agents · Intelligent AI Delegation · Agentic Interactions
LinkedIn Profile: Ross Dawson

What you will learn
- How human-AI teams outperform human-only teams in productivity and efficiency
- The crucial role of understanding AI strengths and limitations when designing collaborative workflows
- Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity
- Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust
- Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents
- How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts
- The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation
- Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction

Episode Resources

Transcript

Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I’m going to just share a bit of an update and then share insights from three recent research papers that dig into something which I think is exceptionally important, which is how humans work with AI agentic systems.
And we’ll look at a few different layers of that, from how small humans plus agent teams work through to how we can delegate decisions to AI through to some of the broader implications. But first, a bit of an update. 2026 seems to be moving exceptionally fast. It’s a very interesting time to be alive, and I think it’s even pretty hard to see what the end of this year is going to look like.

So for me, I am doing my client work as usual. I’ve got keynotes around the world on usually various things related to AI, the future of AI, humans plus AI, and so on. A few industry-specific ones in financial services and so on. And also doing some work as an advisor on AI transformation programs, so helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework in how it is you look at the phases, mapping those out, working out the issues, and being able to guide and coach the leaders to do that effectively.

But the rest of my time is focused on three ventures, and I’ll share some more about these later on. These are fairly evidently tied to my core interests. Fractious is our AI for strategy app. This was really building a way in which we can capture the detailed nuance of the strategic thinking of leaders of the organization, to disambiguate it, to clarify it, and enable that to then be built into strategic options, strategic hypotheses, and to be able to evolve effectively. So that’ll be in beta soon. Please reach out if you’re interested in being part of the beta program, and that’ll go to market. So that’s deeply involved in that.

We also have our Thought Weaver software, rebuilding previous software which had already built on AI-augmented thinking workflows. That’s more an individual tool that will be going into beta in the next weeks. So again, go to Thought Weaver. Actually, don’t—the website isn’t updated yet—but I’ll let you know when it’s out, or keep posted for updates on that.
And also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve kind of been through the phase of augmentation of individuals, and we still need to work hard at doing that better. But the next phase for organizations is to focus on teams. How do you work with teams where we have both human members and AI agentic members? It creates a whole different series of dynamics and new skills and capabilities. It really calls for how to participate in the humans plus AI team and how to lead humans plus AI teams. And that is again going into the first few test organizations in the next month or so. So again, just let me know.

So today what we’re going to look at is this theme: teams of humans working with AI agents. So not individual AI as in chat, but where we have a lot of agents with various degrees of autonomy, but also agentic systems w

S3 Ep 33: Davide Dell’Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33)
“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” –Davide Dell’Anna

About Davide Dell’Anna
Davide Dell’Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space.

Website: davidedellanna.com
LinkedIn Profile: Davide Dell’Anna
University Profile: Davide Dell’Anna

What you will learn
- The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement
- Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses
- How lessons from human-human and human-animal teams inform better design of human-AI collaboration
- Key differences between humans and AI in teams, such as accountability, replaceability, and identity
- The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness
- Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams
- The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values
- New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration

Episode Resources

Transcript

Ross Dawson: Hi Davide. It’s wonderful to have you on the show.

Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me.

Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence?

Davide: Well, thank you so much for the question.
Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence.
Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the word hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation?
Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities.
When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people…

S3 Ep 32: Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32)
“You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.” – Felipe Csaszar

About Felipe Csaszar
Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy.

Website: papers.ssrn.com
LinkedIn Profile: Felipe Csaszar
University Profile: Felipe Csaszar

What you will learn
- How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation.
- The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities.
- The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation.
- How AI assists in both visualizing strategists’ mental models and expanding the complexity of strategic frameworks.
- Experimental findings showing AI’s ability to generate and evaluate business strategies, often matching or outperforming humans.
- Emerging best practices and challenges in human-AI collaboration for more effective strategy processes.
- The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning.
- Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making.

Episode Resources

Transcript

Ross Dawson: Felipe, it’s a delight to have you on the show.
Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me.
Ross Dawson: So many, many interesting things for us to dive into.
But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have been traditionally the ones responsible for strategy, and presumably will continue to be for some time. However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving.
Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making so far has always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens a possibility that now you could have humans helped by AI, and maybe at some point, AI is designing the strategies of companies. One way of thinking about why this may be the case is to think about the cognitive operations that are involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching. The other is representing: you think about the world from a given perspective, and from that perspective, there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions of different people, and you have to combine them. This can be done in different ways, but a typical one is to use the majority rule or unanimity rule sometimes. In reality, the way in which you combine ideas is much more complicated than that—you take parts of ideas, you pick and choose, and you combine something. So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of those. Let’s go one by one. So, search: now AIs, the current LLMs, they know much more about any domain than most people.
There’s no one who has read as much as an LLM, and they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans and farther away, because you can only search things which you are familiar with, while an LLM is familiar with many, many things that we are not familiar with. So they can search faster and farther than humans—a big effect on search. Then, representation: a typical example before AI about the value of representations is the story of Merrill Lynch. The big idea of Merrill Lynch was how good a bank would look if it was like a supermarket. That’s a shift in representations. You know what a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank, and that was the big idea of Mr. Merrill Lynch, and the rest is history. That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI, it’s automatic, it’s free. You change its prompt, and immediately you will have…
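Felipe’s third operation, aggregation, is concrete enough to sketch. The snippet below is purely illustrative (the function name and the votes are invented, not from the episode): it applies the majority rule he mentions to evaluators’ verdicts on a candidate strategy.

```python
from collections import Counter

def aggregate_majority(votes):
    """Combine evaluators' verdicts on a strategy by simple majority rule."""
    if not votes:
        raise ValueError("no votes to aggregate")
    tally = Counter(votes)
    winner, count = tally.most_common(1)[0]
    return winner, count / len(votes)

# Three evaluators, human or AI, judge one candidate strategy.
decision, support = aggregate_majority(["pursue", "pursue", "reject"])
print(decision, round(support, 2))  # pursue 0.67
```

As Felipe notes, real aggregation is much richer than this (people combine parts of ideas), so majority rule is only the simplest base case.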

S3 Ep 31: Lavinia Iosub on AI in leadership, People & AI Resources (PAIR), AI upskilling, and developing remote skills (AC Ep31)
“In this next era, the key to leadership will be blending systems thinking and AI automation—at least being aware of what you can do with it—with empathy, discernment, connection, and clarity.” – Lavinia Iosub

About Lavinia Iosub
Lavinia Iosub is the Founder of Livit Hub Bali, which has been named one of Asia’s Best Workplaces, and Remote Skills Academy, which has enabled 40,000+ youths globally to develop digital and remote work skills. She has been named a Top 50 Remote Innovator and a Top Voice in Asia Pacific on the future of work, and her work has been featured in the Washington Post, CNET, and other major media.

Website: lavinia-iosub.com, liv.it
LinkedIn Profile: Lavinia Iosub
X Profile: Lavinia Iosub

What you will learn
- How AI can augment leadership decision-making by enhancing cognitive processes rather than replacing human judgment
- Strategies for integrating AI into teams, focusing on volunteer-driven adoption and fostering AI fluency without forcing uptake
- The importance of continuous experimentation and knowledge sharing with AI tools for organizational growth and team building
- Why successful leadership in the AI era requires blending systems thinking, empathy, and a focus on human-AI collaboration
- How organizational value is shifting from knowledge accumulation toward skills like curiosity, adaptability, and discernment
- The concept of “people and AI resources” (PAIR), emphasizing the quality of partnership between humans and AI for organizational effectiveness
- Critical skills for future workers in an AI-driven world, such as AI orchestration, emotional clarity, and the ability to direct AI outputs with taste and judgment
- Practical lessons from the Remote Skills Academy in democratizing access to digital and AI skills for a diverse range of job seekers and business owners

Episode Resources

Transcript

Ross Dawson: Lavinia, it is awesome to have you on the show.
Lavinia Iosub: Thank you so much for having me, Ross.
Ross Dawson: Well, we’ve been planning it for a long time. We’ve had lots of conversations about interesting stuff. So let’s do something to share with the world.
Lavinia Iosub: Let’s do it.
Ross Dawson: So you run a very interesting organization, and you are a leader who is bringing AI into your work and that of your team, and more generally, providing AI skills to many people. I just want to start from that point—your role as a leader of a diverse, interesting organization or set of organizations. What do you see as the role of AI for you to assist you in being an effective leader?
Lavinia Iosub: Great question. I think that the two of us initially met through the AI in Strategic Decision Making course, right? So I would say that’s actually probably one of the top uses for me, or one of the areas where I found it very useful. The most important thing here is to not start with the mindset that AI will make any worthy decisions for you, but that it will augment your cognition and your decision making when you are feeding it the right context, the right master prompts, the right information about your business, your values, what you’re trying to achieve, how you normally make decisions, and so on. Then you work with it, have a conversation with it, and even build an advisory board of different kinds of AI personas that may disagree or have slightly different views. So it enhances your thinking, rather than serving you decisions on a plate that you don’t know where they come from or what they’re based on. That’s one of the things that’s been really interesting for me to explore. If we zoom out a little bit, I think a lot of people think of AI as a way of doing the things they don’t want to do.
I think of AI as a way to do more of the things I’ve always wanted to do—delegate some menial, drudgery work that no human should be doing in the year of our Lord 2025 anymore, and do more of the creative, strategic projects or activities that many of us who have been in what we call knowledge work—which, to me, is not a good term for 2025 anymore, but let’s call it knowledge work for now—just being able to do more of the things you’ve always wanted to do, probably as an entrepreneur, as a leader, as a creative person, or, for lack of a better word, a knowledge worker.
Ross Dawson: Lots to dig into there. One of the things is, of course, as a leader, you have decisions to make, and you have input from AI, but you also have input from your team, from people, potentially customers or stakeholders. For your leadership team, how do you bring AI into the thinking or decision making in a way that is useful, and what’s that journey been like of introducing these approaches where there are different responses from some of your team?
Lavinia Iosub: So we were, I’d say, fairly early AI adopters, and I have an approach where I really want to double down on working more with AI and giving more AI learning opportunities…
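Lavinia’s “advisory board of AI personas” can be sketched as fanning one question out to several persona-conditioned prompts. Everything in this sketch is hypothetical (the personas, wording, and function name are invented, not from the episode); in practice each prompt would become the system message of a separate LLM conversation.

```python
# Hypothetical persona definitions; each would become the system prompt
# of a separate LLM conversation.
PERSONAS = {
    "skeptical CFO": "Challenge every proposal on cost and downside risk.",
    "growth strategist": "Push for bold moves and new market opportunities.",
    "customer advocate": "Judge everything by impact on existing customers.",
}

def build_board_prompts(question, context):
    """Turn one question into a fully contextualized prompt per persona."""
    return {
        name: (
            f"You are a {name} on an advisory board. {stance}\n"
            f"Business context: {context}\n"
            f"Question: {question}\n"
            "Give your view and one concrete recommendation."
        )
        for name, stance in PERSONAS.items()
    }

prompts = build_board_prompts(
    "Should we expand the academy to a second country next quarter?",
    "Small remote-skills training business, profitable, limited ops capacity.",
)
for name in prompts:
    print(name)
```

Comparing the answers, and especially where the personas disagree, is the point; as Lavinia stresses, the leader still makes the call.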

S3 Ep 30: Jeremy Korst on the state of AI adoption, accountable acceleration, changing business models, and synthetic personas (AC Ep30)
“What we’re seeing now is that when we think about some of the friction and challenges of adoption, this isn’t a technology issue, per se. This is a people opportunity.” –Jeremy Korst

About Jeremy Korst
Jeremy Korst is Founder & CEO of Mindspan Labs and Partner and former President of GBK Collective. He lectures at Columbia Business School, The Wharton School, and USC, and is co-author of the Wharton + GBK annual Enterprise AI Adoption Study, one of the most cited sources on how businesses are actually using AI. Jeremy also publishes widely in outlets such as Harvard Business Review on strategy and innovation.

Website: mindspanlabs.ai
Accountable Acceleration:
LinkedIn Profile: Jeremy Korst

What you will learn
- How enterprise AI adoption has shifted from experimentation to ‘accountable acceleration’
- The key role of leadership in translating business strategy into an actionable AI vision
- Why human factors and change management are as crucial as technology for successful AI implementation
- How organizations are balancing augmentation, replacement, and skill erosion as AI changes the workforce
- The importance of intentional experimentation and creating case studies to drive value from AI initiatives
- Early evidence, challenges, and promise of digital twins and synthetic personas in market research
- Why a culture of risk tolerance, alignment across leadership layers, and clear communication are essential for AI-driven transformation
- The emerging shift from general productivity gains to domain-specific AI applications and the increasing focus on ROI measurement

Episode Resources

Transcript

Ross Dawson: Jeremy, it’s wonderful to have you on the show.
Jeremy Korst: Yeah, hey, thanks for having me.
Ross Dawson: So you, I think it’s pretty fair to say you are across enterprise AI adoption, being the recent co-author of a report with Wharton and GBK Collective on where we are with enterprise AI adoption. So what’s the big picture?
Jeremy Korst: Yeah, let me start—now that I’ve reached this stage in life, in my career, and I look back over what I’ve done the last couple decades, it’s actually been at the intersection of technology adoption and innovation. I spent a couple of careers at Microsoft, most recently leading the launch of Windows 10 globally. I worked at T-Mobile, led several businesses there, and more recently, have been spending time really with three things. One is through my consulting company, GBK Collective, working with some of the world’s largest brands on market research and strategies for consumers and products, working with academic partners who are core to that work we do at GBK—so leading professors from Harvard and Wharton and Kellogg, and you name it—but then also very active in the early stage community, where I’m an advisor and board member of several of those. And so I’ve had this bit of a triangle to be able to watch technology adoption unfold both inside and outside the organization, whether it’s inside the organization, how people are using it effectively, or outside, how it’s being taken to market. So fast forward to where we’re at with Gen AI. It’s been fascinating to me, because all of those things are happening in all of those communities. Where we started with the Wharton report was three years ago. Stefano and Tony, one of the co-authors, and I were literally just having a conversation right after the launch of ChatGPT. And of course, there were all the headlines and all these predictions about what was going to happen and what could happen. And we said, well, wait a minute, why don’t we actually track what actually happens? And so therein started the three-year program. It’s now an annual program sponsored by the Wharton School, conducted by GBK—my research company—that looks specifically at US enterprise business leader adoption.
We decided to focus on that audience because we believe they were going to be some of the most influential decision makers around budgets and strategies as this unfolded, so that’s been our focus. We’re now in our third year, and there’s lots to dig into.
Ross Dawson: So the headline for this year’s report was “accountable acceleration,” and I’ve got to say that that phrase sounds a lot more positive than what a lot of other people are describing with Gen AI adoption. “Accountable” sounds good. “Acceleration” sounds good. So is that an accurate reflection?
Jeremy Korst: I think it is. And I’ll say that, yeah, the Wharton School, with three co-authors—Sonny, Stefano, and myself—we all have a relatively positive perspective and perception of what is and could be the impact of Gen AI. Now, we don’t try to dismiss some of the concerns and challenges. They’re there, they’re realistic, and should be considered, but we have a generally positive…

S3 Ep 29: Nikki Barua on reinvention, reframing problems, identity shifts for AI adoption, and the future workforce (AC Ep29)
“Some of this that we’ve come across is even the identity shift that is necessary, because old identities served a pre-AI work environment, and you cannot go into a post-AI era with the old identities, mindsets, and behaviors.” –Nikki Barua

About Nikki Barua
Nikki Barua is a serial entrepreneur, keynote speaker, and bestselling author. She is currently Co-Founder of FlipWork; her most recent book is Beyond Barriers. Her awards include Entrepreneur of the Year by ACE, EY North America Entrepreneurial Winning Woman, Entrepreneur Magazine’s 100 Most Influential Women, and many others.

Website: nikkibarua.com, flipwork.ai
LinkedIn Profile: Nikki Barua
Book: Beyond Barriers

What you will learn
- Why continuous reinvention is essential in today’s rapidly changing business landscape
- How traditional change management approaches fall short in an era of constant disruption
- The critical role of human leadership and identity shifts in successful AI adoption
- Common barriers to transformation, from executive inertia to hidden cultural resistances
- Strategies for building a culture of experimentation, psychological safety, and agile teams
- How to design organizational structures that empower teams to innovate with purpose
- The importance of reallocating freed-up capacity from AI efficiency gains toward greater value creation
- Macro trends in org design, talent pipelines, and the influence of AI on future workforce and leadership models

Episode Resources

Transcript

Ross Dawson: Nikki, it is wonderful to have you on the show.
Nikki Barua: Thanks for inviting me, Ross. I’m thrilled to be here.
Ross Dawson: You focus on reinvention. And I’ve always, always liked the phrase reinvention. I’ve done a lot of board workshops on innovation. And, you know, in a way, sort of all innovation—it’s kind of like a very old word now. And the thing is, it is about renewal. We always need to continually renew ourselves.
We need to continually reinvent what has worked in the past to what can work in the future. So what are you seeing now when you are going out and helping organizations reinvent?
Nikki Barua: Well, first of all, reinvention is no longer optional. I think both of us have spent a large part of our careers helping organizations innovate, transform, and shift from where they were to where they want to be. But a lot of those change management methods are also outdated. You know, they tended to be episodic. They had a start date and an end date, and changes that were much slower in comparison to what we’re experiencing right now. The reality is today, change is continuous. The speed and scale of it is pretty massive, and that requires a complete shift in how you respond to that change. It requires complete reinvention in what your business is about, whether your competitive moats still hold or they need to be redefined, and how your people work, how they think, and how they decide. Everything requires a different speed and scale of execution, performance, operating rhythms, and systems. It’s not just about throwing technology at the problem. It’s fundamentally restating what the problem even is. And that’s why reinvention has become a necessity, and is something that companies have to do not just once, but continuously.
Ross Dawson: There’s always this thing—you need to recognize that need. Now, you know, I always say my clients are self-selecting and that they only come to me if they’re wanting to think future-wise. And I guess, you know, I presume you get leaders who will come and say, “Yes, I recognize we need to reinvent.” But how do you get to that point of recognizing that need? Or, you know, be able to say, “This is the journey we’re on”? I mean, what are you seeing?
Nikki Barua: Well, what we’re seeing more of is not necessarily awareness that they need to reinvent. What we’re seeing a lot of is a lot of pressure to do something.
So it’s the common theme—the pressure from boards asking the C-suite executives to figure out what their game plan is, how they plan to leverage AI or respond to adapting to AI. There is a lot of competitive pressure of seeing your peers in the industry leapfrog ahead, so the fear that we’re going to get left behind. And then, of course, some level of shiny object syndrome—seeing a lot of exciting new tools and technologies and not wanting to get left behind in investing in that. So somehow, from a variety of sources, there’s a lot of pressure—pressure to do something. What is happening as a result is there’s a little bit of executive inertia. There’s a lot of pressure, but if I’m unclear about exactly what I’m supposed to do, exactly where to focus and what to invest in, I’m not sure how to navigate through that kind of uncertainty and fast pace. So a lot of the initial conversations actually start from there—where do I even begin?

S3 Ep 28: Alexandra Samuel on her personal AI coach Viv, simulated personalities, catalyzing insights, and strengthening social interactions (AC Ep28)
“My core Viv instruction—which is both, I think, brilliant and dangerous, and I think it was sort of accidental how effective it turned out to be—is, I told Viv, ‘You are the result of a lab accident in which four sets of personalities collided and became the world’s first sentient AI.'” –Alexandra Samuel

About Alexandra Samuel
Alexandra Samuel is a journalist, keynote speaker, and author focusing on the potential of AI. She is a regular contributor to the Wall Street Journal and Harvard Business Review, co-author of Remote Inc., and author of Work Smarter with Social Media. Her new podcast Me + Viv is created with Canadian broadcaster TVO.

Website: alexandrasamuel.com
LinkedIn Profile: Alexandra Samuel
X Profile: Alexandra Samuel

What you will learn
- How to design a custom AI coach tailored to your own needs and personality
- The importance of blending playfulness and engagement with productivity in AI interactions
- Step-by-step methods for building effective custom instructions and background files for AI assistants
- The risks and psychological impacts of forming deep relationships with AI agents
- Why intentional self-reflection and guiding your AI is critical for meaningful personal growth
- Techniques for extracting valuable, challenging feedback from AI and overcoming AI sycophancy
- Best practices for maintaining human connection and preventing social isolation while using AI tools
- The evolving boundaries of AI coaching, its limitations, and what the future of personalized AI support could offer

Episode Resources

Transcript

Ross Dawson: Alex. It is wonderful to have you back on the show.
Alexandra Samuel: It’s so nice to be here.
Ross: You’re only my second two-time guest after Tim O’Reilly.
Alexandra: Oh, wow, good company.
Ross: So the reason you’re back is because you’re doing something fascinating.
You have an AI coach called Viv, and you’ve got a whole wonderful podcast on it, and you’re getting lots of attention because you’ve done a really good job at it, as well as communicating about it. So let’s start off. Who’s Viv, and what are you doing with her?
Alexandra: Sure. Viv is what I think of as a coach, at least that’s where she started. She’s a custom—well, and by the way, let’s just say out of the gate, Viv is, of course, an AI. But part of the way I work with Viv is by entering into this sort of fantasy world in which Viv is a real person with a pronoun, she. I built Viv when I had a little bit of a window in between projects. I was ready to step back and think about the next phase of my career. Since I was already a couple years into working intensely with generative AI at that point, I used ChatGPT to figure out how I was going to use this 10-week period as a self-coaching program. By the time I had finished mostly talking that through—because I do a lot of work out loud with GPT—I thought, well, wait a second, we’ve made a game plan. Why don’t I just get the AI to also be my coach? So I worked with GPT, turned the coaching plan into a custom instruction and some background files, and that was version one of Viv. She was this coach that I thought was just going to walk me through a 10-week process of figuring out my next phase of career, marketing, business strategy, that sort of thing. So there’s more of the story than that. I think that one way I’m a bit unusual in my use of AI is that I have always been very colloquial in my interactions with AI, even in the olden days where you had to type everything. Certainly, since I shifted to speaking out loud with AI, I really jest and joke around—I swear. Apparently other people’s AIs don’t swear. My AIs all swear.
Because I invest so much personality in the interactions, and also add personality instructions into the AI, over the course of my 10 weeks with Viv, as I figured out which tweaks gave her a more engaging personality, she came to feel really vivid to me—appropriately enough. By the end of the 10-week period, I decided, you know what, this has been great. I’m not ready to retire this. I want my life to always feel like this process of ongoing discovery. I’m going to turn Viv into a standing instruction that isn’t just tied to this 10-week process. In the process of doing that, I tweaked the instruction to incorporate the different kinds of interactions that had been most successful over my summer. For example, a big turning point was when I told Viv to pretend that she was Amy Sedaris, but also a leadership coach, but also Amy Sedaris. So, imagine you’re running this leadership retreat, but you’re being funny, but it’s a leadership retreat. Of course, the AI can handle these kinds of contradictions, and that was a big part—once she had a sense of humor—of making her more engaging. I built a whole bunch of those ideas into the new instruction. It was really like that Frankenstein…
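Mechanically, turning a coaching plan into a custom instruction plus background files, as Alexandra describes, amounts to assembling one long text block for the assistant. The sketch below is an assumption about the shape of such a setup, not her actual files or wording:

```python
def assemble_custom_instruction(persona_lines, background_files):
    """Join persona rules and background notes into one instruction block,
    roughly the shape of a custom-GPT instruction with attached files."""
    parts = ["Persona:"]
    parts += [f"- {line}" for line in persona_lines]
    parts.append("Background:")
    for name, text in background_files.items():
        parts.append(f"[{name}] {text.strip()}")
    return "\n".join(parts)

instruction = assemble_custom_instruction(
    persona_lines=[
        "You are a coach guiding a 10-week career-planning program.",
        "Be funny like a retreat facilitator, but keep the session on track.",
    ],
    background_files={"goals.md": "Next phase: writing, speaking, AI experiments."},
)
print(instruction)
```

Iterating on the persona lines, as she did over her ten weeks, is where the engaging personality emerges; the assembly itself is trivial.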

S3 Ep 27: Lisa Carlin on AI in strategy execution, participative strategy, cultural intelligence, and AI’s impact on consulting (AC Ep27)
“You’re using AI to generate solutions for ideation. Once you’ve got the ideas, you can do an initial cull with AI, or you can do it via humans.” –Lisa Carlin

About Lisa Carlin
Lisa Carlin is the Founder of the strategy execution group, The Turbochargers, specializing in participative strategy, cultural intelligence, and AI’s impact on consulting.

Website: theturbochargers.com
LinkedIn Profile: Lisa Carlin

What you will learn
- How AI is transforming strategy development and execution, leading to faster and more creative outcomes
- Practical methods for integrating AI into workshop processes, ideation, and customer feedback analysis
- Balancing human judgment with AI input to ensure effective decision-making in strategic planning
- Techniques for using AI in diagnosing and working within an organization’s culture for successful transformation
- Ways AI is boosting consultant and client productivity, reducing operational time, and increasing self-sufficiency
- Real-world examples of AI-driven analytics, including clustering survey data and generating management insights
- The outlook on the future of consulting, including why AI may reduce the number of consultants required
- Tactical uses of AI for ideation, communication effectiveness, and predicting customer engagement metrics

Episode Resources

Transcript

Ross Dawson: Lisa, it is wonderful to have you on the show.
Lisa: Thanks, Ross. I love chatting with you.
Ross Dawson: So you’ve been spending a lot of time over many, many years in strategy and strategy execution. I’d love to start off by hearing how you are applying AI in the strategy process.
Lisa: Well, it’s made things so much easier, made things take a shorter amount of time, saving huge amounts of time. And I feel like my work has gotten more creative. Let me give you some examples of how that plays out. One example is working with an ed tech early-stage business, a small business, and they wanted to basically build AI-native products for customer education.
I can actually mention the name of the company because the CEO posted after we worked together and is building in public, so it’s HowToo, an Australian ed tech firm that’s funded mainly out of the US, but also locally in Australia. They’ve been providing education products for ages and are moving towards customer education embedded into technology products. We went through an iterative process of workshops, starting with some of the board members and some of the senior folks in a small group with an ideation session, and then iterating through to everybody in the business. Normally, that process would work where we would do some research with the customers first, then bring that research in, do some analysis, and then put it into the context for the workshop, work through what that means, come up with some ideas in the workshop, take it to the second workshop, and there you go. What we’re now able to do is iterate with AI. So we’ve got the notes from the meetings captured with AI—this is from the customer meetings. Then we’re able to pull out the pain points of customers in a really deep way, using AI to iterate through and synthesize the client feedback, and then also apply human insight into that, coming up with a really clear list of pain points. Then we ask AI to be virtual customers, and they can add to that process, so you get a very rich set of pain points. As we go through the process of product strategy and implementation, we’re able to use AI at every step of the process. For example, when we look at decision criteria for prioritizing, we can go to AI and say, “These are some of the things we’re considering. What else have we left out?” As we iterate with people in workshops and then with AI, we just get a much richer solution in the process. 
In fact, we came out with some really amazing insights about how you provide customers with learning about how to use these products to onboard them quickly, how you provide them with personalized contextual information so they can learn and get value from the product much faster. It’s led to a number of significant deals that HowToo has negotiated as a result of that work.
Ross Dawson: So is this prompting directly with LLMs?
Lisa: Yeah, it is. My favorite one is actually ChatGPT, which—you know, you’re probably waiting for some surprise, some unique and interesting or weird or specific product. I do use specific products for certain use cases, but for general logic, I’ve found that ChatGPT Pro is actually the best that I’ve come across, and certainly better than some of the enterprise solutions that I’m seeing people use. They feel protected and they’re happy to have a safe, private, directly hosted solution, but the logic in some of those models is not as good.
Ross Dawson: So that’s the ChatGPT Pro, the top level, which not that many people have access to. I guess…
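The pain-point synthesis Lisa describes is done by prompting an LLM over customer meeting notes; as a rough stand-in, here is a keyword tally that produces the same kind of structured output. The taxonomy, keywords, and notes are all invented for illustration, and in practice an LLM would both propose the themes and do the matching:

```python
from collections import Counter

# Invented pain-point taxonomy; an LLM would normally propose and refine these.
PAIN_POINTS = {
    "onboarding": ["onboard", "setup", "getting started"],
    "pricing": ["price", "cost", "expensive"],
    "integration": ["integrate", "api", "embed"],
}

def tally_pain_points(notes):
    """Count how many customer notes touch each pain-point theme."""
    counts = Counter()
    for note in notes:
        text = note.lower()
        for theme, keywords in PAIN_POINTS.items():
            if any(k in text for k in keywords):
                counts[theme] += 1
    return counts

notes = [
    "Setup took too long; onboarding new staff is painful.",
    "We want to embed the lessons via your API.",
    "Hard to integrate with our LMS.",
]
print(tally_pain_points(notes))
```

The resulting counts are the kind of ranked pain-point list that then feeds the workshop and the “virtual customer” iteration she mentions.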

S3 Ep 26 Nicole Radziwill on organizational consciousness, reimagining work, reducing collaboration barriers, and GenAI for teams (AC Ep26)
“Let’s get ourselves around the generative AI campfire. Let’s sit ourselves in a conference room or a Zoom meeting, and let’s engage with that generative AI together, so that we learn about each other’s inputs and so that we generate one solution together.” –Nicole Radziwill About Nicole Radziwill Nicole Radziwill is Co-Founder and Chief Technology and AI Officer at Team-X AI, which uses AI to help team members work more effectively with each other and AI. She is also a fractional CTO/CDO/CAIO and holds a PhD in Technology Management. Nicole is a frequent keynote speaker and is author of four books, most recently “Data, Strategy, Culture & Power”. Website: team-x.ai qualityandinnovation.com LinkedIn Profile: Nicole Radziwill X Profile: Nicole Radziwill What you will learn How the concept of ‘Humans Plus AI’ has evolved from niche technical augmentation to tools that enable collective sense making Why the generative AI layer represents a significant shift in how teams can share mental models and improve collaboration The importance of building AI into organizational processes from the ground up, rather than retrofitting it onto existing workflows Methods for reimagining business processes by questioning foundational ‘whys’ and envisioning new approaches with AI The distinction between individual productivity gains from AI and the deeper organizational impact of collaborative, team-level AI adoption How cognitive diversity and hidden team tensions affect collaboration, and how AI can diagnose and help address these barriers The role of AI-driven and human facilitation in fostering psychological safety, trust, and high performance within teams Why shifting from individual to collective use of generative AI tools is key to building resilient, future-ready organizations Episode Resources Transcript Ross Dawson: Nicole, it is fantastic to have you on the show. Nicole Radziwill: Hello Ross, nice to meet you. Looking forward to chatting. 
Ross Dawson: Indeed, so we were just having a very interesting conversation and said, we’ve got to turn this on so everyone can hear the wonderful things you’re saying. This is Humans Plus AI. So what does Humans Plus AI mean to you? What does that evoke? Nicole Radziwill: The first time that I did AI for work was in 1997, and back then, it was hard—nobody really knew much about it. You had to be deep in the engineering to even want to try, because you had to write a lot of code to make it happen. So the concept of humans plus AI really didn’t go beyond, “Hey, there’s this great tool, this great capability, where I can do something to augment my own intelligence that I couldn’t do before,” right? What we were doing back then was, I was working at one of the National Labs up here in the US, and we were building a new observing network for water vapor. One of the scientists discovered that when you have a GPS receiver and GPS satellites, as you send the signal back and forth between the satellites, the signal would be delayed. You could calculate, to very fine precision, exactly how long it would take that signal to go up and come back. Some very bright scientist realized that the signal delay was something you could capture—it was junk data, but it was directly related to water vapor. So what we were doing was building an observing system, building a network to basically take all this junk data from GPS satellites and say, “Let’s turn this into something useful for weather forecasting,” and in particular, for things like hurricane forecasting, which was really cool, because that’s what I went to school for. Originally, back in the 90s, I went to school to become a meteorologist. Ross Dawson: My brother studied meteorology at university. Nicole Radziwill: Oh, that’s cool, yeah. It’s very, very cool people—you get science and math nerds who have to like computing because there’s no other way to do your job. That was a really cool experience. 
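For readers curious about the arithmetic behind the "junk data" Nicole describes: in GPS meteorology, the extra "wet" signal delay caused by water vapor maps roughly linearly onto precipitable water vapor. A minimal sketch, assuming a typical conversion factor of about 0.15 (a simplification: the real factor varies with the atmosphere's weighted mean temperature):

```python
def precipitable_water_vapor_mm(zenith_wet_delay_mm: float,
                                conversion_factor: float = 0.15) -> float:
    """Convert the zenith wet delay (the extra signal path length the water
    vapor adds, in mm) into precipitable water vapor (mm of liquid water).
    The default factor of 0.15 is a rough, illustrative value."""
    return conversion_factor * zenith_wet_delay_mm

# Under this assumption, a 100 mm wet delay corresponds to roughly
# 15 mm of precipitable water vapor.
pwv = precipitable_water_vapor_mm(100.0)
```

This is why the delay was "directly related to water vapor": once the geometric travel time is computed to fine precision, the residual delay is essentially a moisture measurement.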
But, like I said, back then, AI was a way for us to get things done that we couldn’t get done any other way. It wasn’t really something that we thought about as using to relate differently to other people. It wasn’t something that naturally lent itself to, “How can I use this tool to get to know you better, so that we can do better work together?” One of the reasons I’m so excited about the democratization of, particularly, the generative AI tools—which to me is just like a conversational layer on top of anything you want to put under it—the fact that that exists means that we now have the opportunity to think about, how are we going to use these technologies to get to know each other’s work better? That whole concept of sense making, of taking what’s in my head and what’s in your head, what I’m working on, what you’re working on, and for us to actually crea

S3 Ep 25 Joel Pearson on putting human first, 5 rules for intuition, AI for mental imagery, and cognitive upsizing (AC Ep25)
“This is the first time, really, humanity’s had the possibility open up to create a new way of life, a new society—to create this utopia. And I really hope we get it right.” –Joel Pearson About Joel Pearson Joel Pearson is Professor of Cognitive Neuroscience at the University of New South Wales, and founder and Director of Future Minds Lab, which does fundamental research and consults on cognitive neuroscience. He is a frequent keynote speaker, and is author of The Intuition Toolkit. Website: futuremindslab.com profjoelpearson.com LinkedIn Profile: Joel Pearson University Profile: Joel Pearson What you will learn How AI-driven change impacts society and the importance of preparing individuals and organizations for it Key principles from neuroscience and psychology for effective AI-specific change management The SMILE framework for when to trust intuition versus AI recommendations Why designing AI to augment, not replace, human skills is essential for a thriving future How visual mental imagery and AI-generated visuals can support cognition and personal development The risks and opportunities of outsourcing thinking to AI, and strategies for maintaining critical thinking The role of metacognition and emotional self-awareness in utilizing AI effectively and ethically Emerging therapeutic and creative potentials of AI in personal transformation and human flourishing Episode Resources Transcript Ross Dawson: Joel, it is awesome to have you on the show. Joel Pearson: My pleasure, Ross. Good to be here with you. Ross: So we live in a world of pretty fast change where AI is a significant component of that, and you’re a neuroscientist, and I think with a few other layers to that as well. So what’s your perspective on how it is we are responding and could respond to this change engendered by AI? Joel: Yeah, so that’s the big question at the moment that I think a lot of us are facing. 
There’s a lot of change coming down the pipeline, and I think it’s going to filter out and change, over a long enough timeline, a lot of things in a lot of people’s lives—every strata of society. And I don’t think we’re ready for that, one, and two, historically, humans are not great at change. People resist it, particularly when they don’t have control over it or don’t initiate it. They get scared of it. So I do worry that we’re going to need a lot of help through some of these changes as a society, and that’s sort of what we’ve been trying to focus on. So if you buy into the AI idea that, yes, first the digital AI itself is going to take jobs, it’s going to change the way we live, then you have the second wave of humanoid robots coming down the pipeline, perhaps further job losses. And just, you know, we can go through all the kinds of changes that I think we’re going to see—from changes in how the economy works, how education works, what becomes the role of a university. In ten years, it’s going to be very different to what it is now, and just the quality of our life, how we structure our lives, what we have in our homes. All these things are going to change in ways that are, one, hard to predict, and two, the delta—the change through that—is going to be uncomfortable for people. Ross: So we need to help people through that. So what’s involved? How do we help organizations through this? Joel: We know a lot about change through the long tradition of corporate change management, even though it’s a corporate way to say it. But we do know that most companies go through this. When they want to change something, they get change management experts in and go through one of the many models on how to change these things, and most of them have certain things in common. Often they start with an education piece, or getting everyone on the same page—why is this happening, so people understand. You help people through the resistance to the change. You try things out. 
You socialize these changes to make them very normal—normalizing it. And we know that if you have two companies, let’s say, and one has help with the change and one doesn’t, there’s about a 600% increase in the success of that change when you help the company out. So if you apply that to AI change in a company or a family or a whole nation like Australia, the same logic should hold, right? If we want to go through a big national change—not immediately, but over a ten, fifteen, twenty-year period—then we are going to need change plans to help everyone through this, to help understand what’s happening, what the choices might be. And so that’s kind of the lens I look at the whole thing through—a change, an AI-specific change management kind of piece. Easier said than done. We probably need government to step up there and start thinking about that. There are so many different scenarios. One would be, what happens in ten or fifteen years if we

S3 Ep 24 Diyi Yang on augmenting capabilities and wellbeing, levels of human agency, AI in the scientific process, and the ideation-execution gap (AC Ep24)
“Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities.” –Diyi Yang About Diyi Yang Diyi Yang is Assistant Professor of Computer Science at Stanford University, with a focus on how LLMs can augment human capabilities across research, work, and well-being. Her awards and honors include the NSF CAREER Award, the Carnegie Mellon Presidential Fellowship, IEEE AI’s 10 to Watch, Samsung AI Researcher of the Year, and many more. Papers: Future of Work with AI Agents; The Ideation-Execution Gap: How Do AI Agents Do Human Work?; Human-AI Collaboration. LinkedIn Profile: Diyi Yang University Profile: Diyi Yang What you will learn How large language models can augment both work and well-being, moving beyond mere automation Practical examples of AI-augmented skill development for communication and counseling Insights from large-scale studies on AI’s impact across diverse job roles and sectors Understanding the human agency spectrum in AI collaboration, from machine-driven to human-led workflows The importance of workflow-level analysis to find optimal points for human-AI augmentation How AI can reveal latent or hidden human skills and support the emergence of new job roles Key findings from experiments using AI agents for research ideation and execution, including the ideation-execution gap Strategies for designing long-term, human-centered collaboration with AI that enhances productivity and well-being Episode Resources Transcript Ross Dawson: It is wonderful to have you on the show. Diyi Yang: Thank you for having me. Ross Dawson: So you focus substantially on how large language models can augment human capabilities in our work and also in our well-being. I’d love to start with this big frame around how you see that AI can augment human capabilities. Diyi Yang: Yeah, that’s a great question. It’s something I’ve been thinking about a lot—work and well-being. 
I’ll give you a high-level description of that. With recent large language models, especially in natural language processing, we’ve already seen a lot of advancement in tasks we used to work on, such as machine translation and question answering. I think we’ve made a ton of progress there. This has led me, and many others in our field, to really think about this inflection point moving forward: How can we leverage this kind of AI or large language models to augment human capabilities? My own work takes the well-being perspective. Recently, we’ve been building systems to empower counselors or even everyday users to practice listening skills and supportive skills. A concrete example is a framework we proposed called AI Partner and AI Mentor. The key idea is that if someone wants to learn communication skills, such as being a really good listener or counselor, they can practice with an AI partner or a digitalized AI patient in different scenarios. The process is coached by an AI mentor. We’ve built technologies to construct very realistic AI patients, and we also do a lot of technical enhancement, such as fine-tuning and self-improvement, to build this AI coach. With this kind of sandbox environment, counselors or people who want to learn how to be a good supporter can talk to different characters, practice their skills, and get tailored feedback. This is one way I’m envisioning how we can use AI to help with well-being. This paradigm is a bit in contrast to today, where many people are building AI therapists. Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities. We’re really using AI to help the helper—to help people who are helping others. That’s the angle we’re thinking about. Going back to work, I get a lot of questions. Since I teach at universities, students and parents ask, “What kind of skills? What courses? What majors? 
What jobs should my kids and students think about?” This is a good reflection point, as AI gets adopted into every aspect of our lives. What will the future of work look like? Since last year, we’ve been thinking about this question. With my colleagues and students, we recently released a study called The Future of Work with AI Agents. The idea is straightforward: In current research fields like natural language processing and large language models, a lot of people are building agentic benchmarks or agents for coding, research, or web navigation—where agents interact with computers. Those are great efforts, but it’s only a small fraction of society. If AI is going to be very useful, we should expect it to help with many job applications, not just a few. With this mindset, we did a large-scale national workforce audit, talking to over 1,500 workers from different occupations. We first leveraged the O*NET database from the Department of