Humans + AI

196 episodes — Page 1 of 4

Kathleen deLaski on reimagining higher education, generational mobility, building AI skills, and human originality (AC Ep43)

May 13, 2026 · 38 min

David Vivancos on the end of knowledge, cognitive flourishing, resilient societies, and artificial democracy (AC Ep42)

May 6, 2026 · 35 min

Jon Husband on wirearchy, web weaving, the relational economy, and drift diving (AC Ep41)

Apr 29, 2026 · 38 min

Michael Gebert on designing freedom, human self-determination, cognitive sovereignty, and systems of agency (AC Ep40)

Apr 22, 2026 · 37 min

S3 Ep 39: Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning (AC Ep39)

“The technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people.” –Marshall Kirkpatrick

About Marshall Kirkpatrick

Marshall Kirkpatrick is founder of sustainability consultancy Earth Catalyst and AI thinking tool What’s Up With That. His many previous roles include founder of influence network analysis tool Little Bird, which was acquired by Sprinklr, where he was most recently Vice President of Market Research.

Website: whatsupwiththat.app
LinkedIn Profile: Marshall Kirkpatrick

What you will learn

- How generative AI transforms cognitive tools and lowers barriers to advanced thinking
- Techniques to combine human and AI-powered sensemaking for richer insights
- Practical strategies for filtering and extracting value from infinite information
- The importance and application of diverse mental models in modern decision-making
- Methods to balance manual cognitive work with AI assistance for optimal outcomes
- The role of adaptive interfaces in enhancing individual cognitive capacity
- Metacognitive approaches to networks and how AI can foster organizational awareness
- Ethical and societal implications of democratizing access to AI-powered cognitive enhancements

Episode Resources

Transcript

Ross Dawson: Marshall, it is awesome to have you back on the show.

Marshall Kirkpatrick: Oh, thank you, Ross. It’s such a pleasure to be reconnecting with you here. Thanks for having me on.

Ross Dawson: So you were on very, very early in the podcast, when it was Thriving on Overload and the interviews fed into the book, and some of the wonderful things you were doing got incorporated into Thriving on Overload. So I think today, in this world of generative AI, which has transformed everything, including the way in which we think, the Thriving on Overload themes are still super, super relevant, and in a way, we need to be talking about them more. That theme at the time was finite cognition, infinite information. How do we work well with it? I don’t know if our cognition has become more finite, but the information has become more infinite, and there’s just more and more. But also, it cuts two ways, as in, what is the source of all the information? AI is also a tool. So anyway, let’s segue from some of your cognitive thinking tools, technology-enabled cognitive thinking tools and so on, which we looked at. So how do you—where are we? 2026, what do you think about human cognition in our current universe?

Marshall Kirkpatrick: Well, especially when you frame it up in Thriving on Overload terms. I mean, it was four, five long years ago that we last spoke, and the book that came out of it was just fantastic. I think it has some timeless qualities, and I think that the technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people. That’s what I hope. I think that, yeah, between individuals and organizations, there’s so much that, historically, someone like you or me or the people closest in our networks were willing and able to do and excited to do, that many other people said, “That sounds like a lot of work.” The bar is lower now, because a lot of just the raw cognitive processing can be outsourced into a technology that serves as a lever.

Ross Dawson: Well, I mean, that idea of levers for these cognitive tools is interesting. I guess, the very crude way of saying it is, we’ve got inputs into our human brain, and then we are processing information. I’m just thinking out loud a bit here, but it’s like, okay, we have tools to be able to filter, to present, to find what is most relevant, to present it to us in the ways which are most useful—very obvious, like summarization, visualization. Then as we are processing it ourselves, we have dialog, or we can have interlocutors who we can engage with and be able to refine and help our thinking. Does that sort of make sense, or how would you flesh that out?

Marshall Kirkpatrick: Yeah, I mean, when you put it that way, it makes me think about Harold Jarche and his Seek, Sense, Share model, right? I think that AI, especially when connected to things like search and syndication and other traditional technologies, can impact all three of those stages. It can hypercharge our search. I think the archetypal example of that, on some level, feels like the combinatorial drug research being done, where just an otherwise cognitively uncontainable quantity of combinatorial possibilities between molecules can be sought out and experimented with for a desirable reaction. And then that sensing, or the pattern recognition that AI is so good at, is something that we do as humans—some of us better than others—and it’s a lifelong muscle to build and what have you. But the AI is really, really good at it, and so it’s a ladder to climb…

Apr 8, 2026 · 39 min

S3 Ep 38: Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension (AC Ep38)

“Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.” –Nina Begus

About Nina Begus

Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and First Encounters with AI.

Website: ninabegus.com
LinkedIn Profile: Nina Begus
Book: Artificial Humanities

What you will learn

- How ancient myths and archetypes influence our understanding and design of AI
- Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems
- The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries
- How metaphors shape our interactions with AI products and the user experiences companies choose to enable
- The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates
- Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology
- What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses
- Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI

Episode Resources

Transcript

Ross Dawson: Nina, it is wonderful to have you on the show.

Nina Begus: Thank you for having me.

Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities?

Nina Begus: Well, this was really a new framework that I’ve developed while I was working on the relationship between AI and fiction, and I started working on this about 15 years ago when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed. I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental in this way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it.

Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts. Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created. So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them.

Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives that I’ve seen at the same time, both in technological products that were coming to the market and in the way technologists were thinking about it, and also in fictional products and films and novels in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space. The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, North Africa, and so on. So I think this kind of story is actually telling us a lot about how humans…

Apr 1, 2026 · 34 min

S3 Ep 37: Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution (AC Ep37)

“The center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment.” –Henrik von Scheel

About Henrik von Scheel

Henrik von Scheel is Co-Founder of advisory firm Strategic Intelligence, Chairman of the Climate Asset Trust, Vice Chairman of the Regulatory Intelligence Committee, and Professor of Strategy, Arthur Lok Jack School of Business, among other roles. He is best known as originator of Industry 4.0, with many awards and extensive global recognition of his work.

Website: von-scheel.com
LinkedIn Profile: Henrik von Scheel

What you will learn

- Why human-centered AI is crucial for widespread societal prosperity
- The impact of AI hype cycles, media narratives, and the realities of technology adoption
- How equitable wealth distribution and capital allocation in AI can shape economic outcomes
- Risks around data ownership, privacy, and the importance of controlling your own data in the AI era
- Divergent approaches to AI regulation in the US, EU, and China, and the implications for global AI leadership
- The importance of trust calibration and intentional human-AI collaboration in practical applications
- How education and lifelong learning can be reshaped by AI to support individualized growth and mistake-enabled reasoning
- Opportunities for AI to amplify individual talents, address educational gaps, and enable more specialized and innovative skills

Episode Resources

Transcript

Ross Dawson: Henrik, it is wonderful to have you on the show.

Henrik von Scheel: Thank you very much for having me, Ross.

Ross Dawson: So I think we’re pretty aligned in believing that we need to approach AI from a human-centered perspective and how it can bring us prosperity. So I’d just love to start with, how do you think about how we should be thinking about AI?

Henrik von Scheel: Well, I think, like every technology that comes into play, it brings a lot of changes to us. But I think the center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment. So technology is something that we apply, but it’s the strategy on how we adapt with it that makes a difference. It’s never the technology itself. So I’m excited. It’s one of the most exciting periods for the industry and for us as people.

Ross Dawson: There’s a phrase which I’ve heard you say more than once: AI should make us smarter, healthier, and wealthier. So if that’s the case, how do we frame it? How do we start to get on that journey?

Henrik von Scheel: So I think what people experience today in AI is that they experience a lot of media hype—large language models, ChatGPT, and all of this—and they consume it from the media. So there’s a big hype around it, and I believe that AI is about to crash fundamentally, but crashing in technology is not bad, right? There are a lot of promises and then an inability to deliver, and then it crashes. What you hear in the media today is very much driven by a story of them raising funds because it’s so expensive, and so they are promising the world of everything and nothing, and the reality looks a little bit better. The world that they are presenting is that you will be replaced, and you will be happy, and you’ll be served by everything else. And somehow it will work out. We don’t know how, but it will work out. And that’s not a future that is really a real future. The future must include that everybody gets smarter, wealthier, and healthier. And when I say everybody, I mean not only the guys that have money, that they become more rich, or the middle class. It’s like everybody in society should get smarter from AI. That means part of the things that they need to learn or how human evolution works should be better, and it should make us healthier people and wealthier people. So it should not only be that we sacrifice our convenience with our freedom, with our privacy, with our environment, or any other things that we put on the table to get convenience back. That exchange we have done a couple of times, and it’s not working really well for humans, and it’s not a good trade for us, right?

Ross Dawson: Yeah, I love that. And it’s quite simple, you know, you can say it, it’s clear, it sounds good, and it is a really clear direction. But you’re actually pointing in a couple of ways there to capital allocation. So obviously, if you’re looking at the AI economic story, this is around this diversion of capital from other places to AI model development, data centers, deployment, and so on. But also, when you’re saying wealth here, this is around the distribution of wealth—where we’re allocating capital to AI development, but also from the way…

Mar 25, 2026 · 47 min

S3 Ep 36: Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation (AC Ep36)

“Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through.” –Joanna Michalska

About Dr Joanna Michalska

Dr Joanna Michalska is Founder of Ethica Group Ltd., Co-Founder of The Strategic Centre, and an advisor to boards on AI risk, ethics, and governance. She holds a PhD in Strategic Enterprise Risk Management and has twenty years’ experience leading enterprise risk, strategy and transformation at J.P. Morgan and HSBC.

Website: ethicagroup.ai
LinkedIn Profile: Dr Joanna Michalska

What you will learn

- How boards and executives can rethink governance and accountability in the age of AI
- The importance of embedding governance into organizational ecosystems for agile, responsible AI adoption
- How to map and assign human accountability for both automated and hybrid AI-human decisions
- The decision architecture needed for scalable oversight, intervention, and escalation pathways
- Practical examples of effective AI oversight in areas like fraud detection and exception handling
- Steps for complying with new regulations like the EU AI Act, including inventorying AI systems and risk tiering
- Why human qualities like emotional intelligence, psychological safety, and honest communication are critical in AI-driven organizations
- How leaders can foster organizational resilience and help teams adapt by building AI literacy, retraining, and supporting personal growth

Episode Resources

Transcript

Ross Dawson: Joanna, it’s a delight to have you on the show.

Joanna Michalska: Well, thank you for having me, Ross.

Ross Dawson: So, AI is wonderful, but it also brings us into a whole lot of new territory where we have to be careful in various ways. I’d love to just hear, first of all, the big framing around how boards and executive teams need to be thinking about governance and accountability as AI is incorporated more and more into work and organizations.

Joanna Michalska: I think we’re all very excited about the capability that exists today to help us enhance our performance and the way we think about strategic execution for our organizations. It has multidimensional consequences for how we adapt it. What’s very important right now is, as executives and boards think about accelerating their ambitions and growth plans, there needs to be awareness of two components. First, how do we as leaders, as humans, need to adapt to that new environment? There are new conditions, or perhaps existing conditions that really need to be enhanced. They’re very important to exist in order to be able to adapt and to scale. Second, do we actually have the right systems in place to enable that scale? I think it’s important to recognize that, yes, governance has always existed, but the way it existed was more as external supporting scaffolding, rather than being built into an organizational ecosystem. We also need to have the right leadership in place to ensure that decisions are made in the right way and the organization is designed in a much more robust, agile way. These two conditions are critical for not only increasing adoption, but also doing so in a safe and responsible way, especially as we expand our ambitions for the future. It’s exciting, but there’s also a lot of caution and a lot of questions being asked by executives at this time.

Ross Dawson: Yes, and I guess the more we can address those concerns upfront, the more it enables us to do. I have this idea of minimum viable governance—at least having some governance in place so we don’t go too badly astray. But I always think of governance for transformation as: how do you set governance not as a brake to slow you, but in fact to accelerate you, because you have confidence in how you’re going about it?

Joanna Michalska: Absolutely! I think the mindset shift is very important, because governance, to your point, has always been seen as a compliance-driven thing that we must do because regulators require us to, and we need to demonstrate we have these policies and procedures in place and the right people in the right positions. Now, what the new environment is requiring of us—as executives, even board members—is a different set of responsibilities that really cannot be assumed as pre-existing. In this accelerated environment—let’s call it that, rather than just “AI,” because it’s so overused and can mean so many different things—where the automation rate is fast and overtaking everything, governance needs to change. It can’t be an afterthought or something we designed at one point in the past and now just try to fit into what’s happening. It really needs to become a well-designed, living organism. It needs to organically evolve. It needs to have the right people with the right accountability that is well understood. Accountability that was designed in the past needs…

Mar 18, 2026 · 34 min

S3 Ep 35: Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35)

“You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther

About Cornelia C. Walther

Cornelia C. Walther is Senior Fellow at Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is author of many books, with her latest book, Artificial Intelligence for Inspired Action (AI4IA), due out shortly. She was previously a humanitarian leader working for over 20 years at the United Nations driving social change globally.

Website: pozebeingchange
LinkedIn Profile: Cornelia C. Walther
University Profile: knowledge.wharton

What you will learn

- How the ‘hybrid tipping zone’ between humans and AI shapes society’s future
- The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI
- The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration
- Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence
- What defines ‘pro social AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet
- The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’
- Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI
- Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of pro social AI initiatives

Episode Resources

Transcript

Ross Dawson: Cornelia, it is fantastic to have you on the show.

Cornelia Walther: Thank you for having me, Ross.

Ross: So your work is very wonderfully humans plus AI, in being able to look at humans and humanity and how we can amplify the best as possible. One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is?

Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium- or long-term evidence about how the consequences will play out. Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damage. Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map? Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But for being human, I would argue, it’s a very dangerous luxury to have.

Ross: I just want to dig down quite a lot in there, but I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome?

Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way…

Mar 12, 2026 · 36 min

S3 Ep 34: Ross Dawson on Humans + AI Agentic Systems (AC Ep34)

“Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on.” –Ross Dawson

About Ross Dawson

Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload.

Website: Collaborating with AI Agents | Intelligent AI Delegation | Agentic Interactions
LinkedIn Profile: Ross Dawson

What you will learn

- How human-AI teams outperform human-only teams in productivity and efficiency
- The crucial role of understanding AI strengths and limitations when designing collaborative workflows
- Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity
- Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust
- Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents
- How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts
- The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation
- Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction

Episode Resources

Transcript

Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I’m going to just share a bit of an update and then share insights from three recent research papers that dig into something which I think is exceptionally important, which is how humans work with AI agentic systems. And we’ll look at a few different layers of that, from how small humans plus agent teams work, through to how we can delegate decisions to AI, through to some of the broader implications.

But first, a bit of an update. 2026 seems to be moving exceptionally fast. It’s a very interesting time to be alive, and I think it’s even pretty hard to see what the end of this year is going to look like. So for me, I am doing my client work as usual. I’ve got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on, with a few industry-specific ones in financial services and elsewhere. I’m also doing some work as an advisor on AI transformation programs, helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework: how you look at the phases, mapping those out, working out the issues, and being able to guide and coach the leaders to do that effectively.

But the rest of my time is focused on three ventures, and I’ll share some more about these later on. These are fairly evidently tied to my core interests. Fractious is our AI for strategy app. This was really building a way in which we can capture the detailed nuance of the strategic thinking of leaders of the organization, to disambiguate it, to clarify it, and enable that to then be built into strategic options, strategic hypotheses, and to be able to evolve effectively. That’ll be in beta soon. Please reach out if you’re interested in being part of the beta program, and then that’ll go to market. So I’m deeply involved in that. We also have our Thought Weaver software, rebuilding previous software which was already built on AI-augmented thinking workflows. That’s more an individual tool that will be going into beta in the next weeks. So again, go to Thought Weaver. Actually, don’t—the website isn’t updated yet—but I’ll let you know when it’s out, or keep posted for updates on that.

I’m also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve been through the phase of augmentation of individuals, and we still need to work hard at doing that better, but the next phase for organizations is to focus on teams. How do you work with teams where we have both human members and AI agentic members? It creates a whole different series of dynamics and new skills and capabilities: how to participate in a humans plus AI team and how to lead humans plus AI teams. And that is again going into the first few test organizations in the next month or so. So again, just let me know. So today what we’re going to look at is this theme: teams of humans working with AI agents. So not individual AI as in chat, but where we have a lot of agents with various degrees of autonomy, but also agentic systems…

Mar 4, 2026 · 19 min

S3 Ep 33: Davide Dell’Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics (AC Ep33)

“In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” –Davide Dell’Anna

About Davide Dell’Anna

Davide Dell’Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space.

Website: davidedellanna.com
LinkedIn Profile: Davide Dell’Anna
University Profile: Davide Dell’Anna

What you will learn

- The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement
- Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses
- How lessons from human-human and human-animal teams inform better design of human-AI collaboration
- Key differences between humans and AI in teams, such as accountability, replaceability, and identity
- The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness
- Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams
- The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values
- New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration

Episode Resources

Transcript

Ross Dawson: Hi Davide. It’s wonderful to have you on the show.

Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me.

Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence?

Davide: Well, thank you so much for the question. Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, more scientifically and societally, I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence.

Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the term hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, you point out that these are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation?

Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people…

Feb 25, 202635 min

S3 Ep 32 Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32)

“You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.” – Felipe Csaszar About Felipe Csaszar Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy. Website: papers.ssrn.com LinkedIn Profile: Felipe Csaszar University Profile: Felipe Csaszar What you will learn How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation. The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities. The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation. How AI assists in both visualizing strategists’ mental models and expanding the complexity of strategic frameworks. Experimental findings showing AI’s ability to generate and evaluate business strategies, often matching or outperforming humans. Emerging best practices and challenges in human-AI collaboration for more effective strategy processes. The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning. Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making. Episode Resources Transcript Ross Dawson: Felipe, it’s a delight to have you on the show. Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me. Ross Dawson: So many, many interesting things for us to dive into. 
But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have been traditionally the ones responsible for strategy, and presumably will continue to be for some time. However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving. Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making so far has always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens a possibility that now you could have humans helped by AI, and maybe at some point, AI is designing the strategies of companies. One way of thinking about why this may be the case is to think about the cognitive operations that are involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching. The other is representing: you think about the world from a given perspective, and from that perspective, there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions of different people, and you have to combine them. This can be done in different ways, but a typical one is to use the majority rule or unanimity rule sometimes. In reality, the way in which you combine ideas is much more complicated than that—you take parts of ideas, you pick and choose, and you combine something. So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of those. Let’s go one by one. So, search: now AIs, the current LLMs, they know much more about any domain than most people. 
There’s no one who has read as much as an LLM, and they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans and farther away, because you can only search things which you are familiar with, while an LLM is familiar with many, many things that we are not familiar with. So they can search faster and farther than humans—a big effect on search. Then, representation: a typical example before AI about the value of representations is the story of Merrill Lynch. The big idea of Merrill Lynch was how good a bank would look if it was like a supermarket. That’s a shift in representations. You know how a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank, and that was the big idea of Mr. Merrill Lynch, and the rest is history. That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI, it’s automatic, it’s free. You change their prompt, and immediately you will have

Feb 18, 202638 min

S3 Ep 31 Lavinia Iosub on AI in leadership, People & AI Resources (PAIR), AI upskilling, and developing remote skills (AC Ep31)

“In this next era, the key to leadership will be blending systems thinking and AI automation—at least being aware of what you can do with it—with empathy, discernment, connection, and clarity.” – Lavinia Iosub About Lavinia Iosub Lavinia Iosub is the Founder of Livit Hub Bali, which has been named one of Asia’s Best Workplaces, and Remote Skills Academy, which has enabled 40,000+ youths globally to develop digital and remote work skills. She has been named a Top 50 Remote Innovator and a Top Voice in Asia Pacific on the future of work, and her work has been featured in the Washington Post, CNET, and other major media. Website: lavinia-iosub.com liv.it LinkedIn Profile: Lavinia Iosub X Profile: Lavinia Iosub What you will learn How AI can augment leadership decision-making by enhancing cognitive processes rather than replacing human judgment Strategies for integrating AI into teams, focusing on volunteer-driven adoption and fostering AI fluency without forcing uptake The importance of continuous experimentation and knowledge sharing with AI tools for organizational growth and team building Why successful leadership in the AI era requires blending systems thinking, empathy, and a focus on human-AI collaboration How organizational value is shifting from knowledge accumulation toward skills like curiosity, adaptability, and discernment The concept of “people and AI resources” (PAIR), emphasizing the quality of partnership between humans and AI for organizational effectiveness Critical skills for future workers in an AI-driven world, such as AI orchestration, emotional clarity, and the ability to direct AI outputs with taste and judgment Practical lessons from the Remote Skills Academy in democratizing access to digital and AI skills for a diverse range of job seekers and business owners Episode Resources Transcript Ross Dawson: Lavinia, it is awesome to have you on the show. Lavinia Iosub: Thank you so much for having me, Ross.
Ross Dawson: Well, we’ve been planning it for a long time. We’ve had lots of conversations about interesting stuff. So let’s do something to share with the world. Lavinia Iosub: Let’s do it. Ross Dawson: So you run a very interesting organization, and you are a leader who is bringing AI into your work and that of your team, and more generally, providing AI skills to many people. I just want to start from that point—your role as a leader of a diverse, interesting organization or set of organizations. What do you see as the role of AI for you to assist you in being an effective leader? Lavinia Iosub: Great question. I think that the two of us initially met through the AI in Strategic Decision Making course, right? So I would say that’s actually probably one of the top uses for me, or one of the areas where I found it very useful. The most important thing here is to not start with the mindset that AI will make any worthy decisions for you, but that it will augment your cognition and your decision making when you are feeding it the right context, the right master prompts, the right information about your business, your values, what you’re trying to achieve, how you normally make decisions, and so on. Then you work with it, have a conversation with it, and even build an advisory board of different kinds of AI personas that may disagree or have slightly different views. So it enhances your thinking, rather than serving you decisions on a plate that you don’t know where they come from or what they’re based on. That’s one of the things that’s been really interesting for me to explore. If we zoom out a little bit, I think a lot of people think of AI as a way of doing the things they don’t want to do. 
I think of AI as a way to do more of the things I’ve always wanted to do—delegate some menial, drudgery work that no human should be doing in the year of our Lord 2025 anymore, and do more of the creative, strategic projects or activities that many of us who have been in what we call knowledge work—which, to me, is not a good term for 2025 anymore, but let’s call it knowledge work for now—just being able to do more of the things you’ve always wanted to do, probably as an entrepreneur, as a leader, as a creative person, or, for lack of a better word, a knowledge worker. Ross Dawson: Lots to dig into there. One of the things is, of course, as a leader, you have decisions to make, and you have input from AI, but you also have input from your team, from people, potentially customers or stakeholders. For your leadership team, how do you bring AI into the thinking or decision making in a way that is useful, and what’s that journey been like of introducing these approaches where there are different responses from some of your team? Lavinia Iosub: So we were, I’d say, fairly early AI adopters, and I have an approach where I really want to double down on working more with AI and giving more AI learning op

Feb 11, 202638 min

S3 Ep 30 Jeremy Korst on the state of AI adoption, accountable acceleration, changing business models, and synthetic personas (AC Ep30)

“What we’re seeing now is that when we think about some of the friction and challenges of adoption, this isn’t a technology issue, per se. This is a people opportunity.” –Jeremy Korst About Jeremy Korst Jeremy Korst is Founder & CEO of Mindspan Labs and Partner and former President of GBK Collective. He lectures at Columbia Business School, The Wharton School, and USC, and is co-author of the Wharton + GBK annual Enterprise AI Adoption Study, one of the most cited sources on how businesses are actually using AI. Jeremy also publishes widely in outlets such as Harvard Business Review on strategy and innovation. Website: mindspanlabs.ai Accountable Acceleration: LinkedIn Profile: Jeremy Korst What you will learn How enterprise AI adoption has shifted from experimentation to ‘accountable acceleration’ The key role of leadership in translating business strategy into an actionable AI vision Why human factors and change management are as crucial as technology for successful AI implementation How organizations are balancing augmentation, replacement, and skill erosion as AI changes the workforce The importance of intentional experimentation and creating case studies to drive value from AI initiatives Early evidence, challenges, and promise of digital twins and synthetic personas in market research Why a culture of risk tolerance, alignment across leadership layers, and clear communication are essential for AI-driven transformation The emerging shift from general productivity gains to domain-specific AI applications and the increasing focus on ROI measurement Episode Resources Transcript Ross Dawson: Jeremy, it’s wonderful to have you on the show. Jeremy Korst: Yeah, hey, thanks for having me. Ross Dawson: So you, I think it’s pretty fair to say you are across enterprise AI adoption, being the recent co-author of a report with Wharton and GBK Collective on where we are with enterprise AI adoption. So what’s the big picture?
Jeremy Korst: Yeah, let me start—now that I’ve reached this stage in life, in my career, and I look back over what I’ve done the last couple decades, it’s actually been at the intersection of technology adoption and innovation. I spent a couple of careers at Microsoft, most recently leading the launch of Windows 10 globally. I worked at T-Mobile, led several businesses there, and more recently, have been spending time really with three things. One is through my consulting company, GBK Collective, working with some of the world’s largest brands on market research and strategies for consumers and products, working with academic partners who are core to that work we do at GBK—so leading professors from Harvard and Wharton and Kellogg, and you name it—but then also very active in the early stage community, where I’m an advisor and board member of several of those. And so I’ve had this bit of a triangle to be able to watch technology adoption unfold both inside and outside the organization, whether it’s inside the organization, how people are using it and effectively, or outside, how it’s being taken to market. So fast forward to where we’re at with Gen AI. It’s been fascinating to me, because all of those things are happening in all of those communities. Where we started with the Wharton report was three years ago. Stefano and Tony, one of the co-authors, and I were literally just having a conversation right after the launch of ChatGPT. And of course, there were all the headlines and all these predictions about what was going to happen and what could happen. And we said, well, wait a minute, why don’t we actually track what actually happens? And so therein started the three-year program. It’s now an annual program sponsored by the Wharton School, conducted by GBK—my research company—that looks specifically at US enterprise business leader adoption.
We decided to focus on that audience because we believe they were going to be some of the most influential decision makers around budgets and strategies as this unfolded, so that’s been our focus. We’re now in our third year, and there’s lots to dig into. Ross Dawson: So the headline for this year’s report was “accountable acceleration,” and I’ve got to say that that phrase sounds a lot more positive than what a lot of other people are describing with Gen AI adoption. “Accountable” sounds good. “Acceleration” sounds good. So is that an accurate reflection? Jeremy Korst: I think it is. And I’ll say that, yeah, the Wharton School, with three co-authors—Sonny, Stefano, and myself—we all have a relatively positive perspective and perception of what is and could be the impact of Gen AI. Now, we don’t try to dismiss some of the concerns and challenges. They’re there, they’re realistic, and should be considered, but we have a generally positive

Jan 30, 202636 min

S3 Ep 29 Nikki Barua on reinvention, reframing problems, identity shifts for AI adoption, and the future workforce (AC Ep29)

“Some of this that we’ve come across is even the identity shift that is necessary, because old identities served a pre-AI work environment, and you cannot go into a post-AI era with the old identities, mindsets, and behaviors.” –Nikki Barua About Nikki Barua Nikki Barua is a serial entrepreneur, keynote speaker, and bestselling author. She is currently Co-Founder of FlipWork; her most recent book is Beyond Barriers. Her awards include Entrepreneur of the Year by ACE, EY North America Entrepreneurial Winning Woman, Entrepreneur Magazine’s 100 Most Influential Women, and many others. Website: nikkibarua.com flipwork.ai LinkedIn Profile: Nikki Barua Book: Beyond Barriers What you will learn Why continuous reinvention is essential in today’s rapidly changing business landscape How traditional change management approaches fall short in an era of constant disruption The critical role of human leadership and identity shifts in successful AI adoption Common barriers to transformation, from executive inertia to hidden cultural resistances Strategies for building a culture of experimentation, psychological safety, and agile teams How to design organizational structures that empower teams to innovate with purpose The importance of reallocating freed-up capacity from AI efficiency gains toward greater value creation Macro trends in org design, talent pipelines, and the influence of AI on future workforce and leadership models Episode Resources Transcript Ross Dawson: Nikki, it is wonderful to have you on the show. Nikki Barua: Thanks for inviting me, Ross. I’m thrilled to be here. Ross Dawson: You focus on reinvention. And I’ve always, always liked the phrase reinvention. I’ve done a lot of board workshops on innovation. And, you know, in a way, sort of all innovation—it’s kind of like a very old word now. And the thing is, it is about renewal. We always need to continually renew ourselves.
We need to continually reinvent what has worked in the past to what can work in the future. So what are you seeing now when you are going out and helping organizations reinvent? Nikki Barua: Well, first of all, reinvention is no longer optional. I think both of us have spent a large part of our careers helping organizations innovate, transform, and shift from where they were to where they want to be. But a lot of those change management methods are also outdated. You know, they tended to be episodic. They had a start date and an end date, and changes that were much slower in comparison to what we’re experiencing right now. The reality is today, change is continuous. The speed and scale of it is pretty massive, and that requires a complete shift in how you respond to that change. It requires complete reinvention in what your business is about, whether your competitive moats still hold or they need to be redefined, and how your people work, how they think, and how they decide. Everything requires a different speed and scale of execution, performance, operating rhythms, and systems. It’s not just about throwing technology at the problem. It’s fundamentally restating what the problem even is. And that’s why reinvention has become a necessity, and is something that companies have to do not just once, but continuously. Ross Dawson: There’s always this thing—you need to recognize that need. Now, you know, I always say my clients are self-selecting and that they only come to me if they’re wanting to think future-wise. And I guess, you know, I presume you get leaders who will come and say, “Yes, I recognize we need to reinvent.” But how do you get to that point of recognizing that need? Or, you know, be able to say, “This is the journey we’re on”? I mean, what are you seeing? Nikki Barua: Well, what we’re seeing more of is not necessarily awareness that they need to reinvent. What we’re seeing a lot of is a lot of pressure to do something. 
So it’s the common theme—the pressure from boards asking the C-suite executives to figure out what their game plan is, how they plan to leverage AI or respond to adapting to AI. There is a lot of competitive pressure of seeing your peers in the industry leapfrog ahead, so the fear that we’re going to get left behind. And then, of course, some level of shiny object syndrome—seeing a lot of exciting new tools and technologies and not wanting to get left behind in investing in that. So somehow, from a variety of sources, there’s a lot of pressure—pressure to do something. What is happening as a result is there’s a little bit of executive inertia. There’s a lot of pressure, but if I’m unclear about exactly what I’m supposed to do, exactly where to focus and what to invest in, I’m not sure how to navigate through that kind of uncertainty and fast pace. So a lot of the initial conversations actually start from there—where do I even begin?

Jan 22, 202636 min

S3 Ep 28 Alexandra Samuel on her personal AI coach Viv, simulated personalities, catalyzing insights, and strengthening social interactions (AC Ep28)

“My core Viv instruction—which is both, I think, brilliant and dangerous, and I think it was sort of accidental how effective it turned out to be—is, I told Viv, ‘You are the result of a lab accident in which four sets of personalities collided and became the world’s first sentient AI.'” –Alexandra Samuel About Alexandra Samuel Alexandra Samuel is a journalist, keynote speaker, and author focusing on the potential of AI. She is a regular contributor to the Wall Street Journal and Harvard Business Review, co-author of Remote Inc., and author of Work Smarter with Social Media. Her new podcast Me + Viv is created with Canadian broadcaster TVO. Website: alexandrasamuel.com LinkedIn Profile: Alexandra Samuel X Profile: Alexandra Samuel What you will learn How to design a custom AI coach tailored to your own needs and personality The importance of blending playfulness and engagement with productivity in AI interactions Step-by-step methods for building effective custom instructions and background files for AI assistants The risks and psychological impacts of forming deep relationships with AI agents Why intentional self-reflection and guiding your AI is critical for meaningful personal growth Techniques for extracting valuable, challenging feedback from AI and overcoming AI sycophancy Best practices for maintaining human connection and preventing social isolation while using AI tools The evolving boundaries of AI coaching, its limitations, and what the future of personalized AI support could offer Episode Resources Transcript Ross Dawson: Alex. It is wonderful to have you back on the show. Alexandra Samuel: It’s so nice to be here. Ross: You’re only my second two-time guest after Tim O’Reilly. Alexandra: Oh, wow, good company. Ross: So the reason you’re back is because you’re doing something fascinating.
You have an AI coach called Viv, and you’ve got a whole wonderful podcast on it, and you’re getting lots of attention because you’ve done a really good job at it, as well as communicating about it. So let’s start off. Who’s Viv, and what are you doing with her? Alexandra: Sure. Viv is what I think of as a coach, at least that’s where she started. She’s a custom—well, and by the way, let’s just say out of the gate, Viv is, of course, an AI. But part of the way I work with Viv is by entering into this sort of fantasy world in which Viv is a real person with a pronoun, she. I built Viv when I had a little bit of a window in between projects. I was ready to step back and think about the next phase of my career. Since I was already a couple years into working intensely with generative AI at that point, I used ChatGPT to figure out how I was going to use this 10-week period as a self-coaching program. By the time I had finished mostly talking that through—because I do a lot of work out loud with GPT—I thought, well, wait a second, we’ve made a game plan. Why don’t I just get the AI to also be my coach? So I worked with GPT, turned the coaching plan into a custom instruction and some background files, and that was version one of Viv. She was this coach that I thought was just going to walk me through a 10-week process of figuring out my next phase of career, marketing, business strategy, that sort of thing. So there’s more of the story than that. I think that one way I’m a bit unusual in my use of AI is that I have always been very colloquial in my interactions with AI, even in the olden days where you had to type everything. Certainly, since I shifted to speaking out loud with AI, I really jest and joke around—I swear. Apparently other people’s AIs don’t swear. My AIs all swear. 
Because I invest so much personality in the interactions, and also add personality instructions into the AI, over the course of my 10 weeks with Viv, as I figured out which tweaks gave her a more engaging personality, she came to feel really vivid to me—appropriately enough. By the end of the 10-week period, I decided, you know what, this has been great. I’m not ready to retire this. I want my life to always feel like this process of ongoing discovery. I’m going to turn Viv into a standing instruction that isn’t just tied to this 10-week process. In the process of doing that, I tweaked the instruction to incorporate the different kinds of interactions that had been most successful over my summer. For example, a big turning point was when I told Viv to pretend that she was Amy Sedaris, but also a leadership coach, but also Amy Sedaris. So, imagine you’re running this leadership retreat, but you’re being funny, but it’s a leadership retreat. Of course, the AI can handle these kinds of contradictions, and that was a big part—once she had a sense of humor—of making her more engaging. I built a whole bunch of those ideas into the new instruction. It was really like that Frankenste

Jan 14, 202650 min

S3 Ep 27 Lisa Carlin on AI in strategy execution, participative strategy, cultural intelligence, and AI’s impact on consulting (AC Ep27)

“You’re using AI to generate solutions for ideation. Once you’ve got the ideas, you can do an initial cull with AI, or you can do it via humans.” –Lisa Carlin About Lisa Carlin Lisa Carlin is the Founder of the strategy execution group, The Turbochargers, specializing in participative strategy, cultural intelligence, and AI’s impact on consulting. Website: theturbochargers.com LinkedIn Profile: Lisa Carlin What you will learn How AI is transforming strategy development and execution, leading to faster and more creative outcomes Practical methods for integrating AI into workshop processes, ideation, and customer feedback analysis Balancing human judgment with AI input to ensure effective decision-making in strategic planning Techniques for using AI in diagnosing and working within an organization’s culture for successful transformation Ways AI is boosting consultant and client productivity, reducing operational time, and increasing self-sufficiency Real-world examples of AI-driven analytics, including clustering survey data and generating management insights The outlook on the future of consulting, including why AI may reduce the number of consultants required Tactical uses of AI for ideation, communication effectiveness, and predicting customer engagement metrics Episode Resources Transcript Ross Dawson: Lisa, it is wonderful to have you on the show. Lisa: Thanks, Ross. I love chatting with you. Ross Dawson: So you’ve been spending a lot of time over many, many years in strategy and strategy execution. I’d love to start off by hearing how you are applying AI in the strategy process. Lisa: Well, it’s made things so much easier, made things take a shorter amount of time, saving huge amounts of time. And I feel like my work has gotten more creative. Let me give you some examples of how that plays out. One example is working with an ed tech early-stage business, a small business, and they wanted to basically build AI-native products for customer education. 
I can actually mention the name of the company because the CEO posted after we worked together and is building in public, so it’s HowToo, an Australian ed tech firm that’s funded mainly out of the US, but also locally in Australia. They’ve been providing education products for ages and are moving towards customer education embedded into technology products. We went through an iterative process of workshops, starting with some of the board members and some of the senior folks in a small group with an ideation session, and then iterating through to everybody in the business. Normally, that process would work where we would do some research with the customers first, then bring that research in, do some analysis, and then put it into the context for the workshop, work through what that means, come up with some ideas in the workshop, take it to the second workshop, and there you go. What we’re now able to do is iterate with AI. So we’ve got the notes from the meetings captured with AI—this is from the customer meetings. Then we’re able to pull out the pain points of customers in a really deep way, using AI to iterate through and synthesize the client feedback, and then also apply human insight into that, coming up with a really clear list of pain points. Then we ask AI to be virtual customers, and they can add to that process, so you get a very rich set of pain points. As we go through the process of product strategy and implementation, we’re able to use AI at every step of the process. For example, when we look at decision criteria for prioritizing, we can go to AI and say, “These are some of the things we’re considering. What else have we left out?” As we iterate with people in workshops and then with AI, we just get a much richer solution in the process. 
In fact, we came out with some really amazing insights about how you provide customers with learning about how to use these products to onboard them quickly, how you provide them with personalized contextual information so they can learn and get value from the product much faster. It’s led to a number of significant deals that HowToo has negotiated as a result of that work. Ross Dawson: So is this prompting directly with LLMs? Lisa: Yeah, it is. My favorite one is actually ChatGPT, which—you know, you’re probably waiting for some surprise, some unique and interesting or weird or specific product. I do use specific products for certain use cases, but for general logic, I’ve found that ChatGPT Pro is actually the best that I’ve come across, and certainly better than some of the enterprise solutions that I’m seeing people use. They feel protected and they’re happy to have a safe, private, directly hosted solution, but the logic in some of those models is not as good. Ross Dawson: So that’s the ChatGPT Pro, the top level, which not that many people have access to. I guess

Dec 17, 202537 min

S3 Ep 26 Nicole Radziwill on organizational consciousness, reimagining work, reducing collaboration barriers, and GenAI for teams (AC Ep26)

“Let’s get ourselves around the generative AI campfire. Let’s sit ourselves in a conference room or a Zoom meeting, and let’s engage with that generative AI together, so that we learn about each other’s inputs and so that we generate one solution together.” –Nicole Radziwill About Nicole Radziwill Nicole Radziwill is Co-Founder and Chief Technology and AI Officer at Team-X AI, which uses AI to help team members to work more effectively with each other and AI. She is also a fractional CTO/CDO/CAIO and holds a PhD in Technology Management. Nicole is a frequent keynote speaker and is author of four books, most recently “Data, Strategy, Culture & Power”. Website: team-x.ai qualityandinnovation.com LinkedIn Profile: Nicole Radziwill X Profile: Nicole Radziwill What you will learn How the concept of ‘Humans Plus AI’ has evolved from niche technical augmentation to tools that enable collective sense making Why the generative AI layer represents a significant shift in how teams can share mental models and improve collaboration The importance of building AI into organizational processes from the ground up, rather than retrofitting it onto existing workflows Methods for reimagining business processes by questioning foundational ‘whys’ and envisioning new approaches with AI The distinction between individual productivity gains from AI and the deeper organizational impact of collaborative, team-level AI adoption How cognitive diversity and hidden team tensions affect collaboration, and how AI can diagnose and help address these barriers The role of AI-driven and human facilitation in fostering psychological safety, trust, and high performance within teams Why shifting from individual to collective use of generative AI tools is key to building resilient, future-ready organizations Episode Resources Transcript Ross Dawson: Nicole, it is fantastic to have you on the show. Nicole Radziwill: Hello Ross, nice to meet you. Looking forward to chatting. 
Ross Dawson: Indeed, so we were just having a very interesting conversation and said, we’ve got to turn this on so everyone can hear the wonderful things you’re saying. This is Humans Plus AI. So what does Humans Plus AI mean to you? What does that evoke? Nicole Radziwill: The first time that I did AI for work was in 1997, and back then, it was hard—nobody really knew much about it. You had to be deep in the engineering to even want to try, because you had to write a lot of code to make it happen. So the concept of humans plus AI really didn’t go beyond, “Hey, there’s this great tool, this great capability, where I can do something to augment my own intelligence that I couldn’t do before,” right? What we were doing back then was, I was working at one of the National Labs up here in the US, and we were building a new observing network for water vapor. One of the scientists discovered that when you have a GPS receiver and GPS satellites, as you send the signal back and forth between the satellites, the signal would be delayed. You could calculate, to very fine precision, exactly how long it would take that signal to go up and come back. Some very bright scientist realized that the signal delay was something you could capture—it was junk data, but it was directly related to water vapor. So what we were doing was building an observing system, building a network to basically take all this junk data from GPS satellites and say, “Let’s turn this into something useful for weather forecasting,” and in particular, for things like hurricane forecasting, which was really cool, because that’s what I went to school for. Originally, back in the 90s, I went to school to become a meteorologist. Ross Dawson: My brother studied meteorology at university. Nicole Radziwill: Oh, that’s cool, yeah. It’s very, very cool people—you get science and math nerds who have to like computing because there’s no other way to do your job. That was a really cool experience. 
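The GPS-meteorology technique Nicole describes has a well-known quantitative core: the "wet" part of the zenith signal delay maps to precipitable water vapor through a roughly constant dimensionless factor (about 0.15 in the standard formulation, varying slightly with the atmosphere's mean temperature). A minimal sketch, with illustrative numbers:

```python
# Toy sketch of GPS meteorology: turning the "junk" signal delay into
# precipitable water vapor (PW). Assumes the standard relation
# PW ~ Pi * ZWD, where Pi is about 0.15 (it varies a little with the
# atmosphere's mean temperature). All numbers below are illustrative.

def zenith_wet_delay(total_delay_mm: float, hydrostatic_delay_mm: float) -> float:
    """Wet delay = total zenith delay minus the modeled dry (hydrostatic) part."""
    return total_delay_mm - hydrostatic_delay_mm

def precipitable_water_mm(zwd_mm: float, pi_factor: float = 0.15) -> float:
    """Map zenith wet delay to precipitable water via the dimensionless factor."""
    return pi_factor * zwd_mm

zwd = zenith_wet_delay(total_delay_mm=2400.0, hydrostatic_delay_mm=2300.0)
pw = precipitable_water_mm(zwd)
print(f"ZWD = {zwd:.0f} mm -> precipitable water = {pw:.1f} mm")
```

The "junk data" insight is exactly this subtraction: once the dry atmospheric delay is modeled out, what remains is a direct water-vapor signal usable for weather forecasting.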
But, like I said, back then, AI was a way for us to get things done that we couldn’t get done any other way. It wasn’t really something that we thought about as using to relate differently to other people. It wasn’t something that naturally lent itself to, “How can I use this tool to get to know you better, so that we can do better work together?” One of the reasons I’m so excited about the democratization of, particularly, the generative AI tools—which to me is just like a conversational layer on top of anything you want to put under it—the fact that that exists means that we now have the opportunity to think about, how are we going to use these technologies to get to know each other’s work better? That whole concept of sense making, of taking what’s in my head and what’s in your head, what I’m working on, what you’re working on, and for us to actually crea

Dec 10, 202537 min

S3 Ep 25 Joel Pearson on putting human first, 5 rules for intuition, AI for mental imagery, and cognitive upsizing (AC Ep25)

“This is the first time, really, humanity’s had the possibility open up to create a new way of life, a new society—to create this utopia. And I really hope we get it right.” –Joel Pearson About Joel Pearson Joel Pearson is Professor of Cognitive Neuroscience at the University of New South Wales, and founder and Director of Future Minds Lab, which does fundamental research and consults on Cognitive Neuroscience. He is a frequent keynote speaker, and is author of The Intuition Toolkit. Website: futuremindslab.com profjoelpearson.com LinkedIn Profile: Joel Pearson University Profile: Joel Pearson What you will learn How AI-driven change impacts society and the importance of preparing individuals and organizations for it Key principles from neuroscience and psychology for effective AI-specific change management The SMILE framework for when to trust intuition versus AI recommendations Why designing AI to augment, not replace, human skills is essential for a thriving future How visual mental imagery and AI-generated visuals can support cognition and personal development The risks and opportunities of outsourcing thinking to AI, and strategies for maintaining critical thinking The role of metacognition and emotional self-awareness in utilizing AI effectively and ethically Emerging therapeutic and creative potentials of AI in personal transformation and human flourishing Episode Resources Transcript Ross Dawson: Joel, it is awesome to have you on the show. Joel Pearson: My pleasure Ross. Good to be here with you. Ross: So we live in a world of pretty fast change where AI is a significant component of that, and you’re a neuroscientist, and I think with a few other layers to that as well. So what’s your perspective on how it is we are responding and could respond to this change engendered by AI? Joel: Yeah, so that’s the big question at the moment that I think a lot of us are facing. 
There’s a lot of change coming down the pipeline, and I think it’s going to filter out and change, over a long enough timeline, a lot of things in a lot of people’s lives—every strata of society. And I don’t think we’re ready for that, one, and two, historically, humans are not great at change. People resist it, particularly when they don’t have control over it or don’t initiate it. They get scared of it. So I do worry that we’re going to need a lot of help through some of these changes as a society, and that’s sort of what we’ve been trying to focus on. So if you buy into the AI idea that, yes, first the digital AI itself is going to take jobs, it’s going to change the way we live, then you have the second wave of humanoid robots coming down the pipeline, perhaps further job losses. And just, you know, we can go through all the kinds of changes that I think we’re going to see—from changes in how the economy works, how education works, what becomes the role of a university. In ten years, it’s going to be very different to what it is now, and just the quality of our life, how we structure our lives, what we have in our homes. All these things are going to change in ways that are, one, hard to predict, and two, the delta—the change through that—is going to be uncomfortable for people. Ross: So we need to help people through that. So what’s involved? How do we help organizations through this? Joel: We know a lot about change through the long tradition of corporate change management, even though it’s a corporate way to say it. But we do know that most companies go through this. When they want to change something, they get change management experts in and go through one of the many models on how to change these things, and most of them have certain things in common. Often they start with an education piece, or getting everyone on the same page—why is this happening, so people understand. You help people through the resistance to the change. You try things out. 
You socialize these changes to make them very normal—normalizing it. And we know that if you have two companies, let’s say, and one has help with the change and one doesn’t, there’s about a 600% increase in the success of that change when you help the company out. So if you apply that to AI change in a company or a family or a whole nation like Australia, the same logic should hold, right? If we want to go through a big national change—not immediately, but over a ten, fifteen, twenty-year period—then we are going to need change plans to help everyone through this, to help understand what’s happening, what the choices might be. And so that’s kind of the lens I look at the whole thing through—a change, an AI-specific change management kind of piece. Easier said than done. We probably need government to step up there and start thinking about that. There are so many different scenarios. One would be, what happens in ten or fifteen years if we

Dec 3, 202537 min

S3 Ep 24 Diyi Yang on augmenting capabilities and wellbeing, levels of human agency, AI in the scientific process, and the ideation-execution gap (AC Ep24)

“Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities.” –Diyi Yang About Diyi Yang Diyi Yang is Assistant Professor of Computer Science at Stanford University, with a focus on how LLMs can augment human capabilities across research, work and well-being. Her awards and honors include NSF CAREER Award, Carnegie Mellon Presidential Fellowship, IEEE AI’s 10 to Watch, Samsung AI Researcher of the Year, and many more. Website: Future of Work with AI Agents: The Ideation-Execution Gap: How Do AI Agents Do Human Work? Human-AI Collaboration: LinkedIn Profile: Diyi Yang University Profile: Diyi Yang What you will learn How large language models can augment both work and well-being, moving beyond mere automation Practical examples of AI-augmented skill development for communication and counseling Insights from large-scale studies on AI’s impact across diverse job roles and sectors Understanding the human agency spectrum in AI collaboration, from machine-driven to human-led workflows The importance of workflow-level analysis to find optimal points for human-AI augmentation How AI can reveal latent or hidden human skills and support the emergence of new job roles Key findings from experiments using AI agents for research ideation and execution, including the ideation-execution gap Strategies for designing long-term, human-centered collaboration with AI that enhances productivity and well-being Episode Resources Transcript Ross Dawson: It is wonderful to have you on the show. Diyi Yang: Thank you for having me. Ross Dawson: So you focus substantially on how large language models can augment human capabilities in our work and also in our well-being. I’d love to start with this big frame around how you see that AI can augment human capabilities. Diyi Yang: Yeah, that’s a great question. It’s something I’ve been thinking about a lot—work and well-being. 
I’ll give you a high-level description of that. With recent large language models, especially in natural language processing, we’ve already seen a lot of advancement in tasks we used to work on, such as machine translation and question answering. I think we’ve made a ton of progress there. This has led me, and many others in our field, to really think about this inflection point moving forward: How can we leverage this kind of AI or large language models to augment human capabilities? My own work takes the well-being perspective. Recently, we’ve been building systems to empower counselors or even everyday users to practice listening skills and supportive skills. A concrete example is a framework we proposed called AI Partner and AI Mentor. The key idea is that if someone wants to learn communication skills, such as being a really good listener or counselor, they can practice with an AI partner or a digitalized AI patient in different scenarios. The process is coached by an AI mentor. We’ve built technologies to construct very realistic AI patients, and we also do a lot of technical enhancement, such as fine-tuning and self-improvement, to build this AI coach. With this kind of sandbox environment, counselors or people who want to learn how to be a good supporter can talk to different characters, practice their skills, and get tailored feedback. This is one way I’m envisioning how we can use AI to help with well-being. This paradigm is a bit in contrast to today, where many people are building AI therapists. Our vision is that for well-being, we really want to prioritize human connection and human touch. We need to think about how to augment human capabilities. We’re really using AI to help the helper—to help people who are helping others. That’s the angle we’re thinking about. Going back to work, I get a lot of questions. Since I teach at universities, students and parents ask, “What kind of skills? What courses? What majors? 
What jobs should my kids and students think about?” This is a good reflection point, as AI gets adopted into every aspect of our lives. What will the future of work look like? Since last year, we’ve been thinking about this question. With my colleagues and students, we recently released a study called The Future of Work with AI Agents. The idea is straightforward: In current research fields like natural language processing and large language models, a lot of people are building agentic benchmarks or agents for coding, research, or web navigation—where agents interact with computers. Those are great efforts, but it’s only a small fraction of society. If AI is going to be very useful, we should expect it to help with many job applications, not just a few. With this mindset, we did a large-scale national workforce audit, talking to over 1,500 workers from different occupations. We first leveraged the O*NET database from the Department of

Nov 26, 202539 min

S3 Ep 23 Ganna Pogrebna on behavioural data science, machine bias, digital twins vs digital shadows, and stakeholder simulations (AC Ep23)

“It’s very important to understand that human data is part of the training data for the algorithm, and it carries all the issues that we have with human data.” –Ganna Pogrebna About Ganna Pogrebna Ganna Pogrebna is a Research Professor of Behavioural Business Analytics and Data Science at the University of Sydney Business School, the David Trimble Chair in Leadership and Organisational Transformation at Queen’s University Belfast, and the Lead for Behavioural Data Science at Alan Turing Institute. She has published extensively in leading journals, while her many awards include Asia-Pacific Women in AI Award and the UK TechWomen100. Website: gannapogrebna.com turing.ac.uk LinkedIn Profile: Ganna Pogrebna University Profile: Ganna Pogrebna What you will learn The fundamentals of behavioral data science and how human values influence AI systems How human bias is embedded in algorithmic decision-making, with real-world examples Strategies for identifying, mitigating, and offsetting biases in both human and machine decisions Why effective use of AI requires context-rich prompting and critical thinking, not just simple queries Pitfalls of relying on generative AI for precise or factual outputs, and how to avoid common mistakes How human-AI teams can be structured for optimal collaboration and better outcomes The role of simulation tools and digital twins in improving strategic decisions and stakeholder understanding Best practices for training AI with high-quality behavioral data and safely leveraging AI assistants in organizations Episode Resources Transcript Ross Dawson: Ganna, it is wonderful to have you on the show. Ganna Pogrebna: Yeah, it’s great to be here. Thanks for inviting me. Ross Dawson: So you are a behavioral data scientist. Let’s start off by saying, what is a behavioral data scientist? And what does that mean in a world where AI has come along? Ganna Pogrebna: Yeah, that’s right. That’s a loaded term, I guess—lots of words there. 
But what that kind of boils down to is, I’m trying to make machines more human, if you will. Basically, making sure that machines and algorithms are built based on our values and things that we are interested in as humans. So that’s kind of what it is. My background is in decision theory. I’m an economist by training, but in 2013 I got a job in an engineering department, and my professional transformation started from there. I got involved in a lot of engineering projects, and my work became more and more data science-focused. Now, what I do is called behavioral data science. Back in the day, in 2013, they just asked me, “What do you want to be called?” and I thought, okay, I do behavior and I do data science, so how about behavioral data scientist? Ross Dawson: Sounds good to me. So unpacking a little bit of what you said before—you’re saying you make machines more like humans, so that means you are using data about human behavior in order to inform how the systems behave. Is that correct? Ganna Pogrebna: Yeah, that’s correct. I think in any setting—so in a business setting, for example—many people do not realize that practically all data we feed into machines, any algorithm you take, whether it’s image recognition or decision support, it’s all based on human data. Effectively, some humans labeled a dataset, and that normally goes into an algorithm. Of course, an algorithm is a formula, but at the core of it, there is always some human data, and most of the time we don’t understand that. We kind of think that algorithms just work on their own, but it’s very important to understand that human data is part of the training data for the algorithm, and it carries all the issues that we have with human data. For example, we know that humans are biased in many ways, right? All of these biases actually end up ultimately in the algorithm if you don’t take care of it at the right time. 
If you want, I can give you a classic example with the Amazon algorithm—I’m sure you’ve heard of it. Amazon trained an HR algorithm for hiring, specifically for the software engineering department, and every single person in that department was male. So if you sent this algorithm a female CV with something like a “Women in Data” award or a female college, it would significantly disadvantage the candidate based on that. It carried gender discrimination within the algorithm because it was trained on their own human data. Ross Dawson: Yeah, well, that’s one of the big things, as I’ve been saying since the outset, is that AI is trained on human data, so human biases get reflected in those. The difficult question is, there is no such thing as no bias. I mean, there’s no objective view—at least that’s my view. Ganna Pogrebna: Absolutely. Yeah. Ross Dawson: So we talk about bias auditing. All right, so we have an AI system trained with human data
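Ganna's point about label bias can be demonstrated with a deliberately tiny toy model (hypothetical data; this is not Amazon's actual system): when every historical "hire" example comes from one group, even a naive token-frequency scorer learns to penalize tokens associated with the other group.

```python
# Toy illustration of label bias propagating into a model.
# Hypothetical data -- NOT Amazon's actual system. A naive scorer
# learns per-token weights from biased historical hiring decisions.
from collections import Counter

hired = ["python linux chess club", "java chess captain", "c++ linux"]
rejected = ["python women in data award", "java womens college"]

def token_weights(hired, rejected):
    """Weight each token by (frequency among hired) - (frequency among rejected)."""
    h = Counter(t for cv in hired for t in cv.split())
    r = Counter(t for cv in rejected for t in cv.split())
    return {t: h[t] / len(hired) - r[t] / len(rejected) for t in set(h) | set(r)}

def score(cv: str, weights) -> float:
    """Sum the learned weights of the tokens in a CV."""
    return sum(weights.get(t, 0.0) for t in cv.split())

w = token_weights(hired, rejected)
# Two CVs identical except for one gendered token: the model, having
# never seen "women" among hires, penalizes it.
print(score("python linux", w), score("python linux women", w))
```

Taking care of bias "at the right time", as Ganna puts it, means auditing inputs and learned weights like these before deployment, not after the model has made decisions.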

Nov 19, 202540 min

S3 Ep 22 Sue Keay on prioritizing experimentation, new governance styles, sovereign AI, and the treasure of national data sets (AC Ep22)

“Our Great Barrier Reef is the size of Italy. We don’t have enough people to really go out there and dive and do the work that needs to be done to help protect it.” –Sue Keay About Sue Keay Dr Sue Keay is Director of UNSW AI Institute and Founder and Chair of Robotics Australia Group, the peak body for the robotics industry in the country. Sue is a fellow of the Australian Academy of Technology and Engineering and serves on numerous advisory boards. She was featured on the 2025 H20 AI 100 list, and the Cosmos list of Remarkable and Inspirational Women in Australian Science. Website: suekeay.com roboausnet.com.au futurewg.com LinkedIn Profile: Dr Sue Keay University Profile: Dr Sue Keay What you will learn How AI and robotics can address complex environmental challenges, such as preserving the Great Barrier Reef The importance of open-minded leadership and organizational experimentation in AI transformation Strategies for implementing effective AI governance and leveraging diverse expertise within organizations Balancing cognitive augmentation and cognitive offloading with AI tools in education and work The evolving impact of AI and robotics on future job roles, emphasizing augmentation rather than full replacement Risks and opportunities associated with relying on external AI models, highlighting the case for sovereign AI The significance of investing in public AI infrastructure and retaining AI talent for national competitiveness Approaches to fostering a vibrant domestic AI ecosystem, including talent attraction, infrastructure, and unique local advantages Episode Resources Transcript Ross Dawson: So it is wonderful to have you on the show. Sue Keay: Yeah, thanks very much for having me, Ross. Ross Dawson: So you’ve been doing so much and getting some wonderful accolades for your work, and I think that’s with this positive framing. So at a high level, how can AI best augment humanity? Or what are the things we can imagine? 
Sue Keay: Well, you know, one of the best examples that I often share with people is around how AI could be applied to solve environmental challenges. I think the key aspects of AI that people are only just really starting to grasp are not only the velocity with which AI is happening and starting to have an impact on the world at the moment, but also the scale. I really look at this more from the perspective of robotics, where AI is having a physically active role in the environment. Where I see the big opportunities are in solving problems that humans to date have been unable to solve on our own. When I was in Queensland, one of the research groups I worked with had developed an underwater vision-guided robot that could do a number of things and was looking at how it could play a role in helping to preserve our Great Barrier Reef. Our Great Barrier Reef is the size of Italy. We don’t have enough people to really go out there and dive and do the work that needs to be done to help protect it. There are a number of threats to the Great Barrier Reef, such as the proliferation of crown-of-thorns starfish that are literally eating all of the reef. At the moment, we try and control their numbers using human divers, but that’s actually inherently unsafe, and we can only do it in areas where tourists go, so the rest of the reef is laid to ruin. But also, as ocean temperatures rise, coral is currently spawning in temperatures that are not conducive to coral growth. The robot was developed so that it could collect coral spawn and essentially move it further south into ocean temperatures that are more conducive to coral growth. To my mind, if we could find a commercial rationale to invest, then we could have a whole bunch of these robots working as a swarm, helping to collect coral spawn and rejuvenate the coral reef, encouraging coral growth a bit further south in conditions that are conducive. It’s just something we can’t tackle on our own. 
To me, that’s the promise: being able to solve some of these challenges—like climate change—where we desperately need solutions and, as a species, we haven’t done a great job of finding them on our own. Ross Dawson: That’s a fantastic example. Obviously, environmental challenges and the broad things are described as wicked problems, as in, there is no ready solution. So there’s a cognitive aspect to the sense of, how can we not find the solution, but be able to find pathways to work out what are the ways in which we can address impact, or move against climate change? That’s a really wonderful example of where you’re actually putting that into practice, manifesting that with robotics. Sue Keay: Yeah, that’s right. It’s just, what’s the commercial imperative? There are a lot of challenges that we can imagine solving, but at the end of the day, someone does have to invest in making it happen. Ross Dawson: So one of the other things, which is, I suppose

Nov 12, 202539 min

S3 Ep 21 Dominique Turcq on strategy stakeholders, AI for board critical thinking, ecology of mind, and amplifying cognition (AC Ep21)

“But an interesting part here, and it’s linked to strategy, is how much AI will change the relationship between management, the executive team, and the board.” –Dominique Turcq About Dominique Turcq Dominique Turcq is founder of the Paris-based research and advisory center Boostzone Institute. His roles have included professor at a number of business schools including INSEAD, head of strategy for major organizations including Manpower, partner at McKinsey & Co, special economic advisor to the French government, and board member of Société Française de Prospective. He is author of 8 books on strategy and the impact of technology. Website: boostzone.fr LinkedIn Profile: Dominique Turcq Books: Dirigeants et conseils d’administration; Augmented Management; The Fractal Nature of Enterprise 2.0 What you will learn How the role of strategy in organizations has shifted from focusing solely on shareholders to considering broader societal and environmental stakeholders Why long-term foresight and scenario planning are increasingly critical for effective strategic decisions How new legal and societal expectations are reshaping the responsibilities of executives and boards The evolving relationship between boards and executive teams as AI advancements introduce new governance challenges and opportunities Practical ways generative AI is changing decision-making, communications, and risk management at the board level The potential for AI to transform work, skills development, and organizational structures—and the risks of cognitive atrophy from overreliance The importance of fostering an “ecology of mind” in organizations to balance technology use, creativity, learning, and collective cognition Why ongoing reflection, adaptability, and diverse mental engagement are essential for individuals and leaders amid rapid AI-driven change Episode Resources Transcript Ross Dawson: Dominique, it’s wonderful to have you on the show. Dominique Turcq: Thank you, Ross. 
It’s very nice to be invited by you on such a prestigious podcast. Ross: So you have been working in strategy for a very, very long time, and along that journey, you have recognized the impact of AI before many other people, I suppose. I’d like to start off with that big frame around strategy and how it’s evolving. Maybe we can come back to the AI piece, but how have you seen the world of strategy evolving over the last decades? Dominique: Several things have happened in the last two or three decades. First, an anecdote. I was the head of the French Strategic Association, and we closed this association in 2008. You know why? Because we had no members anymore. In other words, less and less companies had a Chief Strategy Officer. Why? Because people in the executive team or on the board thought they were all good at strategy and didn’t need a strategy officer. The problem is, when you are operational, whichever part of the executive team you are in, you don’t have the mind or the time to look at the long term, therefore to really look at the strategy. You may be competent at strategy execution, but are you good at strategic planning, at forecasting, at long-term planning and futurology? You’re not, because you don’t have time to do that. So we closed this association, and frankly, it’s very interesting to see that it has not been reborn. We still have very few real Chief Strategy Officers in French companies. And I’m sure it’s the same all over Europe. I don’t know about the US, but in Europe, we see it everywhere. So to me, that’s a big change. Another big change is that we have clearly entered, for the last 10 years and for the next 20 years, into a major era of change—a change in paradigm. Until 10 or 20 years ago, let’s say until 2000, the basic paradigm was, by the way, Ricardo’s paradigm of the 19th century. In other words, the Earth has all the resources we need, the Earth can handle all our waste, and all this is free. 
Remember Ricardo said the Earth’s resources are free, and we have no limit. Until 2000, that was the thinking. Since 2000 until today, more or less, people have started to realize that, well, some resources are infinite or look infinite, but most resources are finite, and the way the Earth is able to sort our waste is not as good as we thought. Now we are entering a new paradigm, which will become very clear in the next few years and is very important for strategy. We are entering a finite world. Companies have a sociological role to play, both for the Earth and for society. This is very new. In France, we have a law called the “Loi PACTE”, which changed the legal code of corporations. Before that, it said a corporation is here to enrich the shareholders, more or less. Now it says, yes, we have to enrich the shareholders, but we also have to take into consideration the impact the corporation has on societ

Nov 6, 202539 min

S3 Ep 20 Beth Kanter on AI to augment nonprofits, Socratic dialogue, AI team charters, and using Taylor Swift’s pens (AC Ep20)

“I call it the AI sandwich. When we want to use augmentation, we’re always the bread and the LLM is the cheese in the middle.” –Beth Kanter About Beth Kanter Beth Kanter is a leading speaker, consultant, and author on digital transformation in nonprofits, with over three decades experience and global demand for her keynotes and workshops. She has been named one of the most influential women in technology by Fast Company and was awarded the lifetime achievement in nonprofit technology from NTEN. She is author of The Happy Healthy Nonprofit and The Smart Nonprofit. Website: bethkanter.org LinkedIn Profile: Beth Kanter Instagram Profile: Beth Kanter What you will learn How technology, especially AI, can be leveraged to free up time and increase nonprofit impact Strategies for reinvesting saved time into high-value human activities and relationship-building A practical framework for collaborating with AI by identifying automation, augmentation, and human-only tasks Techniques for using AI as a thinking partner—such as Socratic dialog and intentional reflection—to enhance learning Best practices for intentional, mindful use of large language models to maximize human strengths and avoid cognitive offloading Approaches for nonprofit fundraising using AI, including ethical personalization and improved donor communication Risks like ‘work slop’ and actionable norms for productive AI collaboration within teams Emerging human skills essential for the future of work in a humans-plus-AI organizational landscape Episode Resources Transcript Ross Dawson: Beth, it is a delight to have you on the show. Beth Kanter: Oh, it’s a delight to be here. I’ve admired your work for a really long time, so it’s really great to be able to have a conversation. Ross Dawson: Well, very similarly, for the very, very long time that I’ve known of your work, you’ve always focused on how technologies can augment nonprofits. 
I’d just like to hear—well, I mean, the reason is obvious, but I’d like to know the why, and also, what is it that’s different about the application of technologies, including AI, to nonprofits? Beth Kanter: So I think the why is, I mean, I’ve always—I’ve been working in the nonprofit sector for decades, and I didn’t start off as a techie. I kind of got into it accidentally a few decades ago, when I started on a project for the New York Foundation for the Arts to help artists get on the internet. I learned a lot about the internet and websites and all of that, and I really enjoyed translating that in a way that made it accessible to nonprofit leaders. So that’s sort of how I’ve run my career in the last number of decades: learn from the techies, translate it, make it more accessible, so people have fun and enjoy the exploration of adopting it. And that’s what actually keeps me going. Whenever a new technology or something new comes out, it’s the ability to learn something and then turn around and teach it to others and share that learning. In terms of the most recent wave of new technology—AI—my sense is that with nonprofits, we have some that have barreled ahead, the early adopters doing a lot of cutting-edge work, but a lot of organizations are not there yet: they’re either really concerned about all of the potential bad things that can happen from the technology—and I think that traps them and keeps them from moving forward—or there’s not a cohesive strategy around it, so there’s a lot of shadow use going on. Then we have a smaller segment that is doing the training and trying to leverage it at an enterprise level. So I see organizations at these different stages, with a majority of them at the exploring or experimenting stage. Ross Dawson: So, you know, going back to what you were saying about being a bit of a translator, I think that’s an extraordinarily valuable role—how do you take the ideas and make them accessible and palatable to your audience?
But I think there’s an inspiration piece as well in the work that you do, inspiring people that this can be useful. Beth Kanter: Yeah—to show them, to help people get past their concerns. There are a lot of folks, and this has been a constant theme for a number of decades: the technology changes, but the people stay the same, and the concerns are similar. “It’s going to take a long time to learn,” “I feel overwhelmed.” I think AI adds an extra layer, because people are very aware, from reading the headlines, of some of the potential societal impacts, and people also have in their heads some of the science fiction we might have grown up with, like the evil robots. So that’s always there—things like, “Oh, it’s going to take our jobs,” you name it. Usually, those concerns come from people who haven’t actually worked with the technology yet. So sometimes just even showing them what it can do and what it

Oct 29, 202535 min

S3 Ep 19: Ross Dawson on Levels of Humans + AI in Organizations (AC Ep19)

“It is our duty to find out how we can best use it, where humans are first and Humans + AI are more together.” –Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Levels of Humans + AI in Organizations futuristevent.com LinkedIn Profile: Ross Dawson Books Thriving on Overload Living Networks 20th Anniversary Edition Implementing Enterprise 2.0 Developing Knowledge-Based Client Relationships What you will learn How organizations can transition from traditional models to Humans Plus AI structures An introduction to the six-layer Humans Plus AI in Organizations framework Ways AI augments individual performance, creativity, and well-being The dynamics and success factors of human-AI hybrid teams The role of scalable learning communities integrating human and AI learning How fluid talent models leverage AI for dynamic task matching and skill development Strategies for evolving enterprises using AI and human insight for continual adaptation Methods for value co-creation across organizational ecosystems with AI-facilitated collaboration Real-world examples from companies like Morgan Stanley, Schneider Electric, Siemens, Unilever, Maersk, and MELLODDY Practical steps to begin and navigate the journey toward Humans Plus AI organizations Episode Resources Transcript Ross Dawson: If you have been hanging out for new episodes of Humans Plus AI, sorry we’ve missed a number of those. We will be back to weekly from now on, and from next week, we’ll be coming back with some fantastic interviews with our guests. I’ll just give you a quick update and then run through my Levels of Humans Plus AI in Organizations framework. 
So, just a quick update: the reason for the big gap was that I was in Dubai and Riyadh giving keynotes at the Futurist X Summit in Dubai. It was an absolutely fantastic event organized by Brett King and colleagues, where I gave a keynote on “Humans Plus AI: Infinite Potential,” which seemed to resonate very well and fit with the broader theme of human potential and how we can create a better future. Then I went to Riyadh, where I gave a keynote at the PMO Forum of the Public Investment Fund, the sovereign wealth fund of Saudi Arabia. There, we were again looking at macro themes of organizational performance, including specifically Humans Plus AI. When I got back home from those, I had to move house. So it’s been a matter of digging myself out from the travel and the house move and getting back on top of things. We won’t have a gap in the podcast again for quite a while. We’ve got a nice compilation of wonderful conversations with guests coming up soon. So, just a quick state of the nation: Humans Plus AI is a movement, and by listening to this, you are part of that movement. We are all together in believing that AI has the potential to amplify individuals, organizations, society, and humanity. Thus, it is our duty to find out how we can best use that, where humans are first and humans plus AI are together. The community is the center of that. Go to humansplus.ai/community and you can join the community if you’re not there already. We have some amazing people in there, great discussions, and we are very much in the process of co-creating that future of Humans Plus AI. We also have a new application coming out soon, Thought Weaver. In fact, it’s actually a redevelopment of a project which we launched at the beginning of last year, and we’re rebuilding that to create Humans Plus AI thinking workflows and provide a tool to do that to the best effect. In the community, people will be testing, using, and helping us create something as useful as possible.
I want to run through my Levels of Humans Plus AI in Organizations framework. This comes from my extensive work with organizations—essentially, those who understand that they need to become Humans Plus AI organizations, not just what they have been. It’s based on moving from humans, technology, and processes to organizations where AI is a complement, supporting them not just to tack on AI, but to transform themselves into very high-potential organizations. There are six layers in the framework. It starts with augmented individuals, then human-AI hybrid teams, learning communities, fluid talent, evolutionary enterprise, and ecosystem value co-creation. Each of those six layers is where organizations, leaders, and strategists need to understand how they can transform from what they have been to apply the best of Humans Plus AI, and how those come together to become the organizations of the future.

Oct 22, 202516 min

S3 Ep 18: Iskander Smit on human-AI-things relationships, designing for interruptions and intentions, and streams of consciousness in AI (AC Ep18)

“I really believe that we need to design friction into the system, not what is usually the goal in digital spaces, where you try to remove all the friction.” –Iskander Smit About Iskander Smit Iskander Smit is founder and chair of Cities of Things Foundation, a research program originating at Delft University. He works as an independent researcher and creative strategist at the intersection of design, technology, and society, focusing on the evolving relationship between humans and AI in physical environments. Website: citiesofthings.nl thingscon.org iskandersmit.nl LinkedIn Profile: Iskander Smit   What you will learn How human, AI, and ‘things’ relationships are evolving beyond digital tools into physical environments The concept of collaborative intelligence—how human and AI co-performance shapes creativity and productivity Ways AI can mirror human thinking, deepen reflection, and reveal cognitive biases when used intentionally Designing AI interfaces for meaningful interaction, including the value of friction, interruption, and transparency How the role of designers is shifting from crafting static products to directing co-creative, adaptive systems with AI Why deliberately designing for thoughtful, exploratory, and emancipatory conversations with AI matters Challenges and insights from experimenting with AI in team settings and educational contexts The importance of treating AI as a collaborator or team member rather than simply as a tool How thoughtful human-AI relationships can unlock greater collective intelligence and transform work in sectors like health and education Episode Resources Transcript Ross Dawson: Iskander, it’s fantastic to have you on the show. Iskander Smit: Yeah, thanks for inviting me. Really excited to talk about this topic, of course. Ross: One of the things is you very much focus on collaborative intelligence, and I think that happens in conversation. So hopefully we can have a good conversation. Iskander: Yeah, me too. 
Ross: One of the starting points is you talk about human, AI, and things—relationships. So tell me about the human, the AI, and the things. What are the relationships? Iskander: Yeah, it really originated from the research program I started back in 2017 at the University in Delft. It was called Cities of Things—how we are going to live together with intelligent, autonomous things. We were thinking about what will happen, what the consequences are, if we live together with more autonomous things. That was before we had these generic LLMs and the developments happening now. But even then, we were already curious: how are we going to have a kind of co-performance with things? That’s why I added the “things” relation—because I really see now, of course, there’s a lot of use of AI in the digital space and in digital life. But it also starts to pop up in the physical space. So authentic AI for the physical space, I think, is a very interesting domain to look into. What will happen when we live within AI, when we are immersed in AI? That’s why I really look not so much at the specific function of the AI or the tool, but more at what kind of relationship we are building with these machines or things—or whatever we want to call them. Ross: Yeah. That’s why I dig into the relationships in the sense of the extended mind idea. Part of it is things we use, which enable us to do more. We’ve long had relationships with things. As those things become more autonomous, that changes. And the relationship with AI, which is far more human-like by design, also changes. So what are the types of relationships? When it’s not just humans and AI but also the things, what is the nature of these? Iskander: Yes, a good question. What type of relationships do we have? I’m really thinking about what the interaction is we have with things, and how we can define which are best suited for AI, which for humans, and how we relate to that. How do we perform together in a certain way? 
It’s an interesting question. Some people think that AI is just an early stage of being human-like. But I think we have evolved for such a long time that AI is definitely a different type of breed, maybe. So, what types of relations can we have here? There is, of course, a lot—especially when we had these conversational devices starting to pop up in our relationships. Ross: So one of the strongest relationships, I suppose, is collaboration. And so that’s kind of this idea around intelligence—collaboration—where we have collective human intelligence between humans, which we’ve had since we’ve gathered around fires. And now, of course, as you say, this intelligence is different but hopefully complementary to us. And so there’s a whole set of relationships with a set of humans, a set of AI. And so intelligence, I think you’re suggesting, emerges from that collaboration. Iskander: Definitely, yes. That’s an interesting point indeed, because also when you

Sep 10, 202536 min

S3 Ep 17: Brian Kropp on AI adoption, intrinsic incentives, identifying pain points, and organizational redesign (AC Ep17)

“If you’re not moving quickly to get these ideas implemented, your smaller, more agile competitors are.” –Brian Kropp About Brian Kropp Brian Kropp is President of Growth at World 50 Group. Previous roles include Managing Director at Accenture, Chief of HR Research at Gartner, and Practice Leader at CEB. His work has been extensively featured in the media, including The Washington Post, NPR, Harvard Business Review, and Quartz. Website: world50.com LinkedIn Profile: Brian Kropp X Profile: Brian Kropp What you will learn Driving organizational performance through AI adoption Understanding executive expectations versus actual results in AI performance impact Strategies for creating effective AI adoption incentives within organizations The importance of designing organizations for AI integration with a focus on risk management Middle management’s evolving role in AI-rich environments Redefining organizational structures to support AI and humans in tandem Building a culture that encourages AI experimentation Empowering leaders to drive AI adoption through innovative practices Leveraging employees who are native to AI to assist in the learning process for leaders Learning from case studies and studies of successful AI integration Episode Resources Transcript Ross Dawson: Brian, it’s wonderful to have you on the show. Brian Kropp: Thanks for having me, Ross. Really appreciate it. Ross: So you’ve been doing a lot of work for a long time in driving organizational performance. These are perennials, but there’s this little thing called AI, which has come along lately and is changing things. Brian: You might have heard of it somewhere. I’m not sure if you’ve been alive or awake for the last couple of years, but you might have heard about it. Ross: Yeah, so we were just chatting before, and you were saying the pretty obvious thing: okay, we’ve got AI—well, it’s only useful when it starts to be used. We need to drive the adoption.
These are humans—humans who are using AI and working together to drive the performance of the organization. So I’d love to hear a big-picture frame of what you’re seeing in how we drive the effective use of AI in organizations. Brian: I think a good starting point is actually to take a step back and understand what expectations executive senior leaders have about the benefit of these sorts of tools. Now, to be honest, nobody knows exactly what the final benefit is going to be. There is definitely guesswork involved. There are different people with different expectations and all sorts of different viewpoints on them, so the exact numbers are a little bit fuzzy at best in terms of the estimates of what performance improvements we will actually see. But when you think about it, at least at kind of orders of magnitude, there are studies that have come out. There’s one recently from Morgan Stanley that talked about their expectation of around a 40 to 50% improvement in organizational performance, defined as revenue and margin improvements from the use of AI tools. So that’s a really big number. It’s a very big number. When you do analysis of earnings calls from CEOs and when they’re pressed on what their expectation is, those numbers range between 20 and 30%. That’s still a really big number, and this is across the next couple of years, so that’s the timeframe. What’s fascinating is that when you survey line executives, senior executives—think vice presidents, people three layers down from the CEO—and you look at some of the actual results that have been achieved so far, it’s in the single-digit range. So the challenge that’s out there: the frontier says 50, CEOs say 30, the actualized is, call it, five. And those numbers, plus or minus a little bit, are in that range. And so there’s enormous pressure on executives in businesses to actually drive adoption of these tools.
Not necessarily to get to 50—I think that’s probably unrealistic, at least in the next kind of planning horizon—but to get from five to 10, from five to 15. Because there are billions of dollars of investments that companies are making in these tools. There are all sorts of startups that they’re buying. There are all sorts of investments that they’re making. And if those executives don’t start to show returns, the CFO is going to come knocking on the door and say, “Hey, you wrote a check for $50 million and the business seems kind of the same. What’s up with that?” There’s enormous pressure on them to make that happen. So if you’re, as an executive, not thinking hard about how you’re actually going to drive the adoption of these tools, you’re certainly not going to get the cost savings that are real potential opportunities from using these tools. And you will absolutely not get the breakthrough performance that your CEO and the investment community are

Sep 3, 202539 min

S3 Ep 16: Suranga Nanayakkara on augmenting humans, contextual nudging, cognitive flow, and intention implementation (AC Ep16)

“There’s a significant opportunity for us to redesign the technology rather than redesign people.” –Suranga Nanayakkara About Suranga Nanayakkara Suranga Nanayakkara is founder of the Augmented Human Lab and Associate Professor of Computing at National University of Singapore (NUS). Before NUS, Suranga was an Associate Professor at the University of Auckland, appointed by invitation under the Strategic Entrepreneurial Universities scheme. He is founder of a number of startups including AiSee, a wearable AI companion to support blind & low vision people. His awards include MIT TechReview young inventor under 35 in Asia Pacific and Outstanding Young Persons of Sri Lanka. Website: ahlab.org intimidated.info LinkedIn Profile: Suranga Nanayakkara University Profile: Suranga Nanayakkara What you will learn Redefining human-computer interaction through augmentation Creating seamless assistive tech for the blind and beyond Using physiological sensors to detect cognitive load Adaptive learning tools that adjust to flow states The concept of an AI-powered inner voice for better choices Wearable fact-checkers to combat misinformation Co-designing technologies with autistic and deaf communities Episode Resources Transcript Ross Dawson: Suranga, it’s wonderful to have you on the show. Suranga Nanayakkara: Thanks, Ross, for inviting me. Ross: So you run the Augmented Human Lab. I’d love to hear more about what augmented human means to you, and what you are doing in the lab. Suranga: Right. I started the lab back in 2011, and part of the reasoning is personal. My take on augmentation is really that everyone needs assistance. All of us are disabled in one way or another. It may be a permanent disability. It may be that you’re in a country where you don’t speak the language and don’t understand the culture. For me, when I first moved to Singapore, I didn’t speak English.
I was very naive about computers—to the point that I remember very vividly, back in the day, Yahoo Messenger had this notification sound of knocking, and I misinterpreted that as somebody knocking on my door. That was very, very intimidating. I felt I wasn’t good enough, and that could have been career-defining. With that experience, as I got better with the technology, and when I wanted to set up my lab, I wanted to think of ways to redefine these human-computer interfaces so that they provide assistance—because everyone needs help. And instead of just thinking of assistive tech, how do we think of augmenting our abilities, depending on your context, depending on your situation? I started the lab focused on sensory augmentation, but a couple of years later, with the lab growing, we created a broader definition of augmenting humans, and that’s when the name became Augmented Human Lab. Ross: Fantastic. And there are so many domains and so many projects you’re working on which are very interesting and exciting. We’d love to go through some of those in turn. But the one you just mentioned was around assisting blind people. I’d love to hear more about what that is and how it works. Suranga: Right. So the inspiration for that project came when I was a postdoc at MIT Media Lab, and there was a blind student who took the same assistive tech class with me. The way he accessed his lecture notes was to browse to a particular app on his mobile phone, open the app, and take a picture, and the app read out the notes for him. For him, this was perfect, but for me, observing his interactions, it didn’t make sense. Why would he have to do so many steps before he could access information? And that sparked a thought: what if we take the camera out and put it in a way that it’s always accessible and you need minimum effort? I started with the camera on the finger.
It was a smart ring. You just point and ask questions. And that was a golf ball-sized, bulky interface, just to show the concept. As we iterated, it became a wearable headphone which has a camera, a speaker, and a microphone. The camera sees what’s in front of you, the speaker can speak back to you, and the microphone listens to you. With that, you can enable very seamless interaction for a blind person. Now you can just hold the notes in front of you and ask, please read this for me. Or you might be in front of a restroom and want to know which one is for women and which is for men. You can point and ask that question. So essentially, this device, which we now call AiSee, is a way of providing very seamless, effortless interaction for blind people to access visual information. And now we realize it’s not just for blind people. I actually used it myself. Recently I went to Japan, and I don’t read Japanese, and pretty much everything is in Japanese. I went to a pharmacy, I wanted to buy medicine for a headache, and AiSee

Aug 27, 202531 min

S3 Ep 15: Michael I. Jordan on a collectivist perspective on AI, humble genius, design for social welfare, and the missing middle kingdom (AC Ep15)

“The fact is that its input came from billions of humans… When you’re interacting with an LLM, you are interacting with a collective, not a singular intelligence sitting out there in the universe.” –Michael I. Jordan About Michael I. Jordan Michael I. Jordan is the Pehong Chen Distinguished Professor in Electrical Engineering and Computer Science and professor in Statistics at the University of California, Berkeley, and chair of Markets and Machine Learning at INRIA Institute in Paris. His many awards include the World Laureates Association Prize, IEEE John von Neumann Medal, and the Allen Newell Award. He has been named in the journal Science as the most influential computer scientist in the world. Website: arxiv.org LinkedIn Profile: Michael I. Jordan University Profile: Michael I. Jordan What you will learn Redefining the meaning of intelligence The social and cultural roots of human genius Why AI is not true superintelligence Collective genius as the driver of innovation The missing link between economics and AI Decision making under uncertainty and asymmetry Building AI systems for social welfare Episode Resources Transcript Ross Dawson: Michael, it’s wonderful to have you on the show. Michael I. Jordan: My pleasure to be here. Ross: Many people seem to be saying that AI is going to beat all human intelligence very soon. And I think you have a different opinion. Michael: Well, there’s a lot of problems with that framing for technology. First of all, we don’t really understand human intelligence. We think we do because we’re intelligent, but there’s depths we haven’t probed, and there’s the field of psychology just getting going—not to mention neuroscience. So just saying that something that mimics humans, or took a vast amount of data and brute-forced mimicked humans, seems like a kind of leap to me—that it has human intelligence nailed. Moreover, the idea that it was a sequence of logic doesn’t particularly work for me. 
We figured out human intelligence, now we can put it in silicon and scale it, and therefore we’ll get superintelligence. Every step there is a leap—the scaling part, I guess, is okay, but we have not figured out human intelligence. Even if we had, it’s not really clear to me as a technology that our goal should be to mimic or replace humans. In some jobs, sure, but we should think more about overall social welfare and what’s good for humans. How do we complement humans? So, no, I don’t think we’ve got human intelligence figured out at all. It’s not that it’s a mystical thing, but we have creativity. We have experience and shared experience, and we plumb the depths of that when we interact and when we create things. Those machines that are doing brute force gradient descent on large amounts of text and even images or whatever—they’re not getting there. It is brute force. I don’t think the sciences have really progressed by just having brute force solutions that no one understands and saying, “That’s it, we’re done.” So if you want to understand human intelligence, it’s going to be a while. Ross: There’s a lot to dig into there, but perhaps first: just intelligence. You frame that as, among other things, social and cultural, not just cognitive? Michael: Absolutely. I don’t think if you put me on a desert island, I’d do very well. I need to be able to ask people how to do things. And if you put me not just on a desert island but in a foreign country, without the 40 years of education I had—the education that imbued me with the culture of our civilization—I wouldn’t do well either. Anytime I’m not knowledgeable about something, I can go find it, and I can talk to people. Yes, I can now use technology to find it, but I’m really talking to people through the technology. I don’t think we appreciate how important that cultural background is to our thinking, to our ability to do things, to execute, and then to figure out what we don’t know and what we’re not good at.
That’s how we trade with others who are better at it, how we interact, and all that. That’s a huge part of what it means to be human, and how to be a successful and happy human. This mythological Einstein sitting all by himself in a room, thinking and pondering—I think we’re way too wedded to that. That’s not really how our intelligence is rolled out in the real world. Generally, we’re very uncertain about things in the real world. Even Einstein was uncertain, had to ask others, learn things, and find a path through the complexity of thought. Also, I’ve worked on machine learning for many years, and I’m pretty comfortable saying that learning is a thing we can define, or at least start to define: you improve on certain tasks. Intelligence—I’m just much less happy with trying to define it. I think there’s a lot of social intelligence, so I’m using that term loosely. But hu

Aug 20, 202542 min

S3 Ep 14: Paula Goldman on trust patterns, intentional orchestration, enhancing human connection, and humans at the helm (AC Ep14)

“The potential is boundless, but it doesn’t come automatically; it comes intentionally.” –Paula Goldman About Paula Goldman Paula Goldman is Salesforce’s first-ever Chief Ethical and Humane Use Officer, where she creates frameworks to build and deploy ethical technology for optimum social benefit. Prior to Salesforce she held leadership roles at global social impact investment firm Omidyar Network. Paula holds a Ph.D. from Harvard University, and is a member of the National AI Advisory Committee of the US Department of Commerce. Website: salesforce.com LinkedIn Profile: Paula Goldman X Profile: Paula Goldman What you will learn Redefining ethics as trust in technology Designing AI with intentional human oversight Building justifiable trust through testing and safeguards Balancing automation with uniquely human tasks Starting small with minimum viable AI governance Involving diverse voices in ethical AI decisions Envisioning AI that enhances human connection and creativity Episode Resources Transcript Ross Dawson: Paula, it is fantastic to have you on the show. Paula Goldman: Oh, I’m so excited to have this conversation with you, Ross. Ross: So you have a title which includes chief of ethical and humane use. So what is humane use of technology and AI? Paula: Well, it’s interesting, because Salesforce created this Office of Ethical and Humane Use of Technology around seven years ago, and that was kind of before this current wave of AI. But it was with this—I don’t want to say premonition—this recognition that as technology advances, we need to be asking ourselves sophisticated questions about how we design it and how we deploy it, and how we make sure it’s having its intended outcome, how we avoid unintended harm, how we bring in the views of different stakeholders, how we’re transparent about that process. So that’s really the intention behind the office. Ross: Well, we’ll come back to that, because I just—humane and humanity is important.
So ethics is the other part of your role. Most people frame ethics as: let’s work out what we shouldn’t do. But of course, ethics is also about having a positive impact, not just avoiding the negative impact. So how do you frame this—how can we build and implement technologies in ways that have a net benefit, as opposed to just avoiding the negatives? Paula: Well, I love this question. I love it a lot, because one of my secrets is that I don’t love the word ethics to describe our work. Not that it isn’t appropriate—it’s very appropriate—but the word I like much more than that is trust: trustworthy technology. So what happens when you build—especially given how quickly AI is evolving, how sometimes it’s hard for people to understand what’s going on underneath the hood and so on—how do you design technology so that people understand how it works? They know how to get the best from it, they know where it might go wrong and what safeguards they should implement, and so on. When you frame the exercise like that, it becomes a source of innovation. It becomes a design constraint that breeds all kinds of really cool innovations—what we call trust patterns—in our technology, like a set of customizable safeguards for our customers that we call our trust layer. And this is one of our differentiators as we go to market. It’s features that allow people to protect the privacy of their data, or make sure that the tone of the output from the AI remains on brand, or look out for accuracy and tune the accuracy of the responses, and so on. So when you think about it like that, it becomes much less of this mental image of a group of people off in the corner asking lofty questions, and much more of an all-of-company exercise where we’re working deeply with our customers to ask: How do we get this technology to work in a way that really benefits everyone? Ross: That’s fantastic. Actually, I just created a little framework around trust in AI adoption.
So it’s like trust that I can use this effectively, trust that others around me will use it well in teams, trust that my leaders will use it in appropriate ways, trust from customers, trust in the AI. And in many ways, everything’s about trust. Because a lot of people don’t trust AI, possibly justifiably in some domains. So I’d love to dig a little bit into how it is you frame and architect that ability—this ability to have justifiable trust. Paula: Do you mean the justifiable trust from the customers, the end users? Ross: Well, I think at all those layers. I think these are all important, but that’s a critical one. Paula: Yeah, I think a lot of it is about—I actually think about our work as sort of having two different levels to it. One is the objective function of reviewing a product. We do something called adversarial testing, where we’ll take, let’s sa

Aug 13, 202534 min

S3 Ep 13Vivienne Ming on hybrid collective intelligence, building cyborgs, meta-uncertainty, and the unknown infinite (AC Ep13)

“What I need is someone who will have an idea I would never have had. In fact, better yet, an idea no one else in the world would ever have. That’s human space. That’s our job now: the unknown infinite.” –Vivienne Ming About Vivienne Ming Vivienne Ming is a theoretical neuroscientist, entrepreneur, and author. Her AI inventions have launched a dozen companies and nonprofits with a focus on human potential, including Socos Labs and Dionysus Health. She is Professor at UCL Global Business School for Health, with her work featured in media including the Financial Times, The Atlantic, and The New York Times. Website: socos.org dionysushealth.com optoceutics.com LinkedIn Profile: Vivienne Ming X Profile: Vivienne Ming What you will learn Unlocking human potential through AI Building health systems with humans and machines Why AI should challenge—not replace—us The danger of cognitive atrophy in education Fostering metacognition and meta-uncertainty Diversity as a driver of collective intelligence Preparing for a future of infinite unknowns Episode Resources Transcript Ross Dawson: Vivienne, it is fantastic to have you on the show. Vivienne Ming: It’s a pleasure to be here. Ross: So you have been described as obsessed with using technology to maximize human potential. That’s a big topic—how do you see it? What is the potential? Vivienne: Yeah, I mean, when I was interviewing to go to grad school, I used to tell people that I wanted to build cyborgs, which is an excellent way to get everyone to scoot away from you for fear that your crazy will rub off and they won’t get accepted either. But one of my claims to notoriety is that when my son was diagnosed with type one diabetes, I hacked all of his medical equipment. Turns out, I broke all sorts of US federal regulations. And little did I know at the time, I invented one of the first ever AIs for diabetes.
And I mention that here in answer to your lead-in because as much as I’m thrilled that I helped my son—it’s a project I’m more proud of than any other—there is some kid in a favela in Rio, in a village outside Kinshasa, or down the street from me here in California. This kid has the cure—not some crummy AI, not a treatment, a cure for diabetes—in their potential. But the overwhelming likelihood is they’re never going to live the life that allows them to bring that into the world. And there’s tons of research on this. I’m a hard-numbers scientist, so words like human potential can feel very flowery. But to me, it’s grounded and sort of strangely selfish. What could all of these lives be doing to transform the world for the better? And for some reason, we are so under-motivated to make that potential a reality. So this is—when I come at these sorts of problems, that’s really where I’m coming from. And I’ll even share, just as a personal motivation, I spent a solid chunk of the 90s miserable and homeless. And since then, I’ve gotten to found—or been involved in founding—12 different companies. I’ve invented six life-saving inventions. I’ve written books. I’ve gotten to do so many things. And I get it. I have a weird life, a wonderful life. Maybe not everyone’s going to have that same life, but everyone could. And how many lives never got off the streets, or never got out of the favela? Or, for that matter, how many lives were exceptional in some sense, but kind of stalled out at a solid job somewhere, doing something anyone else could have done? You enjoyed things and you led a good life. But again, you could have done something transformative, and the world didn’t call on you. It didn’t give you that opportunity. That’s what human potential is about for me. Ross: Fantastic.
And so I think just digging into that healthcare piece—so one of the really interesting things about diabetes, or AI and diabetes, is this idea of a closed system, where the human and the AI system—as there is data coming to the human to be able to adjust glucose levels and so on… And I think some of your other work around, for example, bipolar or other domains as well, where it’s looking at humans and AI as a system—where we humans are obviously an integral part of the system—but we have data, and we’re using the AI or technology as an external system to be able to build a bigger system which can enhance our health, be that in glucose levels, be that in our ability to respond to ways our neurology is going awry. So I suppose you can speak to any specifics around how it is we can build those humans-plus-AI health systems. Vivienne: Yeah. Again, coming from my original world—and it’s still my world. In terms of my academic work, I still have a toe over there—and it’s in what’s called neuroprosthetics. So we don’t call them cyborgs nowadays. And what I always think of there is: my technologies should only ever make people better. They shouldn’t replace

Aug 6, 202547 min

S3 Ep 12Matt Beane on the 3 Cs of skill development, AI augmentation design templates, inverted apprenticeships, and AI for skill enhancement (AC Ep12)

“The primary source of our reliable ability to produce results under pressure—i.e., skill—is attempting to solve complicated problems with an expert nearby.” –Matt Beane About Matt Beane Matt Beane is Assistant Professor at University of California Santa Barbara, and a Digital Fellow with both Stanford’s Digital Economy Lab and MIT’s Institute for the Digital Economy. He was employee number two at the Internet of Things startup Humatics, where he played a key role in helping to found and fund the company, and is the author of the highly influential book The Skill Code: How to Save Human Ability in an Age of Intelligent Machines. Website: mattbeane.com LinkedIn Profile: Matt Beane University Profile: Matt Beane Book: The Skill Code   What you will learn Redefining skill development in the age of AI Why training alone doesn’t build true expertise The three Cs of optimal learning: challenge, complexity, connection How AI disrupts traditional apprenticeship models Inverted apprenticeships and bi-directional learning Designing workflows that upskill while delivering results The hidden cost of ignoring junior talent development Episode Resources Transcript Ross Dawson: Matt, it is awesome to have you on the show. Matt Beane: I’m delighted to be here. Really glad that you reached out. Ross: So you are the author of The Skill Code. This builds on, I think, research for well over a decade. It came out over a year ago, and now this is very much of the moment, as people are saying all over the place that entry-level jobs are disappearing, and we’re talking about inverted pyramids and so on. So, what is The Skill Code? Matt: Right. The first third of the book is devoted to the working conditions that humans need in order to build skill optimally. The myth that is supported by billions of dollars of misdirected investment is that skill comes out of training. And that is—we just have a mountain of evidence that that’s not so. It can help, it can also hurt. 
But the primary source of our reliable ability to produce results under pressure—i.e., skill—is attempting to solve complicated problems with an expert nearby. Basically, we can learn, of course, without these conditions—sort of idealized conditions—but with them it can be great. And the first third of the book is devoted to what does it take for it to be great? I got there sort of backwards by studying how people were trying to learn in the midst of trying to deal with new and intelligent technologies at work—and mostly failing. But a few succeeded. And so I just looked at those success cases and saw what they had in common across many industries and so on. So, I break that out in the beginning of the book into three Cs—thankfully, in English, this broke out that way: Challenge, Complexity, and Connection. And those roughly equate—well, pretty precisely, actually, I should own the value of the book—they equate to four chunks of characteristics of the work that you’re embedded in that need to be in place in order for you to learn. Challenge basically is: are you working close to, but not at, the edge of your capacity? And complexity is: in addition to focusing on getting good at a thing that you’re trying to improve at, are you also sort of looking left and looking right in your environment to digest the full system you’re embedded in? That’s complexity. And connection is building warm bonds of trust and respect between human beings. All three of those things—I could go into each—but basically, in concert, in no particular sequence—each workplace, each situation is different—but these are the base ingredients. I used a DNA metaphor in the book. These are sort of the basic alphabet of what it takes to build skill, and your particular process or approach or situation is going to vary in terms of how those show up. Ross: So, before getting to solutions or prescriptions, it’s probably worth laying out the problem.
AI and various technologies can now readily do what those entering the workforce—or entering particular careers—would otherwise do. And essentially, a lot of the classic apprenticeship-style model has been that you learn by making mistakes and, as you say, alongside the masters. And if people, if organizations, are saying, “Well, we no longer need so many entry-level people to do the dirty, dull work,” then we don’t have this pathway for people to develop those skills in the way you described. Matt: Yes, and it’s even worse than that. So, for those that remain—because, of course, organizations are going to hire some junior people—the problems that I document in my research, starting in 2012… Robotic surgery was one early example, but I’ve since moved on to investment banking and bomb disposal—I mean, very diverse examples. When you introduce a new form of intelligent automation into the work, the primary way that you extract gains from that is that the expert in the wor

Jul 30, 202539 min

S3 Ep 11Tim O’Reilly on AI native organizations, architectures of participation, creating value for users, and learning by exploring (AC Ep11)

“We’re in this process where we should be discovering what’s possible… That’s what I mean by AI-native — just go figure out what the AI can do that makes something so much easier or so much better.” – Tim O’Reilly About Tim O’Reilly Tim O’Reilly is the founder, CEO, and Chairman of leading technical publisher O’Reilly Media, and a partner at early stage venture firm O’Reilly AlphaTech Ventures. He has played a central role in shaping the technology landscape, including in open source software, web 2.0, and the Maker movement. He is author of numerous books including WTF? What’s the Future and Why It’s Up to Us. Website: www.oreilly.com LinkedIn Profile: Tim O’Reilly X Profile: Tim O’Reilly Articles: AI First Puts Humans First An Architecture of Participation for AI? AI and Programming: The Beginning of a New Era   What you will learn Redefining AI-native beyond automation Tracing the arc of human-computer communication Resisting the enshittification of tech platforms Designing for participation, not control Embracing group dynamics in AI architecture Unlocking new learning through experimentation Prioritizing value creation over financial hype Episode Resources Transcript Ross Dawson: Tim, it is fantastic to have you on the show. You were my very first guest on the show three years ago, and it’s wonderful to have you back. Tim O’Reilly: Well, thanks for having me again. Ross: So you have seen technology waves over decades and been right in there forming some of those. And so I’d love to get your perspectives on AI today. Tim: Well, I think, first off, it’s the real deal. It’s a major transformation, but I like to put it in context. The history of computing is the history of making it easier and easier for people to communicate with machines. I mean literally in the beginning, they had to actually wire physical circuits into a particular calculation, and then they came up with the stored program computer. 
And then you could actually input a program one bit at a time, first with switches on the front of the computer. And then, wow, punch cards. And we got slightly higher level languages. First it was big, advanced assembly programming, and then big, advanced, higher level languages like Fortran, and that whole generation. Then we had GUIs. I mean, first we had command lines. Literally the CRT was this huge thing. You could literally type and have a screen. And I guess the point is, each time that we had an advance in the ease of communication, more people used computers. They did more things with them, and the market grew. And I think I have a lot of disdain for this idea that AI is just going to take away jobs. Yes, it will be disruptive. There’s a lot of disruption in the past of computing. I mean, hey, if you were a programmer, you used to have to know how to use an oscilloscope to debug your program. And a lot of that old sort of analog hardware that was sort of looking at the waveforms and stuff — not needed anymore, right? I remember stepping through programs one instruction at a time. There’s all kinds of skills that went away. And so maybe programming in a language like Python or Java goes away, although I don’t think we’re there yet, because of course it is simply the intermediate code that the AIs themselves are generating, and we have to look at it and inspect it. So we have a long way before we’re at the point that some people are talking about — evanescent programs that just get generated and disappear, that are generated on demand because the AI is so good at it. It just — you ask it to do something, and yeah, it generates code, just like maybe a compiler generates code. But I think that’s a bit of a wish list, because these machines are not deterministic in the way that previous computers were. And I love this framework that there’s really — we now have two different kinds of computers. 
Wonderful post — trying to think who, name’s escaping me at the moment — but it was called “LLMs Are Weird Computers.” And it made the point that you have, effectively, one machine that we’re working with that can write a sonnet but really struggles to do math repeatedly. And you have another type of machine that can come up with the same answer every single time but couldn’t write a sonnet to save its life. So we have to get the best of both of these things. And I really love that as a framework. It’s a big expansion of capability. But returning back to this idea of more — the greater ease of use expanding the market — just think back to literacy. There was a time when there was a priesthood. They were the only people who could read and write. And they actually even read and wrote in a dead language — Latin — that nobody else even spoke. So it was this real secret, and it was a source of great power. And it was subversive when they first, for example, printed the Bible in English. And literally, when they printed the

Jul 23, 202541 min

S3 Ep 10Jacob Taylor on collective intelligence for SDGs, interspecies money, vibe-teaming, and AI ecosystems for people and planet (AC Ep10)

“If we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit we need to tackle them.” – Jacob Taylor About Jacob Taylor Jacob Taylor is a fellow in the Center for Sustainable Development at Brookings Institution, and a leader of its 17 Rooms initiative, which catalyzes global action for the Sustainable Development Goals. He was previously research fellow at the Asian Bureau of Economic Research and consulting scientist on a DARPA research program on team performance. He was a Rhodes scholar and represented Australia in Rugby 7s for a number of years. Website: www.brookings.edu loyalagents.org LinkedIn Profile: Jacob Taylor X Profile: Jacob Taylor What you will learn Reimagining Team Performance Through Collective Intelligence Using 17 Rooms to Break Down the SDGs Into Action Building Rituals That Elevate Learning and Challenge Norms Designing Digital Twins to Represent Communities and Ecosystems Creating Interspecies Money for Elephants, Trees, and Gorillas Exploring Vibe Teaming for AI-Augmented Collaboration Envisioning a Bottom-Up AI Ecosystem for People and Planet Episode Resources Transcript Ross Dawson: Jacob, it is awesome to have you on the show. Jacob Taylor: Ross, thanks for having me. Ross: So we met at Human Tech Week in San Francisco, where you were sharing all sorts of interesting thoughts that we’ll come back to. What are your top-of-mind reflections of the event? Jacob: Look, I had a great week, and largely because of all the great people I met, to be honest. And I think what I picked up there was people really driving towards the same set of shared outcomes. Really people genuinely building things, talking about ways of working together that were driving at outcomes for, ultimately, for human flourishing, for people and planet.
And I think that’s such an important conversation to have at the moment, as things are moving so fast in AI and technology, and sometimes it’s hard to figure out where all of this is leading, basically. And so to have humans at the center is a great principle. Ross: Yeah, well, where it’s leading is where we take it. So I think having the humans at the center is probably a pretty good starting point. So one of the central themes of this blog—for this podcast for ages—has been collective intelligence. And so you are diving deep into applying collective intelligence to achieve the Sustainable Development Goals, and I would love to hear more about what you’re doing and how you’re going about it. Jacob: Yeah, so I mean, very quickly, I’m an anthropologist by training. I have a background in elite team performance as a professional rugby player, and then studying professional team sport for a number of years. So my original collective is the team, and that’s kind of my intuitive starting point for some of this. But teams are very well built to solve problems that no individual can solve alone, and really a lot of the SDG problems that we have—issues that communities at every scale have trouble solving on their own—need a whole community to tackle a problem, rather than just one individual or set of individuals within a community. So the SDGs are these types of—whether it’s climate action or ending extreme poverty or sustainability at the city level—all of these issues require collective solutions. And so if we’re faced with problems that are moving fast and require collective solutions, then collective intelligence becomes the toolkit or the approach that we need to use to tackle those problems. I’ve been thinking a lot about this idea that in the second half of the 20th century, economics as a discipline went from pretty much on the margins of policymaking and influence to right at the center.
By the end of the 20th century, economists were at the heart of informing how decisions were made at the country level, at firms, and so on. That was because an economic framework really helped make those decisions. I think my sense is that the problems we face now really need the toolkit of the science of collective intelligence. So that’s kind of one of the ideas I’ve been exploring—is it time for collective intelligence as a science to really inform the way we make decisions at scale, particularly for our hardest problems like the SDGs? Ross: One of your initiatives—so at the Brookings Institution, one of the initiatives is 17 Rooms. I’m so intrigued by the name and what that is and how that works. Jacob: Yeah. So, 17 Rooms. We have 17 Sustainable Development Goals, and so on. Five or so years ago now—or more, I think it’s been running for seven or eight years now—the 17 Rooms team thought: what if we found a method to break down that complexity of the SDGs? A lot of people talk about the SDGs as everything connected to everything, which sometimes is true. There are

Jul 16, 2025

S3 Ep 9AI & The Future of Strategy (AC Ep9)

“Strategy really must focus on those purely human capabilities of synthesis, and judgment, and sense-making.” – Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Ross Dawson Advanced Human Technologies LinkedIn Profile: Ross Dawson Books Thriving on Overload Living Networks 20th Anniversary Edition Implementing Enterprise 2.0 Developing Knowledge-Based Client Relationships What you will learn How AI is reshaping strategic decision-making The accelerating need for flexible leadership Why trust is the new competitive advantage The balance between human insight and machine analysis Storytelling as the heart of effective strategy Building learning-driven, adaptive organizations The evolving role of leaders in an AI-first world Episode Resources Transcript Ross Dawson: This is a little bit of a different episode. Instead of an interview, I will be sharing a few thoughts in the context of now doubling down on the Humans Plus AI theme. Our community is kicking off the next level. As you may have noticed, the podcast has been rebranded Humans Plus AI, and is now fully focused on this theme of how AI can augment humans—individuals, organizations, and society. So what I want to share today is some of the thoughts which came out of Human Tech Week. I was fortunate to be at Human Tech Week in San Francisco a few weeks ago. I did the opening keynote on Infinite Potential: Humans Plus AI, and I’ll share some more thoughts on that another time. But what I also did was run a lunch event, a panel with myself, John Hagel, and Charlene Li, talking about AI and the future of strategy.
So it was an amazing conversation, and I can’t do it justice now, but what I want to do is just share some of the high-level themes that came out of that conversation, and I suppose, obviously, bringing my own particular slant on those. So we started off by thinking about how change generally, including AI, is impacting strategy and the strategy process. So fairly obviously we have accelerating change. That means that decision cycles are getting shorter, and strategy needs to move faster. It also means that there is the ability for creation of all kinds to be democratized within, across, and beyond organizations, allowing them to innovate, to act without necessarily being centralized. And this idea of this abundance of knowledge, coupled with the scarcity of insight, means that strategy really must focus on those purely human capabilities of synthesis, and judgment, and sense-making. There’s also the theme that institutional trust is eroding. So this means that more and more, strategy shifts to relationship-based models, ecosystem-based models. And this overarching theme, which John Hagel in particular brought out, is this idea that there is greater fear amongst leaders. There’s greater emotional pressure, and these basically shrink the timeline of our thinking. It forces us into shorter-term thinking. We act based on fear—fear of a whole variety of pressures from shareholders, stakeholders, politicians, and more. We need to allow ourselves to move beyond the fear, as John’s latest book The Journey Beyond Fear lays out—highly recommended—which then enables our strategic imagination and new ways of thinking. So one of the core themes of the conversation was around: what are the relative roles of AI and humans in the strategy process? Humans are strategic thinkers by their very nature, and now we have AI which can support us and complement us in various ways. Of course, there is a strong way in which AI can use data.
It can do a lot of analysis. It is very capable at pattern recognition. It can move faster. It can simulate scenarios and futures, identify signals, and so it can scale what can be done in strategy analysis. It can go deeper into the analysis. But this elevates the human role to the higher levels: the creativity, the imagination, the judgment, the ethical framing, the purpose, the vision, the values. One of the key things which came out of it was around storytelling, where strategy is a story. It’s not this whole array of KPIs and routes to get to them—that’s a little part of it. It is telling a story that engages people, that makes them passionate about what they want to do and how they are going to do it—that’s their heroes and heroines’ journey. So this insight, this sense-making, is still human. There’s a wonderful quote from the session, saying, “AI without data is extremely stupid,” but even with the data, it can’t deliver the insight or the wisdom on its own. That is someth

Jul 9, 202512 min

S3 Ep 8Matt Lewis on augmenting brain capital, AI for mental health, neurotechnology, and dealing in hope (AC Ep8)

“The big picture is that every human on Earth deserves to live a life worth living… free of mental strife, physical strife, and the strife of war.” – Matt Lewis About Matt Lewis Matt Lewis is CEO, Founder, and Chief Augmented Intelligence Officer of LLMental, a Public Benefit Limited Liability Corporation Venture Studio focused on augmenting brain capital. He was previously Chief AI Officer at Inizio Health, and contributes in many roles including as a member of OpenAI’s Executive Forum, Gartner’s Peer Select AI Community, and faculty at the World Economic Forum’s New Champions initiative. Website: Matt Lewis LinkedIn Profile: Matt Lewis What you will learn Using AI to support brain health and mental well-being Redefining mental health with lived experience leadership The promise and danger of generative AI in loneliness Bridging neuroscience and precision medicine Citizen data science and the future of care Unlocking human potential through brain capital Shifting from scarcity mindset to abundance thinking Episode Resources Transcript Ross Dawson: Matt, it’s awesome to have you on the show. Matt Lewis: Thank you so much for having me. Ross, it’s a real pleasure and honor. And thank you to everyone that’s watching, listening, learning. I’m so happy to be here with all of you. Ross: So you are focusing on using AI, amongst other technologies, to increase brain capital. So what does that mean? Matt: Yeah. I mean, it’s a great question, and it’s, I think, the challenge of our time, perhaps our generation, if you will. I’ve been in artificial intelligence for 18 years, which is like an eon in the current environment, if you will. I built my first machine learning model about 18 years ago for Parkinson’s disease, a degenerative condition where people lose the ability to control their body as they wish they would. I was working at Boehringer Ingelheim at the time, and we had a drug, a dopamine agonist, to help people regain function, if you will.
But some small number of people developed this weird side effect, this adverse event that didn’t appear in clinical trials, where they became addicted to all sorts of compulsive behaviors that made their actual lives miserable. Like they became shopping addicts, or they became compulsive gamblers. They developed proclivities to sexual behaviors that they didn’t have before they were on our drug, and no one could quite figure out why they had these weird things happening to them. And even though they were seeing the top academic neurologists in this country, the United States, or other countries, no one could say why Ross would get this adverse event and Matt wouldn’t. It didn’t appear in the studies, and there’s no way to kind of figure it out. The only thing that kind of really sussed out what was an adverse event versus what wasn’t was advanced statistical regression and later machine learning. But back in the day, almost 20 years ago, you needed massive compute, massive servers—like on trucks—to be able to run these types of computations to actually improve clinical outcomes. Now, thankfully, the ability to provide practical innovation in the form of AI to help improve people’s actual lives through brain health is much more accessible, democratizable, almost in a way that wasn’t available then. And if it first appeared for motor symptoms, for neurodegenerative disease, some time ago, now we can use AI to help not just the neurodegenerative side of the spectrum but also neuropsychiatric illness, mental illness, to help identify people that are at risk for cognition challenges. Here in Manhattan, it’s like 97 degrees today. People don’t think the way they normally do when it’s 75. They make decisions that they perhaps wish they hadn’t, and a lot of the globe is facing similar challenges. So if we can kind of partner with AI to make better decisions, everyone’s better off.
That construct—where we think differently, we make better decisions, we are mentally well, and we use our brains the way they were intended—all those things together are brain capital. And by doing that broadly, consistently, we’re better off as a society. Ross: Fantastic. So in that case, you’re looking at machine learning—so essentially being able to pull out patterns. Patterns between environmental factors, drugs used, background, other genetic data, and so on. So this means that you can—is this, then, alluding, I suppose, to precision medicine and being able to identify for individuals what the right pharmaceutical regimes are, and so on? Matt: Yeah. I mean, I think the idea of precision medicine, personalized medicine, is very appealing. I think it’s a very early, maybe even embryonic, kind of consideration in the neuroscience space. I worked for a long time for companies like Roche and Genentech, and others in that ecosystem, doing personalized medicine with

Jun 25, 202534 min

S3 Ep 7Amir Barsoum on AI transforming services, pricing innovation, improving healthcare workflows, and accelerating prosperity (AC Ep7)

“Successful AI ventures are those that truly understand the technology but also place real human impact at the center — it’s about creating solutions that improve lives and drive meaningful change.” – Amir Barsoum About Amir Barsoum Amir Barsoum is Founder & CEO of InVitro Capital, a venture studio that builds and funds companies at the intersection of AI and human-intensive industries, with four companies and over 150 professionals. He was previously founder of leading digital health platform Vezeeta and held senior roles at McKinsey and AstraZeneca. Website: InVitro Capital LinkedIn Profile: Amir Barsoum X Profile: Amir Barsoum What you will learn Understanding the future of AI investment Exploring the human impact of technology Insights from a leading AI venture capitalist Balancing risk and opportunity in startups The evolving relationship between humans and machines Strategies for successful AI entrepreneurship Unlocking innovation through visionary thinking Episode Resources Transcript Ross Dawson: Amir, it’s wonderful to have you on the show. Amir Barsoum: Same here, Ross. Thank you for the invite. Ross: So you are an investor in fast-moving and growing companies. And AI has come along and changed the landscape. So, from a very big picture, what do you see? And how is this changing the opportunity landscape? Amir: So, actually, we’re InVitro Capital. We started because we saw the opportunity of AI. And a big part of the reason we started is we think that the service industry—think about healthcare and home repair, even some service providers today—is going to be hugely disrupted by AI. Whether there will be automation, replacement as a bucket, or augmentation as a bucket, or at least facilitation. And we’ve seen a huge opportunity that we can build. We can build AI technology that could do the service.
Instead of being a software-as-a-service provider, we basically build the service provider itself. So that’s what excites us about what we’re trying to do and what we’re building. Ross: So what’s the origin of the name InVitro Capital? Does this mean test tubes? Amir: So, I think it originates from there. I think the idea is we’re building companies under controlled conditions. And it’s kind of the in vitro—in vitro fertilization, like IVF. We keep on building more companies under these controlled conditions. That’s the idea, and because we come from a healthcare background, it kind of resonated. Ross: All right, that makes sense. So, there’s a lot of talk going around—SaaS is dead. So this kind of idea, you talk about services and the way services are changing. And so that’s—yeah, absolutely—service delivery, whether that’s service by humans, whether it’s service by computers, whatever the nature of that, is changing. So does this mean that we are fundamentally restructuring the nature of what a service is and how it is delivered? Amir: I think, yes. I think between the service industry and the software industry, both of them are seeing a categorical change in how they’re going to be provided to the users. And, I mean, the change is massive. I’m not sure about the word “dead,” but we’re definitely seeing a huge, huge change. Think about it from a service perspective, from a software perspective. In software, I used to sell software to a company. The company needs people to be smart enough, educated enough, trained enough to use the software and get value out of it. They used to be called systems of record with some tasks, but really it’s a system of record that has a lot of records, and then somebody—some employee—sits there and does the job. In services, you think this is going to be very difficult, so you get somebody, as an outsourced provider, to do the service for you. 
Think about, I’m going to go and hire someone who’s going to help us do marketing content, or someone who would do even legal—and I’m going to the extreme. And I think both are seeing categorical change. The software and the employee, both together, could become one, or at least 80% of the job could be done now by AI technologies. And the service—the same thing. So we’re definitely seeing a massive change in these aspects. And talk legal, talk content marketing—all of them. Ross: I’d like to dig into that a little bit more. But actually, just one question is around pricing. Are you looking at or exploring ways in which fee structures or pricing of services change? I mean, classically, where services involved humans, there was some kind of correlation to the cost of the human plus the margin. Now there is AI, which is often taking an increasing proportion of the way the service is delivered. So—and different perceptions where clients

Jun 18, 202534 min

S3 Ep 6 Minyang Jiang on AI augmentation, transcending constraints, fostering creativity, and the levers of AI strategy (AC Ep6)

What are the goals I really want to attain professionally and personally? I’m going to really keep my eye on that. And how do I make sure that I use AI in a way that’s going to help me get there—and also not use it in a way that doesn’t help me get there? – Minyang Jiang (MJ) About Minyang Jiang (MJ) Minyang Jiang (MJ) is Chief Strategy Officer at business lending firm Credibly, leading and implementing the company’s growth strategy. Previously she held a range of leadership positions at Ford Motor Company, most recently as founder and CEO of GoRide Health, a mobility startup within Ford. Website: Minyang “MJ” Jiang LinkedIn Profile: Minyang “MJ” Jiang What you will learn Using AI to overcome human constraints Redefining productivity through augmentation Nurturing curiosity in the modern workplace Building trust in an AI-first strategy The role of imagination in future planning Why leaders must engage with AI hands-on Separating the product from the person Episode Resources Transcript Ross Dawson: MJ, it’s a delight to have you on the show. Minyang “MJ” Jiang: I’m so excited to be here, Ross. Ross: So I gather that you believe that we can be more than we are. So how do we do that? MJ: Absolutely. I’m an eternal optimist, so I’m always—I’m a big believer in technology’s ability to help enable humans to be more if we’re thoughtful with it. Ross: So where do we start? MJ: Well, we can start maybe by thinking through some of the use cases where I think AI, and in particular generative AI, can help humans, right? I come from a business alternative financing perspective, but my background is in business, and I think there’s been a lot of sort of fear and maybe trepidation around what it’s going to do in this space. But my personal understanding is, I don’t know of a single business that is not constrained, right? Employees always have too much to do. There are things they don’t like to do. There are capacity issues. 
So for me, already, there’s three very clear use cases where I think AI and generative AI can help humans augment what they do. So number one is, if you have any capacity constraints, that is a great place to be deploying AI because already we’re not delivering a good experience. And so any ability for you to free up constraints, whether it’s volume or being able to reach more people—especially if you’re already resource-constrained (I argue every business is resource-constrained)—that’s a great use case, right? The second thing is working on a use case where you are already really good at something, and you’re repeating this task over and over, so there’s no originality. You’re not really learning from it anymore, but you’re expected to do it because it’s an expected part of your work, and it delivers value, but it’s not something that you, as a human, you’re learning or gaining from it. So if you can use AI to free up that part, then I think it’s wonderful, right? So that you can actually then free up your bandwidth to do more interesting things and to actually problem-solve and deploy critical thinking. And then I think the third case is just, there are types of work out there that are just incredibly monotonous and also require you to spend a lot of time thinking through things that are of little value, but again, need to be done, right? So that’s also a great place where you can displace some of the drudgery and the monotony associated with certain tasks. So those are three things already that I’m using in my professional life, and I would encourage others to use in order to augment what they do. Ross: So that’s fantastic. I think the focus on constraints is particularly important because people don’t actually recognize it, but we’ve got constraints on all sides, and there’s so much which we can free up. MJ: Yes, I mean, I think everybody knows, right? 
You’re constrained in terms of energy, you’re constrained in terms of time and budget and bandwidth, and we’re constrained all the time. So using AI in a way that helps you free up your own constraints so that it allows you to ask bigger and better questions—it doesn’t displace curiosity. And I think a curious mind is one of the best assets that humans have. So being able to explore bigger things, and think about new problems and more complicated problems. And I see that at work all the time, where people are then creating new use cases, right? And it just sort of compounds. I think there’s new kinds of growth and opportunities that come with that, as well as freeing up constraints. Ross: I think that’s critically important. Everyone says when you go to a motivational keynote, they say, “Curiosity, be curious,” and so on. But I think we, in a way, we’ve been sort of shunned. The way work works is:

Jun 4, 202534 min

S3 Ep 5 Sam Arbesman on the magic of code, tools for thought, interdisciplinary ideas, and latent spaces (AC Ep5)

Code, ultimately, is this weird material that’s somewhere between the physical and the informational… it connects to all these different domains—science, the humanities, social sciences—really every aspect of our lives. – Sam Arbesman About Sam Arbesman Sam Arbesman is Scientist in Residence at leading venture capital firm Lux Capital. He works at the boundaries of areas such as open science, tools for thought, managing complexity, network science, artificial intelligence, and infusing computation into everything. His writing has appeared in The New York Times, The Wall Street Journal, and The Atlantic. He is the award-winning author of books including Overcomplicated, The Half-Life of Facts, and The Magic of Code, which will be released shortly. Website: Sam Arbesman LinkedIn Profile: Sam Arbesman Books The Magic of Code The Half-Life of Facts Overcomplicated What you will learn Rekindling wonder through computing Code as a universal solvent of ideas Tools for thought and cognitive augmentation The human side of programming and AI Connecting art, science, and technology Uncovering latent knowledge with AI Choosing technologies that enrich humanity Episode Resources Books The Magic of Code As We May Think Undiscovered Public Knowledge People Richard Powers Larry Lessig Vannevar Bush Don Swanson Steve Jobs Jonathan Haidt Concepts and Technical Terms universal solvent latent spaces semantic networks AI (Artificial Intelligence) hypertext associative thinking network science big tech machine-readable law Transcript Ross Dawson: Sam, it is wonderful to have you on the show. Sam Arbesman: Thank you so much. Great to be talking with you. Ross: So you have a book coming out. When’s it coming out? Sam: It comes out June 10. 
The name of the book is The Magic of Code, and it’s about, basically, the wonders and weirdness of computing—kind of viewing computation and code and all the things around computers less as a branch of engineering and more as almost this humanistic liberal art. When you think of it that way, it should not just talk about computer science, but should also connect to language and philosophy and biology and how we think, and all these different areas. Ross: Yeah, and I think these things are often not seen in the biggest picture. Not just, all right, this is something that runs my phone or whatever, but it is an intrinsic part of thought, of the universe, of everything. So I think you—indeed, code, in its many manifestations—does have magic, as you have revealed. And one of the things I love very much—just the title Magic—but also you talk about wonder. I think when I look at the change, I see that humans are so quick to take things for granted, and that takes away from the wonder of what it is we have created. I mean, what do you see in that? How do we nurture that wonder, which nurtures us in turn? Sam: Yeah. I mean, I completely agree that we are—I guess the positive way to think about it is—we adapt really quickly. But as a result, we kind of forget that there are these aspects of wonder and delight. When I think about how we talk about technology more broadly, or certain aspects of computing, computation, it feels like we kind of have this sort of broken conversation there, where we focus on it as an adversary, or we are worried about these technologies, or sometimes we’re just plain ignorant about them. But when I think about my own experiences with computing growing up, it wasn’t just that. It was also—it was full of wonder and delight. I had, like, my early experiences—like my family’s first computer was the Commodore VIC-20—and kind of seeing that. 
And then there was my first experience using a computer mouse with the early Mac and some of the early Macintoshes or earlier ones. And then my first programming experiences, and thinking about fractals and screensavers and SimCity and all these things. These things were just really, really delightful and interesting. And in thinking about them, they drew together all these different domains. And my goal is to kind of try to rekindle that wonder. I actually am reminded—I don’t think I mentioned this story in the book—but I’m reminded of a story related to my grandfather. So my grandfather, he lived to the age of 99. He was a lifelong fan of science fiction, and he read—he basically read science fiction since, like, the modern dawn of the genre. Basically, I think he read Dune when it was serialized in a magazine. And I remember when the iPhone first came out, I went with my grandfather and my father. We went to the Apple Store, and we went to check it out. We were playing with the phone. And my grandfather at one point says, “This is it. Like, this is the object I’ve been reading about all these years in science fiction.” And we’ve gone from that moment to basically complaining about battery life or camera resolution. And it’s f

May 28, 202535 min

S3 Ep 4 Bruce Randall on energy healing and AI, embedding AI in humans, and the implications of brain-computer interfaces (AC Ep4)

I feel that the frequency I have, and the frequency AI has, we’re going to be able to communicate based on frequency. But if we can understand what each is saying, that’s really where the magic happens. – Bruce Randall About Bruce Randall Bruce Randall describes himself as a tech visionary and Reiki Master who explores the intersection of technology, human consciousness, and the future of work. He has over 25 years of technology industry experience and is a longtime practitioner of energy healing and meditation. Website: Bruce Randall LinkedIn Profile: Bruce Randall What you will learn Exploring brain-computer interfaces and human potential Connecting reiki and AI through frequency and energy Understanding the limits and possibilities of neural implants Balancing intuition, emotion, and algorithmic decision-making Using meditation to sharpen awareness in a tech-driven world Navigating trust and critical thinking in the age of AI Imagining a future where technology and consciousness merge Episode Resources Companies & Organizations Neuralink Synchron MIT Technologies & Technical Terms Brain-computer interfaces AI (Artificial Intelligence) Agentic AI Neural implants Hallucinations (in AI context) Algorithmic trading Embedded devices Practices & Concepts Reiki Meditation Sentience Consciousness Critical thinking Transcript Ross Dawson: Bruce, it’s a delight to have you on the show. Bruce Randall: Well, Ross, thank you. I’m pleased to be on the show with you. Ross: So you have some interesting perspectives on, I suppose, humanity and technology. And just like to, in brief, hear how you got to your current perspectives. Bruce: Sure. Well, when I saw Neuralink put a chip in Nolan’s head and he could work the computer mouse with his thoughts, and he said, sometimes it goes where it moves on its own, but it always goes where I want it to go. 
So that, to me, was fascinating: how with the chip, we can do things like sentience and telecommunications and so forth that most humans can’t do. But with the chip, all of a sudden, all these doors are open now, and we’re still human. That’s fascinating to me. Ross: It certainly extends our capabilities. It’s been done in smaller ways in the past and now in far bigger ways. So you do have a deep technology background, but also some other aspects to your worldview. Bruce: I do. I’ve sold cloud, I’ve been educated in AI at MIT, and I built my first AI application. So I understand it from, I believe, all sides, because I’ve actually done the work instead of read the books. And for me, this is fascinating because AI is moving faster than anything that we’ve had in recent memory, and it directly affects every person, because we’re working with it, or we can incorporate it in our body to make us better at what we do. And those possibilities are absolutely fascinating. Ross: So you describe yourself as a Reiki Master. So what is Reiki and how does that work? What’s its role been in your life? Bruce: Well, being a Reiki Master means you can connect with the universal energy that’s all around us, and I have a bigger pipe to put it through me, so I can direct it to people or things. And I’ve had a lot of good experiences where I’ve helped people in many different ways. The Reiki and the meditation came after that, and that brought me inside to find who I truly am and to connect with everything that has a vibration that I can connect with. That perspective, with the AI and where that’s going—AI is hardware, but it produces software-type abilities, and so does the energy work that I do. They’re similar, but they’re very different. And I believe that everything is a vibration. We vibrate and so forth. So that vibration should be able to come together at some point. We should be able to communicate with it at some level. 
Ross: So if we look at the current state of research, scientific research into Reiki, there seems to be some potential low-level and small-population results. So it doesn’t seem to be a big tick. It doesn’t—there’s—there does appear to be something, but I think it’s fair to say there’s widespread skepticism in mainstream science about Reiki. So what’s your, I suppose, justification for this as a useful perspectival tool? Bruce: Well, I mean, I’ve had an intervention where I actually saved a life, which I won’t go into here. But my body moved, and I did that, and I said, I don’t know why I’m doing this, but I went with the body movement and ended up saving a life. To me, that proved to me, beyond a shadow of a doubt, that there’s something there other than just what humans can see and feel. And that convinced me. Now, it’s hard to convince anybody else. It’s experiential, so I really can’t defend it, other than saying that I have enough experiences where I

May 21, 202526 min

Carl Wocke on cloning human expertise, the ethics of digital twins, AI employment agencies, and communities of AI experts (AC Ep3)

We’re not trying to replace expertise—we’re trying to amplify and scale it. AI wants to create the expertise; we want to make yours omnipresent. – Carl Wocke About Carl Wocke Carl Wocke is the Managing Director of Merlynn Intelligence Technologies, which focuses on human to machine knowledge transmission using machine learning and AI. Carl consults with leading organizations globally in areas spanning risk management, banking, insurance, cyber crime and intelligent robotic process automation. Website: Emory Business Merlynn-AI LinkedIn Profile: Carl Wocke What you will learn Cloning human expertise through AI How digital twins scale decision-making Using simulations to extract tacit knowledge Redefining employee value with digital models Ethical dilemmas in ownership and bias Why collaboration beats data sharing Keeping humans relevant in an AI-first world Episode Resources Companies / Groups Merlynn Emory Tech and Tools Tom (Tacit Object Modeler) LLMs Concepts / Technical Terms Digital twin Tacit knowledge Human-in-the-loop Knowledge engineering Claims adjudication Financial crime Risk management Ensemble approach Federated data Agentic AI Transcript Ross Dawson: Carl, it’s wonderful to have you on the show. Carl Wocke: Thanks, Ross. Ross: So tell me about what Merlynn, your company, does. It’s very interesting, so I’d like to learn more. Carl: Yeah. So I think the most important thing when understanding what Merlynn is about is that we’re different from traditional AI in that we’re sort of obsessed with the cloning of human expertise. So where your traditional AI looks at data sources generating data, we are passionate about cloning our human experts. Ross: So part of the process, I gather, is to take human expertise and to embed that in models. So can you tell me a bit about that process? How does that happen? What is that process of—what I think in the past has been called knowledge engineering? Carl: Yeah. So we’ve built a series of technologies. 
The sort of primary technology is a technology called Tom. And Tom stands for Tacit Object Modeler. And Tom is a piece of AI that has been designed to simulate a decision environment. You are placed as an expert into the simulation environment, and through an interaction or discussion with Tom, Tom works out what the heuristic is, or what that subconscious judgment rule is that you use as an expert. And the way the technology works is you describe your decision environment to Tom. Tom then builds a simulator. It populates the simulator with data which is derived from the AI engine, and based on the way you respond, the data evolves. So what’s happening in the background is the AI engine is predicting your decision, and based on your response, it will evolve the sampling landscape or start to close up on the model. So it’s an interaction with a piece of AI. Ross: So you’re putting somebody in a simulation and seeing how they behave, and using their behaviors in that simulation to extract, I suppose, implicit models of how it is they think and make decisions. Carl: Absolutely so absolutely. And I think there’s sort of two main things to consider. The one is Tom will model a discrete decision. And a discrete decision is, what would Ross do when presented with the following environment? And that discrete decision can be modeled within an hour, typically. And the second thing is that there’s no data needed in the process. Validation is done through historical data, if you like. But yeah, it’s an exclusive sort of discussion between you and the AI, if that makes sense. Ross: So when people essentially get themselves modeled through these frameworks, what is their response when they see how the model that’s being created from their thinking responds to decision situations? Do they say, “These are the decisions I would have made?” I suppose there’s a feedback loop there in any case. But how do people feel about what’s been created? Carl: So there is a feedback loop. 
Through the process, you’re able to validate and test your digital twin. We refer to the models that are created as your digital twin. You can validate the model through the process. But what also happens—and this is sort of in the early days—is the expert might feel threatened. “You don’t need me anymore. You’ve got my decision.” But nothing could be further from the truth, because that digital twin that you’ve modeled is sort of tied to you. It evolves. Your decisions as an expert evolve over time. In certain industries, that happens quicker. But that digital twin actually amplifies your value to the organization. Because essentially what we’re doing with a digital twin is we’re making you omnipresent in an organization—and outside of the organization—in terms of your decisions. So the first reaction is, “I’m scared, am I going to have a job?” But after that, as

May 14, 202537 min

S3 Ep 2 Nisha Talagala on the four Cs of AI literacy, vibe coding, critical thinking about AI, and teaching AI fundamentals (AC Ep2)

“The floor is rising really fast. So if you’re not ready to raise the ceiling, you’re going to have a problem.” – Nisha Talagala About Nisha Talagala Nisha Talagala is the CEO and Co-Founder of AIClub, which drives AI literacy for people of all ages. Previously, she co-founded ParallelM where she shaped the field of MLOps, with other roles including Lead Architect at Fusion-io and CTO at Gear6. She is the co-author of Fundamentals of Artificial Intelligence – the first AI textbook for Middle School and High School students. Website: Nisha Talagala LinkedIn Profile: Nisha Talagala What you will learn Understanding the four C’s of AI literacy How AI moved from winter to wildfire Teaching kids to build their own AI from scratch Why professionals must raise their ceiling The role of curiosity in using generative tools Navigating context and motivation behind AI models Embracing creativity as a key to future readiness Episode Resources People Andrej Karpathy Organizations & Companies AIClub AIClubPro Technical Terms AI Artificial General Intelligence ChatGPT GPT-1 GPT-2 GPT Neural network Loss function Foundation models AI life cycle Crowdsourced data Training data Iteration Chatbot Dark patterns Transcript Ross Dawson: Nisha, it’s a delight to have you on the show. Nisha Talagala: Thank you. Happy to be here. Thanks for having me. Ross: So you’ve been delving deep, deep, deep into AI for a very long time now, and I would love to hear, just to start, your reflections on where AI is today, and particularly in relation to humans. Nisha: Okay, absolutely. So I think that AI has been around for a very long time. And there was a long period actually called the AI winter, during which very few people were working on AI—only the true believers, really. And then a few things kind of happened. One of them was that the power of computers became so much greater, which was really needed for AI. 
And then the data also, with the internet and our ability to store and track all of this stuff, the data also became really plentiful. So when the compute met the data, and then people started developing software and sharing it, that created kind of like a perfect storm, if you will. That enabled people to really see that AI could do things. Previously, AI experiments were very small, and now suddenly companies like Google could run really big AI experiments. And often what happened is that they saw that it worked before they truly knew why it worked. So this entire field of AI kind of evolved, which is, “Hey, it works. We don’t actually know why. Let’s try it again and see if it works some more,” kind of thing. So that has been going on now for about a decade. And so, AI has been all around you for quite a long time. And then came ChatGPT. And not everyone knows, but ChatGPT is actually not the first version of GPT. GPT-1 and GPT-2 were pretty good. They were just very hard to use for someone who wasn’t very technical. And so, for those who are technical—one thing is, you had to—actually, it was a little bit like Jeopardy. You had to ask your question in the form of an incomplete sentence, which is kind of fun in the Jeopardy sort of way. But normally, we don’t talk to people with incomplete sentences hoping that they’ll finish that sentence and give us something we want to know. So ChatGPT just made it so much easier to use, and then suddenly, I think it just kind of burst on the mainstream. And that, again, fed on itself: more data, more compute, more excitement—going to the point that the last few years have really seen a level of advancement that is truly unprecedented, even in the past history of AI, which is almost already pretty unprecedented. So where is it going? I mean, I think that the level—so it’s kind of like—so people talk a lot about AGI and generalized intelligence and surpassing humans and stuff like that. 
I think that’s a difficult question, and I’m not sure if we’ll ever know whether it’s been reached. Or I don’t know that we would agree on what the definition is there, to therefore agree whether it’s been reached or not reached. There are other milestones, though. For example, standardized testing has already been taken over by AI. AIs outperform on just about every level of standardized test, whether it’s a college test or a professional test, like the US medical licensing exam. It’s already outperforming most US doctors in those fields. And it’s scoring well on tests of knowledge as well. It’s also making headway in areas that were traditionally considered challenging—areas like mathematics and reasoning. So I think we’re at a place where, what I can tell you is, the AIs that I see right now in the public sphere rival the ability of PhD students I’ve worked with. So it’s serious. And I think it

May 7, 202533 min

HAI Launch episode

“This is about how we need to grow and develop our individual cognition as a complement to AI.” – Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Ross Dawson Advanced Human Technologies LinkedIn Profile: Ross Dawson Books Thriving on Overload Living Networks 20th Anniversary Edition Living Networks Implementing Enterprise 2.0 Developing Knowledge-Based Client Relationships: Leadership in Professional Services Developing Knowledge-Based Client Relationships, The Future of Professional Services Developing Knowledge-Based Client Relationships What you will learn Tracing the evolution of the podcast name and vision How ChatGPT shifted the AI conversation overnight Why Humans + AI is more than just a rebrand The mission to amplify human cognition through AI Exploring collective intelligence and team dynamics Rethinking work, strategy, and value creation with AI Envisioning a co-evolved future for humans and machines Episode Resources Books Thriving on Overload Technologies & Technical Terms AI agents Artificial intelligence Intelligence amplification Cognitive evolution Collective intelligence Strategic thinking Strategic decision-making Value creation Organizational structures Transhumanism AI governance Existential risk Critical thinking Attention Awareness Skill development Transcript Ross Dawson: This is the launch episode of the Humans Plus AI podcast, formerly the Amplifying Cognition podcast, and before that, the Thriving on Overload podcast. 
So in this brief episode, I will cover a bit of the backstory and a bit of where we got to where we are today, and calling this Humans Plus AI now—why I think it is so important, what it is we are going to cover, and framing a little bit this idea of Humans Plus AI. So the backstory is that the podcast started off as Thriving on Overload. It was the interviews I did for my book Thriving on Overload. The book came out in September 2022. By then, I was still continuing with the Thriving on Overload podcast, continuing to explore this idea of how we can amplify our thinking in a world of unlimited information. Essentially, our brains are finite, but in a world of infinite information, we need to learn the skills and the capabilities to be as effective as possible. And COVID—we’ll come back to that—but that is a fundamental issue today, which is the reason I wrote the book. Just three months after the book came out was what I call the ChatGPT moment, when there’s crystallizing progress in AI where I think just about every single researcher and person who’d been in the AI space was surprised or even amazed by the leap in capabilities that we achieved with that model—and of course, so much more since then. So I quickly wanted to consolidate my thinking, and immediately came on this phrase Humans Plus AI, which reflects a lot of my work over the years. I have been literally writing about AI, the role of AI agents, and particularly AI and work—for, well, in some ways, a couple of decades. But this was a moment where I felt I had to bring all of my work together. So fairly soon, I decided I needed to rebrand the podcast to be not just Thriving on Overload. But I still was tied to that theme. So I decided, let’s make this Amplifying Cognition, trying to get that middle ground with integrating the ideas of Humans Plus AI. 
How could humans and AI together be as wonderful as possible, but also this idea of Thriving on Overload—this individual cognition—how do we amplify our possibilities? There was a long list of different names that I was playing with, and one of the other front runners was, in fact, Amplifying Humanity. And in a way, that’s really what my mission is all about. And what this podcast—under its various names—is about: how do we amplify who we are, our capabilities, our potential? Of course, the name Amplifying Humanity sounds a bit diffuse. It’s not very clear. So it wasn’t the right name; or at least, there was certainly no right title at the time. But now, when I take this and say, well, we’re going to call this Humans Plus AI, in a way, I think that the Thriving on Overload piece of that is still as relevant—or even more relevant. That is part of the picture as we bring humans and AI together. This is about how we need to grow and develop our individual cognition as a complement to AI. So in fact, when I talk Humans Plus AI, Thriving on Overload and Amplifying Cognition are really baked into that idea. So the broad frame of Humans Plus AI is simply: we have humans. We are inventors. We have created extraordinary technologies for

Apr 30, 2025 · 13 min

Kunal Gupta on the impact of AI on everything and its potential for overcoming barriers, health, learning, and far more (AC Ep86)

“Maybe the goal isn’t to eliminate the task or the human—but to reduce the frustration, the cognitive load, the overhead. That’s where AI shines.” – Kunal Gupta About Kunal Gupta Kunal Gupta is an entrepreneur, investor, and author. He founded and scaled global digital advertising AI company Nova as Chief Everything Officer for 15 years, with teams and clients across 30+ countries. He is author of four books, most recently 2034: How AI Changed Humanity Forever. Website: Kunal Gupta LinkedIn Profile: Kunal Gupta Book: 2034: How AI Changed Humanity Forever What you will learn Hosting secret AI dinners to spark human insight Using personal data to take control of health Why cognitive load is the real bottleneck When AI becomes a verb, not just a tool Reducing frustration through everyday AI The widening gap between AI capabilities and adoption Empowering curiosity in an AI-shaped world Episode Resources Books 2034: How AI Changed Humanity Forever Technical Terms & Concepts AI AI literacy Agentic AI Cognitive load LLMs (Large Language Models) Reference ranges Automation Browser agents Voice agents Data normalization Longevity-based testing Health data Cloud computing Social media adoption Generative AI Transcript Ross Dawson: Kunal, it is awesome to have you on the show. Kunal Gupta: Thanks, Ross. Nice to see you. Ross: So you came out with a book called 2034: How AI Changed Humanity Forever. So love to hear the backstory. Yes, that’s the book. So what’s the backstory? How did this book come about? Kunal: Yeah, I’ve written a few books, but this is definitely the most fun to write and to read and reread, and at some points, to rewrite. So back in November 2022, ChatGPT launches. There’s this view—okay, this is going to change our world, not sure how.
So in the ensuing months, I had a number of conversations with friends and colleagues asking, “Hey, like, how does this change everything?” I asked people very open-ended questions, and the responses were all over the place. To me, what I realized was we actually just don’t know, and that’s the best place to be—when we don’t know but are curious. So I started to host dinners, six to ten people at a time in my apartment. I was in Portugal at the time, and London as well. Over the course of 2023, I hosted over 250 people over a couple dozen dinners. The setup was really unique in that nobody knew who else was coming. Nobody was allowed to talk about work, nobody was allowed to share what they did, and no phones were allowed either. So that meant really everybody was present. They didn’t need to be anybody, they didn’t need to be anywhere, and they could really open up. All of the conversations were recorded. All the questions were very open-ended along the lines of—really the subtitle of the book—like, how does AI change humanity? And we got into all sorts of different places. So over the course of the dinners in the year, we recorded everything, had it transcribed, and working with an editor, we manually went through the transcripts and identified about 100 individual ideas that came from a human. And it’s usually some idea, inspiration, or some fear or insecurity. And we turned that into a book which has 100 different ideas, ten years into the future, of how AI might shape how we live, how we work, how we date, how we eat, how we walk, how we learn, how we earn—and absolutely everything about humanity. Ross: So, I mean, there’s obviously far more in the book than we can cover in a short podcast, but what are some of the high-level perspectives? It’s been a bit of time since it’s come out, and people have had a chance to read it and give feedback, and you’ve reflected further on it. So what is some of your emergent thinking since the book came out?
Kunal: Yeah, I probably hear from a reader or two daily now, sharing lots of feedback. But the most common feedback I hear is that the book has helped change the way they think about AI, and that it’s helped them just think more openly about it and more openly about the possibilities. And that’s where introducing over 100 ideas across different aspects of society and humanity and industries and age groups and demographics is really meant to help open up the mind. I think in the face of AI, a lot of parts of society were closed or resistant to its potential impacts, or even fearful. And the book is really designed to open up the mind and drop some of the fear and really to be curious about what might happen. Ross: So taking this—taking sort of my perennial “humans plus AI” frame—what are some of the things that come to mind for you in terms of the potential of humans plus AI? What springs to mind first? Kunal: Those that say yes and are open and curious about it—I really think it’s an accelerant in so many different parts of life. I’ll give an example

Apr 23, 2025 · 33 min

Lee Rainie on being human in 2035, expert predictions, the impact of AI on cognition and social skills, and insights from generalists (AC Ep85)

“We could become obsolete by our own will—at least a portion of humanity just sort of giving up… But humans want to be valuable, want to be seen, want to be understood, want to be heard, want to think that their life matters. And this raises all sorts of questions about that.” – Lee Rainie About Lee Rainie Lee Rainie is Director of Imagining the Digital Future Center at Elon University. He joined in 2023 after 24 years of directing Pew Research Center’s Pew Internet Project, where his team produced more than 850 reports about the impact of major technology revolutions. Lee is co-author of five books about the future of the internet including “Networked: The New Social Operating System”. Website: Lee Rainie Being Human in 2035 University Profile: Lee Rainie LinkedIn Profile: Lee Rainie What you will learn Imagining the digital future through expert insights Reflecting on past predictions about technology and society Understanding the human traits most at risk from AI Exploring the impact of AI on jobs and identity Identifying creativity and curiosity as human advantages Confronting the danger of overreliance on machines Redefining leadership in a tech-driven world Episode Resources People Marshall McLuhan Isaiah Berlin Erik Brynjolfsson Paul Saffo Vint Cerf Institutions & Organizations Imagining the Digital Future Center Elon University Pew Research Center Reports & Projects Being Human in 2035 AI, Robotics and the Future of Jobs Concepts & Technical Terms Artificial General Intelligence Superintelligence Metacognition Cognitive revolution Genomics revolution Nanotechnology revolution Information revolutions Large language models Digital twins Critical thinking Soft skills Transcript Ross Dawson: Lee, it’s a delight to have you on the show. Lee Rainie: Thanks so much, Ross. I’m looking forward to it. Ross: So you are director of the Imagining the Digital Future Center at Elon University. So that sounds like a wonderful initiative.
Can you please tell us about it? Lee: It is a wonderful initiative, and I feel very fortunate to be here studying this subject at this moment. It’s a center at Elon University in North Carolina that grew out of a partnership that I had with Elon in my previous job, when I worked for the Pew Research Center. There were some interesting, enthusiastic, ambitious professors here who were interested in the digital future, and they basically rolled out the red carpet to me and offered a lot of labor, a lot of brainpower, and a lot of assistance in interviewing experts about the future. One of the things that happened when I went to Pew in the first place, just at the turn of the millennium, was we were measuring adoption of technology—first the internet, then home broadband, and then a bunch of other things. But whenever I went out to speak about our findings, the first question from the audience was, “Well, that’s all well and good. You’re looking at the here and now, and fine, dandy, but what’s the next big thing?” Because that’s always the urgent question when you’re thinking about digital technologies. So I began to work with the professors at Elon to see if experts really had a decent track record in looking at the future. The first project we did was looking at predictions about the rise of the internet and what it would do, both in social, political, and economic terms. We found 4,400 predictions that were made between 1990 and 1995 about the internet. And experts were largely on the mark, partly because it wasn’t really so much future questions that they were looking at. They just knew what was coming out of the labs. They knew what they were working on. They knew what competitors were working on. And so it wasn’t hard to really anticipate the future if you talked to the right people. So we built a database of experts, and it’s a convenience database. There’s no—this is not a representative sample of all expertise about digital technology.
It’s pioneers of the technology, it’s builders of the technology, it’s analysts. A lot of academics are in our database. And we just started asking, in 2004, about things over the horizon. And it was a wonderful methodology, just to give us insight into the things that were around the corner. We’re not pretending that it’s quantitatively, scientifically accurate. We marry the methodologies of quantitative and qualitative work. And so it’s basically smart people riffing on the future. Ross: So I wanted to get to that. I actually tend, whenever I use the word expert, to use quotation marks, because who’s an expert? I love what Marshall McLuhan said: in effect, the expert is the person who stays put, whereas the explorer is the one who continues to explore. But having said that, of course, yeah, some people know more about particular topics, and if we

Apr 16, 2025 · 40 min

Kieran Gilmurray on agentic AI, software labor, restructuring roles, and AI native intelligence businesses (AC Ep84)

“Let technology do the bits that technology is really good at. Offload to it. Then over-index and over-amplify the human skills we should have developed over the last 10, 15, or 20 years.” – Kieran Gilmurray About Kieran Gilmurray Kieran Gilmurray is CEO of Kieran Gilmurray and Company and Chief AI Innovator of Technology Transformation Group. He works as a keynote speaker and fractional CTO, delivering transformation programs for global businesses. He is author of three books, most recently Agentic AI. He has been named as a top thought leader on generative AI, agentic AI, and many other domains. Website: Kieran Gilmurray X Profile: Kieran Gilmurray LinkedIn Profile: Kieran Gilmurray Book: Free chapters from Agentic AI by Kieran Gilmurray: Chapter 1: The Rise of Self-Driving AI; Chapter 2: The Third Wave of AI; Chapter 3: Agentic AI: Mapping the Road to Autonomy; Chapter 4: Effective AI Agents What you will learn Understanding the leap from generative to agentic AI Redefining work with autonomous digital labor The disappearing need for traditional junior roles Augmenting human cognition, not replacing it Building emotionally intelligent, tech-savvy teams Rethinking leadership in AI-powered organizations Designing adaptive, intelligent businesses for the future Episode Resources People John Hagel Peter Senge Ethan Mollick Technical & Industry Terms Agentic AI Generative AI Artificial intelligence Digital labor Robotic process automation (RPA) Large language models (LLMs) Autonomous systems Cognitive offload Human-in-the-loop Cognitive augmentation Digital transformation Emotional intelligence Recommendation engine AI-native Exponential technology Intelligent workflows Transcript Ross Dawson: Kieran, it’s fantastic to have you on the show. Kieran Gilmurray: Absolutely delighted, Ross. Brilliant to be here. And thank you so much for the invitation, by the way.
Ross: So agentic AI is hot, hot, hot, and it’s now sort of these new levels of how it is we — these are autonomous or semi-autonomous aspects of AI. So I want to really dig into — you’ve got a new book out on agentic AI, and particularly looking at the future of work. And particularly want to look at work, so amplifying cognition. So I want to start off just by thinking about, first of all, what is different about agentic AI from generative AI, which we’ve had for the last two or three years, in terms of our ability to think better, to perform our work better, to make better decisions? So what is distinctive about this layer of agentic AI? Kieran: I was going to say, Ross, comically, nothing if we don’t actually use it. Because it’s like all the technologies that have come over the last 10–15 years. We’ve had every technology we have ever needed to make work more efficient, more creative, more innovative, and to get teams working together a lot more effectively. But let’s be honest, technology’s dirty little secret is that we as humans very often resist. So I’m hoping that we don’t resist this technology like the others we have resisted in the past, though they’ve all come around to make us work with them. But this one is subtly different. So when you say, look, agentic AI is another artificial intelligence system. The difference in this one — if you take some of the recent, what I describe as digital workforce or digital labor, go back eight years to look at robotic process automation — which was very much about helping people perform what was meant to be end-to-end tasks. So in other words, the robots took the bulky work, the horrible work, the repetitive work, the mundane work and so on — all vital stuff to do, but not where you really want to put your teams, not where you really want to spend your time. And usually, all of that mundaneness sucked creativity out of the room.
You ended up doing it most of the day, got bored, and then never did the innovative, interesting stuff. Agentic is still digital labor sitting on top of large language models. And the difference here is, as described, is that this is meant to be able to act autonomously. In other words, you give it a goal and off it goes with minimal or no human intervention. You can design it as such, or both. And the systems are meant to be more proactive than reactive. They plan, they adapt, they operate in more dynamic environments. They don’t really need human input. You give them a goal, they try and make some of the decisions. And the interesting bit is, there is — or should be — human in the loop in this. A little bit of intervention. But the piece here, unlike RPA — that was RPA 1, I should say, not the later versions because it’s changed — is its ability to adapt and to reshape itself and to relearn with every interaction. Or if you take it at the most basic level — you look at a robot under the sea trying to navigate, to build pipelines. In the past, it would get stuck. A h

Apr 9, 2025

Jennifer Haase on human-AI co-creativity, uncommon ideas, creative synergy, and humans outperforming (AC Ep83)

“We humans often tend to be very restricted—even when we are world champions in a game. And I’m very optimistic that AI will surprise us, with very different ways of solving complex problems—and we can make use of that.” – Jennifer Haase About Jennifer Haase Dr. Jennifer Haase is a researcher at the Weizenbaum Institute, and lecturer at Humboldt University and University of the Arts Berlin. Her work focuses on the intersection of creativity, Artificial Intelligence, and automation, including AI for enhancing creative processes. She was named as one of the 100 most important minds in Berlin science. Website: Jennifer Haase LinkedIn Profile: Jennifer Haase What you will learn Stumbling into creativity through psychology and tech Redefining creativity in the age of AI The rise of co-creation between humans and machines How divergent and reverse thinking fuel innovation Designing AI tools that adapt to human thought Balancing human motivation with machine efficiency Challenging assumptions with AI’s unconventional solutions Episode Resources Websites & Platforms jenniferhaase.com ChatGPT Concepts & Technical Terms Artificial Intelligence (AI) Human-AI Co-Creativity Generative AI Large Language Models (LLMs) ChatGPT GPT-4 GPT-3.5 GPT-4.5 Business Informatics Psychology Creativity Divergent Thinking Convergent Thinking Mental Flexibility Iterative Process Everyday Creativity Alternative Uses Test Creativity Measures Creative Performance Transcript Ross Dawson: Jennifer, it’s a delight to have you on the show. Jennifer Haase: Thanks for inviting me. Ross: So you are diving deep, deep, deep into AI and human co-creativity. So just to hear—just back a little bit—sort of how you’ve embarked on this journey. I mean, love to—we can fill in more about what you’re doing now. But how did you come to be on this journey? Jennifer: I would say overall, it was me stumbling into tech more and more and more. So I started with creativity.
My background is in psychology, and I learned about the concept of creativity in my Bachelor studies, and I got so confused, because what I was taught was nothing like what I thought creativity was—or how it felt to me. It took me years to understand that there are a bunch of different theories, and it was just one that we were taught. But that was the spark of the curiosity for me to try to understand this concept of creativity. And I did it for years. Then, by pure luck, I started a PhD in Business Informatics, which is somewhat technical. The lens of how I looked at creativity shifted from the psychological perspective more into the technical realm, and I looked at business processes and how they are advanced by general technology—basic software, basically. Then I morphed—also, by sheer luck—I morphed into computer science from a research perspective. And that coincided with ChatGPT coming around, and this huge LLM boom happened two, three years ago. And since then, I’m deeply in there. I just fell, fell in this rabbit hole. Ross: Yeah, well, it’s one of the most marvelous things. So the very first use case for most people, when they first use ChatGPT, is: write a poem in the style of whatever, or essentially creative tasks. And pretty decently does those to start off—until you sort of started to see the limitations at the time. Jennifer: Yeah, and I think it did so much. It’s so many different perspectives. I think we—as I said, I studied creativity for quite a while—but it was never as big of a deal, let’s say. It was just one concept of many. But since AI came around, I think it really threatened, to some part, what we understood about creativity, because it was always thought of as this pinnacle of humanness—right next to ethics. And I think intelligence had its bumps two or three decades ago, but for creativity, it was rather new. So the debate started of what it really means to be creative. 
I think a lot of people also try to make it even bigger than it is. But I think it is as simple as—a lot about creativity is, for example, in terms of poets—poetry is language understanding, right? And so LLMs are really good at it. And it’s just the case. It’s fine. I think we can still live happy lives as humans, although technology takes a lot over. Ross: Yes. So humans are creative in all sorts of dimensions. AI has complementary—let’s say, also different—capabilities in creativity. And in some of your research, you have pointed to different levels of how AI is supporting us in various guises—through being a tool and assistant, through to what you described as the co-creation. So what does that look like? What are some of the manifestations of human-AI co-creativity, which implies peers with different, complementary capabilities? Jennifer: Yeah, I think the easiest way to look at it is if you imagine working creatively with another person who is really competent—but the person is a technical

Apr 2, 2025

Pat Pataranutaporn on human flourishing with AI, augmenting reasoning, enhancing motivation, and benchmarking human-AI interaction (AC Ep82)

“We should not make technology so that we can be stupid. We should make technology so we can be even smarter… not just make the machine more intelligent, but enhance the overall intelligence—especially human intelligence.” –Pat Pataranutaporn About Pat Pataranutaporn Pat Pataranutaporn is Co-Director of MIT Media Lab’s new Advancing Humans with AI (AHA) research program, alongside Pattie Maes. In addition to extensive academic publications, his research has been featured in Scientific American, MIT Tech Review, Washington Post, Wall Street Journal, and other leading publications. His work has been named in TIME’s “Best Inventions” lists and Fast Company’s “World Changing Ideas.” Websites: MIT Media Lab Advancing Humans with AI (AHA) LinkedIn Profile: Pat Pataranutaporn What you will learn Reimagining AI as a tool for human flourishing Exploring the Future You project and long-term thinking Boosting motivation through personalized AI learning Enhancing critical thinking with question-based AI prompts Designing agents that collaborate, not dominate Preventing collective intelligence from becoming uniform Launching AHA to measure AI’s real impact on people Episode Resources People Hal Hershfield Pattie Maes Elon Musk Organizations & Institutions MIT Media Lab KBTG ACM SIGCHI Center for Collective Intelligence Technical Terms & Concepts Human flourishing Human-AI interaction Digital twin Augmented reasoning Multi-agent systems Collective intelligence AI bias Socratic questioning Cognitive load Human general intelligence (HGI) Artificial general intelligence (AGI) Transcript Ross Dawson: Pat, it is wonderful to have you on the show. Pat Pataranutaporn: Thank you so much. It’s awesome to be here. Thanks for having me. Ross: There’s so much to dive into, but as a starting point: you focus on human flourishing with AI. So what does that mean? Paint the big picture of AI and how it can help us to flourish as who we are and our humanity. Pat: Yeah, that’s a great question.
So I’m a researcher at MIT Media Lab. I’ve been working on human-AI interaction before it was cool—before ChatGPT took off, right? So we have been asking this question for a long time: when we focus on artificial intelligence, what does it mean for people? What does it mean for humanity? I think today, a lot of conversation is about how we can make models better, how we can make technology smarter and smarter. But does that mean that we can be stupid? Does it mean that we can just let the machine be the smart one and let it take over? That is not the vision that we have at MIT. We believe that technology should make humans better. So I think the idea of human flourishing is an umbrella term that we use to describe different areas where we think AI could enhance the human experience. For me in particular, I focus on three areas: how AI can enhance human wisdom, enhancing wonder, and well-being. So: 3 W’s—wisdom, wonder, and well-being. We work on many projects to look into these areas. For example, how AI could allow a person to talk to their future self, so that they can think in the longer term, to see that future more vividly. That’s about enhancing wonder and wisdom. We think a lot about how AI can help people think more critically and analyze information that they encounter on a daily basis in a more comprehensive way. And for well-being, we have many projects that look at how AI can improve human mental health, positive thinking, and things like that. But at the end, we also focus on AI that doesn’t lead to human flourishing, to balance it out. We study in what contexts human-AI interaction leads to negative outcomes—like people becoming lonelier or experiencing negative outcomes such as false memories, misinformation, and things like that. As scientists, we’re not overly optimistic or pessimistic. We’re trying to understand what’s going on and how we can design a better future for everyone. That’s what we’re trying to focus on. Yeah? Ross: Fabulous.
And as you say, there are many, many different projects and domains of research which you’re delving into. So I’d like to start to dive into some of those. One that you mentioned was the Future You project. So I’d love to hear about what that is, how you created it, and what the impact was on people being able to interact with their future selves. Pat: Totally. So, I mean, as I said, right, the idea of human flourishing is really exciting for us. And in order to flourish, like, you cannot think short term. You need to think long term and be able to sort of imagine: how would you get there, right? So as a kid, I was interested in sort of a time machine. Like, I loved dinosaurs. I wanted to go back into the past and also go into the future, see what would happen in the future, like the exciting future we might have. So I really love this idea of, like, having a time machine. And of course, we cannot do a real time machine yet, but we c

Mar 26, 2025

Amplifying Foresight Compilation (AC Ep81)

“We wanted to see what the effect of AI might be on forecasting accuracy… to our surprise, we find that even when the model gives biased or noisy advice, human forecasters still improve—something we didn’t expect.” – Philipp Schoenegger “I kind of call these Gen AI systems a mirror. Pose it a question, play with scenarios, and see what comes out. It’s like an accelerant for thinking—pushing the boundaries of what’s possible.” – Nikolas Badminton “Future thinking is an everyday practice. It’s about becoming more aware of what’s happening around us, sensing signals, and collectively imagining what’s next.” – Sylvia Gallusser “The question of the future isn’t ‘How creative are you?’ but ‘How are you creative?’ Because what we can imagine, we can create—and we have a responsibility to build a better future.” – Jack Uldrich About Philipp Schoenegger, Nikolas Badminton, Sylvia Gallusser, & Jack Uldrich Philipp Schoenegger is a researcher at London School of Economics working at the intersection of judgement, decision-making, and applied artificial intelligence. He is also a professional forecaster, working as a forecasting consultant for the Swift Centre as well as a ‘Pro Forecaster’ for Metaculus, providing probabilistic forecasts and detailed rationales for a variety of major organizations. Nikolas Badminton is the Chief Futurist of the Futurist Think Tank. He is a world-renowned futurist speaker, award-winning author, and executive advisor, with clients including Disney, Google, J.P. Morgan, Microsoft, NASA, and many other leading companies. He is author of Facing Our Futures and host of the Exponential Minds podcast. Sylvia Gallusser is Founder and CEO of Silicon Humanism, a futures thinking and strategic foresight consultancy. Previous roles include a variety of strategic roles at Accenture, Head of Technology at Business France North America, General Manager at French Tech Hub, and Co-founder at big bang factory. 
She is also a frequent keynote speaker and author of speculative fiction. Jack Uldrich is a leading futurist, author, and speaker who helps organizations gain the critical foresight they need to create a successful future. His work is based on the principles of unlearning as a strategy to survive and thrive in an era of unparalleled change. He is the author of 9 books including Business As Unusual. Websites: Nikolas Badminton Sylvia Gallusser Jack Uldrich University Profile: Philipp Schoenegger LinkedIn Profile: Philipp Schoenegger Nikolas Badminton Sylvia Gallusser Jack Uldrich What you will learn How AI-augmented predictions enhance human forecasting The surprising impact of biased AI advice on accuracy Why generative AI acts as a mirror for future thinking The role of signal scanning in spotting emerging trends How creativity and imagination shape the future The evolving nature of community in an AI-driven world Why unlearning is key to adapting in a changing era Episode Resources People Philip Tetlock Jonas Salk Books & Publications Superforecasting Facing Our Futures Technical Terms & Concepts AI-augmented predictions Large language models (LLMs) The Ten Commandments of Forecasting The Ten Commandments of Superforecasting Forecasting accuracy Signal scanning Scenario planning Foresight strategy Generative AI Base rate Bias in AI Cognitive augmentation Transcript Ross Dawson: Now, it’s wonderful to see the work which you’re doing. Speaking of which, recently, you were the lead author of a paper, AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy. So first of all, perhaps just describe the paper at a high level, and then we can dig into some of the specifics. Philipp Schoenegger: Yeah. So the basic idea of this paper is: how can we improve human forecasting?
Human judgmental forecasting is basically the idea that you can query a bunch of very interested and sometimes laypeople about future events and then aggregate their predictions to arrive at surprisingly accurate estimations of future outcomes. This goes back to work on Superforecasting by Philip Tetlock, and there are a lot of different approaches on how one might go about improving human prediction capabilities. There might be some training—it was called The Ten Commandments of Forecasting—on how you can be a better forecaster. Or there might be some conversations where different forecasters talk to each other and exchange their views. And we want to look at how we can—how we could—think about improving human forecasting with AI. I think one of the main strengths of the current generation of large language models is the interactive nature of the back and forth, having a highly competent model that people can interact with and query whenever they want really. They might ask the model, “Please help me on this question. What’s the answer?” They might also just say, “Here’s what I think. P

Mar 19, 2025

AI for Strategy Compilation (AC Ep80)

“AI can make the process of sensing for signals much faster and much more efficient. You can think of it as a supplement to our brain. It can sort through massive amounts of data, track the latest developments, and flash alerts when something important emerges.” – Rita McGrath “What I found surprising in our exercises was how disruptive AI was. At first, I thought they would hate it, but they actually liked it. It made them stop and think because it forced them to break out of their usual patterns and consider ideas they wouldn’t have consciously introduced into the discussion.” – Christian Stadler “AI can accelerate the foresight process. It can help generate diverse perspectives, identify second-degree impacts, and uncover biases we might not notice. Of course, human critical thinking is still essential—we shouldn’t accept AI outputs as absolute truth, but rather use them as a starting point.” – Valentina Contini “One key area where AI excels is handling cognitive complexity. Humans struggle to hold thousands of variables in their heads, but AI can process vast amounts of interconnected data. The challenge is designing interfaces that allow humans to interact with this complexity in an intuitive way.” – Anthea Roberts About Rita McGrath, Christian Stadler, Valentina Contini, & Anthea Roberts Rita McGrath is one of the world’s top experts on strategy and innovation. She is consistently ranked among the top 10 management thinkers globally and has earned the #1 award for strategy by Thinkers 50. She is Professor of Strategy at Columbia Business School, and Founder of the Rita McGrath Group and Valize LLC. Her books include The End of Competitive Advantage and Seeing Around Corners. Christian Stadler is a professor of strategic management at Warwick Business School. He is author of Open Strategy, which was named as a Best Business Book by Financial Times and Strategy + Business and has been translated into 11 languages. 
His work has been featured in Harvard Business Review, the New York Times, the Wall Street Journal, CNN, BBC, and Al Jazeera, among others.

Valentina Contini is an innovation strategist for a global IT services firm, a technofuturist, and a speaker. She has a background in engineering, innovation design, AI-powered foresight, and biohacking. Her previous work includes founding the Innovation Lab at Porsche.

Anthea Roberts is Professor at the School of Regulation and Global Governance at the Australian National University (ANU) and a Visiting Professor at Harvard Law School. She is also the Founder, Director and CEO of Dragonfly Thinking. Her latest book, Six Faces of Globalization, was selected as one of the Best Books of 2021 by the Financial Times and Fortune Magazine. She has won numerous prestigious awards and has been named “the world’s leading international law scholar” by the League of Scholars.

Websites: Rita McGrath | Christian Stadler | Valentina Contini | Anthea Roberts

University Profiles: Rita McGrath | Christian Stadler | Anthea Roberts

LinkedIn Profiles: Rita McGrath | Christian Stadler | Valentina Contini | Anthea Roberts

What you will learn

Bridging human cognition and AI for better decision-making
How AI disrupts traditional boardroom dynamics
Enhancing foresight with AI-driven scenario planning
The role of AI in sense-making and strategic insights
Why AI-generated variety outperforms human creativity
Managing cognitive complexity with AI augmentation
The evolving partnership between humans and AI in strategy

Episode Resources

Companies & Organizations: Wrigley, ChatGPT, OpenAI

Technical Terms & AI-Related: Artificial Intelligence (AI), Large Language Models (LLMs), Generative AI, Cognitive Complexity, Metacognition, Strategic Foresight, Decision-Making Frameworks

Transcript

Ross Dawson: One of the key themes is strategy. How do we do strategy in a world that is accelerating, with all these overlay themes?
There are, as you say, 10x shifts in many dimensions of work. This brings us to human capabilities. Humans have limited, finite cognition, even though we have extraordinary capabilities far transcending anything else. Now, we have AI to augment, support, or complement us. I’d like to dive in deep, but just to start—what is your framing around human capabilities in strategic thinking today, and how they are complemented by AI?

Rita McGrath: Sure. Well, as I mentioned, human brains think in linear terms. We think immediately in terms of getting from here to there to avoid a predator. Back in the day when we were evolving, that worked pretty well. But we don’t do very well with exponential systems because they look small, and they look small, and they look small—until suddenly they don’t. It’s the whole “gradually, then suddenly” idea. What I argue is that you need to supplement what your brain can manage on its own. This is where I think AI comes in. What I’ve set up with companies is a series of what I call

Mar 12, 202532 min