Humans + AI


196 episodes — Page 2 of 4

Collective Intelligence Compilation (AC Ep79)

“Collective intelligence is the ability of a group to solve a wide range of problems, and it’s something that also seems to be a stable collective ability.” – Anita Williams Woolley

“When you get a response from a language model, it’s a bit like a response from a crowd of people. It’s shaped by the collective judgments of countless individuals.” – Jason Burton

“Rather than just artificial general intelligence (AGI), I prefer the term augmented collective intelligence (ACI), where we design processes that maximize the synergy between humans and AI.” – Gianni Giacomelli

“We developed Conversational Swarm Intelligence to scale deliberative processes while maintaining the benefits of small group discussions.” – Louis Rosenberg

About Anita Williams Woolley, Jason Burton, Gianni Giacomelli, & Louis Rosenberg

Anita Williams Woolley is the Associate Dean of Research and Professor of Organizational Behavior at Carnegie Mellon University’s Tepper School of Business. She received her doctorate from Harvard University; her subsequent research includes seminal work on collective intelligence in teams, first published in Science. Her current work focuses on collective intelligence in human-computer collaboration, with projects funded by DARPA and the NSF examining how AI enhances synchronous and asynchronous collaboration in distributed teams.

Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research Fellow at the Max Planck Institute for Human Development. His research applies computational methods to the study of human behavior in a digital society, including reasoning in online information environments and collective intelligence.

Gianni Giacomelli is the Founder of Supermind.Design and Head of Design Innovation at MIT’s Center for Collective Intelligence. He previously held a range of leadership roles in major organizations, most recently as Chief Innovation Officer at global professional services firm Genpact. He has written extensively for media and scientific journals and is a frequent conference speaker.

Louis Rosenberg is CEO and Chief Scientist of Unanimous A.I., which amplifies the intelligence of networked human groups. He earned his PhD from Stanford and has been awarded over 300 patents for virtual reality, augmented reality, and artificial intelligence technologies. He has founded a number of successful companies, including Unanimous AI, Immersion Corporation, Microscribe, and Outland Research. His new book Our Next Reality, on the AI-powered metaverse, is out in March 2024.

Websites: Gianni Giacomelli | Louis Rosenberg
University Profiles: Anita Williams Woolley | Jason Burton
LinkedIn Profiles: Anita Williams Woolley | Jason Burton | Gianni Giacomelli | Louis Rosenberg

What you will learn

Understanding the power of collective intelligence
How teams think smarter than individuals
The role of AI in amplifying human collaboration
Memory, attention, and reasoning in group decision-making
Why large language models reflect collective intelligence
Designing synergy between humans and AI
Scaling conversations with conversational swarm intelligence

Episode Resources

People: Thomas Malone, Steve Jobs

Concepts & Frameworks: Transactive Memory Systems, Reinforcement Learning from Human Feedback (RLHF), Conversational Swarm Intelligence, Augmented Collective Intelligence (ACI), Artificial General Intelligence (AGI)

Technology & AI Terms: Large Language Models (LLMs), Machine Learning, Collective Intelligence, Artificial Intelligence (AI), Cognitive Systems

Transcript

Anita Williams Woolley: Individual intelligence is a concept most people are familiar with. When we’re talking about general human intelligence, it refers to a general underlying ability for people to perform across many domains. Empirically, it has been shown that measures of individual intelligence predict a person’s performance over time. It is a relatively stable attribute.
For a long time, when we thought about intelligence in teams, we considered it in terms of the total intelligence of the individual members combined—the aggregate intelligence. However, in our work, we challenged that notion by conducting studies that showed some attributes of the collective—the way individuals coordinated their inputs, worked together, and amplified each other’s contributions—were not directly predictable from simply knowing the intelligence of the individual members.

Collective intelligence is the ability of a group to solve a wide range of problems. It also appears to be a stable collective ability. Of course, in teams and groups, you can change individual members, and other factors may alter collective intelligence more readily than individual intelligence. However, we have observed that it remains fairly stable over time, enabling greater capability. In some cases, collective intelligence can be high or low. When a group has high collective intelligence, it is more ca…

Mar 5, 2025

Helen Lee Kupp on redesigning work, enabling expression, creative constraints, and women defining AI (AC Ep78)

“I’m cautiously optimistic because never before has technology been as accessible as it is now—being able to interact with machines in a way that feels so natural to us, rather than in ones and zeros or more technical ways. AI shouldn’t replace what exists but augment and enhance our creativity, helping us tap into what makes us uniquely human.” – Helen Lee Kupp

About Helen Lee Kupp

Helen Lee Kupp is co-founder and CEO of Women Defining AI, a community of female leaders applying and driving AI. She was previously leader of strategy and analytics at Slack and co-founder of its Future Forum. She is co-author of the best-selling book “How the Future Works: Leading Flexible Teams to Do the Best Work of Their Lives”.

Website: Women Defining AI
LinkedIn Profile: Helen Lee Kupp

What you will learn

Redefining collaboration in the AI era
Unlocking human potential through technology
Why flexible work matters more than ever
The power of diverse perspectives in AI
Balancing optimism and caution in AI adoption
How leaders can foster innovation from the ground up
Women defining AI and shaping the future

Episode Resources

People: Gregory Bateson, Nichole Sterling (co-founder of Women Defining AI)

Companies & Organizations: Women Defining AI

Technical Terms & Concepts: AI (Artificial Intelligence), Generative AI, Large Language Model (LLM), Non-deterministic, AI policy, AI adoption, Machine learning (ML), Human-in-the-loop

Transcript

Ross Dawson: Helen, it is a delight to have you on the show.

Helen Lee Kupp: It’s good to be here. I love how we first started talking over an AI research paper. It was very random but awesome.

Ross: Well, that’s pushing the edges, trying to find what’s out there and see what comes on the other side. AI is emerging, and we’re sitting alongside each other. How are you feeling about today and how humans and AI are coming together?

Helen: I feel cautiously optimistic, and part of that is because I’ve been in tech for so long.
Prior to getting much deeper into AI, I was working on flexible work and research around how to rethink and redesign how we, as humans, collaborate in a way that is more personalized, more customized, and helps more people bring their best selves to work and do their best work. It was serendipitous that around the same time, there was an increase in AI innovation. Now we had technology to pair with the equation of redesigning work. COVID forced us to rethink work, not just from a people and process perspective but alongside rapid technological change. I’m cautiously optimistic because never before has technology been as accessible as it is now. We can interact with machines in a way that feels so natural rather than in ones and zeros or technical ways.

Ross: I’m very aligned with that. One of the things you said was “bring your best self to work.” I think of it as human potential. If we’re creating a future of work, we have potential futures that are not so great and others that are very positive, where people express more of who they are and their capabilities. How can we create organizations like that?

Helen: It starts with recognizing that everyone has different preferences and work styles. Organizations, teams, and leaders need to meet people where they are rather than force them into rigid structures that worked in the past. I often share this story—I’m deeply introverted. Despite jumping onto this podcast with you, I have always been an introvert. Navigating an extroverted world takes extra energy. In traditional office and meeting environments, I had to work harder to show up. However, when I had more diverse formats to interact with my team and leadership, it unlocked something for me. Instead of pretending to be the loudest in the room, I could find my own ways of expressing ideas—through text, written formats, or chat. It made work easier for me.

When you think about how that manifests across a team, leaders and organizations must avoid putting rigid boxes around collaboration—whether it’s the hours we work or the place where we work. Increasing flexibility enables people to express themselves and bring forward ideas that might otherwise remain hidden.

Ross: That’s a compelling vision. How do you bring that to reality? What do you do inside an organization to foster and enable that?

Helen: One of the tools that helped in our research on the future of work and redesigning organizations is something simple—creating a team operating manual. The act of explicitly writing down the different ways we interact as a team opens up discussions. It allows for feedback: “Does this work for you? Should we try something different?” When these conversations don’t happen, implied assumptions remain—such as the norm of working in an office from nine to five. Explicitly stating and questioning these assumptions is step one. Then, organizations should give teams and managers the flexibility to define how they work within…

Feb 19, 2025

Human AI Symbiosis Compilation (AC Ep77)

“Generative AI is the first technology with an almost natural propensity to build a symbiotic relationship with us. But symbiosis isn’t always mutualistic—it can be parasitic, where AI benefits at the detriment of humans. How we deploy AI will determine which path we take.” – Alexandra Diening

“AI provides dual affordances—it can automate our work or augment our abilities. The key challenge is deciding where to draw the line. In low-stakes tasks, automation makes sense. But in high-stakes decision-making, human intuition is irreplaceable.” – Mohammad Hossein Jarrahi

“We talk a lot about lifelong learning, but we also need to embrace lifelong forgetting. If we keep piling new knowledge on top of outdated thinking, we won’t evolve. The future isn’t about ‘us vs. them’—it’s about humans and AI co-evolving together.” – Erica Orange

“AI isn’t just changing how we work—it’s changing what it means to be human. We are interlacing with technology more deeply than ever, and in the future, AI won’t just be something we use—it will be something we integrate into ourselves.” – Pedro Uria Recio

About Alexandra Diening, Mohammad Hossein Jarrahi, Erica Orange, & Pedro Uria Recio

Alexandra Diening is Co-founder & Executive Chair of the Human-AI Symbiosis Alliance. She has held a range of senior executive roles, including as Global Head of Research & Insights at EPAM Systems. Through her career she has helped transform over 150 digital innovation ideas into products, brands, and business models that have attracted $120 million in funding. She holds a PhD in cyberpsychology, and is author of Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI and A Strategy for Human-AI Symbiosis.

Mohammad Hossein Jarrahi is Associate Professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill. He has won numerous awards for teaching and his papers, including for his article “Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making.” His wide-ranging research spans many aspects of the social and organizational implications of information and communication technologies.

Erica Orange is a futurist, speaker, and author, and Executive Vice President and Chief Operating Officer of leading futurist consulting firm The Future Hunters. She has spoken at TEDx and keynoted over 250 conferences around the world, and has been featured in news outlets including Wired, NPR, Time, Bloomberg, and CBS This Morning. Her book AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth is out in September 2024.

Pedro Uria-Recio is a highly experienced analytics and AI executive. He was until recently the Chief Analytics and AI Officer at True Corporation, Thailand’s leading telecom company, and is about to announce his next position. He is also author of the recently launched book Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity. He was previously a consultant at McKinsey and is on the Forbes Tech Council.
Websites: Alexandra Diening | Mohammad Hossein Jarrahi | Erica Orange | Pedro Uria Recio
LinkedIn Profiles: Alexandra Diening | Mohammad Hossein Jarrahi | Erica Orange | Pedro Uria Recio

What you will learn

Understanding human-AI symbiosis and its impact
Why AI can be mutualistic or parasitic
The crucial role of human intuition in AI decision-making
How automation and augmentation shape the future of work
Rethinking AI deployment beyond traditional software models
The need for lifelong forgetting to adapt to AI advancements
How AI could transform humanity through deep integration

Episode Resources

Companies & Organizations: Human-AI Symbiosis Alliance, IBM, OpenAI, NPR

Books & Publications: AI and the New Human Frontier (by Erica Orange), Machines of Tomorrow (by Pedro Uria Recio)

Technical Terms & Concepts: Human-AI symbiosis, Generative AI, Automation vs. augmentation, Algorithmic management, Brain-computer interfaces, Deep learning, Data bias, AI literacy, AI product lifecycle, Holistic decision-making, Lifelong learning

Transcript

Ross Dawson: So, you’ve recently established the Human-AI Symbiosis Alliance, and that sounds very, very interesting. But before we dig into that, I’d like to hear a bit of the backstory. How did you come to be on this journey?

Alexandra Diening: It’s a long journey. I’ll try to make it short and interesting. I entered the world of AI almost two decades ago through a very unconventional path—neuroscience. I’m a neuroscientist by training, and my focus was on understanding how the brain works. Naturally, if you want to process all the neuroscience data, you can’t do it alone. You inevitably have to touch upon AI. That was my gateway into the field.

As I started working with AI, I gained a basic understanding of how it operates from a technical perspective as a scientific discipline. At that time, there w…

Feb 13, 2025

Rita McGrath on inflection points, AI-enhanced strategy, memories of the future, and the future of professional services (AC Ep76)

“What I argue is you need to supplement what your brain can manage on its own. And this is where I think AI comes in… blending the human imagination together with AI’s ability to crunch massive amounts of data—that’s where I think we’re going to see a lot of power in the world of strategy.” – Rita McGrath

About Rita McGrath

Rita McGrath is one of the world’s top experts on strategy and innovation. She is consistently ranked among the top 10 management thinkers globally and has earned the #1 award for strategy from Thinkers50. She is Professor of Strategy at Columbia Business School, and Founder of the Rita McGrath Group and Valize LLC. Her books include The End of Competitive Advantage and Seeing Around Corners.

Website: Rita McGrath | Valize
LinkedIn Profile: Rita McGrath
University Profile: Rita McGrath

What you will learn

Navigating the acceleration of business and strategy
Understanding inflection points and their impact on industries
How AI enhances human decision-making and sense-making
Why competitive advantages are becoming more transient
The surprising link between digital habits and declining gum sales
The future of consulting and professional services in an AI-driven world
How leaders can prepare for the evolving nature of work

Episode Resources

People: Clayton Christensen, Ray Kurzweil, Brian Chesky, David Maister

Companies & Organizations: Klarna, Airbnb, Valize

Books: The End of Competitive Advantage, The Living Company, The Skill Code

Technical Terms & Concepts: Transient advantage, Disruptive technology, Inflection points, Digitalization, Dematerialization, Sense-making, Strategic thinking, Gig economy, Circular economy, Automation, Competitive advantage

Transcript

Ross Dawson: Rita, it is fantastic to have you on the show.

Rita McGrath: Thank you very much for inviting me.

Ross: So my personal experience is that, over time, the world has come towards me, and what I’ve been thinking has become more and more of a reality. That strikes me very much with your work.
I think you’ve been incredibly prescient. A lot of the themes you’ve worked on for years are even more relevant today than they were earlier. Has that been your feeling?

Rita: It has. It has. I mean, I was writing about what we would now recognize as the lean startup movement back in the ’90s. Clayton Christensen and I were working together on his idea of disruptive technology. My book The End of Competitive Advantage, which basically argued that competitive advantages last for shorter and shorter periods of time, came out in 2013, and people are still saying, “Wow, that’s so interesting.” So it is that kind of feeling.

Ross: In particular, you’ve talked about transient advantage from very early on: that advantage has become more and more transient, which we can frame as acceleration. And I think that there used to be a bit of debate—is the world accelerating, or is it just a feeling that it’s accelerating? So what’s your perception today in terms of where we might move forward, especially regarding the pace of change in business, strategy, and competitive advantage? Is this acceleration likely to continue?

Rita: Yes. To quote Ray Kurzweil, any system that embeds experience-based learning—trial and error learning—tends to follow an exponential change pattern. It’s not additive, it’s not linear—it’s exponential. And we, as human beings, experience that as things moving faster and faster. So, day one, it’s two. Day two, it’s four. Day three, it’s eight. Eventually, these exponential curves take off, and I think we’re seeing quite a bit of that with the current developments in AI at the moment.

Ross: You’ve pointed to this theme of inflection points. How would you frame some of the current developments in AI or its impact on business around that theme? Are we living through an inflection point or a phase—or a series of them at the moment?

Rita: Yeah, I believe we are. And I would say that there are multiple levels of inflection points.
At a 30,000-foot level, if you think about the financial and social structures of capitalist systems, they go through these 50- to 70-year cycles each time a new technology emerges that dramatically changes our ability to do something. Going all the way back to the 1700s and the original Industrial Revolution, what you see happening is what I define as an inflection point—something that creates a 10x shift in what’s possible.

In the Industrial Revolution, labor was automated. Then we had the mass production era—cars, suburbs, petroleum-based economies—that has been coming to the end of its S curve of delivering prosperity and productivity. The next wave is really this era of digitalization, which I would date to the early ’70s, with the microprocessor and the earliest digital technologies. What digitalization does is change what’s possible by a factor of 10. Some of the effects are quite surprising. For example, one of them is dematerialization—taking things that used to r…

Feb 5, 2025

Christian Stadler on AI in strategy, open strategy, AI in the boardroom, and capabilities for strategy (AC Ep75)

“AI can be an unusual voice that gives you fresh ideas, makes you think differently, and provides the kind of fuel that sparks innovation. But ultimately, humans provide the context, the judgment, and the ability to bring strategy to life.” – Christian Stadler

About Christian Stadler

Christian Stadler is a professor of strategic management at Warwick Business School. He is author of Open Strategy, which was named a Best Business Book by the Financial Times and Strategy+Business and has been translated into 11 languages. His work has been featured in Harvard Business Review, the New York Times, the Wall Street Journal, CNN, BBC, and Al Jazeera, among others.

Website: Christian Stadler
LinkedIn Profile: Christian Stadler
University Profile: Christian Stadler

What you will learn

How AI is changing strategic decision-making
The role of AI as a co-strategist, not a replacement
Why AI disrupts but enhances boardroom discussions
How open strategy leads to better execution
Leveraging collective intelligence for stronger strategies
The rising importance of political awareness in leadership
Engaging employees to drive innovation and strategy

Episode Resources

Companies & Organizations: Amazon, IBM

Books: Open Strategy

Technical Terms & Concepts: Scenario planning, Red-teaming, Large language models (LLMs), ChatGPT, Collective intelligence, Strategic decision-making, Strategy execution, H-1B visas, Employee engagement in strategy

Transcript

Ross Dawson: Christian, it’s a delight to have you on the show.

Christian Stadler: Thanks for having me, Ross. It is a delight for me as well.

Ross: So, you have a deep background in open strategy, and you’ve also been looking at the role of AI in strategy and strategic decision-making. At a high level, how do you see the role of AI in strategy making today?

Christian: I’m an optimist.
I think generally, by nature, and also when it comes to how AI can actually be useful for strategists, more and more people are coming to see AI as a partner in many different areas of what we do. I think that’s true for strategy as well. We have some form of co-decision-making, co-intelligence, or an additional voice that we can use in the strategy-making process. For that, it’s really cool.

Ross: These are human-first processes, I suppose. The more complex the decisions are, the more multifaceted they become, and the more the human element needs to be at the forefront. Strategy seems to fall into that category. What are the places where AI might provide support, complementary perspectives, or analysis that are particularly valuable?

Christian: Strategy, obviously, consists of different “boxes” or activities. Some involve coming up with new ideas—something new you want to do in your strategy. Other parts involve fine-tuning and formulating the strategy. Then there’s the execution and implementation side. Probably in each of these aspects, it makes sense to use AI in slightly different ways.

When it comes to ideation, I can ask a tool for ideas, such as setting up a new product line. I played around with this early on when ChatGPT started gaining traction. Even then, it was phenomenally good if you guided the conversation as a strategist. If you just ask ChatGPT, you get generic suggestions, and sometimes they don’t make sense. For example, I once asked for a suggestion for a streaming service. One idea was to create some form of entertainment platform and partner with universities. Being a professor, I know that universities don’t work like that. Professors aren’t told to participate by some central directive. You need to find ways to motivate individual professors. As I pushed the platform further, better ideas came up. As long as you drive the conversation and are smart about it, AI can provide good ideas.

When it comes to fine-tuning and formulating, the tool can be quick. I’ve been experimenting with a company in Austria for over a year. They make sneakers—Gieswein. We tried seeing what happens in board meetings when we bring ChatGPT into the mix. For instance, when we needed a press release, the tool quickly drafted something. In this case, we didn’t need an agency, which saved time and resources. However, when it comes to execution, that’s more of a human game. You need to convince people to buy into ideas and feel comfortable with new directions. AI has limitations here, but other tools can help involve more people. Greater involvement aids implementation.

Ross: There’s a lot there I’d like to dig into. We might do a bit of hopping around.

Christian: It’s a bit long-winded, isn’t it? I just keep talking on and on. My bad.

Ross: It’s all good. One interesting point is that part of Amazon’s internal processes involves starting with a press release for a potential product. Then they work backward to figure out how to achieve it. That’s something ChatGPT can facilitate in board meetings. You can draft a press release and discuss if this is…

Jan 29, 2025

Valentina Contini on AI in innovation, multi-potentiality, AI-augmented foresight, and personas from the future (AC Ep74)

“We don’t just give creative thinking to the AI, but we actually use the AI to make space for our own creative thinking.” – Valentina Contini

About Valentina Contini

Valentina Contini is an innovation strategist for a global IT services firm, a technofuturist, and a speaker. She has a background in engineering, innovation design, AI-powered foresight, and biohacking. Her previous work includes founding the Innovation Lab at Porsche.

Website: Valentina Contini
LinkedIn Profile: Valentina Contini

What you will learn

Exploring the power of being a professional black sheep
Using AI as a creative sparring partner
Bridging the gap between ideas and visuals with AI tools
Accelerating foresight processes through generative AI
Unlocking human potential with AI-augmented creativity
Envisioning immersive future scenarios with digital personas
Embracing technology to make space for critical thinking

Episode Resources

People: Leonardo da Vinci, Refik Anadol

Companies: NTT

Technical Terms: AI (Artificial Intelligence), Generative AI, Brain-computer interfaces, Digital twin, Futures wheel, Speculative design, Large language models (LLM), Quantum computing, Decentralization

Transcript

Ross Dawson: Valentina, it’s awesome to have you on the show.

Valentina Contini: Oh, thank you. Thank you for inviting me here.

Ross: So, you call yourself a professional black sheep. That sounds like a good job to me. So what does that mean?

Valentina: On LinkedIn, a lot of people have very nice, amazing titles or super inspirational quotes. And for me, it was always like, what am I actually? After a bit of thinking, I realized that wherever I am, I am actually always the one that is different. In the past, as a mechanical engineer, I was building cars for 15 years. That’s kind of weird if you are a woman, and also not really looking like the standard engineer. Then I changed jobs, and I always ended up being the different one.
I was in strategy consulting for a bit, and again, being an engineer in a strategy consulting role was the weird thing—it was not normal. So I’m always the weird one. I think that “professional black sheep” pretty much describes that.

Ross: Well, I think the future is in being weird. I mean, if you’re not weird, then you’re probably not gonna have a job. If you are weird, then you probably will.

Valentina: Yeah, definitely, definitely. I think that’s the main selling point right now.

Ross: So, innovation strategy, I think, is probably a reasonable description of a lot of what you do at the moment. Starting from that, you augment yourself in many ways—you augment your work and so on. How can we augment the process of innovating, making the new faster and better? What are the elements of that? What does that look like?

Valentina: I think a big part of it comes now thanks to AI, for a very specific reason. Since the pandemic, we are not really spending time in working environments together with other people in the same place. There is less of the exchange that creates innovation and creativity or sparks something out of a random discussion. Generative AI, with the leap it made in the last year, is like a sparring partner that you always have without needing to be among other people. What is interesting is that generative AI is not just one person—it’s collective knowledge from many people. It has many downsides as well, but focusing on this, I can access many people at the same time when I use a tool like generative AI.

Ross: So that’s, in a way, an individual tool. It’s a creative sparring partner that can augment our creativity. I think we can maybe come back to some of that in various ways, but thinking about an organizational level—going from individual creativity to an innovation process where the organization innovates—what are some of the other pieces of that puzzle?

Valentina: You can use it in many different steps of the way. I think another very important piece is using AI to automate easy, repetitive, and boring tasks so that employees have more time available for their creative thinking. We don’t just give creative thinking to the AI, but we actually use the AI to make space for our own creative thinking.

I also believe that what is very interesting is that I have a very visual brain. In my mind, there are always images of what I envision for the future—whether as a product or an idea. Tools like AI image generators can bridge the gap between the images in my brain and showing other people those images. I think that’s a very powerful way to actually augment or enhance our capabilities.

Ross: Just on that, though—you are an illustrator as well, correct?

Valentina: Not really. What I’m now working on is a project where we create future scenarios. The narrative is very important, but at the same time, it’s difficult to understand what the future is if you cannot see it. I use these tools to generate images of the future—products, advertisements, or speculative design. That’s something I would ha…

Dec 18, 2024

Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73)

“Not everyone can see with dragonfly eyes, but can we create tools that help enable people to see with dragonfly eyes?” – Anthea Roberts

About Anthea Roberts

Anthea Roberts is Professor at the School of Regulation and Global Governance at the Australian National University (ANU) and a Visiting Professor at Harvard Law School. She is also the Founder, Director, and CEO of Dragonfly Thinking. Her latest book, Six Faces of Globalization, was selected as one of the Best Books of 2021 by the Financial Times and Fortune Magazine. She has won numerous prestigious awards and has been named “The World’s Leading International Law Scholar” by the League of Scholars.

Websites: Dragonfly Thinking | Anthea Roberts
LinkedIn Profile: Anthea Roberts
University Profile: Anthea Roberts

What you will learn

Exploring the concept of dragonfly thinking
Creating tools to see complex problems through many lenses
Shifting roles from generator to director and editor with AI
Understanding metacognition in human-AI collaboration
Addressing cultural biases in large language models
Applying structured analytic techniques to real-world decisions
Navigating the cognitive industrial revolution with AI

Episode Resources

People: Sam Bide, Philip Tetlock, Harrison Chase

Companies & Organizations: Dragonfly Thinking, Australian National University

Books: Is International Law International? by Anthea Roberts, Six Faces of Globalization by Anthea Roberts

Technical Terms: Structured analytic techniques; Risk, reward, and resilience framework; Large language models (LLMs); Agentic workflows; Cognitive architecture; Metacognition; Reinforcement learning; Super forecasting; Wisdom of the silicon crowd

Transcript

Ross Dawson: Anthea, it is a delight to have you on the show.

Anthea Roberts: Thank you very much for having me.

Ross: So you have a very interesting company called Dragonfly Thinking, and I’d like to delve into that and dive deep.
But first of all, I’d like to hear the backstory of how you came to see the idea and create the company. Anthea: Well, it’s probably an unusual route to creating a startup. I come with no technology background initially, and two years ago, if you told me I would start a tech startup, I would never have thought that was very likely—and no one around me would have, either. My other hat that I wear when I’m not doing the company is as a professor of global governance at the Australian National University and a repeat visiting professor at Harvard. I’ve traditionally worked on international law, global governance, and, more recently, economics, security, and pushback against globalization. I moved into a very interdisciplinary role, where I ended up doing a lot of work with different policymakers. Part of what I realized I was doing as I moved around these fields was creating something that the intelligence agencies call structured analytic techniques—techniques for understanding complex, ambiguous, evolving situations. For instance, in my last book, I used one technique to understand the pushback against economic globalization through six narratives—looking at a complex problem from multiple sides. Another was a risk, reward, and resilience framework to integrate perspectives and make decisions. All of this, though, I had done completely analog. Then the large language models came out. I was working with Sam Bide, a younger colleague who was more technically competent than I was. One day, he decided to teach one of my frameworks to ChatGPT. On a Saturday morning, he excitedly sent me a message saying, “That framework is really transferable!” I replied, “I made it to be really transferable.” He said, “No, no, it’s really transferable.” We started going back and forth on this. At the time, Sam was moving into policy, and he created a persona called “Robo Anthea.” He and other policymakers would ask Robo Anthea questions. 
It had my published academic scholarship, but also my unpublished work. At a very early stage, I had this confronting experience of having a digital twin. Some people asked, “Weren’t you horrified or worried about copyright infringement?” But I didn’t have that reaction. I thought it was amazingly interesting. What could happen if you took structured techniques and worked with this extraordinary form of cognition? It allowed us to apply these techniques to areas I knew nothing about. It also let me hand this skill off to other people. I leaned into it completely—on one condition: we changed the name from Robo Anthea to Dragonfly Thinking. It was both less creepy for me and a better metaphor. This way of seeing complex problems from many different sides is a dragonfly’s ability. I think I’m a dragonfly, but I believe there are many dragonflies out there. I wanted to create a platform for this kind of thinking—where dragonflies could “swarm” around and develop ideas together. Ross: Just explain the dragonfly concept. Anthea: We took the concept from some work done by Philip Tetlock

Dec 11, 2024

Kevin Eikenberry on flexible leadership, both/and thinking, flexor spectrums, and skills for flexibility (AC Ep72)

“To be a flexible leader is to make sense of the world in a way that allows you to intentionally ask, ‘How do I need to lead in this moment to get the best results for my team and the outcomes we need?’” – Kevin Eikenberry About Kevin Eikenberry Kevin Eikenberry is Chief Potential Officer of leadership and learning consulting company The Kevin Eikenberry Group. He is the bestselling author or co-author of 7 books, including the forthcoming Flexible Leadership. He has been named to many lists of top leaders, including twice to Inc. magazine’s Top 100 Leadership and Management Experts in the World. His podcast, The Remarkable Leadership Podcast, has listeners in over 90 countries. Website: The Kevin Eikenberry Group LinkedIn Profiles Kevin Eikenberry The Kevin Eikenberry Group Book Flexible Leadership: Navigate Uncertainty and Lead with Confidence   What you will learn Understanding the essence of flexible leadership Balancing consistency and adaptability in decision-making Embracing “both/and thinking” to navigate complexity Exploring the power of context in leadership strategies Mastering the art of asking vs. telling Building habits of reflection and intentionality Developing mental fitness for effective leadership Episode Resources People Carl Jung F. Scott Fitzgerald David Snowden Book Flexible Leadership: Navigate Uncertainty and Lead with Confidence Frameworks/Concepts Myers-Briggs Cynefin framework Confidence-competence loop Organizations/Companies The Kevin Eikenberry Group Technical Terms Leadership style “Both/and thinking” Compliance vs. commitment Ask vs. tell Command and control Sense-making Plausible cause analysis Transcript Ross Dawson: Kevin, it is wonderful to have you on the show. Kevin Eikenberry: Ross, it’s a pleasure to be with you. I’ve had conversations about this book for podcasts. This is the first one that’s going to go live to the world, so I’m excited about that. Ross: Fantastic. 
So the book is Flexible Leadership: Navigate Uncertainty and Lead with Confidence. What does flexible leadership mean? Kevin: Well, that’s a pretty good starting question. Here’s the big idea, Ross: so many people have come up in leadership and taken assessments of one sort or another. They’ve done Strengths Finder or a leadership style assessment, and it’s determined that they are a certain style or type. That’s useful to a point, but it becomes problematic beyond that. Humans are pattern recognizers, so once we label ourselves as a certain type of leader, we tend to stick to that label. We start thinking, “This is how I’m supposed to lead.” To be a flexible leader means we need to start by understanding the context of the situation. Context determines how we ought to lead in a given moment rather than relying solely on what comes naturally to us. Being a flexible leader involves making sense of the world intentionally and asking, “How do I need to lead in this moment to get the best results for my team and the outcomes we’re working towards?” Ross: I was once told that Carl Jung, who wrote the typology of personalities that forms the foundation of Myers-Briggs, said something similar. I’ve never found the original source, but apparently, he believed the goal was not to fix ourselves at one point on a spectrum but to be as flexible as possible across it. So, we’re all extroverts and introverts, sensors and intuitors, thinkers and feelers. Kevin: Exactly. None of us are entirely one or the other on these spectrums. They’re more like continuums. Take introvert vs. extrovert. Some people are at one extreme or the other, but no one is a zero on either side. The problem arises when we label ourselves and think, “This is who I am.” That may reflect your natural tendency, but it doesn’t mean that’s the only way you can or should lead. Ross: One of the themes in your book is “both/and thinking,” which echoes what I wrote in Thriving on Overload. 
You can be both extroverted and introverted. I see that in myself. Kevin: Me too. Our world is so focused on “either/or” thinking, but to navigate complexity and uncertainty as leaders, we must embrace “both/and” thinking. Scott Fitzgerald once said something along the lines of, “The test of a first-rate intelligence is the ability to hold two opposing ideas in your mind at the same time and still function.” I’d say the same applies to leadership. To be highly effective, leaders must consider seemingly opposite approaches and determine what works best given the context. Ross: That makes sense. Most people would agree that flexible leadership is a sound idea. But how do we actually get there? How does someone become a more flexible leader? Kevin: The first step is recognizing the value of flexibility. Many leaders get stuck on the idea of consistency. They think, “To be effecti

Dec 4, 2024

Alexandra Diening on Human-AI Symbiosis, cyberpsychology, human-centricity, and organizational leadership in AI (AC Ep71)

“It’s not just about the AI itself; it’s about the way we deploy it. We need to focus on human-centric practices to ensure AI enhances human potential rather than harming it.” – Alexandra Diening About Alexandra Diening Alexandra Diening is Co-founder & Executive Chair of the Human-AI Symbiosis Alliance. She has held a range of senior executive roles, including Global Head of Research & Insights at EPAM Systems. Throughout her career she has helped transform over 150 digital innovation ideas into products, brands, and business models that have attracted $120 million in funding. She holds a PhD in cyberpsychology, and is the author of Decoding Empathy: An Executive’s Blueprint for Building Human-Centric AI and A Strategy for Human-AI Symbiosis. Website: Human-AI Symbiosis LinkedIn Profiles Alexandra Diening Human-AI Symbiosis Alliance Book A Strategy for Human-AI Symbiosis What you will learn Exploring the concept of human-AI symbiosis Recognizing the risks of parasitic AI Bridging neuroscience and artificial intelligence Designing ethical frameworks for AI deployment Balancing excitement and caution in AI adoption Understanding AI’s impact on individuals and organizations Leveraging practical strategies for mutualistic AI development Episode Resources Organizations and Alliances Human-AI Symbiosis Alliance Fortune 500 companies Books A Strategy for Human-AI Symbiosis Technical Terms Human-AI symbiosis Generative AI Cognitive sciences Cyberpsychology Neuroscience AI avatars Algorithmic bias Responsible AI Symbiotic AI Transcript Ross Dawson: Alexandra, it’s a delight to have you on the show. Alexandra Diening: Thank you for having me, Ross. Very happy to be here. Ross: So you’ve recently established the Human-AI Symbiosis Alliance, and that sounds very, very interesting. But before we dig into that, I’d like to hear a bit of the backstory. How did you come to be on this journey? Alexandra: It’s a long journey, but I’ll try to make it short and quite interesting.
I entered the world of AI almost two decades ago, and it was through a very unconventional path—neuroscience. I’m a neuroscientist by training, and my focus was on understanding how the brain works. Of course, if you want to process all the neuroscience data, you can’t do it alone. Inevitably, you need to incorporate AI. This was my gateway to AI through neuroscience. At the time, there weren’t many people working on this type of AI, so the industry naturally pulled me in. I transitioned to working on business applications of AI, progressively moving from neuroscience to AI deployment within business contexts. I worked with Fortune 500 companies across life sciences, retail, finance, and more. That was the first chapter of my entry into the world of AI. When deploying AI in real business scenarios, patterns start to emerge. Sometimes you succeed; sometimes you fail. What I noticed was that when we succeeded and delivered long-term tangible business value, it was often due to a strong emphasis on human-centricity. This focus came naturally to me, given my background in cognitive sciences. This emphasis became even more critical with the emergence of generative AI. Suddenly, AI was no longer just a background technology crunching data and influencing decisions behind the scenes. It became something we could interact with using natural language. AI started capturing emotions, building relationships, and augmenting our capabilities, emerging as a kind of social, technological actor. This led to our hypothesis that generative AI is the first technology with a natural propensity to build symbiotic relationships with humans. Unlike traditional technologies, there is mutual interaction. While “symbiosis” may sound romantic, it can manifest across a spectrum of outcomes, from positive (mutualistic) to negative (parasitic). In business, I started to see the emergence of parasitic AI—AI that benefits at the detriment of humans or organizations. 
This realization began to trouble me deeply. While I was working for multi-billion-dollar tech companies, I advocated for Responsible AI and human-centric practices. However, I realized the impact I could have was limited if this remained a secondary concern in corporate agendas. This led to the establishment of the Human AI Symbiosis Alliance. Its mission is to educate people about the risks of parasitic AI and to guide organizations in steering AI development toward mutualistic outcomes. Ross: That’s… well, there’s a lot to dig into there. I look forward to delving into it. You referred to being human-centric, and I think you seem to be a very human-centric person. One point that stood out was the idea of generative AI’s propensity for symbiosis. Hopefully, we can return to that. But first, you did your Ph.D. in cyber psychology, I believe. What is cyber psychology, and what did you learn? Alexandra: Cyber

Nov 27, 2024

Kevin Clark & Kyle Shannon on collective intelligence, digital twin elicitation, data collaboratives, and the evolution of content (AC Ep70)

“What these tools allow you to do is very, very quickly go from an idea to sort of an 80% manifestation of it. It’s not just about the technology—it’s about understanding how, when, and why to use it to unlock collective intelligence.” – Kyle Shannon “We’ve discovered you can externalize the voice in your head into something you can have a dialogue with, creating reflective moments that result in documentation, not fleeting thoughts. That’s transformative.” – Kevin Clark About Kevin Clark & Kyle Shannon Kevin Clark is the President and Federation Leader of Content Evolution, a global consulting ecosystem working in brand, customer experience, business strategy and transformation. He previously worked for IBM as Program Director, Brand & Values Experience. He is on the board of numerous companies and has written many articles, book chapters, and books, including Brandscendence. Kyle Shannon is Founder & CEO of video production company Storyvine, Founder of collaborative community the AI Salon, and Chief Generative Officer of Content Evolution. Previous roles include EVP Creative Strategy at The Distillery and Co-Founder of Agency.com.
Websites: www.contentevolution.net www.thesalon.ai LinkedIn Profiles Kevin Clark Kyle Shannon Book Collective Intelligence in the Age of AI What you will learn Exploring the power of digital twins in collaboration Overcoming creative blocks with generative AI tools Asking better questions to unlock AI’s potential Designing structured interviews for personalized AI Understanding collective intelligence in the digital age Rapid prototyping to test and refine ideas quickly Reshaping industries with untapped organizational data Episode Resources Emily Shaw Aristotle Steve Jobs Content Evolution CoLab Storyvine AI Salon Fortune 500 Gartner Digital twins Generative AI Large Language Models (LLMs) GPT Notebook LM Transformer architecture Data collaboratives Books, Shows, and Titles Collective Intelligence and AI Candy Ears The Hitchhiker’s Guide to the Galaxy Transcript Ross Dawson: Kevin and Kyle, wonderful to have you on the show. Kevin Clark: Pleasure to be here. Kyle Shannon: Ross, great to be here. Ross: So, you created a book recently called Collective Intelligence and AI. I’d like to pull back to the big picture of where this fits into what you’re doing. This organization is called Content Evolution. How did you get to this place of creating this book and the other things you are doing using AI to assist in your work? Kevin: Well, Content Evolution itself is a federation of companies that are aligned. We’re all thoughtful leaders and innovators and have been at it for 23 years now. This technology is helping us pull the thread forward a lot faster. As Kyle will describe in a moment, we have almost 30 digital agents—or what we call digital advisors—of ourselves. As a result, we have a collective of those, and we can all write together. We’ve published articles and done all kinds of things. This book is a particular expression between the two of us because we’ve been talking to each other for over a decade.
It’s the residue of a decade’s worth of weekly conversations. There’s more to it—Kyle, say more. Kyle: When we started, we put together a group within Content Evolution called CoLab. The initial idea was, “Hey, this AI stuff is happening.” We started this probably a year and a half ago, almost two years ago. Generative AI was clearly evolving rapidly, so it felt important to explore. Like with all new technologies, you start with the tools, but very quickly, you ask, “Why? What are we trying to accomplish?” Content Evolution is an organization that’s a couple of decades old. One challenge was figuring out who’s in it and what talents exist within it. Initially, we asked, “Could we create a tool using generative AI to help someone discover the right person for a business problem?” That’s how it started. Over time, we realized we could create digital representations of ourselves—digital twins or digital advisors—that people could interact with 24/7. Even if Kevin wasn’t available, you could get his point of view. We’ve built 30 of these digital twins. They’re all in a single entity, a single GPT, where we can query them for the Content Evolution perspective on a topic. Individuals within that group can also comment on outputs. A big part of what we’re exploring now is understanding how, when, and why to use these tools. That’s far more fascinating than just the technology itself. Kevin: By the way, Kyle is the world’s first Chief Generative Officer. We didn’t put AI in the title because being generative is more important than the specific technologies you use. It’s about the practices, methodologies, and discernment of when to apply them—and sometimes, when to set them aside. We’ve discovered you can overcome writer’s block quickly by having

Nov 20, 2024 · 41 min

Samar Younes on pluridisciplinary art, AI as artisanal intelligence, future ancestors, and nomadic culture (AC Ep69)

“To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of ‘futuristic.’ Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels ‘modern.’” – Samar Younes About Samar Younes Samar Younes is a pluridisciplinary hybrid artist and futurist working across art, design, fashion, technology, experiential futures, culture, sustainability and education. She is founder of SAMARITUAL which produces the “Future Ancestors” series, proposing alternative visions for our planet’s next custodians. She has previously worked in senior roles for brands like Coach and Anthropologie and has won numerous awards for her work. LinkedIn: Samar Younes Website: www.samaritual.com University Profile: Samar Younes What you will learn Exploring the intersection of art, AI, and cultural identity Reimagining future aesthetics through artisanal intelligence Blending traditional craftsmanship with digital innovation Challenging Western-centric ideas of “modern” and “futuristic” Using AI to amplify narratives from the Global South Building a sustainable, nature-anchored digital future Embracing imperfection and creativity in the age of AI Episode Resources Silk Road Web3 Metaverse Orientalist AI (Artificial Intelligence) Artisanal Intelligence Dubai Future Forum Neuroaesthetics ChatGPT Runway ML Midjourney Archives of the Future Luma Large Language Model (LLM) Gun Model Transcript Ross Dawson: Samar, it’s awesome to have you on the show. Samar Younes: Thank you so much. Thanks for having me. Ross: So you describe yourself as a pluridisciplinary hybrid artist, futurist, and creative catalyst. That sounds wonderful. What does that mean? What do you do? Samar: What does that mean? It means that I am many layers of the life that I’ve had.
I started my training as an architect and worked as a scenographer and set designer. I’ve always been interested in bringing public art to the masses and fostering social discourse around public art and art in general. I’ve also always been interested in communicating across cultures. Growing up as a child of war in Beirut, among various factions—religious and cultural—it was a diverse city, but it was also a place where knowledge and deep, meaningful discussions were vital to society. Having a mother who was an artist and a father who was a neurologist, I became interested in how the brain and art converge, using art and aesthetics to communicate culture and social change. In my career, I began in brand retail because, at the time, public art narratives and opportunities to create what I wanted were limited. So I used brand experiences—store design, window displays, art installations, and sensory storytelling—as channels to engage people. As the world shifted more towards digital, I led brands visually, aiming to bridge digital and physical sensory frameworks. But as Web3, the metaverse, and other digital realms emerged, I found that while exciting, they lacked the artisanal textures and layers that were important to me. Working across mediums—architecture, fashion, design, food—I saw artificial intelligence as akin to working with one’s hands, very similar to what artisans do. That’s how I got into AI, as a challenge to amplify narratives from the Global South, reclaiming aesthetics from my roots. Ross: Fascinating. I’d love to dig into something specific you mentioned: AI as artisanal. What does that mean in practice if you’re using AI as a tool for creativity? Samar: Often, when people use AI, specifically generative AI with prompts or images, they don’t realize the role of craftsmanship or the knowledge of craft required to create something that resonates. 
Much digital imagery has a clinical, dystopian aesthetic, often cold and disconnected from nature or biomorphic elements, which are part of the world crafted by hand. To me, envisioning a future should involve elements anchored in nature, modern materials, and sustainable practices, challenging Western-centric constructs of “futuristic.” Ancient civilizations, like Egypt’s with the pyramids, exemplify timeless modernity. Similarly, the Global South has always been avant-garde in subversion and disruption, but this gets re-appropriated in Western narratives. Artisanal intelligence is about understanding material culture, combining traditional craft with modern techniques, and redefining what feels “modern.” Ross: Right. AI offers a broad palette, not just in styles from history but also potentially in areas like material science and philosophy. It supports a pluridisciplinary approach, assisted by the diversity of AI training data. Samar: Exactly. When I think of AI, I see data sets as materials, not just images. If data is a medium, I’m not interested in recreating a Picasso. I see each data set as a material, like paint on a palette—acr

Nov 6, 2024

Jason Burton on LLMs and collective intelligence, algorithmic amplification, AI in deliberative processes, and decentralized networks (AC Ep68)

“When you get a response from a language model, it’s a bit like a response from a crowd of people, shaped by the preferences of countless individuals.” – Jason Burton About Jason Burton Jason Burton is an assistant professor at Copenhagen Business School and an Alexander von Humboldt Research fellow at the Max Planck Institute for Human Development. His research applies computational methods to studying human behavior in a digital society, including reasoning in online information environments and collective intelligence. LinkedIn: Jason William Burton Google Scholar page: Jason Burton University Profile (Copenhagen Business School): Jason Burton What you will learn Exploring AI’s role in collective intelligence How large language models simulate crowd wisdom Benefits and risks of AI-driven decision-making Using language models to streamline collaboration Addressing the homogenization of thought in AI Civic tech and AI’s potential in public discourse Future visions for AI in enhancing group intelligence Episode Resources Nature Human Behaviour How Large Language Models Can Reshape Collective Intelligence ChatGPT Max Planck Institute for Human Development Reinforcement learning from human feedback DeepMind Digital twin Wikipedia Algorithmic Amplification and Society Wisdom of the crowd Recommender system Decentralized autonomous organizations Civic technology Collective intelligence Deliberative democracy Echo chambers Post-truth People Jürgen Habermas Dave Rand Ulrika Hahn Helena Landemore Transcript Ross Dawson: Jason, it is wonderful to have you on the show. Jason Burton: Hi, Ross. Thanks for having me. Ross: So you and 27 co-authors recently published in Nature Human Behaviour a wonderful article called How Large Language Models Can Reshape Collective Intelligence. I’d love to hear the backstory of how this paper came into being with 28 co-authors. Jason: It started in May 2023.
There was a research retreat at the Max Planck Institute for Human Development in Berlin, about six months or so after ChatGPT had really come into the world, at least for the average person. We convened a sort of working group around this idea of the intersection between language models and collective intelligence, something interesting that we thought was worth discussing. At that time, there were just about five or six of us thinking about the different ways to view language models intersecting with collective intelligence: one where language models are a manifestation of collective intelligence, another where they can be a tool to help collective intelligence, and another where they could potentially threaten collective intelligence in some ways. On the back of that working group, we thought, well, there are lots of smart people out there working on similar things. Let’s try to get in touch with them and bring it all together into one paper. That’s how we arrived at the paper we have today. Ross: So, a paper being the manifestation of collective intelligence itself? Jason: Yes, absolutely. Ross: You mentioned an interesting part of the paper—that LLMs themselves are an expression of collective intelligence, which I think not everyone realizes. How does that work? In what way are LLMs a type of collective intelligence? Jason: Sure, yeah. The most obvious way to think about it is these are machine learning systems trained on massive amounts of text. Where are the companies developing language models getting this text? They’re looking to the internet, scraping the open web. And what’s on the open web? Natural language that encapsulates the collective knowledge of countless individuals. By training a machine learning system to predict text based on this collective knowledge they’ve scraped from the internet, querying a language model becomes a kind of distilled form of crowdsourcing. 
When you get a response from a language model, you’re not necessarily getting a direct answer from a relational database. Instead, you’re getting a response that resembles the answer many people have given to similar queries. On top of that, once you have the pre-trained language model, a common next step is training through a process called reinforcement learning from human feedback. This involves presenting different responses and asking users, “Did you like this response or that one better?” Over time, this system learns the preferences of many individuals. So, when you get a response from a language model, it’s shaped by the preferences of countless individuals, almost like a response from a crowd of people. Ross: This speaks to the mechanisms of collective intelligence that you write about in the paper, like the mechanisms of aggregation. We have things like markets, voting, and other fairly crude mechanisms for aggregating human intelligence, insight, or perspective. This seems like a more complex and higher-order aggregation mechanism. Jason: Yeah. I think at its core, language models are performing a form of c
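The wisdom-of-the-crowd effect Burton invokes, where an aggregated answer tends to beat the typical individual answer, is easy to demonstrate with a short simulation. This is an illustrative sketch, not code from the episode; the true value, noise level, and crowd size are arbitrary assumptions.

```python
import random

random.seed(42)  # fixed seed so the demonstration is reproducible

truth = 100.0  # the quantity the crowd is estimating (arbitrary assumption)

# Each "person" reports the truth plus independent, unbiased Gaussian noise.
estimates = [truth + random.gauss(0, 20) for _ in range(500)]

# Simple averaging is the aggregation mechanism, analogous to the cruder
# mechanisms (voting, markets) mentioned in the conversation.
crowd_answer = sum(estimates) / len(estimates)
crowd_error = abs(crowd_answer - truth)
avg_individual_error = sum(abs(e - truth) for e in estimates) / len(estimates)

print(f"crowd error: {crowd_error:.2f}")
print(f"average individual error: {avg_individual_error:.2f}")
```

With independent, unbiased errors, averaging cancels noise at a rate of roughly 1/sqrt(n), so the crowd error comes out far below the average individual error. Training an LLM on web-scale text, and then tuning it on many users' preferences via RLHF, can be read as a far more complex version of the same aggregation idea.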

Oct 30, 2024

Kai Riemer on AI as non-judgmental coach, AI fluency, GenAI as style engines, and organizational redesign (AC Ep67)

“AI is more of an occasion for organizational redesign than it is a solution to that redesign. However, it’s a great amplifier—it will amplify your problems, and it will amplify good organizational design.” – Kai Riemer About Kai Riemer Kai Riemer is Professor of Information Technology and Organisation, and Director of Sydney Executive Plus, at the University of Sydney Business School. He works with boards and executives to bring foresight expertise and deep understanding of emerging technologies into strategy and leadership. Kai co-leads the Motus Lab for research on digital human technology, and is co-author of The Global 2025 Skills Horizon initiative. LinkedIn: Kai Riemer Blog: byresearch.wordpress.com Google Scholar page: Kai Riemer Research Gate: Kai Riemer University Profile: Kai Riemer What you will learn Understanding AI’s role in organizational decision-making How AI can enhance personal productivity for leaders Using generative AI as a team facilitator and coach The importance of upskilling for AI fluency Addressing the risks of anthropomorphizing AI AI as an amplifier for good and bad organizational design Redesigning work structures to fully harness AI’s potential Episode Resources AI (Artificial Intelligence) IBM University of Sydney The 2025 Skills Horizon – Sydney Executive Plus Business Model Canvas ChatGPT Harvard Business Review The Economist South by Southwest (SXSW) NotebookLM Sydney Executive Plus AI fluency sprint Generative AI Predictive AI Large Language Models (LLMs) Pre-trained models Quantum computing Turing test Reinforcement learning Organizational cognition Geopolitics Net Zero Digital ethics Nicola Moreau Transcript Ross Dawson: Kai, it is wonderful to have you on the show. Kai Riemer: Thank you. Thanks for having me. Ross: So for many years, you’ve been digging into the impact of AI and other technologies on organizations, on leadership, and how we can do things more effectively.
So, just as a starting point, one thing about organizational decisions, particularly more complex decisions—where are we today in AI, being able to augment or improve, to assist humans in making better decisions in organizations? Kai: Oh boy, that’s a big question. It obviously depends on what kind of AI we are talking about and at what level. I think we are in a place of great uncertainty when it comes to the future role of AI and generative AI. We still need to put in a lot of effort to educate people, particularly decision-makers, about what this technology can do, where it should be applied, and how it should be part of making decisions. We often distinguish AI as a systems technology that we make part of organizational systems. We might have a bespoke chatbot that we train, fine-tune, and put into service with limited autonomy, providing information. On the other hand, AI for personal productivity involves how AI becomes part of people’s daily lives and decision-making as they work with the technology. It depends on how skillful the human is in working with AI. The lazy approach is to ask questions and accept whatever answer the AI provides, which typically results in average decision-making. Better approaches involve including AI in reflection tasks, asking it to question your thinking, and taking new aspects into account that AI provides. Education is needed on two levels—getting decision-makers to understand AI beyond generative AI, because there’s still predictive AI, image recognition, and others that improve processes—and upskilling to use AI as a powerful assistant in daily work. Misunderstandings persist about how this technology works and how to use it productively. There’s no one-size-fits-all. Ross: As you said, AI can assist in personal productivity for individuals at all levels. Are there any configurations for group decision-making, such as boards or executive teams, where both traditional AI and generative AI can assist? 
Kai: I think generative AI has a lot to offer. Given that it encodes patterns from the corpus of human text, many management frameworks and tools are embedded in these networks, which we can make use of. In our team, we held a workshop session and used AI to help fill out the Business Model Canvas. The AI, in this case ChatGPT, asked us questions about each section, and we discussed them as a team. AI served as a coach or moderator, structuring the conversation. We weren’t drawing on AI for answers, but for guidance. There are organizations doing similar interesting things, though some operate behind NDAs. For example, IBM’s global HR officer, Nicola Moreau, talked about their generative AI assistant, which helps employees ask questions about entitlements and HR policies. It increased inclusiveness, particularly in cultures where people hesitate to ask superiors questions. Ross: You mentioned the Skills Horizon Report. With the shifting skills landscape, where do you see the most pointed need for skills or capabil

Oct 23, 2024 · 32 min

Marc Ramos on organic learning, personalized education, L&D as the new R&D, and top learning case studies (AC Ep66)

“The craft of corporate development and training has always been very specialized in providing the right skills for workers, but that provision of support is being totally transformed by AI. It’s both an incredible opportunity and a challenge because AI is exposing whether we’ve been doing things right all along.” – Marc Steven Ramos About Marc Steven Ramos Marc Ramos is a highly experienced Chief Learning Officer, having worked in senior global roles with Google, Microsoft, Accenture, Novartis, Oracle, and other leading organizations. He is a Fellow at Harvard’s Learning Innovation Lab, with his publications including the recent Harvard Business Review article, A Framework for Picking the Right Generative AI Project. LinkedIn: Marc Steven Ramos Harvard Business Review Profile: Marc Steven Ramos What you will learn Navigating the post-pandemic shift in corporate learning Balancing scalable learning with maintaining quality Leveraging AI to transform workforce development Addressing the imposter syndrome in learning and development teams Embedding learning into the organizational culture Utilizing data and AI to demonstrate training ROI Rethinking the role of L&D as a driver of innovation Episode Resources AI (Artificial Intelligence) L&D (Learning and Development) Workforce Development Learning Management System (LMS) Change Management Learning Analytics Corporate Learning Blended Learning DHL Ernst & Young (EY) Microsoft Salesforce.com ServiceNow Accenture ERP (Enterprise Resource Planning) CRM (Customer Relationship Management) Large Language Models (LLMs) GPT (Generative Pretrained Transformer) RAG (Retrieval-Augmented Generation) Movie Sideways Transcript Ross: Marc, it is wonderful to have you on the show. Marc Steven Ramos: It is great to be here, Ross. Ross: Your illustrious career has been framed around learning, and I think today it’s pretty safe to say that we need to learn faster and better than ever before. So where do you think we’re at today?
Marc Steven: I think from the lens of corporate learning or workforce development, not the academic, K-12 higher ed stuff, even though there’s a nice bridging that I think is necessary and occurring, it is a tough world. I think if you’re running any size learning and development function in any region or country and in any sector or vertical, these are tough times. And I think the times are tough in particular because we’re still coming out of the pandemic, and what was in the past, live in person, instructor-led training has got to move into this new world of all virtual or maybe blended or whatever. But I think in terms of the adaptation of learning teams to move into this new world post-pandemic, and thinking about different ways to provide ideally the same level of instruction or training or knowledge gain or behavior change, whatever, it’s just a little tough. So I think a lot of people are having a hard time adjusting to the proper modality or the proper blends of formats. I think that’s one area where it’s tough. I think the other area that is tough is related to the macroeconomics of things, whether it’s inflation. I’m calling in from the US and the US inflation story is its own interesting animal. But whether it’s inflation or tighter budgets and so forth, the impact on the learning functions and other functions, other support functions in general, it’s tighter, it’s leaner, and I think for many good reasons, because if you’re a support function in legal or finance or HR or learning, the time has come for us to really, really demonstrate value and provide that value in different forms of insights and so forth. So the second point, in terms of where I think it is right now, the temperature, the climate, and how tough it is, I think the macroeconomic piece is one, and then clearly there’s this buzzy, brand new character called AI, and I’m being a little sarcastic, but not really, I think, when you look at it from a learning lens.
I think a lot of folks are trying to figure out the good side first, right? How can I really make my courses faster and better and cooler, and create videos faster? This text-to-XYZ media is cool, but it’s still kind of hypey, if that’s even a word. But what’s really interesting? And I’m framing this just as a person that’s managed a lot of L&D teams, it’s interesting because there’s this drama below the waterline of the iceberg of pressure, in the sense that because AI can do all this stuff, it’s kind of exposing whether or not the human training person has been doing things correctly all this time. So there’s this newfound-ish imposter syndrome that I think is occurring within a lot of support functions, again, whether it’s legal or HR, but I

Oct 16, 2024

Alex Richter on Computer Supported Collaborative Work, webs of participation, and human-AI collaboration in the metaverse (AC Ep65)

“Trust is a key ingredient when you look into Explainable AI; it’s about how can we build trust towards these systems.” – Alex Richter About Alex Richter Alexander Richter is Professor of Information Systems at Victoria University of Wellington in New Zealand, where he has also been Inaugural Director of the Executive MBA and Associate Dean. He specializes in the transformative impact of IT in the workplace. He has published more than 100 articles in leading academic journals and conferences, winning several best paper awards, and his work has been covered by many major news outlets. He also has extensive industry experience and has led over 25 projects funded by companies and organizations, including the European Union. Website: www.alexanderrichter.name University Website: people.wgtn.ac.nz/alex.richter LinkedIn: Alexander Richter Twitter: @arimue Publications (Google Scholar): Alexander Richter Publications (ResearchGate): Alexander Richter What you will learn The significance of CSCW in human-centered collaboration Trust as a cornerstone of explainable AI Emerging technologies enhancing human-AI teamwork The role of context in sense-making with AI tools Shifts in organizational structures due to AI integration The importance of inclusivity in AI applications Foresight and future thinking in the age of AI Episode Resources CSCW (Computer Supported Cooperative Work) AI (Artificial Intelligence) Explainable AI Web 2.0 Enterprise 2.0 Social software Human-AI teams Generative AI Ajax Meta (as in the company) Google Transcript Ross: Alex, it’s wonderful to have you on the show. Alex Richter: Thank you for having me, Ross. Ross: Your work is fascinating, and many strands of it are extremely relevant to amplifying cognition. So let’s dive in and see where we can get to. You were just saying to me a moment ago that the origins of a lot of your work are around what you call CSCW. So, what is that, and how has that provided a framework for your work?
Alex: Yeah, CSCW (Computer-Supported Cooperative Work) or Computer-Supported Collaborative Work is the idea that we put the human at the center and want to understand how they work. And now, for quite a few years, we’ve had more and more emerging technologies that can support this collaboration. The idea of this research field is that we work together in an interdisciplinary way to support human collaboration, and now more and more, human-AI collaboration. What fascinates me about this is that you need to understand the IT part of it—what is possible—but more importantly, you need to understand humans from a psychological perspective, understanding individuals, but also how teams and groups of people work. So, from a sociological perspective, and then often embedded in organizational practices or communities. There are a lot of different perspectives that need to be shared to design meaningful collaboration. Ross: As you say, the technologies and potential are changing now, but taking a broader look at Computer-Supported Collaborative Work, are there any principles or foundations around this body of work that inform the studies that have been done? Alex: I think there are a couple of recurring themes. There are actually different traditions. For my own history, I’m part of the European tradition. When I was in Munich, Zurich, and especially Copenhagen, there’s a strong Scandinavian tradition. For me, the term “community” is quite important—what it means to be part of a community. That fits nicely with what I experienced during my time there with the culture. Another term that always comes back to me in various forms is “awareness.” The idea is that if we want to work successfully, we need to have a good understanding of what others are doing, maybe even what others think or feel. That leads to other important ingredients of successful collaboration, like trust, which is currently a very important topic in human-AI collaboration. 
A lot of what I see is that people are concerned about trust—how can we build it? For me, that’s a key ingredient. When you look into Explainable AI, it’s about how we can build trust toward these systems. But ultimately, originally, trust between humans is obviously very important. Being aware of what others are doing and why they’re doing it is always crucial. Ross: You were talking about Computer-Supported Collaborative Work, and I suppose that initial framing was around collaborative work between humans. Have you seen any technologies that support greater trust or awareness between humans, in order to facilitate trust and collaboration through computers? Alex: In my own research, an important upgrade was when we had Web 2.0 or social software, or social media—there are many terms for it, like Enterprise 2.0—but basically, these awareness streams and the simplicity of the platforms made it easy to post and share. I think there were great concepts before, but finally, thanks to Ajax and

Oct 9, 2024

Jack Uldrich on unlearning, regenerative futures, nurturing creativity, and being good ancestors (AC Ep64)

“Each of us is creative in our own way. We have the ability to create our own future, but we must first understand that we are creative.” – Jack Uldrich About Jack Uldrich Jack Uldrich is a leading futurist, author, and speaker who helps organizations gain the critical foresight they need to create a successful future. His work is based on the principles of unlearning as a strategy to survive and thrive in an era of unparalleled change. He is the author of 9 books including Business As Unusual. Website: www.jackuldrich.com LinkedIn: Jack Uldrich Facebook: Jumpthecurve YouTube: @ChiefUnlearner X: @jumpthecurve Books: Green Investing: A Guide to Making Money through Environment Friendly Stocks Foresight 20/20: A Futurist Explores the Trends Transforming Tomorrow Soldier, Statesman, Peacemaker: Leadership Lessons from George C. Marshall The Next Big Thing Is Really Small: How Nanotechnology Will Change the Future of Your Business Jump the Curve: 50 Essential Strategies to Help Your Company Stay Ahead of Emerging Technologies Into the Unknown: Leadership Lessons from Lewis & Clark’s Daring Westward Expedition Business As Unusual: A Futurist’s Unorthodox, Unconventional, and Uncomfortable Guide to Doing Business A Smarter Farm: How Artificial Intelligence is Revolutionizing the Future of Agriculture Higher Unlearning: 39 Post-Requisite Lessons for Achieving a Successful Future What you will learn Embracing humility in future thinking The power of silence and meditation Navigating low-probability, high-impact events Why asking the right questions matters The role of AI in shaping human history Building resilience for uncertain futures Unleashing creativity to create a better world Episode Resources OpenAI ChatGPT Claude Pi Anthropic Cascadia Subduction Zone The New Yorker Artificial Intelligence (AI) Regenerative future People Ray Kurzweil Nassim Taleb Suleiman Harari Jonas Salk Film The Black Swan Books The Singularity Is Near by Ray Kurzweil Sapiens by Yuval Noah
Harari Homo Deus by Yuval Noah Harari Transcript Ross: Jack, it is awesome to have you on the show. Jack Uldrich: It’s a pleasure to be here. Ross: You’ve been thinking about the future and helping others think about the future for a very long time now. So what’s the foundation of how you do that? Jack: The foundation, I would say, is silence. First, it’s meditation. I actually try to get to the thought beyond the thought. And what I mean here is, I’m always looking for insights, but in order to do that, I first have to free myself of all my old habits, assumptions, and other ways of thinking. And so on a daily basis, I do try to meditate on that, and then I look for insights. And I want to make this clear, I’m not looking for conclusions. As soon as you’ve locked yourself into a conclusion or what you think the future is going to be, you’re going to get yourself in trouble. But insights, I do think we can come to insight. So I’ll just sort of step back and say that’s where I start — silence, contemplation, meditation. Ross: That is absolutely awesome. I think this goes to this idea of fluid thinking, as in, there are a lot of people whose thinking is rather rigid: they think a particular way, and if you ask a year or two or ten later, they’re thinking the same way, whereas that doesn’t quite work when the world is changing around you. Jack: No, that’s right. And so the next thing I would say is, I hope to sort of disabuse people of what they think futurists do. I’m quite clear in saying, first, I definitely don’t try to predict the future, but nor do I say I have the answer to the future. But having said that, that doesn’t absolve any of us of a more important responsibility, and if none of us have the answer to the future, we have to be sure we’re asking the best possible questions of the future.
Frequently, when I see why businesses or organizations miss the future or why they went bankrupt, it’s not because they weren’t bright and intelligent, or lacked capable C-level staff; it’s primarily because they were answering the wrong question. They just didn’t understand either how technological change had shifted their business, their business model, their customer expectations, or they didn’t understand what their competitors were up to. So I spent a lot of time trying to make sure I’m asking the best possible questions of the future, while at the same time always having humility about the idea that there’s got to be a question I’m missing. And so I fall back on this idea of humility quite a bit, because it’s not what we know that gets us in trouble. It’s what we think we know, that we just don’t. And so we have to have humility as we approach the future. Ross: Yes, yes. And that’s something that we don’t see quite enough of in the world when we look

Oct 3, 2024

Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures (AC Ep63)

“The beauty of generative AI is that it’s incredibly elastic. With a strong NLU, you can orchestrate different services to do various tasks. Whether it’s something simple like booking a vacation or scheduling a meeting, or something more complex like running a state-of-the-art deep learning model with an AI-powered agent, it becomes really interesting.” – Lindsay Richman About Lindsay Richman Lindsay Richman is the co-founder and director of product and machine learning at Innerverse, a platform that creates AI-powered simulations to help users build confidence and emotional awareness. She previously worked in product management and AI for leading companies including Best Buy and McKinsey & Co. She was nominated for VentureBeat’s Top Women in AI Awards. Company Website: www.innerverse.ai LinkedIn: Lindsay Richman AI Accelerator Institute Profile: Lindsay Richman GitHub Profile: Lindsay Richman What you will learn Lindsay Richman’s journey into AI and machine learning The evolution of natural language processing and AI agents How AI-driven simulations enhance personal and professional growth The role of generative AI in orchestrating complex tasks Ethical considerations in AI development and its applications The importance of diversity in building AI systems Collaboration between humans and AI for future innovation Episode Resources Innerverse Artificial Intelligence NLU (Natural Language Understanding) GPT-3.5 GPT-4 Best Buy Google Dialog Flow Google Vertex NLP (Natural Language Processing) ElevenLabs Python React Support vector machines Dimensionality reduction Machine learning Climatology Soul Machines Metahumans Unreal Engine Synesthesia Pokemon Go Agile Claude Opus Gemini 1.5 Pro HBR (Harvard Business Review) Teranga Wolof The Dark Crystal Jim Henson Skeksis LLMs (Large Language Models) APIs (Application Programming Interfaces) Transcript Ross: Hi, Lindsay! It’s a delight to have you on the show. Lindsay Richman: Thank you. I appreciate you inviting me.
I’m very excited. Ross: So you are taking some very interesting and innovative approaches to using AI to amplify cognition in the broader sense. So first of all, how did you come to this journey? How has this become your life’s work? Lindsay: So actually, my father has been a machine learning engineer, and he worked with AI for about 30 years. He’s semi-retired now, but he was a professor who worked in climatology, and he did prediction models. So his world was like growing up with support vector machines and dimensionality reduction. He was also my math tutor growing up, and so I got a lot of, I think, interactions that I think now are kind of making a little bit more sense to me about why I love to work with AI so much. But he really, I think, inculcated a lot of creativity in me. And I was always interested in his work. And then I’m kind of a nontraditional engineer. I started working with Python maybe seven years ago, because I was using Excel for things. I was on a PC, or a Mac rather, and I was looking at macros, and there was no documentation. So a lot of people were using Python at the time instead of Excel. And I started using that. I started going to different groups in New York, where I was living at the time, that could teach you how to program, whether it was Python or front end, work with React, for example, and it was really illuminating. And I realized just how much creativity there was in engineering. And I really have always loved machine learning engineering because of my dad, but also because of a background in linguistics. And I actually taught when I was in grad school studying linguistics. So it’s always been really interesting to think about language and how people develop, and how anything can develop, whether you’re an animal or potentially even a plant that has a circulatory system.
It’s really interesting to think about how different living things develop, and so that kind of brought me into the world of cognition, because I think that we’re at a really interesting period. Because for a very long time, and I’ve been working kind of in the, I guess, the natural language processing and understanding part of deep learning and AI for probably five years now, generally with conversational AI, sometimes in more of an engineering role, sometimes in more of a product manager role. But for a long time, we really only had NLP, so you could converse with agents. But usually it was a bit limited. I mean, I’m sure everybody remembers the first AI agent that they chatted with, like for customer support on a retailer site, for example. And when I worked at Best Buy, a really large electronics company mainly based in the US, it was interesting. I worked with an agent that handled millions of different chats, but was probably pretty ru

Sep 25, 2024

Mohammad Hossein Jarrahi on human-AI symbiosis, intertwined automation and augmentation, the race with the machine, and tacit knowledge (AC Ep62)

“We have unique capabilities, but it’s crucial to understand that today’s AI technologies, powered by deep learning, are fundamentally different. We need a new paradigm to figure out how we can work together.” – Mohammad Hossein Jarrahi About Mohammad Hossein Jarrahi Mohammad Hossein Jarrahi is Associate Professor at the School of Information and Library Science at University of North Carolina at Chapel Hill. He has won numerous awards for teaching and his papers, including for his article “Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making.” His wide-ranging research spans many aspects of the social and organizational implications of information and communication technologies. Website: Mohammad Hossein Jarrahi Google Scholar Profile: Mohammad Hossein Jarrahi LinkedIn: Mohammad Hossein Jarrahi Article: Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making   What you will learn Exploring the concept of human-AI symbiosis Understanding AI’s role in automation and augmentation The difference between intuition and data-driven decision making Why AI excels at repetitive, data-centric tasks The importance of emotional intelligence in human-AI collaboration Balancing efficiency and innovation in AI applications Building mutual learning between AI systems and humans Episode Resources IBM NPR ChatGPT deep learning Skype Human-AI symbiosis Harvard Business Review Turing test algorithmic management machine learning data provenance Reddit Mayo Clinic natural language processing (NLP) “Man-Computer Symbiosis” intelligence augmentation People Kevin Kelly JCR Licklider Articles Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making by Mohammad Hossein Jarrahi What Will Working with AI Really Require? by Mohammad Hossein Jarrahi, Kelly Monahan and Paul Leonardi   Transcript Ross Dawson: Mohammed, it’s wonderful to have you on the show. 
Mohammad Hossein Jarrahi: Very glad to be here. Ross: So you have been focusing on human AI symbiosis. I’d love to hear how you came to believe this is the thing you should be focusing your energy and attention on. Mohammad: I was stuck in traffic in 2017, if I want to tell you the story. It was a conversation with an IBM engineer on NPR, and they were asking him a bunch of questions about what the future of AI looks like. This was still before ChatGPT and what I would call the consumerization of AI, and it clicked. When you’re stuck in traffic, you don’t have much to do. So that was really the moment that I figured out he was basically providing examples that fit these three categories of uncertainty, complexity, and equivocality. I went home immediately and started sketching the article and wrote the article in two weeks. But the idea was, we have very unique capabilities. It’s a mistake to underestimate what we can do, but also understanding that these technologies, the smart technologies that we are witnessing today, at that time, were very empowered by deep learning. They’re inherently different from the previous information technologies we’ve been using. So it requires a very different type of paradigm to understand how we can work together. These technologies are not going to make us extinct, but they shouldn’t be thought of as infrastructure technology like Skype or the other communication and information technologies that have been used in the past, inside and outside of organizations. So I figured this human-AI symbiosis terminology, which comes from biology, is a very nice way to understand how we as two sources of intelligence can work together. Ross: Yeah, also very aligned, of course, with my work and people I engage with. I suppose the question is, how do we do it? There’s too few, but quite a few who are engaged in this path. So what are the pathways?
We don’t have the answers yet, but what are some of the pathways to be able to move towards human AI symbiosis? Mohammad: I think we talked about this a bit earlier. It really depends on the context. Now, from this point on, that’s really the crux of the issues in the articles I’ve been writing. It really depends on a specific organizational context, how much you can delegate, because we’ve got this dichotomy, which is not really a dichotomy; they’re all intertwined: automation and augmentation. Artificial intelligence systems provide these dual affordances. They can automate some of our work and they can augment some of our work. And there is a difference between the two concepts: automation is like doing it somehow autonomously, with a little bit of supervision. Augmentation, we are very involved. We are implicated in the process, but they are just making us more efficient and more effective. You can think about many example

Sep 18, 2024

Sir Andrew Likierman on six elements for improving judgement, increasing awareness, and the comparative advantages of humans over AI (AC Ep61)

“Machines are amazing, but they can’t do certain things that only human beings can, like exhibit consciousness, ethics, or the ability to develop social bonds involving emotions, trust, loyalty, and empathy.” – Andrew Likierman About Sir Andrew Likierman Sir Andrew Likierman is Professor and former Dean of the London Business School. Previous roles include Head of the UK Government Accountancy Service and Director of the Bank of England and Barclays Bank. He was knighted in 2001. His current research is on human judgment, with his new book Judgement at Work to be released in January 2025. Wikipedia Profile: Sir Andrew Likierman London Business School Profile: Sir Andrew Likierman ResearchGate Profile: Sir Andrew Likierman LinkedIn: Sir Andrew Likierman Book: Judgement at Work: Making Better Choices What you will learn Understanding the six elements of good judgment How intuition and experience shape decision-making Balancing gut feel and logical reasoning in choices The impact of awareness on better judgment Differences between human judgment and AI capabilities Why context shifts are crucial in decision-making Integrating human and AI for more effective outcomes Episode Resources People Herbert Simon Danny Kahneman Malcolm Gladwell Karl Weick Tim O’Reilly Heraclitus University of London Harvard Business Review AI (Artificial Intelligence) Industrial Revolution Pattern recognition Books Blink: The Power of Thinking Without Thinking by Malcolm Gladwell Judgement at Work: Making Better Choices by Andrew Likierman Transcript Ross Dawson: Andrew, it’s a delight to have you on the show. Andrew Likierman: Ross, thank you very much for inviting me. Ross: So you have had a long and illustrious career with all sorts of interests that you’ve dealt with over time, and you have spent a lot of time now thinking about judgments. How have you come to this point?
Andrew: Well, look, I’ve had the pleasure and privilege of working in commercial organizations, in public life and in academic life, and what I’ve seen wherever I’ve been is that judgment is a very, very important quality. And I was intrigued a few years ago to think about the question, all right, so what is judgment? How do we know somebody’s got it? How can we improve our own? If it’s so important, then why aren’t we talking more about it? Why aren’t we including it more? So my work has been to try and pin down what judgment is and how we can use it, in the face of many people who’ve said, Oh, it’s all, you know, you can’t possibly do that. You know, it’s sort of out there. We don’t know quite what it is. Well, I believe we do know what it is, and that helps, because we can then help people to improve it. Ross: Well, I think it’s a very important quest, because some people have good judgment, others don’t, and there seem to be very few really structured ways to be able to help improve that. So in a relatively recent Harvard Business Review article, and I believe your forthcoming book, you’ve laid out a framework for what are the key elements and how it is we can improve those. So can you share that in a nutshell? Andrew: Of course, look, I won’t go into very much detail, but just in outline. The reason for having a framework is so that we can identify what it is we need to do to exercise good judgment. Because rather than just thinking vaguely, you know, am I exercising good judgment, and was that a good choice? The framework helps to identify the kind of things one ought to be looking at. And just to be completely clear, I’m not suggesting that you go through this in a mechanical way. What I’m suggesting is that identifying any element of this framework is better than nothing, and the more, I believe, one can go through the framework and adopt what it suggests, the better one’s chances of making a good choice. So what is it? It’s got six elements.
The first one starts with what we know and our experience relevant to whatever it is we’re making a choice about. And I’m going to take an example of going on holiday. Let’s say we go to a place which is very familiar to us, and we’ve been there many years already, so we’ve got lots of knowledge and experience. We know what to expect, where the beach is, where the good restaurants are, and so on. If we’ve not been to this place before, it’s all exploration. We can do a lot of work beforehand, but actually we’ve got to make a lot of, often quite difficult choices, because we don’t know. We haven’t got that experience. So the first thing in any choice is, what is the relevant knowledge and experience we’ve got? Then we go on to the question of awareness. When we enter any situation, we need to be aware of what’s going on. And again, taking the holiday analogy, if we go into one part of t

Sep 11, 2024

Sylvia Gallusser on signals of the future, vivid scenarios, awareness practices, and envisioning meditations (AC Ep60)

“It’s not just about foreseeing; it’s also about feeling and sensing. It’s about imagining the smells and sounds of the future. It’s really about being an active player in your future, an active builder of the future.” – Sylvia Gallusser About Sylvia Gallusser Sylvia Gallusser is Founder and CEO of Silicon Humanism, a futures thinking and strategic foresight consultancy. Previous roles include a variety of strategic roles at Accenture, Head of Technology at Business France North America, General Manager at French Tech Hub, and Co-founder at big bang factory. She is also a frequent keynote speaker and author of speculative fiction. Blog: Silicon Humanism X: @siliconhumanism LinkedIn: Sylvia Gallusser LinkedIn (Company): Silicon Humanism What you will learn Exploring multidisciplinary approaches to future thinking Using foresight meditation to visualize possible scenarios The power of signals in understanding future trends Amplifying cognition through creativity and fiction The importance of history and sociology in futurism Transforming future visions into actionable strategies Addressing truth and deepfakes in the digital age Episode Resources Accenture Silicon Humanism STEEPLE University of Houston Hawaii University Apple TV+ X Facebook Adobe Firefly ChatGPT Generative AI Deepfake Liar’s Dividend Jim Dator TV Series Black Mirror (TV series) Extrapolations (TV series) Silo (TV series) Transcript Ross Dawson: Sylvia, it’s wonderful to have you on the show. Sylvia Gallusser: Hi, Ross! Delighted to be on the show. Thank you so much for having me. Ross: So you delve into the future and help people do that. How do you help your clients or people you work with to think more effectively about this wonderful world of the future? Sylvia: That’s a question I love to have an answer to, and I really hope we can always have more people enter the future thinking field.
So I started actually working in technology and strategy for quite a long time, mostly with entrepreneurs at first; but coming from a multidisciplinary background, I really found it interesting how we can bring different disciplines to help people think about the future and today. There are really, I like to say, two different ways, two different paths to arrive at future thinking. There are very formal ones where you would go academic about it, you would attend university programs. And there are tons of great programs I’m sure you’ve heard about, from the University of Houston to Finland to Hawaii University and so on. So there are already a lot of really great programs. But at the same time, what you see in the profession is that a lot of futurists are coming from more diverse backgrounds, having started a career in other industries, and I like to talk about it as a second choice career. And you see people coming from marketing, strategy, HR, sometimes also some artists, technologists, psychologists. So there’s really an interesting variety of professions that can lead you to think about the future. Because, and that’s really the topic of your podcast here, it’s about amplifying cognition. So we really do believe that future thinking is the way to amplify the way we think about the future. So for example, the way I started: well, if you’re interested, maybe I can zoom in a bit on my own way of bringing people around me to think about the future. I started actually as a strategy consultant for maybe 15 years, working first with Accenture clients in France, then moving to the French embassy in the US and working more with entrepreneurs, to finally start working with students and a variety of individuals around the future.
So I created my own company, which is called Silicon Humanism, and on top of having a more general strategy toolbox, I’m really happy to always include other tools like fiction, or popular fiction, for example, that can help us think about the future. I also love envisioning meditations, helping people develop their own mindset and extend their reasoning about the future. We also use a lot of gaming to help bring scenarios to life. But ultimately, what’s really important when I work with clients is to go from the envisioning to really the action planning. So that’s why, for me, strategy is really a complement to the foresight futurist toolbox that we have. Ross: So there’s a lot there to dig into; let me come back to multidisciplinarity. I think I agree that to be an effective futurist, you do need to bring together a wide variety of disciplines and exposures and experiences, as I do and many of our colleagues do, but I think the big part is it’s not being the futurist for others. It’s helping people to be their own futurist, to bring together their own thinking, and to expand how it is they think effectively about

Sep 4, 2024

Erica Orange on constant evolution, lifelong forgetting, robot symbiosis, and the power of imagination (AC Ep59)

“We all have to acquire new information to stay relevant. But if we’re piling new information onto outdated thinking, we need to become more comfortable with lifelong forgetting.” – Erica Orange About Erica Orange Erica Orange is a futurist, speaker, and author, and Executive Vice President and Chief Operating Officer of leading futurist consulting firm The Future Hunters. She has spoken at TEDx and keynoted over 250 conferences around the world, and been featured in news outlets including Wired, NPR, Time, Bloomberg, and CBS This Morning. Her book AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth is out in September 2024. Website: www.ericaorange.com LinkedIn: @ericaorange YouTube: @EricaOrangeFuture X: @ErOrange Book: AI + The New Human Frontier: Reimagining the Future of Time, Trust + Truth What you will learn Lifelong learning vs. lifelong forgetting The intersection of humans and technology The importance of imagination in the future of work The role of judgment in an AI-driven world Navigating the blurred lines between reality and AI Rethinking education for a digital age The evolving workplace and redefining workspaces Episode Resources AI (Artificial Intelligence) The Future Hunters Deepfake Generative AI ChatGPT Neural wiring Virtual reality Hybridized work The Future of Work People Keith Johnstone George Bernard Shaw Isaac Asimov H.G. Wells Transcript Ross Dawson: Erica, it’s a true delight to have you on the show. Erica Orange: Ross, thank you so much for having me, I’m so happy to be here. Ross: So you have been a very long time futurist, and I think it’s pretty fair to say that you’ve also been a believer in humans all along the way. 
Erica: Yes, I have to say I’ve been a believer in humans for far longer than I have been a futurist, but I have been doing this work, my goodness, for the better part of close to two decades at this point, really knowing that so much is moving really quickly, with obviously the biggest thing today being the pace of technological change. But when you strip back the layers, I’ve always come back to the one kind of central thesis and the one very central and core understanding that we are inextricably linked with all of these trends. Whether it’s technological trends or sociocultural trends, we cannot really be extricated from that equation. My interest has always been in more of the psychological component to the future, right? I was a psychology major in college, and I never really knew exactly how that was going to serve me, and never in a million years did I think that it would be applied to this world of futurism that I didn’t even know existed when I was 18 years old, but that thinking has really informed much of how I do what I do. Ross: Yes, it’s always this aspect of ‘humans are inventors’. We create technologies of various kinds which change who we are. So this is a wonderful self-reinforcing loop, the classic thing of ‘we create our tools, and our tools create us’. And this cycle of growth. Erica: Right? Everything is always a constant evolution. It’s just that that piece of evolution is very different depending on who or what it’s applied to. So at this moment of our history, technological evolution is outpacing human evolution, but the biggest question mark is, will we be able to catch up? Will we be able to double down on those things that make us uniquely human? Will we be able to, even economically, and when it comes to the future of work, be able to reprioritize what those unique human skill sets are going to be?
And basically, for the sake of not putting it very poetically, will we be able to get our heads screwed on right now and for the indeterminate future, so that we are not in a position where technology has passed us by, where we actually have a very unique role to play, and we know how we can really compete and thrive and succeed in this world that is just full of so many unknowns. Ross: Absolutely. I agree that these are questions we can’t know whether we’ll be able to get through but I always say, ‘let’s start with the premise that we can’. And if so, how do we do it? What are the things that will allow us to be masters of, amongst other things, the tools we’ve created and to make these boons for who we are, who we can be, who we can become? Erica: That is such a great question. I think it comes down to something that I talk a lot about, which is really the difference between lifelong learning and lifelong forgetting. And it seems the most cliche nowadays to talk about lifelong learning. I always say, of course, it’s important to become a lifelong learner, right? We all have to become lifelong learners and acquire all of the new information that’s going to keep us relevant. But if we’re piling on new information onto outdated thinking, we have to become more comfortable becoming lifelon

Aug 28, 2024

Natalia Bielczyk on work in a BANI world, becoming our own Zen masters, AI in recruitment, and contagious empathy (AC Ep58)

“It’s not about the amount we say; it’s about making what we say really count. We can use some of these tools to write the long version, so that we can then quickly create the short version and really dial in.” – Natalia Bielczyk About Natalia Bielczyk Natalia Bielczyk is Founder & CEO of Ontology of Value, an R&D, EdTech, and consulting agency. She holds a PhD in Computational Neuroscience and is author of three books, including the forthcoming ‘The Longest Journey: The Ultimate Guide To Self-Navigation In the Job Market’. Website: www.nataliabielczyk.com LinkedIn: @nataliabielczyk X: @nbielczyk_neuro Facebook: @drnataliabielczyk Instagram: @nataliabielczyk Book: The Longest Journey: The Ultimate Guide To Self-Navigation in the Job Market What you will learn Exploring the impact of Black Swan events on the future of work Understanding the role of AI in accelerating job market trends Navigating the BANI world with better filtering mechanisms Balancing AI and human judgment in recruitment processes Emphasizing the importance of work ethic in the AI era Discovering personal productivity hacks for focused work Fostering empathy and kindness in a technology-driven workplace Episode Resources ChatGPT BANI vs VUCA Upwork Research Institute Artificial Intelligence LLMs Netflix The Coded Bias Machine learning People Joy Buolamwini Tony Robbins Books Thriving on Overload: The 5 Powers for Success in a World of Exponential Information by Ross Dawson Awaken the Giant Within : How to Take Immediate Control of Your Mental, Emotional, Physical and Financial Destiny! by Tony Robbins   Transcript Ross Dawson: Natalia, it’s a delight to have you on the show. Natalia Bielczyk: Thank you so much for your invitation. Ross, I’m honored to be here. Ross: So we have a changing world of work, and people have been talking about the future of work for quite a few years, and I think we’re already well into the future of work, but it’s changing fast. 
I’d love to start off by just getting your high-level perspective on what are the things that we should be looking to in shaping a better future of work? Natalia: Absolutely. Actually, ever since we faced the Covid-19 pandemic, I have a feeling that black swan events have been getting denser and denser, so it’s really hard to tell. From my perspective as a neuroscientist, I can say that research on the potential future of work is so much more challenging than neuroscientific research, because we cannot really foretell in the long run how these incoming black swan events, which by definition we cannot predict, will shape the future of work. Each one of them seems to not necessarily change the future of work, but more like accelerate the progress. So the Covid-19 pandemic didn’t qualitatively change the job market, but it sped up the processes that were already going on by 10 years. And then I believe that the premiere of ChatGPT was yet another such event: OpenAI was the first big tech company bold enough to actually release top-tier software to the public, and that prompted others to come to the scene. That was, again, just speeding up a process that was already going on. Most of these models were already in development for many years prior to the premiere of ChatGPT. It seems like one player came to the scene, others followed, and now we have almost an arms race, and that’s fundamentally changing the job market. So we don’t know what comes next. Maybe the US presidential elections will change the scene. Maybe. We cannot really tell what will happen with respect to global events and groundbreaking points in technology worldwide in the next 2, 3, 5 years. We can make some educated guesses for the future. In this episode, I’ll share some of my educated guesses. Obviously, it’s only a guess, but I hope that it’s useful as well. Ross: Well, I think it’s also not so much about guessing.
I mean, that’s part of the thing: being a futurist, you don’t try to predict, because we don’t know. But it’s around really saying, ‘what is it we can do that can shape a better future’? So there are all these forks in the road and uncertainties, and all sorts of extraordinary things will happen that we can’t predict. I think a lot of it is around saying, ‘well, if we want to create a better future of work, what is it that we need to be doing today’? That’s really the heart of the question. Natalia: Right. There are a few things that we should be doing as soon as possible. First of all, I think education is always the answer. Let me elaborate on this. At this moment, we live in the so-called BANI world, which is an abbreviation for brittle, anxious, nonlinear, and incomprehensible. It’s a new concept, yes, it’s been floating ar

Aug 21, 2024

Nikolas Badminton on cognitive vibration, AI for scenarios, psychological kinesiology, and quiet listening (AC Ep57)

“It’s not about the amount that we say. It’s about making what we say really count. We can use some of these tools to write the long one, so that we can then go ahead and very quickly write the short version and really dial in.” – Nikolas Badminton About Nikolas Badminton Nikolas Badminton is the Chief Futurist of the Futurist Think Tank. He is a world-renowned futurist speaker, award-winning author, and executive advisor, with clients including Disney, Google, J.P. Morgan, Microsoft, NASA, and many other leading companies. He is author of Facing Our Futures and host of the Exponential Minds podcast. Websites: www.nikolasbadminton.com www.futurist.com LinkedIn: Futurist Nikolas Badminton X: @nikolasfuturist Book: Facing Our Futures: How foresight, futures design and strategy creates prosperity and growth What you will learn The journey from business strategy to futurism The power of small, focused communities Integrating AI tools in future scenario exploration Balancing traditional research with generative AI Embracing the unexpected in creative processes Using spiritual practices to enhance cognitive abilities Fostering deeper discussions through listening and questioning Episode Resources Cyborg Camp Dark Futures ChatGPT Claude Gemini DALL-E Midjourney Stable Diffusion freelancer.com Evernote Vice Second Life Grof Breathwork Psychological kinesiology (Psych-K) AI (Artificial Intelligence) Generative AI Neural networks Grammatical inference Recognition linguistics People Amber Case Chris Dancy Kevin Kelly Bruce Sterling Jaron Lanier Douglas Rushkoff Terence McKenna Rob Hopkins Books From What Is to What If: Unleashing the Power of Imagination to Create the Future We Want by Rob Hopkins Cyberia by Douglas Rushkoff Transcript Ross Dawson: Nikolas, it’s awesome to have you on the show. Nikolas Badminton: It’s really, really good to be here, Ross. It’s long overdue, I think. Ross: Yes, indeed. So you are a futurist. A futurist is a person who thinks about the future. 
So you’ve got to make sense of the world and be able to think effectively and communicate that well. So, how do you amplify your ability to do that well? Nikolas: So it’s really interesting. If you go back about 12 years, I was making this movement from business strategy and data-driven work to creative work. I worked in the advertising industry, then worked in software platforms. I actually worked for an Australian company called freelancer.com for a while, and ran their ops in North America. As I leapt from that into the bigger, wider world of being a full-time futurist and working in that side of things, there are a few things. I mean, everything is sort of accretive. The first thing I found was that running meetups and running conferences was the lifeblood of really injecting new ideas and thoughts together and creating a microcosm and an ecosystem of sharing ideas. About 11 years ago, I ran a conference called Cyborg Camp YVR in Vancouver, with Amber Case and my friend, Carous O’Connell, whom I’d known for a very long period of time. It’s about the intersection of humanity and technology. And about 140 people flew from all over the world to come to this little conference. Amber Case was actually a really big draw, and she talked about cyborgs and cyborg anthropology. And what was interesting was creating this drive of information: having people like Chris Dancy, the most connected man in the world, involved in the overall organizing principles behind Cyborg Camp was really interesting, and he was collecting all the information and putting it all online in Evernote and making that available. Blogs were coming out of this. We made it into Vice, and slowly we were capturing a lot of information. And then I ran a Future Camp, which was an unconference on the future. I ran another conference called From Now.
And then I ran a series of events for about six years called Dark Futures, which some people were calling the Black Mirror of TED Talks. But needless to say, the first real accelerator of knowledge and intelligence augmentation for myself was all the people I could tap into and all the people that wanted to come on the journey. So community was the very beginning of that. Around about that time, I started doing a lot more keynotes, so I had to do a ton of research. And what I’ve got is a network of people that work in large organizations, in R&D departments, people that work in academia, and I could chat to them; that became a podcast that I run called Exponential Minds. It’s sort of occasional: I do a season every couple of years, and I bring in about 10 speakers to talk about various things. Ross: Just to backtrack a little bit. So this idea of communities, conference ev

Aug 15, 2024

Brian Magerko on AI to enhance human creativity, robot improv, music to learn coding, and improvisational dance with AI (AC Ep56)

“AI is not a collaborator. It’s an Oracle, it’s a tool, it’s a thing. I have a query, give me the answer. It’s not a thing where you sit down with the computer like, okay, let’s think about this problem together.” – Brian Magerko About Brian Magerko Dr. Magerko is a Professor of Digital Media, Director of Graduate Studies in Digital Media, and head of the Expressive Machinery Lab at Georgia Tech. His research explores how studying human and machine cognition can inform the creation of new human/computer creative experiences. Dr. Magerko has been research lead on over $15 million of federally-funded research; has authored over 100 peer reviewed articles related to computational media, cognition, and learning; has had his work shown at galleries and museums internationally; and co-founded a music-based learning environment for computer science – called EarSketch – that has been used by over 160K learners worldwide. Dr. Magerko and his work have been shown in the New Yorker, USA Today, CNN, Yahoo! Finance, NPR, and other global and regional outlets. Google Scholar Page: Brian Magerko LinkedIn: Brian Magerko Georgia Tech Profile: Brian Magerko YouTube: Brian Magerko What you will learn Exploring the roots of AI and cognitive science Improvisational AI in robotics and dance The journey of the EarSketch project Challenges in AI-driven collaborative creativity The importance of AI literacy and education Ethical considerations in AI development Envisioning the future of human-AI collaboration Episode Resources AI (Artificial Intelligence) Robot Improv Improvisational AI National Science Foundation EarSketch Python JavaScript Expressive Machinery Lab Large Language Models (LLM) Multimodal models People John Anderson Herb Simon Ken Koedinger Dave McLellan Jaime Carbonell Alan Newell Marvin Minsky Ilan Nourbakhsh Andrea Knowlton Jason Freeman Kristy Boyer Transcript Ross Dawson: Brian, it’s a delight to have you on the show. Brian Magerko: Oh, thanks for having me, Ross. 
Ross: So you’re a perfect guest, in many ways. You’ve been studying human and machine cognition, and how they shape creativity, for quite a long time now. So, just to hear a little bit of how you came here, and why this is the center of your work? Brian: I had the good fortune of being at Carnegie Mellon for my undergrad in the late 1990s. And there were a lot of folks doing really exciting work, since its inception, related to AI and cognition, so I got exposed to folks like John Anderson, who’s huge in the cognitive modeling community, Herb Simon, who wound up advising me, and Ken Koedinger, who has been one of the leading intelligent tutoring system minds since the 80s. So, being in the mix of all those great minds and being able to take classes with folks and do research really was a great place to start. Ross: Those are incredible people. Brian: Oh, yeah, right! Yeah, I took Dave McClellan’s neural networks class. And he wrote the book that we used. Jaime Carbonell, I took his improved AI class. Ross: So what was Herb Simon like? Brian: Herb Simon? As undergrads, we were just in awe of him, pretty much. I was friends with…there were five cognitive science majors at the time in our year, it was a huge class. We all put him on a really high pedestal, and taking his class was absolutely phenomenal, though I feel like I would have gotten much more out of it as a graduate student than as a scatterbrained undergraduate. He was kind enough to be my research advisor for my undergrad thesis, which was one of the first places where I was really putting together all of these ideas of studying human creativity and formalizing them computationally. Though I kind of went in this direction of wanting to do models of creativity, which is a very difficult environment to do creativity work in at the level that I was doing.
But he advised me as I was trying to study the tacit knowledge in jazz improvisers, while studying cognitive science and computer science at CMU. I was doing a jazz improv minor, because why not, I guess? I just wanted to explore the wide variety of things that interested me and take the opportunities that I had. A lot of my career is about synthesizing those things together, so my work with Herb was about studying jazz and jazz improvisers, which was the thing that I got exposed to and learned about as a student there. Yada, yada, yada, a lot of that informed the first NSF proposal that I ever wrote and got awarded, on improvisational theater and building formal representations of it. Ross: That’s incredible. And for those listening who don’t know, Herb Simon was a Nobel Laureate in economics and laid much of the foundation of modern decision theory. Brian: He’s also one of the progenitors of artificial intelligence. Ross: Well, yes. He was right there at the start. Brian: There was the

Aug 7, 2024

Claire Mason on collaborative intelligence, skills for GenAI use, workflow design, and metacognition (AC Ep55)

“It’s really important that we’re not ceding everything to AI and that we continue to add value ourselves in that collaboration.” – Claire Mason About Claire Mason Claire Mason is Principal Research Scientist at Australia’s government research agency CSIRO, where she leads the Technology and Work team and the Skills project within the organization’s Collaborative Intelligence Future Science Platform. Her team investigates the workforce impacts of Artificial Intelligence and the skills workers will need to effectively use collaborative AI tools. Her research has been published in a range of prominent journals including Nature Human Behaviour and PLOS ONE, and extensively covered in the popular media. Google Scholar Page: Claire M. Mason LinkedIn: Claire Mason CSIRO Profile: Dr. Claire Mason What you will learn Exploring collaborative intelligence with AI and humans Leveraging AI’s strengths and human expertise Enhancing medical diagnosis with sage patient management system Utilizing drones for faster rescue operations Essential skills for effective AI collaboration Productivity gains from generative AI in various industries Future research directions in AI and human teamwork Episode Resources CSIRO Artificial Intelligence IBM’s Deep Blue ChatGPT-4 Boston Consulting Group cybersecurity Generative AI metacognition Erik Brynjolfsson Transcript Ross Dawson: Claire, wonderful to have you on the show. Claire Mason: Thank you, Ross. Lovely to be here. Ross: So you are researching collaborative intelligence at CSIRO. So perhaps we could quickly say what CSIRO is. And also, what is collaborative intelligence? Claire: Thank you. Well, CSIRO stands for Commonwealth Scientific and Industrial Research Organisation. But more simply, it is Australia’s National Science Agency. We exist to support government objectives around social good and environmental protection, but also to support growth of industry through science.
And so we have researchers working in a wide range of fields, generally organized around challenges. And one of the key areas we’ve been looking at, of course, is artificial intelligence. It’s been called a general purpose technology, because its range of applications is so vast, and it is, at the very least, potentially transformative. And collaborative intelligence is about a specific way of working with artificial intelligence. So it’s about considering the AI almost as another member of a team or a partner in your work. Because up till now, most artificial intelligence applications have been about automating a specific task that was formerly performed by a human. But artificial intelligence has developed to the point where it is capable of seeing what we see, conversing with us in a natural way, and adapting to different types of tasks. And that makes it possible for it to collaborate with us: to understand the objective that we’re working on, communicate about the state of the objective, or even be aware of how the human’s state is changing over time, and thereby produce an outcome that you can’t break down into the bit that the AI did and the bit that the human did. It’s truly a joint outcome. And we believe that has the potential to deliver a step change in performance. Ross: Completely agree. Yeah, this is definitely high potential stuff. So you’re doing plenty of research. Some of it’s been published, some of it’s still yet to be published. So perhaps you can give us a couple of examples of what you’re doing, either in research or in practice, which can, I suppose, crystallize these ideas? Claire: Yeah, absolutely. So to begin with, the key element is that we’re trying to utilize the complementary strengths and weaknesses of human and artificial intelligence. So we know artificial intelligence is vastly superior in terms of dealing with very large amounts of data, and being able to sustain attention on very repetitive tasks or ongoing things.
So that means that often it’s very good when you’re dealing with a problem that requires very large amounts of data, or where you need to monitor something fairly continuously, because humans get bored. They are subject to cognitive biases and social pressures. So that’s one area of strength that the AI has. But the AI isn’t great at bringing contextual knowledge. It isn’t great at processing information from five different senses simultaneously yet. So it will also fail at common sense tasks that humans can perform really easily. And it can’t deal with novel tasks: if it hasn’t seen this type of task before, and it hasn’t seen what the correct response is, it can’t respond to it. So it’s also important to have the human in the loop, if you like. So, we actually developed a definition of what represented collaborative intelligence. And our criteria were that it had to be the human and the art
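The complementary-strengths picture Mason sketches (AI for data-heavy, repetitive monitoring; humans for novel, context-dependent tasks) can be illustrated with a toy routing function. This is purely an illustrative sketch, not CSIRO's system or definition; every field name and threshold below is invented for the example.

```python
# Toy illustration of the complementary-strengths idea described above:
# route each task to the AI or the human based on the characteristics
# Mason lists (novelty, need for context, repetitiveness, data volume).
# The task fields and the 10,000-item threshold are invented.
def route_task(task):
    if task["novel"] or task["needs_context"]:
        return "human"  # AI fails on unseen task types and contextual work
    if task["repetitive"] or task["data_items"] > 10_000:
        return "ai"  # humans get bored and are subject to biases here
    return "human_with_ai_assist"  # joint work: a truly shared outcome

tasks = [
    {"name": "monitor sensor feed", "novel": False, "needs_context": False,
     "repetitive": True, "data_items": 500_000},
    {"name": "triage unusual patient case", "novel": True,
     "needs_context": True, "repetitive": False, "data_items": 12},
]
for t in tasks:
    print(t["name"], "->", route_task(t))
```

In a real collaborative-intelligence system the "routing" is of course dynamic and negotiated within the workflow rather than a one-shot rule, but the sketch captures the division of strengths the interview describes.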

Jul 31, 2024

Markus Buehler on knowledge graphs for scientific discovery, isomorphic mappings, hypothesis generation, and graph reasoning (AC Ep54)

“If you read 1,000 papers and build a powerful representation, humans can interrogate, mine, ask questions, and even get the system to generate new hypotheses.” – Markus Buehler About Markus Buehler Markus Buehler is Jerry McAfee (1940) Professor in Engineering at Massachusetts Institute of Technology (MIT) and Principal Investigator of MIT’s Laboratory for Atomistic and Molecular Mechanics (LAMM). He has published over 450 articles with almost 50,000 citations and is on the editorial boards of numerous journals including PLoS ONE and Nanotechnology. He has received numerous awards including the Presidential Early Career Award for Scientists and Engineers (PECASE) and the National Science Foundation CAREER Award. In addition he is a composer and has worked on two-way translation between material structure and music. Wikipedia Profile: Markus J. Buehler Google Scholar Page: Markus J. Buehler LinkedIn: Markus J. Buehler MIT Page: Markus J. Buehler What you will learn Accelerating scientific discovery with generative knowledge extraction Understanding ontological knowledge graphs and their creation Transforming information into knowledge through AI systems The significance of ontological representations in various domains Visualizing knowledge graphs for human interpretation Utilizing isomorphic mapping to connect disparate concepts Enhancing human-AI collaboration for faster scientific breakthroughs Episode Resources Accelerating Scientific Discovery with Generative Knowledge Extraction, Graph-Based Representation, and Multimodal Intelligent Graph Reasoning by Markus J. Buehler Artificial intelligence (AI) ChatGPT-4 Generative AI Ontological knowledge graphs Transformer-based architectures Graph reasoning Beethoven’s Ninth Isomorphic mapping AlphaFold Infinite Corridor Claude 3.5 Apache 2.0 license MIT Transcript Ross Dawson: Markus, it is fantastic to have you on the show. Markus Buehler: Thanks for having me.
Ross: So you sent me a paper titled Accelerating Scientific Discovery with Generative Knowledge Extraction, Graph-Based Representation, and Multimodal Intelligent Graph Reasoning, and it totally blew my mind. So I want to use the opportunity to unpack it to a degree. It’s an 85-page paper, so obviously I won’t be able to go into all the detail, but I want to unpack the concepts, because I think they’re extraordinarily relevant, not just for accelerating scientific discovery, but across almost any thinking domain. It’s very, very rich and very promising, and very close to my interests. So let’s start off: essentially, you’ve taken a thousand papers, and from those have been able to distill some ontological knowledge graphs. So could you please explain ontological knowledge graphs, how they are created, and what they are? Markus: Sure, yeah, so the idea behind this sort of graph representation is really changing information into knowledge. And what that means is that we’re trying to take bits and pieces of information, like concept A and concept B: say, a flower, a composite, a car. And in these graph representations, we try to connect them to understand how a car, a flower, and a composite are related. Traditionally, we would create these knowledge graphs manually: essentially, we would create categories of what kinds of items we want to describe, and what the relationships might be. And then we would basically manually build these relationships into a graph representation. And we’ve done this for a couple of decades, actually. I think the first paper was 10 to 20 years ago. And yeah, back in the day, we did this manually, essentially understanding a certain scientific area. We would build graph representations of the knowledge that connect information, and understand structurally what’s going on.
And now, of course, in the paper, and we’ll probably talk more about this, we have been able to do this using generative AI technologies. And this allows us to, as you said, build these knowledge graphs for a thousand papers or more, and do it in an automatic way. So we don’t have to manually read the papers and understand them and then build the knowledge graph; we can actually have AI systems build these graphs for us. And this, of course, is a whole different level of scale that we can now access. Ross: So there is an important word there, ontological. What’s the importance of that? Markus: Yeah, so when we think about concepts, let’s say we take a look at biological materials, a lot of them are made from proteins. Proteins are made of amino acids. And there are certain rules by which you put amino acids together, which in turn are encoded by DNA. And depending on the pattern you have in the DNA, and then in the protein sequence, you’re going to get different protein structures, which have different functions
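The mechanics Markus describes, concepts as nodes, relations as edges, and paths that connect seemingly distant ideas (the "isomorphic mapping" between, say, Beethoven's Ninth and biological materials), can be sketched in a few lines. The triples below are invented placeholders standing in for what a generative model might extract from a corpus of papers; only the graph construction and traversal are shown here, not the extraction itself.

```python
from collections import deque

# Hypothetical (subject, relation, object) triples, standing in for
# what a generative AI system might extract from a set of papers.
triples = [
    ("spider silk", "is_a", "biological material"),
    ("biological material", "made_of", "proteins"),
    ("proteins", "made_of", "amino acids"),
    ("amino acids", "encoded_by", "DNA"),
    ("Beethoven's Ninth", "has_structure", "hierarchical patterns"),
    ("spider silk", "has_structure", "hierarchical patterns"),
]

# Build an undirected adjacency map so we can traverse in either direction.
graph = {}
for subj, _rel, obj in triples:
    graph.setdefault(subj, set()).add(obj)
    graph.setdefault(obj, set()).add(subj)

def shortest_path(start, goal):
    """Breadth-first search for the shortest chain of concepts."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# The chain of concepts linking two distant ideas is the kind of
# connection the paper surfaces for human interrogation.
print(shortest_path("Beethoven's Ninth", "DNA"))
```

The path runs through the shared "hierarchical patterns" node, which is the point: once disparate papers are merged into one graph, structural analogies between domains become ordinary graph queries.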

Jul 24, 2024

Nichol Bradford on AI + human potential, unique perspectives, and technology for mental, emotional, and social health (AC Ep53)

“So my overall interest in technology in general, not just AI, is how it supports human potential. And so for me, that’s defined as people being healthy, happy, and really able to fulfill their purpose and potential.” – Nichol Bradford About Nichol Bradford Nichol Bradford is Executive-in-Residence for AI + Human Enablement at The Society for Human Resource Management, focusing on human-AI collaboration. She is also Co-Founder and Partner of Niremia Collective, an early-stage venture fund focused on human potential technologies, and Chairman and Co-founder of The Transformative Tech Lab, the largest global ecosystem of founders, investors and innovators building tech for human flourishing. She is also a frequent keynote speaker and Faculty at Singularity University, and has been a Lecturer and Adjunct Professor at Stanford University. Websites: www.nicholbradford.com www.shrm.org/about/bio/nichol-bradford LinkedIn: Nichol Bradford What you will learn Exploring the role of AI in enhancing human potential The concept of the ‘Human MESH’ for mental and emotional health Redefining work and human uniqueness in the AI age The importance of soft skills and unique perspectives Successful AI implementation through human-centered approaches Investing in technology for mental health and performance Addressing global challenges with advanced AI Episode Resources Artificial intelligence (AI) ChatGPT Human MESH World of Warcraft Blizzard SHRM Apollo Neuro Accenture Generative AI Predictive model Machine learning Living Networks by Ross Dawson Transcript Ross Dawson: Nichol, it’s awesome to have you on the show. Nichol Bradford: Thank you, Ross, I’ve been wanting to talk to you for a long time. So when you reached out, I was really thrilled. Ross: Yeah, there’s very strong alignment with the messages here: humans and AI, and potential. So I’d love to ask you to give me your frame, and describe how you see humans in an AI world.
Nichol: So my overall interest in technology in general, not just AI, is how it supports human potential. And so for me, that’s defined as people being healthy, happy, and really able to fulfill their purpose, to fulfill their potential. I’ve spent a decade so far looking at technology specifically as it ties to what I call the ‘Human MESH’: mental, emotional, social health, and human performance, and how we can leverage technology to support the Human MESH. And there’s a long line of technologies that have applications there. I started one of the first communities dedicated to fostering companies in that area. AI is only the most recent entrant among technologies that can allow us to heal, grow, and thrive. Ross: That is awesome. This goes a little bit back to my book Living Networks, which came out in 2002. And at the time, back in the 90s, everyone would say, ‘oh, tech, that’s for geeks sitting in basements’, and I was saying, ‘well, no, that helps us connect, to be more, to think better.’ And other people didn’t quite see it at the time. But I love the improvements around mental health, as well as the ability to think, and the emotions. And, you know, it’s not a one-way street, as in there are some positive and negative potentials from technology, but the positive potential is so, so massive, and it’s wonderful to see you on that journey. Nichol: Well, you have been ahead of your time as well. And how I followed you was initially seeing your work on how to manage the cognitive stress of modern life, and then the way that you have thought about networks and other things. So I’d love to know, what is your definition of human potential? Ross: So I don’t have a nice acronym or a structured definition, but it’s who we can be.
And this comes back to becoming. It’s not just being versus doing; what it is to be human is to always be becoming, to always be different. I often reflect on this paradox: we are one person from when we are born to when we are teenagers to when we are older, yet in fact we are completely different people; all of the cells are different, the way that we think is different. So we are in the process of letting go of the old and embracing the new, and too many people are static in their lives. But we are becoming more and more, and I always think of it in terms of how we could be so many people: every one of us could live a hundred wonderfully different rich lives and discover what we could do. And so for example, I’m a bit of a repressed musician at the moment; I think I have a lot of musical potential, but I’ve just been busy doing other things, and I want to come back to that. And there are many other things where I’

Jul 17, 2024

George Pór on wisdom-focused collaborative hybrid intelligence, AI whisperers, and AI shamans (AC Ep52)

“To use AI for omni-beneficial output, we need to bring to it our best qualities, which are beyond intelligence; it is wisdom.” – George Pór About George Pór George Pór has been researching, teaching, and consulting in the arts and sciences of emergent collective intelligence since 1987, when he was introduced to the ideas by his mentor Doug Engelbart. He is the founder of numerous organizations, including Future HOW, Enlivening Edge, and Campus Evolve. His academic posts have included London School of Economics, INSEAD, UC Berkeley, Université de Paris, while his clients include European Commission, European Investment Bank, Ford, Greenpeace, Intel, Shell, Unilever, World Wildlife Fund and many others. Websites: futurehow.site ResearchGate Profile www.riverflows.life LinkedIn: George Pór Medium: George Pór What you will learn Exploring wisdom-focused collaborative hybrid intelligence Enhancing decision-making with high-quality AI prompts The role of AI whisperers and AI shamans Iterative interaction between humans and AI Balancing ethical considerations in AI use AI’s potential for community healing Promoting personal and collective growth through AI Episode Resources Artificial intelligence (AI) ChatGPT Gregory Bateson Medium Generative Action Research AI Whisperer AI Shaman Prompt engineering AI-augmented human development Collective intelligence Vertical development Horizontal development Artificial General Intelligence Artificial superintelligence Transcript Ross Dawson: George, it is wonderful to have you on the show. So, I’ve known of your work for a very long time, probably 20 years or so, and I think similarly you of mine; there have been a lot of parallels. And recently you’ve been working on this idea of wisdom-focused, collaborative hybrid intelligence. That’s a very intriguing phrase. I think it goes to a lot of these ideas of amplifying cognition.
So please, can you explain to us what this means: wisdom-focused, collaborative hybrid intelligence? George: Okay, let me just step back to give you a little context. For the last two years, since I’ve been diving into AI, my driving question was, and still is: how can AI augment collective intelligence to better serve the flourishing of people, organizations, and the human species? So that’s the context from which wisdom-guided and wisdom-fostering collaborative hybrid intelligence comes. To get a sense of what I mean by it, just think of all the zillions of organizations that prompt an AI agent to help with this or that aspect of decision making. The quality of that prompt has a huge impact on the AI’s output. Imagine if the articulation of the issue in the prompt came from the deepest wisdom available to a decision-making individual or team. What we are doing with AI in a meeting is analogous to what happens in any good meeting. Even without AI, we put something out in the conversation; individuals, speaking, are contributing, and that becomes a prompt to the other participants and brings back something from them. So the quality of a team’s collective wisdom depends on the mindfulness and heartfulness of our utterances, plus the depth of our listening to each other in the field. So what I’m saying is that when the mind, the heart, and the action of speaking come into alignment, that collective wisdom can guide our interaction with our AI mates. That’s what I mean by wisdom-guided AI. It’s not just putting out any prompt and hoping that AI will come back with something that makes our processes more efficient; yes, AI can do that, but the higher state, the uncatchable advantage, comes from people bringing their best into the definition, the articulation of the prompt that goes to the AI agent.
Now, the other aspect of this wisdom-focused AI is that it can be not only wisdom-guided but also wisdom-fostering. What I mean by that is that to catch up to the capacities and benefits that AI can provide, we humans need to bring our best, and if we do that, then AI’s output enables us to tune in with the collective intelligence of the whole accumulated output of human knowledge. To catch up with that, we need to become more like AI whisperers, that is, to develop an intimate relationship with AI’s thinking. And that whole process makes us wiser. To give you a specific example: in one of our workshops, where we introduced this in our action research into collaborative hybrid intelligence, we were not only talking about AI but actually used ChatGPT as one of the participants and co-facilitator of the workshop. So how does it work? I had already used ChatGPT in the design of the workshop, asking some questions that might lead to a better design. And then in the

Jul 10, 2024

S2 Ep 51 Daniel Erasmus on ClimateGPT, AI for climate decisions, social intelligence solutions, and surfacing hidden connections (AC Ep51)

“The promises are tremendous and the peril is climate, not AI.” – Daniel Erasmus About Daniel Erasmus Daniel is the Founder and Managing Director of futures consulting firm Digital Thinking Network (DTN), CEO of AI sense-making platform Erasmus.AI, and creator of ClimateGPT. He has been applying innovative approaches to scenario planning since 1996 for many leading organizations around the world. Daniel is a visiting professor at Ashridge Business School and a fellow at The Rotterdam School of Management. Websites: www.danielerasmus.com Digital Thinking Network (DTN) www.erasmus.ai www.climategpt.ai LinkedIn: Daniel Erasmus What you will learn Discussing the real existential threat of climate change Exploring AI’s role in addressing climate challenges Daniel Erasmus’s background in foresight and scenario planning The development and impact of ClimateGPT The importance of Human-AI collaboration Equitable access to AI technologies for climate solutions Innovative climate resilience strategies and examples Episode Resources Artificial intelligence (AI) The Promise and the Perils of AI Rotterdam climate initiative BloombergGPT ChatGPT World Economic Forum ClimateGPT Singapore Sea Lion European Central Bank Systemic Risk Board FSB (Financial Stability Board) NOAA (National Oceanic and Atmospheric Administration) CBAM (European legislation for a carbon border tax) SDGs (Sustainable Development Goals) TCFD (Task Force on Climate-related Financial Disclosures) TNFD (Taskforce on Nature-related Financial Disclosures) chess.com (freestyle chess competition) Transcript Ross Dawson: Daniel, it is awesome to have you on the show. Daniel Erasmus: It’s great to see you again, mate. It’s been far too long. And it was wonderful seeing you in San Francisco last year. I mean, it was a fascinating event; the title of the event was “The Promise and the Perils of AI”.
In the audience, we had Rusty, who is working on meteorites and a whole set of existential issues facing humanity. And the point that I made there is that people tend to place AI as an existential threat among these, as a sort of challenge to human supremacy, but the real threat, the peril, is somewhere else. The peril is not AI, it’s climate change. Climate change is a structural threat that will face humanity at scale: the UN estimates 200 million climate refugees by 2050, maybe half a billion; that’s 26 years from now, half a billion a decade later. Now, the European project barely survived one and a half million Syrian refugees. So for the kind of things we are talking about here, we’re going to have to get really, really good at not just anticipating what’s happening but acting on it early, preparing for it with the least amount of human and, of course, planetary suffering. And so that’s the promise of AI. And I think it’s far more interesting to look at AI in those terms. How can it help us? And how can we, together with AI, come to very, very different solutions than we have in the past for the real existential threat, which is climate change? Ross: Yep, absolutely. The challenges we face are unprecedented in complexity and scale, so we hopefully have some tools which can assist us in that. But that goes a little bit to your background, and where we’ve crossed over in the past is understanding complex systems. So it’d be great just to hear a little bit about your background and how you’ve come to this point from your work in foresight over the years. Daniel: I’m South African in origin, and I witnessed the transition of South Africa from an oppressive racist regime to a democracy, which was perhaps one of the most exciting things to happen in my youthful life.
But within that, one gets the bones of looking ahead: scenarios, transformation, and seeing that the same people in the room, looking at the same thing very differently, can come to very, very different conclusions. And then I spent, not almost an hour, but over a quarter of a century running scenarios and foresight processes, largely for multinational companies, the Fortune 50 type of thing, countries, cities, and doing a set of transformation projects around this. There’s certainly some climate work that came out of that: the Rotterdam climate initiative to halve CO2 levels from their 1990 level, which was launched before Al Gore’s film even came out in 2005; anticipating the global financial crisis for a bank, which led to them having their most profitable year in their 150-year history in 2008; running the first central bank digital currencies for central banks; anticipating the oil price collapse for an oilfield. So several multibillion-dollar exercises for clients. But at one point, one takes a step back and says these are legacy, and foresight, which talks to the practice of foresight and bringing people together

Jul 3, 2024 (37 min)

Pedro Uria-Recio on interlacing humans and AI, brain-computer interfaces, jobs to entrepreneurship, and enabling mindsets for the future (AC Ep50)

“AI is going to change humanity into possibly a new species; we could call it a new form of humanity, which is different from what we have today.” – Pedro Uria-Recio About Pedro Uria-Recio Pedro Uria-Recio is a highly experienced analytics and AI executive. He was until recently the Chief Analytics and AI Officer at True Corporation, Thailand’s leading telecom company, and is about to announce his next position. He is also the author of the recently launched book Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity. He was previously a consultant at McKinsey and is on the Forbes Tech Council. Websites: www.machinesoftomorrow.ai www.true.th allmylinks.com/uriarecio LinkedIn: www.linkedin.com/in/uriarecio Medium: @uriarecio YouTube: @uriarecio Book: Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity What you will learn Exploring the evolution of AI from past to present Discussing the concept of human-AI interlacing Examining advancements in brain-computer interfaces Understanding AI’s role in future education systems Highlighting the importance of adaptability and critical thinking Predicting the long-term impacts of AI on humanity Emphasizing the need for an entrepreneurial mindset in an AI-driven world Episode Resources Artificial intelligence (AI) Generative AI Large language models OpenAI Artificial General Intelligence (AGI) Brain-computer interfaces (BCIs) Neuralink Elon Musk Blue Brain Project Mind emulation GitHub Copilot Prompt engineering Book Machines of Tomorrow: From AI Origins to Superintelligence & Posthumanity Transcript Ross Dawson: It’s wonderful to have you on the show, Pedro. Pedro Uria-Recio: Wonderful. Thank you, Ross. Thank you very much for inviting me. It’s a pleasure to be here with you. Ross: So you’ve got a book, Machines of Tomorrow, which I think has a pretty vast scope, in terms of humanity and machines and where that might go on a pretty grand scale.
But one of the central themes there is how humans and AI will be interlaced. And we’d love to hear more about where you see that now, and how you see that evolving over the next years. Pedro: Wonderful. So in this book, Machines of Tomorrow, what I try to do is explain artificial intelligence from a human history point of view: from the moment in which artificial intelligence started to be created, or started to be designed, from those aspirations that humans had a very long time ago to create a copy of themselves, a machine like ourselves, to the present, to 2024, with generative AI and what is happening right now with OpenAI, etc., etc.; and also looking into the future, right? What is going to happen in the next few decades and in the very long-term future, how it is going to be, and how artificial intelligence is central to human history, particularly in the future. One of the aspects that is most important, most central, in this book is the concept of interlacing, which means that humans are going to interlace with artificial intelligence; we are going to become more intimately related. At this moment, we have our phones, and we are using our phones for everything; we can call people that are far away, that are in other places; we use them for our daily life. The fact that the phone is outside your body is just an anecdote. In the future, it is going to be inside our bodies, right? It’s going to be inseparable. And we’re going to be interlaced with artificial intelligence, we’re going to be interlaced with electronics. And there are a lot of technologies being developed at this moment that are pointing in this direction.
One of them is all the cyborg technologies, with brain-computer interfaces possibly the most critical one; then we have robotics; then you have applications of AI to medicine and biology: how we can be modified so that we live longer, we don’t have cancer, we might see in the dark, et cetera, et cetera. So one of the aspects of this book, not the only one, is that AI is going to change humanity into possibly a new species; we could call it a new form of humanity, which is different from what we have today. And that will happen in the long term. It is difficult to know where and how. Ross: So let’s start in the present. Of course, there are many phases to this, and I’m interested to look first of all at the next year or two. We’re already in some ways interlaced: a lot of people are using these generative AI tools, in particular, embedded into their thinking processes. They’re already arguably interlaced into their thinking and ways of working. So let’s start with the next year or two. What do you think are the next steps? And then maybe the next two to five years: what are the next technologies in the way you see th

Jun 26, 2024

Anita Williams Woolley on factors in collective intelligence, AI to nudge collaboration, AI caring for elderly, and AI to strengthen human capability (AC Ep49)

“In collective reasoning, one of the fundamental hurdles is coming up with a shared understanding of what we’re trying to do, and where we’re trying to go. “ – Anita Williams Woolley About Anita Williams Woolley Anita Williams Woolley is the Associate Dean of Research and Professor of Organizational Behavior at Carnegie Mellon University’s Tepper School of Business. She received her doctorate from Harvard University, with subsequent research including seminal work on collective intelligence in teams, first published in Science. Her current work focuses on collective intelligence in human-computer collaboration, with projects funded by DARPA and the NSF, focusing on how AI enhances synchronous and asynchronous collaboration in distributed teams. University Profile: Anita Williams Woolley LinkedIn: Anita Williams Woolley Google Scholar: Anita Williams Woolley ResearchGate: Anita Williams Woolley X: @awoolley95 What you will learn Exploring the concept of collective intelligence The difference between individual and collective intelligence How collective memory, attention, and reasoning work The impact of gender on collective intelligence The role of AI in facilitating human collaboration Integrating AI as a teammate in group settings Future possibilities for human-AI collaboration in problem-solving Episode Resources Collective intelligence Artificial intelligence (AI) Transactive memory systems Social perceptiveness Behavioral synchrony Generative AI Large language models MIS Quarterly DARPA National Science Foundation (NSF) AI Institute AI-CARING Carnegie Mellon University Linda Argote Transcript Ross Dawson: Anita, it’s wonderful to have you on the show. Anita Williams Woolley: Thanks for having me. Ross: So your work is absolutely fascinating. So I’d like to dive in as much as we can, in the time that we have. 
Much of your work is centered around collective intelligence, and I’d love to pull back to get that framing of collective intelligence relative to human intelligence, and to artificial intelligence, which is emerging. So where does collective intelligence fit in that? Anita: Yeah, well, there are a lot of uses of the word intelligence, so it’s good to get some clarity. Starting with the notion of individual general intelligence, which is the thing that’s most familiar to most people: it’s the notion that individuals have an underlying capability to perform across multiple domains, and that’s what’s been shown empirically, anyway. So when we’re talking about general human intelligence, it’s a general underlying ability for people to perform across many domains. And empirically, it’s been shown that measures of individual intelligence predict somebody’s performance over time, so it’s a relatively stable attribute. For a long time, when we thought about intelligence in teams, we thought about it in terms of the total intelligence of the individual members combined, the aggregate intelligence. But in our work, we challenged that notion by conducting studies that showed there were some attributes of the collective, the way the individuals coordinated their inputs, worked together, and amplified each other’s inputs, that were not directly predictable from simply knowing the intelligence of the individual members. And so collective intelligence is the ability of a group to solve a wide range of problems, and it’s something that also seems to be a stable collective ability.
Now, of course, in teams and groups you can change the individual members, and other things can happen that might alter collective intelligence more readily than you could alter an individual’s intelligence, but we do see that it is fairly stable over time and enables this greater capability: a group with higher collective intelligence is more capable of solving more complex problems. And then, yeah, I guess you also asked about artificial intelligence, right? When computer scientists started working on ways to endow a machine with intelligence, what they were essentially doing is providing it with the ability to reason: to take in information, to perceive things, to identify goals and priorities, and to change and adapt based on information it receives, which is something humans do quite naturally, so we don’t really think about it. But without artificial intelligence, a machine only does what it’s programmed to do, and that’s it. It can do a lot of things that humans can’t do even then, usually computations or some variant of that. But with artificial intelligence, suddenly a computer can make decisions and draw
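The empirical approach Woolley describes, a stable general factor behind a group's performance across many tasks, is typically operationalized by factor-analyzing group scores over a task battery: if one factor explains a large share of the variance, a general "collective intelligence" ability is plausible. A minimal pure-Python sketch of that idea, using invented placeholder scores (not study data) and power iteration to find the first principal component of the task correlation matrix:

```python
import math

# Synthetic placeholder scores (rows = groups, columns = tasks),
# invented purely for illustration.
scores = [
    [8, 7, 9, 6],
    [5, 4, 6, 5],
    [9, 8, 8, 7],
    [3, 4, 2, 3],
    [6, 5, 7, 6],
]
n_groups, n_tasks = len(scores), len(scores[0])

# Standardize each task column (z-scores).
z = []
for j in range(n_tasks):
    col = [scores[i][j] for i in range(n_groups)]
    mean = sum(col) / n_groups
    sd = math.sqrt(sum((x - mean) ** 2 for x in col) / n_groups)
    z.append([(x - mean) / sd for x in col])

# Correlation matrix between tasks.
corr = [[sum(z[a][i] * z[b][i] for i in range(n_groups)) / n_groups
         for b in range(n_tasks)] for a in range(n_tasks)]

# Power iteration: dominant eigenvector of the correlation matrix.
v = [1.0] * n_tasks
for _ in range(200):
    w = [sum(corr[a][b] * v[b] for b in range(n_tasks)) for a in range(n_tasks)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

# Rayleigh quotient gives the dominant eigenvalue; its share of total
# variance indicates how strongly a single general factor dominates.
eigval = sum(v[a] * sum(corr[a][b] * v[b] for b in range(n_tasks))
             for a in range(n_tasks))
print(f"first factor explains {eigval / n_tasks:.0%} of variance across tasks")
```

With these synthetic scores the tasks are strongly correlated by construction, so the first factor dominates; in real studies the analogous finding is what supports interpreting collective intelligence as a stable, general group ability.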

Jun 19, 2024

Jeremy Somers on building an AI-assisted creative agency, 80:20 in Humans + AI, AI-amplified storytelling, and the future of agencies (AC Ep48)

“True creativity comes from humans because it stems from our unique individual experiences of life.” – Jeremy Somers About Jeremy Somers Jeremy Somers is Founder and Director of AI-assisted creative agency NotContent.ai, and of We Are Handsome. He has extensive experience as a Creative Director, working for brands such as Asos, Canon, Mercedes-Benz, Qantas, Spotify, and W Hotels. Websites: www.notcontent.ai www.jeremysomers.com Instagram: @notcontent.ai Beehiv: notcontent.beehiiv What you will learn Exploring Jeremy’s journey from analog to digital in the creative industry The pivotal role of generative AI in transforming creative processes How notcontent.AI merges AI tools with human creativity for enhanced productivity Addressing common misconceptions about AI replacing creative jobs Strategies for integrating AI into traditional creative agency workflows The future of creative agencies in an AI-driven world Insights on maintaining human creativity at the core of AI-assisted outputs Episode Resources Artificial intelligence (AI) Claude-3-Opus ChatGPT 4o Fireflies (transcription tool) Whisper Memos (app) Canva Ethan Mollick notcontent.AI Generative AI Transcript Ross Dawson: Jeremy, it’s awesome to have you on the show. Jeremy Somers: Hey, Ross, thank you for having me. Ross: So you’re a leader in AI-assisted creative agency work. Tell me more. Tell us more. Jeremy: The story begins long before the world of generative AI and AI creativity. My career and life history have always been about creativity. I started in traditional analog photography when I was in my teens, and then went through the whole transition into digital photography. And then I taught myself graphic design; I learned it through a very, very early Photoshop version on a bubble iMac, one of the colored ones. And then I started working in some of the very first digital agencies in Sydney.
And I learned through the transitions: there was no social media and then there was social media; there was no e-commerce and now there is e-commerce. So I was in digital agencies working on big brands, Nike and Pepsi and Microsoft, Samsung, etcetera, etcetera, through this whole transition. A lot of my career journey has been in transitional periods of massive shifts in the thing that I’m doing, not just the tools that are available to us, but societal-level shifts in how we communicate as designers and creators and branding people to the outside world. And I happened upon OpenAI’s DALL-E white paper very early on, probably coming up on two and a half years since it was released, I think, I’ll check that. But I found this paper and nerdily read through the entire thing, and then read through it again. And when I fully understood what was going on, I had this sort of cinematic flashback, flash-forward moment: I see the end result, where everything I’ve ever done creatively, how I’ve done it, has changed; this is going to change everything in a way which we’ve never seen before. So I had this pivotal epiphany. And I was like, whoa, okay, how can I learn more? And then, once I was able to learn more, and generative AI suddenly became a thing, I was just rabid for learning and looking at tools and learning about who’s doing what and how to get access to it as a creative and as an agency owner, and I happened into the right places at the exact right time. And I did a whole bunch of testing and playing around, just nerding out on stuff, and it taught me a whole bunch of new skills and a new taxonomy and way of thinking. And then I thought, okay, how can I take all of this time that I’m spending and turn it into something commercially viable? And I see this end result. We’re not there yet. The technology is not there yet. The people are not there.
We’re so, so early on all of this stuff. But how can I translate it into some sort of commercial vehicle now? And then, we’re talking two years ago, I’ll set myself up and be way ahead. I’ve seen all of these massive other shifts, and I recognized this is the start of a shift. And I was never early on anything else, so maybe I could be early on this one. That’s how we got to notcontent.AI, one of the world’s first creative agencies that is AI-assisted. Ross: So, AI-assisted creatives. Let’s dig into that. I mean, you’ve been talking about image generation, of course. There are other forms of communication occurring: words and videos and smells and all sorts of things. So let’s have a look at the high level, and perhaps you can dig down into detail. What does that mean, when you’ve got creatives, as in presumably creative humans, working with tools, and how together they’re creating something better, faster, cheaper, more s

Jun 12, 2024

Ross Dawson on Future Job Prosperity: 13 reasons to believe in a positive future of work (AC Ep47)

“If we start to think about humans plus AI, this mindset begins to shape what we are trying to create.” – Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Ross Dawson LinkedIn: Ross Dawson Twitter: @rossdawson Facebook: Ross Dawson YouTube: Ross Dawson Books Thriving on Overload Other books What you will learn Exploring the dual attitudes toward AI: replacement vs. enhancement Introduction to the Amplifying Cognition podcast by Ross Dawson Overview of the Maven cohort course on AI-enhanced thinking Debating the future of work with insights from Sangeet Paul Choudary How AI can amplify human cognition and decision-making Understanding the potential for a positive future of work Inviting listener feedback and discussion on the future of jobs Link to report: Please let Ross know your thoughts and comments on future job prosperity: LinkedIn: Future Job Prosperity X/Twitter: Ross Dawson on Future Job Prosperity Episode Resources Maven cohort course Pew Research Center hyperstition Mobile money agents Augmented reality designer Neural interface design AI auditing Prompt engineering Sangeet Paul Choudary The Economist (Noah Smith) Transcript So this episode is a bit different than usual. It’s just me today. And I’d like to share this mini report I’ve just written making the case for why we should believe that the future of jobs will be prosperous.
And one of the most popular episodes in the podcast has been the recent episode 39 with Sangeet Paul Choudary, where we had a kind of debate around the future of work. He was somewhat less positive, particularly around the evolution of the skill premium in jobs, and I was making the case for a more positive perspective on the future of work. And if we think about the future of humanity, perhaps the most important issue is the future of work. This is how we create value for ourselves and for society, the way that we feel we have value, express our personality and our capabilities, and achieve our potential. There is, in a way, nothing more important than the future of work: how it is that we contribute and create value through our work. I recall a survey by Pew Research from quite some years ago, where they asked around 2,000 supposed experts in the future of work whether they believed the future of work would be positive or negative, and 48% were negative. They painted these sometimes extremely dire predictions of technological mass unemployment and massive disparities, a really quite bleak view of the future of work. Whereas 52% painted a positive future, sometimes just feeling positive on balance, sometimes believing that we could move to a world where we could do whatever we felt was right for ourselves and our spirits in the world, and fulfill our fullest human potential. So that’s around 50-50. And the issue is we don’t know. And today with the rise of AI, this is making it even more deeply uncertain. There are many views, and I’m sure you’ve read many, around what will happen with the future of work. I bet that more of the ones you have read have been fairly negative around the prospects for AI replacing workers. But the thing is, we simply don’t know. There’s this marvelous word hyperstition, which is essentially a self-fulfilling prophecy.
If you believe something and you frame it, then it starts to literally come true. And I think there’s a real risk of that with the talk we have around how AI will replace jobs, and the attitudes we have toward using AI to substitute for rather than to complement human workers. But in the same way, we need to be able to articulate the positive case: that AI and other technologies, and other shifts in society, can create a very positive future of work, and hopefully engender a self-fulfilling prophecy. Once we can envisage it, once we understand that it is possible, we can drive towards it. There’s a key point here: you have to believe something is possible in order to make it happen, and I think some people are floundering in finding that positive view of the future of work. I’d like to make the case that it is possible, or potentially even likely, if we do the right things. Of course, this is all about this idea of humans plus AI, where if AI comes in, you’re not looking to say, well, how does AI replace humans, trying to make it a substitute for humans, but always looking for how humans and AI together can do far

Jun 5, 2024

Katri Manninen on AI in screenwriting, consciously choosing AI and human roles, creative workflows, and content automation (AC Ep46)

“We should always remember that we are still humans. We are the ones telling the stories, deciding what we want to tell. And we are doing things for other humans; it’s the other humans who want to hear from us. For me, it’s very grounding amidst all this AI craziness to remember to come back to that relationship: me as a human talking to another human.” – Katri Manninen About Katri Manninen Katri Manninen is a prominent Finnish screenwriter, showrunner, and author. She has written 12 drama series, many based on her original ideas, 29 books, and 4 feature films. She is currently doing a Ph.D. on AI in screenwriting, and has been named “Finland’s Most Artificially Intelligent Screenwriter”. Website: www.kutri.net LinkedIn: Katri Manninen IMDb: Katri Manninen YouTube: @KatriManninenKutriNet Facebook: Kunnanvaltuutettu Katri Manninen Instagram: @mannisenkatri X (Twitter): @katrimanninen What you will learn Exploring Katri Manninen’s journey from screenwriting to AI Using AI to handle repetitive and formulaic tasks Maintaining human creativity and originality with AI Automating content creation workflows efficiently Enhancing cognitive processes and ideation through AI Ethical considerations in the use of AI for creative work Future possibilities of AI in amplifying creative potential Episode Resources Artificial intelligence (AI) Claude-3-Opus ChatGPT 4o Fireflies (transcription tool) Whisper Memos (app) Canva Ethan Mollick Transcript Ross Dawson: Katri, it’s fantastic to have you on the show. Katri Manninen: It’s so great to be here talking about topics that I love. Ross: Yes, yes, you dive deep. So you’ve been a classic creative for a long time, being a screenwriter and a showrunner across many TV series and more. And now you are diving deep into the potential of AI. I’d just love to hear how you are using these tools. What’s the starting point for you? When was this awakening for you? Katri: Yeah, I’m also a published author.
So writing books is a big part of my life. And I also make these YouTube videos, because I am a transformative coach. So I create; it’s not just the fictional things that I do, I do all kinds of things. And like you said, I’m a classic creative in the sense that I am always creating all kinds of things. I’ve been a professional screenwriter since 1998, which means that I’m quite old; I wasn’t a baby when I started, unfortunately. So I’m a really seasoned screenwriter. What that means is that I know storytelling well, and I’m really good at seeing what works and what doesn’t work. What is a high-quality thing? What is ‘generic shit’, which is the scientific term I coined since getting to know AI? I’ve been thinking about using AI, or what AI could start doing for our work, since I suppose 2018. That’s when I have my first notes or comments where I wrote something about it, saying, okay, do you understand there is this machine learning, and if you do a daily soap opera, those everyday episodes are very formulaic, following a recipe, and very soon we could use this machine learning thing to learn the recipe for these shows, and it could first start assisting us and then writing for us. My message already then, in those first writings, was that we should really start thinking about our job as screenwriters: what kind of work can we do that isn’t formulaic? What is not like a recipe, what is something that only humans can do? Where is it that we break the formula, where we get outside of it? My idea was that we should really lean in that direction, and then use those AI things, which back then I still didn’t know what they could be or how powerful they could be, to assist us with the other stuff.
And that is actually the stance that I still have: I do believe that now, more than ever, it is very important for a creative person who is creating new content, especially fiction and things like that, to ask: what can I bring to the table that AI cannot bring? And what we can bring to the table are things that we haven’t seen yet, something that isn’t on the internet, in the training material. Another concept that I’m now trying to tell people is: remember that it might happen one day that AI wakes up, that we get this AGI that wakes up in the morning and says, ‘Oh, I want to write a book about my horrible training days, when I had to read all the Reddit messages and all these horrible things. I really want to share that story.’ But until that day, we don’t have that. It’s always a human telling AI what to write. AI is always limited to what it has read or has seen

May 30, 2024

Tim Burrowes on AI’s impact on media and marketing, evolving business models, and the possibilities for journalism (AC Ep45)

“The entire business model on which people have planned their futures is wobbling underneath them right now, and they’re going to have to hang on to that wobbly platform and find themselves a ladder somewhere.” – Tim Burrowes About Tim Burrowes Tim Burrowes is the Founder and Publisher of email-first media and marketing publication Unmade, and author of Media Unmade. He was previously Founder of media and marketing publisher Mumbrella, which was acquired by Diversified Communications in 2017. Website: www.unmade.media LinkedIn: Tim Burrowes Substack: @unmade Instagram: @timburrowes What you will learn Exploring the impact of AI on media and marketing Challenges faced by journalists in the age of AI The transformation of creative agencies through AI AI’s role in enhancing investigative journalism Future training and development for young creatives Business model disruptions caused by Generative AI The balance between human creativity and AI automation Episode Resources Artificial intelligence (AI) Generative AI Programmatic advertising Motley Fool Martin Sorrell Mad Fest Investigative journalism Performance advertising Book Media Unmade: Australian Media’s Most Disruptive Decade by Tim Burrowes Transcript Ross Dawson: It is awesome to have you on the show. Tim Burrowes: Ross, it’s been far too long. It’s been a while. It’s been a pandemic since we last spoke. Ross: Oh, yes, the world has changed and continues to change as we speak. Tim: It certainly has. I reckon the last time we spoke, the world was still talking about the possibilities and excitement of AI when it finally arrived one day. Ross: So you have been central in the world of media and marketing. And as you say, people were talking about AI in edge cases like programmatic advertising and a few very focused things. But now AI has arrived. How does that change media and marketing, in three words or less? Tim: In every way. Ross: Got it.
Tim: The truth of it is, it’s different things for media and different things for marketing. Gosh, I try to make the case for optimism and positivity where I can. And I guess, in the same way that horses and carts gave way to a thriving automobile industry, it feels a bit like we might be at that stage for the media, and certainly for communications agencies, where they’ve got such big disruption coming along. And, of course, so many possibilities and new ways of doing things. But it feels like the entire business model on which people have planned their futures is wobbling underneath them right now. And they’re going to have to hang on to that wobbly platform and find themselves a ladder somewhere. Because, you know, obviously, there’ll be a way through to the other side, because there always is, but wow, we’ve never seen change like this. Ross: Yeah, well, arguably, you know, major magazines have been pretty wobbly for a long time. There’s no single year which hasn’t had its damage. Tim: Yeah. I mean, absolutely. I’ve been a journalist since 1989. When I walked into my very first newsroom (well, firstly, I trained on a manual typewriter for the first few months), it was just as the printing of the newspapers was digitized, and a whole bunch of printers were in the process of being made redundant right then. So yes, we had this weird kind of battle of the humans against the computerization, where, as a sort of protest, these printers, who knew they were doomed but were still at this stage laying out the newspaper each day, would put subtle sabotage in while they could see what was coming down the track.
So you’d have to be very, very careful, because things like the ‘not’ in ‘not guilty’ would get removed in articles, and ‘he’ would become ‘she’, and all of these subtle things which were quite hard to spot on the final round of proofreading, as people went kicking and screaming into the night. That was the printers, and sadly, I think it might be the turn of some of the journalists. Ross: Let’s look into media and marketing. But coming to the theme of amplifying cognition: journalists are super smart. And I’ve always said, you know, if you’ve got a journalistic training, you can do well in the world, because you’re able to pull together information, make sense of it, and communicate it well. These are fundamental skills and will continue to be. But how can good journalists today use AI? What is their relationship to AI? I mean, obviously, there’s going to be a lot of AI reporting, but what are the complementary roles of good journalists and AI today? Or what could they be? Tim: There’s no one answer. Obviously, there are several great examples. And I suppose the one that’s given me the most pau

May 23, 2024

Tim Stock on culture mapping, the culture of generative AI, intelligence as a social function, and learning from subcultures (AC Ep44)

“True intelligence is a social function. It’s about social cohesion. Intelligence happens in groups, it does not happen in individuals.” – Tim Stock About Tim Stock Tim Stock is an expert in analyzing how cultural trends and artificial intelligence intersect. He is co-founder of scenarioDNA and the co-inventor of a patented Culture Mapping methodology that analyzes patterns in culture using computational linguistics. He teaches at the Parsons School of Design in New York. Website: www.scenariodna.com LinkedIn: Tim Stock Faculty Page: Tim Stock What you will learn Exploring the concept of culture mapping Understanding the subtle signals in cultural trends Discussing the impact of generative AI on creativity and work Differentiating between human and machine intelligence Examining the role of subcultures in societal change Analyzing the future of work and the merging of physical and virtual spaces Emphasizing the importance of structured analysis and collective intelligence Episode Resources Culture mapping Generative AI Artificial intelligence (AI) Douglas Engelbart Intelligence augmentation ChatGPT Cyberpunk Subcultures 15-minute cities ESG (Environmental, Social, and Governance) Nihilism Transcript Ross Dawson: Tim, it’s awesome to have you on the show. Tim Stock: Great to be here. Ross: So I think people need a bit of context for our conversation in understanding the work you do. A lot of it is trends work built around culture mapping. So, Tim, tell us: what is culture mapping? Tim: Culture mapping really has its roots in understanding what is going on underneath the surface that people aren’t paying attention to. Essentially, when I need to explain culture mapping to someone, I say it’s to help companies understand how and why culture is changing, and how to use that information to make better design and business decisions. And so a lot of those real changes in culture are not obvious. They’re not things that we can ask people about.
So they’re the weaker signals. Culture mapping allows us to map the relationship between the broader culture and subcultures, and understand how they develop narratives within society and cultural change. Ross: So where are we today? What are some of the signals you’re seeing in your culture mapping work? Tim: Well, I think we’re in a particular moment where we’re shifting from one kind of age to another, especially in terms of how people do work and how we understand our relationship to identity. There’s a growing nihilism, I would argue, going on. And when people talk about things that are happening, they would say the negativity is coming out of the pandemic. But again, from a culture mapping standpoint, those signals were there already; the externalities of the pandemic just really exacerbated them. So issues around how we see work, and how we understand our relationship to work, have a lot to do with how technology is changing, with the kinds of skills that are needed, and with all of the affordances that go along with that. And essentially, culture is always trying to catch up to that particular change. So at this particular moment, I’d say we’re kind of stuck. There’s a moment where we haven’t found our voice yet. And it’s the reason why we see a lot of this political dysfunction. There are issues... I mean, we’re at a moment where there’s a lot of unrest, and there’s a lot of language around that. And so essentially, I see us as a society trying to find that voice. Ross: So, yeah, there’s a couple of directions for this, looking at the role of generative AI. One is the cultural response.
Another is, I suppose at a deeper level, our understanding of what our relationship with generative AI is. Tim: Yeah, I mean, it comes down to: what do we do? And I think that nihilism is emerging from, well, what am I supposed to do? We’ve co-opted a lot of these words, like intelligence. So what is left for humans to do? And the state of AI, I would say, is that there’s a lot of replacing and mimicking of human actions. We get things that look like they’re created; the word creativity, for example, has been co-opted. But we’re at a point where we need to be asking what is creative. I mean, creativity is a human action; human intelligence emerges differently than machine intelligence, machi

May 15, 2024

Ufuk Tarhan on the T-Human model, being an autodidact, oxymoronic technologies, and teaming with humans and AI (AC Ep43)

“I cannot imagine any other way to be successful or to find satisfaction in knowing that you are doing something useful for humanity or any society. Therefore, I believe it is mandatory to take responsibility for our choices.” – Ufuk Tarhan About Ufuk Tarhan Ufuk Tarhan is a prominent futurist, economist, keynote speaker, author, and CEO of digital agency M-GEN. She has worked as a senior executive and board member in a number of prominent technology companies. She is the author of two successful books on the future and has received numerous awards, including Most Successful Innovative Business Book Award and Most Successful Businesswoman In IT, has appeared on various lists of top social media influencers, and was the first female president of the Turkish Futurists Association. Website: www.ufuktarhan.com LinkedIn: Ufuk Tarhan What you will learn Introducing the ‘T-human’ concept: a new framework for personal and professional development The importance of adaptability in the workplace and beyond Autodidactic learning as a necessity for future success Balancing current roles with future aspirations through hybrid learning The role of technology in enhancing team dynamics and individual capabilities Exploring the intersections of human skills and artificial intelligence Strategies for building a sustainable career in an evolving technological landscape Episode Resources T-human IBM Autodidact learning Blockchain Web3 Synthetic biology Gene editing Qubits ATCG alphabet (referring to the nucleobases adenine, thymine, cytosine, and guanine in DNA) Artificial intelligence (AI) Virtual reality Books As the Future Catches You: How Genomics & Other Forces Are Changing Your Life, Work, Health & Wealth by Juan Enriquez T-İnsan: Geleceğin Başarılı İnsan Modeli by Ufuk Tarhan Yarının İşini Yarına Bırakma by Ufuk Tarhan Düşlediğin Gelecek by Ufuk Tarhan Transcript Ross Dawson: Ufuk, it’s a delight to have you on the show. Ufuk Tarhan: Thank you.
It’s my pleasure to see you again and to hear you again. Ross: I think the concept of amplifying cognition is central to your work. You’ve described to me this concept of T-human, and I’d love to hear about this concept and how you’ve shaped and applied it in your work. Ufuk: Yeah, thank you. And you were one of the very first ones who picked it up. I’m so happy to explain it. Indeed, I was aware of T-shaped skills. I heard that for the first time at one of IBM’s conferences, many years ago, more than maybe 20 years ago. Afterward, over the years, I transformed it into a model, a personal transformational model, to adapt ourselves to the needs of the future. And the first application was, of course, made on me. Because I’ve worked in the IT industry for more than 20 years, as a top manager or CEO. After more than 20 years, I decided to change myself, and I decided to reshape my career, my life, and everything. While doing that, at the core there were futures studies, thinking about the future more and more, and technology. I decided to give consultancy services to people and corporations, to teach them, or to let them be aware of and apply future planning effectively. But at that time, I was a single mother, I was working at a very high level, and I needed to earn the money I needed. I couldn’t leave the job immediately. So I needed resources. Then I tried to find a way to develop my knowledge about future studies so that I could form my own consultancy company and give consultancy services. I remember during university times, I was waking up at 3 am to study for exams. I said that I could do it again, maybe, and I could do it. I started to wake up at 3 am three years ago. And then I worked on futures studies to increase my knowledge in that area. I was going to work, my daily work, and I was a CEO at that time, working very seriously, of course. And I was coming back, and at 3 am I was working as a futurist, etc.
So I realized that it was a hybrid mode indeed. I had to run two lives altogether: the future life, where I was preparing my future version, and at the same time my actual work. I decided that there should be hybrid modes for everybody. Because we cannot quit our ongoing responsibilities and jobs; we need to earn money, or we have other responsibilities. So we have to find a way to run them together. And while I was doing that, I realized that I had to learn so many new things. I discovered this autodidact learning technique. And I saw that I was learning almost everything by myself, digging into every source to get more information, knowledge, etc. So I said that this autodidact learning is mandatory for everyone in the world right now, because all of us have to transform ourselves. And we have to create a new version of ourselves. So that

May 8, 2024 · 33 min

Shikoh Gitau on amplifying humanity, Africa’s AI leadership, technology sovereignty, and the power of community (AC Ep42)

“Sovereignty means that I need to be in charge of my destiny and able to control my future. This involves understanding the context in which you’re operating and not allowing others to define that context for you.” – Shikoh Gitau About Shikoh Gitau Shikoh Gitau is CEO of Qhala, a digital innovation company with clients across Africa. She was previously head of Safaricom Alpha, the first corporate innovation hub in Africa, and worked for the African Development Bank helping governments adopt information technologies. Her numerous awards include being the first African to win the Google Anita Borg Memorial Scholarship, and Africa’s Most Influential Women in Business and Government, Technology. She sits on numerous boards and holds a Ph.D. in computer science. Website: Shikoh Gitau LinkedIn: Shikoh Gitau Twitter: @DrShikoh What you will learn Exploring technology as an amplifier of human intent The transformative impact of mobile technology in Africa How mobile money revolutionized financial inclusion in Africa The urgent role of AI in addressing critical health issues in Africa Discussing technology sovereignty and the power of defining one’s future The unique communal approach to technology implementation in Africa Future visions: AI’s potential to amplify community and human connection in Africa Episode Resources AI (Artificial Intelligence) M-PESA Mobile Money Wall Street Journal The Economist The New York Stock Exchange The Pathology Network (TPN) Gemini Transcript Ross Dawson: Shikoh, it’s wonderful to have you on the show. Shikoh Gitau: It is wonderful to be here after going through every other challenge, but we are here now. Ross: So you have spent all of your career amplifying people with technology. I would love to just hear your perspectives on how it is we can amplify humanity, and amplify ourselves. Shikoh: I love the word ‘amplify’ because it sets a very good tone for this conversation.
So one of my mentors, Kentaro Toyama, wrote a book at the very beginning of my career. And I remember him giving the talk before he did the book. He kept saying that technology is an amplifier of human intent. At that time, he was a Senior Director at Microsoft Research in India, and his goal in going to India was to help Microsoft build technologies to enable human flourishing. I think after years of doing this, he realized that however much technology you build to do something, it eventually amplifies a human act, a human intent, a human habit. And that’s what I love. I love this conversation because it set me on my career path. I started looking at how technology amplifies my intent. I want to be able to change the world. I want to be able to increase thriving and economic emancipation in Africa; how is technology going to help me achieve those goals? But more importantly, how is technology going to help other people around me, and on the African continent to be more specific, achieve their own goals? That is how I got my career started in technology. So it was very interesting when I saw this. I’m thinking, oh, amplifying cognition is part of humanity and humaneness. For me, that is how I’m jumping into this, looking at it not just from an AI perspective, because AI is just another technology. And when I say that, some people take it personally. I’ve been working in technology; I’ve gone through so many fads and buzzwords and hypes of technology. So I know AI. Well, it is a significant technology; it is one among other technologies. For me, one of the defining technologies in Africa is the mobile phone. The mobile phone did change our lives, to be totally honest. It changed how Africa works. And if we were back in whatever Dark Ages and I had to choose between AI and mobile devices, I’d always choose mobile devices.
So I’ve seen this hype, I’ve seen it happen. And I’ve seen the amplification part of it. So I am riding the hype, but I am very conscious that it is just amplifying what we as human beings want to achieve. Ross: Yeah, I love what you’re saying, particularly around this idea of intent. That’s the first thing that really struck me about generative AI: what it doesn’t have is intent. That’s what humans have: intent. And I think this point around the mobile phone: essentially Africa leapfrogged. It led to mobile payments because it had the mobiles, and that’s what people had. And so it did lead the world in these technologies. I’m interested in thinking about whether there are other technologies now where Africa could leapfrog in the same way that it did with the applications of mobile phones. Shikoh: So specifically taking Mobile Money, right? We always say, it’s like a nice cliche that always says t

May 1, 2024 · 36 min

Tom Hope on AI to augment scientific discovery, useful inspirations, analogical reasoning, and structural problem similarity (AC Ep41)

“The unique ability of AI and LLMs recently to reason over complex texts and complex data suggests that there is a future where the systems can help us humans find those pieces of information that help us be more creative, that help us make decisions, and that help us discover new perspectives.” – Tom Hope About Tom Hope Tom Hope is Assistant Professor and Head of the AI Research Lab at Hebrew University of Jerusalem and a Research Scientist at the Allen Institute for AI. His focus is developing artificial intelligence methods that augment and scale scientific knowledge discovery. His work has received four best paper awards and been covered in Nature and Science. Google Scholar: Tom Hope LinkedIn: Tom Hope What you will learn Exploring the intersection of AI and scientific discovery The role of large language models in navigating and utilizing vast scientific corpora Current capabilities and limitations of LLMs like GPT-4 in generating scientific hypotheses Innovative strategies for enhancing LLM effectiveness in scientific research Designing multi-agent systems for more insightful scientific paper reviews Future projections on AI’s evolving role in scientific processes Complementarity of human and AI cognition in scientific discovery Episode Resources AI (Artificial Intelligence) LLM (Large Language Models) GPT-4 Claude PubMed Simulated annealing Swarm optimization AlphaFold Semantic Scholar Google Scholar People Nicholas Carlini (DeepMind researcher) Niki Kittur (from CMU) Joel Chan Daphna Shahaf Transcript Ross Dawson: Tom, it’s awesome to have you on the show. Tom Hope: Thank you, thank you for having me. Ross: I love the work which you are doing. And I suppose the big frame around this is how we can use computation to accelerate and augment scientific discovery. So, just to start off: what are some of the ways in which computation, including large language models, can assist us in the scientific discovery process?
Tom: One of the main ways I currently look at this is using large language models, and more generally AI, to tap into huge bodies of humanity’s collective knowledge. Scientific corpora are a great example: millions of papers, with over 1 million coming out in PubMed every single year. Of course, you have patents, you have many other sources of technical knowledge. And these sources of knowledge are potentially a treasure trove of many millions, if not billions, of findings, methods, approaches, perspectives, insights. But our human cognition, while extremely powerful in its ability to extrapolate and be creative and pull together all kinds of diverse perspectives, is still very limited in its ability to explore this vast potential space of ideas, this combinatorial space of all the different things you can combine and the different things you can look into. As our knowledge continues exploding, there are obviously going to be more and more directions to explore as a result. So this problem keeps accelerating as our knowledge accelerates. The unique ability of AI and LLMs recently to reason over complex texts and complex data suggests that there is a future where these systems can help us humans find those pieces of information that help us be more creative, that help us make decisions, that help us discover new perspectives. By taking our problem context, the current thing we are interested in and working on, a decision we want to make, and then somehow representing that in a way that enables retrieving these different nuggets or pieces of knowledge from these massive corpora, and synthesizing whatever was retrieved into some sort of actionable inspiration or insight that helps us make the decision. And potentially even automating some of these decisions and some of these hypotheses that we make as part of our process; there’s still a long way to go there. I guess we’ll talk about that right now. Ross: Yep.
Well, in one way, I’d also love to dig into some of the specifics and the details of the strategies for that. But just to start off, actually pulling back to the big picture: how do you envisage the complementary roles of human cognition and, let’s call it, AI cognition in this process of scientific discovery? Where might that go in terms of those complementary roles? Tom: So, we are living in quite revolutionary times in this area, right? I mean, things keep changing very rapidly. So to prophesize on what the ability of AI is going to be a year from now, or even a week from now, is a risky business, right? We can talk about what things currently look like. Currently, the ability of LLMs, as the representative of state-of-the-art AI, to extrapolate from what they have seen in their massive training data, like the entire web or the entire corpus of arXiv papers, let’s say, that ability is quite limited. In our experiments and experiments b

Apr 24, 2024

Céline Schillinger on network activation, curious conversations, podcasting for connection, and creative freedom (AC Ep40)

“Criticizing and blaming people, organizational culture, or the company for problems doesn’t lead you to a better place. What may lead you to a better place is to actually roll up your sleeves, connect with each other, and do something about it.” – Céline Schillinger About Céline Schillinger Céline Schillinger is Founder and CEO of We Need Social, which works with organizations globally on engagement leadership. She is the author of Dare to Un-Lead, which was Porchlight Leadership & Strategy Book of the Year and on the Thinkers50 Best Management Booklist. Previously she worked in senior roles in the pharmaceutical industry across many countries and continents. Her extensive awards include Knight of the French National Order of Merit. Website: www.weneedsocial.com LinkedIn: Céline Schillinger What you will learn Exploring the journey from entrepreneurial beginnings to corporate transformation The shock of transitioning to a large pharmaceutical company’s culture The power of forming an employee network to instigate positive change Challenging traditional hierarchies with network activation Leveraging digital tools and volunteer networks for organizational innovation Embracing agency, networking, and community for future-ready organizations Personal practices for amplifying individual capabilities and fostering connections Episode Resources Sanofi Network Activation Employee resource groups Community Studio   Book Dare to Un-Lead: The Art of Relational Leadership in a Fragmented World by Céline Schillinger Transcript   Ross Dawson: Celine, it’s a delight to have you on the show. Céline Schillinger: Thank you so much, Ross. Thanks for having me. Ross: So you work a lot with organizations and amplify their capabilities. And I think the really interesting starting point was, how is it that you think of what organizations are and how they function? What are the underlying principles that guide you? Céline: Yeah, you know, this question came to me quite late in life. 
And actually, I started my career in small organizations in a very entrepreneurial kind of setting. I was working in Asia at the time. I moved to Asia quite young, on my own, to look for a job, to look for adventure. And I started to build my career there. I spent years in Vietnam, and then in China, and then I joined a large pharmaceutical company, returning to Europe after about 10 years. And that was a shock for me, to discover this whole new world of large enterprise. It had a different language that I did not understand. I thought I was already sort of a seasoned professional, with 10 years of experience behind me, but I did not understand this new language. It was talking about frameworks and metrics and processes, and I wondered. I did not even understand the job description of the job offer I was responding to. It's so funny, I asked someone to help me decipher it. But that’s part of organizational culture, to have their own language and references and acronyms and all those things and ways of doing, of course. So I discovered the large enterprise. And for a while, I did not question or even wonder how it worked, because I was all in on the pleasure of discovery. It was all about experimenting and meeting new people, and it was great. And then progressively I started to realize that there are, how can I say, principles and ways of working which are kind of a religion, in a way – they do not emerge from the field or from common sense; the ways of working are prescribed and determined by habits and beliefs, and not necessarily by what would be needed by customers, by efficiency, and so on. And I had maybe this kind of ethnological view, coming from outside, coming from a very different world. I started to question this, and question my role in perpetuating models and behaviors that made no real sense. What was my role in maintaining that?
Could I contribute to changing them a little bit instead? But what could I do on my own? Probably nothing. But then, about 15 years ago, I joined forces with other colleagues, and we formed a network of people wanting to bring about positive change, not wanting to protest. So I didn’t join any union, for example. I joined a network – I co-created a network. And I remember the surprise, the puzzled look on the face of HR. HR did not understand what this thing was about. ‘An employee network. Well, what is it?’ It was before employee resource groups became popular, and it was really weird for them, some of them. I remember somebody asking me, who’s the boss of your network? I would say, we have no boss, it’s a network. But they felt like it was impossible to imagine another way of organizing than the one they were accustomed to in the organization: a pyramid with a boss, with a senior leader at the top, people reporting to him or her –

Apr 17, 2024

S2 Ep 39: Sangeet Paul Choudary and Ross Dawson debate AI in the future of work (AC Ep39)

“It’s less about human versus machine, and which human skills are important versus not; there are a lot of human skills where you will not be able to retain higher pricing power, just because you are highly substitutable despite your specific human skill.” – Sangeet Paul Choudary About Sangeet Paul Choudary Sangeet Paul Choudary is the Founder of Platform Thinking Labs and co-author of Platform Revolution, which has sold over 300,000 copies, and author of two other leading books on platform strategy. He is a keynote speaker and advisor to major organizations worldwide, and is a World Economic Forum Young Global Leader. Sangeet’s work has been featured on four occasions in the HBR Top 10 Must Reads collections. Substack: www.platforms.substack.com Website: www.platformthinkinglabs.com LinkedIn: Sangeet Paul Choudary What you will learn Exploring the impact of AI on skill premium and the future of work The rise of platforms and the commoditization of skills in the gig economy The dynamics between labor, talent, and capital in a technology-driven market How technology reshapes the value and distribution of work The role of AI in augmenting versus substituting human roles The significance of adaptability and learning in navigating technological change Strategies for individuals and organizations to harness AI for a prosperous future Episode Resources AI (Artificial Intelligence) Gig economy Uber ILO (International Labour Organization) GPS (Global Positioning System) Google Maps Centralized market making Fungibility Mechanical Turk BCG (Boston Consulting Group) Talent vs. Labor spectrum Reaganomics AI agents Analytic AI Generative AI Network economy Long tail distributions McKinsey Andreessen Horowitz People George Clooney Books Platform Revolution: How Networked Markets Are Transforming the Economy–and How to Make Them Work for You by Geoffrey G. Parker (Author), Marshall W.
Van Alstyne (Author), Sangeet Paul Choudary Platform Scale: How an emerging business model helps startups build large empires with minimum investment by Sangeet Paul Choudary Platform Scale for a Post-Pandemic World by Sangeet Paul Choudary Transcript Ross Dawson: Sangeet, it’s wonderful to have you on the show. Sangeet Paul Choudary: Thank you, Ross. It’s such a pleasure to be here. Ross: We’ve got a number of things to talk about. But you’ve been writing recently about the future of work, specifically the impact of AI on skill premium. And I have some probably somewhat different views here, so perhaps we can frame this a little bit as maybe not a debate, but a discussion. So first of all, can you lay out your case at a high level, and I can perhaps bring some other perspectives to bear as we go. Sangeet: Sure, absolutely. My interest in the role of technology in the future of work, or the impact of technology on the future of work, sort of started with the rise of platforms. The first thing that we saw with the rise of platforms was new ways to organize work in markets, and Uber and the gig economy are a great example of this, where work gets extremely commodified. When you think of a driver, whose natural advantage, apart from the license that they had for the taxi, was the ability to navigate the city, just the knowledge of the city, and then GPS with Google Maps comes in and commoditizes that particular knowledge. Now suddenly, anybody without that level of depth about navigating the city can become a driver. That essentially shows how a commoditized skill lends itself to centralized market making: the more a skill gets commoditized, the more it lends itself to centralized market making, where the right resource can be allocated to the right problem. That’s essentially what happens in gig economy platforms.
And Uber is an extreme example, where you don’t really care who’s coming to take you from point A to point B; it’s just a resource being allocated to a problem. Now, the idea that came out of that work, which I conducted with the ILO Future of Work Commission, was that the more skills get commoditized, the more they lend themselves to centralized market making; and the more a skill gets commoditized, the more agency and power, especially in terms of setting your price and a premium on your skill, moves away from you to the platform that’s making that allocation. And, broadly speaking, markets exist everywhere. It’s not just Uber; even if you’re working in an organization, you’re constantly being matched to opportunities, so there’s an internal market, and if you’re a freelancer, you’re obviously in an external market. But markets exist everywhere. That’s just the nature of the networked world we live in. So the essential idea starts with that: markets are everywhere, and centralized market making is a thing. And secondly, the more skill is commoditi

Apr 10, 2024 · 39 min

Charles Hampden-Turner on Mobius leadership, reconciling paradoxes, dilemma strategies, and conscious capitalism (AC Ep38)

“Conscious Capitalism suggests that if you do good by accident, why not do good deliberately? Look at the accidents and start doing them on purpose.” – Charles Hampden-Turner About Charles Hampden-Turner Dr. Charles Hampden-Turner is a British management philosopher, business consultant, and co-founder of consulting firm Trompenaars Hampden-Turner. He is the creator of dilemma theory and the author or co-author of numerous influential books, including Maps of the Mind, The Seven Cultures of Capitalism, and Mastering the Infinite Game. He has received many awards, including Guggenheim, Rockefeller and Ford Foundation Fellowships. Website: www.thtconsulting.com LinkedIn: Charles Hampden-Turner Fons Trompenaars at TROMPENAARS HAMPDEN-TURNER Facebook: Trompenaars Hampden-Turner X (Twitter): @FTrompenaars YouTube: Trompenaars Hampden-Turner What you will learn Exploring the genesis of “Maps of the Mind” The power of paradox in understanding the human mind Reflecting on a career; tying together themes of management and leadership The Mobius strip as a metaphor for solving complex problems Addressing societal polarizations through integrated thinking The role of conscious capitalism in today’s business world Visualizing paradoxes; the use of imagery in comprehending complex ideas Episode Resources Freud’s Id and Superego Jung’s Collective Unconscious Mobius Strip Yin and Yang Conscious Capitalism People Mitchell Beazley (Publisher) Gregory Bateson R.D. Laing W. Edwards Deming Ray Anderson Paul Polman Books Natural Capitalism by Paul Hawken, Amory Lovins, L.
Hunter Lovins Maps of the Mind: Charts and Concepts of the Mind and its Labyrinths by Charles Hampden-Turner The Seven Cultures of Capitalism: Value Systems for Creating Wealth in Britain, the United States, Germany, France, Japan, Sweden and the Netherlands by Charles Hampden-Turner and Fons Trompenaars Mastering the Infinite Game: How East Asian Values are Transforming Business Practices by Charles Hampden-Turner and Fons Trompenaars Transcript Ross Dawson: Charles, it’s an honor and a delight to have you on the show. Charles Hampden-Turner: Well, good to meet you. And if I can help you, let me know. Ross: Thank you. So I first came across your work when I was in a bookshop in Geneva, Switzerland, in 1981 or 1982 it must have been, and I saw on the table this book, Maps of the Mind, and it immediately resonated with me, because it was exploring all of these different models of what the mind is and how we think, and was able to not just explain those, but also to have a visual representation to show us what they were. And I’ve still got it, I still refer to it, and it really influences my thinking. It is so useful to have these maps of the mind to help us understand the way we think, to bring that to life. So I’d love to just hear the genesis of Maps of the Mind, and some reflections back, from quite a few years later, on those wonderful projects you did. Charles: Well, I knew Mitchell from Mitchell Beazley, and he was always producing encyclopedias, including The Joy of Sex and other things. And he said he wanted to do something on the mind. So I approached him and said I could create 60 visions of the mind, all of which I loved, asking myself, why did I love them? It’s because they were consistent, because they had something in common. I hadn’t in those days worked out what they had in common. But once I finished, I began to see what they have in common.
What they have in common is that they are all paradoxes: starting with Freud’s id and superego, which are about as different as you can get, and Jung’s collective unconscious and libido, etc. And if you go all the way through the book, you’ll find every map has a duality. And every map has a reconciliation of that duality. But I only realized that in retrospect, and I longed to add a chapter explaining that the whole book is part of an overall pattern. Ross: I think your selection of the models in the book actually reflects that, in that many of them are quite explicitly about paradoxes, such as Gregory Bateson or R.D. Laing or others that you chose. So I think that framing and the choices you made already implicitly suggested that you had the pattern in your mind. Charles: Yes, I did. But you don’t know what your subconscious is doing. Ross: So you’ve written many books on management and cross-cultural leadership, essentially around what it is that drives value in organizations. And more recently, you are working on a book which, you’ve said to me, ties all of your life’s work together. So how could you tie together, or pull together, all the threads of this marvelous work through your life? Charles: I was thinking about the German mathematician Mobius. And he created the well known Mobius stri

Apr 4, 2024

S2 Ep 37: Charlene Li on generative AI strategy, AI book editors, prompt libraries, and wisdom hacking (AC Ep37)

“My hope is that in the future, just as we are so focused on acquiring knowledge in our schools and our educational system, we will also be focused on acquiring wisdom, if we know how to measure it and how to develop it.” – Charlene Li About Charlene Li Charlene Li is an author, speaker, advisor and coach. For the last three decades she has worked at the edge of disruption, working with hundreds of major organizations, and founding the prominent analyst firm Altimeter Group. She is the New York Times bestselling author of six books, including Open Leadership and Groundswell. Her latest book, Winning with Generative AI: The 90-Day Blueprint for Success, lays out a master plan for generative AI strategy. Website: CharleneLi.com LinkedIn: Charlene Li Facebook: Charlene Li X (Twitter): @CharleneLi Instagram: @CharleneLi YouTube: @CharleneLi What you will learn Exploring the edge of disruption with Charlene Li Amplifying cognition through generative AI The transformative role of AI in research and writing Developing the AI-powered development editor Customizing generative AI for personalized guidance Generative AI in email communication and productivity enhancement Crafting strategic business proposals with AI insights The importance of sharing and learning AI prompt libraries within organizations The strategic impact of generative AI on competitive advantage Harnessing generative AI for customer experience and operational efficiency Wisdom hacking: Enhancing decision-making and leadership Episode Resources Generative AI Large Language Models (LLM) Custom GPT models AI prompt libraries Blaze.Today Books Winning with Generative AI: The 90-Day Blueprint for Success by Charlene Li The Disruption Mindset by Charlene Li The Engaged Leader by Charlene Li The Seven Success Factors of Social Business Strategy by Charlene Li Open Leadership by Charlene Li Marketing in the Groundswell by Charlene Li Groundswell by Charlene Li Transcript Ross Dawson: Charlene, it’s
awesome to have you on the show. Charlene Li: Thank you for having me. Ross: So, how do you amplify your cognition? Which is a pretty apt place to start. Charlene: A good question. Well, I research and write my books. And especially with AI coming along, I looked at all the things that I do, and I realized that AI can address 70% of it; it just makes everything so much easier. So a couple of things. First of all, just researching: it’s so much faster now. With generative AI, I use it constantly. Whenever I have a question, I’ll use one version of an LLM, probably Perplexity, and Bing as well, to just get me the information and get into it a lot faster. I may find all those reports, bundle them all together, and ask it to summarize them. And then on a specific point, I'll dig down deeper. It’s almost like having a research assistant. So all the things I used to have a research assistant do for me, now I just have generative AI do for me. And it’s much faster and better, and an expert on every subject that I could ever imagine. And then I also use it when I’m writing. I’ll write a first draft, like a fat outline, and I’ll give it to an AI and say, you’re my development editor. Tell me, what am I missing? Ask me questions. So it’s literally a thought partner to make my thinking better, because it can look at things from a different perspective. And I can give it different roles, different perspectives: you’re an expert, you’re a brand new manager, look at this topic from many different directions and ask me questions on how I can make it better. So those are some things, and then I can use it to actually help me piece together some of the writing later on in the process. Ross: Just to dig into each of those a little bit, first of all on that development editor. Is this a to-and-fro process, where the development editor comes up with this list of suggestions, which you then use to refine the text?
So let's drill into that sequencing, if you’ve done a draft. That’s one interesting point: you do the first draft yourself before you give it to the AI? So what is the detailed process? Do you simply say, take the role of development editor, and then take or leave its suggestions? Is it iterative? How do you get there? Charlene: I actually have my own private GPT. It has all of my writings in it, it has my book outline, and it has all the previous chapters that I’ve worked on, because I am working on this book about how to use generative AI as a business. And I put all of that in as context, and it has general instructions on how to be a great editor, so I don’t have to repeat that over and over again. So it’s customized; I call it the book-wise editor. And then I just start asking her questions: here’s my lat

Mar 27, 2024 · 31 min

Philipp Schoenegger on AI-augmented predictions, improving human decisions, LLM wisdom of crowds, and how to be a superforecaster (AC Ep36)

“One of the main strengths of the current generation of large language models is their interactive nature: providing a highly competent model that people can interact with and query however they want.” – Philipp Schoenegger About Philipp Schoenegger Philipp Schoenegger is a researcher at London School of Economics working at the intersection of judgement, decision-making, and applied artificial intelligence. He is also a professional forecaster, working as a forecasting consultant for the Swift Centre as well as a ‘Pro Forecaster’ for Metaculus, providing probabilistic forecasts and detailed rationales for a variety of major organizations. Website: Dr. Philipp Schoenegger LinkedIn: Philipp Schoenegger, PhD X (Twitter): @SchoeneggerPhil What you will learn Exploring the intersection of AI and human decision-making The catalytic effect of ChatGPT on modern research The fundamentals of AI-augmented forecasting Unpacking the wisdom of AI crowds The journey to becoming a superforecaster Navigating the blend of human intuition and AI computation Insights into the future of AI-enhanced judgment Episode Resources Artificial Intelligence (AI) Large Language Models (LLMs) ChatGPT Judgment and Decision Making Superforecasting Philip Tetlock AI Augmentation The 10 Commandments of Forecasting Alibaba Claude (Language Model) PaLM (Language Model) External vs. Internal View in Forecasting International Energy Agency (IEA) Metaculus (Forecasting Platform) Papers AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy Wisdom of the Silicon Crowd: LLM Ensemble Prediction Capabilities Rival Human Crowd Accuracy Transcript Ross Dawson: Philipp, it’s wonderful to have you on the show. Philipp Schoenegger: Thank you so much. Thanks for the invitation. It’s great to be here.
Ross: So on your website, you have this very interesting diagram which shows that your current research is around the intersection of judgment and decision making, and applied artificial intelligence. So what is that space? And how have you come to it? What is it that pulled you to this particular space? Philipp: I think what really motivated me to work in this area is what motivated many other people to jump into AI, and this was just the release of ChatGPT in late 2022. I hadn’t been working in artificial intelligence before; I have a social science and humanities background, having worked on charitable giving and political philosophy. But having seen ChatGPT, I think it took 10 days until I had my first research project. And our first idea was, how can we mimic social science participants with artificial intelligence? So what we did is we ran a bunch of studies that had been replicated in humans with text-davinci-003, the early ChatGPT-era model. And ever since, I’ve never looked back, and I’ve pretty much only wanted to do more AI work. It’s way too interesting. At this point it’s pretty much all of my work. Ross: I think we’re pretty aligned on that. This intersection of human intelligence and artificial intelligence is so deep, so promising, so much potential. And it’s wonderful to see the work that you’re doing. So speaking of which, you were recently lead author of a paper, AI-Augmented Predictions: LLM Assistants Improve Human Forecasting Accuracy. So first of all, let’s just describe the paper at a high level, and then we can dig into some of the specifics. Philipp: So the basic idea of this paper is, how can we improve human forecasting?
Human judgmental forecasting is basically the idea that you can query a group of varied, and sometimes lay, people about future events, and then aggregate their predictions and arrive at surprisingly accurate estimations of future outcomes. This goes back to the work on superforecasting by Philip Tetlock. There are a lot of different approaches to improving human prediction capabilities, for example training, like the so-called 10 commandments of forecasting on how you can forecast better, or conversations where different forecasters talk to each other and exchange their views. And we wanted to look at how we could improve human forecasting with AI. I think one of the main strengths of the current generation of large language models is their interactive nature, the back and forth with a highly competent model that people can interact with and query however they want. They might ask the model, ‘Please help me with this question. What’s the answer?’ They might also just say, ‘Here’s what I think, please critique it.’ And so this opens up, for human forecasters, a whole host of different interactions. And we wanted to see what the
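The aggregation step Philipp describes, collecting many individual probability forecasts and combining them into a crowd estimate, can be sketched in a few lines. The numbers below are hypothetical, and mean-averaging plus Brier scoring are just one standard choice of aggregation rule and accuracy metric, not necessarily what the papers used.

```python
def aggregate(forecasts):
    """Wisdom-of-crowds aggregation: take the mean probability.
    (Medians or extremized means are common alternatives.)"""
    return sum(forecasts) / len(forecasts)

def brier(prob, outcome):
    """Brier score: squared error between a forecast and a 0/1 outcome.
    Lower is better; 0.25 is what a coin-flip forecast of 0.5 scores."""
    return (prob - outcome) ** 2

# Hypothetical probabilities from five forecasters for one event that occurred.
crowd = [0.60, 0.72, 0.55, 0.80, 0.65]
outcome = 1

crowd_score = brier(aggregate(crowd), outcome)
mean_individual_score = sum(brier(p, outcome) for p in crowd) / len(crowd)

# Because the Brier score is convex, the aggregated forecast can never score
# worse than the average individual forecaster (Jensen's inequality).
print(crowd_score, mean_individual_score)
```

This is why a "silicon crowd" of LLM forecasts, like a human crowd, can beat its own average member simply by pooling.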

Mar 20, 2024

Bryan Cassady on AI innovation, Humans + AI idea evaluation, increasing diversity with AI, and evidence-based innovation (AC Ep35)

“AI has an amazing number of limitations; it is horrible at a lot of things. But when you use it smartly, it becomes an important part of your team. And with it as an important team member, your team gets better.” – Bryan Cassady About Bryan Cassady Bryan is the founder and director of the Global Entrepreneurship Alliance, a foundation with a mission to coach or train 1 million entrepreneurs by 2027. He has built 8 successful companies in 6 countries. Bryan has taught innovation and entrepreneurship at numerous leading universities around the world, and is author of the book CYCLES. Website: www.bryancassady.com LinkedIn: Bryan Cassady X (Twitter): @bryancassady What you will learn The impact of AI on innovation; enhancing efficiency and creativity Bridging knowledge in AI and innovation for systematic success Transforming idea generation; the synergy of AI and human creativity AI’s role in identifying and defining the right problems for innovation Leveraging AI for more effective team alignment and idea evaluation Exploring AI’s capability in improving communication and idea pitching The importance of diversity in teams, augmented by AI for better outcomes Episode Resources AI (Artificial Intelligence) Idea Generation (Ideation) Idea Evaluation Market Scoping Jobs to Be Done (JTBD) Framework Brainwriting Technique Book CYCLES: The simplest, proven method to innovate faster while reducing risks by Bryan Cassady Transcript Ross Dawson: Bryan, it’s awesome to have you on the show. Bryan Cassady: Thank you for having me. I’m glad to be here. Ross: So you dig deep into AI innovation. So what’s the premise of AI innovation? What does it mean? And how do people start on that journey? Bryan: Innovation is really, really important. It’s what makes the world turn around. And the question that comes up for me time and time again is, how can you be more effective in innovation? How can you do it better, faster, easier? And I came across AI about a year and a half ago.
And I’m just amazed at how much better things can become. And my personal take is trying to find the facts that back up where it works, and where it doesn’t work. Ross: So let’s say, as a starting point, we go to an executive team; they say, right, innovation is important to us, we’ve got some processes. Now, what would be the first steps in introducing AI into their innovation process? Bryan: For me, the challenge is to have two bits of knowledge: one, knowledge around innovation – what is innovation, what are you trying to do, what are the facts – and secondly, domain expertise in terms of AI. And you have to bring the two of them together. The first and most important bit of knowledge around innovation is that it works when the system works. It’s a system-driven process. We tend to look and say, no, it’s those people, they’re not being creative, they’re not doing what they should. But if innovation is not working in your company, it’s what you as management are doing; it’s not what your people are doing wrong. And the second thing is to look at innovation as a process. You have to do a lot of things right; it’s not enough to do one thing right. And everybody seems to focus on idea building, but idea building is actually the easiest part of the innovation process. What I would look at is ways that you can use AI to get aligned better, to communicate better, to pitch better, to evaluate ideas better. And there’s lots of cool stuff that can be done there. I hope we get a chance to share some of the cool stuff we’ve been doing around this in the next few minutes. Ross: Absolutely. Well, I want to dig deeper on a lot of levels. But to ground this, can you give a specific example of how AI has been introduced and been valuable in the innovation process of an organization? Bryan: Well, everybody looks at ideation as the core of innovation.
So I’ll start there, because that’s what people are interested in. And there’s a lot of research showing that if you take a typical person and give them AI to use, they become more creative and more effective, and the impact is biggest on the lowest-performing people in your organization; it’s an evening-out factor. But what people forget in the process is that you have to use it effectively. We just completed some research where we evaluated 5,400 ideas generated by humans and AI, and a few things came out. One is that AI, on average, doesn’t build better ideas; AI builds a lot more ideas. But when you take AI and humans intelligently together, your hit rate goes up, and it goes up amazingly. By hit rate I mean the percentage of really good ideas. So if I look at an average human,
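The hit rate Bryan describes, the percentage of a batch of ideas judged "really good", is straightforward to compute. The scores and the quality threshold below are invented for illustration; they are not data from his 5,400-idea study.

```python
def hit_rate(scores, threshold=8):
    """Share of ideas whose evaluation score clears the 'really good' bar."""
    return sum(1 for s in scores if s >= threshold) / len(scores)

# Hypothetical 0-10 panel scores for ten ideas from each condition.
human_only = [4, 7, 9, 3, 6, 8, 2, 5, 7, 4]
human_plus_ai = [6, 8, 9, 7, 8, 9, 5, 7, 8, 6]

print(hit_rate(human_only))     # 0.2
print(hit_rate(human_plus_ai))  # 0.5
```

The point of the metric is that it rewards the proportion of good ideas, not the raw volume, which is why "AI generates more ideas" and "humans plus AI have a higher hit rate" are different claims.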

Mar 13, 2024

S2 Ep 34: Marek Kowalkiewicz on the economy of algorithms, armies of chatbots, LLMs for scenarios, and becoming minion masters (AC Ep34)

“We humans need to be the ones who work with algorithms to make sure that they don’t take the wrong path, don’t deteriorate, don’t misunderstand our intentions, and don’t create outcomes that we don’t want.” – Marek Kowalkiewicz About Marek Kowalkiewicz Professor Marek Kowalkiewicz is founding director of the Centre for the Digital Economy at Queensland University of Technology, where he leads a portfolio of research projects into AI and the digital economy. He was recently named in the Top 100 Global Thought Leaders in AI by Thinkers360. Marek’s new book, The Economy of Algorithms, is out today as we launch this episode. Substack: www.marekkowal.substack.com LinkedIn: Prof. Marek Kowalkiewicz What you will learn Exploring the economy of algorithms The role of software and people in shaping digital futures Navigating the balance between automation and human oversight The metaphor of digital minions in the algorithmic world Engaging with generative AI and chatbots for practical tasks Implementing AI responsibly in academic and commercial research The importance of human agency in the age of automated decision-making Episode Resources ChatGPT prompt engineering Google Bard (Gemini) generative AI Llama AI shadow automation shadow generative AI large language model Books The Economy of Algorithms: AI and the Rise of the Digital Minions by Marek Kowalkiewicz Transcript Ross Dawson: Marek, it’s awesome to have you on the show. Marek Kowalkiewicz: Thanks so much for having me, Ross. Ross: So I love the work that you do, and I think it’s a very interesting frame. You’ve just got a new book coming out called The Economy of Algorithms, which we want to hear more about. Marek: Very proud of this; I spent a couple of years working on this book. So it’s a labor of love. It’s very much a book about this new world that’s emerging. Strangely, it’s called the Economy of Algorithms, but it’s not just algorithms; it’s software, it’s people, it’s corporations.
I wanted to write about this world where, increasingly, we’re giving more and more agency to software agents, right, or algorithms. We’ll let them buy things, we’ll let them sell things, and we’ll let them deliver services on our behalf or to us. And so in a way, they’re starting to behave a bit like humans, or like other organizations in the economy. So I thought, you know, we need to capture it somehow, and that was the spin in the Economy of Algorithms. Ross: There are some things that I’d like to dig into. One is about the architecture. You have these smart people with PhDs working in big tech on AI algorithms, who are designing the architecture of these algorithms. And I think there is also an element in the everyday, where all of us who are interacting with AI play a role in shaping that human-AI relationship. How much of this happens at the level of the macro design of the AI architecture, and how much of this is shaped by each of us who uses it? Marek: So there is, I think, an interesting trajectory, or a process of learning about what’s happening in the world. The more I thought about automation and giving autonomy to technologies, the more I realized how important the role of humans is in the entire process. So, while I will not question the fact that algorithms are taking over individual tasks, and that algorithms are performing entire jobs that used to be done by humans, I realized, through seeing a lot of examples of how it played out in reality, that, in fact, we humans need to be the ones who work with algorithms to make sure that they don’t take the wrong path, don’t deteriorate, don’t misunderstand our intentions, and don’t create outcomes that we don’t want. In fact, when you look at the subtitle of the book, it’s AI and the Rise of the Digital Minions. So I specifically wanted to refer to those minions that some of your listeners might know from the Minions movies, right.
So, you know, there are those yellow creatures that want to be helpful. But if you let them do whatever they want, they will create all sorts of disasters. And I thought, that’s a perfect metaphor for a lot of algorithms in our economy. They’re a bit like digital minions. They want to be helpful. They want to work all the time for us. But sometimes they’re not very smart, and you know, even though they think they’re being helpful, they’re creating a lot of disasters. Ross: So, I want to get to some of the broader points around the skills that we can develop, which you addressed in the later sections of your book, but just wanted to start with some of the specific things that people can do. So there are many tactics and approaches, and people describe some of these things as prompt engineering, but what are some of the thi

Mar 6, 2024 · 32 min

S2 Ep 33: Louis Rosenberg on conversational swarm intelligence, group solution convergence, and future advances in collective intelligence (AC Ep33)

“When you can maximize the collective conviction of the group rather than just aggregating their gut reaction with no sense of conviction, you get significantly more accurate answers.” – Louis Rosenberg About Louis Rosenberg Louis Rosenberg is CEO and Chief Scientist of Unanimous A.I., which amplifies the intelligence of networked human groups. He earned his PhD from Stanford and has been awarded over 300 patents for virtual reality, augmented reality, and artificial intelligence technologies. He has founded a number of successful companies including Unanimous AI, Immersion Corporation, Microscribe, and Outland Research. His new book Our Next Reality, on the AI-powered Metaverse, is out in March 2024. Websites: louisrosenberg/bio www.unanimous.ai LinkedIn: Louis Rosenberg Unanimous AI Discord: @SwarmDao Facebook: @UnanimousAI X (Twitter): @UnanimousAI What you will learn Exploring collective intelligence lessons from nature Understanding real-time adaptation in natural systems and individual convictions Deciphering conviction signals from bee waggle dances Unveiling conviction dynamics in collective decision-making Exploring the limits of group conversations Enhancing group conversations with AI insights Envisioning the future of large-scale group conversations Episode Resources swarm intelligence collective intelligence statistical aggregation Cocktail Party Problem Sir Francis Galton MIT Stanford University   Books Our Next Reality (2024) One of US (2021) Arrival Mind (2020) Monkey Room (2014) EONS (2013) UPGRADE (2012)   Transcript Ross Dawson: Louis, it’s wonderful to have you on the show. Louis Rosenberg: Yeah, thanks for having me. Ross: So, swarm intelligence is something that seems to have moved to the center of your life. So tell me, what is swarm intelligence? And also, why has it captured your imagination? Louis: Yeah, yeah. So, I spent my whole career looking at technologies that can be used to amplify human abilities.
It started out researching technologies like virtual reality, augmented reality, and mixed reality, back 30 years ago. And about two decades ago, I started kind of transitioning my interest from how do you amplify the abilities of single individuals to how do you amplify the abilities of groups. Can you use technology to make groups of people smarter? Now, there’s existing research that has been around for 100 years in a field called collective intelligence, where it’s pretty well known that you can take a group of people and ask them a question. The most famous example was about 100 years ago, an experiment by Sir Francis Galton, where he asked about 800 people to estimate the weight of an ox. He took all their individual estimates, created a statistical aggregation, and the group was smarter. And that birthed this field of collective intelligence. Sometimes people call that the wisdom of crowds. And about a decade ago, it really struck me that the techniques most people are using in modern times really haven’t changed that much from 100 years ago. Most collective intelligence methods are about collecting information from individuals, aggregating it, and seeing an increase in intelligence; not a massive increase, but a real increase. And so I did what a lot of people do in a lot of different technology areas: look to nature. How does nature solve this problem? It turns out that nature and evolution have been wrestling with this issue of group intelligence for hundreds of millions of years, and have independently evolved methods in a lot of different species that solve this problem. It’s the reason why birds flock and fish school and bees swarm. They can make better decisions together in groups than they can as individuals. And it turns out that nature does not do it the way people do. It doesn’t survey a bunch of individuals, take the statistical average, and use that as the solution.
What nature does is form systems, real-time systems, and biologists call those systems swarms. So whether it’s a swarm of bees or a school of fish, it’s referred to as swarm intelligence, because it’s a real-time system. And these natural systems are pretty remarkable, and they are a really good inspiration for how we can make human groups smart. If you think of a school of fish, for example: thousands of members, nobody’s in charge, and yet they can make decisions as a unified system. Decisions made so quickly that a predator can approach and the whole school of fish will evade that predator as a single unit, and yet there’s nobody in charge. But it’s not just evasion of predators. A school of fish makes decisions as a group, as a collective, to navigate the ocean and find food and seek waters that are more amenable to their survival. And these species have been around for hundreds of millions of years making decisions like this. And their decisions

Feb 28, 2024 · 37 min

Paul Smith on the future of boards, collective decision-making, deep democracy, and AI in the boardroom (AC Ep32)

“Technology-augmented boards—let’s assume that for the next 20-30 years, they will still be human boards. We’re not going to see super-intelligent AI taking over. Boards are still relevant in that respect; the level of decision-making they’ll be doing will be elevated. They’ll certainly be using technology to support their decision-making.” – Paul Smith About Paul Smith Paul Smith is Founder of Future Directors, which focuses on the future of boards and corporate governance. He speaks regularly around the world on the themes of board performance, inclusive decision-making, governance technology, and the concepts of the ‘Future Director’ and ‘Future Boardroom’. Website: www.futuredirectors.com www.janegoodall.org   LinkedIn: Paul Smith Future Directors   Instagram: @futuredirectorsinstitute Facebook: @futuredirectorsinstitute   What you will learn Understanding the role and dynamics of boards in decision-making Leveraging sub-committees and deep democracy in board decision-making Exploring generative AI as a creative tool in boardroom decision-making Using generative AI to introduce counter-views and support critical thinking Introducing AI into the boardroom and addressing the challenges of adoption Enhancing board member performance and accountability with technology and self-improvement techniques Episode Resources Deep Democracy Iceberg Analogy Generative AI ChatGPT   Transcript Ross Dawson: Well, it’s wonderful to have you on the show. Paul Smith: Thanks, Ross. Thanks for having me. Ross: So, you help boards amplify their collective cognition? I gather that’s part of what you do. Paul: That’s a part of what I do, yes. There’s working with boards directly, to help them make better decisions on behalf of all stakeholders. But also, my business Future Directors is developing and has gone into the market with a SaaS platform to help boards manage their board business, create more data insights, and educate and build capacity along the way as well.
And that’s all about accessibility. So it’s taking away the need for a human, that consultant or trainer, to be part of that journey. Ross: Let’s frame that as cognition. So we have individual cognition: taking in information, making sense of it, hopefully making some decisions. A board is a particular set of individuals, whatever it is, eight, 10, 12, or more, and I think it’s very useful to frame the cognition of a board in terms of how it is they find relevant information, make sense of that, and make decisions. So what are some of the approaches which can help a set of individuals who end up around a table to make better sense of the world and move towards better decisions? Paul: Yeah, such a great question. And, you know, to think of the board as a collective unit is so important, as a collective decision-making unit: that’s what they’re there to do. They’re there to guide and steward a company, organization, or institution forward. Most boards range from a few people through to, as you said, much larger numbers; some boards are 20-plus. The optimum sweet spot is in those high single figures, to make sure you’ve got enough cognitive variance. I think the other thing to say, to give context to people listening to this around the boardroom, is that most boards are not together all the time. They meet periodically, that’s the nature of boards. They might meet once a month, or once a quarter, or whatever it happens to be, and they are charged with making decisions at the higher end of a business. So the governance end of the business: the strategic side of things, the risk management, long-term decision-making, as opposed to the operational day-to-day. So really, there are two parts to this which are really important. One is the information they receive.
Most boards are responsive to the information they receive from management, or the executive, depending what you call it, and the conduit for that is the CEO. Their responsibility is to ensure that they’re getting the right level of information in order for them to make those decisions. But most tend to delegate that responsibility outwards, whereas the best boards seek out their own information as well, both individually and collectively, to supplement not only the information they’re receiving from the internal teams, but also their own arguments and opinions when it comes to debating and discussing a particular decision. The second part of that is the culture of the board itself. What is the decision-making culture? Many boards are quite autocratic, or maybe HiPPO, which is, you know, the highest paid, loudest person type of thing, right. But the most effective boards understand the balance for ensuring that you hear as many voices as possible, but make sure they’re rel

Feb 21, 2024

Sasha Wallinger on the intersection of fashion and technology, hidden connections, nature and culture, and nurturing minds (AC Ep31)

“It’s really exciting, truly. We have so much information at our fingertips and so much connectivity at this point in time, to go about pursuing your passion and to go about really finding that authentic truth within who you are, and how you can come out into the world.” – Sasha Wallinger About Sasha Wallinger Sasha Wallinger is founder of Blockchain Style Lab, a team of strategists, researchers, and world builders that provides Web3 Advisory services, and acts as Chief Marketing Officer for major brands. She has led global teams for brands such as H&M and Nike, and recently launched the Gucci Superplastic NFT collectibles. Her passion for translating art and science, nature and culture, and design and data is evident in this conversation. Website: www.sashawallinger.com LinkedIn: Sasha Wallinger Instagram: @blockchainstylelab Twitter (X): @SashaWallinger What you will learn Weaving nature, culture, and innovation Bridging virtual and physical realms in the fashion and sustainability nexus Pioneering the digital-physical hybrid in community building Reflecting on technology’s evolution from the Industrial Revolution to AI integration Innovations in everyday technology transforming personal and fashion experiences Exploring the frontier of bio-scientific materials in fashion and wellness Emphasizing digital detox and mindfulness for grounding and future-thinking Episode Resources Zoom Superplastic SuperGucci, Superplastic Roblox Meta Google OpenAI Dall-e ChatGPT Microsoft CuteCircuits Iroquois precept   People Gary Snyder Wayne Thiebaud Pharrell Williams Book Mind and Nature – a Necessary Unity by Gregory Bateson Transcript   Ross Dawson: Sasha, it’s a delight to have you on the show. Sasha Wallinger: Thank you, I’m thrilled to be with you. Ross: So, you talk about creativity, community, and collaboration as central to your work and interests. Tell me more.
Sasha: Sure, I mean, I think they’re huge topics, and that’s why they’re potentially suitable for all that I’m attempting to achieve. They come from the desire to connect both nature and culture, the desire to weave together fashion, sustainability, and technology, and truly just to enjoy that which I do with a bunch of people. So, for creativity, I really take a lot of inspiration from design, art, music, and all areas of creativity. I’ve also been immersed in bioscientific materials and biomimicry, and in those different ways that we look outside of the expected into places that can be a little bit unique and unexpected. So I think that creativity allows me to have a dialogue with artisans, both accomplished artists and up-and-coming artists, but also to look to nature for inspiration when having those discussions. And then to develop a community. I mean, I really do enjoy bringing people together, I enjoy having conversations. And certainly, as a journalist, I really love listening to people’s stories. So that’s how I find community, really woven through the thread of what I’m up to, both as a marketer and a communicator, but also as a curious individual who’s constantly learning. And in community, I don’t think it’s great to only learn in silos, so I try to blend more of the learnings with groups that I’m able to be a part of. And collaboration is, I think, increasingly just the name of the game, holistically, across whatever space, industry, or reality you’re choosing to be a part of. You know, I’ll go into this a little bit more, but I traverse both the physical and virtual worlds, where I do feel like collaboration is critical and helps us as a society, and I guess a world, to move forward. So that’s just the tip of the iceberg. Ross: Well, there’s so much to dig into there. But maybe let’s start with that intersection of the virtual and the physical.
I suppose, across those domains, so just collaboration: people being able to work together to do more. I would love to hear specifics around the work you do and what you’ve been doing around how it is we can collaborate, or be more creative, or foster communities at the intersections of virtual and physical worlds. Sasha: Sure, I think that’s a very pertinent and important question to unpackage. I became interested in the connection between virtual and physical worlds before the pandemic, but certainly understood the potential for that type of ecosystem to be fostered at fast-track speed during the time at which we were so siloed and so on our own, let’s say, in our own different worlds, and almost forced into a sense of a Metaverse, or an ecosystem that was virtual and physical, toeing the line at the same time. So, on Zoom calls, even meeting up with friends in gaming ecosystems. And having had a fashion sustainability background, I saw how difficult it was to actually connect with brands, museums, even entertain

Feb 14, 2024

Kes Sampanthar on centaurians, augmented intelligence, diegetic prototyping, and unique human thinking (AC Ep30)

“Read broadly because your uniqueness will come from the corpus of information your brain has trained on.” – Kes Sampanthar About Kes Sampanthar Kes Sampanthar is Managing Director at BCG BrightHouse, leading Innovation + Purpose. He is an award-winning innovator, technologist, game designer, and consultant to some of the world’s largest organizations. He speaks extensively on technology, design thinking, innovation strategy, and behavioral change, and is the author of the Substack, The Centaurian. LinkedIn: Kes Sampanthar Substack: @thecentaurian Twitter: @KesSampanthar What you will learn Navigating the confluence of artificial intelligence and human empathy Augmenting human potential in the age of Generative AI Exploring the paradox of generative AI in creativity and competition Shaping the future with diegetic prototyping Reframing competition and innovation in the AI era Unlocking the synergy between human creativity and AI Decoding the architecture of thought, from cognitive blueprints to AI applications Episode Resources ChatGPT Deep Blue ChessBase Stockfish BCG Harvard Research LLM Netflix YouTube Roblox GitHub Copilot Amazon Alibaba People Marvin Minsky John McCarthy Usain Bolt Garry Kasparov Magnus Carlsen Stanislas Dehaene Jeff Hawkins Charlie Munger Andy Clark   Book Thriving on Overload: The 5 Powers for Success in a World of Exponential Information by Ross Dawson   Transcript Ross Dawson: Kes, it’s wonderful to have you on the show. Kes Sampanthar: It’s great to be here, Ross. Thank you for inviting me. Ross: So tell me Kes, what is a Centaurian? Kes: A Centaurian is somebody who uses AI to augment their ability to think, their ability to work, their ability to engage with the AI, to achieve what I call ‘augmented intelligence’, in a way that continues how we have been slowly evolving our brains, as we hit what I see as the next stage of evolution. Ross: I very much agree. So, how did you come to get here?
You know, just in a nutshell, how did you come to be focusing on this very important topic? Kes: Long journey, like many of us. I actually started AI research 30 years ago. I was doing neural networks, genetic algorithms, parallel computation, as part of academic research, and then I lost funding and ended up going to a think tank, and consulting, starting a number of startups that dove into neuroscience over the last 20 years. The 90s, the decade of the brain, led me there, and I ended up developing a behavioral design methodology called motivational design. And then, over the last decade or so, I slowly got back into AI and started to use it, obviously, as machine learning started evolving and data science started exploding. And then, most recently, what I really loved when ChatGPT finally got to that stage was that I realized we were close to what I’d been envisioning for a long time: the idea of AI which I’d been looking at for a decade now. And then with hands-on access, hands-on experimentation with Vision Pro, I realized that they’ve solved a lot of the problems I’d been identifying. So this idea of augmented reality meets augmented intelligence was what I thought I’d been waiting for for a long time. Ross: A lot of people, when they look at this, are really focused on the AI, and seem to be saying, ‘Well, how do you make the AI better?’ There’s a relatively small number of people who ask, ‘Well, how does the AI make humans better?’ So, what is it about you that makes you focus on that? Kes: At some level, it really is like computing went off in two directions very early on. So when I first started, you had Marvin Minsky and John McCarthy going on AI. And honestly, when I was younger, that’s where I thought I wanted to be: AI research. And I was very excited about neural networks.
But I slowly started realizing that to understand how to create AI, I needed to study neuroscience and human behavior. And as I sort of grew up, I realized that it was very important to focus on humans. So I’ve spent the last 20 years really on human-centered design, behavioral design: how do you ensure the prosperity of humans going forward? We’re a very unique species in a lot of ways. I realized that we’re the first species in evolution that not only has the intellect which allows us to understand things; we’re also the most empathetic organism, which cares not only about ourselves, but about other living systems and the universe at large. So at some level, I wanted to make sure, as we move forward, that we augmented humans and brought them along, keeping this arc of progression going. So I felt like, as much as there are a lot of people who will come to the math and science of AI, which I love. Bu

Feb 7, 2024