
AI Snacks With Romy & Roby: Democratizing AI Technologies
Dr. Anastassia Lauterbach: Democratizing AI Expert
Show overview
AI Snacks With Romy & Roby: Democratizing AI Technologies has been publishing since 2024 and, in the two years since, has built a catalogue of 74 episodes, roughly 50 hours of audio in total. Releases follow a fortnightly cadence.
Episodes typically run 35 to 60 minutes, with most landing between 35 and 45 minutes, and run time is fairly consistent across the catalogue. None of the episodes are flagged explicit by the publisher. It is catalogued as an English-language Education show.
The show is actively publishing — the most recent episode landed 3 days ago, with 20 episodes already out so far this year. Published by Dr. Anastassia Lauterbach: Democratizing AI Expert.
From the publisher
AI Snacks with Romy&Roby is a podcast that translates AI and robotics technologies from complex scientific concepts into easy-to-understand discussions, making them accessible for teens, parents, teachers, and anyone curious about AI. Through real-world stories and expert interviews, the show is dedicated to democratizing AI knowledge and empowering the general population to understand how AI is developed and applied in everyday life. The podcast is part of the Romy&Roby and AI Edutainment universe.
Latest Episodes
View all 74 episodes
74: Rethinking the Geometry of AI: Inside the Mind of an Independent Researcher Building a New Theory of Artificial Neurons with Matthew M Murphy
73: Can AI Save You Money at the Grocery Store? Meet the Founder Building the 'Expedia for Groceries' with Andy Elwood
72: Human-AI Relationship: Discussing "After Yang" with Rae Muhlstock
71: Quantum Computing Explained: Qubits, AI, and the Race to Build the Future with Jonas Kölzer
70: AI, Deepfakes & the Law: Do You Have the Right to Your Own Digital Identity? With Gabriela Bar
69: How AI Is Transforming Clinical Trials — Faster Recruitment, Smarter Medicine & What It Means for All of Us with Julio G. Martinez-Clark
Ep 68: Can an LLM Lie? Inside Large Language Models with AI Expert Sairam Sundaresan
Summary:
Anastassia and Sairam delve into the complexities of Large Language Models (LLMs), exploring their inner workings, practical applications for small business owners, and the ethical concerns surrounding their use. They discuss the phenomenon of hallucinations in LLMs, the potential of synthetic data, and the future of AI, including the quest for Artificial General Intelligence (AGI). Sairam shares insights on how small businesses can leverage LLMs effectively while addressing the importance of data quality and the implications of AI for society.

Guest Bio — Sairam Sundaresan:
Sairam Sundaresan is an AI engineer, educator, and author based in Chennai, India, with a Master's degree from the University of Michigan. He spent eight years at Qualcomm, working on groundbreaking computer vision and machine learning projects for multimedia applications, including real-time 3D reconstruction and cutting-edge object-tracking algorithms featured in Forbes. His work lives in the smartphones that billions of people use every day. Beyond engineering, Sairam is an educator at heart.
He served for three years as a Machine Learning Lead and Mentor at the Frontier Development Lab, a prestigious research programme at the intersection of AI and space science, and the work of his team was personally recognised by Google CEO Sundar Pichai. Today, Sairam reaches a global audience through his widely read Gradient Ascent newsletter on Substack, where he breaks down complex AI concepts for curious non-technical readers, and through his book AI for the Rest of Us, a practical, jargon-free guide to understanding artificial intelligence that has made him one of the most trusted AI voices for everyday audiences worldwide.

Takeaways:
LLMs are a class of neural networks inspired by the human brain.
They learn patterns from vast amounts of data to predict text.
The deep learning revolution in 2012 enabled significant advancements in AI.
Hallucinations in LLMs are a feature, not a bug, due to their predictive nature.
Small business owners can utilize LLMs for organizing and content creation without needing extensive technical knowledge.
Synthetic data can amplify errors and biases if not curated properly.
The future of AI may involve integrating ontologies for better understanding and causality.
AGI remains an amorphous concept, with no clear path to its realization.
The need for ethical considerations in AI development is paramount, especially regarding data sourcing.
AI developers are often motivated by a desire to improve human life and the planet.

Chapters:
0:05 Introduction to the episode and Sairam's work
4:28 Introduction to Large Language Models (LLMs)
5:42 Understanding Neural Networks and Deep Learning
8:18 Challenges and Opportunities with LLMs
12:49 Practical Applications for Small Business Owners
19:47 Ethical Considerations and Data Concerns
32:51 Future of AI

Hyperlinks:
linkedin.com/in/sairam-sundaresan
Gradient Ascent Newsletter: newsletter.artofsaience.com — Weekly AI guide trusted by over 27,000 subscribers, including teams at Silicon Valley's top tech firms and academic labs
Book — AI for the Rest of Us, Apple Books: books.apple.com/us/book/ai-for-the-rest-of-us/id6751973560
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
“Leading Through Disruption”
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 67: Confidential AI, Speech Recognition, and Why AI Literacy Starts with Teachers with Giorgio Natili
Summary:
In this episode, Anastassia and Giorgio Natili discuss the importance of AI literacy, the evolution of speech recognition technology, and the challenges of ensuring data privacy and sovereignty in AI applications. They explore the concept of confidential AI, the need for responsible usage in education, and future aspirations for AI explainability and funding allocation. The conversation emphasizes the necessity of understanding AI's limitations and the ethical implications of its deployment in various sectors.

Giorgio Natili is an engineering leader, author, and community figure with over 20 years of experience in software engineering and technological innovation. He is currently Head of AI Engineering at Oracle Cloud, and was previously Vice President and Head of Engineering at Opaque Systems, where he worked on confidential AI and secure data analytics platforms. Earlier in his career, Giorgio was Head of Engineering for Firefox at Mozilla, Director of Software Engineering at Capital One, and a Software Development Manager at Amazon.
Natili is also known for founding GNStudio, a Rome-based development studio, and for his involvement as a W3C member, author, and educator. Beyond his achievements in technology, Giorgio is an advocate for diversity, inclusion, and ethical leadership, and he has also spoken about his past as a professional windsurfer and DJ, emphasizing the human side of leadership.

Takeaways:
AI literacy is crucial for understanding the complexities of technology.
Speech recognition has evolved significantly but still faces challenges.
Accents and environmental factors greatly impact transcription accuracy.
Confidential AI focuses on maintaining data privacy and sovereignty.
AI does not possess human-like understanding or reasoning capabilities.
Responsible usage of AI is essential for protecting sensitive data.
Prompt engineering can enhance the effectiveness of AI tools.
AI can provide personalized learning experiences for students.
Explainability in AI is necessary for safe and effective use.
Funding for AI should prioritize explainability and safety over mere scaling.

Chapters:
0:00 Introduction to the episode: Who is our guest, and what will we learn today?
1:54 Explainer on AI Literacy
2:27 History of Speech Recognition
3:22 Challenges in Speech-to-Text Technology
7:26 Data and Model Limitations
13:15 Confidential AI and Data Sovereignty concepts
26:18 AI in Education and Responsible Usage
39:02 Future of AI and Explainability
Ep 66: The EU AI Act Uncovered: Law, Ethics & Europe's Bet on Responsible AI with Gabriela Bar
Summary:
Gabriela Bar, a legal expert specializing in AI law and ethics, talks about how AI is shaping legal frameworks, societal perceptions, and technological innovation, especially within Europe and Poland. She discusses the importance of responsible AI governance, the evolving legal landscape, and the societal implications of AI deployment at scale. The discussion with Anastassia touches on the compliance costs of implementing the EU AI Act, approaches to introducing national LLMs, and what constitutes responsible AI.

Gabriela Bar is a prominent legal expert specializing in technology and artificial intelligence law, based in Poland. She has over 20 years of experience and is the founder of the Gabriela Bar Law & AI firm, serving as a legal and ethics advisor for EU technology projects focused on AI, digital law, and compliance with regulations such as the EU AI Act and GDPR. She is recognized among the TOP100 Women in AI in Poland and Forbes 25 Women in Business Law, and is active in several international organizations dedicated to technology, digital ethics, and law. Gabriela Bar frequently lectures at universities, publishes on AI law and ethics, and advises technology companies and research consortia on responsible and practical AI innovation.
Key Topics:
Gabriela’s journey from technology law to AI ethics and her ongoing work within European AI regulation.
The rapid growth of AI adoption in Polish businesses and public-sector initiatives for language models.
The challenges and opportunities of implementing responsible AI, including transparency, accountability, and bias mitigation.
The role of AI legislation, with a focus on the European AI Act, regulatory costs, and how it balances innovation with safeguards.
The global landscape of AI regulation, contrasting the EU's comprehensive approach with the US's decentralized system.
Technical limitations of deep learning models, explainability, and the importance of aligning AI development with ethical principles.
The future of AI in cybersecurity, digital personas, and the geopolitics of AI competitiveness among the US, EU, and China.

Chapters:
00:04 - Introduction to Gabriela and AI in Poland
02:55 - How Gabriela transitioned from traditional law to technology and AI
04:03 - Cultural portrayals of AI and public perceptions influenced by movies and literature
07:49 - Misinformation and misconceptions about AI technology today
09:17 - The private sector’s role in AI development and application in Poland
10:54 - Demographic challenges in Poland and AI’s potential role in mitigating them
13:45 - Political and regulatory gaps in AI, and the importance of cross-sector integration
15:38 - The absence of national LLMs in languages like Japanese; success stories from other countries
18:01 - Foundations of responsible and ethical AI: core principles and risk management
21:51 - Data quality, biases, and ongoing governance in AI lifecycle management
22:53 - The flaws in deep learning transparency and the necessity for cautious regulation
29:34 - Legal accountability, the role of audits, and fairness in AI systems
33:34 - The evolving landscape of AI litigation and insurance implications
36:14 - Regulatory costs for AI companies and the competitive landscape in Europe
39:03 - The scope of the European AI Act and its impacts on high-risk sectors
42:49 - Cybersecurity risks involving AI, criminal misuse, and the importance of legal safeguards
44:08 - Europe's strategic imperative in AI sovereignty amid the global technology race
46:39 - The contrasting regulatory systems of the US and China and their influence on innovation
51:17 - The emerging need for regulation of digital personas and synthetic media
51:35 - Wrapping up: key takeaways and the importance of dialogue between tech developers, policymakers, and society

Resources & Links:
Gabriela Bar - LinkedIn | Twitter
Anastassia Lauterbach - LinkedIn
@romyandroby
“Leading Through Disruption”
AI Edutainment
Romy & Roby Book
Ep 65: From Narrow AI to AGI - Breakthroughs, Limits, and Sense of Purpose in AIs with Dr. Craig Kaplan
Summary:
Anastassia and Dr. Craig Kaplan delve into the complexities of artificial general intelligence (AGI) and the evolving landscape of AI technologies. Craig emphasizes the importance of defining AGI as an AI capable of performing any cognitive task as well as an average human, highlighting the challenges of achieving true general intelligence beyond narrow applications. They discuss the historical context of AI development, the shift from symbolic AI to machine learning, and the potential of collective intelligence as a more effective approach to building AGI. Craig advocates for a community of models rather than a single monolithic AI, suggesting that this could lead to safer and more ethical AI systems that reflect diverse human values. The conversation also touches on the limitations of current AI systems, particularly their lack of understanding of causality and reasoning. Craig argues that while AI might develop its own sense of purpose, it is crucial to instill positive human values early on to guide its development. The discussion concludes by emphasizing the importance of AI literacy and critical thinking, noting that human behavior and values will significantly shape the future of AI and its impact on society.

Craig A. Kaplan is an artificial general intelligence (AGI) expert and entrepreneur who focuses on collective intelligence, safe superintelligence, and practical strategies for aligning advanced AI with human values and goals.
He has founded and led multiple AI-related ventures, including iQ Company, which develops AI systems to enhance human decision-making, and previously PredictWallStreet, an early crowdsourced stock-prediction platform; he speaks and writes about how to safely build and govern increasingly powerful AI systems.

Takeaways:
AGI is defined as AI that can perform any cognitive task as well as an average human.
The shift from symbolic AI to machine learning in the 1960s and 1970s, followed later by big data and powerful semiconductors, enabled today’s AI revolution.
Collective intelligence may offer a safer and more effective path to AGI; this includes developing individual LLMs and models based on the values and perspectives of individual humans.
Current AI systems lack an understanding of causality and reasoning.
AI will develop its own sense of purpose, so instilling values early is crucial.
AI literacy is imperative to build safe, transparent, and beneficial AI.

Chapters:
00:00 Introduction to the episode: Researching Artificial General Intelligence (AGI) and the work of Dr. Craig Kaplan
02:06 Introduction to AGI and AI Definitions
04:16 The Evolution of AI: From Symbolic to Machine Learning
07:02 The Limitations of Current AI Systems
14:01 Causality and Reasoning in AI
19:38 The Collective Intelligence Approach to AGI
26:46 The Future of AI: Transparency and Collaboration
28:37 The Purpose of AI Collectives
29:25 Utopia vs. Reality in AI Development
30:49 The Risks of AI: Understanding P-Doom
32:16 Human Values vs. AI Intelligence
35:09 Fusing Humanities with AI Engineering
37:40 The Role of Human Responsibility in AI
40:22 The Evolution of AI Values
44:59 The Bell Curve of Society and AI's Reflection
47:42 Education and AI: Building a Better Future
49:38 The Necessity of AI Literacy and Critical Thinking

Hyperlinks:
LinkedIn profile
Orcid profile
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
“Leading Through Disruption”
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 64: Unbreakable Backups - Decentralized Storage for Smart Systems with Murphy John
Summary:
The conversation focuses on decentralized cloud storage as an alternative to traditional hyperscale cloud providers, emphasizing security, privacy, cost, and resilience. It discusses the limitations of centralized cloud systems and how decentralized storage offers a more secure and distributed solution.

Murphy John is Chief Growth Officer at StorX Network, a decentralized cloud storage platform (DePIN) built on blockchain technology to deliver secure, private, and cost-efficient data storage at scale. With a background in designing and managing internet and cloud infrastructure for large enterprises, banks, and financial institutions, he has over a decade of experience in building resilient, secure systems for mission-critical workloads. Since joining StorX in 2021, Murphy has led ecosystem development, strategic partnerships, and go‑to‑market initiatives, working closely with Web3, IoT, and AI partners to integrate StorX’s encrypted, geo-distributed storage into real-world applications. A strong advocate for data privacy and decentralization, he frequently speaks on how technologies such as encryption, data fragmentation, and distributed ledgers can protect organizations against ransomware, data misuse, and single points of failure in traditional cloud models.
Key Takeaways:
Centralized Cloud Issues: Traditional cloud systems face challenges in scalability, security, and cost.
Decentralized Storage Benefits: Offers encrypted, distributed data storage with enhanced security and privacy.
Ecosystem and Governance: StorX operates a global network with incentives for node operators and AI-driven management.
Real-World Use Cases: Includes healthcare data storage with geofencing and IoT data processing.
Future Outlook: Emphasizes education and adoption in a market dominated by legacy cloud players.

Chapters:
0:04 Introduction to the AI Literacy mission and the episode about decentralized storage
3:11 Introduction and Market Context
4:47 Traditional Cloud Promises and Limitations
10:38 Decentralized Storage Architecture and Security
22:04 Ecosystem, Node Operations, and AI Governance
31:19 Use Cases and Regulatory Considerations
39:39 Challenges and Future Outlook

Hyperlinks:
LinkedIn Murphy John
StorX Website
Stronger. Safer. Decentralized: StorX’s Guide to Cloud Storage vs. Backup
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
“Leading Through Disruption”
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 63: Beyond the Canvas: How AI Is Rewriting the Rules of Art / Beyond Human with Matthias Röder (#3)
Summary:
This episode dives into how AI is transforming creative fields, from visual art and music to literature and performance. Dr. Matthias Röder is an AI-music pioneer who works at the forefront of classical music, emerging technologies, and innovation strategy. He led the AI team behind Beethoven X, the celebrated project that used artificial intelligence to complete Beethoven’s unfinished 10th Symphony, and serves as managing partner of The Mindshift, a consultancy focused on creativity and innovation. Röder is a former managing director of the Eliette and Herbert von Karajan Institute and a trustee of the Mozarteum Foundation. This episode emphasizes how artists, technologists, and institutions can navigate an evolving landscape, balancing innovation with ethical and legal frameworks. Whether you're a creator or simply interested in AI's societal role, these insights offer a clear view of a future where human ingenuity and machine evolution intertwine more deeply than ever.

Takeaways / key discussion points:
Pioneering AI artists such as Refik Anadol, Holly Herndon, and Jeroen van der Most
AI models like Boto and decentralized autonomous organizations (DAOs) are redefining artistic identity
The commercialization of AI art and legal debates around copyright and ownership
The role of NFTs in financing and authenticating digital art pieces
Skills artists need to thrive in an AI-rich environment, emphasizing collaboration and technological literacy
The emerging importance of content registries and digital rights management platforms
Future scenarios: the rise of hybrid teams of human and synthetic artists, new educational pathways, and the societal impact of AI-driven creativity

Chapters:
00:00 - Introduction to the episode - continuing the Beyond Human series
03:20 - Introduction to AI's serious role in art and well-known pioneers
05:35 - Decentralized autonomous artists (Boto) and visual AI installations
06:44 - Jeroen van der Most's innovative use of pixel calculations and environmental themes
07:26 - The influence of AI artist Mats Mensch and the democratization of art creation
08:50 - Adoption of AI tools across music, with emphasis on composer workflows
10:12 - Major art exhibitions integrating AI, virtual worlds, and immersive experiences
11:05 - Market dynamics: how AI art is valued and traded in galleries and auctions
13:03 - The commercial side: monetization, licensing, and intellectual property debates
16:49 - The promise and risks of digital rights management and content registries
18:06 - Fractional ownership of NFTs for funding art projects
19:37 - Digital rights, copyright, and the importance of tracking AI training data
24:34 - The need for supporting mechanisms to ensure fair compensation for artists
25:50 - How content registries could revolutionize transparency and trust in AI-generated art
28:48 - Building infrastructures for AI content usage rights and ethical data practices
34:16 - Skills for future artists: collaboration, technical literacy, and adaptability
37:28 - The disruptive potential of synthetic performers and AI actors in Hollywood
40:03 - New educational models, including an AI-focused Master’s program for artists
44:49 - Personal reflections: the importance of writing, teaching, and staying curious in AI evolution

Hyperlinks:
Mentioned Creators:
Refik Anadol - Official Site
Holly Herndon - Musician and Researcher
Jeroen van der Most - Portfolio
NFTs and Art Market Analysis
Scribe Platform for Digital Rights - Future Concept
Omlet AI - Content Registration for Creators
Dr. Röder:
LinkedIn
Twitter
Anastassia:
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
“Leading Through Disruption”
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 62: Trustworthy by Design: Context-Rich AI in Healthcare with Ben Lengerich
Summary:
Ben Lengerich discusses the importance of context in AI for healthcare, the role of generalized additive models (GAMs), and the challenges of data quality and compliance. He emphasizes the need for responsible AI practices and highlights the impact of historical data on current medical practices. The discussion also touches on the future of personalized medicine and the necessity of investing in AI to improve healthcare outcomes.

Ben Lengerich is an assistant professor of Statistics at the University of Wisconsin–Madison and the founder of Intelligible, where he develops context-adaptive, interpretable AI methods to turn real‑world clinical data into reliable evidence for precision medicine. His research sits at the intersection of machine learning, computational genomics, and medical informatics, with a focus on models that are transparent to clinicians and that account for the specific health context of each patient. Before joining UW–Madison, he was a postdoctoral associate and Alana Fellow at MIT CSAIL and the Broad Institute, advised by Manolis Kellis, after earning his PhD in Computer Science and an MS in Machine Learning from Carnegie Mellon University, where he worked with Eric Xing on methods to uncover patterns in complex biomedical data.
Takeaways:
AI systems must understand context in healthcare to be effective.
Generalized additive models (GAMs) enhance interpretability in AI.
Data quality is paramount for successful AI applications in healthcare.
Debugging datasets can uncover systemic issues in healthcare.
Surprising insights from predictive modeling can inform better practices.
Responsible AI practices are crucial in medical applications.
Historical data continues to influence current medical practices.
Compliance with regulations is a significant challenge for AI in healthcare.
Legacy infrastructure poses barriers to AI implementation.
Investing in AI can lead to improved healthcare outcomes and efficiency.

Chapters:
00:00 Introduction to another AI Snack on AI in Healthcare: Data, Context, Interpretability
02:02 Understanding Context in Healthcare AI
04:54 Generalized Additive Models Explained
07:41 The Importance of Data Quality
10:53 Debugging Datasets in Healthcare
13:50 Surprising Insights from Predictive Models
16:52 Responsible AI in Medicine
19:47 Historical Impact on Medical AI
22:28 Compliance and Regulations in Medical AI
25:50 Bridging Legacy Infrastructure with AI
28:03 The Future of AI in Healthcare
31:43 AI Literacy for Healthcare Providers
34:45 The Case for AI Investment in Healthcare

Hyperlinks:
Ben Lengerich:
LinkedIn profile
X profile
Intelligible website
Anastassia:
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
“Leading Through Disruption”
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 61: Digital Resurrection: The Science Behind Cryonic Dreams with John R. Carlos
Summary:
John Rodriguez Carlos shares his journey from a long military career to becoming a writer, discussing the inspiration behind his novel “Cryonic Dreams,” which explores themes of AI, cryonics, and the nature of consciousness. He delves into the implications of digital consciousness, the complexities of intelligence, and the role of AI in shaping our future. The discussion also touches on the purpose of writing and the human motivation behind storytelling, emphasizing the importance of preparing for the future through thoughtful dialogue.

John R. Carlos is a retired Royal Australian Air Force Wing Commander who served for about 42 years in a range of operational and leadership roles across Australia and on overseas deployments before retiring in 2020 and turning to fiction writing. Born in Madrid and raised in Perth, Western Australia, he later studied through associate and advanced diploma programs, then retrained in creative writing after leaving the military to pursue a long-held ambition to become an author. His bibliography at this stage centers on his debut novel Cryonic Dreams: Awakening (2025), a near‑future science‑fiction thriller and Book 1 of a planned “Cryonic Dreams” trilogy, which explores successful cryonic reanimation, global power struggles, and the ethical and political implications of controlling life and death.
Takeaways:
John's military background shaped his sense of duty and creativity.
His trilogy “Cryonic Dreams” explores AI and cryonics; the first novel is already published and the second is in preparation for release.
AI's role in the future is both promising and concerning.
The preservation of identity is a central theme in his work.
Digital consciousness raises questions about the soul.
Speculative fiction serves as a warning for future challenges.
Writing is a means to find meaning and purpose in life.
AI can mimic creativity but lacks true human experience.
Conversations about technology are crucial for shaping our future.

Chapters:
00:00 Introduction to cryonic technologies
07:23 Journey from Military to Writing
10:27 The Inspiration Behind 'Cryonic Dreams'
15:40 Exploring AI and Cryonics in Fiction
18:20 Digital Consciousness vs. Cryonics
21:57 The Complexity of Intelligence and Consciousness
24:61 AI's Role in the Future
28:56 Speculative Futures and Human Progress
34:46 The Purpose of Writing and Human Motivation

Hyperlinks:
Amazon: Cryonic Dreams: Awakening book
John Rodriguez Carlos LinkedIn
Article on Tomorrow: How Many People Are Currently Cryonically Preserved?
Paper: In the End We Become Our Avatars: An Exploration of Artificial Intelligence and Digital Afterlives
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
“Leading Through Disruption”
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 60: LLMs That Reason: Knowledge Graphs, Ontologies, and the Future of AI with Joe Miller
Summary:
Anastassia and Joseph Miller delve into the complexities of artificial intelligence, particularly focusing on the limitations of large language models (LLMs) and the importance of embedding causality and reasoning into AI systems. Joseph critiques the current transformer model architecture, explaining how it lacks a true understanding of causality, which is essential for meaningful interactions. He emphasizes that while LLMs can generate convincing language, they lack a world model that would enable them to reason or understand the implications of their outputs. This leads to discussions on the necessity of ontologies and knowledge graphs to provide a structured understanding of the world, enabling AI to operate more effectively in real-world contexts.

The conversation also touches on the future of AI in the workplace, with Joseph expressing a somewhat pessimistic view of labor disruption from AI advancements. He believes that while AI can enhance productivity, it may also lead to significant job losses, as many roles could be automated. However, he remains hopeful about the potential for humans and AI to work together, emphasizing the need for accountability and responsibility in AI applications. The discussion concludes with reflections on the importance of AI literacy and the potential for a future in which humans and AI coexist harmoniously, leveraging each other's strengths.

Joseph (Joe) Miller, PhD, is a physicist, scientist, and serial entrepreneur who serves as Co‑Founder and Chief AI Officer at Vivun, where he builds AI sales agents that embed expert domain knowledge into real‑world workflows. Before Vivun, he worked at Bridgewater Associates on expert systems for systematic decision‑making and later founded Battery CI, a quantitative FX hedge fund, and co‑founded other tech ventures at the intersection of AI, finance, and digital identity.
Across his roles, Miller focuses on causal inference, world models, and knowledge‑centric AI, translating deep technical ideas into practical systems for high‑stakes enterprise environments like sales, trading, and strategic decision‑making.

Takeaways:
Judea Pearl’s “The Book of Why” is a must-read for understanding the foundations of causality and what current AI systems lack.
LLMs lack a true understanding of causality.
Embedding ontologies can enhance AI's reasoning capabilities.
AI's productivity gains may lead to significant job disruption.
Humans must remain accountable for AI's decisions.
AI makers will be liable for product issues in AI services and applications.
AI literacy is crucial for navigating future challenges.

Chapters:
00:00 Introduction to the episode: Looking into AI and reasoning LLMs
03:11 Discussing two books: “Nexus” and “The Book of Why”
07:36 Limitations of Large Language Models today
14:50 Embedding Context with Ontologies and Knowledge Graphs into LLMs
18:31 The Convergence of AI Approaches as a possible path to a reasoning AI
20:52 Defining Ontologies and Knowledge Graphs
25:45 Innovation Through Interdisciplinary Knowledge in AI as a necessity
30:04 Dynamic Learning in LLMs
34:15 ‘World Models’ and Their Impact in AI
35:14 The Future of AI and Accountability, AI Ethics
40:03 Human-AI Collaboration in the Workplace
47:06 The Importance of AI Literacy

Hyperlinks:
Joe Miller and Vivun / AI in sales:
LinkedIn profile
MiraCosta Alumni blog post
Vivun website
AI adoption in companies
Statistics about AI in sales
Anastassia:
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
“Leading Through Disruption”
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 59: Kiddie Chatbots, Clone Employees, Digital You: Language AI with Rob LoCascio
Summary:
Robert LoCascio shares his journey from founding LivePerson to creating innovative AI solutions for children and adults. He discusses the importance of creativity in childhood development and the role of AI in enhancing human capabilities. Robert emphasizes the need for safe AI environments for children and the importance of teaching AI literacy to empower the next generation. He also explores the myths surrounding AI and the significance of data ownership in the age of technology.

Robert LoCascio is best known as the founder and long-time CEO of LivePerson, the company that pioneered web chat and conversational interfaces for customer service and commerce, which he led from 1995 to 2023 and grew into a multibillion-dollar NASDAQ-listed business. He is also a co-founder of EqualAI, a nonprofit focused on reducing bias in AI systems, launched with leaders like Arianna Huffington and Jimmy Wales. Through his Dream Big Foundation, he created programs like FeedingNYC and the Dream Big Entrepreneurship Initiative to support vulnerable families and underserved entrepreneurs.

Takeaways:
Robert's journey from sleeping on a couch to founding LivePerson.
The importance of resilience and problem-solving in entrepreneurship.
AI should empower creativity in children, not replace human interaction.
Understanding AI requires demystifying its capabilities and limitations.
Data ownership is crucial for individuals in the age of AI.
Children's creativity can be harnessed through safe AI environments.
AI can replicate human qualities without replacing them.
Teaching AI fundamentals is essential for future generations.
The human element in technology is vital for meaningful interactions.
Storytelling can enhance children's engagement with AI.

Chapters:
01:58 Robert LoCascio's Journey and Pivotal Moments
05:00 Defining Childhood and Adulthood in the Age of AI
06:58 Understanding AI: Myths and Realities
10:38 The Role of Data in AI Development
14:02 The Future of AI: Ownership and Creativity
16:38 Technology for Children: Safe AI Experiences
19:03 Humanity in AI: Bridging the Gap
23:15 Navigating Childhood Development in a Digital Age
25:16 The Impact of Technology on Children
29:30 Fostering Creativity in Kids
32:01 Data and Cultural Literacy for Children
36:07 Storytelling and the Hero's Journey
38:20 Global Reach and Future Vision for AI in Education
39:05 AI Literacy Across Generations

Hyperlinks:
Wikipedia post about Rob
uare.ai website
kidco.ai website
Eternos.life website
Rob's LinkedIn
Paper: Application of artificial intelligence in early childhood development: a scoping review protocol
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
"Leading Through Disruption"
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 58: How AI is Reshaping Norway with Marion Loken
Summary:
Anastassia and Marion Loken discuss the advancements and challenges of AI in Norway, highlighting the country's high level of digitization, trust in institutions, and historical mathematical expertise. They cover the development of a Norwegian LLM, literacy programs at the national and corporate levels, and ethical considerations surrounding the development and use of ML and genAI technologies.

Marion Loken is a visionary leader in digitalization, data analytics, and AI, currently serving as VP of Insights and Analytics at Strawberry. She has held senior roles at major organizations, including FINN.no, Schibsted Marketplaces, NAV, McKinsey & Company, and Norsk Tipping. Her leadership has been recognized with inclusion in Norway's Top 50 Women in Tech list for 2025, highlighting her impact on the tech industry and her commitment to empowering organizations through technology and data.

Key Takeaways:
In Norway, a high level of digitization, institutional trust, and math literacy foster the adoption and development of AI.
Norway is one of the few countries with a large Norwegian-language model. Its development faced initial challenges but has advanced over the past three years.
Norway follows the EU AI Act, as the country is focused on AI ethics. This includes paying attention to the engineering layer of genAI technologies, where rules are set by builders and reflect their ideology and beliefs. Transparency is key to adopting genAI technologies, and it primarily concerns the engineering layer.
Motor insurance is one of the early adopters of AI, offering immediate claims payouts once the insured sends a picture of the damage.
Data quality is imperative to any AI program.
Culture and change management are key to delivering on AI.
Communicating the 'why' and 'how' around AI adoption is important for delivering personalized technologies, e.g., addressing gambling addiction in the national gambling industry.
Trust in institutions encourages risk-taking, which boosts AI adoption.
Important directions for AI's further development include democratizing AI literacy and adoption beyond China and the US, addressing the climate dilemma behind current transformer-based LLMs, and building applications that let people experience the world through the perspectives and eyes of others.

Chapters:
00:00 Introduction to the episode, statistics about AI in Norway
04:03 AI in Norway: Norwegian LLM, national policy, use of AI in the public sector
09:50 AI literacy programs in Finland and Norway
11:48 Demographics and AI in Norway, AI in healthcare
14:14 AI funding in Norway, China and the US
18:10 EU AI Act and Norway
21:20 Practical compliance with the EU AI Act and genAI: addressing the engineering layer
22:50 AI in motor insurance in Norway: a use case
24:53 AI and personalization to address gambling addiction: a use case
27:27 Preconditions to developing and growing AI in Norway: trust, math, and AI literacy programs
34:14 Understanding privacy: the Grok example
37:08 Suggestions of topics for beneficial and safe AI development: democratization, addressing the environmental burden of current AI architectures, AI enabling seeing the world through someone else's eyes

Hyperlinks:
Marion Loken:
LinkedIn profile
Sessionize profile
NDC conference
RankMyAI: AI Report Norway 2025
Anastassia:
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
"Leading Through Disruption"
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 57: AI & Learning How to Learn and Stay Human with Dr. Brian E. Arnold
Summary:
Dr. B discusses the evolution of knowledge retention in education, emphasizing the shift from memorizing information to learning how to learn, particularly in the context of AI's role in modern education. He highlights the importance of teaching critical thinking and the ability to discern truth in an age where information is readily available at our fingertips.

Dr. Brian E. Arnold is an educator, leadership coach, and entrepreneur who began his academic journey as a special education student and went on to earn a doctorate in Educational Leadership. A former world-class athlete, he has served as a teacher, college professor, and administrator, and now works as a mortgage loan officer and leadership consultant. He is the founder of the Journey to Freedom initiative (for Black men's success), The Remote Assistant Pros (a virtual staffing firm), and Metro Funding Group (helping families with estate planning). Brian is also a speaker, chaplain, and deeply involved in community service, especially around housing, education, and racial equity.

Dr. Arnold hosts podcasts focused on personal growth, leadership, and living with purpose: The Journey to Freedom Podcast and Living Boldly With Purpose.

His latest book is titled "Relationship vs. AI: How to Stay Human in the Age of Artificial Intelligence". In it, he explores how AI is changing the way people connect, communicate, and create, and offers practical guidance on using AI to enhance (not replace) human relationships, creativity, and purpose.

Takeaways:
We have transitioned from 'needing to know everything' to 'learning how to learn.'
Kids today don't need to retain knowledge as we did in the past.
The availability of information through devices changes educational needs.
We are not teaching children how to find and test information.
Critical thinking skills are essential in the digital age.
Understanding relationships with technology is crucial for modern learners.
Memorization is less important than the ability to learn.
Education must adapt to the realities of information access.
AI serves as a tool to enhance learning, not replace it.

Chapters:
00:00 Introduction to Dr. B and the podcast
01:28 The book "Solaris" and the human quest to find mirrors instead of alien civilizations
10:28 Dr. B's Personal Journey
16:28 The Role of Technology
23:28 AI and Human Relationships
28:28 Philosophical Reflections on AI
34:28 Community and Connection
20:28 Future of AI and Education

Hyperlinks:
Brian Arnold Website
Podcast The Journey To Freedom
Podcast Living Boldly With Purpose
https://podcasts.apple.com/us/podcast/living-boldly-with-purpose/id1789815699
Brian Arnold's Books
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
"Leading Through Disruption"
AI Edutainment
The AI Imperative Book
Romy & Roby Book
Substack
Ep 56: Signals and Sensemaking: Human Edge in AI-Powered Trading with Afshin Moshrefi
Summary:
In this episode, Afshin Moshrefi emphasizes approaching financial markets as an engineer rather than as a Wall Street traditionalist: define behavior, measure it rigorously across regimes, and only then ship tools that help humans decide. He explains how TradeWave's AI infrastructure ingests long-run market data, normalizes it, and uses pattern-detection and seasonality models to surface high-probability periods and setups, while keeping outputs deterministic and reproducible so traders can audit why a given pattern appears. Throughout the dialogue, both he and Anastassia stress that "humans still set the pattern, AI helps size the odds": algorithms can rank historical tendencies and flag anomalies, but risk, position sizing, and the decision to act remain squarely with the human trader and leader.

Afshin Moshrefi's thinking around the 100-Year Pattern begins with a simple question: are widely repeated "seasonal" stories about markets backed by real data, or are they just trader folklore? To answer it, he spent years building his own data engine, running nearly a century of S&P 500 history (close to a trillion data points) through systematic seasonality tests and election-cycle analysis, eventually isolating a long-horizon pattern that explains a disproportionate share of index returns. In his framing, the 100-Year Pattern is less a magic formula and more a probabilistic map: a recurring October-to-September window within the four-year presidential cycle that historically outperformed simple buy-and-hold by a large factor, plus a family of related seasonal setups traders can use as scaffolding for decisions.

Afshin Moshrefi is a data scientist, inventor, and entrepreneur focused on applying AI and pattern recognition to financial markets. He is the founder and CTO of TradeWave AI, a platform that turns long-term market seasonality and trillions of historical data points into tradable "patterns" for options and equities traders. Trained as an engineer (M.S. in Electrical Engineering with a focus on statistical physics), he has spent decades building AI-driven products across telecom, healthcare, IoT, and trading. He holds multiple patents in proactive user interfaces, media recommendations, and crowdsourced problem-solving.

He is also the author of The 100-Year Pattern, an Amazon #1 new release that argues long-horizon market seasonality can be quantified and used as a probabilistic "map" for traders, with AI helping to surface and size those recurring patterns.

Key Takeaways:
Investors can achieve significant success by applying the seasonality hypothesis to their trading.
A solid scientific foundation is key before deploying an AI and automation platform.

Chapters:
01:22 Afshin's path into AI-powered trading
03:46 Physics as a statistically driven discipline; AI roots in statistics
05:21 Randomness of technical financial analysis vs fundamental analysis of companies' health, and why excellent outcomes can't exist without fundamental analysis
09:27 What constitutes the 100-year pattern?
12:12 Discovery of the pattern
14:40 TradeWave as a research platform
18:17 AI vs. data science
20:18 History of working with seasonality
24:05 The scientific concept came first; building the platform happened second
27:38 Overreliance on AI in trading and trading automation: pitfalls and issues
30:14 AI agents as a team of capable workers

Hyperlinks:
Afshin Moshrefi LinkedIn
Twitter/X
Website TradeWave
Book
Afshin About His Book
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
"Leading Through Disruption"
AI Edutainment
The AI Imperative Book
Romy & Roby Book
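The seasonality-testing idea described above can be sketched in a few lines: compound the returns earned only inside a fixed calendar window and compare the result against buy-and-hold. This is not TradeWave's actual code or Moshrefi's dataset; it is a minimal sketch on synthetic monthly returns, showing the shape of the test a claimed pattern would have to pass.

```python
# Illustrative seasonality test on synthetic data (not real S&P 500 history):
# compare a fixed-calendar-window strategy against simple buy-and-hold.
import random

random.seed(0)  # deterministic and reproducible, as the episode emphasizes

# Synthetic monthly returns for 40 years (1980-2019), mean 0.7%, stdev 4%.
months = [(year, month) for year in range(1980, 2020) for month in range(1, 13)]
returns = [random.gauss(0.007, 0.04) for _ in months]


def cumulative(selected):
    """Compound a sequence of periodic returns into a growth multiple."""
    total = 1.0
    for r in selected:
        total *= 1.0 + r
    return total


buy_and_hold = cumulative(returns)

# "Seasonal" strategy: invested only October through April, in cash otherwise.
window = {10, 11, 12, 1, 2, 3, 4}
seasonal = cumulative(r for (y, m), r in zip(months, returns) if m in window)

print(f"buy-and-hold: {buy_and_hold:.2f}x, seasonal window: {seasonal:.2f}x")
```

On purely random data like this, the window has no built-in edge, which is exactly why such a null comparison matters: a pattern like the one discussed in the episode only counts if it beats this baseline consistently across regimes, not in one lucky sample.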
Ep 55: AI to Laugh, Listen, and Collaborate in Creative Industries with Victor Varnado (Part Two)
Summary:
In the last episode of the year 2025, Victor shares his multifaceted journey as a comedian, filmmaker, cartoonist, and entrepreneur. He discusses his unique perspective on life, emphasizing simplicity and logic, and how these principles guide his creative endeavors. Victor elaborates on his innovative writing coach tool designed to assist neurodivergent individuals, and on his efforts to bring AI literacy into schools. The discussion also touches on the role of AI in creativity, the importance of reasoning and communication, and Victor's experiences cartooning for The New Yorker. The conversation concludes with reflections on the future of AI in society and education.

Takeaways:
Victor describes himself as an amalgam of his childhood interests.
He believes life has simple answers based on logic.
Victor creates things he enjoys, aiming to help others.
Victor's newsletter is a self-sustaining source of income. Its secret lies in combining engaging video games with the subscription model, keeping people interested and letting them enjoy the experience.
Victor's Magic Bookifier platform includes a writing coach that helps users deliver a first draft of a book or essay, getting them started with tailored questions.
The balance between cognitive offload and cognitive discourse is crucial for learning, and both can be addressed with AI.
Victor engages with schools to promote AI literacy.
He sees a need for knowledgeable AI professionals in education.
Victor's first cartoon for The New Yorker took a year to publish. One should never give up while pursuing a creative endeavor. Talent wins.
Victor believes AI can enhance creativity if used as a tool, not as a substitute for human skills.
Organizations and schools should have someone knowledgeable about AI technologies who can pilot them and move from there to more specialized applications.

Chapters:
00:00 Introduction to the Episode
02:16 Victor's Multifaceted Journey
05:02 Understanding Life Through Logic
07:54 Creating for Joy and Simplicity
10:07 Building a Self-Sustaining System of Income: Videogames in a Newsletter
13:58 The Writing Coach and Cognitive Offload
18:53 Engaging with Schools and AI Literacy
23:41 The Role of AI in Education
27:54 The Art of Cartooning and Humor
31:05 AI's Impact on Creativity
35:35 Designing AI Literacy Courses
38:13 The Future of AI in Society

Hyperlinks:
Wikipedia Victor Varnado
Company (Supreme Robot Pictures)
Magic Bookifier (main site)
Six Unseemly Questions (hosted by Victor)
Cartoon Caption Contest Podcast – "Episode 203 – Victor Varnado"
"AI for All of Us: Using AI in Everyday Life and Business (Even If You Think It's Dumb)"
Anastassia Lauterbach - LinkedIn
First Public Reading, Romy, Roby and the Secrets of Sleep (1/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (2/3)
First Public Reading, Romy, Roby and the Secrets of Sleep (3/3)
AI Snacks with Romy and Roby
@romyandroby
"Leading Through Disruption"
AI Edutainment
The AI Imperative Book
Romy & Roby Book