
Nature of Intelligence, Ep. 3: What kind of intelligence is an LLM?
Large language models, like ChatGPT and Claude, have remarkably coherent communication skills. Yet what this says about their “intelligence” isn’t clear. Could they arrive at the same level of intelligence as humans without taking the same evolutionary or learning path to get there? Or, if they’re not on a path to human-level intelligence, where are they now and where will they end up? In this episode, with guests Tomer Ullman and Murray Shanahan, we look at how large language models function and examine differing views on how sophisticated they are and where they might be going.
Show Notes
Guests:
- Tomer Ullman, Assistant Professor, Department of Psychology, Harvard University
- Murray Shanahan, Professor of Cognitive Robotics, Department of Computing, Imperial College London; Principal Research Scientist, Google DeepMind
Hosts: Abha Eli Phoboo & Melanie Mitchell
Producer: Katherine Moncure
Podcast theme music by: Mitch Mignano
Follow us on:
Twitter • YouTube • Facebook • Instagram • LinkedIn • Bluesky
More info:
- Tutorial: Fundamentals of Machine Learning
- Lecture: Artificial Intelligence
- SFI programs: Education
Books:
- Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell
- The Technological Singularity by Murray Shanahan
- Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds by Murray Shanahan
- Solving the Frame Problem by Murray Shanahan
- Search, Inference and Dependencies in Artificial Intelligence by Murray Shanahan and Richard Southwick
Talks:
- The Future of Artificial Intelligence by Melanie Mitchell
- Artificial intelligence: A brief introduction to AI by Murray Shanahan
Papers & Articles:
- “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” in The New York Times (Feb 16, 2023)
- “Bayesian Models of Conceptual Development: Learning as Building Models of the World,” in Annual Review of Developmental Psychology Volume 2 (Oct 26, 2020), doi.org/10.1146/annurev-devpsych-121318-084833
- “Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models,” in Findings of the Association for Computational Linguistics (December 2023), doi.org/10.18653/v1/2023.findings-emnlp.264
- “Role play with large language models,” in Nature (Nov 8, 2023), doi.org/10.1038/s41586-023-06647-8
- “Large Language Models Fail on Trivial Alterations to Theory-of-Mind Tasks,” arXiv (v5, March 14, 2023), doi.org/10.48550/arXiv.2302.08399
- “Talking about Large Language Models,” in Communications of the ACM (Feb 12, 2024)
- “Simulacra as Conscious Exotica,” arXiv (v2, July 11, 2024), doi.org/10.48550/arXiv.2402.12422