Large Language Models: Teaching the Parrot to Talk | AI Series Pt. 2
Season 1 · Episode 7


AI explained with parrots, neurons, and probability (Part 2)

Mr. Fred's Tech Talks · Fred Aebli

September 15, 2025 · 9m 16s


Show Notes

In Episode 7 of Mr. Fred’s Tech Talks, I dive deeper into Large Language Models (LLMs) and explore how they’re trained. Using the fun analogy of a parrot that never stops practicing, I walk through the 9-step training pipeline: from collecting massive datasets and tokenizing text, to neurons, weights, backpropagation, GPUs, fine-tuning, and safety alignment...all in LOW TECH JARGON.
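If you want to see the "tokenizing text" step from the pipeline in action, here's a toy sketch. Real LLM tokenizers split text into subword pieces using learned vocabularies; this mini-vocabulary built on the fly is purely illustrative.

```python
# Toy sketch of tokenization: map each word to an integer ID.
# (Real tokenizers use subword pieces and a fixed, learned vocabulary.)
text = "the parrot repeats the phrase"
vocab = {}          # word -> integer ID, built as we go
token_ids = []
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)   # assign the next unused ID
    token_ids.append(vocab[word])

print(token_ids)    # "the" gets the same ID both times it appears
```

Notice the repeated word maps to the same number, which is exactly what lets the model spot patterns across text.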


I’ll also talk about the probability math behind it all: why LLMs don’t really “understand” but instead predict the most likely next word, like rolling loaded dice. Along the way, enjoy some nostalgic sound bites from movies and TV that connect the dots between memory, patterns, and AI.
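The "loaded dice" idea can be sketched in a few lines: the model assigns a probability to each candidate next word and samples accordingly. The candidate words and weights below are made up for the demo, not from any real model.

```python
import random

# "Loaded dice" next-word prediction: each candidate word gets a
# probability, and we roll a weighted die to pick one.
candidates = ["cracker", "sunflower", "telescope"]
weights    = [0.70, 0.25, 0.05]   # sums to 1.0: a loaded die

random.seed(0)                     # fixed seed so the demo roll repeats
next_word = random.choices(candidates, weights=weights, k=1)[0]
print(next_word)
```

Run it many times without the seed and "cracker" comes up about 70% of the time: probability, not understanding.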


🎧 Highlights:

  • The parrot analogy for LLMs
  • What AI “neurons” are (tiny math functions, not brain cells)
  • Why data quality and fine-tuning matter
  • Probability explained with dice and jokes
  • Tech Tip: Ask AI how it got its answer
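The "tiny math functions, not brain cells" point from the highlights can be made concrete: a single artificial neuron is just a weighted sum plus a bias, squashed by a function like the sigmoid. The weights in this sketch are arbitrary, chosen only to show the arithmetic.

```python
import math

# One artificial "neuron": multiply inputs by weights, add a bias,
# then squash the total into the range (0, 1) with a sigmoid.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid squashing function

out = neuron([1.0, 0.5], [0.8, -0.3], bias=0.1)
print(round(out, 3))
```

Training (backpropagation) is just the process of nudging those weights, across billions of neurons, until the outputs match the patterns in the data.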


Whether you’re a parent, teacher, student, or just curious about AI, this episode will give you a fun and clear view of how language models actually learn.

CONNECT

Website: https://www.getmecoding.com

Courses: https://courses.getmecoding.com


FOLLOW

YouTube: https://www.youtube.com/@GetMeCoding

Instagram: https://www.instagram.com/getmecoding

Facebook: https://www.facebook.com/GetMeCoding

LinkedIn: https://www.linkedin.com/in/mrfred77/

Follow, rate ★★★★★, and share!


Hosted on Acast. See acast.com/privacy for more information.