
789: Do More With AI - LLMs With Big Token Counts
Syntax - Tasty Web Development Treats · Wes Bos & Scott Tolinski - Full Stack JavaScript Web Developers
Show Notes
Join Scott and CJ as they dive into the fascinating world of AI, exploring topics from LLM token sizes and context windows to understanding input length. They discuss practical use cases and share insights on how web developers can leverage larger token counts to maximize the potential of AI and LLMs.
- 00:00 Welcome to Syntax!
- 01:31 Brought to you by Sentry.io.
- 02:42 What is a token?
- 04:22 The context window, sometimes called “max tokens”.
- 10:42 Understanding input length.
- 11:59 Models + services with big token counts.
- 13:22 Generating OpenAPI documentation for a complex API.
- 17:29 Generating JSDoc-style typing.
- 21:07 Generating seed data for a complex database.
- 24:34 Summarizing 8+ hours of video.
- 29:35 Some things we’ve yet to try.
- 31:32 What about cost?
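As background for the token and context-window discussion above: a "token" is the unit models actually count, and for English text a common rule of thumb is roughly 4 characters per token. The sketch below is only that heuristic, not a real tokenizer (tools like OpenAI's tiktoken give exact counts), and the function names, the 4-characters-per-token ratio, and the 128k default window are illustrative assumptions, not anything from the episode.

```python
# Heuristic token estimate: English text averages roughly 4 characters
# per token. A real tokenizer (e.g. tiktoken) gives exact counts; this
# is only a back-of-the-envelope check.
def estimate_tokens(text: str) -> int:
    """Rough token count for a piece of text (assumed ~4 chars/token)."""
    return max(1, len(text) // 4)


def fits_context(text: str, max_tokens: int = 128_000) -> bool:
    """Guess whether a prompt likely fits a model's context window.

    max_tokens is an illustrative default; check your model's actual limit.
    """
    return estimate_tokens(text) <= max_tokens


prompt = "Summarize the following API documentation: ..."
print(estimate_tokens(prompt))   # rough count, not an exact tokenization
print(fits_context(prompt))
```

This kind of estimate is mainly useful for deciding whether a large input (a whole codebase, hours of transcripts) needs to be chunked before sending it to a model.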
Syntax: X · Instagram · TikTok · LinkedIn · Threads
Wes: X · Instagram · TikTok · LinkedIn · Threads
Scott: X · Instagram · TikTok · LinkedIn · Threads