The Pragmatic Engineer

Big Tech and startups, from the inside. Highly relevant for software engineers, AI engineers, and engineering leaders; useful for anyone working in tech.

Gergely Orosz

61 episodes · EN

Show overview

The Pragmatic Engineer has been publishing since 2024 and has built a catalogue of 61 episodes over those two years, roughly 80 hours of audio in total. Releases follow a fortnightly cadence.

Episodes typically run from an hour to ninety minutes (most land between 1h 10m and 1h 30m), and run-time is fairly consistent across the catalogue. No episodes are flagged explicit by the publisher. It is catalogued as an English-language Technology show.

The show is actively publishing: the most recent episode landed 2 days ago, and 13 episodes are out so far this year. The busiest year was 2025, with 39 episodes published. Published by Gergely Orosz.

Episodes
61
Running
2024–2026 · 2y
Median length
1h 17m
Cadence
Fortnightly

From the publisher

Software engineering at Big Tech and startups, from the inside. Deepdives with experienced engineers and tech professionals who share their hard-earned lessons, interesting stories and advice they have on building software. Especially relevant for software engineers and engineering leaders: useful for those working in tech. newsletter.pragmaticengineer.com

Latest Episodes


TypeScript, C# and Turbo Pascal with Anders Hejlsberg

May 13, 2026 · 1h 15m

Building Pi, and what makes self-modifying software so fascinating

Apr 29, 2026 · 1h 33m

Designing Data-intensive Applications with Martin Kleppmann

Apr 22, 2026 · 1h 25m

DHH’s new way of writing code

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Sonar — The makers of SonarQube, the industry standard for automated code review.
• WorkOS — Everything you need to make your app enterprise ready.

David Heinemeier Hansson (DHH) is the creator of Ruby on Rails and Omarchy, co-founder and CTO of 37signals (maker of Basecamp and HEY), and the author of several books, including the best-seller Remote: Office Not Required, co-written with Jason Fried.

Six months ago, in an episode of the Lex Fridman podcast, David shared that he didn’t use AI tools to write code: he typed out all his code himself. But things have changed a lot since then. In this episode, we discuss his approach to building software, how it has changed in the last six months, why he now takes an agent-first approach, and how he barely writes any code by hand. We go into how he uses AI agents, which have altered how he builds and explores ideas, and how his standards of quality and craft remain the same.

We also discuss how 37signals thinks about product development, from the role of designers to the importance of aesthetics and taste. David gets into how he sees beauty and functionality as closely linked, and why strong opinions about design lead to better software. Finally, we look into the uneven impact of AI, which amplifies senior engineers while creating challenges for junior developers, and what this may mean for the role of the software engineer.

Timestamps:
(00:00) Intro
(02:11) Omarchy and Ruby on Rails
(08:25) 37signals overview
(10:12) Launching HEY
(18:38) Building HEY
(22:47) Designers at 37signals
(28:08) The craft of design
(31:52) Why DHH now embraces AI workflows
(39:45) The AI inflection point
(44:23) DHH’s agent-first workflow
(55:09) AI’s impact on junior developers
(1:03:08) Developer experience with AI
(1:16:43) What does AI mean for developers?
(1:23:33) 37signals teams and hiring
(1:38:20) Work-life balance with AI
(1:41:41) Why DHH keeps building
(1:45:24) Closing

The Pragmatic Engineer deepdives relevant for this episode:
• Are AI agents actually slowing us down?
• How Claude Code is built
• The future of software engineering with AI: six predictions
• The AI Engineering Stack
• Mitchell Hashimoto’s new way of writing code
• How Linux is built with Greg Kroah-Hartman

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected]. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe.

Apr 8, 2026 · 1h 46m

Scaling Uber with Thuan Pham (Uber’s first CTO)

Thuan Pham was Uber's first and longest-serving CTO, and today he’s the CTO of Faire, a B2B wholesale platform. Back when Thuan joined Uber, it had around 40 engineers and 30,000 rides per day, and the system crashed multiple times a week. Over seven years, he helped rebuild the system, move it from a monolith to microservices, and scale the engineering organization behind it. I had the privilege of working with Thuan for four of those seven years. Later, the very first issue of The Pragmatic Engineer newsletter was a deepdive into Uber’s Program and Platform split. This episode of the podcast contains a nice “full circle” moment, where Thuan shares even more details about why Uber chose to embrace that structure.

We discuss what it takes to operate and build in that kind of environment. Thuan explains how he divided his time at Uber into three “tours of duty”: stabilizing a fragile system, re-architecting it, and scaling the org.

We go deep into the platform-and-program split, the Helix app rewrite, and what it took to launch Uber in China in just five months (the original estimate was 18 months). We also cover Uber’s in-house tools and explain why they were necessary to support rapid growth. Finally, we discuss his role today as CTO of Faire, how the company is using AI, and how he sees AI changing software engineering.

Timestamps:
(00:00) Intro
(05:32) Getting into tech
(16:09) The dot-com bust
(20:42) VMware
(26:29) Getting hired by Travis at Uber
(33:22) Early days at Uber and scaling challenges
(40:57) Uber’s China launch
(47:12) The platform and program split
(50:26) From monolith to microservices
(53:38) Internal tools at Uber
(57:05) Helix: Uber’s mobile app rewrite
(59:55) Thuan’s email about naming
(1:02:03) Org structure changes under
(1:06:34) Thuan’s work philosophy
(1:12:23) The “three tours of duty” at Uber
(1:15:37) Why Thuan left Uber
(1:17:34) Coupang and Nubank
(1:21:59) Faire
(1:25:31) How Faire uses AI
(1:28:24) AI’s impact on software engineering
(1:31:09) The role of the CTO
(1:35:13) Career advice

The Pragmatic Engineer deepdives relevant for this episode:
• How Uber uses AI for development: inside look
• The Platform and Program split at Uber
• How Uber is measuring engineering productivity
• Inside Uber’s move to the cloud
• Uber's crazy YOLO app rewrite, from the front seat
• How Uber built its observability platform
• Developer experience at Uber with Gautam Korlam
• Uber’s engineering level changes

Apr 1, 2026 · 1h 38m

Building WhatsApp with Jean Lee

How did a tiny team of 30 engineers build the world-famous messaging app more than a decade ago, and what can dev teams learn from that feat today? Jean Lee was engineer #19 at WhatsApp, joining when the company was still small, with almost no formal processes. She helped it scale to hundreds of millions of users, went through the $19B acquisition by Facebook, and later worked at Meta.

In this episode of Pragmatic Engineer, I talk with Jean about what it was like building WhatsApp. When Facebook bought WhatsApp in 2014, only around 30 engineers supported hundreds of millions of users across eight platforms.

We discuss how the founders kept things simple, saying “no” to most feature requests for years. Jean explains why WhatsApp chose Erlang for the backend, why the team avoided cross-platform abstractions, and how charging users $1 per year paid everyone’s salaries while keeping growth intentionally slow.

Jean also shares what the Facebook acquisition was like on the inside, how she dealt with sudden personal wealth, and what it was like transitioning from an IC to a manager at Facebook, including the reality of calibration meetings and performance reviews. We also discuss how AI enables smaller engineering teams, and why WhatsApp’s experience suggests ownership and trust might matter more than tools.

Timestamps:
(00:00) Intro
(01:39) Early years in tech
(06:18) Becoming engineer #19 at WhatsApp
(13:53) WhatsApp’s tech stack
(18:09) WhatsApp’s unique ways of working
(25:27) Countdown displays and outages
(27:07) Why WhatsApp won
(28:53) The Facebook acquisition
(33:13) Life after acquisition
(39:27) Working at Facebook in London
(44:07) Transitioning to management
(47:27) Performance reviews as a manager
(53:29) After Facebook
(58:53) AI’s impact on engineering
(1:02:34) Jean’s advice to new grads and startups
(1:06:45) Empowering employees
(1:08:17) Book recommendations

The Pragmatic Engineer deepdives relevant for this episode:
• How Meta built Threads
• How Big Tech runs tech projects and the curious absence of Scrum
• Performance calibrations at tech companies
• Software engineers leading projects

Mar 18, 2026 · 1h 10m

From IDEs to AI Agents with Steve Yegge

Steve Yegge has spent decades writing software and thinking about how the craft evolves. From his early years at Amazon and Google to his influential blog posts, he has often been early at spotting shifts in how software gets built. In this episode of Pragmatic Engineer, I talk with Steve about how AI is changing engineering work, why he believes coding by hand may gradually disappear, and what developers should focus on instead. We discuss his latest book, Vibe Coding, and the open-source AI agent orchestrator he built called Gas Town, which he said most devs should avoid using.

Steve shares his framework for levels of AI adoption by engineers, ranging from avoiding AI tools entirely to running multiple agents in parallel. We discuss why he believes the knowledge engineers need keeps changing, and why understanding how systems evolve may matter more than mastering any particular tool.

We also explore broader implications. Steve argues that AI’s role is not primarily to replace engineers, but to amplify them. At the same time, he warns that the pace of change will create new kinds of technical debt, new productivity pressures, and fresh challenges for how teams operate.

Timestamps:
(00:00) Intro
(01:43) Steve’s latest projects
(02:27) Important blog posts
(04:48) Shifts in what engineers need to know
(10:46) Steve’s current AI stance
(13:23) Steve’s book Vibe Coding
(18:25) Layoffs and disruption in tech
(31:13) Gas Town
(40:10) New ways of working
(51:08) The problem of too many people
(54:45) Why AI results lag in business
(59:57) Gamification and product stickiness
(1:04:54) The ‘Bitter Lesson’ explained
(1:07:14) The future of software development
(1:23:06) Where languages stand
(1:24:47) Adapting to change
(1:27:32) Steve’s predictions

The Pragmatic Engineer deepdives relevant for this episode:
• Vibe coding as a software engineer
• The full circle of developer productivity with Steve Yegge
• AI Tooling for Software Engineers in 2026
• The AI Engineering Stack

Mar 11, 2026 · 1h 31m

Building Claude Code with Boris Cherny

Boris Cherny is the creator and Head of Claude Code at Anthropic. He previously spent five years at Meta as a Principal Engineer and is the author of the book Programming TypeScript.

In this episode of Pragmatic Engineer, we go through how Claude Code was built and what it means when engineers no longer write most of the code themselves.

We discuss how Claude Code evolved from a side project into a core internal tool at Anthropic and how Boris uses it day-to-day. We go deep into workflow details, including parallel agents, PR structure, deterministic review patterns, and how the system retrieves context from large codebases. We also get into how Claude Cowork was built.

As coding becomes more accessible, the role of engineers shifts rather than shrinks. We examine what that shift means in practice, which skills become more important, and why the lines between product, engineering, and design are blurring.

Timestamps:
(00:00) Intro
(11:15) Lessons from Meta
(19:46) Joining Anthropic
(23:08) The origins of Claude Code
(32:55) Boris's Claude Code workflow
(36:27) Parallel agents
(40:25) Code reviews
(47:18) Claude Code's architecture
(52:38) Permissions and sandboxing
(55:05) Engineering culture at Anthropic
(1:05:15) Claude Cowork
(1:12:48) Observability and privacy
(1:14:45) Agent swarms
(1:21:16) LLMs and the printing press analogy
(1:30:16) Standout engineer archetypes
(1:32:12) What skills still matter for engineers
(1:35:24) Book recommendations

The Pragmatic Engineer deepdives relevant for this episode:
• How Claude Code is built
• How Anthropic built Artifacts
• How Codex is built
• Real-world engineering challenges: building Cursor

Mar 4, 2026 · 1h 37m

Mitchell Hashimoto’s new way of writing code

How has the day-to-day workflow of Mitchell Hashimoto changed, thanks to AI tools?

Mitchell Hashimoto is one of the most influential infrastructure engineers of our time, and one of the most pragmatic builders I’ve met. He is the co-founder of HashiCorp and creator of Ghostty. In this episode, we talk about how he got into software engineering, the history of HashiCorp, and the challenges of turning widely used open-source tools into a durable business. We also go into what it’s really like to work with AWS, Azure, and GCP as a startup.

Mitchell shares how he uses AI these days, and how agents have completely changed how he works. We touch on Ghostty, open source, and what’s changing for software engineers and founders in an AI-native era.

Timestamps:
(00:00) Intro
(02:03) Mitchell’s path into software engineering
(07:19) The origins of HashiCorp
(15:52) Early cloud computing
(18:22) The 2010s startup scene in SF
(23:11) Funding HashiCorp
(25:23) The Hashi stack
(32:33) Why HashiCorp’s business lagged behind its technology
(35:28) An early failure in commercialization
(38:28) The open-core pivot and path to enterprise profitability
(48:08) Taking HashiCorp public
(51:58) The near VMware acquisition
(59:10) Mitchell’s take on all the cloud providers
(1:06:02) AI’s impact on open source
(1:07:00) Why Mitchell built Ghostty
(1:09:11) Why Mitchell used Zig
(1:10:38) How terminals work and Ghostty’s approach
(1:17:31) AI’s impact on terminals and libghostty
(1:19:13) How Mitchell uses AI
(1:22:02) Ghostty’s evolving AI use policy
(1:28:36) Why open source must change
(1:31:46) The problem of Git in monorepos
(1:36:22) What needs to change to work effectively with AI
(1:39:57) Mitchell’s hiring practices
(1:47:52) Mitchell’s AI adoption journey
(1:50:41) Advice to would-be founders
(1:52:21) Mitchell’s advising work
(1:53:20) What’s changing for software engineers
(1:55:03) How Mitchell recharges
(1:55:50) Book recommendation

The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• The AI Engineering stack
• Pressure on commercial open source to make more money – and HashiCorp changing its license
• How Linux is built with Greg Kroah-Hartman

Feb 25, 2026 · 1h 57m

The programming language after Kotlin – with the creator of Kotlin

Andrey Breslav is the creator of Kotlin and the founder of CodeSpeak, a new programming language that aims to reduce boilerplate by replacing trivial code with concise, plain-English descriptions. He led Kotlin’s design at JetBrains through its early releases, shaping both the language and its compiler as Kotlin grew into a core part of the Android ecosystem.

In this episode, we talk about what it takes to design and evolve a programming language in production. We discuss the influences behind Kotlin, the tradeoffs that shaped it, and why interoperability with Java became so central to its success. Andrey also explains why he is building CodeSpeak as a response to growing code complexity in an era of LLM agents, and why he believes keeping humans in control of the software development lifecycle will matter even more as AI becomes more capable.

Timestamps:
(00:00) Intro
(01:02) Why Kotlin was created
(06:26) Dynamic vs. static languages
(09:27) Andrey joins the Kotlin project
(14:26) Designing a new language
(19:40) Frontend vs. backend in language design
(21:05) Why is it named Kotlin?
(24:37) Kotlin vs. Java tradeoffs
(28:32) Null safety
(31:24) Kotlin’s influences
(39:12) Smartcasts
(40:42) Features Kotlin left out
(44:54) Bidirectional Java interoperability
(55:01) The Kotlin timeline
(58:00) Kotlin’s development process
(1:07:20) From Java to Android developers
(1:12:12) How Android became Kotlin-first
(1:18:20) CodeSpeak: a language for LLMs
(1:24:07) LLMs and new languages
(1:28:20) How software engineering is changing with AI
(1:36:12) Developer tools of the future
(1:39:00) Andrey’s advice for junior engineers and students
(1:42:32) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• Cross-platform mobile development
• How Swift was built – with Chris Lattner, the creator of the language
• Building Reddit’s iOS and Android app
• Notion: going native on iOS and Android
• Is there a drop in native iOS and Android hiring at startups?

Feb 12, 2026 · 1h 44m

The third golden age of software engineering – thanks to AI, with Grady Booch

Every few decades, software engineering is declared “dead” or on the verge of being automated away. We’ve heard versions of this story before. But what if this is just the start of a new “golden age” of a different type of software engineering, as it has been many times before?

In this episode of The Pragmatic Engineer, I’m joined once again by Grady Booch, one of the most influential figures in the history of software engineering, to put today’s claims about AI and automation into historical context. Grady is the co-creator of the Unified Modeling Language, author of several books and papers that have shaped modern software development, and Chief Scientist for Software Engineering at IBM, where he focuses on embodied cognition.

Grady shares his perspective on three golden ages of computing since the 1940s, and how each emerged in response to the constraints of its time. He explains how technical limits and human factors have always shaped the systems we build, and why periods of rapid change tend to produce both real progress and inflated expectations.

He also responds to current claims that software engineering will soon be fully automated, explaining why systems thinking, human judgment, and responsibility remain central to the work, even as tools continue to evolve.

Timestamps:
(00:00) Intro
(01:04) The first golden age of software engineering
(18:05) The software crisis
(32:07) The second golden age of software engineering
(41:27) Y2K and the Dotcom crash
(44:53) Early AI
(46:40) The third golden age of software engineering
(50:54) Why software engineers will very much be needed
(57:52) Grady responds to Dario Amodei
(1:06:00) New skills engineers will need to succeed
(1:09:10) Resources for studying complex systems
(1:13:39) How to thrive during periods of change

The Pragmatic Engineer deepdives relevant for this episode:
• When AI writes almost all code, what happens to software engineering?
• Inside a five-year-old startup’s rapid AI makeover
• Software architecture with Grady Booch
• What is old is new again

Feb 4, 2026 · 1h 17m

The creator of Clawd: "I ship code I don't read"

Peter Steinberger ships more code than I’ve seen any single person ship: in January alone, he was at more than 6,600 commits. As he puts it: “From the commits, it might appear like it's a company. But it’s not. This is one dude sitting at home having fun." How does he do it?

Peter Steinberger is the creator of Clawdbot (as of yesterday: renamed to Moltbot) and founder of PSPDFKit. Moltbot, a work-in-progress AI agent that shows what the future of Siri could be like, is currently the hottest AI project in the tech industry, with more searches on Google than Claude Code or Codex. I sat down with Peter in London to talk about what building software looks like when you go all-in with AI tools like Claude and Codex.

Peter’s background is fascinating. He built and scaled PSPDFKit into a global developer tools business. Then, after a three-year break, he returned to building. This time, LLMs and AI agents sit at the center of his workflow. We discuss what changes when one person can operate like a team, and why closing the loop between code, tests, and feedback becomes a prerequisite for working effectively with AI.

We also go into how engineering judgment shifts with AI, how testing and planning evolve when agents are involved, and which skills and habits are needed to work effectively. This is a grounded conversation about real workflows and real tradeoffs, and about designing systems that can test and improve themselves.

Timestamps:
(00:00) Intro
(01:07) How Peter got into tech
(08:27) PSPDFKit
(19:14) PSPDFKit’s tech stack and culture
(22:33) Enterprise pricing
(29:42) Burnout
(34:54) Peter finding his spark again
(43:02) Peter’s workflow
(49:10) Managing agents
(54:08) Agentic engineering
(59:01) Testing and debugging
(1:03:49) Why devs struggle with LLM coding
(1:07:20) How PSPDFKit would look if built today
(1:11:10) How planning has changed with AI
(1:21:14) Building Clawdbot (now: Moltbot)
(1:34:22) AI’s impact on large companies
(1:38:38) “I don’t care about CI”
(1:40:01) Peter’s process for new features
(1:44:48) Advice for new grads
(1:50:18) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• Inside a five-year-old startup’s rapid AI makeover
• When AI writes almost all code, what happens to software engineering?
• Why it’s so dramatic that “writing code by hand is dead”
• AI Engineering in the real world
• The AI Engineering stack

Jan 28, 2026 · 1h 54m

How AWS S3 is built

Amazon S3 is one of the largest distributed systems ever built, storing and serving data for a significant portion of the internet. Behind its simple interfaces hides an enormous amount of engineering work, careful tradeoffs, and long-term thinking.

In this episode, I sit down with Mai-Lan Tomsen Bukovec, VP of Data and Analytics at AWS, who has been running Amazon S3 for more than a decade. Mai-Lan shares how S3 operates at extreme scale, what it takes to design for durability and availability across millions of servers, and why building for failure is a core principle.

We also go deep into how AWS approaches correctness using formal methods, how storage tiers and limits shape system design, and why simplicity remains one of the hardest and most important goals at S3’s scale.

Timestamps:
(00:00) Intro
(01:03) S3’s scale
(03:58) How S3 started
(07:25) Parquet, Iceberg, and S3 tables
(09:46) S3 for developers
(13:37) Why AWS keeps S3 prices low
(17:10) AWS pricing tiers
(19:38) Availability and durability
(26:21) The cost of S3's consistency
(31:22) Automated reasoning and proof of correctness
(35:14) Durability at AWS scale
(39:58) Correlated failure and crash consistency
(43:22) Failure allowances
(46:04) Two opposing principles in S3 design
(49:09) S3’s evolution
(52:21) S3 Vectors
(1:01:16) The 50 TB limit on AWS
(1:07:54) The simplicity principle
(1:10:10) Types of engineers working on S3
(1:14:15) Closing recommendations

The Pragmatic Engineer deepdives relevant for this episode:
• Inside Amazon’s engineering culture
• How AWS deals with a major outage
• A Day in the Life of a Senior Manager at Amazon
• What is a Principal Engineer at Amazon? – with Steve Huynh
• Working at Amazon as a software engineer – with Dave Anderson

Amazon papers recommended by Mai-Lan:
• Using lightweight formal methods to validate a key-value storage node in Amazon S3
• Formally verified cloud-scale authorization
• Analyzing metastable failures
• Amazon’s engineering tenets

Jan 21, 2026 · 1h 18m

The history of servers, the cloud, and what’s next – with Oxide

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Linear — The system for modern product development.

How have servers and the cloud evolved in the last 30 years, and what might be next? Bryan Cantrill was a distinguished engineer at Sun Microsystems during both the Dotcom Boom and the Dotcom Bust. Today, he is the co-founder and CTO of Oxide Computer, where he works on modern server infrastructure.

In this episode of The Pragmatic Engineer, Bryan joins me to break down how modern computing infrastructure evolved. We discuss why the Dotcom Bust produced deeper innovation than the Boom, how constraints shape better systems, and what the rise of the cloud changed and did not change about building reliable infrastructure.

Our conversation covers early web infrastructure at Sun, the emergence of AWS, Kubernetes and cloud neutrality, and the tradeoffs between renting cloud space and building your own. We also touch on the complexity of server-side software updates, experimenting with AI, the limits of large language models, and how engineering organizations scale without losing their values.

If you want a systems-level perspective on computing that connects past cycles to today’s engineering decisions, this episode offers a rare long-range view.

Timestamps:
(00:00) Intro
(01:26) Computer science in the 1990s
(03:01) Sun and Cisco’s web dominance
(05:41) The Dotcom Boom
(10:26) From Boom to Bust
(15:32) The innovations of the Bust
(17:50) The open source shift
(22:00) Oracle moves into Sun’s orbit
(24:54) AWS dominance (2010–2014)
(28:15) How Kubernetes and cloud neutrality
(30:58) Custom infrastructure
(36:10) Renting the cloud vs. buying hardware
(45:28) Designing a computer from first principles
(50:02) Why everyone is paid the same salary at Oxide
(54:14) Oxide’s software stack
(58:33) The evolution of software updates
(1:02:55) How Oxide uses AI
(1:06:05) The limitations of LLMs
(1:11:44) AI use and experimentation at Oxide
(1:17:45) Oxide’s diverse teams
(1:22:44) Remote work at Oxide
(1:24:11) Scaling company values
(1:27:36) AI’s impact on the future of engineering
(1:31:04) Bryan’s advice for junior engineers
(1:34:01) Book recommendations

The Pragmatic Engineer deepdives relevant for this episode:
• Startups on hard mode: Oxide. Part 1: Hardware
• Startups on hard mode: Oxide, Part 2: Software & Culture
• Three cloud providers, three outages: three different responses
• Inside Uber’s move to the Cloud
• Inside Agoda’s private Cloud

Dec 17, 2025 · 1h 39m

Being a founding engineer at an AI startup

Michelle Lim joined Warp as engineer number one and is now building her own startup, Flint. She brings a strong product-first mindset shaped by her time at Facebook, Slack, Robinhood, and Warp. Michelle shares why she chose Warp over safer offers, how she evaluates early-stage opportunities, and what she believes distinguishes great founding engineers.

Together, we cover how product-first engineers create value, why negotiating equity at early-stage startups requires a different approach, and why asking founders for references is a smart move. Michelle also shares lessons from building consumer and infrastructure products, how she thinks about tech stack choices, and how engineers can increase their impact by taking on work outside their job descriptions.

If you want to understand what founders look for in early engineers, or how to grow into a founding-engineer role, this episode is full of practical advice backed by real examples.

Timestamps:
(00:00) Intro
(01:32) How Michelle got into software engineering
(03:30) Michelle’s internships
(06:19) Learnings from Slack
(08:48) Product learnings at Robinhood
(12:47) Joining Warp as engineer #1
(22:01) Negotiating equity
(26:04) Asking founders for references
(27:36) The top reference questions to ask
(32:53) The evolution of Warp’s tech stack
(35:38) Product-first engineering vs. code-first
(38:27) Hiring product-first engineers
(41:49) Different types of founding engineers
(44:42) How Flint uses AI tools
(45:31) Avoiding getting burned in founder exits
(49:26) Hiring top talent
(50:15) An overview of Flint
(56:08) Advice for aspiring founding engineers
(1:01:05) Rapid fire round

The Pragmatic Engineer deepdives relevant for this episode:
• Thriving as a founding engineer: lessons from the trenches
• From software engineer to AI engineer
• AI Engineering in the real world
• The AI Engineering stack

Dec 3, 20251h 4m

Code security for software engineers

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig is helping make the first-ever Pragmatic Summit a reality. Join me and 400 other top engineers and leaders on 11 February in San Francisco for a special one-day event. Reserve your spot here.
• Linear — The system for modern product development. Engineering teams today move much faster, thanks to AI. Because of this, coordination increasingly becomes a problem. This is where Linear helps fast-moving teams stay focused. Check out Linear.
—
As software engineers, what should we know about writing secure code?

Johannes Dahse is the VP of Code Security at Sonar and a security expert with 20 years of industry experience. In today’s episode of The Pragmatic Engineer, he joins me to talk about what security teams actually do, what developers should own, and where real-world risk enters modern codebases.

We cover dependency risk, software composition analysis, CVEs, dynamic testing, and how everyday development practices affect security outcomes. Johannes also explains where AI meaningfully helps, where it introduces new failure modes, and why understanding the code you write and ship remains the most reliable defense.

If you build and ship software, this episode is a practical guide to thinking about code security under real-world engineering constraints.
—
Timestamps
(00:00) Intro
(02:31) What is penetration testing?
(06:23) Who owns code security: devs or security teams?
(14:42) What is code security?
(17:10) Code security basics for devs
(21:35) Advanced security challenges
(24:36) SCA testing
(25:26) The CVE Program
(29:39) The State of Code Security report
(32:02) Code quality vs. security
(35:20) Dev machines as a security vulnerability
(37:29) Common security tools
(42:50) Dynamic security tools
(45:01) AI security reviews: what are the limits?
(47:51) AI-generated code risks
(49:21) More code: more vulnerabilities
(51:44) AI’s impact on code security
(58:32) Common misconceptions of the security industry
(1:03:05) When is security “good enough”?
(1:05:40) Johannes’s favorite programming language
—
The Pragmatic Engineer deepdives relevant for this episode:
• What is Security Engineering?
• Mishandled security vulnerability in Next.js
• Okta Schooled on Its Security Practices
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected]. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Nov 26, 20251h 7m

How AI will change software engineering – with Martin Fowler

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. AI-accelerated development isn’t just about shipping faster: it’s about measuring whether what you ship actually delivers value. This is where modern experimentation with Statsig comes in. Check it out.
• Linear — The system for modern product development. I had a jaw-dropping experience when I dropped in for the weekly “Quality Wednesdays” meeting at Linear. Every week, every dev fixes at least one quality issue, large or small. Even if it’s a one-pixel misalignment, like this one. I’ve yet to see a team obsess this much about quality. Read more about how Linear does Quality Wednesdays – it’s fascinating!
—
Martin Fowler is one of the most influential people in software architecture and the broader tech industry. He is the Chief Scientist at Thoughtworks and the author of Refactoring, Patterns of Enterprise Application Architecture, and several other books. He has spent decades shaping how engineers think about design, architecture, and process, and regularly publishes on his blog, MartinFowler.com.

In this episode, we discuss how AI is changing software development: the shift from deterministic to non-deterministic coding; where generative models help with legacy code; and the narrow but useful cases for vibe coding. Martin explains why LLM output must be tested rigorously, why refactoring is more important than ever, and how combining AI tools with deterministic techniques may be what engineering teams need.

We also revisit the origins of the Agile Manifesto and talk about why, despite rapid changes in tooling and workflows, the skills that make a great engineer remain largely unchanged.
—
Timestamps
(00:00) Intro
(01:50) How Martin got into software engineering
(07:48) Joining Thoughtworks
(10:07) The Thoughtworks Technology Radar
(16:45) From Assembly to high-level languages
(25:08) Non-determinism
(33:38) Vibe coding
(39:22) StackOverflow vs. coding with AI
(43:25) Importance of testing with LLMs
(50:45) LLMs for enterprise software
(56:38) Why Martin wrote Refactoring
(1:02:15) Why refactoring is so relevant today
(1:06:10) Using LLMs with deterministic tools
(1:07:36) Patterns of Enterprise Application Architecture
(1:18:26) The Agile Manifesto
(1:28:35) How Martin learns about AI
(1:34:58) Advice for junior engineers
(1:37:44) The state of the tech industry today
(1:42:40) Rapid fire round
—
The Pragmatic Engineer deepdives relevant for this episode:
• Vibe coding as a software engineer
• The AI Engineering stack
• AI Engineering in the real world
• What changed in 50 years of computing
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected]. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Nov 19, 20251h 48m

Netflix’s Engineering Culture

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Statsig enables two cultures at once: continuous shipping and experimentation. Companies like Notion went from single-digit experiments per quarter to over 300 experiments with Statsig. Start using Statsig with a generous free tier, and a $50K startup program.
• Linear — The system for modern product development. When most companies hit real scale, they start to slow down, and are faced with “process debt.” This often hits software engineers the most. Companies switch to Linear to hit a hard reset on this process debt – ones like Scale cut their bug resolution in half after the switch. Check out Linear’s migration guide for details.
—
What’s it like to work as a software engineer inside one of the world’s biggest streaming companies?

In this special episode recorded at Netflix’s headquarters in Los Gatos, I sit down with Elizabeth Stone, Netflix’s Chief Technology Officer. Before becoming CTO, Elizabeth led data and insights at Netflix and was VP of Science at Lyft. She brings a rare mix of technical depth, product thinking, and people leadership.

We discuss what it means to be “unusually responsible” at Netflix, how engineers make decisions without layers of approval, and how the company balances autonomy with guardrails for high-stakes projects like Netflix Live. Elizabeth shares how teams self-reflect and learn from outages and failures, why Netflix doesn’t do formal performance reviews, and what new grads bring to a company known for hiring experienced engineers.

This episode offers a rare inside look at how Netflix engineers build, learn, and lead at a global scale.
—
Timestamps
(00:00) Intro
(01:44) The scale of Netflix
(03:31) Production software stack
(05:20) Engineering challenges in production
(06:38) How the Open Connect delivery network works
(08:30) From pitch to play
(11:31) How Netflix enables engineers to make decisions
(13:26) Building Netflix Live for global sports
(16:25) Learnings from Paul vs. Tyson for NFL Live
(17:47) Inside the control room
(20:35) What being unusually responsible looks like
(24:15) Balancing team autonomy with guardrails for Live
(30:55) The high talent bar and introduction of levels at Netflix
(36:01) The Keeper Test
(41:27) Why engineers leave or stay
(44:27) How AI tools are used at Netflix
(47:54) AI’s highest-impact use cases
(50:20) What new grads add and why senior talent still matters
(53:25) Open source at Netflix
(57:07) Elizabeth’s parting advice for new engineers to succeed at Netflix
—
The Pragmatic Engineer deepdives relevant for this episode:
• The end of the senior-only level at Netflix
• Netflix revamps its compensation philosophy
• Live streaming at world-record scale with Ashutosh Agrawal
• Shipping to production
• What is good software architecture?
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected]. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Nov 12, 202559 min

From Swift to Mojo and high-performance AI Engineering with Chris Lattner

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more. Companies like Graphite, Notion, and Brex rely on Statsig to measure the impact of the pace they ship. Get a 30-day enterprise trial here.
• Linear — The system for modern product development. Linear is a heavy user of Swift: they just redesigned their native iOS app using their own take on Apple’s Liquid Glass design language. The new app is about speed and performance – just like Linear is. Check it out.
—
Chris Lattner is one of the most influential engineers of the past two decades. He created the LLVM compiler infrastructure and the Swift programming language – and Swift opened iOS development to a broader group of engineers. With Mojo, he’s now aiming to do the same for AI, by lowering the barrier to programming AI applications.

I sat down with Chris in San Francisco to talk language design, lessons on designing Swift and Mojo, and – of course! – compilers. It’s hard to find someone who is as enthusiastic and knowledgeable about compilers as Chris is!

We also discussed why experts often resist change even when current tools slow them down, what he learned about AI and hardware from his time across both large and small engineering teams, and why compiler engineering remains one of the best ways to understand how software really works.
—
Timestamps
(00:00) Intro
(02:35) Compilers in the early 2000s
(04:48) Why Chris built LLVM
(08:24) GCC vs. LLVM
(09:47) LLVM at Apple
(19:25) How Chris got support to go open source at Apple
(20:28) The story of Swift
(24:32) The process for designing a language
(31:00) Learnings from launching Swift
(35:48) Swift Playgrounds: making coding accessible
(40:23) What Swift solved and the technical debt it created
(47:28) AI learnings from Google and Tesla
(51:23) SiFive: learning about hardware engineering
(52:24) Mojo’s origin story
(57:15) Modular’s bet on a two-level stack
(1:01:49) Compiler shortcomings
(1:09:11) Getting started with Mojo
(1:15:44) How big is Modular, as a company?
(1:19:00) AI coding tools the Modular team uses
(1:22:59) What kind of software engineers Modular hires
(1:25:22) A programming language for LLMs? No thanks
(1:29:06) Why you should study and understand compilers
—
The Pragmatic Engineer deepdives relevant for this episode:
• AI Engineering in the real world
• The AI Engineering stack
• Uber’s crazy YOLO app rewrite, from the front seat
• Python, Go, Rust, TypeScript and AI with Armin Ronacher
• Microsoft’s developer tools roots
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected]. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Nov 5, 20251h 32m

Beyond Vibe Coding with Addy Osmani

Brought to You By:
• Statsig — The unified platform for flags, analytics, experiments, and more.
• Linear — The system for modern product development.
—
Addy Osmani is Head of Chrome Developer Experience at Google, where he leads teams focused on improving performance, tooling, and the overall developer experience for building on the web. If you’ve ever opened Chrome’s Developer Tools, you’ve almost certainly used features Addy has built. He’s also the author of several books, including his latest, Beyond Vibe Coding, which explores how AI is changing software development.

In this episode of The Pragmatic Engineer, I sit down with Addy to discuss how AI is reshaping software engineering workflows, the tradeoffs between speed and quality, and why understanding generated code remains critical. We dive into his article The 70% Problem, which explains why AI tools accelerate development but struggle with the final 30% of software quality – and why this last 30% is best tackled by software engineers who understand how the system actually works.
—
Timestamps
(00:00) Intro
(02:17) Vibe coding vs. AI-assisted engineering
(06:07) How Addy uses AI tools
(13:10) Addy’s learnings about applying AI for development
(18:47) Addy’s favorite tools
(22:15) The 70% Problem
(28:15) Tactics for efficient LLM usage
(32:58) How AI tools evolved
(34:29) The case for keeping expectations low and control high
(38:05) Autonomous agents and working with them
(42:49) How the EM and PM role changes with AI
(47:14) The rise of new roles and shifts in developer education
(48:11) The importance of critical thinking when working with AI
(54:08) LLMs as a tool for learning
(1:03:50) Rapid questions
—
The Pragmatic Engineer deepdives relevant for this episode:
• Vibe Coding as a software engineer
• How AI-assisted coding will change software engineering: hard truths
• AI Engineering in the real world
• The AI Engineering stack
• How Claude Code is built
—
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email [email protected]. Get full access to The Pragmatic Engineer at newsletter.pragmaticengineer.com/subscribe

Oct 29, 20251h 8m